Swap GPUs at the Press of a Button: Liqid Fabric PCIe Magic Trick!

  • Published Jun 9, 2024
  • Easily allocate hundreds of GPUs WITHOUT touching them!
    Check out other Liqid solutions here: www.liqid.com
    0:00 Intro
    1:00 Explaining the Magic
    2:00 Showing the Use Case Set Up
    4:41 The Problem with Microsoft
    6:07 Infrastructure as Code Hardware Control
    10:36 Game Changing our Game Testing and More
    16:12 Outro
    ********************************
    Check us out online at the following places!
    bio.link/level1techs
    IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Music: "Earth Bound" by Slynk
    Edited by Autumn
  • Science & Technology

COMMENTS • 260

  • @user-eh8oo4uh8h
    @user-eh8oo4uh8h 25 days ago +82

    The computer isn't real. The fabric isn't real. Nothing actually exists. We're all just PCI-express lanes virtualized in some super computer in the cloud. And I still can't get 60fps.

    • @AlumarsX
      @AlumarsX 24 days ago +1

      Goddamn Nvidia all that money and keeping us fps capped

    • @gorana.37
      @gorana.37 24 days ago

      🤣🤣

    • @jannegrey593
      @jannegrey593 24 days ago +2

      "There is no spoon", taken to the extreme.

    • @fhsp17
      @fhsp17 24 days ago

      The hivemind secret guardians saw that. They will get you.

    • @nicknorthcutt7680
      @nicknorthcutt7680 8 days ago +1

      😂😂😂

  • @Ultrajamz
    @Ultrajamz 25 days ago +94

    So I can literally hot-swap my 4090s as they melt, like a belt-fed GPU PC?

    • @christianhorn1999
      @christianhorn1999 24 days ago +11

      That's like a Gatling gun for GPUs. Don't give manufacturers ideas.

    • @TigonIII
      @TigonIII 24 days ago +4

      Melt? Like turning them to liquid, pretty on brand. ;)

    • @BitsOfInterest
      @BitsOfInterest 22 days ago

      I don't think 4090s fit in that chassis, based on how much room is left in the front with those other cards.

    • @nicknorthcutt7680
      @nicknorthcutt7680 8 days ago +1

      Lmao

    • @KD-_-
      @KD-_- 4 days ago +1

      The VHPWR connector might be durable enough to justify hand loading because the belt system could dislodge the connectors from the next one in line.
      Would need to do analysis.

  • @ProjectPhysX
    @ProjectPhysX 24 days ago +8

    That PCIe tech is just fantastic for software testing. I test my OpenCL codes on Intel, Nvidia, AMD, Arm, and Apple GPU drivers to make sure I don't step on any driver bugs. For benchmarks that need the full PCIe bandwidth, this system is perfect.

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 27 days ago +99

    Please Liqid, introduce a tier for homelab users!

    • @popeter
      @popeter 25 days ago +9

      Oh yeah, you could do so much: Proxmox systems on dual ITX, all sharing GPUs and network off one of these.

    • @marcogenovesi8570
      @marcogenovesi8570 24 days ago +8

      I doubt this can be made affordable for common mortals

    • @AnirudhTammireddy
      @AnirudhTammireddy 24 days ago +6

      Please deposit your 2 kidneys and 1 eye before you make any such requests.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 24 days ago +2

      My humble dream setup would be a “barebones” kit consisting of the PCIe AIC adapters for the normal “client” motherboard and the “server” board that offers four x16 slots. You’d have to get your own cases and PSU solution for the “server” side.

    • @mritunjaymusale
      @mritunjaymusale 24 days ago +1

      @@marcogenovesi8570 You can, though. In terms of hardware it's just a PCIe switch; the hard part is the low-level code to match the right PCIe device to the right CPU, plus software that connects it to workflows that can understand this.

  • @wizpig64
    @wizpig64 27 days ago +80

    WOW! imagine having 6 different CPUs and 6 GPUs, rotating through all 36 combinations to hunt for regressions! Thank you for sharing this magic trick!

    • @joejane9977
      @joejane9977 24 days ago +4

      imagine if windows worked well

    • @onisama9589
      @onisama9589 24 days ago

      Most likely the Windows box would need to be shut down before you switch, or the OS will crash.

    • @jjaymick6265
      @jjaymick6265 20 days ago

      I do this daily in my lab: 16 different servers, 16 GPUs (4 groups of 4), running fully automated regressions for AI/ML models, GPU driver stacks, and CUDA version comparisons. Like I have said in other posts, once you stitch this together with Ansible / Digital Rebar, things get really interesting. Now that everything is automated, I simply input a series of hardware and software combos to test and the system does all the work while I sleep. I just wake up, review the results, and input the next series of tests. There is no more cost-effective way for one person to test the thousands of combinations.
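
      For anyone curious what that looks like in practice, here is a minimal sketch of the compose -> test -> release loop in Python. The REST endpoint paths, host names, and device names are placeholders for illustration, not the real Liqid API; the actual calls would come from Liqid's documentation or the Ansible / Digital Rebar integrations mentioned above.

      # Hedged sketch of an automated hardware/software regression sweep.
      # The fabric endpoints and names below are assumed placeholders, not the real Liqid API.
      import itertools
      import subprocess
      import requests

      FABRIC_API = "http://liqid-director.example.lan:8080/api"  # hypothetical director address

      def compose_gpu(host: str, gpu: str) -> None:
          # Ask the fabric to attach the named GPU to the named host (assumed endpoint).
          requests.post(f"{FABRIC_API}/compose", json={"host": host, "device": gpu}, timeout=30).raise_for_status()

      def release_gpu(host: str, gpu: str) -> None:
          # Detach the GPU again so the next combination can use it (assumed endpoint).
          requests.post(f"{FABRIC_API}/release", json={"host": host, "device": gpu}, timeout=30).raise_for_status()

      hosts = ["server-01", "server-02"]        # example test hosts on the fabric
      gpus = ["gpu-a100-0", "gpu-l40-0"]        # example pooled GPUs
      cuda_stacks = ["cuda-12.2", "cuda-12.4"]  # driver/toolkit combinations to sweep

      for host, gpu, stack in itertools.product(hosts, gpus, cuda_stacks):
          compose_gpu(host, gpu)
          try:
              # Run the benchmark remotely; results land in a shared log for review in the morning.
              subprocess.run(["ssh", host, f"./run_benchmarks.sh --stack {stack}"], check=True)
          finally:
              release_gpu(host, gpu)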

    • @formes2388
      @formes2388 12 days ago

      @@joejane9977 It does. I mean, it works well enough that few people go through the hassle of consciously switching. It's more a default switch if people start using a tablet as a primary device, due to not needing a full-fat desktop for their day-to-day needs.
      For perspective on where I am coming from: I have a trio of Linux systems and a pair of Windows systems; one of the Windows systems is also dual-booted to 'nix. I used to have a macOS system but have no need of one, and better things to spend money on.
      For some stuff Linux is great; thing is, I have better things to do with my time than tinker with configs to get things running - so sometimes, a Windows system just works.

  • @d0hanzibi
    @d0hanzibi 25 days ago +20

    Hell yea, we need that consumerized!

  • @chaosfenix
    @chaosfenix 25 days ago +11

    I would love this in a home setting. If it is hot-pluggable it is also programmable, which means you could upgrade GPUs periodically, but instead of just throwing the old one away you would push it down your priority list. Hubby and wifey could get priority on the fastest GPU, and if you have multiple kids they would be lower priority. If mom and dad aren't playing at the moment, though, the kids could just get the fastest GPU to use. You could centralize all of your hardware in a server in a closet and then have weaker terminal devices. They could have an amazing screen, keyboard, etc., but cheap out on the CPU, RAM, GPU, etc., because those would just be composed when they booted up. Similar to how computers switch between an integrated GPU and a dGPU now, you could use the cheap device's iGPU for the basics, but if you opened an application like a game it would dynamically mount a GPU from the rack. No more external GPUs for laptops, and no more insanely expensive laptops with hardware that is obsolete for its intended task in 2 years.

    • @christianhorn1999
      @christianhorn1999 24 days ago +2

      moooom?! why is my fortnite dropping fps lmao

    • @SK83RJOSH
      @SK83RJOSH 24 days ago

      I would have concerns about crosstalk and latency from, like, signal amplifiers in that scenario. I could not imagine trying to triage the issues this would introduce. 😂

    • @chaosfenix
      @chaosfenix 23 days ago

      @@SK83RJOSH I think latency would be the biggest one. I am not sure what you mean by crosstalk, though. If you mean signal interference, I don't think that would apply here any more than it would in any regular motherboard and network. If you mean crosstalk as in Wi-Fi, then this really would not be how I would do it; I would use fiber for all of this. Even Wi-Fi 7 is nowhere near fast enough for this kind of connectivity and would have way too much interference. Maybe if you had a 60 GHz connection, but that is about it.

  • @cs7899
    @cs7899 24 days ago +6

    Love Wendell's off-label videos

  • @seanunderscorepry
    @seanunderscorepry 24 days ago +6

    I was skeptical that I'd find anything useful or interesting in this video since the use-case doesn't suit me personally, but Wendell could explain paint drying on a wall and make it entertaining / informative.

  • @Maxjoker98
    @Maxjoker98 25 days ago +5

    I've been waiting for this video ever since Wendell first started talking about/with the Liqid people. Glad it's finally here!

  • @nicknorthcutt7680
    @nicknorthcutt7680 8 days ago

    This is absolutely incredible! Wow, I didn't even realize how many possibilities this opens up. As always, another great video man.

  • @pyroslev
    @pyroslev 25 days ago +7

    This is wickedly cool. Practical or usable for me? Nah, not really. But seeing that messy, lived-in workshop is as satisfying as the tech.

  • @totallyuneekname
    @totallyuneekname 25 days ago +112

    Can't wait for the Linus Tech Tips lab team to announce their use of Liqid in two years

    • @mritunjaymusale
      @mritunjaymusale 24 days ago +13

      I mentioned this idea in his comments when Wendell was doing interviews with the Liqid guys, but Linus, being the dictator he is in his comments, has banned me from commenting.

    • @krishal99
      @krishal99 24 days ago +22

      @@mritunjaymusale sure buddy

    • @janskala22
      @janskala22 24 days ago +10

      LTT does already use Liqid, just not this product. You can see in one of their videos that they have a 2U Liqid server in their main rack. It seemed like a rebranded Dell server, but still from Liqid.

    • @totallyuneekname
      @totallyuneekname 24 days ago

      Ah TIL, thanks for the info @janskala22

    • @tim3172
      @tim3172 24 days ago

      Can't wait for you to type "ltt liqid" into YouTube search and realize LTT has videos from the last 3 years showcasing Liqid products.

  • @TheFlatronify
    @TheFlatronify 24 days ago +4

    This would come in so handy in my small three-node Proxmox cluster, assigning GPUs to different servers / VMs when necessary. The image would be streamed using Sunshine / Moonlight (similar to Parsec). I wish there was a two-PCIe-slot consumer tier available at a price enthusiasts would be willing to spend!

    • @jjaymick6265
      @jjaymick6265 24 days ago

      I use this every day in my lab running Prox / XCP-NG / KVM. Linux hot-plug PCIe drivers work like a champ to move GPUs in and out of hypervisors. If only virtio had reasonable support for hot-plugging PCIe into the VM, so I would not have to restart the VM every time I wanted to change GPUs to run a new test. Maybe someday.
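
      For reference, the Linux side of that hot-plug dance can be driven straight from sysfs; a rough sketch (requires root, and the PCI address is just an example device):

      # Detach a GPU from the host before the fabric reassigns it, then rescan
      # the bus to pick up whatever gets composed in next. Example address only.
      from pathlib import Path
      import time

      GPU_BDF = "0000:81:00.0"  # example bus/device/function

      # Tell the kernel to unbind drivers and delete the PCI device node.
      Path(f"/sys/bus/pci/devices/{GPU_BDF}/remove").write_text("1")

      # ... reassign the device on the fabric side here ...
      time.sleep(2)

      # Walk the PCI bus again so newly composed devices show up.
      Path("/sys/bus/pci/rescan").write_text("1")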

  • @N....
    @N.... 23 hours ago

    A workaround for the lack of hotplug is to just keep all the GPUs connected at once and disable/enable them via Device Manager. Changing the primary display to one connected to the desired GPU works for most stuff, but some games like to pick a different GPU than the primary display, hence disabling in Device Manager to prevent that.
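
    The same toggle can be scripted instead of clicked through Device Manager; a rough sketch using pnputil (its /enum-devices, /disable-device, and /enable-device switches exist on recent Windows 10/11 builds), with a made-up instance ID for illustration:

    # Disable/enable a GPU from a script instead of Device Manager (run elevated).
    # The instance ID is an example; list real ones with "pnputil /enum-devices".
    import subprocess

    GPU_INSTANCE_ID = r"PCI\VEN_10DE&DEV_2684&SUBSYS_00000000&REV_A1\4&0&0008"  # example only

    def set_gpu_enabled(instance_id: str, enabled: bool) -> None:
        action = "/enable-device" if enabled else "/disable-device"
        subprocess.run(["pnputil", action, instance_id], check=True)

    set_gpu_enabled(GPU_INSTANCE_ID, False)  # park the GPU so games can't pick it
    set_gpu_enabled(GPU_INSTANCE_ID, True)   # bring it back when wanted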

  • @MatMarrash
    @MatMarrash 17 days ago

    If there's something you can cram into PCIe lanes, you bet Wendell's going to try it and then make an amazing video about it!

  • @cem_kaya
    @cem_kaya 25 days ago +8

    This might be very useful with CXL, if it lives up to expectations.

    • @jjaymick6265
      @jjaymick6265 25 days ago +2

      Liqid already has demos of CXL memory pooling with their fabric. I would not expect it to reach production before mid 2025.

    • @hugevibez
      @hugevibez 24 days ago +2

      CXL already goes far beyond this as it has cache coherency, so you can pool devices together much more easily. I see it as an evolution of this technology (and the NVSwitch stuff), which CXL 3.0 and beyond expands on even further with the extended fabric capabilities and PCIe Gen 6 speeds. I think that's where the holdup has been, since it's a relatively new technology and those extended capabilities are significant for hyperscaler adoption, which is what drives much of the industry, and especially the interconnects subsector, in the first place.

  • @scotthep
    @scotthep 24 days ago

    For some reason this is one of the coolest things I've seen in a while.

  • @brandonhi3667
    @brandonhi3667 24 days ago +1

    fantastic video!

  • @ralmslb
    @ralmslb 24 days ago +7

    I would love to see performance tests comparing the impact of the cable length, etc.
    Essentially, the PCIe speed impact, not only in terms of latency but also throughput: the native solution vs Liqid fabric products.
    I have a hard time believing that this solution has zero downsides, hence I wouldn't be surprised if the same GPU has worse performance over the Liqid fabric.

    • @MiG82au
      @MiG82au 24 days ago

      Cable length is a red herring. An 8 m electrical cable only takes ~38 ns to pass a signal, and the redriver (not retimer) adds sub-1 ns, while normal PCIe whole-link latency is on the order of hundreds of ns. However, the switching of the Liqid fabric will add latency, as will the redrivers.
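
      The back-of-envelope numbers, assuming roughly 0.7c signal velocity in copper:

      # Sanity check of the cable-delay figure above, assuming ~0.7c propagation in copper.
      cable_length_m = 8.0
      signal_speed_m_per_s = 0.7 * 3.0e8  # roughly 70% of light speed
      one_way_delay_ns = cable_length_m / signal_speed_m_per_s * 1e9
      print(f"{one_way_delay_ns:.0f} ns one way")  # ~38 ns, tiny next to hundreds of ns of link latency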

    • @paulblair898
      @paulblair898 24 days ago

      There are most definitely downsides. Some PCIe device drivers will crash with the introduction of additional latency, because fundamental assumptions were made when writing them that don't handle the >100 ns of latency the Liqid switch adds well. ~150 ns of additional latency is not trivial compared to the base latency of the device.

  • @shinythings7
    @shinythings7 25 days ago +1

    I was looking at the VFIO stuff to have everything in a different part of the house. Now this seems like just as good a solution. Having the larger heat-generating components in a single box and having the mobo/CPU/OS where you are would be a nice touch. Would be great for SFF mini PCs as well, to REALLY lower your footprint on a desk or in an office/room.

  • @michaelsdailylife8563
    @michaelsdailylife8563 24 days ago

    This is really interesting and cool tech!

  • @DaxHamel
    @DaxHamel 24 days ago

    Thanks Wendell. I'd like to see a video about network booting and imaging.

  • @chrismurphy2769
    @chrismurphy2769 24 days ago

    I've absolutely been wanting and dreaming of something like this

  • @Ben79k
    @Ben79k 24 days ago

    I had no idea something like this was possible. Very cool. It's not the subject of the video, but that iMac you were demoing on - is it rigged up to use as just a monitor, or is it actually running? Looks funny with the glass removed.

  • @reptilianaliengamedev3612
    @reptilianaliengamedev3612 23 days ago +1

    Hey, if you have to record in that noisy environment again, you can leave about 15 or 30 seconds of silence at the beginning or end of the video to use as a noise profile. In Audacity, use the noise reduction effect: generate the noise profile, then run it on the whole audio track. It should sound about 10x better and get rid of nearly all the noise.

    • @MartinRudat
      @MartinRudat 8 days ago

      I'm surprised Wendell isn't using a pair of communication earmuffs; hearing protection coupled with a boom mic (or a bunch of mics and post-processing), possibly fed directly to the camera.
      As far as I know, a good, comfortable set of earmuffs, especially something like the Sensear brand (which lets you have a casual conversation next to a diesel engine at full throttle), is more or less required equipment for someone who works in a data center all day.

  • @andypetrow4228
    @andypetrow4228 24 days ago

    I came for the magic... I stayed for the soothing painting above the tech bench.

  • @mritunjaymusale
    @mritunjaymusale 24 days ago

    I really wanted to do something similar on my uni's server for deep learning, since we had 2 GPU-based systems with multiple GPUs. Using this, we could've pooled those GPUs together to make a 4-GPU system in one click.

  • @AzNcRzY85
    @AzNcRzY85 24 days ago +2

    Wendell, does it fit in the Minisforum MS-01?
    It would be a massive plus if it did and worked.
    The RTX A2000 12GB is already good, but this would be a complete game changer for a lot of systems, mini or full desktop.

  • @christianhorn1999
      @christianhorn1999 24 days ago

      Cool. Is that the same thing notebooks do when they have a switchable iGPU and dedicated GPU?

  • @_GntlStone_
    @_GntlStone_ 25 days ago +27

    Looking forward to a L1T + GN collaboration video on building this into a working gaming test setup (Pretty Please ☺️)

    • @Mervinion
      @Mervinion 25 days ago +6

      Throw Hardware Unboxed into the mix. I think both Steves would love it. If only you could do the same with CPUs...

  • @stamy
    @stamy 24 days ago

    Wow, very interesting!
    Can you control power on those PCIe devices? I mean, let's say only one GPU is powered on at a time, the one that is currently being used remotely.
    Also, how do you send the video signal back to the monitor? Are you using an extra-long DisplayPort cable, or a fiber optic cable of some sort?
    Thank you.
    PS: What is the approximate price of such a piece of hardware?

  • @LeminskiTankscor
    @LeminskiTankscor 25 days ago

    Oh my. This is something special.

  • @bluefoxtv1566
    @bluefoxtv1566 24 days ago

    Such a good thing for cloud computing.

  • @jayprosser7349
    @jayprosser7349 24 days ago

    The Wizard at Techpowerup must be aware of this.

  • @ryanw.9828
    @ryanw.9828 24 days ago +1

    Hardware unboxed! Steve!!!!

  • @ShankayLoveLadyL
    @ShankayLoveLadyL 21 days ago

    Wow... this is truly amazing, impressive, I dunno... I usually expect smart stuff from this channel out of my list of tech channels, but this time what Wendell has done is in another league entirely.
    I bet Linus was thinking about something similar for his tech lab, but now there is someone he could hire for his automated mass-testing project.

  • @AGEAnimations
    @AGEAnimations 24 days ago

    Could this use all the GPUs for 3D rendering in Octane or Redshift for a single PC, or is it just one GPU at a time? I know Wendell mentions SLI briefly, but having a GPU render machine connected to a small desktop PC would be ideal for a home server setup.

  • @smiththewright
    @smiththewright 25 days ago

    Very cool!

  • @brianmccullough4578
    @brianmccullough4578 25 days ago

    Woooooo! PCI-E fabrics baby!!!

  • @dangerwr
    @dangerwr 23 days ago

    I could see Steve and team at GamersNexus utilizing this for retesting older cards when new GPUs come out.

  • @solidreactor
    @solidreactor 24 days ago +1

    I have been thinking about this use case for a year now, for UE5 development, testing, and validation. Recently I've also thought about using image recognition with ML or "standard" computer vision (or a mix) for automatic validation.
    I can see this being valuable both for developers and for tech media benchmarking. I just need to allocate time to dive into this... or get it served "for free" by Wendell.
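
    A starting point for the "standard" computer vision half, assuming captured frames get written to disk, is a simple golden-image comparison; the paths and pass/fail threshold below are examples:

    # Rough sketch: flag a run as failed if a captured frame drifts too far from a
    # known-good reference image. Requires opencv-python and numpy; paths are examples.
    import cv2
    import numpy as np

    reference = cv2.imread("golden/menu_screen.png")   # captured once on a known-good run
    candidate = cv2.imread("capture/menu_screen.png")  # frame grabbed from the run under test

    if reference is None or candidate is None:
        raise SystemExit("missing reference or capture image")

    candidate = cv2.resize(candidate, (reference.shape[1], reference.shape[0]))
    mean_error = float(np.mean(cv2.absdiff(reference, candidate)))

    # Threshold picked arbitrarily for illustration; tune per title and scene.
    print("PASS" if mean_error < 5.0 else f"FAIL (mean pixel error {mean_error:.1f})")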

  • @shodan6401
    @shodan6401 20 days ago

    Man, I'm not an IT guy. I know next to nothing. But I love this sht...

  • @misimik
    @misimik 24 days ago +1

    Guys, can you help me gather Wendell's most-used phrases? Like
    - coloring outside the lines
    - this is not what you would normally do
    - this is MADNESS
    - ...

    • @tim3172
      @tim3172 24 days ago

      He uses "RoCkInG" 19 times every video like he's a tween that needs extra time to take tests.

  • @immortalityIMT
    @immortalityIMT 22 days ago

    How would you do a cluster for training LLMs: first 4 x 8 GB in one system, and then a second 4 x 8 GB over LAN?

  • @Edsdrafts
    @Edsdrafts 24 days ago

    How about power usage when you have all these GPUs running? Do the rest idle at reasonable wattage / temps when unused? It's also hard to do game testing due to thermals, as you are using a different enclosure from a standard PC, etc. There must be noticeable performance loss too.

    • @jjaymick6265
      @jjaymick6265 24 days ago

      I can't speak for client GPUs, but enterprise GPUs have power-saving features embedded in the cards. For instance, an A100 at idle pulls around 50 watts; at full tilt it can pull close to 300-ish watts. The enclosure itself pulls about 80 watts empty (no GPUs). As far as performance loss goes, based on my testing of AI/ML workloads on GPUs inside Liqid fabrics compared with published MLPerf results, I would say the performance loss is very minimal.

  • @Jdmorris143
    @Jdmorris143 24 days ago

    Magic Wendell? Now I cannot get that image out of my head.

  • @sebmendez8248
    @sebmendez8248 24 days ago

    This could genuinely be useful for massive engineering firms. Most engineering firms nowadays use 3D modelling, so having a server-side GPU setup could mean every single computer on site has access to a 4090 for model rendering and creation, without buying and maintaining 100+ GPUs.

  • @chrisamon5762
    @chrisamon5762 24 days ago

    I might actually be able to use all my pc addiction parts now!!!!!

  • @NickByers-og9cx
    @NickByers-og9cx 24 days ago +1

    How do I buy one of these switches? I must have one.

  • @ko260
    @ko260 25 days ago

    So instead of a disk shelf, could I have one of those racks and fill it with HBAs instead of GPUs, or replace them all with M.2 cards? Would that work?! @Level1Techs

  • @dmytrokyrychuk7049
    @dmytrokyrychuk7049 24 days ago

    Can this work in an internet cafe or would the latency be too big for competitive gaming?

  • @Jimster481
    @Jimster481 20 days ago

    Wow, this is so amazing. I bet the pricing is far out of the range of a small office like mine, though.

  • @cal2127
    @cal2127 24 days ago +2

    What's the price?

  • @shodan6401
    @shodan6401 20 days ago

    I know that GPU riser cables are common, but realistically, how much latency is introduced by having the GPU at such a physical distance compared to being directly in the PCIe slot on the board?

  • @GameCyborgCh
    @GameCyborgCh 24 days ago

    your test bench has an optical drive?

  • @Ironic-Social-Phobia
    @Ironic-Social-Phobia 24 days ago +1

    Now we know what really happened to Ryan this week, Wendell was practicing his magic trick!

  • @stamy
    @stamy 24 days ago

    Let's say you have a WS motherboard with 4 PCIe x16 expansion slots.
    Can you dynamically activate/deactivate these PCIe slots in software so that the CPU can only see one at a time? Each slot is populated with a GPU, of course. This would then need to be combined with a KVM to switch the video output to the monitor.

  • @rojovision
    @rojovision 24 days ago

    What are the performance implications in a gaming scenario? I assume there must be some amount of drop, but I'd like to know how significant it is.

    • @Mpdarkguy
      @Mpdarkguy 24 days ago

      A few ms of latency I reckon

  • @_neon_light_
    @_neon_light_ 24 days ago

    From where can one buy this hardware? I can't find any info on Liqid's website. Google didn't help either.

  • @talon262
    @talon262 24 days ago

    My only question is how much latency does this add, even in a short run in the same rack?

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 27 days ago +2

    …could you hook up a second Liqid adapter in the same client system to a Gen5 x4 M.2 slot to not interfere with the 16 dGPU lanes?

    • @jjaymick6265
      @jjaymick6265 25 days ago +2

      Liqid does support having multiple HBAs in a single host. Each fabric device is provisioned directly to a specific HBA, so your idea of isolating disk I/O from GPU I/O would work.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 24 days ago +1

      Thanks for that clarification.

  • @spicyandoriginal280
    @spicyandoriginal280 24 days ago

    Does the card support 2 GPUs at x8 each?

  • @gollenda7852
    @gollenda7852 24 days ago

    Where can I get a copy of that Wallpaper?

  • @leftcoastbeard
    @leftcoastbeard 24 days ago

    Reminds me of Compute Express Link (CXL)

  • @lemmonsinmyeyes
    @lemmonsinmyeyes 24 days ago

    This could greatly cut down on hardware for render farms in VFX. Neat

  • @kirksteinklauber260
    @kirksteinklauber260 24 days ago +2

    How much does it cost?

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 24 days ago

    Thought of another question: Can the box that houses all the PCIe AICs hard-power off/on the individual PCIe slots via the management software in case there is a crashed state? Or do you have to do something physically at the box?

    • @jjaymick6265
      @jjaymick6265 24 days ago +1

      There are no slot power control features... There is, however, a bus reset feature of the Liqid fabric to ensure that devices are reset and in a good state prior to being presented to a host. So if you have a device in a bad state you can simply remove it from the host and add it back in, and it will get bus-reset in the process. Per-slot power control is a feature being looked at for future enclosures.

    • @abavariannormiepleb9470
      @abavariannormiepleb9470 24 days ago

      Again, thanks for that clarification. I would definitely appreciate per-slot power on/off control; it would be helpful for diagnosing possibly defective PCIe cards and would of course also reduce power consumption, with unused cards not just idling around.

  • @arnox4554
    @arnox4554 24 days ago

    Maybe I'm misunderstanding this, but wouldn't the latency between the CPU and the GPU be really bad here? Especially with the setup Wendell has in the video?

  • @georgec2932
    @georgec2932 24 days ago

    How much worse is performance in terms of timing/latency compared to the slot on the motherboard? I wonder if it would be noticeable for gaming...

  • @wobblysauce
    @wobblysauce 24 days ago

    Cool as heck it is.

  • @thepro08
    @thepro08 24 days ago

    So you're saying I can do this with my 15 Gbps internet, and connect my monitor or PC to a server game and PS5??? Just have to pay 20 per month, right, like Netflix?

  • @4megii
    @4megii 25 days ago

    What sort of cable does this use? Could it be run over fibre instead?
    Also, can you have a single GPU box with a few GPUs and then use those GPUs interchangeably with different hosts?
    My thought process is: GPU box in the basement with multiple PCs connected over fibre cables, so I can just switch the GPU on any device connected to the fibre network.

    • @jjaymick6265
      @jjaymick6265 25 days ago

      The cable is an SFF-8644 cable using copper as the medium (mini-SAS). There are companies that use optical media, but they are fairly pricey.

  • @fanshaw
    @fanshaw 23 days ago

    I just want this inside my workstation - a bank of x16 slots where I get to dynamically (or even statically, with DIP switches) assign PCIe lanes to each slot or to the chipset.

  • @hugevibez
    @hugevibez 24 days ago

    The real question is, does this support Looking Glass so you can do baremetal-to-baremetal video buffer sharing between hosts? I know it should technically be possible since PCIe devices on the same fabric/chassis can talk to one another. Yes, my mind goes to some wild places, I've also had dreams of Looking Glass over RDMA. Glad you've finally got one of these in your lab. Anxiously awaiting the CXL revolution which I might be able to afford in like a decade.

  • @Dan-Simms
    @Dan-Simms 24 days ago

    Very cool

  • @philosoaper
    @philosoaper 24 days ago

    Fun.. not sure it would be ideal for competitive gaming exactly.. but very very cool

  • @BestSpatula
    @BestSpatula 24 days ago

    With SR-IOV, could I attach different VFs of the same PCIe card to separate computers?

    • @jjaymick6265
      @jjaymick6265 24 days ago +1

      Liqid does support SR-IOV, but the VFs are not composable. The way SR-IOV is leveraged today is that a single card that supports SR-IOV is exposed to a host, and the VFs and SR-IOV BAR space are then registered by that host. That host can then present each of those VFs to a VM just as if the card were physically installed in the host.
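
      On the host side, spawning VFs on an SR-IOV-capable card is a one-line sysfs write; a rough sketch below (example PCI address, needs root, and whether a given GPU exposes SR-IOV at all depends on the vendor and its licensing):

      # Create virtual functions on an SR-IOV capable device; each VF then shows up
      # as its own PCI function that the host can hand to a VM. Example address only.
      from pathlib import Path

      PF_BDF = "0000:41:00.0"  # example physical function
      NUM_VFS = 4

      sriov = Path(f"/sys/bus/pci/devices/{PF_BDF}/sriov_numvfs")
      sriov.write_text("0")            # clear any existing VFs first
      sriov.write_text(str(NUM_VFS))   # then spawn the requested number of VFs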

  • @Haleskinn
    @Haleskinn 24 days ago

    @linustechtips some ideas for upcoming video? :P

  • @felixspyrou
    @felixspyrou 24 days ago

    Here, take my money, this is amazing! With the number of computers I have, I would be able to use my best GPU on all of them!

  • @daghtus
    @daghtus 24 days ago

    What's the extra latency?

  • @dgo4490
    @dgo4490 24 days ago

    How's the latency? Every PHY jump induces latency, so considering all the hardware involved, this should have at least 3 additional points of extra latency. So maybe 4-5 times the round trip of native PCIe...

    • @jjaymick6265
      @jjaymick6265 24 days ago

      100 ns per hop. In this specific setup that would mean 3 hops between the CPU and the GPU device: 1 hop at the HBA, 1 hop at the switch, 1 hop at the enclosure. That's ~300 ns each way, so 600 nanoseconds round trip.

  • @animalfort3183
    @animalfort3183 25 days ago

    I don't know how to thank you enough without being weird man....XOXO

  • @shadowarez1337
    @shadowarez1337 21 days ago

    Have we cracked the code to pass an iGPU to a VM in, say, TrueNAS?

  • @annebokma4637
    @annebokma4637 23 days ago

    I don't want an expensive box in my basement. In my attic high and DRY. 😂

  • @PsiQ
    @PsiQ 24 days ago

    I might have missed it, but would/will/could there be an option to shuffle GPUs (or AI hardware) on a server running multiple VMs over to the VMs that currently need them, and "unplug" them from idle ones? Well, OK, you would need to run multiple uplinks at some point, I guess, or have all GPUs directly slotted in your VM server.

    • @jjaymick6265
      @jjaymick6265 24 days ago +1

      The ability of Liqid to expose one or multiple PCIe devices to one or multiple hypervisors is 100% a reality. As long as you are using a Linux-based hypervisor, hot-plug will just work. You can then expose those physical devices or vGPUs (if you can afford the license) to one or many virtual machines. The only gotcha is that to change GPU types in the VM you will have to power-cycle the VM, because I have not found any hypervisor (VMware / Prox / XCP-NG / KVM-qemu) that supports hot-plugging PCIe into a VM.

    • @PsiQ
      @PsiQ 23 days ago

      @@jjaymick6265 thanks ;-) you seem to be going round answering questions here 🙂

    • @jjaymick6265
      @jjaymick6265 21 days ago

      @@PsiQ I have 16 servers (Dell MX blades and various other 2U servers) all attached to a Liqid fabric in my lab, with various GPUs/NICs/FPGAs/NVMe, that I have been working with for the past 3 years. So I have a fair bit of experience with what it is capable of. Once you stitch it together with some CI/CD stuff via Digital Rebar or Ansible, it becomes super powerful for testing and automation.

  • @Gooberpatrol66
    @Gooberpatrol66 24 days ago

    This would be great for KVM. Plug USB cards into PCIe, and send your peripherals to all your computers.

  • @japanskakaratemuva5309
    @japanskakaratemuva5309 22 days ago

    Nice ❤

  • @OsX86H3AvY
    @OsX86H3AvY 24 days ago

    I'd like to be able to hot-plug GPUs in my running VMs as well... how nice would it be to have, say, two or three VM boxes for CPU and memory, one SSD box, one GPU box, and one NIC box, so you could just swap any NIC/GPU/disk to any VM in any of those boxes in any combination... that'd be sweet... I definitely don't need it, but that just makes me want it more.

    • @jjaymick6265
      @jjaymick6265 20 days ago

      Over the last couple of days I have been working on this exact use case. In most environments this simply is not possible; however, in KVM (libvirt) I have discovered the capability to hot-attach a PCIe device to a running VM like this: virsh attach-device VM-1 --file gpu1.xml --current . So with Liqid I can hot-attach a GPU to the hypervisor and then hot-attach said GPU all the way to the VM. The only thing I have not figured out is how to get the BAR address space for the GPU pre-allocated in the VM so the device is actually functional without a VM reboot. As of today the GPU will show up in the VM, but drivers cannot bind to it because there is no BAR space allocated for it, so in lspci the device has a bunch of unregistered memory BARs and drivers don't load. Once BAR space can be pre-allocated in the VM, I am confident this will work. Baby steps.
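
      For anyone wanting to try the same flow, a rough sketch of generating that hostdev XML and attaching it with virsh; the VM name and PCI address are examples:

      # Hot-attach a passed-through GPU to a running libvirt VM, mirroring the
      # "virsh attach-device ... --current" command above. Example names/addresses only.
      import subprocess
      import tempfile

      VM_NAME = "VM-1"
      DOMAIN, BUS, SLOT, FUNCTION = "0x0000", "0x21", "0x00", "0x0"  # example GPU address on the host

      hostdev_xml = f"""
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='{DOMAIN}' bus='{BUS}' slot='{SLOT}' function='{FUNCTION}'/>
        </source>
      </hostdev>
      """

      with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
          f.write(hostdev_xml)
          xml_path = f.name

      # --current applies the change to the running domain only, as described above.
      subprocess.run(["virsh", "attach-device", VM_NAME, "--file", xml_path, "--current"], check=True)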

  • @ThatKoukiZ31
    @ThatKoukiZ31 24 days ago

    Ah! He admits it, he is a wizard!

  • @mohammedgoder
    @mohammedgoder 25 days ago +2

    Is there any PCIe rack-mount chassis that can allow this to be a rack-mounted solution?

    • @jjaymick6265
      @jjaymick6265 24 days ago

      Typical installation is rackmount. It is all standard 19-inch gear that gets deployed in datacenters around the world.

    • @mohammedgoder
      @mohammedgoder 24 days ago

      @@jjaymick6265 can you post a model number that you'd recommend?

    • @mohammedgoder
      @mohammedgoder 24 days ago

      @@jjaymick6265 Is there any particular model that you'd recommend?

    • @jjaymick6265
      @jjaymick6265 20 days ago

      @@mohammedgoder Somehow my previous comment got removed. If you are looking for supported systems, fabric devices, etc., the best place to check is Liqid's website. Under resources they publish an HCL of "Liqid Tested/Approved" devices.

    • @mohammedgoder
      @mohammedgoder 13 days ago

      I found it. Wendell mentioned it in the video.
      Liqid makes rackmount PCIe enclosures.

  • @Dr_b_
    @Dr_b_ 24 days ago +1

    Do we want to know what this costs?

  • @AdmV0rl0n
    @AdmV0rl0n 24 days ago

    I like some of this.
    But let me look at the far end: outside of Parsec or similar, how am I re-routing the video signal or playback? Perhaps there needs to be a wink-wink-nudge-nudge Level One KVM solution. But outside of this, walking down to the basement to re-plumb the video cables old-school to the new/changed host kinda degrades the magic of the idea a bit...

  • @dansanger5340
    @dansanger5340 25 days ago +1

    I didn't even know this was possible. How long can the cable run be without degrading performance?

    • @jjaymick6265
      @jjaymick6265 25 days ago +1

      In a datacenter setting using copper cables, it is limited to 3 meters between host and switch port, and 2 meters between switches and enclosures. (He did not show the switches, but yes, there are PCIe switches involved also.) Host --> 3m --> PCIe switch --> 2m --> enclosure

    • @dansanger5340
      @dansanger5340 24 days ago

      @@jjaymick6265 Thanks for the info!

  • @NdxtremePro
    @NdxtremePro 24 days ago

    This seems tailor-made for all those single-slot consumer boards that get sold. It would make them much more useful.
    I can imagine it could someday reduce the cost of a recording studio with all of its specialized audio cards, if they could spend a fraction of the cost on the motherboard and share the cards across multiple pieces of equipment.
    I could see cryptominers using the best cards depending on the current pricing.
    I could see switching GPUs depending on which gives the best gaming performance.
    How about retro machines using older PCIe cards with VMs?
    I imagine the bandwidth of older GPUs wouldn't saturate the bus, so you could connect them to the system and pass them through to individual VMs?
    Or some PCIe 1.0 cards in CrossFire and SLI with a one-slot motherboard.
    Way overkill, but seriously cool tech.
    Speaking of that, you could get some PCIe-to-PCI-X audio equipment, pass it through to some Windows XP VMs, and get the latency goodness and unrestricted access audio engineers loved, in a modern one-slot solution.
    Enterprise side, I could see creating a massive networking VM set with one of these cards in each of the main systems' slots, attached to a separate PCIe box, each set up with those multifunction cards. A custom, bespoke network switch.

  • @vdis
    @vdis 25 days ago

    What's your monthly power bill?!

  • @crazykkid2000
    @crazykkid2000 24 days ago

    I want this!!!

  • @hentosama
    @hentosama 24 days ago

    If there's no bottleneck, seems perfect for video card reviews.

  • @HumblyNeil
    @HumblyNeil 24 days ago

    The iMac bezel blew me away...

  • @Orochimarufan1900
    @Orochimarufan1900 13 days ago

    This looks like it might also eventually enable migration of VMs with PCIe passthrough.