Thanks. I’m planning on building a “massive” 2 GPU system for home use.
How did it go man? I also want to build something like that and then stumbled on this video, which is excellent
Extraordinary presentation. Covered all the important topics in depth and with real teaching talent. Many thanks!!
What an amazing presentation - one of the better videos I have watched. Great breadth and depth.
It's nice to see a holistic explanation of designing / building / installing a complex multi-rack system... As someone who has spent years working on both sides of the "analog/digital divide" (physical data center world / digital world's various segments), the un-sexy physical aspects of available rack space / power / cooling / floor loading / network uplink bandwidth are often overlooked (often assumed)... A semi arrives with a pallet: "Hey Carl, you can have this online in a couple days, right?"
Hey Carl, thanks for the kind comment. Glad you like the video. It's always funny how difficult it can be to 'bridge the divide' between the physical world and virtual world. Many SWEs expect to be able to "spin up" 1000 servers with an API call and forget that there are actual physical objects and tons of people that actually make that happen when you're on-prem.
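Those physical constraints are easy to put numbers on. A back-of-envelope sketch of the rack-planning check described above; every figure here (power draw, rack budget, floor loading) is an illustrative assumption, not a Lambda spec:

```python
# Back-of-envelope rack planning: do N GPU servers fit a rack's
# power, space, and floor-loading budget? All numbers are assumptions.

def rack_fit(servers, server_kw=6.5, server_ru=4, server_kg=32,
             rack_kw=17.0, rack_ru=42, floor_kg=900):
    """Return which budgets (power, space, weight) the config fits."""
    return {
        "power":  servers * server_kw <= rack_kw,
        "space":  servers * server_ru <= rack_ru,
        "weight": servers * server_kg <= floor_kg,
    }

# A 4U 8-GPU server can draw several kW, so with these assumed
# numbers a 17 kW feed caps the rack at 2 servers even though
# 10 of them would physically fit in 42U.
print(rack_fit(2))  # every budget fits
print(rack_fit(3))  # power budget fails first
```

The point of the exercise: power (and the cooling to match it) is usually the binding constraint long before rack units run out.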
Most professional and holistic explanation I heard about this topic.
Thank you so much!!
Ground-level details with all the critical aspects of GPU cluster design covered, right down to the last cable-length calculation. Nice.
One of the best presentations on GPU cluster design, even at 3 years old. Great teaching skills!
Thank you for highlighting underrated topics/options that companies should reconsider within their compute infrastructure.
Thank you. You got me started years ago with your lambda stack -- the only way I could get TensorFlow installed on Linux.
Best of the best presentation on server clusters. Author presented deep understanding of server clusters so that he explains things in an easy way. thank you!!!
Lots and lots of A100 GPUs. Every single one of them is a monster, almost 2x faster memory than the next best GPU. An entire room full of A100 racks... holy cow.
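For context on that "almost 2x" memory claim, a quick ratio from publicly listed spec-sheet numbers (these vary by SKU, so treat the exact figures as assumptions):

```python
# Rough memory-bandwidth comparison from published spec sheets.
# Exact figures vary by SKU (40GB vs 80GB, SXM vs PCIe), so these
# are approximations, not authoritative numbers.
a100_gbps = 1555   # A100 40GB, HBM2
v100_gbps = 900    # V100 32GB SXM2, HBM2

ratio = a100_gbps / v100_gbps
print(f"A100 vs V100 memory bandwidth: {ratio:.2f}x")  # ~1.73x
```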
"Tell me how difficult it is so I can buy your solution" kind of talk
Very expert suggestions for HPC and compute sizing.
This is super insightful!
Genius bait and switch. Props!
Lambda needs an explanation on the difference between "building" and "designing".
I have three computers, a NAS, and an external hub. I think I don't need another server because of the NAS. As far as my architecture goes, is there anything else you can advise?
Really good analysis and presentation!
I want to build a multi-node dual EPYC 7742 based system for goofing around and learning this stuff.
Hey Stephen, this is highly informative. I work on this clustering. Now I am able to connect the dots and get the bigger picture.
Where can I read about the relationship between NUMA topology and GPU peering capability?
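The short version of the NUMA/peering relationship: GPUs attached to the same NUMA node (same CPU socket / PCIe root complex) can usually do direct peer-to-peer transfers, while traffic between GPUs on different nodes has to cross the socket interconnect. A hedged sketch of grouping GPUs by NUMA node — note the matrix below is a simplified, made-up format for illustration, not real `nvidia-smi topo -m` output (which has extra columns like CPU affinity):

```python
# Sketch: group GPUs by NUMA node from a topology matrix.
# The sample text is invented for illustration; on a real system
# you would run `nvidia-smi topo -m` and parse its stdout instead.
sample = """\
GPU0 GPU1 GPU2 GPU3 NUMA
GPU0 X NV4 SYS SYS 0
GPU1 NV4 X SYS SYS 0
GPU2 SYS SYS X NV4 1
GPU3 SYS SYS NV4 X 1
"""

def gpus_by_numa(topo_text):
    """Map NUMA node -> list of GPUs. Peers on the same node can
    usually do direct (P2P) transfers without crossing the socket
    interconnect."""
    nodes = {}
    for line in topo_text.splitlines()[1:]:  # skip header row
        cols = line.split()
        gpu, numa = cols[0], cols[-1]
        nodes.setdefault(numa, []).append(gpu)
    return nodes

print(gpus_by_numa(sample))
# {'0': ['GPU0', 'GPU1'], '1': ['GPU2', 'GPU3']}
```

Good starting points to read further are NVIDIA's docs on GPUDirect P2P and the `nvidia-smi topo` legend (NV#, PIX, SYS, etc.).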
Our group ordered around 10 Lambda PCs a year ago. Right now more than 5 have problems. Some of them do not start up. Mine gets stuck randomly....
Have you tried looking into the reasons?
Meng Xu, you can email support@lambdalabs.com 24/7 or call +1 (866) 711-2025 during business hours. Sorry to hear you're having issues, I'm sure we'll be able to resolve them quickly.
Our team has 5 Lambda laptops; they have worked perfectly for over a year now.
We also have a workstation with 3 GPUs, works great too.
Looking for work, would love to help.
Highly appreciated... YouTube should have a separate category called Founder's Videos.
This was amazing. Thank you.
Thanks for the video.
What if I have a model that I just want to run as provided? It hasn't really been optimized to run across the cluster and has memory requirements greater than any individual system I have. I feel safe assuming that for that specific case a shared distributed memory model would be the solution to run that specific app, yes? Is there any distribution of Linux that has support for such a memory model? It doesn't have to be a full-blown single system image. Perhaps a patch to the memory management driver so storage can be treated as an extension of system memory and not swap memory?
Does any such software exist?
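One building block for the "storage as an extension of memory, not swap" idea already exists in stock Linux: file-backed `mmap`, where the kernel demand-pages a file in and out so a process can address more data than fits in RAM. This is per-process file-backed memory, not the distributed shared memory system the question asks about, so treat it as a minimal sketch of the mechanism only (file name and sizes are illustrative):

```python
# Sketch: address a file as if it were memory via mmap.
# The OS pages it in and out on demand, so the working set can
# exceed RAM without touching swap. Not a distributed shared
# memory system -- just the single-node building block.
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "big_buffer.bin")
size = 16 * 1024 * 1024  # 16 MiB stand-in for a huge buffer

with open(path, "wb") as f:
    f.truncate(size)          # sparse file: no data written yet

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), size)
    buf[0:5] = b"hello"       # looks like an ordinary memory write
    assert buf[0:5] == b"hello"
    buf.close()

os.remove(path)
```

For the multi-node case, the usual answer is frameworks that partition the model across machines rather than a kernel that fakes one big address space.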
What a remarkable video
I just love this kind of thing. How can I start this kind of business? How can I find customers for, like, a small node and start building up?
Does it work in man????
very informative, thank you.
very informative, thanks!
Is this opensourced?
Do you guys have a GPU cluster optimized for 3D rendering?
Great insight!
Excellent.
Still most relevant today, 2 years later. Thanks.
thanks for the inspiration
It is more a lecture than a tutorial. Thx.
Do Lambda products (GPU clusters) ship with a manual to help you set up the servers for use?
You are insane, thank you
Very Based
My machine learning team consists of me, baby
Hell yes Lambda Lambda Lambda.
This guy is smart as fak
Now if only I was a billionaire so I could make use of this great information...
Half Life man!
Talk about what you're an expert in... don't talk useless stuff without knowing all the facts
headeggs
This dude's in full submission mode. Sad
speak UP