Finally a build that can run Crysis at 1080p 30fps, nice 👍🏼
640p 20 fps Maximum
@@Porsche_x my results are using Frame Generation and low settings
@@dez7roy3r So yes, I said 640p at 20 fps on very low graphics, without DLSS and without frame generation
LTT ran Crysis on an AMD server CPU with software rendering and got like 10 fps or so
with DLSS 3, FSR 3 and low settings 🤣
You know shit's gonna be real expensive when you don't even recognize half of the hardware names.
Couldn't be a cheap one
But all of them are easily recognizable?
Tell some average gamer that you got an NVIDIA A100 and see their reaction @@Lainyyyyy
We're talking about a somewhat above-average PC user, not tech professionals @@Lainyyyyy
Jealous 😂😂😂😂
Finally, a server for running the latest version of Minecraft in the future.
😂😂😂😂😂 Minecraft
Tetris
still only get 45fps in vanilla MC
Vastly underrated comment. 😂
You clearly know what's up 😎
This PC consumes more power than all the electronics in my house combined
About 2 kW when all processors are fully loaded
Your water kettle would easily consume more power than this machine.
@@NowhereNear42 we're talking about electricity, not heat
@@thenasiudk1337 You're talking about power. This PC uses electricity to produce heat. So does your water kettle.
It doesn't use more than 2 kW.
Finally, a PC to compile Gentoo
You would have the fastest emerge times in the world
my i5 13600kf took 4h to emerge -avq
this thing would do that in 10 sec xD
I can't imagine how satisfying it must be to train LLMs on that thing.
you can easily rent a setup like this for around $4 per hour
@@AmanUrumbekov Renting isn't as satisfying as owning it and running and training LLMs locally
@@rd_626 what the hell are you talking about? The satisfaction OP mentioned isn't about "owning and training"...it's about how fast it will train LLMs.
Try renting TPU v4s or even the new TPU v5s....smooth as butter. You will never ever run anything on GPUs again lol
@@sohailmd123 Just FYI..... I wasn't saying that to the OP
@@rd_626 "satisfying" is literally OP's point of contention. You can't take it out of OP's context....any ML engineer only gets satisfied by how fast their large dataset trains before their eyes....not by owning costly shit just to watch it train mid compared to TPUs
would not like a coolant leak in that
-What do you need a build like that for?
-So Excel runs without lag.
Finally I'll be able to open 4 tabs in Chrome))
And during a break you can lay out a proper game of solitaire.
LOL
I'm afraid to ask this question and would rather just look at this beauty, but "Does it even run..?"
Crysis....
Used for AI backend stuff. Mostly for training. Runs only on Linux
😂👏... it's top of the line ... Runner 🏃♂️ it seems
I am not a fan of liquid-cooled servers, but we have air-cooled units with similar specs (mainly Supermicro) and they are way more crowded and noisy as hell. That one looks really nice and neat, and you should be able to sit next to it without going deaf. Also, in the long run it won't end up so full of dust and crap. This one of yours is a custom unit though, so the client probably had certain requirements in mind, but not others. I see he/she decided to skip things like plug-and-play redundant PSUs, for example. Also, any leak could cause catastrophic failure and downtime, which is not always acceptable. Anyway, very nice.
I completely agree, liquid cooling in a server environment is complete nonsense. It doesn't matter how good the cooling system is, it will leak sooner or later.
@@noJobProgrammer I know some servers use liquid cooling, but it's probably more like AC, with a heat exchanger close to the server racks rather than water blocks directly on the components
In high-density applications such as HPC or AI training, liquid-to-chip cooling can be used, particularly in the 80 kW/rack+ range
IBM mainframes can be liquid cooled. Given their reputation for reliability, I'm sure reliability isn't an issue if you do it correctly.
@@noJobProgrammer if it leaks, you haven't done it right. Does your house plumbing leak every few years, for example? If you do it correctly, there is absolutely no problem to be had there
It must be so scary to work on GPUs that expensive
Kinda want to see the thermal performance of this build. The fans don't seem powerful enough.
It's not about the fans. It's about the shitty MX-4 that he put on everything
Yeah, the most overrated thermal compound. Noctua NT-H2 is way better for example.
@@Turborider thermal compound isn't making or breaking this build lol. It's maybe the least important part of cooling as long as it's applied right.
@@MustafaKhan-hg8lu I have my own experience, built up over the last decade on many different computers and laptops. But I'm not going to argue; just do whatever you think is right.
@@Turborider then you should use Kryonaut Extreme or a Kryosheet. I myself work on a lot of stuff, and MX-4 is completely fine for everything you could want as a "normal" person.
It also really depends on what you want to do with it. For example, I've had a bunch of issues with NT-H2 squishing away from my GPU die due to pump-out caused by the extreme heat and power I was pushing through that card (3090 with an XOC BIOS).
On my 7900X, it's another story. No direct-die setup. NT-H2 was fine and pretty okay, one of the better pastes I've used. But with Kryonaut Extreme, I got way more stable clock speeds while pushing the same power and thermals. Heat transfer is simply better with this one.
However, I would not recommend it to anyone who doesn't want to push for the absolute best results. It's simply too expensive. But for my personal builds, I hardly use anything else anymore.
Watching that cloth glove get close to the socket in the first 5 seconds was nerve-racking
I just had to ask...
"The classic question, "But can it run Crysis?" has been a humorous way to ask if a computer is truly powerful, given the notoriously demanding nature of the original "Crysis" game when it was released. The server workstation you've described, with its dual Intel Xeon Scalable 5220 CPUs, 256GB of DDR4 ECC/REG RAM, and especially with four NVIDIA A100 80GB GPUs, is vastly more powerful than what's required to run the original "Crysis" game or even its remastered versions.
However, there are a few caveats to consider:
GPU Compatibility: The NVIDIA A100 GPUs are designed for compute tasks, AI workloads, and data center applications rather than gaming. While they possess more than enough raw power to handle a game like "Crysis," their architecture and drivers are not optimized for gaming performance. Therefore, without proper gaming drivers, the experience might not be as smooth as expected on gaming-optimized GPUs.
Software Compatibility: Running a game on a server-grade operating system might require additional configuration, as these systems are not typically set up for gaming out of the box. Compatibility layers or virtualization with a gaming-optimized OS might be necessary.
Overkill Configuration: This setup is overkill for just about any current gaming requirement, including "Crysis." The cost and capabilities of this system far exceed what's necessary for gaming, making it more suitable for professional and industrial applications mentioned previously.
In summary, while this server workstation could technically run "Crysis" and virtually any other game at the highest settings, it's akin to using a space shuttle to commute to work: far beyond what's necessary and not optimized for the task. The real value of this setup lies in its ability to perform high-level computational tasks, AI and ML workloads, and handle massive data processing challenges."
A simple example would be movie making, games, large-scale 3D render/preview, real-time weather, or anything 'AI' like water simulation, etc.
There are 'desktop supercomputers' if you're keen on having server CPUs to run Crysis
W ai comment
i guess a single cooling fan is worth my entire house but okok 👍
Love to see the performance stats on it once it's powered up.
Assembling a server is one of the most beautiful arts to watch!!! Congrats on the video!!!
Is that 2kw PSU enough to run two Xeons and 4 A100s?
Yes
If you assume 500W per component at full power (it could happen), then it's 1000W short.
@@NegativeROG A100s are 250 W.
@@Mr.Not_Sure lul
@@Mr.Not_Sure An A100 on AWS can use up to 450 W. Still, in most local setups people will limit it to 350-ish watts, because even good liquid cooling won't cut it.
FYI, the instance I use is p4d.24xlarge, so they are 40GB versions, not 80GB. I don't know whether the 80GB ones run at a lower wattage or not.
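For anyone doing the math: here's a rough back-of-envelope power budget. The TDP figures below are assumptions taken from public spec sheets (Xeon Gold 5220 around 125 W, A100 80GB PCIe around 300 W), not measurements of this particular build.
```python
# Rough power budget -- TDP numbers are assumptions from public spec sheets,
# not measurements of this machine.
cpus = 2 * 125      # two Xeon Gold 5220, ~125 W TDP each
gpus = 4 * 300      # four A100 80GB PCIe, ~300 W TDP each
rest = 150          # rough guess for motherboard, RAM, drives, fans, pump

total = cpus + gpus + rest
print(f"estimated full load: {total} W")            # ~1600 W
print(f"headroom on a 2 kW PSU: {2000 - total} W")
```
On those assumed numbers a single 2 kW unit has some headroom, though transient spikes and the lack of redundancy are still fair concerns.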
Why don't you do an actual review of the power it has? Some benchmarks. It would be awesome.
Just one A100 would be my dream frfr
Why would you need one? It's mostly for data centers and stuff. Unless you're doing something that uses that 80GB of VRAM, for gaming it would be pretty underwhelming, as the driver support wouldn't be the best.
@@cooooooooooooooool3 because AI applications in general, as well as LLMs, run like shit on the RTX 4070. Training object detection nets also takes quite a while even with PyTorch+CUDA :(
@reapiu8316 That makes sense, you have pretty VRAM-intensive workloads. My day job is video editing, so I never exceed 4GB of VRAM use even with 4K editing
@@cooooooooooooooool3 my Firefox uses 11 GB of VRAM :(
Depends on the task. The Nvidia Tesla P40 is not a bad entry point for 24GB of VRAM on the used market. Runs RVC smooth as butter in my experience
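If you're not sure whether your workload really needs data-center amounts of VRAM, a quick check shows what your card offers and what your process is actually holding. A minimal sketch, assuming PyTorch with CUDA support is installed:
```python
import torch

# Minimal VRAM check -- assumes PyTorch with CUDA support is installed.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 1024**3:.1f} GiB total")
    # memory held by live tensors vs. reserved by the caching allocator
    print(f"allocated: {torch.cuda.memory_allocated(0) / 1024**3:.2f} GiB")
    print(f"reserved:  {torch.cuda.memory_reserved(0) / 1024**3:.2f} GiB")
```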
I'm waiting for the benchmark test, do you have it? How long does it take to train a medium-size nanoGPT model?
He just wanted to show off and prove he could afford one.
@@shidongxu3410 They are an actual computer shop! This is for their client
@@shidongxu3410 This is probably his job and he's just showing the process
@@shidongxu3410 no way someone owns 4 A100s just for personal use
@@rd_626 a lot of people can, but why they would is the proper question.
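No training benchmark is shown in the video, and nanoGPT training time depends heavily on token count, batch size, and interconnect. A crude way to sanity-check a rig before a long run is to time sustained matmuls on one GPU and compare against the card's spec sheet; this is only an illustrative sketch, assuming PyTorch with CUDA, and it is not a training benchmark.
```python
import time
import torch

# Crude throughput probe: time large BF16 matmuls on a single GPU.
n, iters = 8192, 50
a = torch.randn(n, n, device="cuda", dtype=torch.bfloat16)
b = torch.randn(n, n, device="cuda", dtype=torch.bfloat16)

torch.cuda.synchronize()
start = time.time()
for _ in range(iters):
    a @ b
torch.cuda.synchronize()

flops = 2 * n**3 * iters                 # ~2*n^3 FLOPs per matmul, summed
print(f"{flops / (time.time() - start) / 1e12:.1f} TFLOP/s sustained")
```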
2 questions: 1- Why not fill the remaining memory slots with 4x32GB?
2- Would custom-length cables have removed some clutter and improved the airflow in the 4U chassis?
Nice job, I've already got a perfect spot picked out for it.
Props to the guy building it, has to be the first person I have seen who isn't deathly afraid of a little extra thermal paste.
Your water cooling looks great. Such a high-end product deserves the best cooling solution. If it ever needs improving, the next step would be oil cooling.
Nice. Do similar water cooling adapters fitting V100 SXM2 modules exist?
MX-4? Seriously???? Why didn't you use Noctua NT-H2 thermal paste? Or Thermal Grizzly Kryonaut Extreme?
Y'all have been marketed to too hard. The thermal paste is maybe the least important part of cooling this, as long as it's applied evenly. You don't actually need Thermal Grizzly unless you wanna overclock or you just really like YouTube sponsors
@@MustafaKhan-hg8lu you know it's an A100 we're talking about, right?.. it's not your average 4060
Even GD900 would do the job there
You should have used the newest AMD Threadripper Pro for the servers.
It's crazy how much tech goes into pumping less than 2 volts through a few chips, and then we need a Hoover Dam to keep it all cool.
Strange setup IMO. These GPUs are built to run in servers, passively cooled by high-speed server fans and connected to each other with NVLink for rapid data transfer. Water cooling is quieter of course, but who cares about noise in a datacenter? BTW, Nvidia produces the A100 in the SXM credit-card-size format; you can easily fit 8 GPUs into a 2U server. Oh, and a second PSU would be nice for some redundancy.
SXM is not the size of a credit card, but more like a phone, something like an S23 Ultra
I would never ever place the thermal compound on a bare GPU chip that way. The CPU has a heat spreader, so it can work well even if some part of the spreader ends up without thermal compound, but the bare dies of the GPUs don't have that luck. I would spread the thermal compound with a plastic tool to make sure every part of the GPU die is covered.
yeah if pasting on a bare die it's definitely a better idea to manually spread to cover the whole die
Insanely powerful
Interesting setup. Is a 2 kW power supply enough to feed a system like this? And what about temperatures with the cover closed?
This is great but I am confused. I have a question: the ASUS WS C621E SAGE motherboard only supports PCIe 3.0 x16, which tops out around 16 GB/s, while the NVIDIA A100 supports PCIe 4.0 x16, which is about 32 GB/s. Doesn't that mean you can't get the A100's maximum performance? It can only transfer at about 16 GB/s through a PCIe 3.0 x16 slot.
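The slot mostly limits host-to-device copies, not compute on data already sitting in VRAM, so in practice the PCIe 3.0 penalty is often smaller than the raw numbers suggest. A small sketch, assuming PyTorch with CUDA, that measures the actual copy bandwidth so you can see which link speed you are really getting:
```python
import time
import torch

# Measure host-to-device copy bandwidth over the PCIe link.
# Roughly ~13 GiB/s suggests PCIe 3.0 x16, ~26 GiB/s suggests 4.0 x16.
x = torch.empty(256 * 1024**2, dtype=torch.float32).pin_memory()   # 1 GiB, pinned
torch.cuda.synchronize()
t0 = time.time()
x.to("cuda", non_blocking=True)
torch.cuda.synchronize()
gib = x.numel() * x.element_size() / 1024**3
print(f"{gib / (time.time() - t0):.1f} GiB/s host -> device")
```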
now im ready to play minecraft
Ready to open google chrome with 10 tabs)))
The Epyc line is kicking the ass of the Xeon Scalable line and people are STILL using Xeon.
"bro the servers are bad im lagging"
The servers in question:
guys thats pure power, nothing else
Hmm, this build needs a bigger acrylic case, awesome setup!
That's progress. In the '90s we would pay this much for a DEC VAX 128MB memory expansion.
Out of curiosity...why two Xeons and not the Threadripper Pro 7995WX?
sweet server build though!
That has so much glue, latency and bottlenecking. Plus, it's overpriced AF. You think the 7980XE/9980XE asking price of US$2K was nuts? AMD's line-up takes the cake. Thanks, Evil Su!
Maybe it has to do with some compatibility issues on AMD's side. Although Intel and AMD CPUs share the same instruction sets for the most part, the software that is going to run on this hardware might be very picky.
For a budget system with last-generation Nvidia Ampere GPUs, even v3 or v4 Haswell or Broadwell Xeons would be enough
This build is meant for machine learning. For ML tasks, use Xeons because of Intel's math library
AMD Threadripper is for workstations
Intel Xeon and AMD Epyc are for servers
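If the Intel-math-library argument matters to you, it's worth checking that your NumPy build is actually linked against MKL rather than OpenBLAS; a plain pip install usually isn't. A quick sketch:
```python
import numpy as np

# Shows which BLAS/LAPACK backend NumPy was compiled against.
# If MKL doesn't appear in the output, the "Xeon + Intel math library"
# advantage isn't actually in play for your NumPy workloads.
np.show_config()
```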
Finally a computer that can run my Simulink model
It sounds like a jet engine while it's running and needs cooling sized for a small office to keep it going. Unless you're at 100% GPU and 100% RAM usage, it won't come anywhere near its total wattage. Compared to a DGX-1, it can do 40% more work while consuming 20% less power on a task that would max out a DGX-1.
That's not enough thermal paste on the GPUs. Anything less than 100% coverage will create hot spots that will kill parts of the chip.
You know PCs better than him?
@@qpakrkdhw7966 I know at least one thing he doesn't.
@@ChristianStout this Korean bro has been running his PC assembly service company for at least 7 years and he is a professional.
If your claim were serious and his clients' servers just broke down, his company would lose its reputation and he'd have to pay out tens of thousands of bucks; he would have had to close the company years ago, yet he is still running on a reputation built over many years.
And let me guess, your biggest achievement is that you built your own small gaming PC, whereas this Korean guy has built hundreds of PCs and servers.
@@qpakrkdhw7966 This thermal paste application will cause the GPU to perform worse and fail within 4 years. His clients' machines don't break because they are liquidated after only 3 years. With a proper amount of thermal paste, they would last 8 years.
Ask an Nvidia engineer, they'll tell you the same thing.
How about time travel on that machine??
Thanks, amazing. It would be nice if you powered on the server. Windows? Linux?
The case cover doesn't look like it'll go back on, your water cooling pipes are standing too tall
Where are your CPU heat sink clips? Or do those water blocks not allow them?
me: *goes into a super expensive tech store*
employee: hello! welcome to our store, how can i help you?
me: i need a mid-tier PC
literally employee that didn't hear "mid-tier":
Everyone talked shit until the snow part of Crysis, then parts started to melt
Did this thing actually work?? You have six components on one thick 360 rad plus normal 120mm fans. I can't believe this system's temperatures under load are better than 60°C
60°C is ridiculously cold, especially for server shit. There are thousands of those on earth running at 80°C+ right now, and they're doing just fine
@@lillee4207 if it's intended for loud operation / a server closet, then it can go air-cooled and save a ton of money / complexity
@@paulwais9219 100%. But don't discredit the thiccboi rad
Finally, the IT industry has come to the point where you can't just do something with a book, a computer and your own head. For LLMs you have to spend a lot of money.
Is this for a 4th tab in Chrome? Or for 20 FPS in Cyberpunk with path tracing? :)
Mom: Why does it cost so much
Me: It's for school
Question, why Xeon and not the Epyc platform? If I remember correctly, Epyc is made for servers, and it's powerful
It doesn't matter (whatever was available/affordable in terms of CPU+mobo). It's a GPU node; the CPUs are only there to provide the PCIe lanes, run the OS and handle the I/O. Everything else is handled by the GPUs at the application level. That is also why he didn't put much RAM in it: it doesn't need it.
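If you want to see that division of labor for yourself, the interconnect matrix shows which CPU each GPU hangs off and whether any NVLink bridges are fitted. A small sketch, assuming the NVIDIA driver (and therefore nvidia-smi) is installed:
```python
import subprocess

# Print the GPU/CPU interconnect matrix. Each A100 shows up under one CPU's
# PCIe root complex; NVLink bridges, if present, appear as NV# entries.
out = subprocess.run(["nvidia-smi", "topo", "-m"],
                     capture_output=True, text=True)
print(out.stdout)
```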
Imagine your wife's face when the 1st powerbill hits the doormat
you can have the best waifu simulator on this thing
I'd love to have that job too. Tinkering with expensive hardware all day.
Impressive build, but isn't the memory kind of slow?
I imagine starting a BBQ party on the exhaust of that thing.
It's not a joke. On LiveJournal, one locally famous user named Oldmann told a story about a night guard at a bank somewhere in Central Asia who cooked his meal on an IBM mainframe's thermal exhaust by switching off the regular cooling system and putting the mainframe into emergency mode.
What the hell IS bro cooking
His CPUs, most likely, with that thermal paste application.
Bro gonna create another dimension
I have a question: with such a huge configuration, what will you use it for?
finally nobody whining like a baby about memory
The lack of redundant power supplies worries me
I can't imagine how satisfying it would be to disassemble it and repaste everything the correct way.
Moore’s law says that in 9 years, this level of power will be in your laptop.
Q: What does this computer do?!
A: yes.
I would be too scared to wear gloves with fibers like this one while installing a CPU because a stray fiber that you don’t see may get caught on a pin on the socket, and pull it, and I really don’t want to do socket repairs. This is why I wear nitrile gloves instead.
OK, I can build my setup. I have the Alphacool A100 Carbon; now I just need an A100 80GB.
Bros getting ready for gta 7
financially never ready
GTA 6 recommended requirements:
Nice pc video bro❤
“Hey bro nice gaming pc, how much did it cost?”
“Ahh just a little over $90,000”
👁️👄👁️
Finally a server where Python can print 1 to 10000000000 in 1 second
A build like that, and not even using redundant PSU? 😮
ur crazy
@@lastlyhi Thanks
All zero maintenance tubing too. Very nice.
Finally a PC that could seamlessly run Plants vs. Zombies..
Doom 1: to launch or not,,,,,?????????
3 120mm fans blocked by the pipe concentrator? To cool all that? Nice, definitely will work, yeah.
Server fans are extremely powerful and so noisy that being near them is harmful to your health.
They can blow through anything.
@@marcusmt4746 but these are not server fans, they are made by Phanteks. They probably spin at ~2500 RPM
Why leave 4 memory channels unused? Is the client buying more DIMMs later?
Any reason it should be mounted in this orientation? Why not upside down, so leaking coolant would not damage the mobo?
"Mom I swear it's just for school work."
No way that doesn't run hot with the cooling solution they chose. I mean, it is probably all within spec, but it would run cooler and draw less power if they had put more into cooling.
finally a PC that can run MS word at 240 FPS👍
Why didn't you put RAM in all the slots? I think you crippled the memory bandwidth quite a bit. 🤷♂️
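Rough numbers on why the empty slots matter: each Xeon Gold 5220 has six DDR4 channels, and 256GB as 8x32GB means only four channels per socket are populated. The figures below are assumptions (DDR4-2666, DIMMs split evenly across both sockets), not measurements from the video:
```python
# Back-of-envelope memory bandwidth, assuming DDR4-2666 and 8 x 32 GB DIMMs
# split across two sockets (4 of 6 channels populated per CPU).
channel_gbps = 2666e6 * 8 / 1e9          # 8 bytes per transfer ~= 21.3 GB/s

print(f"as built (2 x 4 channels): {2 * 4 * channel_gbps:.0f} GB/s")
print(f"fully populated (2 x 6):   {2 * 6 * channel_gbps:.0f} GB/s")
```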
How long would game streamers have to play before you got your money back, including the energy bill?
Finally, something adequate to run Tetris
i bet this rig can finish the f@h workloads my rig does in a week, in like an hour
oh so that is why we have a pc part shortage
More VRAM than RAM in this, lmao
No RAM, VRM, or eth cooling, and the different tube lengths lead to uneven water flow; correct me if I'm wrong
finally a computer that can run Half-Life
If I were building a server like this, I don't know what I would do, but I wouldn't be happy when I got to the cable management. In fact, cable management is really only a thing for personal computers; here I just wouldn't be able to see where each cable goes.
Very nice build, but this watercooling loop seems underpowered
Finally I can run GTA 6 at 480p low with 10 fps
This PC goes into the category of: "NASA supercomputer" 😅
Bro built his own personal supercomputer 😮
I understand nothing about it, but I guess so; if they're useful and help achieve more, then it's good
Man, that had to cost quite a fortune. I bet the amount of dollars spent here is enough to live on for 5 years, maybe 10 if you choose Thailand....
Yes
You know it’s powerful when they use HP instead of watts ☠️☠️
Hi Dad, it's just for my school stuff :D