CPU temps max out around 72°C, which isn't bad at all. GPUs get into the low 80s, but they're made for it and I've already been running them 24/7 for a long time without any issues. Jump to 28:16 to see the final build.
A work of art, that is 👌🏻
Similar build to mine, except I bought an older Dell workstation with a Xeon CPU and P5000 before upgrading the CPU (to E5-1650 v4) and RAM (to 256GB) and adding a pair of A6000s. The CPU and RAM were 2nd hand and cheap, and I got great deals on the A6000s, so it came in around $11,000 and the P5000 is a great spare (needed it recently when my desktop's 3080ti died.) I don't need an insane CPU, since most of the work is done on the GPUs.
ETA - I first built it in Sept '21 with 1 A6000, added the second A6000 in Dec '22
I’m guessing your PCIe is 3rd generation, then. How do you find that, in terms of bottlenecking?
Impressive!! You must be able to train models really fast.
What a beast!!!
Sometimes I think I'd like to get started with AI, but I'm not sure if I'd make it. But definitely gonna enjoy this video.
Druuzil, just one question if you can help please: would DFL utilise dual 4090s, or would it only use one since there's no NVLink? Thanks.
BTW you've been such a great help to the DFL community, kudos.
It'll use the VRAM of both, but not the processing power. So you'll have 48GB of VRAM, allowing for higher res/higher batch, but it will run at the speed of a single card. If you're going to try something like that, I'd suggest just getting a single 48GB card (or a 32GB card like an RTX 5000 Ada). You can find stuff like that on eBay for somewhat reasonable prices. They use far less power than even one 4090 (a 5000 Ada uses 200W vs ~450W for a single 4090) and are maybe 15% slower, but they have more VRAM so you can do higher res models. The RTX 5880 and RTX 6000 Ada are both quite expensive, but faster than a 4090 and have 48GB.
Or just wait and get a 5090 in a few months, they're supposedly going to have 32GB GDDR7, and like 26k CUDA cores. Very powerful.
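As a rough back-of-the-envelope comparison of those figures (the power draws and the ~15% speed gap above are ballpark assumptions, not benchmarks), the efficiency argument looks something like this:

```python
# Rough perf-per-watt comparison using the ballpark figures quoted above.
# These are assumed numbers for illustration, not measured benchmarks.
cards = {
    "RTX 4090":     {"watts": 450, "relative_speed": 1.00, "vram_gb": 24},
    "RTX 5000 Ada": {"watts": 200, "relative_speed": 0.85, "vram_gb": 32},  # ~15% slower
}

for name, c in cards.items():
    perf_per_watt = c["relative_speed"] / c["watts"]
    print(f"{name:13s} {c['vram_gb']}GB  {c['watts']}W  "
          f"{perf_per_watt * 1000:.1f} relative speed per kW")
```

By that measure the 5000 Ada comes out roughly twice as efficient per watt, on top of the extra VRAM.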
Looks great. I've been thinking about a build using the AMD W7700 graphics card and a 24-core Threadripper, running Linux.
If you intend to use yours for training Deepfakes/any kind of Machine Learning, I'd not go with an AMD GPU.
@@DruuzilTechGames Mostly Blender and some small video projects. I'm kind of an AMD fan, so unless it's a massive hit in performance, like ridiculously so, I'll probably stick with that approach.
The W7700 is $1k, so it's a viable budget for me.
I had a W6800 Pro for a while and it wasn't bad, just not amazing for Deepfacelab unfortunately. 32GB was a pretty huge amount of VRAM at the time though so that helped.
Out of curiosity, what is the max resolution and batch size before OOM? It would also be cool to see what the iteration speed would be at something like batch 4 in the 200-300 resolution range we can typically run 😂
Well it's honestly not very fast. The GPUs are primarily where the training speed comes from in DF Lab, and the A6000s are slower than a 4090. The value comes from the large amount of VRAM, which lets me crank up the resolution and/or the batch size.
So training a 320 res model at batch 4 is actually a bit faster on a 4090 than it is on 2 of these. But on the other hand, I can train stuff on these I can't even attempt with a 4090, like 640 res models, or high dims 512 SAEHD, or 448 AMP with stupid high dims. But the training speed is pretty slow when you do stuff like that.
The DisplayPort is actually a display input for Thunderbolt. That motherboard has an Intel Thunderbolt chip built in, so if you want to carry video over Thunderbolt/USB3 you connect a DP output on the GPU to the DP input on the motherboard.
Interesting. Thanks!
@@DruuzilTechGames You're welcome.
Also, for optimal cooling use the Arctic Freezer 4U or the be quiet! model for TR PRO. The TR PRO versions have the right fan orientation, cost less, and are a far better solution for the new TR or TR PRO than the Noctua.
I'm pretty happy with the temps I'm getting from the Noctua (low 70's at most), and I already have it. If I have to replace it though I'll look at the BeQuiet model.
@@DruuzilTechGames Sorry, the be quiet! model, my bad. I rechecked, and they oriented it the wrong way as well.
I wouldn't really call it a "wrong" orientation. It just blows the hot air up, which gets quickly exhausted out of the case. I don't think it matters if it blows up or to the back of the case.
Epic!!!
👍👍👍👍
Awesome, thanks for sharing!
I just got my new 4070 super for my DeepLiveLab. It's gonna take weeks for me to train something...
Depends. You could probably do a 320 model in a week or so. I'm not certain how fast the new Super cards are
@@DruuzilTechGames It's 12GB, almost the same as the 4070 or 4070 Ti.
Yeah I thought they had 16GB on the new cards, I guess they're still 12GB. So 320 is a bit high unless you lower the dims a bit.
AMAZING setup!! Do you use both A6000s to train the same model at the same time? As far as I knew, multi-GPU wasn't supported by DeepFaceLab; is that now possible?
Multi-GPU has always been supported, but the cards have to be linked to get any speed benefit (i.e. linked with an SLI or NVLink bridge). Nvidia has dropped NVLink support on current-gen cards, which is why I still keep the A6000s (which are last gen).
@@DruuzilTechGames But if I have 2x 3090s, would I be able to use the VRAM as one 48GB card?
Yes, if you can NVLink the cards and have sufficient power etc. The motherboard must support it.
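If you want to sanity-check that a pair of cards is actually bridged before you start training, something like this works (a minimal sketch, assuming an NVIDIA driver with nvidia-smi on the PATH):

```python
# Minimal sketch: list the installed GPUs and their VRAM, then dump the
# per-link NVLink status so you can confirm the bridge is active.
# Assumes the NVIDIA driver is installed and nvidia-smi is on the PATH.
import subprocess

gpus = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv"],
    capture_output=True, text=True)
print(gpus.stdout)

nvlink = subprocess.run(
    ["nvidia-smi", "nvlink", "--status"],
    capture_output=True, text=True)
print(nvlink.stdout)  # inactive or missing links will show up here
```

If the links show as inactive, check that the bridge is fully seated and that the slots you're using actually match the bridge spacing.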
Monster
Such an extraordinary setup ❤. But I admit it's beyond my pay grade, haha. Sir, I'm trying to do a PC setup for gaming and partly for AI. I was thinking of the 4090 Suprim Liquid GPU, and noticed you tried it about a year ago. Were you happy with that purchase? Is it a sustainable card? What motherboard and means of installation do you recommend for it to last? Thanks in advance for your experienced input. 😊
I'm still using that 4090 in one of my Intel systems.
The motherboard doesn't really matter so long as it's modern (ie in the last generation or 2). idk what you mean by "means of installation" though.
@@DruuzilTechGames I heard many recommend vertical mounting because of the weight. But I don't know what new implications this introduces.
If you're going to vertically mount the card it requires a specific case that allows for that. If you're concerned about the weight of the card, you can buy a GPU brace for like $10-15 that'll help with that.
@@DruuzilTechGames what topology arrangement did you go for in your 4090 card?
Just normal installation for both of them.
Hello. Forgive me for commenting using translation software.
I saw your comment in the comments section of this video regarding temperature and was interested in more specific information.
In the video it looked like you were opening the side panel of the computer and using a blower to send airflow to the computer. Do you always use it that way?
Also, can you give us specific gpu temperature values?
Sorry for the long post. I was interested in your comment because the configuration of the two gpu's in my PC are similar, although the parts are different.
I usually leave the side panel off and have a sizable fan blowing on it, yes. Not really required, but the GPUs in this system are intended for an HVAC/datacenter environment, so I'm trying to emulate that.
I don't have the same GPUs anymore; I now have RTX 5000 Ada edition cards in the system. Similar temps though, in the mid 70s with the fans at ~75%. The CPU also hangs around the 70s.
Thank you for your reply. I am glad you responded so quickly.
Glad to see you!! I have one question. You did teeth training; is the tongue also possible in DeepFaceLive? Because the tongue gets stuck when we try to stick it out and show it in live.
I don't believe it's possible to get the tongue to work outside of the mouth. It just gets blended into the face.
It will be close to 20k with tax
I'm speccing out and have most of the components for a very similar build on the way. The only thing I'm missing at this point is the NVLink. Which NVLink bridge are you running? Is it the 3-slot or 4-slot one? I haven't been able to find a definitive answer on how the PCIe slots are spaced on the Gigabyte board.
3 slot for the Gigabyte board. I have a 4 slot bridge also in case lol.
@@DruuzilTechGames Thanks!
What are your thoughts on AIOs for TR CPUs? I really want to get the Asus Ryujin III 360, but it doesn't state that it supports sTR5 despite the previous gen stating it, and this 3rd gen has a 32% larger cold plate... kinda annoying there aren't many AIOs that fully cover a TR's IHS.
What TDP can the Ryujin handle? The new TR chips use like 350w. Also the plate on the AIO is usually round whereas the chip is huge and slightly rectangular, so you aren't getting the best/full coverage. I would use a cooler made for the chip personally. Also depending on if you're going to leave it running a lot, I prefer air coolers these days simply because they never fail (other than maybe one of the fans which can be cheaply replaced). Don't want your pump dying while you're on vacation or something and frying out your system.
@DruuzilTechGames It wouldn't be running 24/7, so that's the good thing. The cold plate is square, but I'm not sure how much it would cover. I don't plan on overclocking it, and it's a newer one designed for the latest sockets minus TR, so I would think it would be sufficient, considering it can handle a 14900KS overclocked and also considering AMD's page mentions a Kraken 360 being compatible despite its cold plate being round and smaller. Super odd tho...
Ah yeah, if it can do the 14900KS it should be fine I think, assuming it covers enough of the Threadripper.
That is beastly. I want to get into deepfakes but I have a 4090 and the latest build goes up to 3090. Think my card would work?
Yes, 4090s work with the 3000 build. I have 2 systems with 4090s; they work fine with DF Lab.
Fantastic! Thanks for the answer @@DruuzilTechGames. Your channel is really inspiring, and thanks for the tutorials!
Does multi gpu shared memory work out of the box on deepfacelab?
Yes.
@@DruuzilTechGames Thanks 👍
Hi, can a couple of RTX A5000s (24GB each) handle 512 res with high dims at batch size 16?
Thank you
Define "high dims". 512 dims you could possibly do it, 640 AE dims no. Maybe batch 8-12.
@@DruuzilTechGames archi: df-udt
ae_dims: 420
e_dims: 70
d_dims: 70
d_mask_dims: 22
Pretty good chance. I don't know the VRAM usages of various AE dims but yeah I'd say there's a good likelihood it'd work.
@@DruuzilTechGames Thank you!!
Have you tested running other AI stuff such as an LLM or Stable Diffusion?
Peace!
SD is pretty much on the GPU (idk if it can support more than 1) so I don't think that's changed. I've done some messing around with it on my older 3960x platform.
What do you pay for 1 kWh? I had to pay €1620 for energy; 1 kWh costs 31.30 euro cents here in Germany.
My 4090 is taking a break for a few weeks 😂
idk I guess I should look at things like that more closely lol
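For anyone curious, a rough monthly estimate using the numbers from this thread (the ~450W 4090 draw mentioned further up and the €0.313/kWh German rate; both assumed for illustration):

```python
# Back-of-the-envelope cost of running one card 24/7, using assumed figures:
# ~450W sustained draw (the 4090 number quoted above) and EUR 0.313 per kWh.
power_draw_watts = 450
price_per_kwh_eur = 0.313
hours_per_month = 24 * 30

kwh_per_month = power_draw_watts / 1000 * hours_per_month   # 324 kWh
cost_per_month = kwh_per_month * price_per_kwh_eur          # roughly EUR 101

print(f"{kwh_per_month:.0f} kWh/month -> roughly EUR {cost_per_month:.0f} per month per card")
```

So at German prices a single 4090 training around the clock works out to on the order of €100 a month, which makes giving it a break understandable.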
BUT! Can it run Crysis though, sir?
Probably.
Can you do an update comparing 2x 4090 vs the 2x A6000?
The 4090 doesn't support NVLink.
@DruuzilTechGames Performance-wise, I meant how much faster the A6000s are than the 4090s, not just in deepfakes but in all the AI you do.
An A6000 is slower than a 4090 by quite a bit. It just has more memory.
Give me it 🥺
nice, what do you do for work? drug dealer?
Only in my downtime.
You said earlier that you work for a company. Can I get a spot working at the company? 😁😁
I have a couple jobs now. :D
Is that crap used to mine crapcoins? Or is it the new machine-learning hype, overblown by the media, that will stick around as a cool tool but, as always, won't be the solution to real human-like AI or to solving the world's economic and other problems? YEAH, IT'S A DOUBLE DARE!
What's the matter buddy, lose all your money on Doge? 😂
@cesaru3619 First, learn English. Secondly, I use it for making Deepfakes which was stated in the video a few times.