Thank you. I read bad advice and have a cable on the way that would have shorted out my Tesla M40 and R720. Now I will avoid this fate.
Yeah, it is kind of a pain they chose to use EPS 12V on these. I need to make more of my own custom cables, as mine are keyed to make it harder to use them wrong.
Thanks for this video, it explained what was happening to my R720 when I connected my Tesla P40 to my riser port using a similar aftermarket cable (9H6FV). Thankfully nothing burnt, and I had an 8-pin Corsair CPU cable that I tried; it worked and my server booted up.
I then checked the manual for the Tesla P40 and it does say to use a single CPU 8-pin cable - so I guess I was lucky to have one around.
Yeah it is kinda tricky when the aftermarket cables aren't keyed correctly. Pretty cool you had a compatible one on hand.
Thanks! Ran into the same exact issue! The UPS kicked the power off nearly instantly, so I hope that it saved the card.
@@QuakeDragon it didn't hurt my card thankfully. Hopefully yours will be fine as well.
Thank you for the good video and for sharing your experience with us - I've ordered cables that are designed to work with a Tesla GPU and an R720.
@@alzeNL glad it was helpful. 🙂
Thank you so much for this video, and you are correct that nobody mentions you need a separate EPS connector and not a standard GPU combiner cable.
I have an HP Z840; it came stock with the 1125W power supply and three six-pin PCIe power cables. Not knowing any better, I plugged in the two six-pin cables, and the 8-pin side wouldn't fit in the Tesla P40 card I picked up. This was due to the plastic bridge between pins 1 and 2. My eyes aren't that good, so I assumed it was a molding defect in the cheap Amazon-sourced cable and just carefully cut out the plastic bridge. As soon as I plugged the card in, I got the 4 beeps from the power supply. I found out from your video that I was shorting 12V to ground.
So it took about an hour to re-pin the generic power cable, after about 2 hours of searching the web before I stumbled across your video. Luckily my P40 is working now. I'll stress test it later and hopefully find it functions well. I'm grateful that I didn't damage any hardware on my server.
Thank you again!
Yeah, it's weird that the cards don't use normal PCIe power. In theory the short-protection circuits should protect your card. I never noticed any issues with my hardware after using the wrong cable.
I had the same problem; I ended up using cables from an old Corsair ATX PSU. If you get the standard Type 4 to CPU power cable that comes with this unit, the lead just needs a mod to remove the extra positive.....(snip)
That is interesting you just had to snip the one 12V wire. I had to completely rewire the cable in this video. I am starting to make my own out of cables cut off old power supplies though.
@@computersales Instead of snipping the wire that goes to ground, it could actually be connected to the middle positive pin, which would give you extra current capacity down the cable. Actually, I will do this as well.
@@sonicalstudios OK, I did some more digging since I was slightly confused. It is weird that on the Type 4 Corsair modular cables the pinout is reversed on the PSU side. Looks like you would have to cut pin 4 specifically on the PSU side of the cable. Does the missing wire cause the card to misbehave or act weird, though? I'm not sure how the connector is wired electrically on the Tesla PCBs. Otherwise, yeah, splicing would be the prudent thing to do.
Link to pinouts of the Type 4 cables that I looked at:
pc-mods.com/blogs/psu-pinout-repository/corsair-psu-type-4-cables-pinout
@@computersales No, it does not, but it could mean that all of the amperage is put on the 3 positive wires. If you look, the K80 uses the CPU power connector, which has four, so I'm guessing it would be better to cut that extra connection that goes to ground and add it to one of the positives, so the cable can handle the current correctly.
@@computersales Also, if you use a multimeter you can see that the Tesla has a row of positive pins and a row of negative pins, all connected together, so no matter what, all you need to do is get 3 to 4 positive connections to handle the current, plus the 4 negatives.
Thanks for your reply. I forgot to mention that I have a Tesla K20Xm, and I think that's on the "list" of compatible GPUs for an R720, so I'm hoping I won't need to go to 1,100W PSUs... if it doesn't work, then I'll just run the onboard Matrox.
I have seen a YT clip of an R720 with an RTX 2060, and it clearly shows the 750W PSUs; he's also running 2 monitors, one off the onboard graphics and one off the 2060...
Can the Lifecycle Controller be disabled?
This is all new to me, I only got the R720 earlier this week, got to update the BIOS, etc., etc... I just want to run Windows on an SSD.
I believe the K20Xm runs off a PCI Express 6-pin and 8-pin connector.
You can basically run any video card you want in the R720 as long as it fits physically and has adequate cooling. I haven't tested running the R720 with 750W power supplies and video cards.
The Lifecycle Controller can't be disabled. If it stops working, the server won't work either.
It is pretty straightforward to get Windows on an R720, so that shouldn't be too big of a worry. I was running Windows 10 on one to test some things.
Yes, the K20Xm does use that connection, which should run off the riser plug, correct?
And I was thinking of running an SSD off that SATA port behind the fans.
@@geoffhowse3591 Yeah the riser plug can provide PCI Express power.
You can run a SATA drive off the optical drive bay connection but you will need an adapter for the power connector. They do make adapter bays that will convert a laptop optical drive into a two and a half inch SATA drive carrier.
@@geoffhowse3591 So I decided to run a test. I was able to get an R720 with dual 750W PSUs to boot to Windows 10 with a Quadro 4000 installed. So the 1100W power supply requirement may be overstated? Got the demo video going up tomorrow.
Hello, I'm really glad I've found your channel... I just got an R720 and am trying to get some GPUs to work, with no luck yet. One thing I am reading a LOT is that you need to be running the 1,100W PSUs; what are you running? I've only got the 750s. As for the cables, I think a cable off an Antec HCG850 seems to be wired OK for the riser GPU plug, for both a K80 and K20Xm, etc. But yeah, I'll be checking out your clips, that's for sure. Cheers
Yes, the 1100W power supplies are required. I've never tried the 750W ones. They might work, but my assumption is your Lifecycle Controller will stop it.
You also might want to double-check the pinout of that cable carefully before using it. I didn't do a ton of research, but it looks like the EPS 12V cable off that power supply is wired for EPS 12V on both ends. That won't work with an R720.
Dell does say an 1100W PSU is required to use a GPU on the R720, but that is just if you want a system Dell will support. I have been running my 250W Tesla M40 off my single 750W for a week now, 24/7. Average wattage when fully using the GPU is 550W, with a maximum recorded 620W. I am going to upgrade to 1100W only if I put another GPU in there.
I never expected this video to do as well as it has so far. As time progresses I have gotten more knowledgeable on the topic. Due to the interest I have been making cables for sale using recycled parts. If I get enough interest I may make new ones from scratch.
Custom R720 cables I have for sale on eBay (If in stock)
www.ebay.com/sch/m.html?_ssn=professionalrecycling402&_osacat=0&_odkw=&_from=R40&_trksid=m570.l1313&_nkw=r720+tesla&_sacat=0
Instructions on how to set up an NVIDIA Tesla card in ESXi:
ua-cam.com/video/Mydx28tQ7Dg/v-deo.html
Thanks for the video and the extended explanation of the power issue. Did you have any issues with the fans staying at a higher-than-normal RPM even after the system spools down from initial startup?
After a bunch of reading on another issue, I found a racadm command that is supposed to make the system ignore thermal settings for non-Dell equipment installed in it. I was initially having the same fan problem while trying to install an add-on card for NVMe drives. The racadm command solved the issue for that, but it seems to be happening again, somewhat less noisy, now that I've installed the NVIDIA K80 in my R730.
Any assistance would be appreciated. Thanks!
I honestly haven't had a chance to mess with putting Tesla cards in an R730. In my R720 the fans run like normal. One nice thing about the K80 and the R720 is they somehow were able to communicate temps and manage fan speed.
Although it's unlikely you're having any hardware issues, it might not be a bad idea to run the diagnostics in the Lifecycle Controller. I'm not sure if there is any significant difference between the R720 and R730 from a Lifecycle standpoint.
Hey, is there anything special you did on the software side?
My K80 produced a Code 12...
Your question is too open-ended for me to even speculate. You would have to give me more details of your setup. Although my first guess would be you're trying to pass both GPUs to one Windows operating system, which to the best of my knowledge isn't possible.
@@computersales I tried both Proxmox (using a guide by Craft Computing) with both GPUs passed through, which produced a Code 10, so I tried installing Win10 on bare metal. That resulted in a Code 12.
@@tomhackz250 Does it give that error if you pass the GPUs through to two separate VMs?
@@computersales I've just tried allocating one half of the K80 to a VM. Is it required to allocate both?
@@tomhackz250 You don't have to use both, but you can only allocate one GPU per Windows VM.
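For the Proxmox route mentioned in this thread, a minimal sketch of passing just one of the K80's two GPUs to a VM might look like the fragment below. The VM ID and PCI addresses are assumptions for illustration; the K80 enumerates as two separate PCI devices, so check yours with `lspci | grep -i nvidia` first.

```
# /etc/pve/qemu-server/101.conf   (VM ID 101 is an example)
# The K80's two GK210 GPUs show up as two separate PCI devices,
# e.g. 04:00.0 and 05:00.0 in this hypothetical layout.
hostpci0: 04:00.0
```

A second VM would then get the other device (e.g. `hostpci0: 05:00.0`). Per the reply above, trying to hand both GPUs to a single Windows guest is what tends to produce the Code 10/12 errors.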
Hey, what PSUs are required? I found a 2x 750W one cheap, but the 1100W PSUs are not cheap where I live.
Also, this one has 2x E5-2690s, which are like 130W each, which concerns me.
It should be fine to run 750W PSUs as long as you aren't running them in redundant mode. Just some rough numbers you'd be looking at:
135W CPU x2 > 270W
Fans at 100% > 100W
250W Tesla x2 > 500W
3.5" 7.2K RPM HDD x8 > 48-120W
DDR3 memory x24 > 48-72W
Everything else > 75W?
Total: ~1137W
Honestly, to run the Teslas at full 250W TDP you gotta be really pushing them with something abnormal like Furmark, mining, or Folding@home. When I intentionally loaded my server down with tasks, I peaked at 697W. Although "technically" not recommended, you should be fine if you are being reasonable.
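The rough math above, taking the high end of each range, can be sketched like this (a back-of-the-envelope sum of the estimates from this comment, not measured values):

```python
# Worst-case power budget for a dual-Tesla R720, using the high-end
# estimates listed in the comment above (not measured values).
budget_watts = {
    "CPUs (2 x 135W)": 2 * 135,            # 270W
    "fans at 100%": 100,
    "Teslas (2 x 250W TDP)": 2 * 250,      # 500W
    "HDDs (8 x 3.5in 7.2K, high end)": 120,
    "DDR3 (24 DIMMs, high end)": 72,
    "everything else": 75,
}
total = sum(budget_watts.values())
print(f"worst case: ~{total}W")  # ~1137W, vs. the 697W actually observed
```

The gap between the ~1137W worst case and the 697W observed peak is why a single 750W supply can get away with it in practice.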
I did the same thing, bud... just used a meter and made my own... it powered up fine and the card shows up in Device Manager, but it won't install alongside my GTX 960. So I'm gonna try it with the motherboard's onboard (Intel HD) graphics only + the K80, and if that doesn't work, I'm gonna try it (with a PCIe x16 extension cable) on my R620 server... fingers crossed... I also took the plastic cover off, removed both heatsinks, and installed fans on each chip (because I've seen a ton of videos about the K80 running hot). But thanks for the video, cuz I was lookin' for someone that did the same thing I did to rule that out. Stay guessin' brov, it's how I live every day... just remember: all machines are smoke machines if you use 'em wrong... :)
@@JayarBass It may be a compatibility issue with your motherboard. I forget how they react in Windows if the motherboard isn't supported. I think it will show up but throw an error in Device Manager. Hopefully you get something to work, though.
Hi again, I really appreciate your replies & info :) This question is a little off topic, but here we go... I found an SSD that has Windows 10 installed, so I plugged that in and got it to boot into Windows. However, there is NO USB control for the mouse & keyboard, so I have no way to navigate to find what the problem is. USB works just fine when navigating around the Dell menus & settings pages. I'm going to try a fresh setup of "something" Windows and hope the USB will be OK after that. I have also done a BIOS & iDRAC update, so I'm slowly learning. Cheers.
It is possible that USB control is disabled for everything but BIOS use. I am not sure if there is a setting for that off the top of my head. You could try resetting the BIOS to factory defaults. If that doesn't work, maybe see if you have a PCI Express USB controller card you could install.
Can anyone help me? I need to get a power cable and don't know what to buy.
I don't have any made up at the moment. Your best bet is to buy the Dell GPU power cable and then the dual 8-pin to EPS 12V adapter.
Trying to add a P100 in an R720xd, but it's never at full power because the temp climbs up to 80°C, even though I've turned the server fans to maximum 😂
Interesting. There could be other factors at play, such as your workload or ambient room temperature. Technically that is an unsupported config, but honestly I'm not sure why it wouldn't work. Can you feel the airflow going through your P100?
@@computersales Well, it's working, but when I use some app like Furmark, it works for the first 60s, then the heat dissipation can't keep up and FPS goes down. I think it is an air duct problem.
@@jyl0328 Furmark isn't really a good way to test a GPU, IMO. Furmark is more of a tool for when you want to see how many punches to the face you can take before falling over. I would recommend trying a more realistic synthetic load like the Heaven benchmark. I think even crypto mining is more realistic than Furmark.
Have you found a cable that works out of the box yet? I am trying to get a K80 or M40 24GB to work in an R730. I bought a cable from Amazon that claims to work, but when I try to power on the server, the power supplies just blink orange, which leads me to think it is not wired right. I got a "COMeap CPU 8 Pin Male to Dual 8 Pin(6+2) Male PCIe Power Adapter Cable for Dell PowerEdge R720 720XD R730 and NVIDIA Tesla GPU J30DG 15-inch(38cm)".
Also, I forgot to ask if you changed your PSU configuration to non-redundant, and whether you use a single or dual CPU configuration?
@@ThomasLindsay So that cable technically would work on some Tesla cards. Your cable is most likely shorting 12V to ground. Definitely look at the pinout of the wiring at 2:30; the left side shows the Tesla input. The cable side should have pins 5-8 yellow (+12V) and pins 1-4 black (ground). Any other config won't work. I can't tell with certainty, but it looks like yours is wired for a standard PCI Express 8-pin. If you refer to page 8 of the manual linked in the description, it shows the wiring diagram and the adapter you can buy for the K80.
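Before powering anything on, the expected card-side pinout from that description can be written down and compared against multimeter continuity readings. A hypothetical sketch (the pin numbering follows the reply above: pins 1-4 ground, pins 5-8 +12V; the PCIe example pinout is illustrative, not authoritative):

```python
# Expected pinout on the Tesla (EPS 12V style) end of the cable,
# per the diagram described above: pins 1-4 ground, pins 5-8 +12V.
TESLA_EPS_PINOUT = {1: "GND", 2: "GND", 3: "GND", 4: "GND",
                    5: "+12V", 6: "+12V", 7: "+12V", 8: "+12V"}

def check_cable(measured):
    """Compare a multimeter reading {pin: rail} against the expected
    Tesla-side pinout; return the list of miswired pins."""
    return [pin for pin, rail in TESLA_EPS_PINOUT.items()
            if measured.get(pin) != rail]

# A standard PCIe 8-pin is roughly the mirror image: +12V where the
# Tesla expects ground (exact numbering varies by diagram; illustrative).
pcie_8pin = {1: "+12V", 2: "+12V", 3: "+12V", 4: "GND",
             5: "GND", 6: "GND", 7: "GND", 8: "GND"}
print(check_cable(pcie_8pin))  # -> [1, 2, 3, 5, 6, 7, 8]
```

Which is exactly why a PCIe-wired cable shorts 12V to ground on these cards: nearly every hot pin lands on a ground and vice versa.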
@@computersales I did get that adapter with the M40, and if I plug the two ends of the cable I bought into that adapter, it looks like the wiring lines up, so I can plug the single end of the adapter into the card and the single end of that aftermarket cable into the riser. :) Thanks!!
@@ThomasLindsay Hey, so I am not really at home in electrical engineering and whatnot. I have ordered a Tesla K80 and have a Dell R720; any recommendations on what adapter to buy? (I assume a regular 8-pin CPU power cable from a modular PSU won't do the job.)
@@mr.damien826 I bought a "COMeap CPU 8 Pin Male to Dual 8 Pin(6+2) Male PCIe Power Adapter Cable for Dell PowerEdge R720 720XD R730 and NVIDIA Tesla GPU J30DG 15-inch(38cm)" from Amazon, and the card I ended up using had a converter included, as seen at the 5:00 minute mark in this video (and the document linked in the description). If your K80 does not have the converter included (say, if you bought it used), then you can buy a converter from Amazon like the "COMeap NVIDIA Graphics Card Power Cable 030-0571-000 CPU 8 Pin Male to Dual PCIe 8 Pin Female Adapter for Tesla K80/M40/M60/P40/P100 4-inch(10cm)". The first cable splits the riser power into two connections for standard GPUs, and the converter cable recombines the 2 GPU connections into the single one needed on the Tesla K80 card, if that makes sense. I was able to get mine up and running that way thanks to the info in this video! Good luck!
Hi, do you think the NVIDIA GRID K1 card will also work great with the R720XD?
To the best of my knowledge, most of the NVIDIA Tesla and GRID compute cards should have the same physical dimensions. Depending on your config, it may have fewer PCI Express slots available, though.
Hey, I know you probably won't see this, and it's a long shot, but what temperatures did you manage under full load? Were you using low-profile CPU heatsinks by any chance? I replaced the thermal paste on mine, and I'm still struggling to keep it cool.
I'm running the standard full-height CPU heatsinks. I don't have good data since I didn't pay attention, but from what I can find, it looks like it was running around 70°C under full load.
ua-cam.com/video/vJKAqC2PReQ/v-deo.html
You can potentially control your server's fan speeds if needed. There are some fancy scripts people have written; otherwise, just force a constant fan speed.
www.dell.com/community/Systems-Management-General/Dell-PowerEdge-fan-speed-change-fanspeed-offset/td-p/5187784
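As a sketch of the constant-fan-speed approach: the raw `0x30 0x30` IPMI opcodes below are the ones commonly reported for Dell PowerEdge iDRACs, not officially documented, so verify them for your iDRAC generation before running anything. This small helper just builds the ipmitool command strings for a given duty cycle:

```python
def fan_commands(percent):
    """Build the commonly reported (undocumented) Dell PowerEdge ipmitool
    commands that switch to manual fan control and pin all fans at
    `percent` duty cycle. Verify the opcodes for your iDRAC first."""
    if not 0 <= percent <= 100:
        raise ValueError("duty cycle must be 0-100")
    return [
        "ipmitool raw 0x30 0x30 0x01 0x00",                   # manual fan control
        f"ipmitool raw 0x30 0x30 0x02 0xff 0x{percent:02x}",  # all fans -> percent
    ]

for cmd in fan_commands(30):  # e.g. pin the fans at 30%
    print(cmd)
```

Running the first command hands control back to you (sending `0x01 0x01` instead is reported to restore automatic control), so a script can raise the duty cycle only when the GPU gets hot.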
@@computersales My issue is I'm running mine at 100% fan speed and I've replaced the thermal compound, but I'm still getting about 80°C with a full extended stress test. Trying to get it significantly lower.
@@corbinxtitus These are datacenter cards and should be able to handle some heat. My Tesla M40s run at 40°C idle. If it is still running hot after replacing the thermal compound, you may have disturbed something or made a mistake. Also, synthetic loads aren't always an accurate way to test thermal performance, mostly because they don't replicate real-world scenarios.
Hi friend, will the Tesla K80 also work great with the R720xd?
As far as I know, the R720xd should be the same internally. Depending on your config, it may have fewer PCI Express slots available, though.
@@computersales Are you using dual 750W PSUs on your server to run the K80?
@@jacksonpham2974 My apologies I didn't see your reply sooner. I'm using dual 1100W supplies. It may be possible to use 750W supplies but I haven't verified personally.