@lucian6172 Yes, the pins do have a different shape, but nothing is stopping you from pushing a little bit harder to get it in there. I've tried to do it, and it goes in with some work. Unfortunate. The spacing for the connector is exactly the same.
@lucian6172 No, I didn't power it on. I just checked if it was possible, and it definitely is. And it wouldn't fry it. It would short directly to ground on the power pins, and the power supply would shut off. No harm can be done. But you still shouldn't do it. That extra 6 pin on the rx 580 is for if you wanted to overclock. 75 watts come from the pcie slot, even an 8 pin and a slot would give you 225 watts of available power. 8+6+slot would be 300 watts of available power.
@lucian6172 the gpu doesn't use all of the tdp at all times. Overclocking will get you better performance at the same tdp limit. If you want to overclock past the tdp you need to raise the tdp limit which is only available on cards with extra power connector headroom. You don't seem to know how overclocking works.
This card with a modded bios and proper cooling is basically an 980 ti, and with an output of some sort, So I bought 3 of them! +GT 710 for the output because they both use the same Kepler/Kepler 2.0 architecture
DUDE, WHAT A VIDEO! Finally, teslas are the least documented cards on the internet, and after seeing this video, I will say that I'm definitely going to keep an eye on the tesla k40, mainly because my new workstation is an actual rackmount server that I have just placed in my room (it's a bit noisy, but 12 core and 24gb of ram for 230€ in a shitty used market is an absurd deal, and for a complete amateur is pretty good imo), and I also know that these cards are fantastic for vms (that are a think that I'm definitely going to experiment with in the future), great video, keep up the good content! (sorry for my broken english, I'm from Italy)
@@linux3584 If you are talking to me: I bought an HP Proliant DL380 G6, because it's the most flexible machine for my needs (a workstation able to do video editing and other interesting workflows like VMs with online services and stuff like that), it has 12 cores (the cpus are two xeon x5650) and 24Gb of ECC ddr3 clocked at 1333mhz (not the fastest RAM but it's still very solid for what I want to do), at the moment due to the fact that I'm very buisy with work I haven't still bought a GPU, but I'm a bit limited by Power consumption and space due to the case being designed for servers. The fact of having a server is something really personal, and if you want to go that route (which is really cheap and flexible) you have to take in consideration all the disadvantages (noise and power consumption), for example my server make a noise at 54-56db constantly, I can live with that easily, becuase I love my machine, but many of my friends told me that they couldn't stand it for extended periods of times. I would also add that if you live in a hot place you are gonna have a bad time with a machine like mine. As far as the gpu is concerned I reached the conclusion that probably going for a tesla is not the best of ideas, but just because the passthrough to VMs is quite limited by licences and shit like that (damn Nvidia), but if you just have to do video editing like it's shown in this video it would be a very nice card. Sorry, it's a bit of a long comment but I feel like it's necessary, because buying strange hardware is always a risk, so I wanted to explain my situation as best as I could.
@@WillCarterTech lol ahahahaha...so hilarious I laughed and was rolling on the floor...you are way so talented in narration than troubleshooting the card with cardboard...
Hey this may be a little old now, but I would reccomend selling the K80 and buying a Tesla M40 24GB, as it's on the Maxwell Architecture (Which does support DX 12, Cuda 5.2, and other modern technologies, furthermore it still gets driver updates from Nvidia)
Heck of a video! Quadro would be a good option for a display GPU. Those are typically spec’d. out for machines that run CAD and engineering modeling. They generally possess less “grunt” and lower clock speeds than their GTX/RTX counterparts, but often provide stability. They’re geared more for accuracy in linear rendering, etc. as opposed to brute force.
Yeah, there are plenty of applications that could use the tesla but alot of them are dropping support for the Kepler architecture this year; such as solid works.
They need to support the architecture. If the application doesn’t support the processor’s architecture, it will just act like the GPU isnt plugged into the system.
@@WillCarterTech Oh wow so does that mean games probably won't be able to use the Tesla GPU then? So if nothing specifically supports the architecture anymore then these Tesla GPUs are effectively bricked?
They are e-waste at this point. You would need to be willing to work with an older version of softwares. Like right now, I’m using Davinci Resolve 16. But topaz labs doesnt even have a older version that supports the kepler architecture. So i still can’t use topaz labs if i wanted to. unless i do everything through the cpu. i rendered a 20 second clip in topaz labs video enhance ai using just the CPU. It took all night. Its just not worth it at that point. these Tesla gpus weren’t ever good at Gaming even when they were supported.
Yeah, entirely depends on the use case.. For videography / encoding etc, it's a straight up no. But for rendering complex scenes in 3D software, where VRAM is king, it might be a god send for budget home workstations. The renders might be slow and the performance laggy, but it will get the job done. For me the card is a bargain just for the memory alone.
great video dude. Thanks for taking us along for the ride. It's a road I thought I wanted to take, but now may reconsider a bit firstly :D :D Thanks again :)
3/27/21 Update: Topaz Labs software does not support the k80. 2/9/21 Update: The k80 is built on cuda 3.2 architecture. DRS17 effects such as magic mask cannot run on anything before cuda 7. To cut bloat and keep video editors from being confused as to why their GPU works with some FX but not others, they just cut support entirely. Also, Solidworks 2021 has a similar story. That is why we won’t see the k80 supported by developers in the future. This is truly a bandaid in my system. As soon as I can purchase a 3090, I will replace the k80. 2/8/21 Update: Installed DRS17 beta 8. Still no support for the k80. 2/5/21: I’m currently Rendering a super scaled 1080p video up to 2160p in Davinci resolve studio. Both k80 gpus are pegged at 100% and have been for the past few hours. The card’s heat sink has long since reached its thermal capacity and doesn’t have enough air flow to keep it from thermal throttling the card. We are currently sitting at 800Mhz clock and 91 degrees on the rear GPU and 86 degrees on the front GPU. Render time remaining is 6 hours and counting. I think this render will fail. 2/6/21 Update: it completed! It took 6 hours to complete. I had to get a box fan and point it directly at the k80.
@@WillCarterTech any newer games. Like cod, battlefield, Assassin's Creed etc. Also thx for replying and I'm gonna sub. Also if it's too hard you dont have too :)
@@cavebrain69 thanks man! Of course, I’ll look em up. I haven’t played cod since I was in high school. I’m a boomer so it might take me a min to figures out the games. My nephew installed apex legends or something on my computer. So I’m sure I can figure it out. 🤣
Honestly, I'd say that because NVIDIA likes to keep secrets and tend to bin their gpus (for every architecture they come out with) I'd say that with the extra memory that you have on the gpus soldered onto the cards, just flash the bios to come up---- (Stopped here because when I looked it up the specs... yeah... just ignore this suggestion. This may have worked on a different non-geforce/quadro card but this won't work on the k80 that you have unless you know somebody who can write a bios from scratch and write/mod drivers for it.) errr... In theory, if you were to compare the k80 to another card of very similar architectures.. .you could essentially mod or write a bios for the k80 to flash it and have it come up as two 980s (as a hypothetical, please whatever you do, DON'T DO IT.) with a crap-ton of video memory, that way you can avoid: 1. NVIDIA's licensing issues by running NVIDIA's licensing server (for enterprise as the k80 counts as an enterprise card). 2. Hypervisor Enterprise Licensing. 3. Software client licensing Which would've spat in your face with a grand total of about $10k alone and then some per client depending on how many virtual machines you're going to run with the vGPU feature set. #stepoffnvidia And with the extra video memory... heh, no game is safe. It will be the equivalent of zapping a nerd with some Doom Juice and the k80 becomes the 980 dual wield (well.. not good with names but you get the idea) or the "Doom Slayer" of GPUs... well, depending on your work load and use case I suppose. But yeah, I'm honestly going to test this concept out at one point in my life to see if it works. Wish me luck!
7:00 system interruptions is a boot up triggered system application. Rebooting will just make it start all over again. Ignore it, when a load is put on the cpu it will go away, or will go away with time.
It didn't go away with time. It required a full reboot. once I set my shut down to fully shut down, I no longer experienced this issue. Windows 10 default shut down doesn't fully shut down whereas "restart" will fully shut down your computer. ua-cam.com/video/OBGxt8zhbRk/v-deo.html
Do you still think the performance is good even without the dedicated Tesla driver? I have a GTX card and of course its drivers are visually better than the Tesla's, but im just wondering if theres a lot of power im missing out on.
I couldn't see a difference in my tests on windows 10. but those drivers can help specific applications. I just haven't found any software that benefited from the tesla driver.
I use K10 as it was just lying inside a server, unused. I connected it to my HP Zbook Studio through Thunderbolt 3 eGPU enclosure. It required many adjustments under Windows 7, but it finally worked with Resolve 16 and Blender. A clear boost to my Quadro M1000M inside my Zbook Studio. I may get an RTX or RX later. As for the fan, I used double deck server fans to pull out the heat and it never got overheated. I use DC-to-DC voltage regulator so I could control the fan speed manually to get acceptable fan noise while I'm working with it 😊 Nice video anyway. You make me smile, realizing that I have a companion using Tesla for DCC.
Hey! Now THAT is the way to do it! Especially using it along side a Quadro, so one driver works with both cards. I was reading on Black Magic’s website that DRS17 does not support the Kepler architecture. They said that their new effects such as the mask and tracking effects will only work on newer GPU architectures. Kepler was developed on cuda 3.7 architecture. I don’t exactly understand how that works. But I do understand that they will no longer support my k80 so I have to stick with DRS16 for now until I can get a newer card. Are you going to stick with DRS16 as well?
Hi Will. Big thanks mate. I'm currently not sticking with Nvidia's recommendation of using Maximus driver and recommended Quadro card. I couldn't change my on board Quadro M1000M inside my 4 years old Zbook Studio to any that Nvidia recommended. I have no choice other than using the M1000M. It was a true headache for more than a week to make my setup works (Quadro M1000M, Tesla K10, eGPU box) as my installation is against all recommendations, for instance the Thunderbolt 3 eGPU box clearly wants Win 10 as mine downgraded to Win 7 (I love Win 7), Quadro M1000M vs Tesla K10 (Nvidia doesn't support unified driver for them, Quadro driver will override Tesla and so will Tesla's), etc. So, I made a custom driver myself by modifying INF file and combined Quadro and Tesla driver files. Yes, I would still be using R16 and that would make two of us 😄 I noticed performance boost when assigning rendering to K10 alone. And I mean real boost. A 3 mins full 4K ProRes project with thousands of mask paints, clean plates and color corrections that rendered 9-11 hours on my Zbook Studio, now it's just taking 1.5 hours to finish with single GPU on Tesla 😊 so happy that everything works just fine. Oh yes, I'm disabling the K10's 2nd GPU to lower down the operating power as it's just consumed 90 watt when fully loaded with single GPU. However, I could always enable the 2nd K10 GPU whenever I need it.
Btw, ever,since I assigned the Tesla K10 to exclusively handle the GPU task in R16, all my problems have gone away like "GPU memory is full" and intermittent crashes. Since I disabled the 2nd GPU, that means the K10 is only processing with 4GB of memory, which is wierd, considering that I'm editing and compositing plenty 4K Cineform clips I suspect that when using a single card to handle Windows graphics as well as R16 GPU, was the reason it gave GPU memory full mesaage and crashes.
Had my eye on some used Quadro and Tesla cards for years, thinking about trying something like this for my applications. I never managed to justify it since I knew there would be issues and limited support. Still really cool to see someone trouble shoot it and get results.
Yeah! Same. I always wanted to try the k80. But there wasn’t a lot of coverage on these cards since they are more niche. So I spent the last 4 months trying to get it to work, giving up for a while, coming back to it, getting frustrated, giving up, coming back. 🤣 then I finally figured out that DRS 17 isn’t compatible but DRS 16 is compatible. 🤷🏻♂️
@@WillCarterTech The used M6000 24gb was dropping in price last year and I was tempted to get one. Then Nvidia 30 series landed and the whole market exploded from the supply issues. Now the M6000 24gb is $1k and not remotely worth it. Currently running a 11gb card. I mostly use the Vram for OpenCL acceleration of fluid and smoke simulations. The perform boost is substantial as long as the Vram doesn't max out.
@@lux2031 yeah... the GPU supply shortage was what lead me to making this video. Originally I was going to upgrade my GPU when the new 30 series was announced. I figured either 30 series will be good and I’ll get one of those or 30 series will be like 20 series and I’ll find a 1080 ti for cheap somewhere 🤣 Neither one of those happened! Hahaha so I started looking for old gpus and sorting by GB and this Tesla was the cheapest 24GB on eBay so I went for it. I don’t know a whole lot about flow simulations. But I enjoy watching Major Hardware’s fan show down. Do you watch his videos?
@@WillCarterTech I've seen a few of the fan showdown videos. Reminds me of paper airplane competitions. Often the simpler designs are more effective than the complex ones but its fun to try.
11:00 stuff like DaVinci and blender and other stuff using multiple gpus will only use the amount of vram on the lowest card. If you have 2 12gb cards and 1 4gb card, it will only be able to use the smallest amount, aka 4gb. The computational power goes up linearly but not the vram, the vram is always the same. It may be 2 12gb gpus but they can only use 12gb each, for the same application. So it's only 12gb
@alextran74 it does, but like I said, in davinci resolve and blender, when using multiple gpus for the same task, you will be limited to the vram limit of the lowest card. Since all the cards need to run the same workload, you're limited to the card with the smallest vram amount.
ah thanks man i was looking at one of these since my 980ti died 2 weeks ago now and just trying to find a decent gpu for blender that wont cost me an arm or leg, guess ill keep looking, great work btw
Try putting your 980ti in the oven to refloat the solder. I’ve seen a lot of videos of cards coming back from the grave buy doing that. You have nothing to lose since the card is dead any way.
Dude a pci-e power connector doesn't fit in an EPS12V socket, that is if you don't apply enough pressure. Under enough pressure anything can fit anywhere.
it really didn't take much pressure. but then again i am a super big macho lumbjack man, clearly, i mean just look at how huge my muscles are... hahahahaha
No, I don't think the water cooling is necessary. There are 3d printable models that can allow you to mount fans to the heatsink on this card. should work fine if you have a couple high RPM fans. water cooling would be quieter which would be nice when I'm recording audio.
I have this Tesla k80 paired with a Nvidia quadro k6000. Using the latest drivers for Quadro you can select the k6000 as the display driver and the k80 as the primary accelerator. That setup gives me 36gb of vram. Should be good enough for 8k even 16k.
@@WillCarterTech yes it actually is designed for a driver that utilizes the "k" for Keplar based architecture. So and k series Quadro with any k series Tesla will work together. I actually have a Quadro k2000 that I can get you. I upgraded the one I had in my machine to the k6000.
@@WillCarterTech but you have to remember that your system memory, according to the maximus configuration PDF , needs to be 3x what the gpu vram is. So 24gb + 6gb is 30 GB of v ram, multiplied by 3 is 90gb of system memory needed for that setup.
Oh, I just did see your comment: "3/27/21 Update: Topaz Labs software does not support the k80. ".. I am in close contact with Topaz and know of people asking them about the K80. I had a K80 here a few weeks ago and managed to get it running by the same "switching to WDDM" trick mentioned in my other post... Since most "models" from Topaz nowadays rely on faster FP16 calculations and shift of calculations to the shaders (using not CUDA or OPENCL but the compute capability of DX12), the K80 isn´t that fast, but it works. My card had one broken GPU, so I only could test half the speed. Actually, Topaz Video Enhance is able to use two GPUs, so the K80 should be able to get some work done quite well (for the money)... I admit, getting these cards is tricky and as you mention, the effort might not be worth it - but "it works" :) one very nice thing about the K80s is the FP64 performance, which is around the best small money can buy today.. One has to get much more recent enterprise cards if FP64 is needed.. Of course, Davinci and Topaz and others don´t use it - but if one is doing some scientific research and wants FP64 performance - K20/40/80 are the way to go for a budget (Nvidia stripped FP64 support after Kepler, so even a RTX card is slower in FP64)...
Great video! :) One comment - if you put in two 16gb cards resolve does not get 32gb of ram - it duplicates the ram so you get 16gb of ram but 2 gpus processing it. They may improve this in future but for now that’s how it works. Really enjoyed your hackery thank you
Thank you! I did not understand this about GPU vram at the time of making this video. I didn't realize that i was only effectively getting 12 GB of Vram. But 12GB was enough to get rid of the GPU memory full error even when editing 8k footage. I have heard that the Ampere Architecture allows the A6000 and A100 GPUs to combine VRAM over NVLink. I don't have $10,000 to try this out, but that is very exciting technology.
Well IF you have a business rep you can get a RTX 3090 though they will try to get you to a A100 or 2 V100s instead of a single RTX 3090. As they are meant for business and most businesses into GPU compute want new not cheap. The major difference from A100 and V100 and VRAM bandwidth. As they are compute cards the difference from GDDR and HBM is a thing that GTX and consumer RTX cards won't matter about.
@@yumri4 yeah $12,000 later I’ll have 40GB of vram 🤣 It blows my mind the profit margin for business labeled products. They try to do the same thing and charge more for the “gamer” labeled products. It’s so stupid.
@@yumri4 you are correct. However the only reason it is priced that way is because Nvidia has no competition. There is no other product that meets those specific needs.
It has to do with GPU architecture changes over the years. There are a few new effects in DRS17 that utilize newer versions of CUDA, Direct X 12 and Ray Tracing (A.I.) cores. Blackmagic Design decided that it would be less stable to try to code the new effects to support both older architecture and newer architecture. Since most Davinci Resolve users left Premiere Pro due to stability issues, Blackmagic Design decided it would be in the Users best interest if they continued to make Davinci Resolve a stable platform. This means sacrificing some compatibility with GPUs from a decade ago. The positive side is that modern cheap GPUs are way more powerful than the $5,000 Tesla cards from a decade ago. The sad part is, those old gpus will need to be recycled since they can no longer be re-used.
The question is, why would there not be drivers to be compatible, is that even a thing? teslas are essential the same as Quadra’s but without cooling and output. If Quadros work, the so will teslas
Well now this is one of the most budget cards you can possibly buy. The card is selling for $30 now and tests have been done using blower fans to get the cost way down and avoid water cooling all together. Seems like a more interesting proposition now.
@@bart_ender6116 yeah, unfortunately, price drops happen for a reason. Supply and Demand still control the used market, even if rich men north of richmond are manipulating the stockmarket and grocery prices.
@@bart_ender6116 oh it is tons of fun to Play around with. I used it for over a year. But I knew that Davinci resolve 17 could save me hours of editing. I ended up buying an RTX 3090 so that I could use resolve 17. For that one reason, it was worth it. But the k80 definitely held me over for a long time. Great acceleration card.
Your are right in the end of the video. The ordinary cards for customer is gainfull in all sides and for various purporses plus it will be cheaper than buying the proffesional ones.
I already had a R730 laying around so I put a P40 and P100 in it $320 for 40GB of VRAM. Cant wait till Nvidia comes out with some budget cards with serious VRam 48 or 64 for $500
Best regards and thank you for sharing your experience. In my case I have a msi x99a motherboard with an Intel Xeo E5-2699 v3, and I have to use a dedicated GPU to be able to use the monitors. The GPU I'm using is an AMD Firepro W7100, because my computer setup is for a workstation, and it runs fine in Solidworks. I am learning to use Ansys fluent and for GPU calculation acceleration, Nvidia are recommended. I want to acquire an Nvidia Tesla K80 (they were used in servers, because they are cheap, but they do not have video outputs or their own cooling), to do the simulations, but keep the AMD as the generator of the video signals for the monitors. In the AMD Radeon pro settigs application, when I enter global settings, I get to GPU Workload and it gives two options, "Compute" and "Graphics", I suppose that in the second option the AMD GPU is left as the graphics option (a kind of Integrated GPU, since the motherboard does not have video outputs). Is it possible to use these two GPUs in the same computer, the AMD as a video signal generator and the Tesla as a processing unit, for applications like Ansys? Or is it better to use a low-end Nvidia GPU, for example the Quadro P400, that provides graphics and the Tesla K80 performs the calculations? How would it affect the ram memories of the video cards? Because the Firepro has 8 gigs but the Tesla has 24 gigs. I want to use the Tesla only when I need to run simulations that do not support the use of AMD GPUs. There are the following nvidia gpu models that catch my attention (as long as they are compatible with the Tesla K80 drivers) to use as video outputs, in configuration with a Titan K80 with video outputs for 3 or 4 monitors, such as Alternative in case of AMD FirePro problems: Nvidia Quadro P400 Nvidia Quadro K1200 My configuration: Motherboard msi X99A TOMAHAWK Intel Xeon E5-2699 v3 CPU RAM 32 GB Crucial SSD M2 msi Spatium 1Tb AMD Firepro W7100 GPU (which supports and works with Radeon pro drivers, and is comparable in performance to the AMD Radeon pro WX5100 GPU) PSU: Thermaltake smart 700w (AMD GPU has a maximum power consumption of 150 watts.) Win version: Win pro 10 22H2 Thank you.
In windows I've run into compatibility issues with the amd drivers and Nvidia drivers installed at the same time causing screen black outs and crashes. This might not be an issue anymore since my experience was a few years ago, but I haven't heard of anyone fixing this on windows. However I have heard that the linux community have made great strides in running both AMD and Nvidia GPU on one computer at the same time. you will also need to upgrade your PSU since the k80 draws more than 300watts. I still don't recommend using the k80 since it's not compatible with most modern versions of solidworks. so you would be dead in the water anyway. the m40 is still working in solidworks i believe. you should double check that though before you buy it.
here is an update that I wrote in the description of the video 3 and a half years ago. This should answer your question sufficiently. 2/9/21 Update: The k80 is built on cuda 3.2 architecture. DRS17 effects such as magic mask cannot run on anything before cuda 7. To cut bloat and keep video editors from being confused as to why their GPU works with some FX but not others, they just cut support entirely. Also, Solidworks 2021 has a similar story. That is why we won’t see the k80 supported by developers in the future. This is truly a bandaid in my system. As soon as I can purchase a 3090, I will replace the k80.
I'm working on testing that. I'll get back to you. I know that topaz labs won't let me use this card because Topaz uses Direct x12. I'll try stable diffusion and tell you what I find.
It seems that since this video the after market offers for cooling internals has expanded. While I've never seen a front mounted duel heat pup but their are back mounted heat pumps that can pull some heat off of the backside of a GPU. I don't know just how effective it would be, but with temperatures this high any extra might be worth it.
In Linux I think this would be a pretty easy yet somewhat hacky setup. You could use PCIE passthrough to send the K80 to a docker container running Divinci Resolve (Or a docker container running anything, local ollama or a command line blender render), and then use your main GPU for display. Then you could use your cooling solution of choice (I don't think you need water cooling, check out Craft Computing's solutions) and if you already have a nice power supply then everything is done for you. This eliminates a lot of the resource cost.
@@agoogleuser2507 Honestly, not really. What I said was an idea and I don’t have an enterprise GPU or a use case so I’ve never experimented but I’m sure you could get something working. I believe Craft Computing has a video on using any enterprise GPU in a virtual machine to use normally, but you’d need a modest home lab or spare pc for that, unless you wanna run the virtual machine on your main computer. About the Docker Containers, good luck. I only said it because I know of immutable distros and services like flatbub which basically run all apps in a little docker container. It’s definitely something you could get working though and I wish you the best of luck
According to Puget Systems, your VRAM does not "add" together by adding GPUs in DaVinci Resolve. So, two 12GB cards will still only give you 12GB of VRAM.
Question: is it possible to get an image off this GPU if you do not have integrated graphics, but do have a motherboard with onboard HDMI/DP designed to work with APU? I have an x570-PRO motherboard and Ryzen 2700 Pro, so I'm curious if this is theoretically possible.
I would have no way to test and confirm this for you. However, craft computing and level1tech have the k80 gpu as well as amd cpus and could test this for you.
OMG DUDE! Thank you!! You taught me that Quadro and Teslas both do passthrough!! OMG MAN!! That explains SO FREAKING MUCH!!!! I bought Nvidia mining GPU 601-100 and le noooo pass through lol
Unfortunately, due to the popularity of my video, several other youtubers, such as Level1tech, have also made videos about the k80. This has caused many people to purchase the k80, driving the price up. Im sorry, i didnt think my little youtube channel could have such a large impact on the market. If i had known what would happen, i’d likely have just told my friends about it and left it at that.
For 1200$ you can get a server chassis packed with ram, storage and 3 of this GPU's and it won't choke on power or thermally, that is, you'll need to keep it in basement and tell your neighbors that it's not a person screaming, just a server. Also $200 for PSU is overpriced, and that fan have low static pressure, which would by why you couldn't get it air cooled
🤣 I love this comment. Yes! Exactly! You described all the issues and solutions perfectly. I couldn’t have said it more succinctly. And yes I agree $200 is alot for a psu, but that is what they were selling for just a few months ago. So it’s not too far fetched. I rounded up for brevity and dramatic effect. I didn’t want to say $1,169. lol
Just did this picked up hp server with 128gb and dual 8 core cpus, 1.2tb SAS 15k for £200, alongside the card for another 2 just waiting for it to arrive plans to mod it to reduce the sounds of a jet taking off when it starts. That aside servers are going for relatively cheap and I Recommend them to anyone whos work flow benefits from them, though you may want to look into liscencing, customer / driver support and the like. r/homelabs has some useful info and a community willing to help those getting into it
How do you keep the noise down while it’s sitting next to you and you are recording a podcast? 🤣 this question is just facetious. But that is my next goal. Water cooling to lower the noise level. How did I do as far as spreading misinformation? I’m not a “computer guy” so I tried to research as much as possible to keep my mistakes minimal. What did I mess up?
@@WillCarterTech all my equipment is in a climate controlled closet.lol i run an r9 fury x for video output. Amd drivers don't conflict with tesla drivers. Water cooling is my next step for the tesla though as well.
Really! Wow! That’s great news! I had been told by many different sources that amd drivers cause issues when installed next to Nvidia drivers. Yes the closet solution! I have stared at my closet many times thinking that I could shove this desktop in there. Lol alas my apartment doesn’t have a climate controlled closet. 🤷🏻♂️ maybe in the future I can revisit that idea. Yeah, with how cheaply you can find the dual water blocks, they seem like an excellent solution. My issue going forward is that I don’t want to spend a ton of money getting this card running tip top, if it’s no longer supported in the next generation of davinci resolve 17. It works great on 16. But I’d rather spend that approximately $500 on a modern GPU with better power consumption, cooling, performance, RTX voice, and decoding/encoding acceleration.
@@WillCarterTech ive run amd and nvidia at the same time for years. you must run the amd card as primary. install nvidia cards in other slots. nvidia drivers must be installed first then amd drivers. its finiky to get going but once you do its fine.
@@TheLotroNerd alrighty. Ive needed to do an update on the k80 anyway. Usually, Windows would default to the 960. So I’ll have to figure out how to force windows to use the k80 for encoding
I just got Nvidia Tesla K80 and Quadro K6000 I don't understand way problems when this card has same Kepler architecture of processor . What you suggest to change or to eliminate K80 I want stable system for editing not gaming . Pleas tell me how to solve problems .
Unfortunately, the answer is more money. You need to buy a Nvidia Quadro RTX A6000 for $6k. 🤷🏻♂️ Or buy a cheaper gaming card that is decent at editing. Unfortunately there are no cards available right now due to bit coin. So i am currently dealing with this unstable system until a modern gpu comes available. If you have done everything outlined in this video and you still cant get stability, it’s likely do to lack of driver support. Im sorry 🤷🏻♂️ not much else you can do. My system with the k80 crashes occasionally. It’s not stable. I dont recommend this configuration.
Whether this card support Adobe Premiere Pro 2021? Currently, we have a 2 GB Nvidia card, for 2 hours of wedding footage, it's taking 8 hours to complete the export. The plan is to buy this card and do all the customization to our Desktop for more CUDA will that help in reducing the export time from 8 hours to something?
No, unfortunately, adobe is still very cpu focused when it comes to rendering. Their most recent updates have allowed for gpu acceleration on a couple of fx and encoding/decoding tasks. But the tesla k80 card in this video doesn’t have dedicated encoding hardware. In addition premiere only can take advantage of 12 cores on the cpu. Vs DR that can use 32 cores on the cpu. My best suggestion would be to try davinci resolve with your current equipment and see if that improves your render times. It’s free and if you are just focused on transcoding footage, DR excels at that task. I like to transcode to DNxHR HQ 10bit. If you are rendering into custom resolutions or resolutions higher than 4k (like 6k or 8k or vertical or 19:9 aspect, etc) you will want to purchase the studio version which is only $300 one time. And the studio version also has faster render times especially if you have multiple gpus in your system. I used Premiere for a decade and i tried resolve and used it for a couple tasks here and there. I saw a HUGE performance increase on my computer over premiere with out spending any money. Over time i gradually used DR for more tasks than premiere and eventually i noticed that i hadn’t opened premiere in months on my computer and i had been using DR for my entire work flow. At that point i stopped paying for premiere. After a couple of years using DR i decided to buy Davinci Resolve Studio so that i could make this video about the tesla k80. I love it! Now i can do 3:2 aspect ratios and 6k renders for my timelapses. You may find that using the free DR for transcoding speeds up your computer quite a bit. Or you might find that you still need to up grade your computer. I don’t know what computer you have so I’m not sure.
@@WillCarterTech We have a Photography studio and our requirement is to mix and create a final output from the video we shot in wedding events. We use Adobe Premiere latest edition and Edius. The system we are using is Intel Core i5 4670 3.40 GHz processor, Intel DH87RL motherboard with 32 GB memory with a 2 GB Nvidia GeForce GT 710 card. Another system with a latest spec AMD Ryzen 9-3900X Processor, Asus B550 Motherboard with 16 GB 3200 Mhz DDR4, For this system we have purchased a RTX3060 Ti and not yet installed with it. The concerned is we need to reduce the exporting time from 8 to 12 hours to 1 or 2 hours for a whole 3 hours wedding event videos.
Your core i5 4670 will be a huge bottle neck for you. So upgrading the gpu will only improve render times slightly. But that system would likely benefit the most from the tesla k40 (m40) due to the fact that the geforce 710 is close to the same generation of the k40. The drivers will work better when the gpus are close to the same architecture. However, im not sure if premiere supports such old architectures on their latest version. The AMD 3900x system is about the pinnacle of performance for Premiere. This system wouldnt benefit from the k80 because the 3080ti is the best gpu on the market to date. Also the 3900x is a 12 core cpu, premiere uses 12 cores. You will likely see an uplift in performance by 20-30% if you swapped out your cpu for the amd 5950x, which your mobo is compatible with. However you need to update your motherboards BIOS first. You can find the update files on the manufacturers website. In your case ASUS. However im not certain your mobo is capable of delivering the power necessary for that cpu, since it’s the budget motherboard. That’s something you will have to look up on ASUS’ website. This system would see a huge reduction in render times if you gave it 64GB of 3200Mhz cl14 DDR4 RAM. Premiere is VERY ram hungry. Also that system as configured ought to be able to render a 10min 4k video in about 2 hours. Check your premier settings to make sure that they are sent to performance mode
THANK YOU! You just prevented me from making a huge mistake. Just ordered a new data science rig (threadripper 3960, 128gb 3600 ram, amd 5700xt video card) and wanted a powerful data science graphics card. I cant find anything because stupid mining gpu surge. anyone have any suggestions??? currently I have a amd 5700 xt (build may be coming with a 3090...likely 3080). Id want a titan but not they are crazy expensive.
Add the cost of the time you wasted to setup the machine even following instructions here considering that your system and requirements might be different.... How much is your time worth?
Since a lot of students use laptops, I am thinking using an eGPU box and a Tesla would help them not blow up their laptop trying to render things... Of course this with the new information about using Teslas as gaming cards with the hacks would mean the proces of the Teslas are going to go up.
Solidworks no longer supports the Keplar architecture in their latest versions. So if you need the latest version of Solidworks, then no, the K80 would not help you here. The M40 would be a better option to take a look at.
@@WillCarterTech I ended up buying an m40 and was not able to force windows to use it. Ended up finding a 2060 super for an amazing price and although it's still the bottleneck its much better.
Just now subscribed to your channel! I think you deserve more followers because you deserve it, and also because your good in what you do, keep it up!!!
AIO PC? what is an AIO PC? all in one PC? do you mean "how would this work in an external GPU enclosure over thunderbolt?" i imagine it would work alright.
@@WillCarterTech yes AIO buildup pc onboard but i am off of vga internal make vga external for rendering 3d project. i wanna know how k80 work on lumion software or programe 3d software any else so interested for k80 a big cuda core
@@aximoucinema I have no idea. I don't have any of the hardware or software that you are talking about in order for me to test what you are asking about.
you should look about using amd GPU's with the tesla K80. this will allow for amazing compatibility and display will be taken care of also more vram will be available with the cram of k80
I'm actually planning on buying two of these for GPU compute in Linux. I already have a rackmount server with the correct power cables, so its just the GPUs that I need to buy.
1 you can get a bracket that allows it to sit upright 2 you do no need a water block but i recommend changing thermal paste as the slop they used is trash
Thanks for this video now I know it would be better to purchase an rtx but going to continue my journey with this card and lol you weren’t lying about the power consumption
Yes! I’m going to continue my journey with this card too. I am able to get sufficient cooling with fans I had laying around and my psu has just enough watts. So I was able to get it up and running for just $400. But I was going to purchase Davinci Resolve Studio any way, so it really only cost me $100. Pretty cheap for the amount of improvement in performance. I wasn’t able to use fusion at all before, nor film grain or camera shake or blur effects or even complicated color grades. So I’m very happy with my purchase. But I’m really sad that it isn’t compatible with DRS17 going forward... :-/
yes they recommend the quadro drivers... to run the tesla's... ive ran the c2075 on mojave fine using the proprietary mac drivers. just check your driver files the nvidia teslas and grab the web driver. its helps if you upgrade macOS instead of a clean install. theres a script out there that you can run that will knock out the macos issues if you want to run it you can no nbd...
Ya those Mac pros are quite the machine. I had the 2012 5,1 server edition with a swapped Xeon Hexcore from the quad. 64gb ram single channel. The pcie output 150 watts and there is an additional out on the board. The adapter cable 6 pin to mini is called something weird. Booster cable or something. The psu is rated at 900watts I think so plenty of power on board it’s just a matter of hitting the right install sequence. K20 and the c2075 will run... the only thing I couldn’t solve was the cards weren’t totally recognized per spec’s c2075 only registered 2gb and the k20 was recognized but no data showed up. I also didn’t know about the kext utilities and abilities... Do it up, they said it wasn’t possible... they don’t know nothing lol
@@WillCarterTech Dumpsters! That's a tragedy, should be punishable by law. Last Summer I seen a lot of people posting "Got it when it was new, boxed up in the garage since. Worked then 100$ obo... Then you see 1000$ 2000$ stock models. Im still kicking myself for selling mine... But I really like the mini's. Prefect desktop. the 10's are limited on HD. But i found that Waterfox browser is 64 bit and 1080p plays just fine with multiple apps opens. I run a theta node on my 2012 and its my primary on an intel gpu... performance wise 100% better.Almost went with the newst M!, super cheap but i bought moog instead... Its awesome... Im trying to do the eGPU with my C2075... its currently in a amd dell i got a couple weeks back.. Cheers on the homeschooling DIY. always like to get other perspectives and everybody has little tricks to explore. Cardboard and duck will get you a long ways lol
@@WillCarterTech ok, then it does make sense. Because I tend to see people are buying high-class hardware just for their UA-cam channels, whereas the same content can be filmed on a smartphone without any real loss of quality to viewers
@@alx8439 yes, you are correct. but I have Gear Acquisition Syndrome lol so I like playing with fancy new toys. It's why I loved working at a production company. Playing with high end equipment.
you can speak about prices. now a 3060 costs 900-1200 euros, and if you compare the k80 is only 250-300 euros, if you have a good psu, and another gpu for display, you should not have problems with it.
@@WillCarterTech true. But i could get a really good cooling for my laptop for like 4 euros. Fans, and other things were avalibe, i only had to buy patch tapes.
The older version of premiere that i have doesn’t use gpu. So i would say no but i Heard that Adobe released a version of premiere last year that takes advantage of more hardware in the system. So I’m not sure.
Hello thanks for the great video. What is the mainboard that you use for the nvidia k80? Cann you give the model and number? I am using a MSI Z87-G43 Gaming and dont works.
might i recommend trying an AMD Firepro? I have found success with an S9150 w/ 16GB of VRAM. A 1070 is my main video out, and divers have not been an issue. I use "Video Editor" from Microsoft.
10:25 the name ssds today aren't fast enough to saturate the x4 gen 3 link they have, let alone double the bandwidth at gen 4. You will spend more money on gen 4 when the memory chip on the ssd can't even saturate last gen specifications.
currently i am using rtx 3080 12g for DaVinci Resolve but it lacks Vram. If I buy this Nvidia Tesla K80 24GB and plug it into the 2nd slot on my mainboard will it work in parallel with the 3080? Will I be able to take advantage of its vram? Looking forward to hearing from you and everyone watching this video. Thank you
unfortunately no on all your questions. the driver for this keplar card would not be compatible with the rtx3080. the vram does not stack. The k80 has two gpus on board. Each gpu has 12GB of vram. you already have 12GB of vram. so this card would not improve your situation. Davinci resolve 17 does not support keplar architechure anymore. and neither does davinci resolve 18. the k80 is no longer viable for DR, unless you are willing to stay on DR16. I was not. I bought a 3090 with 24GB of vram. they sell for about $800 on ebay right now. which is alot considering the price/performance compared to AMD cards. but if you need NVidia for some reason, it's your only 24GB option under a thousand dollars. I've heard that AMD has gotten alot better on Davinci resolve these last couple generations. something to consider.
There's also the M40, similar to the K80, but I don't know about support in DaVinci Resolve or how it would work with your other GPU. I don't even understand how a program like this would need to support specific GPUs, since I'm not using it and I'm on Linux.
I mean, i could try making a timelapse on the one at work but my boss probably wouldnt be too happy about it 🤣 those mac minis with the m1 processor shoot well above their weight class. I dont think it could edit a 6000x4000 time lapse though... maybe 🤷🏻♂️... ill try to sneak some tests in one of these days.
@@WillCarterTech Perhaps when the RISC optimised version of Davinci Resolve is released officially, but the proof will be in the tasting. But I agree with you. The performance is phenomenal given that the entire unit costs the same as a decent GPU.
Yeah! if you need a mediocre gpu with a ton of vram, i love that the m1 allows the gpu access the 16GB of ram as if it were vram. That’s what i was most impressed with. I wish we could figure out how to do that with a desktop. Ive been asking a buddy of mine if there is a way to do that. It sounds like resizable bar on pcie 4.0 does a similar thing where the CPU can access the vram on the GPU. 🤷🏻♂️ so i would thing that we can make it work the other way around. I just dont know enough about computers.
I can only use this card on Linux... on Windows it appears configured with the drivers correctly, but no editing software recognizes it... it's not a defect, because on Linux it works
Davinci resolve dropped support for Kepler GPUs in the middle of 2021. Several other software companies did as well. Nvidia also stopped supporting Kepler GPUs. the k80 is officially at end of life. If you want to use this card you need to install old versions of software. For example davinci resolve studio 16 supports the k80, but 17 and newer do not.
@@WillCarterTech.. hello friend, are you sure about that, my k80 wasn't showing up because it was in TCC I changed it to WDDM... and now it appears as a video card.... I downloaded the new Davinci, I tested a rendering and it was normal
@@jaoninguem7800 oh! that's amazing! i have no idea what TCC or WDDM is. Please make a video showing me what you changed so that I can do that too. my k80 has been sitting on a shelf since i bought a 3090. it would be nice to give it some life.
As far as I’ve heard, yes they mine like crazy but they consume so much electricity that you would end up with a negative $$. So even though they are great at mining, they cost more money than they earn.
@@WillCarterTech + mining won't make use of all the VRAM, which is one of the top features of the card so it doesn't make much sense, you should get a 1080ti which has faster ram instead ;)
Good job will. I would definitely not recommend it for video editing. But I'm thinking of buying this for my computations as i do a ton of heavy computations for electromagnetics.
@Will Carter, the HomeSchool DJ I have the same card running on windows 10 and can not get it to work. I am getting these errors, can you tell me how you got it to work? 1st error (ATTENTION! No OpenCL, HIP or CUDA installation found.) 2nd error (clCompileProgram is missing fr OpenCL shared library.) 3rd error (cuFuncSetAttribute is missing from CUDA shared library.) I have the latest CUDA installed I think it's version 10. Any help would be appreciated thanks.
The k80 kepler architecture is cuda 3.7 Cuda 10 ought to be compatible with the k80. Have you gone into your motherboard bios to enable openGL? Im not sure why it isn’t working for you. I have had several people in these comments tell me that their gpu was dead on arrival.
I have a GTX 960 (Maxwell 2.0) 2 GB. Would that card work along side the Tesla card you are showing? I already have 32 GB of DDR3 (a game that ran really poorly with 8 GB provoked me to make such a big upgrade).
yes, that is a similar card to mine. I'm not super familiar with gaming. But there are alot of channels that have covered this k80 since I posted this video. So there is alot of information out there now. There were only 5 videos on youtube about the k80 before I posted this video and 3 of those videos were about cooling.
what driver version did you use? im useing gtx1080fe for my mainpc AORUS X570 ULTRA i tried Game ready driver and Studio driver and Tesla Driver on 472.12 , 462.31 , 462.96 , 452.96 but i start up my pc 3 mintus and my mainpc get freez and can't not boot up any more and i remove the k80 from my motherboard my pc can boot up again so the k80 doen't work on my main pc so i buy a quadro k600 from ebay and plug in both k80 k600 on my second pc (Huanan X79) and i can't even enter the bios i stuck in post code "d4" can anyone help?
It sounds like your problem is much deeper than driver corruption, but give DDU a try in safemode just for the heck of it. I dont think it will fix your problem. But i was having freezing about a month ago (not as bad as you) and running DDU in safemode fixed my issue. So it’s worth giving it a try. Ive been using the latest game ready driver for my gtx 960. And that seems to be the only driver that doesnt give me issues (“as much.” there is still occasional issues after windows updates or program updates) Give that a shot and let me know if it changes anything. You have a pascal gpu, and 2 kepler gpus. Try using only kepler and use a kepler driver. The 960 i have is a kepler gpu like the k80. So i dont have as many driver conflicts.
@@WillCarterTech i tried in safe mode use ddu still useless and i trying to using ubuntu 20.4 to install the nvidia driver but still get screen freezing and can't boot up again i think its my k80 vram die broken but still thank you!
@@linya0711 Yeah I would have to agree. I had another comment about six months ago say that their K 80 was dead when they got it. I think those GPUs are getting old enough that some of them are starting to give up the ghost.
if I remember correctly blender's 2022 update dropped support for kepler architechure. so i'm going to say no. unless you are using an old version of blender.
Now prices of this card have gone wild triplicating it's value or even more. This is kind of sad given this situation, even though still great value compared the actual prices of the other cards. Now it raises me the question: how good in performance are these cards at games a part from workstation usage?
Yes I had a very good experience with Pop OS. Unfortunately not all of the programs that I use regularly are compatible with Linux. And I’m not willing to swap OS between tasks. I was actually able to achieve stable performance and windows after a few months of tweaking settings. A lot of my issues were caused by the stream deck software and Black Magic “desktop video” software. After uninstalling those two programs and changing a few settings in windows and in the bios I had a very enjoyable experience with the Tesla K 80.
Hey guys, should I buy a Nvidia Tesla K20X for 100 bucks? I want to use it for daily normal use like Office 365, watching UA-cam and sometimes video editing, please give me your opinion!!Thank you!
Guys dont be so cheap, get a mining power supply to power 10 of them and it cost … about $60 wow, very expensive right? Still its $140 cheaper than yours
@@WillCarterTech Matrox has a wide range of cards. For 4 - 8+ display multimonitoring. Streaming / acquisition. Encoding Mojito 4K M264 series CompressHD Box for mobile pc Unfortunately this pro material is not really tested by UA-camrs. Too bad because the cost, the possibilities to work on real time video streams, the very low noise level and the very low consumption of these cards should attract our attention.
Can i use this gpu for cfd & cad design ? I think it's about 200$ (1280¥). With that money i can buy quadro p1000 with only 4gb vram. Which one would be better.
Due to Kepler no longer being supported by many programs such as 2021 solid works and davinci resolve 17. I would say that a newer GPU would be recommended. The p1000 is based on Pascal, 2 generations newer. So it’s likely still compatible with your program, whereas the k80 is not likely compatible.
I would double check the minimum system requirements, but yeah you could probably get the k80 to work. The p1000 would be a more efficient GPU because it draws less power. It depends on how much vram you need for your work load. CAD normally takes a lot of vram. If you have a good plan to cool the k80 and your local electric company doesn’t charge a bunch, you can make the k80 work. But I bet the p1000 has more processing power just based on the spec sheet.
This video:
1. Pain
2. Success
3. The other 300 ways you could have done it more easily
Yes!!! 🤣 I go through the pain so you don’t have to 🤣
@@WillCarterTechthe thing here was that you took the pains the wrong way. Too bad😂
If you put in a pcie 8 pin, the gpu is toast. The 8 pin is a 6 pin with 2 extra grounds, one of this grounds will short out the gpu. 3 12v, 5 grounds.
@lucian6172 Yes, the pins do have a different shape, but nothing is stopping you from pushing a little bit harder to get it in there. I've tried to do it, and it goes in with some work. Unfortunate. The spacing for the connector is exactly the same.
@lucian6172 No, I didn't power it on. I just checked if it was possible, and it definitely is. And it wouldn't fry it. It would short directly to ground on the power pins, and the power supply would shut off. No harm can be done. But you still shouldn't do it.
That extra 6 pin on the rx 580 is for if you wanted to overclock. 75 watts come from the pcie slot, even an 8 pin and a slot would give you 225 watts of available power.
8+6+slot would be 300 watts of available power.
@lucian6172 the gpu calculates total wattage and power limits around its tdp. The extra power pin is for overclocking and is over full spec.
@lucian6172 lmao your not going to melt wire, you know nothing about resistance. Adding more wire is not gonna help you.
@lucian6172 the gpu doesn't use all of the tdp at all times. Overclocking will get you better performance at the same tdp limit. If you want to overclock past the tdp you need to raise the tdp limit which is only available on cards with extra power connector headroom.
You don't seem to know how overclocking works.
This card with a modded bios and proper cooling is basically an 980 ti, and with an output of some sort, So I bought 3 of them! +GT 710 for the output because they both use the same Kepler/Kepler 2.0 architecture
Gavin if you see this what did you use it for?
This setup is a really good way to run blender on Linux because it can render easily in parallel
DUDE, WHAT A VIDEO! Finally, teslas are the least documented cards on the internet, and after seeing this video, I will say that I'm definitely going to keep an eye on the tesla k40, mainly because my new workstation is an actual rackmount server that I have just placed in my room (it's a bit noisy, but 12 core and 24gb of ram for 230€ in a shitty used market is an absurd deal, and for a complete amateur is pretty good imo), and I also know that these cards are fantastic for vms (that are a think that I'm definitely going to experiment with in the future), great video, keep up the good content! (sorry for my broken english, I'm from Italy)
What is the configuration? I am going to build a PC for 500-600$ with 12 cores CPU, 32 RAM, nVidia Tesla m40.
@@linux3584 If you are talking to me: I bought an HP Proliant DL380 G6, because it's the most flexible machine for my needs (a workstation able to do video editing and other interesting workflows like VMs with online services and stuff like that), it has 12 cores (the cpus are two xeon x5650) and 24Gb of ECC ddr3 clocked at 1333mhz (not the fastest RAM but it's still very solid for what I want to do), at the moment due to the fact that I'm very buisy with work I haven't still bought a GPU, but I'm a bit limited by Power consumption and space due to the case being designed for servers.
The fact of having a server is something really personal, and if you want to go that route (which is really cheap and flexible) you have to take in consideration all the disadvantages (noise and power consumption), for example my server make a noise at 54-56db constantly, I can live with that easily, becuase I love my machine, but many of my friends told me that they couldn't stand it for extended periods of times.
I would also add that if you live in a hot place you are gonna have a bad time with a machine like mine.
As far as the gpu is concerned I reached the conclusion that probably going for a tesla is not the best of ideas, but just because the passthrough to VMs is quite limited by licences and shit like that (damn Nvidia), but if you just have to do video editing like it's shown in this video it would be a very nice card.
Sorry, it's a bit of a long comment but I feel like it's necessary, because buying strange hardware is always a risk, so I wanted to explain my situation as best as I could.
Yes, it's a server card - not a PC card!!!
@@WillCarterTech lol ahahahaha...so hilarious I laughed and was rolling on the floor...you are way so talented in narration than troubleshooting the card with cardboard...
Hey this may be a little old now, but I would reccomend selling the K80 and buying a Tesla M40 24GB, as it's on the Maxwell Architecture (Which does support DX 12, Cuda 5.2, and other modern technologies, furthermore it still gets driver updates from Nvidia)
I have given that alot of thought. even just for the curiosity of the thing
@@WillCarterTech I would love to see a video on it! I emailed you a few weeks ago with more detailed information on the project.
@@dosenotcompute7733 alright ill check my email. Thank you!
I was unable to find the email that you sent. I checked my spam and it wasn’t there either.
@@WillCarterTech Hm ok, I guess it didn't send for some reason.
What do we learn?
Brake into a miners house and steal a rtx 3080 or 3090
Is much besser than using a Tesla Card
Heck of a video! Quadro would be a good option for a display GPU. Those are typically spec’d. out for machines that run CAD and engineering modeling. They generally possess less “grunt” and lower clock speeds than their GTX/RTX counterparts, but often provide stability. They’re geared more for accuracy in linear rendering, etc. as opposed to brute force.
Aaaaahhhhh that makes a lot of sense. That explains why video editors always say the Quadros are great for video rendering.
I got my studio "legit" so this gpu rocks it it can do 8k if you have the ssd
Can any other GPU-bound applications like games use the Tesla GPU? Or is there something in the drivers preventing that kind of usage?
Yeah, there are plenty of applications that could use the tesla but alot of them are dropping support for the Kepler architecture this year; such as solid works.
@@WillCarterTech Do those applications need to specifically support the Tesla GPU or does it just work as if it were any ordinary GPU?
They need to support the architecture. If the application doesn’t support the processor’s architecture, it will just act like the GPU isnt plugged into the system.
@@WillCarterTech Oh wow so does that mean games probably won't be able to use the Tesla GPU then? So if nothing specifically supports the architecture anymore then these Tesla GPUs are effectively bricked?
They are e-waste at this point. You would need to be willing to work with an older version of softwares. Like right now, I’m using Davinci Resolve 16. But topaz labs doesnt even have a older version that supports the kepler architecture. So i still can’t use topaz labs if i wanted to. unless i do everything through the cpu. i rendered a 20 second clip in topaz labs video enhance ai using just the CPU. It took all night. Its just not worth it at that point.
these Tesla gpus weren’t ever good at Gaming even when they were supported.
Yeah, entirely depends on the use case.. For videography / encoding etc, it's a straight up no. But for rendering complex scenes in 3D software, where VRAM is king, it might be a god send for budget home workstations. The renders might be slow and the performance laggy, but it will get the job done. For me the card is a bargain just for the memory alone.
yep
great video dude. Thanks for taking us along for the ride. It's a road I thought I wanted to take, but now may reconsider a bit firstly :D :D Thanks again :)
Yep! I’m glad I could help. I’m still thermal throttling with two fans. Only after about 30mins of 100% load though. Like during a render.
I though you said it drains a lot of power, my R9 295X2 draws 500W just for the GPU XD, Excelent video, you got a new subscriber :D
Well in that case 🤣 the tesla k80 sips power compared to that! Idk what a R9 295x2 is. I’ll have to look that up 🙂
I felt the pain from his sigh after the "it should work right?"
3/27/21 Update:
Topaz Labs software does not support the k80.
2/9/21 Update:
The k80 is built on cuda 3.2 architecture. DRS17 effects such as magic mask cannot run on anything before cuda 7. To cut bloat and keep video editors from being confused as to why their GPU works with some FX but not others, they just cut support entirely. Also, Solidworks 2021 has a similar story. That is why we won’t see the k80 supported by developers in the future. This is truly a bandaid in my system. As soon as I can purchase a 3090, I will replace the k80.
2/8/21 Update:
Installed DRS17 beta 8. Still no support for the k80.
2/5/21: I’m currently Rendering a super scaled 1080p video up to 2160p in Davinci resolve studio. Both k80 gpus are pegged at 100% and have been for the past few hours.
The card’s heat sink has long since reached its thermal capacity and doesn’t have enough air flow to keep it from thermal throttling the card.
We are currently sitting at 800Mhz clock and 91 degrees on the rear GPU and 86 degrees on the front GPU.
Render time remaining is 6 hours and counting. I think this render will fail.
2/6/21 Update: it completed! It took 6 hours to complete. I had to get a box fan and point it directly at the k80.
Can you try gaming?
@@cavebrain69 sure which game?
@@WillCarterTech any newer games. Like cod, battlefield,
Assassin's Creed etc.
Also thx for replying and I'm gonna sub. Also if it's too hard you dont have too :)
@@cavebrain69 thanks man! Of course, I’ll look em up. I haven’t played cod since I was in high school. I’m a boomer so it might take me a min to figures out the games. My nephew installed apex legends or something on my computer. So I’m sure I can figure it out. 🤣
@@WillCarterTech running path traced minecraft is a nice way to make a space heater out of your pc
You can run rtx voice on 10 series cards
to mount it could you not just cut up a metal place holder and drill it to match to two screw holes you took the original bracket off of ?
yes
Honestly, I'd say that because NVIDIA likes to keep secrets and tend to bin their gpus (for every architecture they come out with) I'd say that with the extra memory that you have on the gpus soldered onto the cards, just flash the bios to come up---- (Stopped here because when I looked it up the specs... yeah... just ignore this suggestion. This may have worked on a different non-geforce/quadro card but this won't work on the k80 that you have unless you know somebody who can write a bios from scratch and write/mod drivers for it.)
errr... In theory, if you were to compare the k80 to another card of very similar architectures.. .you could essentially mod or write a bios for the k80 to flash it and have it come up as two 980s (as a hypothetical, please whatever you do, DON'T DO IT.) with a crap-ton of video memory, that way you can avoid:
1. NVIDIA's licensing issues by running NVIDIA's licensing server (for enterprise as the k80 counts as an enterprise card).
2. Hypervisor Enterprise Licensing.
3. Software client licensing
Which would've spat in your face with a grand total of about $10k alone and then some per client depending on how many virtual machines you're going to run with the vGPU feature set. #stepoffnvidia
And with the extra video memory... heh, no game is safe. It will be the equivalent of zapping a nerd with some Doom Juice and the k80 becomes the 980 dual wield (well.. not good with names but you get the idea) or the "Doom Slayer" of GPUs... well, depending on your work load and use case I suppose.
But yeah, I'm honestly going to test this concept out at one point in my life to see if it works. Wish me luck!
Good luck! Send me the vid when you do it!
7:00 system interruptions is a boot up triggered system application. Rebooting will just make it start all over again. Ignore it, when a load is put on the cpu it will go away, or will go away with time.
It didn't go away with time. It required a full reboot. once I set my shut down to fully shut down, I no longer experienced this issue. Windows 10 default shut down doesn't fully shut down whereas "restart" will fully shut down your computer.
ua-cam.com/video/OBGxt8zhbRk/v-deo.html
Do you still think the performance is good even without the dedicated Tesla driver? I have a GTX card and of course its drivers are visually better than the Tesla's, but im just wondering if theres a lot of power im missing out on.
I couldn't see a difference in my tests on windows 10. but those drivers can help specific applications. I just haven't found any software that benefited from the tesla driver.
I use K10 as it was just lying inside a server, unused. I connected it to my HP Zbook Studio through Thunderbolt 3 eGPU enclosure.
It required many adjustments under Windows 7, but it finally worked with Resolve 16 and Blender. A clear boost to my Quadro M1000M inside my Zbook Studio. I may get an RTX or RX later.
As for the fan, I used double deck server fans to pull out the heat and it never got overheated. I use DC-to-DC voltage regulator so I could control the fan speed manually to get acceptable fan noise while I'm working with it 😊
Nice video anyway. You make me smile, realizing that I have a companion using Tesla for DCC.
Hey! Now THAT is the way to do it! Especially using it along side a Quadro, so one driver works with both cards.
I was reading on Black Magic’s website that DRS17 does not support the Kepler architecture. They said that their new effects such as the mask and tracking effects will only work on newer GPU architectures. Kepler was developed on cuda 3.7 architecture.
I don’t exactly understand how that works. But I do understand that they will no longer support my k80 so I have to stick with DRS16 for now until I can get a newer card.
Are you going to stick with DRS16 as well?
Hi Will. Big thanks mate. I'm currently not sticking with Nvidia's recommendation of using Maximus driver and recommended Quadro card. I couldn't change my on board Quadro M1000M inside my 4 years old Zbook Studio to any that Nvidia recommended. I have no choice other than using the M1000M.
It was a true headache for more than a week to make my setup works (Quadro M1000M, Tesla K10, eGPU box) as my installation is against all recommendations, for instance the Thunderbolt 3 eGPU box clearly wants Win 10 as mine downgraded to Win 7 (I love Win 7), Quadro M1000M vs Tesla K10 (Nvidia doesn't support unified driver for them, Quadro driver will override Tesla and so will Tesla's), etc. So, I made a custom driver myself by modifying INF file and combined Quadro and Tesla driver files.
Yes, I would still be using R16 and that would make two of us 😄
I noticed performance boost when assigning rendering to K10 alone. And I mean real boost. A 3 mins full 4K ProRes project with thousands of mask paints, clean plates and color corrections that rendered 9-11 hours on my Zbook Studio, now it's just taking 1.5 hours to finish with single GPU on Tesla 😊 so happy that everything works just fine. Oh yes, I'm disabling the K10's 2nd GPU to lower down the operating power as it's just consumed 90 watt when fully loaded with single GPU. However, I could always enable the 2nd K10 GPU whenever I need it.
Btw, ever,since I assigned the Tesla K10 to exclusively handle the GPU task in R16, all my problems have gone away like "GPU memory is full" and intermittent crashes. Since I disabled the 2nd GPU, that means the K10 is only processing with 4GB of memory, which is wierd, considering that I'm editing and compositing plenty 4K Cineform clips
I suspect that when using a single card to handle Windows graphics as well as R16 GPU, was the reason it gave GPU memory full mesaage and crashes.
That actually makes a lot of sense. I’d be willing to bet mine is having the same issue as yours
Hey Will. I'm lucky enough that I got both RTX A4000 and A5000 with me now. Testing them heavily, recently.
Thanks, I was going to try the K80's but after review of your video, am reconsidering it. PS two K80's for sale.
lol yeah i've got one for sale too ahaha
Had my eye on some used Quadro and Tesla cards for years, thinking about trying something like this for my applications. I never managed to justify it since I knew there would be issues and limited support. Still really cool to see someone trouble shoot it and get results.
Yeah! Same. I always wanted to try the k80. But there wasn’t a lot of coverage on these cards since they are more niche. So I spent the last 4 months trying to get it to work, giving up for a while, coming back to it, getting frustrated, giving up, coming back. 🤣 then I finally figured out that DRS 17 isn’t compatible but DRS 16 is compatible. 🤷🏻♂️
@@WillCarterTech The used M6000 24gb was dropping in price last year and I was tempted to get one. Then Nvidia 30 series landed and the whole market exploded from the supply issues. Now the M6000 24gb is $1k and not remotely worth it.
Currently running a 11gb card. I mostly use the Vram for OpenCL acceleration of fluid and smoke simulations. The perform boost is substantial as long as the Vram doesn't max out.
@@lux2031 yeah... the GPU supply shortage was what lead me to making this video. Originally I was going to upgrade my GPU when the new 30 series was announced. I figured either 30 series will be good and I’ll get one of those or 30 series will be like 20 series and I’ll find a 1080 ti for cheap somewhere 🤣
Neither one of those happened! Hahaha so I started looking for old gpus and sorting by GB and this Tesla was the cheapest 24GB on eBay so I went for it.
I don’t know a whole lot about flow simulations. But I enjoy watching Major Hardware’s fan show down. Do you watch his videos?
@@WillCarterTech I've seen a few of the fan showdown videos. Reminds me of paper airplane competitions. Often the simpler designs are more effective than the complex ones but its fun to try.
@@lux2031 right?! Almost every time! I’m just waiting for the day when one of the complicated ideas actually works better 🤣
I wander how does Google data center gets Nvidia Tesla k80 to work. As this is provide in Google Colab.
I’d be curious to know as well
11:00 stuff like DaVinci and blender and other stuff using multiple gpus will only use the amount of vram on the lowest card. If you have 2 12gb cards and 1 4gb card, it will only be able to use the smallest amount, aka 4gb. The computational power goes up linearly but not the vram, the vram is always the same. It may be 2 12gb gpus but they can only use 12gb each, for the same application. So it's only 12gb
Yes, you are correct. I learned this overtime of trial and error. Thank you for sharing your knowledge.
@@WillCarterTech & @raycert07: I thought Nvidia control panel let you assign which program to run on which card?
@alextran74 it does, but like I said, in davinci resolve and blender, when using multiple gpus for the same task, you will be limited to the vram limit of the lowest card.
Since all the cards need to run the same workload, you're limited to the card with the smallest vram amount.
@@raycert07 Thanks, good to know that!
maximus should use kepler2.so which one is better k620 or gt710
not sure
ah thanks man i was looking at one of these since my 980ti died 2 weeks ago now and just trying to find a decent gpu for blender that wont cost me an arm or leg, guess ill keep looking, great work btw
Try putting your 980ti in the oven to refloat the solder. I’ve seen a lot of videos of cards coming back from the grave buy doing that. You have nothing to lose since the card is dead any way.
@@WillCarterTech ah i was going to but someone bought it from me for parts for 200 so i kinda just said yes.
Oh yeah! For $200 I would have sold it as well.
Dude a pci-e power connector doesn't fit in an EPS12V socket, that is if you don't apply enough pressure. Under enough pressure anything can fit anywhere.
it really didn't take much pressure. but then again i am a super big macho lumbjack man, clearly, i mean just look at how huge my muscles are... hahahahaha
А что обязательно ставить дорогое водяное охлаждение? Вроде бы прикручивают обычные кулеры
No, I don't think the water cooling is necessary. There are 3d printable models that can allow you to mount fans to the heatsink on this card. should work fine if you have a couple high RPM fans. water cooling would be quieter which would be nice when I'm recording audio.
I have this Tesla k80 paired with a Nvidia quadro k6000. Using the latest drivers for Quadro you can select the k6000 as the display driver and the k80 as the primary accelerator. That setup gives me 36gb of vram. Should be good enough for 8k even 16k.
How’s it running in your set up? Does using the Quadro solve a lot of the issues I had because I am using a gaming card for display?
@@WillCarterTech yes it actually is designed for a driver that utilizes the "k" for Keplar based architecture. So and k series Quadro with any k series Tesla will work together. I actually have a Quadro k2000 that I can get you. I upgraded the one I had in my machine to the k6000.
Yeah, that’s exactly what I wanted to know. I figured someone out there set up their work station correctly.
@@WillCarterTech but you have to remember that your system memory, according to the maximus configuration PDF , needs to be 3x what the gpu vram is. So 24gb + 6gb is 30 GB of v ram, multiplied by 3 is 90gb of system memory needed for that setup.
@@Valient6 oh wow 😳
Oh, I just did see your comment: "3/27/21 Update: Topaz Labs software does not support the k80. ".. I am in close contact with Topaz and know of people asking them about the K80. I had a K80 here a few weeks ago and managed to get it running by the same "switching to WDDM" trick mentioned in my other post... Since most "models" from Topaz nowadays rely on faster FP16 calculations and shift of calculations to the shaders (using not CUDA or OPENCL but the compute capability of DX12), the K80 isn´t that fast, but it works. My card had one broken GPU, so I only could test half the speed. Actually, Topaz Video Enhance is able to use two GPUs, so the K80 should be able to get some work done quite well (for the money)... I admit, getting these cards is tricky and as you mention, the effort might not be worth it - but "it works" :)
one very nice thing about the K80s is the FP64 performance, which is around the best small money can buy today.. One has to get much more recent enterprise cards if FP64 is needed.. Of course, Davinci and Topaz and others don´t use it - but if one is doing some scientific research and wants FP64 performance - K20/40/80 are the way to go for a budget (Nvidia stripped FP64 support after Kepler, so even a RTX card is slower in FP64)...
This is great information! I wish i understood everything you are talking about. Ill need to do some more learning.
Great video! :)
One comment - if you put in two 16gb cards resolve does not get 32gb of ram - it duplicates the ram so you get 16gb of ram but 2 gpus processing it. They may improve this in future but for now that’s how it works.
Really enjoyed your hackery thank you
Thank you! I did not understand this about GPU vram at the time of making this video. I didn't realize that i was only effectively getting 12 GB of Vram. But 12GB was enough to get rid of the GPU memory full error even when editing 8k footage. I have heard that the Ampere Architecture allows the A6000 and A100 GPUs to combine VRAM over NVLink. I don't have $10,000 to try this out, but that is very exciting technology.
Yeah but nVidia 3000 series cards are currently impossible to buy.
Zakly! 🤣
Well IF you have a business rep you can get a RTX 3090 though they will try to get you to a A100 or 2 V100s instead of a single RTX 3090. As they are meant for business and most businesses into GPU compute want new not cheap. The major difference from A100 and V100 and VRAM bandwidth. As they are compute cards the difference from GDDR and HBM is a thing that GTX and consumer RTX cards won't matter about.
@@yumri4 yeah $12,000 later I’ll have 40GB of vram 🤣 It blows my mind the profit margin for business labeled products.
They try to do the same thing and charge more for the “gamer” labeled products. It’s so stupid.
@@WillCarterTech I agree it is stupid but it is paid so it is the correct "market price".
@@yumri4 you are correct. However the only reason it is priced that way is because Nvidia has no competition. There is no other product that meets those specific needs.
Why does a program like Davinci resolve need to support specific GPUs? Interesting.
It has to do with GPU architecture changes over the years. There are a few new effects in DRS17 that utilize newer versions of CUDA, Direct X 12 and Ray Tracing (A.I.) cores. Blackmagic Design decided that it would be less stable to try to code the new effects to support both older architecture and newer architecture. Since most Davinci Resolve users left Premiere Pro due to stability issues, Blackmagic Design decided it would be in the Users best interest if they continued to make Davinci Resolve a stable platform. This means sacrificing some compatibility with GPUs from a decade ago. The positive side is that modern cheap GPUs are way more powerful than the $5,000 Tesla cards from a decade ago. The sad part is, those old gpus will need to be recycled since they can no longer be re-used.
I guess I need your link to $100 K80.
I found nothing below $350
TIA
@@WillCarterTech
Thank you for that.
Another channel had a link to a $180 K80.
But no returns and no int'l shipping.
That Killed that.
in the era of cheap 3d printing, cardboard is the right choice.
riiight… 🤣
The question is, why would there not be drivers to be compatible, is that even a thing? teslas are essential the same as Quadra’s but without cooling and output. If Quadros work, the so will teslas
yeah, it was finicky. the geforce drivers for kepler worked okay most of the time.
If only you can share that extra Vram on the K880 with a Gaming GPU so that you can play in higher resolutions without running out of VRAM.
yeah, I wish vram stacked
Well now this is one of the most budget cards you can possibly buy. The card is selling for $30 now and tests have been done using blower fans to get the cost way down and avoid water cooling all together. Seems like a more interesting proposition now.
unfortunately the reason the price dropped was because solidworks, Davinci Resolve and topaz labs all dropped support for kepler architecture.
@@WillCarterTech Ah I see
@@bart_ender6116 yeah, unfortunately, price drops happen for a reason. Supply and Demand still control the used market, even if rich men north of richmond are manipulating the stockmarket and grocery prices.
@@WillCarterTech I bought one for a project just something to play with and 30 dollars isnt much to pay
@@bart_ender6116 oh it is tons of fun to Play around with. I used it for over a year. But I knew that Davinci resolve 17 could save me hours of editing. I ended up buying an RTX 3090 so that I could use resolve 17. For that one reason, it was worth it. But the k80 definitely held me over for a long time. Great acceleration card.
Your are right in the end of the video. The ordinary cards for customer is gainfull in all sides and for various purporses plus it will be cheaper than buying the proffesional ones.
I already had a R730 laying around so I put a P40 and P100 in it $320 for 40GB of VRAM. Cant wait till Nvidia comes out with some budget cards with serious VRam 48 or 64 for $500
You can cool it with two 16$ fans there's a video showing. But it runs around 80 deg
yeah, after this video was published, another youtuber made a video cooling this card. I ended up doing that to this card after i saw his video
Best regards and thank you for sharing your experience.
In my case I have a msi x99a motherboard with an Intel Xeo E5-2699 v3, and I have to use a dedicated GPU to be able to use the monitors.
The GPU I'm using is an AMD Firepro W7100, because my computer setup is for a workstation, and it runs fine in Solidworks. I am learning to use Ansys fluent and for GPU calculation acceleration, Nvidia are recommended.
I want to acquire an Nvidia Tesla K80 (they were used in servers, because they are cheap, but they do not have video outputs or their own cooling), to do the simulations, but keep the AMD as the generator of the video signals for the monitors.
In the AMD Radeon pro settigs application, when I enter global settings, I get to GPU Workload and it gives two options, "Compute" and "Graphics", I suppose that in the second option the AMD GPU is left as the graphics option (a kind of Integrated GPU, since the motherboard does not have video outputs).
Is it possible to use these two GPUs in the same computer, the AMD as a video signal generator and the Tesla as a processing unit, for applications like Ansys? Or is it better to use a low-end Nvidia GPU, for example the Quadro P400, that provides graphics and the Tesla K80 performs the calculations?
How would it affect the ram memories of the video cards? Because the Firepro has 8 gigs but the Tesla has 24 gigs.
I want to use the Tesla only when I need to run simulations that do not support the use of AMD GPUs.
There are the following nvidia gpu models that catch my attention (as long as they are compatible with the Tesla K80 drivers) to use as video outputs, in configuration with a Titan K80 with video outputs for 3 or 4 monitors, such as Alternative in case of AMD FirePro problems:
Nvidia Quadro P400
Nvidia Quadro K1200
My configuration:
Motherboard msi X99A TOMAHAWK
Intel Xeon E5-2699 v3 CPU
RAM 32 GB Crucial
SSD M2 msi Spatium 1Tb
AMD Firepro W7100 GPU (which supports and works with Radeon pro drivers, and is comparable in performance to the AMD Radeon pro WX5100 GPU)
PSU: Thermaltake smart 700w (AMD GPU has a maximum power consumption of 150 watts.)
Win version: Win pro 10 22H2
Thank you.
In windows I've run into compatibility issues with the amd drivers and Nvidia drivers installed at the same time causing screen black outs and crashes. This might not be an issue anymore since my experience was a few years ago, but I haven't heard of anyone fixing this on windows. However I have heard that the linux community have made great strides in running both AMD and Nvidia GPU on one computer at the same time.
you will also need to upgrade your PSU since the k80 draws more than 300watts.
I still don't recommend using the k80 since it's not compatible with most modern versions of solidworks. so you would be dead in the water anyway. the m40 is still working in solidworks i believe. you should double check that though before you buy it.
Any update for DRS 19 COMPATIBILITY?
here is an update that I wrote in the description of the video 3 and a half years ago. This should answer your question sufficiently.
2/9/21 Update:
The k80 is built on cuda 3.2 architecture. DRS17 effects such as magic mask cannot run on anything before cuda 7. To cut bloat and keep video editors from being confused as to why their GPU works with some FX but not others, they just cut support entirely. Also, Solidworks 2021 has a similar story. That is why we won’t see the k80 supported by developers in the future. This is truly a bandaid in my system. As soon as I can purchase a 3090, I will replace the k80.
Thoughts on this card with stable diffusion ai generation?
I'm working on testing that. I'll get back to you. I know that topaz labs won't let me use this card because Topaz uses Direct x12. I'll try stable diffusion and tell you what I find.
It seems that since this video the after market offers for cooling internals has expanded. While I've never seen a front mounted duel heat pup but their are back mounted heat pumps that can pull some heat off of the backside of a GPU. I don't know just how effective it would be, but with temperatures this high any extra might be worth it.
In Linux I think this would be a pretty easy yet somewhat hacky setup. You could use PCIE passthrough to send the K80 to a docker container running Divinci Resolve (Or a docker container running anything, local ollama or a command line blender render), and then use your main GPU for display. Then you could use your cooling solution of choice (I don't think you need water cooling, check out Craft Computing's solutions) and if you already have a nice power supply then everything is done for you. This eliminates a lot of the resource cost.
Woah. I'm interested. Anything you can point to?
@@agoogleuser2507 Honestly, not really. What I said was an idea and I don’t have an enterprise GPU or a use case so I’ve never experimented but I’m sure you could get something working. I believe Craft Computing has a video on using any enterprise GPU in a virtual machine to use normally, but you’d need a modest home lab or spare pc for that, unless you wanna run the virtual machine on your main computer. About the Docker Containers, good luck. I only said it because I know of immutable distros and services like flatbub which basically run all apps in a little docker container. It’s definitely something you could get working though and I wish you the best of luck
According to Puget Systems, your VRAM does not "add" together by adding GPUs in DaVinci Resolve. So, two 12GB cards will still only give you 12GB of VRAM.
Nope it sure doesn’t. So the k80 only has 12 GB of usable v ram. The good news is that 12GB is enough for 9 seconds of 6k footage.
I recommend you to get a custom waterbloc to cool the gpu with minimum noise
Question: is it possible to get an image off this GPU if you do not have integrated graphics, but do have a motherboard with onboard HDMI/DP designed to work with APU? I have an x570-PRO motherboard and Ryzen 2700 Pro, so I'm curious if this is theoretically possible.
I would have no way to test and confirm this for you. However, craft computing and level1tech have the k80 gpu as well as amd cpus and could test this for you.
OMG DUDE! Thank you!! You taught me that Quadro and Teslas both do passthrough!! OMG MAN!! That explains SO FREAKING MUCH!!!! I bought Nvidia mining GPU 601-100 and le noooo pass through lol
Hi. Can you share the link tesla k80, I can't find it anywhere near your price tag.Thank you
Unfortunately, due to the popularity of my video, several other youtubers, such as Level1tech, have also made videos about the k80. This has caused many people to purchase the k80, driving the price up.
Im sorry, i didnt think my little youtube channel could have such a large impact on the market. If i had known what would happen, i’d likely have just told my friends about it and left it at that.
For 1200$ you can get a server chassis packed with ram, storage and 3 of this GPU's and it won't choke on power or thermally, that is, you'll need to keep it in basement and tell your neighbors that it's not a person screaming, just a server.
Also $200 for PSU is overpriced, and that fan have low static pressure, which would by why you couldn't get it air cooled
🤣 I love this comment. Yes! Exactly! You described all the issues and solutions perfectly. I couldn’t have said it more succinctly. And yes I agree $200 is alot for a psu, but that is what they were selling for just a few months ago. So it’s not too far fetched. I rounded up for brevity and dramatic effect. I didn’t want to say $1,169. lol
Just did this picked up hp server with 128gb and dual 8 core cpus, 1.2tb SAS 15k for £200, alongside the card for another 2 just waiting for it to arrive plans to mod it to reduce the sounds of a jet taking off when it starts.
That aside servers are going for relatively cheap and I Recommend them to anyone whos work flow benefits from them, though you may want to look into liscencing, customer / driver support and the like. r/homelabs has some useful info and a community willing to help those getting into it
Very interesting video! Thanks.
good video, lots of mistakes tho.i use 4 of these, any questions ask away.
How do you keep the noise down while it’s sitting next to you and you are recording a podcast? 🤣 this question is just facetious.
But that is my next goal. Water cooling to lower the noise level.
How did I do as far as spreading misinformation? I’m not a “computer guy” so I tried to research as much as possible to keep my mistakes minimal. What did I mess up?
@@WillCarterTech all my equipment is in a climate controlled closet.lol i run an r9 fury x for video output. Amd drivers don't conflict with tesla drivers. Water cooling is my next step for the tesla though as well.
Really! Wow! That’s great news! I had been told by many different sources that amd drivers cause issues when installed next to Nvidia drivers.
Yes the closet solution! I have stared at my closet many times thinking that I could shove this desktop in there. Lol alas my apartment doesn’t have a climate controlled closet. 🤷🏻♂️ maybe in the future I can revisit that idea.
Yeah, with how cheaply you can find the dual water blocks, they seem like an excellent solution.
My issue going forward is that I don’t want to spend a ton of money getting this card running tip top, if it’s no longer supported in the next generation of davinci resolve 17. It works great on 16. But I’d rather spend that approximately $500 on a modern GPU with better power consumption, cooling, performance, RTX voice, and decoding/encoding acceleration.
@@WillCarterTech ive run amd and nvidia at the same time for years. you must run the amd card as primary. install nvidia cards in other slots. nvidia drivers must be installed first then amd drivers. its finiky to get going but once you do its fine.
@@WillCarterTech turn windows update off during this rocess and do it yourself. windows will muck it up everytime.lol
I want to see this using nvenc encoding for OBS
The tesla k80 doesnt have nvenc encoding / decoding. I used the gtx960 for encoding in obs. When using the k80 i would drop frames
@@WillCarterTech It says it has two nvenc encoders on the nvidia website. Also says that nvenc is fully unlocked.
Hmmm… maybe the softwares i was using didn’t recognize it, or maybe I didn’t know how to read the software.
@@WillCarterTech Would love to see a video if you can figure it out!
@@TheLotroNerd alrighty. Ive needed to do an update on the k80 anyway.
Usually, Windows would default to the 960. So I’ll have to figure out how to force windows to use the k80 for encoding
Can’t get computer to recognize my Tesla gpu help please
Did you try everything in the video?
If you have already tried everything, and your tesla k80 isnt showing up, you have a dead gpu. RMA your gpu
I just got Nvidia Tesla K80 and Quadro K6000 I don't understand way problems when this card has same Kepler architecture of processor . What you suggest to change or to eliminate K80 I want stable system for editing not gaming . Pleas tell me how to solve problems .
Unfortunately, the answer is more money. You need to buy a Nvidia Quadro RTX A6000 for $6k. 🤷🏻♂️
Or buy a cheaper gaming card that is decent at editing.
Unfortunately there are no cards available right now due to bit coin. So i am currently dealing with this unstable system until a modern gpu comes available.
If you have done everything outlined in this video and you still cant get stability, it’s likely do to lack of driver support.
Im sorry 🤷🏻♂️ not much else you can do. My system with the k80 crashes occasionally. It’s not stable. I dont recommend this configuration.
@@WillCarterTech What motherboard you was using
Gigabyte Z390 Aorus Master
Like i said though. My system isnt stable. So getting the same motherboard as me will not make your system more stable.
@@WillCarterTech Can Motherboard make some different
Nice testing bro, great video.
Thanks! :-)
Whether this card support Adobe Premiere Pro 2021? Currently, we have a 2 GB Nvidia card, for 2 hours of wedding footage, it's taking 8 hours to complete the export.
The plan is to buy this card and do all the customization to our Desktop for more CUDA will that help in reducing the export time from 8 hours to something?
No, unfortunately, adobe is still very cpu focused when it comes to rendering. Their most recent updates have allowed for gpu acceleration on a couple of fx and encoding/decoding tasks. But the tesla k80 card in this video doesn’t have dedicated encoding hardware.
In addition premiere only can take advantage of 12 cores on the cpu. Vs DR that can use 32 cores on the cpu.
My best suggestion would be to try davinci resolve with your current equipment and see if that improves your render times. It’s free and if you are just focused on transcoding footage, DR excels at that task. I like to transcode to DNxHR HQ 10bit.
If you are rendering into custom resolutions or resolutions higher than 4k (like 6k or 8k or vertical or 19:9 aspect, etc) you will want to purchase the studio version which is only $300 one time. And the studio version also has faster render times especially if you have multiple gpus in your system.
I used Premiere for a decade and i tried resolve and used it for a couple tasks here and there. I saw a HUGE performance increase on my computer over premiere with out spending any money. Over time i gradually used DR for more tasks than premiere and eventually i noticed that i hadn’t opened premiere in months on my computer and i had been using DR for my entire work flow. At that point i stopped paying for premiere.
After a couple of years using DR i decided to buy Davinci Resolve Studio so that i could make this video about the tesla k80. I love it! Now i can do 3:2 aspect ratios and 6k renders for my timelapses.
You may find that using the free DR for transcoding speeds up your computer quite a bit. Or you might find that you still need to up grade your computer. I don’t know what computer you have so I’m not sure.
@@WillCarterTech Thanks for your time to write me back, Appreiciated. Will start DR soon.
Is transcoding the only thing that you are concerned with?
What are the specs of the system you are using for Premiere?
@@WillCarterTech We have a Photography studio and our requirement is to mix and create a final output from the video we shot in wedding events. We use Adobe Premiere latest edition and Edius. The system we are using is Intel Core i5 4670 3.40 GHz processor, Intel DH87RL motherboard with 32 GB memory with a 2 GB Nvidia GeForce GT 710 card.
Another system with a latest spec AMD Ryzen 9-3900X Processor, Asus B550 Motherboard with 16 GB 3200 Mhz DDR4, For this system we have purchased a RTX3060 Ti and not yet installed with it.
The concerned is we need to reduce the exporting time from 8 to 12 hours to 1 or 2 hours for a whole 3 hours wedding event videos.
Your core i5 4670 will be a huge bottle neck for you. So upgrading the gpu will only improve render times slightly. But that system would likely benefit the most from the tesla k40 (m40) due to the fact that the geforce 710 is close to the same generation of the k40. The drivers will work better when the gpus are close to the same architecture. However, im not sure if premiere supports such old architectures on their latest version.
The AMD 3900x system is about the pinnacle of performance for Premiere. This system wouldnt benefit from the k80 because the 3080ti is the best gpu on the market to date. Also the 3900x is a 12 core cpu, premiere uses 12 cores. You will likely see an uplift in performance by 20-30% if you swapped out your cpu for the amd 5950x, which your mobo is compatible with. However you need to update your motherboards BIOS first. You can find the update files on the manufacturers website. In your case ASUS. However im not certain your mobo is capable of delivering the power necessary for that cpu, since it’s the budget motherboard. That’s something you will have to look up on ASUS’ website.
This system would see a huge reduction in render times if you gave it 64GB of 3200Mhz cl14 DDR4 RAM. Premiere is VERY ram hungry.
Also that system as configured ought to be able to render a 10min 4k video in about 2 hours. Check your premier settings to make sure that they are sent to performance mode
THANK YOU! You just prevented me from making a huge mistake. Just ordered a new data science rig (threadripper 3960, 128gb 3600 ram, amd 5700xt video card) and wanted a powerful data science graphics card. I cant find anything because stupid mining gpu surge. anyone have any suggestions??? currently I have a amd 5700 xt (build may be coming with a 3090...likely 3080). Id want a titan but not they are crazy expensive.
Train online or learn other data science first. 🤷♂️
If you use linux, buy a tesla k80.
Add the cost of the time you wasted to setup the machine even following instructions here considering that your system and requirements might be different.... How much is your time worth?
that's a great point
Can we use this graphicard for mining
Sure can
@@WillCarterTech what will be hashrate ??
Since a lot of students use laptops, I am thinking using an eGPU box and a Tesla would help them not blow up their laptop trying to render things... Of course this with the new information about using Teslas as gaming cards with the hacks would mean the proces of the Teslas are going to go up.
Yeah, i saw k80 sell on ebay for $500 in may. It was crazy! But yes you are right, this card would do well to be in a egpu box.
Outside of video editing in Divinci Resolve, would the K80 still help with graphics acceleration, like in Solidworks?
Solidworks no longer supports the Keplar architecture in their latest versions. So if you need the latest version of Solidworks, then no, the K80 would not help you here. The M40 would be a better option to take a look at.
@@WillCarterTech I ended up buying an m40 and was not able to force windows to use it. Ended up finding a 2060 super for an amazing price and although it's still the bottleneck its much better.
Just now subscribed to your channel!
I think you deserve more followers because you deserve it, and also because your good in what you do, keep it up!!!
Ah man! thanks! I appreciate the encouragement! :)
hello bro how k80 suport on AIO PC? you have solution ill make a riser vga external
AIO PC? what is an AIO PC? all in one PC? do you mean "how would this work in an external GPU enclosure over thunderbolt?" i imagine it would work alright.
@@WillCarterTech yes AIO buildup pc onboard but i am off of vga internal make vga external for rendering 3d project. i wanna know how k80 work on lumion software or programe 3d software any else so interested for k80 a big cuda core
@@aximoucinema I have no idea. I don't have any of the hardware or software that you are talking about in order for me to test what you are asking about.
you should look about using amd GPU's with the tesla K80. this will allow for amazing compatibility and display will be taken care of also more vram will be available with the cram of k80
I'm actually planning on buying two of these for GPU compute in Linux. I already have a rackmount server with the correct power cables, so its just the GPUs that I need to buy.
that is the perfect use case for this card! i wish you the best!
1 you can get a bracket that allows it to sit upright 2 you do no need a water block but i recommend changing thermal paste as the slop they used is trash
You have really good points. I think I have extra thermal paste laying around somewhere.
Tesla M40 GPUs go for $300 on ebay. It has 24 GBs of vram and is a new arch then the K80
Thanks for this video now I know it would be better to purchase an rtx but going to continue my journey with this card and lol you weren’t lying about the power consumption
Yes! I’m going to continue my journey with this card too. I am able to get sufficient cooling with fans I had laying around and my psu has just enough watts. So I was able to get it up and running for just $400. But I was going to purchase Davinci Resolve Studio any way, so it really only cost me $100.
Pretty cheap for the amount of improvement in performance. I wasn’t able to use fusion at all before, nor film grain or camera shake or blur effects or even complicated color grades. So I’m very happy with my purchase. But I’m really sad that it isn’t compatible with DRS17 going forward... :-/
yes they recommend the quadro drivers... to run the tesla's... ive ran the c2075 on mojave fine using the proprietary mac drivers. just check your driver files the nvidia teslas and grab the web driver. its helps if you upgrade macOS instead of a clean install. theres a script out there that you can run that will knock out the macos issues if you want to run it you can no nbd...
I do have an old Mac Pro laying around. I should try popping it in there and see if it runs. Might be a fun video too
Ya those Mac pros are quite the machine. I had the 2012 5,1 server edition with a swapped Xeon Hexcore from the quad. 64gb ram single channel. The pcie output 150 watts and there is an additional out on the board. The adapter cable 6 pin to mini is called something weird. Booster cable or something. The psu is rated at 900watts I think so plenty of power on board it’s just a matter of hitting the right install sequence. K20 and the c2075 will run... the only thing I couldn’t solve was the cards weren’t totally recognized per spec’s c2075 only registered 2gb and the k20 was recognized but no data showed up. I also didn’t know about the kext utilities and abilities... Do it up, they said it wasn’t possible... they don’t know nothing lol
They are still used by some companies to this day. Although I was seeing alot of them hitting the dumpsters last year when the new Mac Pro came out
@@WillCarterTech Dumpsters! That's a tragedy, should be punishable by law. Last Summer I seen a lot of people posting "Got it when it was new, boxed up in the garage since. Worked then 100$ obo... Then you see 1000$ 2000$ stock models. Im still kicking myself for selling mine... But I really like the mini's. Prefect desktop. the 10's are limited on HD. But i found that Waterfox browser is 64 bit and 1080p plays just fine with multiple apps opens. I run a theta node on my 2012 and its my primary on an intel gpu... performance wise 100% better.Almost went with the newst M!, super cheap but i bought moog instead... Its awesome... Im trying to do the eGPU with my C2075... its currently in a amd dell i got a couple weeks back.. Cheers on the homeschooling DIY. always like to get other perspectives and everybody has little tricks to explore. Cardboard and duck will get you a long ways lol
Haha yess duck tape and cardboard are my go tos 🤣
What kind of movies production are you doing in Davinci Resolve?
youtube videos, commercials, training/classroom videos, podcasts, 8k timelapse videos, 4k stock footage
@@WillCarterTech ok, then it does make sense. Because I tend to see people are buying high-class hardware just for their UA-cam channels, whereas the same content can be filmed on a smartphone without any real loss of quality to viewers
@@alx8439 yes, you are correct. but I have Gear Acquisition Syndrome lol so I like playing with fancy new toys. It's why I loved working at a production company. Playing with high end equipment.
you can speak about prices. now a 3060 costs 900-1200 euros, and if you compare the k80 is only 250-300 euros, if you have a good psu, and another gpu for display, you should not have problems with it.
@@WillCarterTech true. But i could get a really good cooling for my laptop for like 4 euros. Fans, and other things were avalibe, i only had to buy patch tapes.
@@WillCarterTech btw do you have any gaming resoults too? I am curious
Serve the home actually did a video on the K 80 gaming performance.
@@WillCarterTech could you give a link, because i could not find that video
Any luck with it in After Effects or Premiere? Would it be worth it to pick one of these cards up to use with those programs?
The older version of premiere that i have doesn’t use gpu. So i would say no but i Heard that Adobe released a version of premiere last year that takes advantage of more hardware in the system. So I’m not sure.
Hello thanks for the great video. What is the mainboard that you use for the nvidia k80? Cann you give the model and number? I am using a MSI Z87-G43 Gaming and dont works.
I’m using a z390 gigabyte aorus master
might i recommend trying an AMD Firepro? I have found success with an S9150 w/ 16GB of VRAM. A 1070 is my main video out, and divers have not been an issue. I use "Video Editor" from Microsoft.
Wow! You are the second person I’ve heard say that you got AMD and Nvidia cards to work together. That’s incredible
10:25 the name ssds today aren't fast enough to saturate the x4 gen 3 link they have, let alone double the bandwidth at gen 4. You will spend more money on gen 4 when the memory chip on the ssd can't even saturate last gen specifications.
There are ssd manufacturers that advertise 7 GBps
@@WillCarterTech unless it's 2 drives in raid 0, I doubt that's happening.
Hmmmm. I mean sequential reads and writes are different from real world performance so you have a good point
currently i am using rtx 3080 12g for DaVinci Resolve but it lacks Vram. If I buy this Nvidia Tesla K80 24GB and plug it into the 2nd slot on my mainboard will it work in parallel with the 3080? Will I be able to take advantage of its vram? Looking forward to hearing from you and everyone watching this video. Thank you
I don't use DaVinci myself, but that's only possible if DaVinci supports GPU memory sharing in it's own way.
unfortunately no on all your questions.
the driver for this keplar card would not be compatible with the rtx3080.
the vram does not stack. The k80 has two gpus on board. Each gpu has 12GB of vram. you already have 12GB of vram. so this card would not improve your situation.
Davinci resolve 17 does not support keplar architechure anymore. and neither does davinci resolve 18. the k80 is no longer viable for DR, unless you are willing to stay on DR16. I was not.
I bought a 3090 with 24GB of vram. they sell for about $800 on ebay right now. which is alot considering the price/performance compared to AMD cards. but if you need NVidia for some reason, it's your only 24GB option under a thousand dollars. I've heard that AMD has gotten alot better on Davinci resolve these last couple generations. something to consider.
There's also the M40, similar to the K80, but I don't know about support in DaVinci Resolve or how it would work with your other GPU. I don't even understand how a program like this would need to support specific GPUs, since I'm not using it and I'm on Linux.
It pains me to say this, but what about an m1 powered mac mini?
I mean, i could try making a timelapse on the one at work but my boss probably wouldnt be too happy about it 🤣 those mac minis with the m1 processor shoot well above their weight class. I dont think it could edit a 6000x4000 time lapse though... maybe 🤷🏻♂️... ill try to sneak some tests in one of these days.
@@WillCarterTech Perhaps when the RISC optimised version of Davinci Resolve is released officially, but the proof will be in the tasting. But I agree with you. The performance is phenomenal given that the entire unit costs the same as a decent GPU.
Yeah! if you need a mediocre gpu with a ton of vram, i love that the m1 allows the gpu access the 16GB of ram as if it were vram. That’s what i was most impressed with.
I wish we could figure out how to do that with a desktop. Ive been asking a buddy of mine if there is a way to do that. It sounds like resizable bar on pcie 4.0 does a similar thing where the CPU can access the vram on the GPU. 🤷🏻♂️ so i would thing that we can make it work the other way around. I just dont know enough about computers.
I can only use this card on Linux... on Windows it appears configured with the drivers correctly, but no editing software recognizes it... it's not a defect, because on Linux it works
Davinci resolve dropped support for Kepler GPUs in the middle of 2021. Several other software companies did as well. Nvidia also stopped supporting Kepler GPUs. the k80 is officially at end of life. If you want to use this card you need to install old versions of software. For example davinci resolve studio 16 supports the k80, but 17 and newer do not.
@@WillCarterTech.. hello friend, are you sure about that, my k80 wasn't showing up because it was in TCC I changed it to WDDM... and now it appears as a video card.... I downloaded the new Davinci, I tested a rendering and it was normal
@@jaoninguem7800 oh! that's amazing! i have no idea what TCC or WDDM is. Please make a video showing me what you changed so that I can do that too. my k80 has been sitting on a shelf since i bought a 3090. it would be nice to give it some life.
@@WillCarterTech I answered you with the guide, but it seems that UA-cam removed it because it had a link
Is the nvidia tesla k80 good for mining? Asking for a friend 😉
As far as I’ve heard, yes they mine like crazy but they consume so much electricity that you would end up with a negative $$. So even though they are great at mining, they cost more money than they earn.
@@WillCarterTech + mining won't make use of all the VRAM, which is one of the top features of the card so it doesn't make much sense,
you should get a 1080ti which has faster ram instead ;)
Good job will. I would definitely not recommend it for video editing. But I'm thinking of buying this for my computations as i do a ton of heavy computations for electromagnetics.
Oh yeah. What software do you use? Does it still support Kepler?
@@WillCarterTech I use CST Design Studio. But will have to convince my research supervisor to shell out some money if i want to buy this.
@@abc20723 the k80 is a Frankenstein card. It’s not a good investment because most programs are cutting support for cuda 3.2
@@WillCarterTech Thats true. That's why a bit unsure if its worth it or not. If CST doesn't support it then i will be in trouble 😁.
@@abc20723 you should check cst to see if they support cuda 3.2
@Will Carter, the HomeSchool DJ
I have the same card running on windows 10 and can not get it to work.
I am getting these errors, can you tell me how you got it to work?
1st error (ATTENTION! No OpenCL, HIP or CUDA installation found.)
2nd error (clCompileProgram is missing fr OpenCL shared library.)
3rd error (cuFuncSetAttribute is missing from CUDA shared library.)
I have the latest CUDA installed I think it's version 10.
Any help would be appreciated thanks.
The k80 kepler architecture is cuda 3.7
Cuda 10 ought to be compatible with the k80. Have you gone into your motherboard bios to enable openGL?
Im not sure why it isn’t working for you. I have had several people in these comments tell me that their gpu was dead on arrival.
P.S. I also bought a Titan X, so to hell with miners and 1080 Ti's!
I have a GTX 960 (Maxwell 2.0) 2 GB. Would that card work along side the Tesla card you are showing? I already have 32 GB of DDR3 (a game that ran really poorly with 8 GB provoked me to make such a big upgrade).
yes, that is a similar card to mine. I'm not super familiar with gaming. But there are alot of channels that have covered this k80 since I posted this video. So there is alot of information out there now. There were only 5 videos on youtube about the k80 before I posted this video and 3 of those videos were about cooling.
Qual fonte suporta essa placa mano ?
I'm not sure what you mean.
what driver version did you use?
im useing gtx1080fe for my mainpc AORUS X570 ULTRA
i tried Game ready driver and Studio driver and Tesla Driver on 472.12 , 462.31 , 462.96 , 452.96
but i start up my pc 3 mintus and my mainpc get freez and can't not boot up any more
and i remove the k80 from my motherboard my pc can boot up again
so the k80 doen't work on my main pc
so i buy a quadro k600 from ebay and plug in both k80 k600 on my second pc (Huanan X79)
and i can't even enter the bios
i stuck in post code "d4"
can anyone help?
@@WillCarterTech im useing the DDU(Display Driver Uninstaller) to delete my driver but not in safemode
and im useing windows 10 21H1 19043.1237
It sounds like your problem is much deeper than driver corruption, but give DDU a try in safemode just for the heck of it. I dont think it will fix your problem. But i was having freezing about a month ago (not as bad as you) and running DDU in safemode fixed my issue. So it’s worth giving it a try.
Ive been using the latest game ready driver for my gtx 960. And that seems to be the only driver that doesnt give me issues (“as much.” there is still occasional issues after windows updates or program updates)
Give that a shot and let me know if it changes anything. You have a pascal gpu, and 2 kepler gpus. Try using only kepler and use a kepler driver. The 960 i have is a kepler gpu like the k80. So i dont have as many driver conflicts.
Also craft computing has done some bios hacking to get the k80 to work better. He will have better information than I can provide.
@@WillCarterTech i tried in safe mode use ddu still useless and i trying to using ubuntu 20.4 to install the nvidia driver but still get screen freezing and can't boot up again
i think its my k80 vram die broken
but still thank you!
@@linya0711 Yeah I would have to agree. I had another comment about six months ago say that their K 80 was dead when they got it. I think those GPUs are getting old enough that some of them are starting to give up the ghost.
Quick question, can this gpu be used for rendering stuff on blender? Like honestly this is extremely cheap for this much
if I remember correctly blender's 2022 update dropped support for kepler architechure. so i'm going to say no. unless you are using an old version of blender.
@@WillCarterTech god damn it 😭😭😭
Now prices of this card have gone wild triplicating it's value or even more. This is kind of sad given this situation, even though still great value compared the actual prices of the other cards.
Now it raises me the question: how good in performance are these cards at games a part from workstation usage?
I prefer the P100. It only got 16GB, but HBM2 memory wich is WAY faster (5 times or so) then DDR5X. And you dont get HBM2 memory in personal cards.
disable fast boot, then shutdown will work
i reccomend that you try it in Linux because card was made for it and its more stable
Yes I had a very good experience with Pop OS. Unfortunately not all of the programs that I use regularly are compatible with Linux. And I’m not willing to swap OS between tasks. I was actually able to achieve stable performance and windows after a few months of tweaking settings. A lot of my issues were caused by the stream deck software and Black Magic “desktop video” software. After uninstalling those two programs and changing a few settings in windows and in the bios I had a very enjoyable experience with the Tesla K 80.
bro its a server gpu its made for crunching numbers and equactions
Yes it is!
Hey guys, should I buy a Nvidia Tesla K20X for 100 bucks? I want to use it for daily normal use like Office 365, watching UA-cam and sometimes video editing, please give me your opinion!!Thank you!
Im pretty sure Microsoft Word nor Excel are GPU accelerated programs…
Guys dont be so cheap, get a mining power supply to power 10 of them and it cost … about $60 wow, very expensive right? Still its $140 cheaper than yours
Then do what? no modern programs support Kepler
and Matrox Mojito 4K ?
Is that card an processor accelerator? Or just a display card. It might pair well with the tesla 🤔
@@WillCarterTech Matrox has a wide range of cards.
For 4 - 8+ display multimonitoring.
Streaming / acquisition.
Encoding
Mojito 4K
M264 series
CompressHD
Box for mobile pc
Unfortunately this pro material is not really tested by UA-camrs. Too bad because the cost, the possibilities to work on real time video streams, the very low noise level and the very low consumption of these cards should attract our attention.
Yeah it’s super interesting. It reminds me of some of the sdi monitor cards from black magic. Ill have to do some research on these.
Can i use this gpu for cfd & cad design ? I think it's about 200$ (1280¥). With that money i can buy quadro p1000 with only 4gb vram. Which one would be better.
Due to Kepler no longer being supported by many programs such as 2021 solid works and davinci resolve 17. I would say that a newer GPU would be recommended. The p1000 is based on Pascal, 2 generations newer. So it’s likely still compatible with your program, whereas the k80 is not likely compatible.
@@WillCarterTech i'm going to use much older version of catia + ansys fluent.
I would double check the minimum system requirements, but yeah you could probably get the k80 to work.
The p1000 would be a more efficient GPU because it draws less power.
It depends on how much vram you need for your work load. CAD normally takes a lot of vram.
If you have a good plan to cool the k80 and your local electric company doesn’t charge a bunch, you can make the k80 work. But I bet the p1000 has more processing power just based on the spec sheet.
I see your method of cooling highly correlates to your choice of T-shirt
Nice!
😜