Article with quick reference charts and links to products (additional links above): www.gamersnexus.net/guides/3399-best-workstation-gpus-2018-for-adobe-premiere-autocad-vray-and-more
Watch our Best CPUs of 2018 video before Black Friday: ua-cam.com/video/TCxjhEMGNZE/v-deo.html
Watch our Best Cases of 2018 video: ua-cam.com/video/U2RkwBXaYQA/v-deo.html
Find more of Rob here: ua-cam.com/channels/-Rm-fVvpu3-aSTwG96RmTQ.html
nice! Thanks, Rob for dropping by for a vid. B)
Did you have HBCC enabled on the Vega and WX cards?
The AutoCAD logo has been outdated for the past few years.
How come no Titan V?
The second graph is wrong. He says the RX 580 beats the WX 7100 because it is above it on the chart, but the result for the WX 7100 is actually higher XD
2:14 Autodesk 3ds Max
2:57 Autodesk Maya
3:46 Dassault Systemes CATIA
4:34 Dassault Systemes SolidWorks
5:10 Siemens NX
6:09 PTC Creo
6:57 Autodesk AutoCAD 2016 3840x2160 Cadalyst
8:00 Blender 2.79b Cycles Render Time
8:47 AMD Radeon ProRender 2.3.379
9:46 Redshift 2.6.22
10:35 Chaos Group V-Ray 4.02.05
11:27 OctaneBench 3.08
12:15 Adobe Premiere Pro CC 2019 AVC H.264
12:50 MAGIX Vegas Pro 16
YOU are the MVP. Thanks.
This content is very helpful, as gaming performance is easy to find, yet data for professionals and freelancers is scarce and often difficult to parse. I'd love to see more content targeted towards artists and workstation users to bolster the library of content on the channel.
Ditto.
hesrightyouknow.jpg
If you are looking for sources, Puget Systems has a lot of testing on their site.
Agreed. This metric set is extremely useful even for hobbyists; more often than not, even when you ask pro users of those applications, the answers are so vague they might as well not answer at all.
Luke Canniff I totally agree with you. I would really like to see these types of benchmarks expanded to include 3D modeling apps such as Rhino3D. There is a benchmark plugin (Holomark) available for the v5 version.
That was one hell of a round of chart simulator
+
I didn't think I was going to enjoy this (only because I don't do any kind of workstation tasks) but this was so interesting.
Yeah, I've never seen such an unpredictable batch of results.
Oh wow a different host this time. Nice to see GN experimenting with other hosts than Steve or Buildzoid
Yeah buildzoid has a very grating voice and needs a script
I would like to see some information about these benchmarks next time; I have no idea what they are most used for.
@@ADR69 I'm pretty sure this dude is reading from a script too.
BZ does a great job imo
This video is perfect. I actually need a $4k workstation for 3D rendering but I don't know what to look for. Thanks!
Now you do! We have a lot of 3D rendering applications in this one.
I built a $3500 3D workstation :p took me a long time to save up, can't wait for Nvidia's official RTX SDK.
This doesn't affect me, but what are the benchmarks for 2 RTX 580s? There are plenty of people out there who would love budget access to faster workstation-related tasks, considering how cheaply these cards can be had on the secondary market. I think including some dual "mid range" card setups would be very meaningful to a lot of people. Anyone that is watching this video is looking to spend their money wisely. Anyone who isn't is just going to buy the most expensive card anyway.
@@TH3M3RC Do you mean RX 580, or am I missing something from Nvidia? (obvs I'm jk)
luca_palo I type RTX so much it auto corrected 😂
Re AutoCAD - you took a benchmark from AutoCAD 2016! That's a nearly four-year-old piece of software... it's falling off the supported roadmap in March next year. We've had 2017, 2018 and 2019 since then.
But having said that, this is the best content piece I've ever seen from a non-CAD channel covering workstation use; seriously, well done. The advice regarding knowing your workload is critical, and LTT should really stop with their attempts to influence our market. It's important to point out to potential buyers that CAD is extremely deep; there are countless workflows internally, and sometimes it isn't as cut and dried as one spec being best for the entire application... it depends on which modules you use, to an extent. But well done, guys, I can tell how much work went into this.
@@Neil3D A likely reason for covering AutoCAD 2016 is that since then they have stopped offering perpetual licences and moved to a subscription model.
@@bengrogan9710 You can still get a 30 day trial of AutoCAD 2019 the same way you could with 2016. The ending of perpetual licensing didn't change trial offerings.
@@Neil3D Not my reasoning. Without a lifetime licence you cannot properly maintain a stable version, as you are required to patch immediately.
For a reviewer, this ends your ability to reliably repeat benchmarks at later dates for cross-comparisons with newly released hardware.
pls provide lifetime licenses to gn
Thank you Rob, very cool!
Damn, this man knows his render engines. Would be cool to see UE4 and Unity benchmarks. I might make a test map to compare my 1080 Ti with the 2080 Ti when I get it.
Pretty sure GTX/RTX would run circles around Quadros there.
As a 3D artist myself, I always wondered why people buy Quadros.
Because large professional scenes take a lot more than 11GB of VRAM.
DawoopFilms that is why we have out of core support (dabs)
It's simply driver optimization. As you can see in a few of the tests, the GeForce cards are so far behind that it can only be explained by a lack of driver optimization. You can hate Nvidia for reserving those driver optimizations for Quadros, but truth be told, it costs them money to develop those drivers, and the premium price you pay for Quadros finances the driver development.
@Rafael Mesa For Maya, if you use the legacy viewport with a GTX, you are screwed; Viewport 2.0 works. I used a K4200 and a 1080 Ti, and have a P6000 now (eBay offer for €1900). And if you have a heavy scene or animation that doesn't work well in 2.0, then you should return to the legacy viewport, where the Quadro has the same FPS performance as in 2.0. Hope you find the info interesting.
@@123OGNIAN Ditto. My W9100 actually performs worse than a 1070 or a Titan X (Maxwell) in VP 2.0, until I start adding heavier and heavier things to the scene; eventually it is hitching and pausing so badly on the GeForce cards that it is utterly unusable, despite the high FPS. On my W9100 it is buttery smooth despite the lower FPS. In legacy, which I still have to use for certain things, it is literally up to 10 times faster.
As a freelance 3D artist, I thank you for this comparison... would be cool to see more stuff like this in the future.
@Flávio Vink will do :) thanks!
Hi, are you still working in 3ds Max? What do you think of the new Blender update? Please take a look at it and give it a chance. It really won me over.
Thank you for this, there is a shocking lack of proper benchmarks using CAD programs.
Hundreds of YouTubers benchmarking the same few games gets old.
This video is like an oasis for content creators among all the gaming benchmarks and reviews. Thank you guys =)!
What's missing from this are the AMD Pro WX 9100 (16GB) and Vega Frontier (16GB) cards, which would be near or at the top of the charts.
I really enjoyed watching this despite not currently being in need of a heavy-duty workstation GPU. It's nice to finally see a tech channel covering this side of things in detail. As recommendations for improvements:
1 - Add more cards (especially AMD-branded cards) to your benchmarking list.
2 - It would be nice to see things like average temps and power draw during these benchmarks.
3 - It would be awesome to see reviews for each individual workstation GPU, much like gaming GPUs but with a heavy focus on productivity applications, plus 1 or 2 benchmarks on games known to hammer graphics cards, in order to give a more varied view of what performance a content creator can achieve when wanting to play the occasional game on the weekend.
I have quite a few friends who are in production / content creation and they ask me for hardware recommendations all the time. I never had data I could easily parse through to make solid recommendations, since as a software engineer I personally have no experience with workstation parts. Content like this is going to help those people directly AND people like me whom non-technical friends and family often turn to for recommendations. Really enjoyed this content piece, GN!
My company has just bought new AutoCAD workstations with 1080tis, glad to see the choice to switch from Quadro seems to be justified here. Don't see many benchmarks for professionals, thank you GN and Rob for the vid!
With Premiere and After Effects' piss-poor code, it doesn't matter how fast your system is... the performance will always be piss poor, especially in CC 2019.
Really, really hoping that this video will be updated soon. There's still not enough reliable workstation content so I'd love to see GN get back into this space
Absolutely LOVE the workstation content for design/engineering. 20+ years in the engineering field, and these types of reviews are very rare and difficult to find. Well done to Rob Williams and all who worked on this one. Well done to GN for branching out into the industrial space. Five stars, full marks, etc. etc.
Why not include the Vega Frontier Edition? I would have liked to see how it's extra memory would have played out compared to Vega 64 in these workloads.
Rob doesn't have one and is not local to us.
@@GamersNexus also need to get instinct ;)
@@GamersNexus any chance adding FE results?
@@g10118 Instinct is purely designed for deep learning, like Nvidia's Tesla; it also doesn't have display outputs.
Karan Vora nonsense, it's a general purpose accelerator with plenty of fp32. A lack of display outputs isn't a problem for most engines.
Finally something for us professionals!!!! I like this; we need more of these =)
Workstation Nexus?
Still interesting though.
So it was actually just fine that I bought a regular RX 580 for AutoCAD and some engineering rendering, right?
Since a Quadro or Radeon Pro costs my entire school fee.
Would love to see a 2019 version of this video, with the RTX Titan, Radeon VII and 5700 XT added
It would have been nice to see the WX 9100 and its 16GB of memory.
Vega FE as well would look pretty smushed into the charts.
Also no mention of HBCC.
I was about to say.. where's the Pro WX 9100..?
I'd also like to add something of note, especially for Catia: frame rate only matters for comfort when navigating assemblies. What really matters is the amount of RAM, so you can actually open massive assemblies.
I believe he referenced that, but the reverse is also true - comfortable working is fast working, assuming your buffer is big enough for your use.
10:00
They don't *need* to support OpenCL to run on AMD cards.
They can use their existing CUDA code on the AMD cards.
All they need to do is recompile the code for AMD cards using AMD's compilers.
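In practice that usually means going through AMD's HIP toolchain rather than literally recompiling the .cu files: hipify does a mechanical translation of the CUDA runtime calls, and hipcc builds the result for Radeon under ROCm. Below is a minimal, hypothetical sketch of what a ported kernel ends up looking like (the kernel, names and sizes are just illustrative, not anything from the video):

// Hypothetical port of an existing CUDA saxpy kernel to HIP.
// e.g.  hipify-perl saxpy.cu > saxpy_hip.cpp   (mechanical CUDA -> HIP translation)
//       hipcc saxpy_hip.cpp -o saxpy           (builds for a Radeon GPU under ROCm)
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Device code is unchanged from the CUDA original; only host runtime calls differ.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
    float *dx, *dy;
    hipMalloc(&dx, n * sizeof(float));   // cudaMalloc -> hipMalloc is most of the "port"
    hipMalloc(&dy, n * sizeof(float));
    hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipLaunchKernelGGL(saxpy, dim3((n + 255) / 256), dim3(256), 0, 0, n, 2.0f, dx, dy);
    hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);        // expect 4.0
    hipFree(dx);
    hipFree(dy);
    return 0;
}

So a renderer's "CUDA-only" limitation is more a question of porting and validation effort than a hard wall, which is essentially the commenter's point.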
Can't you make a budget video like this? That would be very helpful for students: 1060, 1050 Ti, RX 460, RX 560, RX 480, RX 580, alongside budget workstation GPUs.
Are you really serious?
Yes and the 1660ti
Yes, we need an entry point into this industry and we don't have much cash to spend: 1660 Ti, dual 1050 Ti, or other budget GPUs.
Depending on your workload, you could consider just using whatever hardware you can get, and something like AWS EC2 (Amazon Web Services Elastic Compute Cloud) to do the actual final rendering. They offer instances with GPUs included, which means you can use a high-end GPU and only pay by the hour (instead of the whole cost up front). The other major services offer this as well (Azure, Google Cloud, etc.), and support Windows-based instances. Without specific numbers in front of me, I'd still say it's likely you could get through college working this way, spending less money than buying the GPU (including paying to power it out of pocket), and waiting less time for the final rendering.
Did Steve catch a cold or something? He sounds different,
That's Rob!
Oh shit sorry Steve. I was joking around there. haha. Great job Rob!
I thought it was NileRed
His hair fell out, sounds a little different now.
After wading through all the gaming benchmarks, I finally get to some content that matters. Thank you.
This is one of the most useful videos that nobody is doing out there. Top notch job from GN.
Hi there. Would love to get an update on this information now that the new RTX and RX cards are out. I'm in two minds as to which one I should get: RTX 3070 8GB or RX 6800 16GB. I use Enscape and V-Ray.
I've watched this over a dozen times and this is by far the best comparison for professional reference I've seen. I've bought a Titan xp for myself and convinced my boss to buy 3 and a friend to buy one based primarily on this video. And they are as impressive as I expected.
Wow sudden new voice. Steve keeping us on our toes with surprises.
Please keep up the great workstation content. This is one of the most informative videos I've found on gpu's for cad and rendering.
I don't get why Quadro cards are so much faster than GeForce cards in some applications. All new Quadro cards (except GP100, GV100) have the same poor 1/32 ratio for fp64 and 1/64 ratio for fp16 relative to their fp32 floating-point performance. The Quadro P6000 and GTX 1080 Ti even share the same GP102 die (with a few CUDA cores disabled on the 1080 Ti). The memory bandwidth on some GeForce cards is even higher than on their Quadro equivalents, and the memory amount is very similar. How can the performance differ so much when the floating-point performance, bandwidth, and chip architecture are basically the same?
To me (as an experienced OpenCL programmer) it seems like these applications slow down consumer cards on purpose.
Do you soft mod consumer GPUs (such as to make them work with workstation drivers)? If so, can that remove at least some of the performance loss?
@@coopergates9680 No, I write scientific simulations running on the GPU, see the demos on my channel. With proper optimization, the only things that should make a difference are floating point performance, memory bandwidth, the chip architecture and, when the application actually requires that much, the amount of memory (google "Roofline model"). The code for Quadro and GeForce cards is exactly the same, there are no special instructions or optimizations for Quadro (at least those with the same GPU dies as their GeForce equivalents). That's why I don't understand how two cards with the same die and very similar specs perform vastly different in some "professional" applications.
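For readers unfamiliar with the roofline model he mentions: it simply bounds attainable throughput by whichever is lower, raw compute or what the memory system can deliver at the kernel's arithmetic intensity $I$ (FLOPs per byte moved),

$$P_{\text{attainable}} = \min\left(P_{\text{peak}},\; I \cdot B_{\text{mem}}\right).$$

Two cards sharing a die have essentially the same $P_{\text{peak}}$ and similar $B_{\text{mem}}$, so a well-optimized kernel should land in roughly the same place on both, which is exactly his point.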
@@ProjectPhysX How much FP64 are you using? AMD seems to be a little better on average in that camp. A rare exception before Gx100 would be the Titan Black (though I heard it needs a special driver trick to perform well in FP64).
@@coopergates9680 Basically none, and if I really need fp64 then the CPU is often faster than a GeForce GPU. fp64 is only required in tasks where there are very different scales to consider, for example a solar system model with moons very close to planets and planets very far apart. Using fp64 coordinates in CAD software is just stupid.
You can do efficient fp64 only on some Tesla cards (C20xx (1/2 ratio), K20-K80 (1/3 ratio), P100, V100 (1/2 ratio)), very few Quadro cards (x000 (1/2), K6000 (1/3), GP100 and GV100 (1/2)) and also some Titans (Kepler based Titans (1/3)*, Titan V (1/2)). All other Nvidia cards have 1/32 the fp64 performance compared to fp32. AMD supports fp64 with a 1/16 ratio on their consumer cards (Polaris and Vega) which is at least better than 1/32.
* The Kepler Titans were actually able to switch from 1/32 to 1/3 in software.
@@ProjectPhysX Tahiti (such as the HD 7970 and FirePro S10000) has a 1/4 ratio of FP64 to FP32. The FirePro W9100 would be a professional Hawaii card that has a 1/2 ratio.
FP32 isn't very reliable when you need inverse trigonometric functions, logarithms, or other lengthy sums that propagate errors.
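To make that last point concrete, here is a minimal C++ sketch (the iteration count and increment are arbitrary illustrations): the same naive running sum done in fp32 and fp64. Each fp32 add rounds to roughly 7 significant digits, so the single-precision total drifts noticeably away from the exact value, while the double-precision total stays correct to the printed precision.

#include <cstdio>

int main() {
    const int n = 10000000;          // 10 million adds of 0.1; exact total is 1,000,000
    float  sum32 = 0.0f;
    double sum64 = 0.0;
    for (int i = 0; i < n; ++i) {
        sum32 += 0.1f;               // rounded to ~7 significant digits every iteration
        sum64 += 0.1;                // ~16 significant digits; error negligible at this scale
    }
    printf("fp32 sum: %.3f\n", sum32);   // noticeably off from 1000000
    printf("fp64 sum: %.3f\n", sum64);   // ~1000000.000
    return 0;
}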
God bless you for videos like this! There's always a shortage of videos on professional video cards.
Well presented analyses. Rob's narrative has great flow. Even though I will never require a render workstation, I thoroughly enjoyed this content. Great change up.
One main thing about viewport performance is the API each application runs on. Autodesk products like 3ds Max and AutoCAD use Direct3D, so it's no surprise that consumer cards have similar or better performance compared to pro cards like Quadro. On the other hand, most CAD apps, like Catia, Siemens NX, etc., run on OpenGL, and professional GPUs usually have an advantage in this field (they have better-optimized drivers; specs like core counts aren't the paramount factor).
Hey, it's that TechGage guy. I subbed to him after you recommended his channel a few months ago.
An interesting channel that I subbed to a long time ago, especially when he does workstation CPUs like Threadripper and i9.
For a budget editing station the Radeon RX 580 is the best solution: decent enough performance at a low cost.
True story bro 👆
Looks like bang-for-your-buck video rendering in Adobe Premiere is the RX 580 :)
@facelessninetytwo It all comes down to who has the bigger d!ck
@facelessninetytwo It's called status
This is by far one of the best benchmark videos out here. Liked, subbed, commented and shared.
Thanks for putting this content together even though the audience for it is quite small. It'll be a great reference for my next system.
Finally, new content related to production.
Much better video than the Linus Tech Tips video on the same topic.
Absolutely the best video card round-up ever, well done!!!!!
Not a topic I have much interest in, but I watch it all anyway. Quite enjoyable content.
Nice video!
Are you planning on doing an updated video like this one?
Am I the only one that is surprised by the pretty solid performance of AMD Radeon RX 580 in some of the programs?
Considering you can get an 8GB card for $200, yeah, it's pretty good in some of the programs.
Me too
God bless this video. I'm not a content creator myself but have long wondered what that market looks like, how the cards relate in different workloads and such.
Badly needed this video! Thank you so much!
Any plans to rerun these tests with Quadro RTX cards?
Finally a work-oriented video! I wish you could make a separate channel to properly cover this kind of content, periodically.
Conclusion: if something doesn't require a workstation card, just go with a gaming card, preferably RTX; if something requires a workstation card, get a Titan V or Xp. The RTX 2070 actually shines in price-to-performance there. AMD is heavily workload dependent: in some tests the WX 8200 is great, while in pure render workloads like Blender, Vega is the best, only to suck in another test.
Kind of wondering why the Titan V is not in this?
Was the testing done in Windows? Because AMD has far better driver support for most of these applications on Linux. I have a modded Vega 56 (basically the same mod as you showed some time ago) and I would be very interested in the results.
Oreo Lamp powermod?
Aca Peric Yes, and it's cooled by an h155i v2 instead of some aircooler crap
Oreo Lamp that seems amazing, how is it doing with temps and performance? is it safe?
I think that's why Blender (open-source software mostly developed on Linux, and I guess OpenCL only) performs quite nicely with Radeon cards.
But to be fair to Nvidia, their proprietary Linux driver is pretty much on par with the Windows driver. Even if they go it alone and can create some installation difficulties, the support is there.
@@acaperic359 It's been running with those mods basically 24/7 since day two, never had a crash due to them; it's fantastic. Power consumption is INSANITY, but the performance is better than a 1080/2070 in most games. Temps are around 45°C at 19°C ambient, so fairly cool ambient.
I cannot understand why there is no Titan V, GP100 or GV100, which are the actual cards, not the Titan Xp....
I can't imagine how much work was put into this video. Not to mention you have almost all the cards.
I like that others at gamers nexus can host videos now 🙂 maybe the audience can support a larger company even though it's more scientific than pop-technology channels with only surface level explanations of concepts
That is our hope! That the audience will support our growth in new sectors. We believe in letting individual editorial staff do what they're good at. No need for me (Steve) to pretend I know every single thing.
@@GamersNexus great saying !!!
That's my idea of a journalist, a very good journalist.
(Not a news-seller.)
@@GamersNexus But you're the Tech Jesus!
YES! Thanks so much for this. Absolutely amazing level of quality here - I frickin love GN! You've helped tremendously in my efforts to build the ultimate WS in 2018! I'll head over to Patreon soon!
No Vega FE, which is a Vega 64 with pro optimizations and 16GB of HBM2. I have a Vega FE and it's running in pro mode with compute optimization enabled in the drivers, and it's insane for 4K pro workloads on the latest drivers.
meehhhe Of You I have an FE too and I can cosign. Vega is also incredibly awesome for gaming. It's a win-win.
Yeah, it seems like people are underestimating the performance of AMD Vega cards for professional workloads. Take a look at the Radeon VII: for gaming it's nothing new, but for rendering it's a good choice.
Have you guys done an updated video with the Quadro RTX cards?
Thanks for benchmarking workstation applications; it's hard to find stuff like this.
Finally a pro-focused GPU comparison!!! It's great! I would only suggest including the Rhinoceros 3D CAD software next time, especially in the viewport comparison. Thanks anyway.
Finally! I've been waiting a long time for you to cover something this important on your channel; it's very rare to find. You should do more videos like this on "Gaming GPUs & Workstation GPUs". Thank you! #GamerNexus
This is an excellent video 👍 very useful information
Kinda curious why the Vega Frontier wasn't used in these tests. It is really nice for both workloads, workstation and gaming.
Awesome... the only things I miss are a Fusion 360 benchmark (which doesn't exist) and DaVinci Resolve.
Lukas Hric Resolve runs OFX plugins best with OpenCL, so Vega. The 1080 Ti does a good job though. Either will be good, but OFX runs better on AMD.
@gamers nexus @techgage I'm a machinist & CNC programmer. At my work we have tried a number of different hardware setups to increase productivity in the software we use. From an i7 7700K with a GTX 1070, to an i5 with a Quadro P1000, to an even lower-spec PC, we have found very little performance difference. I would LOVE to see a build video for a reasonably priced ($2500ish) CAM-specific workstation focusing on performance in GibbsCAM, MasterCAM, Fusion 360, Hypermill, etc.
You guys never disappoint. I wish you had a review of the best professional streaming setups, comparing items like the Blackmagic Studio 4K camera and the accessories Blackmagic sells.
This was a much needed investigation. Thank you!
What I get from your render engine tests is that only Octane's performance scales linearly, compared to Redshift or V-Ray, which do not.
Thanks for this! As an engineering student who games this is really useful and interesting information.
AMEN man no one else is doing this content
Great comparisons and very useful! Thank you so much for taking the time. That was a lot of work you put in!
Thanks for the reviews. It is still useful in 2021 and one of the best comparisons out there. Any plans for an updated, similar comparison?
Waiting for a 2019 version with the new mid-tier cards...!!
What I'm most curious about is live playback performance with above-4K formats (5K-8K). While I rarely deliver projects in 4K, let alone anything above that, I do receive A LOT of 5K and higher footage, because it lets my clients crop and reframe their images. Creating proxies and transcoding takes FOREVER and costs me additional money on my light bill, so I'm looking to upgrade to something like a 1080 Ti or 2080 Ti with 16GB of RAM.
I don't have a need for a workstation, as I mostly game and only casually work on a project. But thank you for the video. I found it very interesting to learn about this use case.
Is there any chance that you guys could do benchmarks of data-analysis tools for certain setups? I'd love to see how things like XGBoost perform on different CPUs, and I imagine a lot of people would be interested in seeing how well some common TensorFlow models run on different GPUs.
Love the video, disappointed not to see VEGA FE though
Hey @gamersnexus, the new OptiX drivers have come into early access! I think it's time for a refresh of this video, as tensor cores are now utilized and render times have been drastically reduced on RTX cards.
I currently have an AMD RX 580. It's an eGPU for my 2018 Mac Mini. I recently learned about the AMD Pro series and I am interested in the WX 7100. It appears that the RX 580 and WX 7100 are practically the same card, apart from viewport performance: the WX 7100 has way better viewport performance, which is important for me because my RX 580 is a bit slow when I zoom in or out in Blender. I can get a WX 7100 for $160, but I'm not sure if I should hold out for a WX 9100.
Fun to see the 2080Ti struggling to beat the RX 580 and sometimes losing to it. :)
Need one more video like this with the new gpus
I had been using an HP all in one with an i7 7700 and an R9 370 to draw and render some designs in CATIA V6, and it worked fine for the stuff we did (never more than about 20 parts in a single assembly at 1080p).
So we see some GeForce and RX cards here. What about all the others from those lineups? Will they scale appropriately, or are these GeForce/RX cards the only cards you can/should use with this software?
Great content! Keep this professional work coming!
Nice video, but I feel like more cards would have been even better. Maybe a revisit of how the RTX cards perform when Navi drops, or so?
Great video. Is the RX 580 the cheapest card?
This is almost complete but the lack of WX 9100 benchmarks is a huge miss for me.
Hi! Given the new lineup of cards currently on the market, I just wanted to ask if you guys can create a new video on workstation GPUs, and what the better options would be at certain price points.
At the beginning of the video there is an RTX 2080 clearly visible in the foreground along with some other pro cards (P6000, Radeon), but I was unable to find the RTX 2080 benchmarked later on, so the shot is misleading. The RTX 2080 is not an RTX 2080 Ti, nor a GTX 1080 Ti.
Why didn't you also compare those results with the Titan V?
How did you do the animations in the chart you displayed for each comparison?
Couldn't you have built Blender 2.79 from source, so as to get support for the RTX cards in testing? I did it for the RTX 2080. Works great.
Great info. It's a pity that the software producers don't publish their own benchmarks. I would've liked to have seen one of the 4096-core Fiji GPUs included, e.g. my R9 Nanos. Also, I run PowerDirector and would've liked to have seen that used in benchmarking, as well as other low-end/YouTube-type editors.