What is the best GPU, the A6000 or the RTX 3090? The truth for rendering: Workstation GPU vs. Gaming GPU

  • Published Nov 18, 2024

COMMENTS • 1K

  • @nickcifarelli8887 · 3 years ago · +238

    I'm a 3ds Max Arch Viz artist with V-Ray and was watching this video, praying that you would not overlook the HUGE importance of VRAM capacity in poly-heavy complex scenes. Thank you for your review. Yes, the A6000 smashed that render by half compared to the 3090, due in large part to its 48GB of VRAM. If the GPU doesn't have the VRAM necessary to load the scene with textures, poly counts, particle simulations, etc., it has to page the textures out to disk, which makes for a far slower workflow. Also, with NVLink you can essentially double your VRAM to 96GB (assuming of course you can afford two A6000s!), and that would allow you to load entire cityscapes, forests, etc.

    HAVING SAID THAT, your point is very valid: for the large majority of content creators out there, on performance per dollar you can get 2 or 3 3090s (at MSRP) for the price of a single A6000. I personally am still using a Titan Xp with 11GB, and it cannot load heavy scenes with carpets and V-Ray fur. I would gladly upgrade to 2x 3090s rather than shell out for an A6000: you can set one as the dedicated interface card and the other for rendering. Nvidia caters to the high, high-end post houses (ILM, Weta Digital, etc.) with the Quadro series, hence the hefty price tag. Yes, the drivers are ISV-certified and the RAM is ECC, but for the average guy out there, or even for a more professional prosumer like myself, an A6000 is a pipe dream. I'd be happy with dual 3090s, tbh. Thank you for the video, and you got yourself a new subscriber.
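    To make the paging point concrete: a minimal Python sketch (not from the video; it assumes the nvidia-ml-py package and an Nvidia driver, and the 30 GB scene estimate is illustrative) that checks free VRAM before committing a heavy scene to a GPU:

      # Sketch: query free VRAM before loading a heavy scene, so the
      # renderer is not forced to page textures out to disk.
      import pynvml

      SCENE_ESTIMATE_GB = 30  # hypothetical footprint: geometry + textures + sims

      pynvml.nvmlInit()
      try:
          for i in range(pynvml.nvmlDeviceGetCount()):
              handle = pynvml.nvmlDeviceGetHandleByIndex(i)
              mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
              free_gb = mem.free / 1024**3
              verdict = "fits in VRAM" if free_gb >= SCENE_ESTIMATE_GB else "will page or fail"
              print(f"GPU {i}: {free_gb:.1f} GB free -> scene {verdict}")
      finally:
          pynvml.nvmlShutdown()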

    • @MediamanStudioServices · 3 years ago · +34

      Thanks, Nick. I totally agree with you. Having worked at ILM, I know that these houses use the workstation cards, but the fact is that even at the A-list studios, not all artists need the pricey workstation GPUs; the bulk of CG work is not that complex: set extensions, some simple 3D additions and FX. For those complex Avatar-type scenes they do need the VRAM in the A6000, and even then all the shots are comped at the end in Nuke, so they use layers.

    • @nickcifarelli8887 · 3 years ago · +9

      @MediamanStudioServices Nuke is one of my favourite programs. Still, exporting all the render passes for Nuke to composite requires a hefty GPU. But I am glad we are on the same page. Well said, sir.

    • @MediamanStudioServices · 3 years ago · +13

      Hi Nick, thanks for your comments. I agree that the A6000 is only for a smaller subset of users; for 70% of all content creators, an RTX 3xxx card is all they need. As you stated, you are using an 11GB card, and the 24GB 3090 would be a great upgrade for you.
      Thanks for watching.

    • @blackgamingstudio5104 · 3 years ago · +3

      NVLink does not double VRAM, is what many people have said. Are you sure NVLink doubles VRAM?

    • @blackgamingstudio5104 · 3 years ago · +3

      Is 96GB of VRAM enough for 4K character creation and 4K interior/exterior design in 3ds Max, Substance Painter, and Redshift?

  • @JetCooper3D · 1 year ago · +21

    I work at Pinewood Studios UK and work for Marvel, Disney, Lucasfilm, etc. We switched to GeForce cards back on Star Wars Ep. 7 and have never looked back. The GeForce cards are stable and good to go, and the saved money can be routed to other hardware. Great advice to all and great video. Subscribed, thank you. (We use the new RTX 4090 in all of our workstations now; 3090s before.)

    • @MediamanStudioServices · 1 year ago · +3

      thanks for sharing your experiences with the channel

    • @The0zx · 1 year ago · +3

      Hi, bro! Do you create 3D models for Marvel and Disney/Lucasfilm movies? Can you tell me about the computer specs you and your team are currently using? I dream of working on 3D models like you, but right now I don't know how complex the 3D models I will make are. I need information about RAM capacity, processor, etc.

    • @goldenheartOh · 1 year ago · +1

      @The0zx Is it still true that Blender 3D is so optimized it can run on a potato? I had a similar dream 20 years ago, and Blender 3D was awesome. And I did have a potato for a PC.
      My point is, I strongly suggest you get a feel for it in Blender before building a PC for it.

    • @dazrelixs · 9 months ago

      But do you guys render locally or on a farm?

  • @CrimsonKing666 · 2 years ago · +8

    Something important besides the price is that GeForce cards are more unstable. I used to do deep learning work on a GeForce 3090, and it was pretty common to see my computer crash or the training stop. I'm using an RTX A5000 now and I've never had that issue since.

  • @wonderwonder9027 · 3 years ago · +5

    I'm a civil engineer. First things first: I'd like to congratulate you on the way you put the video together. Really straight to the point and full of just the important information, instead of wasting time talking about things an average viewer won't understand.
    Second, there is no decent reviewer out there doing tests, at the professional level you do, of computer parts, whether GPUs or CPUs, on engineering tasks. Yes, there are a lot of artists out there who would like to know how fast their artwork will render, but as far as I know many professional workloads are not being tested. I don't want to get too technical, but how do these cards handle:
    - Matlab AI workloads
    - BIM applications like Revit and Robot, and maybe SAP2000, ETAP, and SAFE for structural analysis
    - GIS analysis and clustering
    - Analysis of aerial photos for environmental, hydraulic, and structural purposes
    Yes, there are some 3ds Max, Maya, and video production workloads that can push artists to the limits of what this hardware can and can't do, but the way I see it those cases are rare, and in those rare cases they have big studios and production companies behind them, i.e. they can address Nvidia directly about technical issues. But we engineers reach those limits on a daily basis, and we are the ones who really need to make educated decisions about the hardware we use.
    Thank you for your time reading this.

    • @MediamanStudioServices · 3 years ago · +2

      Hi wonder wonder, thanks for the kind words.
      I worked on a project for the Warner Bros. World theme park in Abu Dhabi and was amazed at how well a lower-end GPU could handle such complex geo in Revit. I wish we had this kind of viewport performance in Maya or Max.
      As for how the RTX 3090 or A6000 will handle civil-engineering-type workloads, well, I could not say; I have zero experience with those types of projects and apps.
      You can check out this channel; maybe he can help:
      ua-cam.com/users/Tech3DWorld
      Thanks for watching my channel.

  • @oscarcampbellhobson · 1 year ago · +4

    Thank you for being blunt and to the point, not babbling about everything nobody cares about

  • @yayandeleon · 2 years ago · +4

    Finally, a more sensible benchmark of actual workflow usage for these cards. I'm sick and tired of those gaming benchmarks that treat gaming as the only actual use for GPUs and complain about the high price tag.

  • @steve55619 · 1 year ago · +12

    Don't forget about AI and ML work. Larger LLMs benefit from more VRAM. Also note how much heat you produce, and the power consumption, with 2x 3090s in NVLink vs. an RTX A6000.

  • @kentharris7427 · 2 years ago · +6

    You can rent the cards, cloud-based, for $1.50 an hour for one card, $6.00 an hour for 4 RTX 6000 cards, or $1,000 per month per card. I personally have the 3090 in my PC, which is good for most applications. If I need raw speed at any given time, I rent the cards.
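    The rent-vs-buy tradeoff above reduces to simple arithmetic; a quick Python sketch using the $1.50/hour figure from this comment and an assumed purchase price:

      # Sketch: hours of rented GPU time before renting costs as much as
      # buying the card outright. The purchase price is an assumption.
      RENT_PER_HOUR = 1.50     # from the comment above
      PURCHASE_PRICE = 4500.0  # assumed A6000-class street price

      break_even_hours = PURCHASE_PRICE / RENT_PER_HOUR
      months = break_even_hours / (8 * 22)  # 8h days, 22 working days/month
      print(f"Break-even after {break_even_hours:.0f} rented hours (~{months:.0f} months)")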

  • @graphguy · 6 months ago · +10

    You said exactly what I wanted to hear.
    I play zero games, but I do a lot of amateur work in Blender 3D and have been perplexed about whether to go with a new RTX or a studio-ready graphics card.
    Thanks!

    • @rahulkamath6984 · 5 months ago

      so what did you actually go with?

    • @graphguy · 5 months ago · +2

      @rahulkamath6984 Haha, decided to go to Italy for 2 weeks, then decide!

    • @rahulkamath6984 · 5 months ago · +1

      @graphguy Hahaha, you don't need any benchmarking to decide that, I guess 😅

    • @noth606 · 5 months ago · +1

      "You said exactly what I wanted to hear." - Eh, you don't seem to realize it, but that is a very bad thing. It means the video isn't only useless to you, it does you a disservice. When you're evaluating options and have a preference, the input you need is from the side opposite yours: if your criteria survive unscathed, you had and still have the right idea; if not, reconsider. If you instead watch things that confirm your preference, you're just reinforcing it, which is worse than doing nothing.

  • @rupasree8055 · 1 year ago · +3

    Thanks for doing this video, we really appreciate it, as there are only a few videos about workstation GPUs.

  • @qkayaman · 2 years ago · +7

    It depends on what you need them for, but for me I need the A6000/A5000, not the RTX 3090. Why? A multi-GPU setup where I need peer-to-peer (P2P) access between all GPUs; memory transfer between GPUs through host memory is a no-go for me. P2P is part of Nvidia GPUDirect, and on the 3090 it is only possible over NVLink (i.e. only between pairs), so if you want a 4-GPU setup and need P2P, forget about it. With the A6000/A5000, P2P is possible over PCIe, which means running 4 of them works. The dual-slot profile also makes them easier to stack (I know a blower-type 3090 exists, but it's hard to find). You may also be interested to know that in Windows, in order to enable P2P over NVLink, SLI mode needs to be enabled in the Nvidia Control Panel. Fun fact: Nvidia disabled this in the latest Windows drivers, so if you want it to work, you need to roll back to a pre-Jan 2021 driver! Linux drivers don't have this issue, though.
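    For readers who want to verify P2P on their own machines, a minimal sketch assuming a CUDA build of PyTorch (this only queries whether each pairing is possible; it does not enable it):

      # Sketch: report which GPU pairs support peer-to-peer (P2P) access.
      # On GeForce, P2P is typically limited to NVLink-bridged pairs;
      # workstation cards can also expose it over PCIe.
      import torch

      n = torch.cuda.device_count()
      for src in range(n):
          for dst in range(n):
              if src != dst:
                  ok = torch.cuda.can_device_access_peer(src, dst)
                  print(f"GPU {src} -> GPU {dst}: P2P {'yes' if ok else 'NO'}")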

  • @yubawang7652 · 2 years ago · +3

    Thank you, sir! Finally I see someone who knows what he's talking about and shows actual production scenes.

  • @yushkovyaroslav · 1 year ago · +3

    Very good video, it really shows what matters.
    Honestly an underrated channel, with a lot more relevant content than some of the "bigger" channels out there.

    • @MediamanStudioServices · 1 year ago · +1

      Thanks, Y Y. I am looking to do some new videos soon. Just need to get the equipment. That is the hard part.

  • @CreativeAudience · 2 years ago · +3

    Thank you for your test, I agree with you. I'm a 3D motion graphics designer and animator. I have been working with C4D and Octane Render for many years, on both Quadro and GeForce. The 3D preview frame-rate performance and rendering of Quadro and GeForce are no different. Quadro is just marketing and product positioning, but the price is too cruel: Quadro only has more RAM, yet it's five times more expensive than GeForce. For me, that's a huge cost. I've tried to argue with other people over the years about Quadro vs. GeForce, but no one believed me, especially the computer sellers and people who are not graphic designers.

  • @rashdanml · 2 years ago · +3

    I think the key point here is that Nvidia only recently started releasing Studio-ready drivers for GeForce cards, as of the 3000 series. It USED to be true that GeForce wasn't suited for workstation usage because of the lack of driver optimizations for the GeForce line.
    The underlying hardware has pretty much always been the same, with differences in the numbers. Weaker workstation GPUs (i.e. with fewer CUDA cores than GeForce) were still preferred for workstation use because the Studio drivers were better optimized for that hardware.

  • @mistrrhappy · 2 years ago · +2

    I'll second the request for the A6000 vs. 4090 comparison! It would be interesting to see the results!

  • @craigvfx · 1 year ago · +17

    Would you do a comparison of the new 4090 vs. the A6000 Ada Lovelace card?

  • @thewizardsofthezoo5376 · 1 year ago · +5

    One thing is the power consumption and the lack of VRAM; whether it takes a couple of minutes or hours more to run is less critical. For LLM fine-tuning, those consumer cards are useless because of the lack of VRAM: it's the size of the VRAM that determines what you can load for training.
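    As a rough illustration of why VRAM caps what you can fine-tune (a standard rule of thumb, not a figure from the video): full fine-tuning with Adam in mixed precision needs roughly 16 bytes per parameter, before activations are even counted.

      # Sketch: ~16 bytes/parameter = fp16 weights (2) + fp16 grads (2)
      # + fp32 master weights (4) + Adam moments (4 + 4); activations extra.
      BYTES_PER_PARAM = 16

      for params_b in (7, 13, 70):  # model size in billions of parameters
          gb = params_b * 1e9 * BYTES_PER_PARAM / 1024**3
          print(f"{params_b}B params: ~{gb:.0f} GB for weights + optimizer state")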

  • @miavsm · 2 years ago · +2

    I like a master like you; nowadays people just count views without knowing the technology's advantages or disadvantages.
    Thank you for sharing your knowledge 🙏🏻

  • @TrueMathSquare · 1 year ago · +4

    I just found your channel and I love it.

  • @awnina7627 · 2 years ago · +1

    I am an architect and your videos are amazing. Please, we need more videos about the motherboard types we need.

  • @jeremiahMndy · 1 year ago · +3

    Keep making these, please. I'm a pro 3D artist and your videos have really helped.

    • @MediamanStudioServices · 1 year ago · +1

      thanks for watching, I hope to make new videos soon. Sorry for the long delay in making content

  • @yushkovyaroslav · 2 years ago · +1

    Thanks for being a guy who actually came out and put the only useful thing out there for comparison: actual rendering numbers.

    • @MediamanStudioServices · 2 years ago

      thanks for watching Y Y. I try to do a good job. Share with your friends to help grow the channel.

  • @wagnerdesouza6512 · 2 years ago · +3

    Very interesting channel, testing hardware with professional software and not with games.
    It would be nice to see GPU tests with Substance Painter.

  • @rajis92 · 3 years ago · +1

    Finally. Someone doing relevant hardware reviews and comparisons for 3D/VFX purposes. That deserves a sub :)
    I've always said this: there's a reason all the render engines and 3D programs that utilise CUDA use GeForce cards for demos and put them first in their compatibility lists, before any Quadro card. When it comes to rendering, the software developers make the software utilise CUDA/RT/Tensor cores; it doesn't care whether they come from a Quadro or a GeForce. It only cares about "how many cores can you give me to compute this task?" A lot of studios that do GPU rendering will use GeForce cards because the cost-to-performance ratio makes far more sense. GPU shortage/price hikes aside, you could buy 2x 3090s for the price of one A6000.
    Quadros are mostly appreciated for:
    1. Rendering a crazy number of polygons and loads of 4K+ textures with loads of UDIMs that are not mipmapped (non-tex or tx files). Even then, mipmapping and proxies are your friends when it comes to memory on the GPU.
    2. CAD-like programs, which utilise Quadro features far more than your standard 3D DCC/painting apps (Maya, Houdini, C4D, Max, Substance, Mari, etc.).
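    The "how many cores can you give me" point is easy to check from code: CUDA reports the same device properties whether the card is branded GeForce or Quadro. A minimal sketch, assuming a CUDA build of PyTorch:

      # Sketch: render engines scale with these numbers (SMs, VRAM,
      # compute capability), not with the GeForce/Quadro branding.
      import torch

      for i in range(torch.cuda.device_count()):
          p = torch.cuda.get_device_properties(i)
          print(f"{p.name}: {p.multi_processor_count} SMs, "
                f"{p.total_memory / 1024**3:.0f} GB VRAM, "
                f"compute capability {p.major}.{p.minor}")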

  • @essa07 · 2 years ago · +4

    A professional comparison… just what I need.

  • @fanimations2363 · 3 years ago · +1

    Something I've been searching for a lot on UA-cam. Great comparison, loved it, thanks!

  • @SteveGrin · 1 year ago · +5

    Speaking from experience, my A4000 outperforms my 3080 in AutoCAD and Revit in two ways. First, the 3080 lags during certain operations, for example "override graphics in view" in Revit or "layers" in CAD. Second, the artifacts you get with the GeForce card when rotating a model are annoying. Every time I get a new workstation, I try the current top-of-the-line GeForce card, and every time I end up back on the workstation card.

    • @animhaxx · 1 year ago · +3

      So you're saying the A4000 is better at viewport handling?

    • @SteveGrin · 1 year ago

      I guess you could say that.

  • @faceless_ghost · 2 years ago · +2

    You're replying to every comment, really great! You're putting a lot of time into it!
    Thank you so much 💗

  • @sameerkadam4956 · 1 year ago · +6

    Nobody is talking about the TDP comparison between workstation and GeForce cards: the cost of running the system, and the power bills, in a commercial setup running workstations or servers 24x7. The RTX 3090's TDP is 350 W, whereas the A4000's TDP is only 140 W.

    • @MediamanStudioServices · 1 year ago · +2

      Hi sameerkadam4956, I agree that power usage is a big topic. I will make a video on this subject. However, using slower GPUs for a render does not necessarily reduce the overall power consumption. It just takes longer to render the frame. However, looking closely at power utilization is a big factor that is overlooked in purchasing GPUs for projects. Thanks for watching and the video topic idea.
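      The reply's point follows from energy = power x time: a lower-TDP card only saves energy per frame if its render time doesn't grow faster than its power drops. A Python sketch with illustrative (not measured) render times:

        # Sketch: energy per frame = board power x render time.
        # Times below are assumptions for illustration only.
        cards = {
            "RTX 3090 (350 W)": (350, 10 * 60),   # assumed 10 min/frame
            "RTX A4000 (140 W)": (140, 28 * 60),  # assumed 28 min/frame
        }
        for name, (watts, seconds) in cards.items():
            print(f"{name}: {watts * seconds / 3600:.0f} Wh per frame")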

  • @armalik11 · 4 months ago · +2

    Excellent video. It gave some vital information about the cards and about how memory is useful in different situations.

  • @scottsturmTWM · 2 years ago · +7

    Hoping for a 4090 vs A6000 video ;-)

    • @eliahr11 · 2 years ago

      Duuude, that's not even a comparison; those GPUs are generationally different. The Axxxx is Ampere-based, whereas the 4xxx series is Ada-based. The 4090 will beat the crap out of the Quadro card immediately.

    • @scottsturmTWM · 2 years ago

      @eliahr11 You do realize a new Ada-based A6000 card is coming out soon, yes?

    • @eliahr11 · 2 years ago

      @scottsturmTWM Didn't know that, ty.

    • @isaacvl95 · 2 years ago

      @scottsturmTWM What would you recommend for CFD simulations, CAD, and some 3D rendering: a used Quadro RTX 4000, any RTX from the 3000 series starting from at least a 3080, or the current A4000?

  • @Royameadow · 1 year ago · +1

    We don't get a lot of testing with the Quadro cards as it is, so it truly is welcome whenever somebody takes the time to compare these products to their GeForce counterparts. The added software compatibility and considerably higher video memory truly do make a difference, and clearance has become ever more important now that the RTX 4090 barely has any dual-slot options compared to the RTX 6000 Ada Generation; many people are beginning to flat-out ditch GeForce because of the lack of smaller options to fit in a smaller case (I've been using Micro ATX since 1999; never have I used a larger form factor).
    In this era, the core audience that will benefit heavily from products such as the RTX 6000 Ada is definitely the AI and deep learning space. I don't know what your history is with memory-intensive AI software such as OpenAI Jukebox, but it would be very welcome to see how much faster music continuations render on it compared to the 4090. Even the Tesla cards hit a major handicap at the 16 and 24 GB thresholds in this particular workload, so jumping to a card with 48 GB or more truly helps the process run smoothly without becoming sluggish or crashing with an out-of-memory error. Jukebox is something I hope more of the techtuber scene will showcase as part of their benchmark suites; it doesn't get a lot of attention outside of a select few, and concrete numbers on how fast it runs under plentiful conditions would show the Quadros' true worth over GeForce. We're still a few generations away from rendering a sixty-second sample in the same time as its length or shorter, and it'll be nice to see which cards get the most out of it until that time ultimately comes. (:

  • @Andbar93 · 1 year ago · +3

    Thanks for the video. I've hardly found comparisons between the Quadro and the RTX cards in a professional environment. I wish you would make comparisons on AI tasks such as image generation.

    • @tanguero2k7 · 1 year ago

      Hi there! Let me save you some time (TL;DR):
      The results in both rendering and AI, given the same prompt, parameters AND SEED, are the same (on both a 6GB RTX 3060 (mobile) and a much faster 24GB Quadro RTX 5000).
      The long version:
      I bought an RTX 3060-based laptop because I would never pay more than 300€-400€ (approx. $300) for an 8GB card for rendering (Blender) and AI work (local implementations of stability-ai and BERT-related workloads). When I get to where I want, I move everything over to an RTX 5000 at my workplace.
      Other than the size of the generated images, I can only say the 5000 (naturally) returns faster: my 6GB 3060 often crashes when attempting to render 4K (Blender) or simply refuses to generate textures/images above 512x512 (Dream Textures in Blender / stability-ai on the shell).
      This, however, might change in the near future due to a recent paper by Nvidia themselves where the model used for ray tracing was said to be below 1 MB (yes, you're reading that right: 1 megabyte). Have a look at the Two Minute Papers channel and see for yourself.
      Oh! Btw, if you'd like to test some workload before buying, let me know. (Edited to add that I also do some occasional photogrammetry work with the free and open-source Meshroom.)
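      To reproduce this comment's "same prompt, parameters AND SEED" check, a minimal sketch assuming the Hugging Face diffusers library (the model id and prompt are illustrative):

        # Sketch: fix the seed via a torch.Generator so repeated runs are
        # reproducible; the commenter reports identical results across GPUs.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        generator = torch.Generator("cuda").manual_seed(1234)  # fixed seed
        image = pipe("an orange BMW, studio lighting",
                     num_inference_steps=30, generator=generator).images[0]
        image.save("seed_1234.png")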

  • @surroundrive · 1 year ago · +3

    Excellent production design: your set, lighting, audio, video, dialogue...liked and sub'd.

  • @pedrorivera1892 · 3 years ago · +1

    Thank you for the video. When doing 3D renderings, the biggest difference I found between the A series and GeForce is temperature.

  • @Hobbes4ever · 3 years ago · +4

    Rich gamers are probably getting the A6000. It's "only" $1K more.

    • @MediamanStudioServices · 3 years ago · +1

      Hi Ya Tub, well, the A6000 does perform pretty well for games; check out this UA-cam video review:
      ua-cam.com/video/1PNka8UjAMk/v-deo.html
      But why pay more if it's the same gaming performance as the 3090? Maybe because the 3090 is hard to get...
      Thanks for watching.

    • @Hobbes4ever · 3 years ago

      @MediamanStudioServices I was being sarcastic. Scalpers want $3K for the 3090, and people were complaining about how expensive the $1200 RTX 2080 Ti was a couple of years ago.
      Thanks for a very informative video!

    • @MediamanStudioServices · 3 years ago

      @Hobbes4ever Ya, I know. Thanks for watching.

  • @E_Clip · 3 years ago · +1

    Memory pooling since the 2080 Tis has been great for production, and I really don't see myself buying a Quadro (or "A series" as they're called now) ever again. The pooled VRAM from 2x 3090s is more than enough for my workloads (mostly ArchVis).
    Great content mate, glad I found you! Keep it up :)

  • @Oldyellowbrick · 2 years ago · +4

    I think the cost difference is pretty insane, BUT I use Octane Render and I am always maxing out VRAM and having to reduce scenes... Not only does it slow you down considerably when you reach the ceiling, but you tend to get a lot of issues, including system crashes, when you reach the 90% mark of VRAM capacity. 48GB would be very welcome in my workflow, but I will wait to see what the 40 series offers.

  • @joaoalexdias · 3 years ago · +2

    Hi, thank you for your review! I'm a 3D character animator using mostly Maya. I worked in the major studios using workstations with both cards you mentioned, and my view is that a Quadro card is more efficient in the viewport and compute processes than a GeForce. Even on my personal rigs with lower-end cards I've noticed that; for example, I had a workstation with a Quadro K2000D with 2GB of VRAM and a laptop 5 years newer with a GTX 1060, and I had better viewport speeds on the workstation. The workstation CPU was a bit better, but that wouldn't justify the difference in performance. You're totally right in that comparison with the A4000; I would definitely choose it over an RTX 3070 or even a 3080. I guess it all comes down to the production usage: if you're going for renders, or even as a 3D generalist, a GeForce might be the better choice budget-wise, but if you're doing things like character animation, VFX, and simulations, I would definitely choose a mid-range Quadro.

    • @MediamanStudioServices · 3 years ago

      Hi Joao Dias, I would not compare a K2000 and a laptop 1060; this is not a good comparison at all. All laptop GPUs are rated very differently than desktop GPUs: an RTX 3080 Laptop is not the same as a desktop 3080; the laptop version of a 3080 is more like a desktop 3060. Laptops just cannot deliver the power required to drive these GPUs.
      But thanks for sharing your comments and watching the channel.

  • @renanmonteirobarbosa8129 · 2 years ago · +6

    Two things: VRAM and NVLink. That's the main difference.

  • @valdisgerasymiak1403 · 2 years ago · +2

    A lot of people use GPUs for machine learning tasks. When I went from a 3070 to a 3090 I got a 3.2x speed increase, mainly because of the VRAM (I increased the batch size while training). So it's worth going from a 3090 to an A6000 if the 2x memory gives 2x speed at lower power usage.
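    The batch-size effect is easy to measure yourself; a toy sketch assuming a CUDA build of PyTorch (the model is deliberately trivial):

      # Sketch: training throughput at growing batch sizes. More VRAM lets
      # you raise the batch size until the GPU is actually saturated.
      import time
      import torch

      model = torch.nn.Sequential(
          torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
      ).cuda()
      opt = torch.optim.SGD(model.parameters(), lr=0.01)
      loss_fn = torch.nn.CrossEntropyLoss()

      for bs in (32, 128, 512, 2048):
          x = torch.randn(bs, 1024, device="cuda")
          y = torch.randint(0, 10, (bs,), device="cuda")
          torch.cuda.synchronize()
          t0 = time.time()
          for _ in range(20):
              opt.zero_grad()
              loss_fn(model(x), y).backward()
              opt.step()
          torch.cuda.synchronize()
          print(f"batch {bs}: {20 * bs / (time.time() - t0):,.0f} samples/s")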

  • @bravestbullfighter · 1 year ago · +5

    Interesting! How about a follow-up with the A6000 vs. the 4090?

  • @tyrannicpuppy · 2 years ago · +1

    Very nice. As someone starting to dip their toes into 3D content creation for fun, but who currently only has the 4GB 1650 Super I could afford when putting the tower together midway through last year, it's nice to see a video addressing the content-creation usability of the 30-series cards. LTT and that ilk make great videos, but they barely give the render benchmarks a mention, and they certainly don't go into this level of detail. I know the new fancier cards are on the horizon, but this has helped convince me to grab a 30-series now and enjoy stable yet powerful rendering compared to what I'm getting now. I can always splurge again in a few years if the newer ones really are so much better, but by then I might be doing far more complex work thanks to a few years of practice and actually need the extra horses.

  • @ronniecoleby · 2 years ago · +3

    Just what I was looking for, thanks so much. I think this solved my dilemma: I'll go for the 3090 and maybe add a second card in the future; with prices tumbling now, that seems more sensible! I'm looking to purchase a workstation for a personal project which uses MetaHumans in Unreal Engine. It would be great to see how the two hold up in the Unreal viewport in a filmmaking (24fps) context. I know this is a more niche use case though! :-)

    • @mikebrown9826 · 2 years ago

      You may want to research more. I believe Unreal can only use one GPU, and I am not sure if NVLink will work with Unreal. But you could render on one GPU while using the second to continue working in the program.

  • @Betoromero22 · 1 year ago · +2

    Finally, someone serious who makes videos for creators!!! Thanks for sharing.

  • @andreasfjellborg1810 · 3 years ago · +4

    Here (Sweden) you can get 3x 3090s for the price of one A6000, and going 3x 3090 would be quite a lot faster than a single A6000...

    • @MediamanStudioServices · 3 years ago

      Hi Andreas, I would love to have 3 RTX 3090s, but it's so hard to find them on the market right now at a good price.
      Thanks for watching.

  • @kitewinds663 · 2 years ago · +2

    Thanks for the video, very helpful! A comparison of SolidWorks assembly and drawing performance between the A6000 and the AMD W6800 would be interesting. The A5500 is also of interest. Thanks again.

  • @wonderwonder9027 · 2 years ago · +3

    Can you please run the following tests on the A6000:
    - Autodesk Revit architecture render
    - Matlab heat-exchange simulation
    - Autodesk Advance Steel stress and displacement calculation
    - Autodesk Robot wind-load simulation and seismic-load calculation
    I know they are out of the scope of this channel, but I, being a civil engineer, have no idea what to expect from this kind of investment if I'm going to make it... And no other channels are nice enough to read through the comments section, let alone give an answer...
    Thanks for your time.

  • @concinnity1240 · 1 year ago

    This video helped me out so much and answered all the questions I've had while I'm trying to build a workstation for CAD. Thank you so much! Excellent video.

  • @technicallyme · 1 year ago · +4

    I got an A4000 for $400 (what a change a year makes), and it solved my problem with the 3070: not enough memory.

  • @TheOnlyAndreySotnikov · 1 year ago · +1

    The benefit of a workstation card is supposedly that it has accelerated double-precision floating-point; gaming cards have their double-precision units sandbagged.
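    Whether your particular cards sandbag FP64 is easy to measure; a sketch assuming a CUDA build of PyTorch (note that GA102-based cards, which include both the 3090 and the A6000, run FP64 at a small fraction of the FP32 rate, so the big FP64 gains are really on data-center parts):

      # Sketch: compare FP32 vs FP64 matmul throughput on the current GPU.
      import time
      import torch

      def matmul_tflops(dtype, n=4096, iters=10):
          a = torch.randn(n, n, device="cuda", dtype=dtype)
          b = torch.randn(n, n, device="cuda", dtype=dtype)
          torch.cuda.synchronize()
          t0 = time.time()
          for _ in range(iters):
              a @ b
          torch.cuda.synchronize()
          return iters * 2 * n**3 / (time.time() - t0) / 1e12

      print(f"FP32: {matmul_tflops(torch.float32):.1f} TFLOPS")
      print(f"FP64: {matmul_tflops(torch.float64):.1f} TFLOPS")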

  • @crckdns · 2 years ago · +3

    The A6000 would be great for training Stable Diffusion models... so the "AI" can finally render realistic hands!

  • @Sitrec · 2 years ago

    Just wanted to say that I really appreciate you and your content. There is so much misinformation when it comes to hardware in the creative space and content like this has been really missing.

  • @oliverleemans6363 · 1 year ago · +5

    Can you test the A5000 against the RTX 3090 or the RTX 4070?

  • @zaydraco · 1 year ago · +2

    For AI workloads, memory and Tensor cores are important due to the size of the data sets. If you have bigger data sets, it becomes more important to have more memory, or a better algorithm to partition the exchange between RAM and GPU RAM. If I remember the CUDA API correctly, you move memory from one to the other explicitly. The thing with normal memory in a program is that it can be paged into swap or virtual pages, but it is not the same with GPUs; at least, it is not automatically handled by the OS as far as I know.
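    The explicit transfer the comment describes looks like this in practice; a minimal sketch assuming a CUDA build of PyTorch:

      # Sketch: data is staged from host RAM to VRAM explicitly; the OS does
      # not transparently swap VRAM the way it pages ordinary process memory.
      import torch

      host_batch = torch.randn(8192, 1024).pin_memory()       # page-locked host RAM
      device_batch = host_batch.to("cuda", non_blocking=True)  # explicit H2D copy
      torch.cuda.synchronize()                                 # wait for the copy
      print(device_batch.device, device_batch.shape)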

  • @NarekAvetisyan · 3 years ago · +3

    Great review, just what I was looking for!
    Q: Can you test the memory pooling of 2 RTX 3090s with NVLink in Blender? I'd really like to see that.

    • @MediamanStudioServices · 3 years ago · +1

      Hi Narek Avetisyan, I would love to, but I do not have the equipment anymore; I only have it for a short time to make the videos.
      Thanks for watching.

  • @loganpenciu7317 · 3 years ago

    You, sir, are speaking my language! I've been looking for a channel that talks about computers in a 3D production studio setting. Subscribed! :)

  • @sideffect8 · 3 years ago · +4

    You should benchmark the 2x NVLinked 3090s vs. the A6000, and graph the memory usage and bandwidth during the pre-render. Curious to know how the complex scenes saturate the extra memory.

    • @MediamanStudioServices · 3 years ago

      Hi Sideffect8, I totally would, but I do not have two 3090s anymore.
      Thanks for watching.

    • @siminc7905 · 3 years ago

      NVLink chops off about 10% of performance, so do the math.

  • @andrewfischer247 · 3 years ago

    This was really well done and I appreciate how you compared several scenarios. Subscribed!

  • @c.pop.echo.28 · 2 years ago · +7

    The only reason I would buy an A6000 GPU is the VRAM capacity. For instance, in Chaos Vantage I can only use 2 GPUs for rendering, and in ArchViz I do need lots of VRAM. So instead of 2x 24GB 3090s, I could go with 2x 48GB A6000s. It's just double the capacity, but the price is like 4x bigger. Nvidia, you greedy, greedy company.

  • @DoronTshuva770 · 2 years ago · +2

    Well, you said it.
    The true test of whether the A6000 "sucks" is a same-price test.
    I think it's impossible to connect the 8-10 3090s (and the eGPU boxes) needed to see how fast they are against a card of the same total price.
    But then I'm pretty sure you would see how much it sucks for its price.

  • @sevdev9844 · 2 years ago · +4

    It's about machine learning, I think. You pay for the VRAM and the CUDA cores.

  • @Birdie_1991 · 3 years ago · +1

    3 minutes in and you brought it down to a level I can understand. "Suck" and "B***S***" caught me off guard, as this video screams professional. It makes me feel more comfortable, as someone who's into tech, music gear, and gaming, to know that you don't need big words to get the point across. More smart people should feel OK not going to the extreme of showing off their level of knowledge with big words. Random thoughts, but this soothes my soul. (Not satire.)

  • @dittofarmers9007 · 2 years ago · +4

    Hi, would you please run the same test, not for rendering but for computation? Try software such as Ansys Mechanical or Abaqus; they have the option to utilize the GPU in the computation. We would really like to know whether GeForce can work as well as Quadro for computing in double-precision mode. Thanks. That would help a lot of engineers out there!

  • @webdesign6776 · 3 years ago

    I always enjoy your videos; in this one I especially liked learning that the Studio-ready drivers have the same bug fixes as the "Quadro" drivers.

  • @Snowaxe3D · 1 year ago · +5

    I worked at a 3D design company, and no one except one person (our senior manager) was using a workstation GPU.

  • @kyleglenn4132 · 2 months ago · +2

    Anyone remember rendering that same exact scene in Blender (the orange BMW) in the early 2000s? I think we need a new test scene lol

  • @EdinGacic · 2 years ago · +4

    Do you have any tests of dual 3090s with NVLink on big scenes, like the one you showed where the doubled VRAM made a huge difference? I am debating whether adding another RTX 3090 FE to my workstation at 750-800 EUR used is better than selling the RTX 3090 and buying the new RTX 4090. I am leaning towards two 3090s, if NVLink actually works and scales well. It would be cool if you could do a test like this :)

    • @thomasrichter1219 · 2 years ago

      I have exactly the same thoughts. Have you already made a decision?

  • @LordShockwave9 · 2 years ago

    Dude, thanks for this! I just managed to get an RTX 3060 for my workstation at home and was thinking about swapping it for an A2000 card, but that card performs worse. You saved me some cash! Thank you!

  • @444haluk · 2 years ago · +3

    The 48GB of VRAM is the thing that makes the difference.

    • @MediamanStudioServices · 2 years ago · +2

      But not that many workflows need the 48GB, so only the last test really made a difference.

  • @oscaroscar9941 · 3 years ago · +1

    Just the things that I want to see. Well done!

  • @JazekFTW · 2 years ago · +7

    All these workstation GPUs with lots of VRAM are intended for AI/DL/quantum workloads, not rendering.

    • @spankosaurus · 2 years ago

      True; most rendering programs, like Octane Render, are optimised for a gaming GPU. Important for people to know this.

  • @AI-xi4jk · 3 years ago · +2

    Deep learning benchmarks also show the same picture: these cards are pretty much equivalent. The main difference appears if you have very large models to train that don't fit in a 3090; otherwise the architecture and specs are basically the same. When you start stacking these cards in a server or workstation, blower-type cards will do fine, but the regular RTX fans will start choking and throttling down the clocks. This can be remedied with liquid cooling to carry the heat away from the cards. Both are great, but I see no point in buying enterprise cards if your workload is fine with the enthusiast counterparts. That said, market prices are so out of whack that the gap between them is much smaller these days.

  • @checkmate8015 · 2 years ago · +5

    Which one is better for me as a game dev and 3D artist?

    • @faradaysinfinity · 2 years ago

      I too am asking this. Intuition says A6000. But still. I mainly use Unreal 5.

    • @mikebrown9826 · 2 years ago

      Get the RTX 3070, as this is the middle-tier GPU, so your dev build needs to play well on this card. And it is still powerful.

    • @mikebrown9826 · 2 years ago · +1

      @faradaysinfinity If you're doing Unreal, then get the 3090 or A6000.

  • @jaylewis2611 · 2 years ago · +1

    Would love to see a comparison between 2x RTX 3090s and 1x A6000, before going 2x 3090s vs. 2x A6000s.

  • @kuhan333 · 2 years ago · +3

    Hi, great video! I have a couple of questions:
    1: What are your thoughts on a GPU for Unreal Engine (content creation / virtual camera / maybe green screen, but no LED wall): RTX 3090 vs. A5000?
    2: There are many makers of the 3090; which one would you recommend, the Founders Edition or other makers? (I was looking at the 3090 FE vs. the 3090 ASUS ROG Strix, but if you have another recommendation please do share.)
    Thanks in advance.

    • @MediamanStudioServices · 2 years ago · +4

      For UE4 I would use the RTX 3090, as UE requires a lot of processing power and the 3090 has more than the A5000.
      As for brands, sorry, I have not done a comparison of the different brands. I have a Gigabyte Turbo and it has been great for me. I also have a Strix 3060 Ti and that is also a good GPU for me. So you will have to find what is best, and available, in your market.
      Thanks for watching.

  • @CEOHankScorpio · 3 years ago · +2

    I swear a decent portion of workstation GPUs are sold to studios that don't need them, by sales reps at Dell, HP, etc. who managed to convince the buyer they're necessary. You don't need ECC memory to sculpt in ZBrush, yet I've worked places where EVERY sculptor had a workstation card at 10x the price.
    A very small number of individuals will buy these and actually need them, and they already know who they are. Almost everyone else who buys an A6000 does so because they're spending someone else's money. Don't get me wrong, if I could convince my boss I need it, I'd definitely take one!

  • @Mr_i_o · 2 years ago · +4

    Perhaps you would consider doing performance per dollar with dual NVLinked 3090s vs. dual A6000s vs. a single A6000?

    • @MediamanStudioServices · 2 years ago · +4

      I would love to, but getting the GPUs to make the video is very hard. I am still looking for the cards, but once I get them, you bet I will make a video.
      Thanks for watching.

  • @nortonf2008 · 2 months ago · +2

    Very well done, friend! Thank you for this content.

  • @Livingston3d · 2 years ago · +5

    Great, sir! Please help me: are these GPUs (Radeon RX 6800 XT, Radeon RX 6600 XT, Radeon RX 7000 XT) good for Maya, 3ds Max, ZBrush, Painter, Blender, etc.? Are they good for modeling, heavy-file viewport navigation, and rendering?

    • @Amarthir · 2 years ago

      They will ;')

    • @Pixel_FX · 1 year ago · +1

      Radeon GPUs render slower in Blender Cycles compared to Nvidia because of OptiX. For every other program they are fine; it's rendering where Radeon cards are slower. I don't know about the upcoming 7000 series' performance. I have a 5700 XT and 3080s; my 5700 XT was faster than a 2070 until OptiX was introduced. After OptiX came to Blender, RTX cards became way faster.

    • @The0zx · 1 year ago

      @Pixel_FX How about the Intel Arc A770 for the same work?

  • @NimaCn · 3 years ago · +1

    Thanks a lot for the comprehensive video. Subbed for the future videos!

  • @supercolor222 · 3 years ago · +2

    Both are good, but in the 3D industry, the more VRAM you have, the more stable the scene and the faster it can handle things like Unreal Engine and Houdini. For me, I would choose two 3090s instead of a single A6000.

    • @MediamanStudioServices · 3 years ago

      Hi Super, two RTX 3090s is a good choice, but Unreal can really only use one of these cards, and I am not sure how NVLink will work with Unreal; let me know how it performs. For Houdini, I would choose an RTX A6000.
      Thanks for watching.

    • @marekkovac7058 · 3 years ago

      If you only need the GPU for Houdini OpenGL stuff, it might be a better choice to go for the A6000, because Houdini cannot use more than one GPU for a single task. Yes, you can use one for OpenCL and another for OpenGL, but two GPUs both doing OpenCL won't work. The 48GB A6000 can also give you hi-res-quality minimal pyro solver results. NVLink won't help in Houdini; TCC mode might work (haven't tested it yet), but that function is only available on Quadro cards, and you'd also need a third GPU to run the display.

    • @MediamanStudioServices · 3 years ago · +1

      @marekkovac7058 Thanks for your input. I agree with you: if someone is doing a single workflow like Houdini or Unreal, I would choose the A6000. The extra VRAM is required for these types of workloads.

  • @albertsitoe7340 · 2 years ago

    This is the best video about the topic! I can finally safely direct people who ask to this awesome Masterclass!

  • @itsaman96 · 1 year ago · +4

    I always install the Studio drivers on my 1660 Super 6GB.

    • @duh4293 · 1 year ago · +3

      Another 1660 super user in the wild. Heck yeah.

  • @emanggitulah4319 · 3 years ago

    Great to see this content. As you said, a lot of other channels say that you have to have a Quadro.

    • @MediamanStudioServices · 3 years ago

      Hi Emang, I have worked in so many studios that do not use "Quadro" GPUs.
      Thanks for watching.

  • @capezonmyback · 2 years ago · +4

    I can't find a recent NVLink 2x 3090 benchmark. One would be really helpful!

    • @mohamedsakka2338 · 1 year ago · +1

      ua-cam.com/video/jw_mnwo9Nag/v-deo.html - you can skip the building part, it might be boring.

  • @rapatouille · 1 year ago

    great comparison presentation!
    love your kitchen studio! very nice concept

  • @arjayjalmaani · 2 years ago · +3

    Would you say that for real-time video rendering using Unreal Engine I would still need an A6000, or would two 3090s do as well, if not slightly better?

    • @MediamanStudioServices · 2 years ago · +3

      I would go with the one A6000, as Unreal does not support multi-GPU rendering.

    • @arjayjalmaani · 2 years ago · +2

      @MediamanStudioServices Good to know. Thanks for the reply!

  • @angelg3986 · 3 years ago · +1

    Some professional workloads do not tolerate the memory errors you get without ECC. So it's not always about raw performance.

    • @sentragrafikakompumedia · 3 years ago · +1

      I personally experienced this when my friend and I ran both real and test projects. My friend's RTX 3070 hit errors over the whole 5 days, and he needed to restart his computer, as discarding the desktop surface buffer and re-creating the allocation from DWM with Ctrl+Shift+WinKey+B didn't help at all, while I, with an RTX A5000 and A4000, didn't experience any problems and worked without any restarts at all. He also got several out-of-GPU-memory notifications and needed to restart the software.
      His 3070 is surely great, but I personally don't want to be bothered with sudden issues.

    • @MediamanStudioServices · 3 years ago · +2

      Hi Angel G, I agree with you that some workloads do require ECC memory, but for most artists using creative software, ECC is not required. It is mostly the CAD and medical industries that need ECC.

  • @ChadKenova · 2 years ago · +3

    Still trying to get my hands on a 4090; hopefully more stock will be in soon. I love watching shootouts like these; workstation cards are very interesting, and expensive. It would be nice to see how a 4090 compares to the 6000.

  • @sainsay · 3 years ago

    I have had the opportunity to work with servers running Quadros for deep learning and machine learning. The one thing I learned is that ECC memory is a blessing if you have long workloads, like a week or several weeks. Yes, you make backups while processing the tasks, but eliminating or seriously reducing crashes is better than having to restart from a backup. Having 48GB is also nice; pooling the memory was/is not always efficient for the workloads I deal with, and having up to 4 GPUs working independently greatly improves my results.
    But for workstations, Quadros are becoming more and more obsolete for a lot of people, and it is great to see this comparison, which also shows that if you scale up the workload, it eventually becomes more than worth it.

    • @MediamanStudioServices · 3 years ago · +1

      thanks for sharing your experiences with the channel. As you have pointed out, some workloads require the workstation GPUs.
      thanks for watching.

  • @epsilonplus3514 · 2 years ago · +3

    Can I use mixed multiple GPUs in Blender? Like a GTX 1070 and a 1050 Ti.

    • @MediamanStudioServices · 2 years ago · +2

      Yes you can, but renders will be limited by the lowest amount of VRAM on any single GPU. This is the limitation of multi-GPU rendering: the render package bundles the scene to fit the VRAM of a single card. So say you have a card with 6GB and one with 12GB: the render package will be set for 6GB, and the 12GB card will leave 6GB of its VRAM unused. I hope you understand my response.
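      A quick way to see that constraint on a mixed-GPU box (a sketch assuming a CUDA build of PyTorch):

        # Sketch: in multi-GPU rendering each card holds a full copy of the
        # scene, so the usable budget is the smallest VRAM among the cards.
        import torch

        sizes = [torch.cuda.get_device_properties(i).total_memory / 1024**3
                 for i in range(torch.cuda.device_count())]
        print("per-GPU VRAM (GB):", [round(s) for s in sizes])
        print(f"effective scene budget: {min(sizes):.0f} GB")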

    • @epsilonplus3514 · 2 years ago · +3

      @MediamanStudioServices Thank you for answering.

  • @DapperProf · 3 years ago

    Thanks for doing this, finally a reviewer who knows and cares about CG productivity!!

  • @MissouriReaper · 1 year ago · +10

    AMD also has workstation GPUs.

  • @javiej · 3 years ago

    This is a great video, thanks. But I miss the real reason why high-end customers choose the A series, which is simply to receive professional support.
    When you buy a workstation from the right provider (even if more expensive) with the professional model, the whole support chain works together to ensure that it delivers: your hardware and software combination is pre-tested in advance, and all of the related teams collaborate to solve potential problems, from the distributor who will show up on site the next day with a replacement unit, to the workstation manufacturer, to Nvidia engineers answering the support line.
    But if a complicated problem happens and you have a GeForce... good luck, guys; you will need it, because you will be on your own.

    • @MediamanStudioServices · 3 years ago · +1

      Hi Javiej, thanks for your comments.
      I will respond with the same response I gave another viewer.
      As I pointed out in the video, all bug fixes for the workstation cards are also released for the GeForce GPUs, and I have heard this argument that you get support many times.
      I have worked in this industry for over 25 years, and studios like ILM, Disney, and DreamWorks would never jeopardize their production on software versions that are not stable. Having worked in these studios' tech departments, I have not once filed a support ticket with Nvidia for GPU bugs in current production software. Yes, we did get support on experimental driver versions for Maya and 3ds Max, but by the time those packages were in production, many other studios had reported the same bugs and Nvidia had solved the issues.
      Also, the Nvidia drivers are so much more stable in the last few years than they were in the past, so the need for Nvidia support is lower.
      So I agree that you get support with a workstation GPU, but the need to use that support is very small, if it is ever used, for the average studio or artist. Unless you are doing complex AI or deep learning, why would you need support from Nvidia?
      Again, I am not hating on workstation GPUs; I am just pointing out that the argument that studios and artists need workstation GPUs is becoming less relevant. There are some use cases where workstation GPUs are required, but for most artists a GeForce GPU is more than capable of delivering very powerful performance for production.
      Thanks for your comments and watching my channel.

    • @javiej · 3 years ago

      @MediamanStudioServices
      And I don't hate the GeForce models; I also use them regularly for production and they are great, with a better price/performance ratio. But when I talk about stability and support, I'm not only talking about bugs and drivers:
      - Professional workstation models normally come with same-day or next-day on-site support included, which has a lot of value in my opinion when it's backed by expert engineers and a responsive team (which unfortunately is not always the case). That's very uncommon with GeForce PCs, and if someone offers it, it will not be at typical GeForce prices or served by truly expert support teams; there are honorable exceptions, but it's more like a lottery.
      - Thermal and power issues are easier to handle with the A series. It is much easier to install 2 or 3 A6000s in a workstation than 2 or 3 3090s. It's still possible with the 3090 depending on the case, but you will encounter more problems with physical space, increased TDP, and cooling. For example, a dual A6000 is feasible in the P620 that you own (as per your other videos), while leaving space and power for extra HBAs; I would not try that with standard 3090s.
      - I also work in the post market, and I can assure you the Nvidia support team is excellent at answering support cases for the Quadro models (now the A series); if you have the right support channel, you will get access to very expert engineers when you need them. But if you have a problem with a GeForce, you only have the forums (or going through 10 layers of support at the Nvidia hotline...).
      - The extra graphics memory has zero value if you don't need it, but if you need it, performance is multiplied several times.
      What I would never recommend is buying an A6000 from a random internet retailer, as you will certainly pay the extra price but not get the support structure that should be expected of this type of product.

  • @pathfinder9602 · 3 years ago · +3

    We need the A6000 not because of the number of RT and CUDA cores:
    First, 24GB of VRAM for virtual production easily reaches its limit with high-quality textures, and using two 3090s in a real-time engine is still hit-and-miss; unless your workflow is entirely offline rendering, you are going to need 48GB of VRAM at some point or another.
    Second, the elephant in the room: genlock. VP needs it to sync with the LED wall and the other nodes and cameras in the system.
    Third, in my country the 3090 is at more than 300% of MSRP, whereas the A6000 is at MSRP.
    My two cents.

    • @MediamanStudioServices · 3 years ago · +1

      Hi Pathfinder, I agree with you.
      Watch my new video:
      ua-cam.com/video/xtN9dX0oWzY/v-deo.html

    • @pathfinder9602 · 3 years ago

      @Maverrick2140 Maybe that's true for an MSRP comparison in normal times. We don't need ECC RAM; we need 48GB of VRAM in a single card, and genlock, that is the point. I am running a small studio, not a corporation.

  • @DerekDavis213 · 2 years ago

    At 5:02, "memory bus speed" should actually be "memory bus width". The speed of the GDDR6 chips might be 19 Gbps, for example.

  • @danydepp8487 · 1 year ago · +4

    You missed testing the software most used by professionals, like AutoCAD, Revit, Photoshop, Illustrator, Premiere, etc.

  • @nigeldawson5960 · 3 years ago

    Thanks for the info. I’ve watched many of your vids and they helped me build the best machine for me. Much appreciated.