That's exactly what I was looking for to upgrade my card for Blender. God bless you, fella. Blender benchmarks surely help, but they are not a direct index of render time. Keep increasing your Excel sheet length for the greater benefit of the many in pursuit of a precise answer 👍
Hey! Glad this was helpful. I am eagerly awaiting whatever new GPUs are around the corner. Thanks for your comment!
@@ContradictionDesign Heya... Finally bought the A770 due to its larger VRAM than the close competition. But I have been using my 'ancient' 3330 CPU, and it struggles with complex scenes; in many a scenario the fps is less than one. Been planning to upgrade. Any suggestions for a CPU purchase (without breaking the bank)?
@@Bholu-y1d You could look at an AMD 5600X or 5900X if you want super cheap but still new, depending on the price range you are after. If you want the latest generation, the 9600X would be great because it has very high clock speeds, and the smaller core count will not be a problem too often.
What is your budget for the new motherboard/CPU/RAM? I can help more if you let me know your price range.
@@ContradictionDesign Yes yes, what you have suggested is definitely under my consideration. Initially I zeroed in on the 14600K & 5900X, but later bumped into the Intel 13th/14th generation issues, so I decided to focus on the Ryzen series. But then I saw the Intel Ultra lineup, then the Ryzen 9700; I mean, it's an infinite loop, one keeps toggling between different choices. Just now I noticed you used a 5950X in your video, which is close to the 5900X. My requirement is nice playback in the viewport, because for rendering I have the A770 (courtesy of your fine video; it was the single deciding factor that pushed me to go for the A770). I learnt that a single core is responsible for smooth playback & multiple cores for fluid simulation, and I need both. Won't go for the 9950X. Budget, you can take as mid-range.
@@Bholu-y1d Thanks for the nice words again! I have seen some really good deals on the 5900X on and off. 12 cores at 4.7 GHz for $350 or so is a really good deal, considering 4 cores was high end a few years ago haha. And the AM4 platform, including motherboard, RAM, etc., will cost quite a bit less too, as AM5 boards are pretty pricey.
5000 series and newer will have fantastic viewport performance. It will be limited by your render engine/GPU most of the time.
Fluid simulations are a different beast. The only way to get super fast playback is to run low-quality previews in the viewport to see the general shape of the fluid. Some addons have a toggle for a viewport version versus the final version for display purposes (see the little script sketch at the end of this reply for the general idea). This also depends on how dense your simulation is. At some point, simulations are just going to be slow no matter what you are running them on.
I would avoid Intel 13,14 gen for now, unless you are an expert, and can deal with bios level problems. They still have something going on there, and I wouldn't know how to prevent the issues. Newer Intel than that might be fine, but I do not know about them much.
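On the fluid preview point above, this is roughly what I mean, as a minimal sketch only: it assumes a Mantaflow domain object named "Domain" carrying the default "Fluid" modifier and the resolution_max property, and those names may differ in your scene or Blender version.

```python
# Minimal sketch (assumptions noted above): swap a Mantaflow domain between a
# coarse preview resolution for fast viewport playback and a dense final one.
# Assumes an object named "Domain" with the default "Fluid" modifier on it.
import bpy

PREVIEW_RES = 64    # coarse divisions, just to see the general shape
FINAL_RES = 256     # dense divisions for the real bake

settings = bpy.data.objects["Domain"].modifiers["Fluid"].domain_settings

def set_preview(preview: bool) -> None:
    """Pick the cheap preview bake or the full-resolution bake."""
    settings.resolution_max = PREVIEW_RES if preview else FINAL_RES
    print(f"Fluid resolution divisions set to {settings.resolution_max}")

set_preview(True)   # bake once at this setting to scrub the motion quickly
```

Bake once at the preview resolution to check the motion, then flip it back and bake the dense version overnight.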
Thanks for putting in all that work in getting the data for the various GPUs, including the A770. Good to see data showing where the Intel Arc GPUs sit in comparison to AMD/Nvidia, especially with Blender workloads, and that if you're getting into Blender then Intel GPUs are definitely worth considering. Looking forward to seeing how the new Intel GPUs perform with Blender in the next few weeks/months.
You are welcome! I am super excited for the Intel GPUs. I hope they get more than 12 GB VRAM though. If they offer a 24 GB card for $350 or so, I will buy them in bulk haha. But I have only heard 12 so far for the B580 I think. We shall see soon!
@@ContradictionDesign .. I don't get Intel's naming convention on the new GPUs, but whatever replaces the A770, if it comes with 24 GB VRAM, should be really good for Blender/content creation workloads. I'm surprised Intel didn't release some Blender/DaVinci Resolve benchmarks for the 2 new cards and the potential uplift in performance.
@@y0jimbb0ttrouble98 Yeah their A770 is an interesting GPU too. It is not slow for its price, and has great VRAM, but the power draw was too high for its performance. If we get a cheap, high VRAM GPU from them with lower power draw, I won't even care how slow it is haha. Have they launched anything officially yet?
@@ContradictionDesign Damn... I forgot how power hungry the A770 is; hopefully Intel has managed to fix that with the new GPUs. Today's just a paper launch for the B580/570, so nothing official yet, but I'd assume the likes of GN, Jay2cents, HWUB etc. will get review samples soon.
@@y0jimbb0ttrouble98 Yeah someday I will get a review sample. That is my YouTube dream haha.
But this launch season will be pretty interesting for sure. I also just bought a 7600 XT 16 GB to test, so be watching for that if you are curious how it measures up vs the Intel A770, for a similar price.
Pls share the spreadsheet with us next time, cause yours is the most helpful one out there
Hey I can do that! I might just share it for free on Patreon. It would be downloadable, but not a live file like a google doc.
I am glad my sheet is helpful!
Can't wait for the Battlemage cards to release
It will be fun! Hoping they are competitive!
Fr fr, I'm sick and tired of Nvidia's monopoly
Looking forward to seeing Intel Battlemage step up to the plate. Not thrilled with the state of NVIDIA as a company (their cards are phenomenal), and AMD is still struggling with drivers and software issues.
Yes I am too. Intel has a great chance to win people over. I really did like this A770 already. If it had just a bit less TDP, it would really be quite good as far as I am concerned.
I appreciate you getting results for 7900 xtx. I actually fit into the category of people that would buy it also for Blender, besides other things.
If you would like me to test something else for you at some point, PM me.
My PC also has a 7950X, 192 GB RAM, and 4 TB of NVMe in it. Should be ready for anything you'll throw at it, within consumer-grade reason, haha!
Nice setup! AM5 has some quirks so far, but is very fast overall for sure.
If you have any interest in testing Cinebench, I would love to see those results, out of curiosity.
And I think the 7900 XTX is a decent value for a 24 GB GPU, especially when gaming is involved.
@@ContradictionDesign Everything out there shows the 7900/RDNA3 cards delivering mediocre performance in Blender and most productivity software. Personally, I think claims of 'good performance' are either lies or fabrications. It's only good for CUDA 'copying' - i.e. ZLUDA.
RDNA3 GPUs have worse rendering speed per dollar than Nvidia 40 series for sure. The value argument for them is that people who dabble in Blender a little, but mostly game on their PC, can still use Blender just fine. So AMD is good for games, meh for Blender relatively.
@@ContradictionDesign I've been told that argument before. But the fact remains: the only people who told me that the 7900 XTX is good in Blender (and that HIP-RT works, according to them) are the guy who replied to you, 'moravianlion', and your friend (assuming his results are accurate). If one only dabbles in Blender and games, then why not just use a 4070 Ti Super or a used 4080? They are competitive in games and have better performance in all productivity use. I want AMD to be better, but they seem to focus on gaming - their GPUs should be a lot cheaper, then.
@@danp2306 The best argument for the 7900 XTX in productivity is that you get 24 GB VRAM for $1k. The only way to do that with Nvidia is at least $1.6k for the 4090. So at the end of the day, people will buy AMD if they lean that way anyway.
But most people don't really need to optimize hardware performance. If you aren't a business contending with profit margin concerns, then your hardware is not too critical. Could one thing be faster than another? Yeah. For gamers and dabblers though, there is not really any loss of income from rendering or gaming a little slower.
I do agree that AMD GPUs could be a little cheaper, and I would buy them in droves if they were. I think they will focus on low and mid tier cards for the next set of GPUs, and those might fill this need. We will see in 4-6 months I expect.
The lone monk scene has a lot more going on in the compositor than other scenes, and that work can run on the CPU or GPU with either OpenCL or OpenGL depending on settings and version. Something worth looking into; maybe Intel's OpenCL/GL performance is better than the other cards'?
Oh that is very interesting! I have never played with those settings at all, so I did not know this! I would not be surprised if Intel was really good at specific things like that, but they don't advertise any of it haha. It is a very interesting theory.
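For anyone who wants to poke at the same thing, here is a rough sketch of where those device settings live from Python. The compositor_device property is an assumption on my part and only exists in newer Blender builds; older versions picked the compositing backend internally.

```python
# Rough sketch: print which devices Cycles is set to use, and (on newer Blender
# versions) which device the compositor targets. Run from Blender's scripting tab.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.get_devices()  # refresh the device list
print("Cycles backend:", prefs.compute_device_type)  # e.g. ONEAPI, HIP, OPTIX, CUDA
for dev in prefs.devices:
    print(f"  {dev.type:8} {dev.name}  enabled={dev.use}")

render = bpy.context.scene.render
if hasattr(render, "compositor_device"):  # assumption: present only in newer builds
    print("Compositor device:", render.compositor_device)  # 'CPU' or 'GPU'
else:
    print("This Blender version has no explicit compositor device setting.")
```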
Will its performance surpass the 3060 as they update the drivers? The 16 GB of VRAM is very nice.
The A770 will probably not get any faster from drivers going forward. It is a decent GPU, but I believe it won't change much more now.
I am planning to get this card for around $270-300 in India during a sale, for video editing with Premiere, DaVinci, After Effects, and some VFX. Would you go for this card instead of an RTX 3060 or 4060? Or should I go for something more expensive for professional workloads? Budget is tight and $300 is the max I can go!
Hey, greetings from Russia. I'm also trying to decide if I should get an Arc A770 or a 4060. From what I read, the 3060 is not a good option anymore since that generation is slowly vanishing. The 4060 has way lower power consumption and in general is optimized way better.
Talking about Premiere Pro, they say version 24 was optimized well for Intel and works/renders way faster. Also, Intel supports the AV1 codec, which is better than H.264.
Talking about DaVinci, it doesn't seem to be optimized for Intel, so the 4060 would probably show better results, but 16 GB VRAM is a big advantage for DaVinci. In jobs like noise reduction, the more VRAM the better (that's why many people preferred the 3060 12 GB over the 4060 8 GB). In 3D design programs the 4060 also shows better results (but newer versions will probably improve compatibility).
Intel seems kind of raw to me, and I'm not sure if I want to get into all the trouble with drivers when I can just work on a silent, cool, and problem-free GPU. So I'm still thinking...
Yes, software support must be your first checkpoint. I know Intel still is not compatible with some software. I do like the Arc A770; it has a great value proposition. The 4060 would be great if not for the 8 GB VRAM. It is a really hard decision at that price point for sure. I will try to help however I can, but they really just have the VRAM paywalled at that price, unfortunately.
If your software supports the Arc, it is a good value. But I do not use enough different software to know for sure, especially with editing software and After Effects. Their drivers are much better now, but some programs are not supported at all. For $300 it is tough because the VRAM is basically paywalled. Nvidia knows you want 16 GB, and they want you to spend the extra $100 to get it.
@@ContradictionDesign thanks
Did you try out the Open Image Denoise GPU option in Blender 4.1?
You know, I will test it out specifically. I use it for renders all the time with other GPUs too, but it will be interesting to pay more attention to it than usual.
I wonder what the increase in performance of the new Intel cards will be.
No Battlemage performance data has ever leaked.
Hopefully the new Intel GPUs will be a huge surprise. This one is not even that bad, if not for the power draw. We shall see!
Well maybe it will be a big, good surprise for everyone!
@@ContradictionDesign not really, though
since the 6800 XT and A770 have identical die sizes
but A770 gaming performance does seem low
True, but the A770 is quite cheap, relatively. I got mine new last year for under $300. It is an OK value, but you are right, not quite what it should be.
And yeah, Intel's gaming performance was a bit behind in this generation, but I have not tested for gaming performance here on the channel. So keep in mind my impressions of GPUs are nearly entirely 3D render related.
Here's another bull 0_0 I came for the RTX 3060 video, very interesting data comparison
Thanks, glad you found it useful!
Can you help me figure it out? I have an RTX 4070, which has 12 GB VRAM of course. I am working on a big project in Blender, and the problem right now is that I am really low on VRAM. With my current budget I am looking to buy an RX 7900 XTX with 24 GB. I don't know if that is a good choice or not, because I am worried that the 7900 XTX is not worth the upgrade, or will have many problems with 3D like some information I have found suggests.
Many thanks ❤
For many apps, the 7900 XTX will work just fine. It is the cheapest way to get 24 GB VRAM without getting old pro cards or a used 3090. It will compete with or beat the 3090 in rendering speed as well.
I can't speak for too much other software, but Blender works fine with current AMD GPUs. I have heard Unreal Engine makes better use of them as well. Video editors should work fine with them, but I have not tested AMD cards for video editing.
Hope this helps!
@@ContradictionDesign Used my 7900 XTX with Resolve a few times, sometimes with quite a demanding workflow. It worked fantastically and exports much faster and more power-efficiently than my 7950X.
Should I go for the A770 or the RTX 3060?
My main purposes are 3D animation (only Blender), Premiere Pro, and After Effects.
I think your biggest concern will be A770 support in Premiere and After Effects. I use DaVinci myself, so I do not have any testing or experience with Premiere. I would suggest looking up A770 support for those applications.
You have seen the Blender speed comparison, so you have that. The A770 does also have the extra VRAM, which can be really nice.
Also, does your PC support Resizable BAR? If not, the A770 will suffer a major speed loss.
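If you are not sure whether it is available, here is a rough way to check on Linux. It is only a sketch that shells out to lspci, and the exact capability wording can vary by kernel and driver; on Windows, tools like GPU-Z can show the same thing.

```python
# Rough sketch: look for a Resizable BAR capability on any PCI device.
# Linux only; lspci may need root to print full capability details.
import subprocess

def rebar_lines() -> list[str]:
    out = subprocess.run(
        ["lspci", "-vvv"], capture_output=True, text=True, check=False
    ).stdout
    return [line.strip() for line in out.splitlines() if "Resizable BAR" in line]

matches = rebar_lines()
if matches:
    print("Resizable BAR capability reported:")
    for line in matches:
        print(" ", line)
else:
    print("No Resizable BAR capability found (or run lspci as root).")
```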
@@ContradictionDesign I am thinking of pairing my 13600K (I already have this CPU) with the Arc A770. For the 13600K, obviously I have to get a Z-series motherboard, so I think I will have Resizable BAR support.
@@fardinzaman570 Ok yeah, you should have ReBAR support. As long as your apps have good compatibility with the A770, it may be a good fit for you. The A770 also has AV1 encoding support, which the RTX 3060 does not have.
@@fardinzaman570 You'll do well with editing because Intel has an AV1 encoder, but probably wait for Battlemage.
This is probably a great answer right now. Hoping Battlemage is just fantastic. The A770 is decent, but I have not tried editing with it yet.
Did you also run Cinebench as well?
Unfortunately, Intel discrete GPUs do not appear to be supported in Cinebench 2024 at this time. So that is annoying. I just went and tried to confirm.
Can we double this GPU, like SLI, for Blender and Unreal Engine?
They don't link together, but many apps can make use of both GPUs, including Blender and possibly Unreal Engine.
When are you going to test a 7900 XTX so you can verify your friend's results? ;-) Also, I would suggest an extensive comparison of Nvidia vs AMD, e.g. CUDA + OptiX vs ZLUDA, HIP, and HIP-RT, so people can see the improvements or regressions (whatever they may be). IMHO, the 7900 XT or XTX are the only AMD cards worth considering for Blender, and even then I'm not sure if they are worthwhile, given the 4070 series (all of them) are probably better performers. But there are some situations in which people will pick an AMD GPU: if they also game, if they use Linux, or if they want more VRAM, since the Nvidia GPUs with the max VRAM in consumer cards are the 3090s and 4090s, right? For the above, I'd only compare a 4070 Ti Super and a 7900 XTX, to make it simpler, and those GPUs *supposedly* are close in price?
Well the 7900 XTX is costly enough that I don't really want to buy it. My uses are only for rendering, so it is likely a worse deal for me than other options. And it is better for people who also game on it, since the 7900 XTX is great for gaming. If I can work something out, I will get one. Not sure when.
As far as the history of the render engine/API changes, that could be a really fun test! I can go back to Blender 2.79 and work my way up to 4.1. Thank you for that idea!
Yes, the 3090 and the 4090 are the only 24 GB consumer GPUs from Nvidia currently. There are pro options, but they are way too expensive for my budget. So that is one point in favor of the 7900 XTX, but the speed being below what I would hope makes it a hard buy for 3D uses only.
You know what, though, I am wondering if the 7900 GRE is a better value. You can get those for $550, which would make them a relatively cheap 16 GB card, and they are just over half the price of the XTX.
@@ContradictionDesign Perhaps, but research that card's specs: it might be too close to a 7800 XT*, and thus the performance results might reflect that if you test that one? *Edit: I guess I should describe it another way: it's a 'cut down' 7900 XT. Anyway, look into it; if it's only slightly better than the 7800 XT score you received, you might not be too impressed. :)
@@danp2306 ohhhh that is a really good point! I will watch for a good way for me to get a 7900 XTX, and we'll see what happens. I have a couple of servers coming soon, so I will need to get those going. Then I'll look for the next ideas. Thanks for your comments!
@@ContradictionDesign Just a suggestion: you might even be able to find them cheaper on the used market? IMHO, it's the only AMD GPU worth considering for Blender, at least until the 8800 XT is released?
I heard at the end of the video that you're getting into server CPUs. Those wouldn't happen to be Intel Xeon Scalable, the LGA 3647 socket?
I got an old Lenovo P720. I'm an unsuccessful programmer, the current market is really hard, so I got it to learn how to make virtual machines and Kubernetes clusters and stuff like that. Need a lot of RAM and cores for that.
Trying out the computer in Blender, I'm kind of disappointed. I'm running it on Ubuntu 20.04, and I have a Xeon 6138, 20 cores, 2.0 GHz. It's definitely faster at rendering than my Windows laptop's Ryzen 5 5600H (4.2 GHz, 6 cores), but not twice as fast. The Xeon seems to be about 5 times slower than my 2060 12 GB. At least these Lenovo P720s are surprisingly quiet.
I am excited for the 50 series though. Going to save up a pretty penny to get a 5080. 'Till then I guess I'll be doing modeling and single frame renders.
First, sorry to hear about work. I hope it goes the right way for you soon.
I just got my first two servers this week. I got a Dell R930 and an R730XD. The 730XD is for storage; the 930 is for CPU/GPU rendering and general learning. Once I know what I am doing, I will design my stack. The R930 I got has quad 24-core E7-8890 v4s, for 96 cores total, and 256 GB RAM. I will benchmark these two machines soon and have plenty of server/render farm content from them.
I am also excited for the 50 series. The RTX 5090 should come with 1.5 or 2 TB/s of memory bandwidth and 50% more cores, at a higher IPC and clock speed. Should be really crazy to see what it can do. Also, they have teased a "hardware denoiser", so I will be watching the fine print to see what that could be.
I look forward to the server stuff. Maybe when I get educated a little bit here soon, we can learn from each other!
@@ContradictionDesign Thanks, I hope work goes well for me too! lol. I'm excited to hear about your server setup; seems like your rendering server will be a very powerful computer. I have heard of companies buying servers, loading them up with GPUs, and then making virtual machines with virtual GPUs from them. Presumably employees SSH into their virtual machine instance from a laptop and then use GPU power from anywhere in the world with an internet connection. It could be an interesting setup to try.
I've heard people say that certain things can give diminishing returns, like rendering with 4k samples or caustic lighting. Maybe it's 'diminishing returns' but only on older GPUs. What I really like about Blender is that it has a culture of making tutorials; I'd love to make one someday. I like your channel, it's very helpful to see how different hardware affects rendering, thank you for going to all this effort.
@@danilshcherbakov940 Well I am glad you found me. Yes I think remote GPUs are a cool idea. I definitely need to get fiber Internet before I try what I really want to try, which is renting out workstations over the Internet. People could just sign in and run Blender or whatever they want on a VM. That and just normal rendering for clients.
Should be a lot of fun. I got my Linux installer media set up, now I just need some time to get the thing configured
Hey, how are you doing? I am thinking of starting with Unreal Engine 5, so can you tell me whether the RX 7800 XT or the 4070 Super is better?
Hey! I am good. Hope you are too!
I do not have test data for unreal engine, but people have told me that AMD is more competitive in Unreal. But I can not back that claim up.
This is something I would like to test more, but I need to learn unreal first haha.
@@ContradictionDesign Thanks for telling me. By the way, why are you not uploading videos?
@@Mukeshjoshi12-n8r I am really busy at work, and have been playing more video games than usual. I will possibly start posting on my gaming channel soon btw.
But basically just really busy and I have not been prioritizing making videos for a bit.
I will get back in the swing soon, I am just trying to balance many things now.
@@ContradictionDesign ohh I hope you start your gaming channel soon
It exists, but I have not posted in years haha. I want to start streaming on there when I am between videos over here.
Hi, can Blender detect two Arc A770 cards?
It should absolutely have no problem seeing both, but I can't prove it for sure
@@ContradictionDesign Thanks :D
@@antonioluciano9693 Don't do that; it will only use one, as nothing optimizes for GPUs working together
@@WhipplesMemes But Blender can use multiple graphics cards
Blender will use multiple GPUs, but they are not used efficiently together in one Blender instance. You get full speed if each GPU runs in a separate instance. It is annoying, but that is just how it is for now. And sorry for the late reply, but YouTube Studio is not helpful with telling me about new replies to replies.
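For anyone wondering what the "one instance per GPU" trick looks like in practice, here is a rough sketch. The blend file name, frame ranges, and the ONEAPI backend (for Arc cards) are assumptions on my part; swap in CUDA/OPTIX/HIP for other vendors and adjust for your own scene.

```python
# Rough sketch: launch one background Blender per GPU, each rendering half of
# the animation. The file name, frame split, and ONEAPI backend are placeholders.
import os
import subprocess

BLEND_FILE = "scene.blend"  # hypothetical project file

# Runs inside each Blender instance: pick the Cycles backend, then enable only
# the GPU whose index arrives via the GPU_INDEX environment variable.
SETUP_EXPR = (
    "import bpy, os; "
    "prefs = bpy.context.preferences.addons['cycles'].preferences; "
    "prefs.compute_device_type = 'ONEAPI'; "
    "prefs.get_devices(); "
    "idx = int(os.environ['GPU_INDEX']); "
    "gpus = [d for d in prefs.devices if d.type != 'CPU']; "
    "[setattr(d, 'use', i == idx) for i, d in enumerate(gpus)]; "
    "bpy.context.scene.cycles.device = 'GPU'"
)

jobs = []
for gpu_index, (start, end) in enumerate([(1, 125), (126, 250)]):
    env = dict(os.environ, GPU_INDEX=str(gpu_index))
    jobs.append(subprocess.Popen(
        ["blender", "-b", BLEND_FILE,
         "--python-expr", SETUP_EXPR,
         "-s", str(start), "-e", str(end), "-a"],
        env=env,
    ))

for job in jobs:
    job.wait()  # both instances render in parallel, one card each
```

Each instance only sees one enabled device, so both cards stay busy instead of sharing one Blender session.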
So it's not worth it? Even when using an Intel CPU with Deep Link tech + dual Arc A770s?
I would really like to know, because I'm still a bit sceptical when it comes to 3D usage with AMD products. I also avoid NVIDIA because of their shitty price/performance behavior.
Lastly, I'm about to rewipe my entire setup in order to teach myself some PC-related jobs and learn some new stuff, especially using my hobby as a beneficial opportunity with UE5, Blender, FL Studio, Adobe, and so on.
I am not able to test Intel deep link currently, since I do not own any of the supported CPUs. I imagine you would get some extra performance with it in many apps.
You need to enable Resizable BAR too, or the ARC GPU will run slower than it should.
It always comes down to what you're willing to pay for.
I'm running a 7900 XTX with many non-gaming apps and I'm very pleased. I assume it will be a similar case with all the 7000 series.
There are plenty of professional apps that have AMD covered. And worst case scenario, just patch them with ZLUDA.
In your case, it'd probably make more sense to go with Intel. I hope they have fixed all their kinks already.
Looking forward to the December 2024 retest 🧐.
People have asked some really good questions lately, and I have new test ideas! Plus I can revisit whatever hardware people want to see. A770, 7800 XT, and 4070 retests on Blender 4.3 might be good.
Can you test the RX 7900 GRE?
I already have results for a 7800 xt, and the 7900 GRE is only a little faster. The problem is that it is not worth me buying one because I don't have uses for them after I run the tests. The AMD GPUs are slower at the same price vs Nvidia for Blender at least. And since I don't need a GPU for gaming, the AMD mid models are not super worthwhile for me. Plus new generations are launching this winter, so I'd like to prepare for those.
If I had a buyer lined up to take it from me after I test it, I could maybe do it. Like, some of my friends will need upgrades eventually. They might help. But I don't want to take losses on these, since I don't make much from YouTube yet.
I would be more likely to test 7900 XTX, but I already have some friends helping me with those results.
Hope that all made sense.
@@ContradictionDesign Sure, I appreciate all your efforts and understand your perspective. However, is it worth spending an additional $200 (in my country) to get the 4070 Ti Super instead of the 7900 GRE? Is the extra amount worth it, especially since I am a beginner?
@@Tamiui Ohh I see. The 4070 Ti Super will be quite a bit faster for 3D work than the 7900 GRE, which is closer to a 7800 XT. But for a beginner, the render speed may not be a huge factor for you yet. If you really expect to do a lot of rendering soon, it can save significant time to invest more in the GPU. The 4070 Ti Super is much faster for sure.
However, keep in mind that you could also just render overnight, and there are plenty of hours when you don't use your PC. So for final renders you need lots of speed, but only every so often.
It is really up to you, based on your financial situation. I think the GRE is a good choice because of its 16 GB VRAM. That is a good amount for 3D work. I also really like the 4070, but 12 GB is a bit more restrictive for scene complexity.
@@ContradictionDesign Thank you 🙏🙏 I hope that God will guide you to Islam, my brother, because I truly wish you all the best, and I will pray to God that you find always the best in your life 💕💕💕
Thank you! That is a very kind gesture and I appreciate your prayers for me.
Which one would you prefer, the A770 or the 3060? (for Blender)
@@KhalidEnterprise Well, they are often pretty close in speed, I think. So if you can deal with the higher power draw, the A770, because it has more VRAM. Just make sure you have Resizable BAR support on your PC.
Do you earn Render crypto tokens?
No, but I wish I had filled out the form to join up a couple of years ago.
It's a 2060 repackaged with upscaling AI, strictly for gaming
They're fucking geniuses over at Intel
Well 16 GB VRAM is a really strong argument for these, for budget rendering. Cheap cards, not efficient power wise, but quick enough for rendering.
Intel B580 24 GB would be an absolute take my money product for me. We'll see if they make one or not.
@@ContradictionDesign I agree these cards could hold their own (plus I didn't know they had that much VRAM, holy).. I was making fun of their marketing because they didn't highlight benchmarks in other applications. TY for your service 👊
@@johnnyplank3964 Yeah nobody really talks about productivity tasks, except they say "great for all your productivity software" and they leave it at that.
I always find it interesting to see how they try to sell stuff.
Yeah, the B580 will have 12 GB VRAM to start, and I am just hoping they will launch a special edition 24 GB version. But we'll see!