Note: I didn't refresh the GPUBusy metrics so they can be disregarded in this video.
That DLSS 2.0 update a while back was extremely crucial
Yeah, indeed. DLSS 1.0 was not so hot, but 2.0 really does a good job.
nice video mate! keep posting more)
Thank you mate, appreciate it 🙏
I believe Alan Wake also uses software RT like Wukong/other UE5 games which is why the difference between base and Path Tracing isn't as much as Cyberpunk. There are also updated mods now for Cyberpunk Path Tracing which improve the performance a lot. Everyone not on a 4090 should definitely try using them. The older versions looked like they made the graphics worse to optimize the Path Tracing, but the new versions seem to be improved in that regard too. Would be nice to have similar mods for other games too.
You're right, I forgot it also uses some software ray tracing.
The Cyberpunk mods really make a big difference indeed. On the 4070, path tracing at 1440p wasn't really usable for me unless I used DLSS Performance, and at that point the hit to image quality isn't worth it to me. But with the mods it stays above 60 fps with DLSS Quality, which is then a good base for frame generation.
Very interesting topic, I enjoyed it. Something that I've really wanted to test since Turing shipped with Tensor cores was some way to compare their efficiency and/or strength from gen to gen. As in, how do the 544 1st Gen Tensor cores in the 2080 Ti compare to the 272 2nd Gen Tensor cores in the 3080 to the 136 3rd Gen Tensor cores in the 4060 Ti?
If each gen has been doubling how "good" the Tensor cores are (whatever that means), then they should all get the same uplift from Tensor use cases, proportional to the base frame rate. Or would they? This is why I've wanted to test this kind of thing!
Then after that I'd want to test GPUs that have roughly the same number of Tensor cores but across the various generations, something like how the 2060 Super with 272 1st Gen Tensor cores compares to the 272 2nd Gen Tensor cores in the 3080 and the 264 3rd Gen Tensor cores in the 4070 Ti Super.
Of course it would get many times more complicated once DLSS Frame Gen is thrown in, but even seeing somebody who has the hardware test the DLSS Super Resolution differences would be enlightening. Thanks for the unique topics and happy benching!
Edit - Before the "Um, actually" crowd arrives, I know Volta Titan V has 1st Gen Tensor cores mixed with Pascal. I am referring to consumer GeForce availability.
Thanks for showing interest here. I like testing silly stuff like this. And I wasn't expecting Cyberpunk's massive uplift with DLSS Quality. It makes sense if you take into consideration the massive pixel counts in bounce lighting with path tracing enabled, and that DLSS applies upscaling to all of that too. As for your topic, that'll actually make for a very interesting watch. Haven't come across a video yet testing the differences between the different Tensor core counts. And what would it actually do for gaming and upscaling? Is the overhead less with more or better Tensor cores, meaning DLSS increases the framerate more on newer gen Tensor cores? I actually don't know... Would be interesting to find out.
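Rough sketch of how I'd picture testing that overhead question (my assumption of the method, not anything tested in the video): render natively at the DLSS internal resolution with no upscale pass, then with DLSS enabled, and treat the frame-time delta as the Tensor core cost. All numbers below are made-up placeholders.
```python
# Sketch of the proposed Tensor core overhead test (an assumption, not the
# video's method): render natively at the DLSS internal resolution, then
# with DLSS enabled, and treat the frame-time delta as the upscaler's
# per-frame cost on that GPU.
def upscaler_cost_ms(fps_native_at_internal_res: float, fps_dlss: float) -> float:
    """Approximate per-frame DLSS cost in milliseconds."""
    return 1000.0 / fps_dlss - 1000.0 / fps_native_at_internal_res

# Placeholder numbers purely for illustration (not real benchmark data).
# Gen labels follow the consumer-GeForce numbering used above.
for gpu, native_fps, dlss_fps in [
    ("2080 Ti (1st gen)", 90.0, 82.0),
    ("3080 (2nd gen)", 120.0, 113.0),
    ("4070 Ti Super (3rd gen)", 140.0, 135.0),
]:
    print(f"{gpu}: ~{upscaler_cost_ms(native_fps, dlss_fps):.2f} ms upscale cost")
```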
So, Cyberpunk 2077 is still the MOST PT demanding game after all.
I'm a bit surprised.
Good idea.
Yeah, I was also surprised, but it makes sense if you compare the amount of light sources and reflective surfaces versus the other games. Cyberpunk has tons of lights, billboards, car headlights, etc. as light sources, and then also cars, windows, puddles, water, etc. as reflective surfaces. The other games have a lot less of those. But with the other two being newer, I actually expected them to be heavier.
This makes me wonder what settings Black Myth Wukong and Alan Wake 2 use. I know CP 2077 uses 2 rays and 2 bounces. Alan Wake 2 does 3 bounces at max settings, but I am unsure if it's 1 or 2 rays. Couldn't find any numbers on Black Myth Wukong. My guess is that CP 2077 has the most rays, and that is why it scales so well with lowering render resolution, since you are then calculating fewer rays. I could be wrong, but one way to test it would be lowering the rays in CP 2077 with a mod, then checking what the performance differences are at different resolutions.
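If those ray counts are per pixel, the scaling guess can be sanity-checked with quick arithmetic. A back-of-envelope sketch, assuming the 2 rays / 2 bounces above and DLSS Quality's usual 0.667x axis scale (and as far as I recall, the path tracing mods tweak keys like RayNumber and BounceNumber in an engine ini, but don't quote me on the exact names):
```python
# Back-of-envelope: per-pixel rays scale with render resolution, so the
# ray budget shrinks with the square of the resolution scale. Assumes
# every ray survives all its bounces (an upper bound, not a real model).
def ray_segments_per_frame(width: int, height: int, rays: int, bounces: int) -> int:
    primary = width * height * rays      # camera rays per frame
    return primary * (1 + bounces)       # each ray adds `bounces` extra segments

native = ray_segments_per_frame(2560, 1440, rays=2, bounces=2)
quality = ray_segments_per_frame(1707, 960, rays=2, bounces=2)  # DLSS Quality at 1440p
print(f"DLSS Quality traces ~{quality / native:.0%} of the native segment count")  # ~44%
```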
Another thing: I know Alan Wake 2 renders things like depth of field after DLSS, which is the recommended behavior, but it means you are still rendering some things at native resolution. I wonder if the other games do that, and if changing Alan Wake 2's behavior would give it a more meaningful performance boost. This video leaves a lot of fun questions that I want to research now.
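That fixed post-upscale cost would also cap how much DLSS can help. A rough Amdahl's-law-style sketch; the post-processing fractions are made-up illustrative values, not measurements:
```python
# Amdahl's-law-style sketch: passes that run after DLSS (like Alan Wake 2's
# depth of field) stay at output resolution, so they don't get cheaper as
# render resolution drops. `post_fraction` values are made up for illustration.
def frame_time_ratio(post_fraction: float, res_scale: float = 0.667) -> float:
    """Frame time with DLSS relative to native, ignoring upscaler overhead."""
    return post_fraction + (1.0 - post_fraction) * res_scale ** 2

for post in (0.0, 0.1, 0.3):
    print(f"post-DLSS work = {post:.0%}: frame time drops to {frame_time_ratio(post):.0%} of native")
```
The bigger the post-DLSS share of the frame, the smaller the win from dropping the render resolution, which would fit Alan Wake 2 gaining less from upscaling than Cyberpunk.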
It makes me happy seeing someone else also interested in this kind of stuff ;)
For Cyberpunk I believe you are right. It has quite a few bounces, and that's why upscaling helps so much here. I also couldn't confirm the number of bounces in the other games though.
Great idea.
Thanks! Just a pity YouTube doesn't think so 🤣 I thought it was an interesting thing to look at and knew it wouldn't get much exposure, but it was a fun topic to explore.
@Mostly_Positive_Reviews Hopefully it will catch on. You can compare the built-in benchmarks for Wukong and Cyberpunk, so you can get a better sample of the effect over time. It's a shame Alan Wake 2 doesn't have a built-in benchmark.
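On sampling over time: repeated runs of a built-in benchmark mostly help by shrinking run-to-run variance. A tiny sketch with made-up run results:
```python
# Why repeated built-in benchmark runs help: averaging n runs shrinks the
# standard error of the mean by a factor of sqrt(n).
import statistics

runs_fps = [61.2, 59.8, 60.7, 60.1, 60.9]  # made-up example results
mean = statistics.mean(runs_fps)
sem = statistics.stdev(runs_fps) / len(runs_fps) ** 0.5
print(f"mean {mean:.1f} fps ± {sem:.2f} (standard error over {len(runs_fps)} runs)")
```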