I honestly don't understand much about the numbers. Do you think the 5080 is near the 4090 in raw performance? I don't really care about the frame generation or upscaler.
No, the RTX 5080 is nowhere close to the RTX 4090 in raw performance. As shown in the video, the RTX 5080 is only very slightly faster than the RTX 4080 Super.
I suppose you could throw in another 12VHPWR connector on the GPU or some 8-pins on the side, but that's getting a little ridiculous at that point. Either that or that meme where it's powered directly from the wall lol.
@@10Sambo01 They could also make a new connector, but they will need to prioritize efficiency over performance, since the card will get too hot for normal customer use if cards get upward of 700-800W under max load. If it gets to that, the 80 and 70 series will be the gamer tier and 90+ will be the professional-use tier.
@@FrederikSeerupNielsen True that. I mean, some electric fan heaters are 800W! I guess that's really what it is in the end: a really expensive fan heater which also outputs 3D graphics :D
It's nice to see someone else who can also do the math on this. I came up with the same numbers as you, and then I said so to the community... That was pretty rough; they told me I'm wrong and started throwing sticks and stones at me for the claim. But it was just a matter of time before someone else saw what I've been seeing. Then we also need to take into account that they tested the 40 series with DLSS 3, as the current gen has been, against DLSS 4, which has a minor uplift in performance as well. It makes the graphs and numbers even prettier. I would say 5%, but I like a more grounded number and said 3%. So we are in the safe zone. Keep up the good work.
Idk... I guess we don't know the performance of the new gen-4 CUDA cores vs gen 3, so comparing gen 3 vs gen 4 CUDA cores is impossible? That's what I understood; maybe we'll see more of a performance boost.
50 series GPUs are upgrade options for 30 or 20 series owners.
So I should stay with my GTX 1080?
@@vilhosiviranta Upgrade before the prices increase.
3090 here and playing on a 4K TV. It's almost essential for me.
@@peterchild2010 same, I have a 3090 and need it for vr (sim racing)
I have a 3080 FE, and based on this video the RTX 5080 should be roughly 1.9 times faster... Mmmm. I'm still on the fence, as I get 80 FPS in Stalker 2... and I'm CPU bound. I'm pretty sure I'll get the CPU/mobo/RAM and wait a while before I get a new GPU... hopefully the 6080 in 2 years... but we'll see, I may give in before that.
Very interesting, makes mathematical sense. I'm keen to see how close this is in the real world testing when the embargos are lifted.
Game Day, dude....great reporting. You are so much more than just a Star Citizen tech reviewer.
Great breakdown. Very interesting, the power situation with the 90.
I was wondering how they were going to get away with running the 12VHPWR connection so close to its limit. By clamping it, it looks like. I'm running a 1300w supply with two 12VHPWR feeds, would love to see someone provide a way to unlock power and maybe double up the connections.
Correct me if I'm wrong, but won't the memory being GDDR7 provide an additional boost to rasterized performance?
Yes, but not much.
The clocks didn't improve to take advantage of it, although I bet there is probably nice headroom in OC.
Yeah, it will. These videos are just hypotheses. Wait for the actual card to be released.
Of course it will, especially in more demanding situations. In older games I don't think it would matter much.
In upcoming games? I would expect so; many upcoming games are rumoured to be RT-based, just like Indiana Jones,
which means you need faster data transfer, larger allocations and wider bandwidth, because ray/path tracing relies on it heavily.
*My pro tip is just to save some money and go for the 5070 Ti if you were going for the 5070 originally. On paper, that's my sweet spot 🤣
@@TheFeederabis I already plan on getting the RTX 5080; however, I have been with AMD for years now, so I want to wait for the new GPU announcement.
Listen, Jensen had already spoken about where their primary focus was going to be with the release of the 50 series, and it was... you guessed it, RT and AI. They are focusing on increasing the quality and performance of all things AI-related on that board, as more and more of us are utilizing AI locally on our machines, while simultaneously offering up a very respectable frame rate increase in the process. Raster performance will almost certainly be at a near standstill and become a thing of the past as AI saturates EVERYTHING and takes over. EVERYTHING will be AI overlays, AI embeddings, AI in every corner of every function you will interact with.
I understand that, same as RT. At one point it will be in all games by default, but currently, what's the point of upgrading to a 5000 if you own a 4000 series? Unless you are on the high end and use a 4K monitor at 120Hz or more, or a 1080p/1440p monitor at 240Hz or more, Multi-Frame Generation does very little for you, because with a decent base framerate, Frame Gen already leaves you above 140 fps, and above 180 fps if you are shooting for a 90 fps base or more.
Better ray tracing performance sounds awesome for experiencing path tracing in more games, and Multi-Frame Generation would help too, but other than that? I don't see the point.
@joseijosei Well, I have a 4090. It works great. I use it on a 4K monitor. I use it for Stable Diffusion, Photoshop, Blender, Gigapixel, 3D chat platforms, etc. Indiana Jones, for instance, on maximum settings in certain very complex areas, even with DLSS, is starting to dip a bit, noticeably. Granted, I have some things running in the background, as I mentioned, but that's just the way I use my system. The 5090 is gonna bring 32GB to the table, and hopefully, if the Titan is a real thing, it'll have 48GB. For the VRAM alone I want it, but the neural shaders, increased tensor performance, increased ray tracing performance and the other things they mentioned... I want to try all of it and stay with the times.
Even if two GPUs, like the RTX 4080 and a future RTX 5080, have the same number of teraflops (TFLOPS), their actual performance can be very different. TFLOPS only measure the theoretical computational power of CUDA cores, but they don't account for internal hardware improvements.
New GPU generations typically include advancements that result in higher raw performance, even with the same TFLOPS:
More advanced shader cores:
With each generation, CUDA cores are optimized to process more operations per cycle. For instance, the CUDA cores in an RTX 5080 will be more efficient than those in the RTX 4080, doing more work with the same theoretical operations.
Faster memory and better cache:
Newer GPUs feature memory with higher bandwidth (e.g., GDDR7 instead of GDDR6X) and more advanced cache systems. This reduces latency and allows CUDA cores to operate more smoothly.
More advanced manufacturing process:
Next-gen chips use smaller manufacturing nodes (e.g., from 5nm to 4nm or 3nm), increasing transistor density and enabling higher clock speeds without significantly increasing power consumption.
Improved internal design:
New architectures optimize how data is managed within the GPU, maximizing CUDA core utilization and minimizing bottlenecks.
So, even if an RTX 4080 and an RTX 5080 share the same number of TFLOPS, the RTX 5080 will perform much better because its CUDA cores will be more efficient, its memory will be faster, and its design will be optimized to extract more performance from the same hardware. In GPUs, TFLOPS don't tell the whole story; architecture is what truly matters.
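For reference, the theoretical figure this thread keeps debating comes from a simple formula: two floating-point operations (one fused multiply-add) per CUDA core per clock. A minimal sketch, using the publicly listed core counts and boost clocks as assumptions (actual clocks vary by board and workload):

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return 2 * cuda_cores * boost_ghz / 1000.0

# Spec-sheet values (assumptions; board clocks differ).
cards = {
    "RTX 4080": (9728, 2.51),
    "RTX 5080": (10752, 2.62),
}
for name, (cores, clock) in cards.items():
    print(f"{name}: {fp32_tflops(cores, clock):.1f} TFLOPS")
# ~48.8 vs ~56.3: a ~15% paper gap that says nothing about per-core
# efficiency, cache, or memory bandwidth improvements.
```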
Ayyyyyyy, love the attention to detail on the vid. Thanks for pointing this out. Definitely going to try this calculation out. Btw, the thumbnail is 🔥🔥
Why would the Titan AI/5090 Ti have PCIe Gen 4 when the normal 5090 has Gen 5? A typo?
Yes, a typo.
This doesn't take into account that CUDA core performance per core is different each generation. This math only works if CUDA core performance is the same each generation. For instance, a single CUDA core on a GTX 980 is massively slower than a single CUDA core on an RTX 4080.
Jensen showed the shader TFLOPS; it's the same gen over gen, 2 instructions per clock, nothing changed. The only difference is higher RT and higher AI 🤖 performance. The question is whether that's achieved with more tensor cores and RT cores. Once the architecture diagram is out, we will know.
@@gamedaytoday1 Incorrect, and not taking into account massively faster memory.
Yes, 2010 technology is very inferior to 2020 technology. Excellent point.
I agree with this. We'll see how much raw performance gain it'll have; if it's less than expected I'll just refund it right away.
@@zackz1260 That faster memory doesn't amount to much, it seems.
27 fps on a 5090 in CP2077 with PT and no upscaling, 21 on the 4090.
I don't get why everybody hates AI frames. I don't have enough time to scan the frames for errors; as a regular consumer I have to say it looks pretty amazing.
You can't just multiply out the shader cores and equate that to total raster performance. Yes, it may give you a minimum uplift figure, but you're totally discounting performance improvements between generations of shader cores and the impact of changes to the rendering pipeline, memory bandwidth and more.
I agree that Nvidia may be hiding something by not showing it, but until we get actual benchmarks with the cards in the next couple of weeks, nobody knows either way except Nvidia.
Hello, I read that the RTX 5090 will be locked at a max of 600 watts, so even if you liquid cool the RTX 5090 you won't be able to extract much more performance. How true is this?
Isn't the 12V-2x6 cable maxed out at 600W? Or will there be a way to feed more than 600W to the GPU?
Thanks in advance
With a few unlocked BIOSes you can pull 1000W, just like the 4090.
@gamedaytoday1 Oh, that's great to know. Makes my decision to go for the liquid-cooled RTX 5090 Astral even better.
Thanks for the reply sir.
@@gamedaytoday1 But the cable is 600W max?
600W to the GPU through the cable, yes, but there is also the ability to pull 75W through the mobo.
@@Thor847200 Yeah, so not a huge amount to OC with.
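The numbers being traded in this thread add up as follows; a minimal sketch, taking the 5090's advertised 575 W TDP as given:

```python
# Spec-limit power budget for a single-connector card.
CABLE_12V_2X6_W = 600          # 12V-2x6 connector ceiling per the spec
PCIE_SLOT_W = 75               # PCIe x16 slot ceiling
TDP_5090_W = 575               # Nvidia's advertised figure

budget = CABLE_12V_2X6_W + PCIE_SLOT_W
print(budget, budget - TDP_5090_W)  # 675 W total, ~100 W of OC headroom
```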
First and only tech reviewer out there to actually consider the power bottleneck on the 5090; eager to see what we can squeeze out of this bad boy. Hopefully no 5090 Ti, that would be too much heat to handle D:
Unfortunately, it will be an even bigger bottleneck compared to the 4090's bus width. So you are paying 100% extra for 35% extra performance compared to the 5080.
Take into account then that its price has increased while the 5080's price has decreased (add inflation to the equation and the price drop is really big). Even though the 4080 was really not a big success in terms of public opinion, charts upon release indicated that it actually offered cheaper fps/$ in both raster and DLSS scenarios, even given the small $400 price difference. Then there is also the considerable 30 Gbps memory on the 5080, as well as lower temperatures resulting from lower power, making room for bigger OC potential. The 5070 Ti is even more impressive. I expect the 5070 Ti to be overclockable to 5080 levels and to basically retire the 4090 at just $750; it's crazy.
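Running the perf-per-dollar claim through with the launch MSRPs; note the 35% figure is the commenter's estimate, not a benchmark:

```python
# Relative performance per dollar at launch MSRP.
cards = {
    "RTX 5080": (999,  1.00),   # price ($), relative performance baseline
    "RTX 5090": (1999, 1.35),   # commenter's estimated uplift
}
for name, (price, perf) in cards.items():
    print(f"{name}: {perf / price * 1000:.2f} perf per $1000")
# ~1.00 vs ~0.68: twice the money for ~35% more performance.
```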
Nvidia won't show a game with a bottleneck in its chart (Far Cry 6), because it wouldn't show the maximum uplift Nvidia wants to show, and we know that all the games Nvidia chose were cherry-picked to show maximum uplift, even if in real life the uplifts will be smaller. So in my mind you're actually seeing the real performance of the 5090 without a bottleneck, but with more uplift than the 5090 will deliver in the real world (that's Nvidia's marketing). Also, we all saw the same ~30% uplift in Nvidia's path-traced Cyberpunk: 20 fps on the 4090 and 27 fps on the 5090; there's surely no bottleneck there.
@@filippetrovic845 It will not have a memory bandwidth bottleneck.
It won't be power bottlenecked. The 4090 is limited by voltage. Plus, games and their benchmark tests don't push the card's power to its limit.
@@Cartz369 >limited by voltage
Bro... Power (W) is defined as current × voltage. If you increase voltage you increase power. And the only reason you would have to increase voltage is an increase in frequency; otherwise you may start getting into instability issues because capacitors cannot discharge fast enough.
A third more performance for a third more power and money. What a day to be alive.
tbf the 4090 atm is more expensive than the 5090 will be on launch day
Wait, so is it worth getting a 5070 Ti over a base 4080?
For perspective, I have a 3060 12GB with an i5 12400F. I use my PC for VR and flat-screen games, but also for making 3D models. I'm planning on upgrading to a 5070 Ti, but I'm second-guessing myself now. So what would be better?
27% is spot on. Gamers Nexus did a pixel count on the graph for Far Cry and it was a 27% increase.
Now I need to consider a PSU larger than 1000W if it's being dialed back.
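The pixel-count method mentioned here is easy to reproduce: measure both bars in an image editor and take the ratio. A sketch with hypothetical pixel values (the real measurements depend on the chart):

```python
# Estimating uplift from a marketing bar chart by measuring bar heights.
bar_4090_px = 310   # hypothetical pixel height of the 4090 bar
bar_5090_px = 394   # hypothetical pixel height of the 5090 bar
uplift = bar_5090_px / bar_4090_px - 1
print(f"Measured uplift: {uplift:.0%}")  # ~27%
```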
Wow love the analysis! Lookin forward to your future videos!
Now post a video doing the exact same math with the 40 series over the 30 series to see if it adds up to real world performance.
You can use the same math, because the 40 series was double-counting the CUDA cores with the integer units. Same with the 50 series, so the 40 and 50 series are comparable. Do you think I was born yesterday?
@@gamedaytoday1 👍 Thanks, I already did. Just thought it would be another good video for you to show this; not trying to say you're wrong or right. Sorry if that was confusing.
@@gamedaytoday1 No, I know you have no idea what you are talking about. As a graphics engineer I find your content frustrating, as all you do is spread misinformation.
@@SirDragonClaw As a graphics engineer, can you be more specific, point out some mistakes and correct the information?
I'm quite interested.
I’m also a graphics engineer with words to say
I really don't understand why all of you self-acclaimed hardware testers are complaining. It doesn't matter how a frame is produced if the frame has the same quality. I would understand if you started complaining after you did real-world tests and found out that the AI frames suffer from blurring or whatever, but to complain just about numbers is plainly stupid. Also, no one cares about the performance in purely rasterized games. In those games, you already have 300+ frames with the 30 series, and there is no monitor able to display more than 360 frames, while the vast majority of players aren't even able to see a difference above 150 frames.
So, why should Nvidia pack their cards with hardware that is able to display rasterized games at an even higher frequency, when it simply isn't needed?
You forgot to take into account architectural improvements in the CUDA cores. Those can boost regular shader performance if present.
If the RTX 5080 is only 8% faster than the RTX 4080 Super, why did they limit it to 16 GB VRAM instead of bumping it to 20/24 GB? Surely they would need some incentive for people to buy it?
Wait for the 5080 Ti and Super variants.
DDR7?
Probably so they can sell a 5080 Super/Ti as a mid-generation refresh. Explains why there is such a massive gap between the 5080 and 5090, too.
At CES there was at least one MSI 5080 shown with 24 GB. I don't think this was a typo on the box; there is at least a 5080 Ti or Super coming. Anybody wonder why Asus didn't show their 5080 ROG Strix? Because there is 24 GB of VRAM on it!
Because it's a 256-bit GPU, meaning you can only go 8/16/32 GB with the chips currently available. There will likely be a 24 GB version with 3 GB chips, but I expect it to be a 5080 Super with GB202, rather than just the same GB203, which is already maxed out. I argue that 16 GB is the perfect amount for under 10k CUDA cores. The GPU is not lacking anything at all; it will be perfect for 4K, given that DLSS 4 uses less VRAM.
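A quick illustration of the bus-width argument above: each memory chip sits on a 32-bit channel, so capacity options fall straight out of the bus width and chip density. A sketch (the chip densities listed are the assumption here):

```python
# VRAM capacity = (bus width / 32 bits per chip channel) x chip density.
def vram_options(bus_width_bits: int, chip_sizes_gb=(2, 3, 4)) -> dict:
    channels = bus_width_bits // 32
    return {f"{size} GB chips": channels * size for size in chip_sizes_gb}

print(vram_options(256))
# {'2 GB chips': 16, '3 GB chips': 24, '4 GB chips': 32}
# -> a 256-bit card lands on 16 GB today, 24 GB once 3 GB GDDR7 ships.
```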
Amazing video! I was hoping someone would go into the more technical details, using math and such, to figure out the performance. Loved it and subbed! Only problem is the low rumbling you use is very distracting, but that's a nitpick. Keep up the good work.
But why are AI frames bad?
You sure Nvidia will release a 5090 ti? There were rumors and leaks about a 4090 ti and it never materialized.
There will be a 5090 Ti to compete against AMD's top-end UDNA GPU in 2026. The RTX 60 series is the competitor to UDNA.
Nvidia can get 10x on the enterprise market for this GPU. We will get a Titan if the AI bubble peaks.
@@ZackSNetwork AMD has yet to compete with the 4090, let alone the 5090, and you're out here suggesting they'll have something even faster than that?
@@aberkae Yeah, this is the truth. Gamers will only get the leftovers now.
Well, I'm not an Nvidia fanboi, but I do own a few cards and mostly use AMD in Gentoo systems. This presentation was so well done that I had to subscribe. Keep up the good work.
Cool analysis and really fun video editing. You have my sub.
What I'm curious about is whether the stock cooler on the Founders Edition will be enough to handle an overclock with a TDP of 700 W or so. If it really needs 700 W, then maybe I should go with another brand that has at least three slots' worth of cooling.
Most likely not; you would need to get an AIB card and then an unlocked BIOS. Probably slap a water block on it as well.
@@gamedaytoday1 But isn't 600W the max for the power connection?
I'm quite sure the boost clocks are going to be higher in practice.
Edit: Looks like the 5070ti is also power limited. Judging by the boost/base clocks +cuda cores vs the 5070 and 5080.
The 5070ti will likely be a beast when OC'd.
I have to add to this, because according to this logic, basing raster performance on TFLOPS, the 3070 Ti should be 2.5x faster than the 2070 Super. But it's only 1.5x faster. The 4070 Ti has 1.8x the TFLOPS of the 3070 Ti, but is also only 1.5x the performance.
I really think that there are other optimisations made by Nvidia.
For example, the immense bandwidth improvement.
Or AI-enhanced shader performance.
I don't see why the 5070 Ti wouldn't be faster than a 4080, let alone a 4070 Ti Super...
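Putting this commenter's own figures side by side shows how far real scaling trails paper scaling; a minimal sketch of that comparison (the ratios below are the commenter's, not measurements):

```python
# (TFLOPS ratio, observed performance ratio) per the comment above.
pairs = {
    "3070 Ti vs 2070 Super": (2.5, 1.5),
    "4070 Ti vs 3070 Ti":    (1.8, 1.5),
}
for name, (tflops_ratio, perf_ratio) in pairs.items():
    realized = perf_ratio / tflops_ratio
    print(f"{name}: {realized:.0%} of the paper uplift realized")
# 60% and 83%: spec-sheet TFLOPS consistently overstate real raster gains.
```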
Fantastic content, excellent presentation and technical knowledge, underrated channel!
Very cool info. Thanks man. I was planning on getting a 5090, but will now wait for the 5090x? I already have a 4090 that's barely breaking a sweat at 4K gaming.
Should I just try get a cheaper 4080 super when the 50 series gets released?
But is the 5070 still a really good card, and what graphics cards does it compare to?
Which card should I choose: a new RTX 5070 or a used 4070 Ti Super? I mainly do 3D, renderings and animation; I don't really have much time to play, so I mainly play Fortnite and am planning to dive into Metro: Exodus and Stalker 2 on a 2K monitor.
Maybe worth mentioning: I have a B650E motherboard, so I will benefit from a PCIe 5.0 GPU (but I don't really know how much). I'm also considering saving some money and maybe going with a used RTX 4080, but I will be building a PC for the first time in my life, so I have no experience choosing PC parts and am therefore a little bit afraid of going with used parts.
Will appreciate any advice, and thanks in advance.
I wonder if the partner cards will allow more power draw since they're using a different cooling design?
My last build was back in 2018 with the 1080. I'm looking to build again. Should I "try" and snag a 5090 or wait until the 60 series?
Thank you so much for doing this, I am really bad at maths but this was easy to understand
Maybe the 5080 Ti, 8-12 months from now, will be worth the upgrade, with 24GB VRAM and more CUs?
I know these “upgrades” aren't worth it for people who got 4000 series GPUs, but how much would you recommend them for someone coming from an older GPU like the 3000 series, or maybe even a console player looking to build their first PC?
I was wondering if it's worth selling my 4090 and buying a 5080? Is the 5080 any better?
Ray tracing is what’s bogging down performance. So it makes sense to boost there. For games that don’t use ray tracing, they’re likely able to run quickly as is.
Should I go from a 3070 Ti to a 5070, or grab a 4070 Ti?
I have a 4060.
Is upgrading to a 5070 or 5070 Ti worth it?
I'm looking for just a bit more power, and I do like DLSS.
I'm thinking the 4070 Ti is worth it, with the VRAM.
I have an RTX 4070 Ti Super; should I upgrade to a 5080, or is it not worth it?
So we're basically paying for software this gen?
Also, you can divide the AI performance by ~2 for the new cards. Nvidia's data compares FP8 (40 series) vs FP4 (50 series).
Divide the AI performance by 2 and then multiply it by 4x with frame gen; Nvidia math.
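To make that concrete: FP4 roughly doubles the quoted number on the same tensor hardware, so halving the 50-series figure gives a like-for-like view. A sketch, taking Nvidia's published TOPS figures and the precision labels from the comment above as assumptions:

```python
# Normalize AI TOPS quoted at different precisions to a common FP8 basis.
def to_fp8_tops(tops: float, quoted_precision: str) -> float:
    return tops / 2 if quoted_precision == "fp4" else tops

print(to_fp8_tops(1321, "fp8"))  # RTX 4090: 1321 TOPS quoted at FP8
print(to_fp8_tops(3352, "fp4"))  # RTX 5090: 3352 TOPS at FP4 -> ~1676 at FP8
```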
Everyone arguing about the performance should see this, since it seems no one can wait for actual in-game tests of raw performance before saying shit. This makes sense, and nice video!
Nice analysis. Well done, sir. In my opinion, if the RT performance is noticeably better, as in the performance loss for enabling it is less, I'm mostly OK with that. I knew it would have to happen at some point. As for 4x frame gen, as long as the latency isn't bad, whatever. I think it's still a gimmick and probably wouldn't use it, imo; DLSS 3 is enough for me and does a fine job. But who knows, if that latency is passable I might consider getting one of these 50 series... Maybe, so we shall see...
And the power draw keeps going up. Insane
I refuse to believe that the RTX 5070 will be slower than the 4070 Super in rasterization. But if that's true, Nvidia is out of their goddamn mind.
That's not gonna be the case... he fails to realize that the performance and IPC uplift of the new-generation Blackwell cores will make them more powerful than the older-gen cores. Even if the core count is the same on both cards, it's still not apples to apples.
@entreri76x I agree with you.
Sorry to burst your bubble, little bro, but you are way, way oversimplifying things and you are wrong. Across the board it will be a 30% upgrade, ±5%, in non-RT.
Is going from a 4070 super to a 5070 worth it? I bought the 4070 super on Amazon and I can still cancel anytime and I’m genuinely curious
Is it actually realistic they release a 5090ti/Titan? There were rumors of a 4090ti as well, but that card never saw a release. What makes it different this time around?
5090 is the 4090 Ti
I ain't worried about printed/posted specs. I will be attempting to get a 5090 FE on release. I have a water-cooled 4090 right now. I'll do a comparison video and real-world test on the 5090, including FPS vs response times in CoD with and without DLSS, to get a true idea of its capabilities. It's gonna be fun!
I got a 4060 Ti 8GB and I'm getting bottlenecked on VRAM usage in newer titles, so anything good is worth upgrading to.
Isn't the 12VHPWR rated at around 600W on the connector? I'm unsure if there's much safe OC headroom, as they already had cables melting.
Not worth upgrading from a 4080 to a 5080 for only 15% better performance in games that are not ray traced. And once both are max overclocked, it's only 10%! Maybe wait for the 60 series and skip this one.
Every modern game will require ray tracing.
113 -> 171 Tflops
Just skip every card....skip the 40, skip the 50, next 2 years...skip the 60.
But good that we finally have a decent low profile card.
i don't think it is ever worth upgrading every generation
Definitely worth going from an rx6800 to a 5080 though which is my plan. 😁
The 12VHPWR limit is 600W, so it's limiting the power the card can draw, and it's so new that not all PSUs even have it yet, so a card with more than 600W is gonna be hard. I don't expect a consumer card with more than 600W any time soon.
That's right, and it makes the FE cards vs. AIB models so weird... either the FE is going to underperform or the AIB models like the Astral will have no advantage. There's 60W over the PCIe slot, but either way there's not much overclocking headroom because of the power limits of the damn cable.
Some motherboards can supply additional power through the PCIe slot; they have an additional 6-pin, usually near the 24-pin. Maybe a good idea to start looking at these mobos.
@perlerbear5279 Yeah, that's another 60W, but still not much power headroom for serious OC on more premium cards.
@@medovk Is that the case universally? Or is it on a board by board basis?
@perlerbear5279 It's standard. There are Asus boards with GPU power going through the motherboard entirely, but that requires those specific mobos and their GPUs that use the same connector. I don't know what the power rating on those is; probably the same, and it would be harder to resell later when the 6090s come out.
Great video. I'm waiting to upgrade and didn't know which one to get. I was leaning toward the 5090, but knowing it's power limited, the 5080 may be the better deal.
I'm not sure this video is helpful; it's just going to fan the flames of hate against the Nvidia RTX 5000 launch. We need real metrics, and your 'real' performance preview is not that. You say the 5090 is power limited in Far Cry 6, but that's a much older game, and it's quite common for older games to be limited by the rendering engine, which means you can hit a ceiling on the number of fps being displayed. If the tests get anywhere close to this, or even become bottlenecked by another subsystem (typically the CPU), then the results can be skewed.
Is the 12V-2x6 cable limited to 600W? How would one go about giving more power to the 5090 to overclock it?
6:44 I'm hoping the ASUS Astral card will get around 700w if this is the case!
I'm curious about overclocking on the more premium AIB cards like the Astral over the FE cards. There's not much overclocking headroom given the power limits of the power connector, so either the FE is gonna underperform or there will be no sense in buying a more premium model like the Astral.
Is going from 4080 super to 5090 worth it?
Not really; wait for the 60 series. I think people who have 20 and 30 series cards should upgrade, but if you are lagging with an RTX 4080 Super, then upgrade.
No! We really need to be better at getting over FOMO.
If your current card is doing what it needs to do, you don't need to upgrade, especially just one generation up.
Cards have gotten so powerful that you don't need to upgrade very often anymore. Only upgrade when your stuff isn't doing what it needs to do.
@DevareayWilliams I don't want to lose to depreciation, and I already have a sale set up for almost my initial investment. If the 5090 keeps its value like the 4090 did, isn't it worth investing more to not lose value in the long run?
@@MooselessCanada Sorry, but I don't really understand this keep-its-value stuff. I invest in a high-end GPU for 7 to 8 years. I can't and will never understand people who buy something to preserve its value. If you buy it, then you must use it for years, because if not you just waste your sweet money.
@@DevareayWilliams I agree with this statement.
I'm currently running a 4080 Super with a 7800X3D on my sim rig, pushing triple (+ a 4th for data) 1440p 165Hz monitors. I want much more power, so I'm going to upgrade to a 5090 on the 30th. I'm debating whether it will be worth upgrading to a 9800X3D as well.
Nah. None of that is worth it. Waiting for the 13800X3D and 6090.
How could it even be possible to overclock the 5090 if the connector itself only supports 600 W?
I just wanna know how much faster it is in pure 3D rendering, like the path tracer and Redshift, in apps like Unreal Engine, C4D and Houdini. Considering it says 317 TFLOPS, +66%, is it 66% faster in pure 3D rendering? Or is it gonna be even faster? I don't care about gaming/in-game fps.
Curious about the mobile comparisons.
You are completely missing the architectural improvements which are normally between 10-20% on top of everything else.
Finally, someone gets it... take the 5080 versus the 4080: yeah, they might have pretty much the same core count, but the cores in the 5080 are going to have a higher IPC uplift on the new-generation Blackwell architecture. My guess would be a 20 to 30% raster improvement.
@@entreri76x that makes no sense. Based on what? architecture is just how the layout is done. No magical power there.
His video kinda makes sense if you take into account that only the 5090 got a price increase while the other cards got a price decrease. Why is that? Remember that we're talking about a company that last generation wanted to sell us a 4070 disguised as a 4080, but quickly abandoned the idea due to all the backlash.
I'm pretty sure there's something fishy here.
@ I don't believe that's the case. First of all, the 5080 is using GDDR7, much more expensive than the 6X that's in the 4080. It also has more actual cores. I think it's more of a strategy to make AMD totally irrelevant by pricing the cards right this time. I'll find out soon enough because I have a 5080 on reserve. But I've never seen a generation where the new architecture is slower than the old, lol. There's always an IPC gain. As for how much, we'll find out soon enough.
Guys, should I get the 5070 Ti or go with the 7900 XTX?
How about memory speed? Since GDDR7 is much faster than GDDR6X, I would expect a huge jump from that as well...
Yup, I am sitting on an i5-9700K + 2080 Ti that I built during covid; now I am slowly noticing the need to lower quality to maintain fps.
I would most probably be looking to pick up a discounted 4070 Ti Super and pair it with a Ryzen of some sort.
So is it even worth upgrading to 5090 if you currently own a 4090?
No lol
@@12ozMousee So if you have no GPU, is it worth going for the 5080 or 5090, since the price of a 4090 is in the low $2k range? I'm open to suggestions. I sold my last PC for about $3k lol.
I have a 7900 XTX and am looking to upgrade. Should I go for a 5090? Never mind, I saw the price. Thank god for AMD options lol.
Ooh, great hot take, mate! Also, that is why the FE only has the 2-slot-wide cooler.
Chat, will 700W be enough for the 5070 Ti and an i5-12600K/F? I know it says the recommended PSU is 750W, but I don't know if the extra 50 watts will be necessary.
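A rough sanity check on that PSU question; every figure below is a ballpark assumption, not a measurement:

```python
# Ballpark system load for a 5070 Ti + i5-12600K build.
gpu_w  = 300   # 5070 Ti board power (spec-sheet figure)
cpu_w  = 150   # i5-12600K near its PL2 limit
rest_w = 75    # motherboard, RAM, drives, fans
load = gpu_w + cpu_w + rest_w
print(load, f"{load / 700:.0%} of a 700 W unit")  # 525 W -> ~75%, workable
# The extra 50 W of a 750 W unit mainly buys margin for load transients.
```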
I'm more interested in this new updated AI tech; it looks to be really good. I already know these cards are beasts.
I don't think the CUDA cores can be seen as equal for both the 40 and 50 series, as it would make sense to have a newer type of CUDA core that's maybe more efficient and powerful.
At 1:35 you multiplied by the boost clock instead of the base clock. The boost is the overclocking of the base. The 5090 is not boosted by much, and the 5080 is boosted more, so check that out.
The boost clock is what the GPU is always running at, even in the least demanding game, so that equation makes sense.
@ Why not just use the base clock and keep the variable consistent? Different OEMs will have different boost clocks.
I personally don't think there will be a 5090 Ti. There was the same assumption for the 4090: a lot of people were saying that the chip was not fully utilised and there would be a Titan variant. However, we all know how that turned out. In fact, the fully utilised AD102 chip ended up in a professional card, the RTX 6000 Ada Generation. From a marketing sense, it doesn't make sense for Nvidia to do it. Releasing a 5090 Ti would basically make the 5090 redundant, or the price might be so high it wouldn't sell well. I think the 3090 Ti did not sell well; that's why Nvidia skipped it for the 40 series. Again, I might be wrong, and this is just my opinion.
Yeah, 90 Ti cards are done. He can now sell the Ada-generation equivalents for over 15K.
Yeah, I wouldn't hold my breath for a 5090 Ti; if we see something it will be after a year at best, but I'm highly skeptical. It's not like Nvidia has been in a rush to release better gaming graphics cards. The 5090 will likely be the top dog for the next two years.
Full chips go to professional use and defective ones get turned into gaming cards, which is their side hustle at this point 😅
Especially since AMD has nothing to compete at the high end, there's no reason to release a 5090 Ti or anything stronger than the 5090 in the gaming segment when Nvidia can use that silicon to produce enterprise/non-gaming GPUs for 3x-5x more profit.
It's because of silicon yields. All chips have defects, hence the cut-downs.
@@MKiller29 Why would it have more defects than the 4090 series? The node is the same, and the architecture is similar too.
I stocked up on 40-series sales over the holidays because I knew Nvidia would pull this stunt. I'm probably going to wait and upgrade during the 60 series if the Super models are underwhelming.
Is the 5070 Ti's 16GB of VRAM worth the price difference over the 5070?
I'm running a 4070 Ti Super at 4K, and even with high-res texture packs added in some games, my VRAM use has topped out at around 11GB. I don't bother with ray/path tracing; if you use it, then 16GB might be a better fit.
@ Thanks for the info, mate. I won't use ray tracing; however, I saw that Frame Generation also has high VRAM demands.
@@gangs0846 Yeah, I don't do frame gen either. I use DLSS Quality on a couple of titles to get frame rates up, but mostly I just run native (DLAA).
Damn, this is a great analysis. The 4090 is still going to be a beast throughout the next gen.
What adapter would you even need to support a 700W TDP?
The power estimate for the 5090 doesn't consider two things, though. One, the 4080S only pulls 300W or even a bit less in most games, so the 5080 won't be much higher than that and likely won't hit its 360W limit; doubling the limit therefore gives an overly high estimate. Two, some of the power budget goes to the VRAM and to VRM losses, so you can't take the full board power and multiply it by the percentage change in shader cores. That will also produce an estimate that's likely too high.
To be honest, the numbers are on point; there are different ways to get to 720W. Doubling the 5080's specs is the high-IQ shortcut, don't overcomplicate it.
@@gamedaytoday1 High IQ, oh my... lol
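To put numbers on both sides of this exchange, here's a rough sketch of the naive doubling versus the corrected estimate; the overhead figure is purely an assumption for illustration:

```python
# Two ways to guess the 5090's board power from 5080-class numbers.
# All figures are rough; the overhead number is an assumption.
tbp_5080  = 360               # W, rated total board power
game_draw = 300               # W, what a 4080S-class card tends to pull in games
scale     = 21760 / 10752     # 5090 vs 5080 CUDA core count, ~2.02x

# Naive: scale the full rated board power with the shader count
print(f"naive:     {tbp_5080 * scale:.0f} W")   # ~729 W

# Corrected: start from realistic in-game draw, and treat ~60 W of
# VRAM/VRM/fan overhead as scaling more slowly than the GPU core itself
overhead = 60
print(f"corrected: {(game_draw - overhead) * scale + overhead * 1.5:.0f} W")  # ~576 W
```

Both paths are defensible, which is the point: the naive doubling lands near 720W, while accounting for realistic draw and semi-fixed overhead lands meaningfully lower.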
Dude, thank you for not bashing DLSS 4 and frame generation. If you're not playing esports and you want beautiful visuals at a hyper-smooth frame rate, it is a GODSEND.
You aren't going to get 4K ray-traced graphics at a good FPS in this day and age! Game devs have gotten lazy and are using DLSS 4 as a crutch, forcing Nvidia to improve and push their algorithms, because at this point they're literally the only way to deal with games that *can't utilize the hardware properly*. Heck, Helldivers 2 is a perfect example: it can't fully utilize the CPU or the GPU, yet it gives unstable FPS anyway, as if it's being bottlenecked by something (it isn't). There's no way to play that game properly at 4K on older hardware without DLSS.
And still nobody is talking about the GDDR7 VRAM! Stop complaining about VRAM capacity and appreciate the fact that we have a new generation of VRAM. That's so friggin' cool, and it'll help massively with productivity tasks that use a lot of it!
Do you think the power requirements are the reason we aren't seeing huge raw performance gains anymore? I don't have an issue with AI; I welcome it and think it will keep getting better. But I think pure raster performance needs to improve as a baseline as well, so that the AI can top it off.
No, the node is the issue. They only went from a 5nm-class to a 4nm-class process, so they couldn't fit many more transistors and needed to make the die bigger instead. I think we'll see much bigger performance leaps going from 4nm to 2nm; the 60 series will be a much larger jump than this generation.
On the power side, the connector gives you a max of 600W and the slot itself can provide up to another 75W, so any single-connector RTX 5090 will be limited to 675W even with all the overclocking. Considering that real workloads don't use every part of the chip at the same time, it should be interesting to see how far water cooling goes on these.
Honestly, at this point the bottleneck might just be the VRM itself, or getting all that heat away from such a small PCB.
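The delivery ceiling in the comment above is just additive, but since it comes up again further down, here it is written out; the figures are the rated limits from the 12VHPWR and PCIe specs:

```python
# Power available to a card, per the rated connector/slot limits
CONNECTOR_12VHPWR = 600   # W, 12VHPWR / 12V-2x6 maximum
PCIE_SLOT         = 75    # W, PCIe slot limit
PCIE_8PIN         = 150   # W, classic 8-pin PCIe connector

single = CONNECTOR_12VHPWR + PCIE_SLOT
print(f"single 12VHPWR + slot: {single} W")                             # 675 W
print(f"dual 12VHPWR + slot:   {2 * CONNECTOR_12VHPWR + PCIE_SLOT} W")  # 1275 W
print(f"one 12VHPWR + two 8-pins + slot: {single + 2 * PCIE_8PIN} W")   # 975 W
```

So a 700W+ card is physically possible, but only by going beyond a single 12VHPWR connector, which is exactly the objection raised below.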
I honestly don't understand much about the numbers. Do you think the 5080 is near the 4090 in raw performance?
Don't really care about the frame generation or upscaler
No, the RTX 5080 is nowhere close to the RTX 4090 in raw performance. As shown in the video, the RTX 5080 is only very slightly faster than the RTX 4080 Super.
@KillFrenzy96 thank you.
5090 vs. 3080 10GB?
As a 4090 owner, I'm going to try my best to get the 5090 FE, purely because its two-slot design opens up SO many more cool SFF case options.
I have no PC, but I'm getting one right now, so I'm just going for the 5070 instead of the 4070 series since the price is the same. Isn't that worth it, tbh?
IF THE PRICE IS THE SAME, OR EVEN 50 BUCKS MORE, GO FOR THE 5070
Highly recommend doing some research before buying any parts, for starters.
@@leyonden9999 Processor: AMD Ryzen 7 7700, 8 cores / 16 threads, up to 5.3GHz
Motherboard: MSI B650M Gaming WiFi
RAM: G.Skill Ripjaws M5 Neo RGB Black, 6000MHz, 2×16GB
Storage: Hikvision E3000 1TB
Case: MSI Pano M100R PZ Black
GPU: NVIDIA GeForce RTX 5070 12GB GDDR7
CPU Cooler: DeepCool AK400 Black
PSU: Antec G650W 80+ Gold
Cost: $1500
How would you even power a 700+ watt GPU? The 12VHPWR cable is rated to 600W, plus you can get 75W from the PCIe slot, so that's 675 max...
I suppose you could throw another 12VHPWR connector on the GPU, or some 8-pins on the side, but that's getting a little ridiculous at that point. Either that, or that meme where it's powered directly from the wall lol.
Oh sure @@FrederikSeerupNielsen, it's definitely POSSIBLE (albeit ill-advised), but not officially supported AFAICS.
@@10Sambo01 They could also make a new connector, but they'd need to prioritize efficiency over performance, since the card would get too hot for normal consumer use if it drew upward of 700-800W under max load. If it comes to that, the 80 and 70 series will be the gamer tier and the 90-plus cards will be the professional tier.
@@FrederikSeerupNielsen True that; I mean, some electric fan heaters are 800W! I guess that's really what it is in the end: a really expensive fan heater that also outputs 3D graphics :D
Fantastic video, thanks for sharing!…
It's still guesswork, but at least there's some thought behind it. Interesting to see whether AIBs will put out 700W versions of the 5090.
Thinking about dropping some money on one of these ngl. I still want to wait a bit for more information. My 3060 is tired lol.
It's nice to see someone else who can do the math on this. I came up with the same numbers as you, and when I said so to the community... that was pretty rough. They told me I was wrong and threw sticks and stones at me for the claim. But it was just a matter of time before someone else saw what I've been seeing. We also need to take into account that the 40-series numbers were run with DLSS 3 while the 50 series was shown with DLSS 4, which has a minor performance uplift of its own. That makes the graphs and numbers look even prettier. I would say 5%, but I like a more grounded number and said 3%, so we're in the safe zone. Keep up the good work.
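If that 3-5% DLSS 4 runtime uplift is real, backing it out of a quoted gen-on-gen gain is a one-liner; the quoted figure below is made up purely for illustration:

```python
# Strip an assumed DLSS 4 runtime improvement out of a quoted uplift.
quoted_uplift = 1.15   # hypothetical "15% faster" marketing figure, not a real datapoint
dlss_factor   = 1.03   # the commenter's conservative DLSS 4 vs DLSS 3 estimate

hardware_only = quoted_uplift / dlss_factor
print(f"hardware-only uplift: {hardware_only - 1:.1%}")   # ~11.7%
```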
Idk... I guess we don't know the per-core performance of the new CUDA core generation versus the old one, so comparing them directly is impossible? That's what I understood; maybe we'll see a bigger performance boost.
They won't release the full Blackwell as a graphics card because that would impact datacenter sales.
I would give the 5000 series a bit more benefit of the doubt; you're comparing CUDA cores across generations. Hopefully this new generation has improved per core.