AMD's RX 7900 XTX Performance Explained

  • Published June 10, 2024
  • AMD revealed their new RDNA3-based GPUs, the RX 7900 XTX & the RX 7900 XT. In this video we will take a look at AMD's new graphics cards, explain why the performance of RDNA3 is lower than expected and figure out how many stream processors Navi 31 actually has.
    Follow me on Twitter: / highyieldyt
    Chapters
    0:00 Intro
    1:13 TFLOPS & real world gaming performance
    4:16 RDNA3 vs RDNA2 / dual-issue shader cores
    6:17 RDNA3 Ray Tracing Performance
    7:49 RDNA3 design goals, pricing & wrap up
    Links
    videocardz.com/newz/alleged-a...
  • Science & Technology

COMMENTS • 400

  • @00wheelie00
    @00wheelie00 Рік тому +67

    I think yields are so good they don't want to sell many XT cards, but they need a cheaper option for marketing.

    • @VoldoronGaming
      @VoldoronGaming Рік тому +4

      It is a similar approach to what Nvidia did with the two 4080s, but unlike Nvidia there isn't a large gap in performance between the XTX and the XT like there would have been between the 4080 16 GB and the 4080 12 GB. And of course it pushes you to buy the more expensive model, but again unlike Nvidia there is a smaller cost difference: $100 between the XTX and XT, while there was a $300 difference between the two 4080s with a much larger performance gap between pricing tiers, which isn't the case with AMD's two 7900s.

    • @harveybirdmannequin
      @harveybirdmannequin Рік тому +2

      That's why I am a fan of companies focusing on only 2 or 3 high end models for each generation. Anyone who wants something low end or cheaper can buy last generation. It's a waste to put time and resources into making something purposely lower performance just for marketing or price points.

    • @Rushtallica
      @Rushtallica Рік тому +7

      @@harveybirdmannequin The vast majority of gamers buy lower-cost models, not $800+ models. The 1060 was (maybe still is) the card most used on Steam surveys for years.

    • @noergelstein
      @noergelstein Рік тому +6

      @@harveybirdmannequin
      But then you have no lower-power models, and the previous generation's high-end models also don't get cheap enough (in production) to be sold as low-end models at a profit. You need the models with small chips and simpler boards to produce the volume, and at the cost, that the low and middle end of the market requires.

    • @ColdRunnerGWN
      @ColdRunnerGWN Рік тому +2

      @@Rushtallica - The 3060 is now the most popular card, which tells you a lot. It tells you most people either don't give a fig about RT and 100+ fps or they can't afford it.

  • @Psychx_
    @Psychx_ Рік тому +41

    Even if on average only 30% of the additional ALUs are utilized, that's still a win. What I find impressive about this architecture is that (including the node shrink) ALU count and theoretical performance per area have doubled - in addition to energy efficiency.
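    A rough way to read that 30% figure (my back-of-envelope sketch, not AMD data): treat each original SIMD lane as one issue slot and the new dual-issue path as a second, only partially usable slot.
      # Back-of-envelope: if only 30% of the added (dual-issue) slots get used
      # on average, per-CU throughput still rises by that fraction.
      base_slots = 1.0                 # original issue slot per lane
      extra_slots = 1.0                # added dual-issue slot per lane
      utilization_of_extra = 0.30      # the 30% figure from the comment above
      effective = base_slots + extra_slots * utilization_of_extra
      print(f"effective per-CU throughput: {effective:.2f}x of RDNA2")  # 1.30x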

    • @HighYield
      @HighYield  Рік тому +2

      It's definitely a win for AMD, as it is really space efficient. I made this video to explain why it's not really 12,288 stream processors and why this huge amount of theoretical power doesn't directly translate to gaming.

  • @donaldsmith881
    @donaldsmith881 Рік тому +51

    I really want to wait and see benchmarks and what AIBs do with these cards. Right now there's a ton of confusion about how these cards are going to perform. I feel that judging them before seeing benchmarks and the AIB releases is very premature; this new chip design may be capable of performance we can't see without testing. Looking forward to seeing your next video.

    • @glitchinthematrix9306
      @glitchinthematrix9306 Рік тому +2

      We all know what AIBs will do with the price... fuck that.

    • @u13erfitz
      @u13erfitz Рік тому +3

      It seems pretty clear to me it will perform like a 4080 in raster and worse at ray tracing. AMD is fairly clear about that.

    • @florin604
      @florin604 Рік тому +2

      @@u13erfitz faster in raster

    • @garethjackson6187
      @garethjackson6187 Рік тому

      @@u13erfitz zero

    • @u13erfitz
      @u13erfitz Рік тому

      @@florin604 likely but I want to actually see them both.

  • @lbpk5786
    @lbpk5786 Рік тому +34

    I think AMD is leaving room for potential 7950XT and 7970XTX 3 GHz models, possibly with more CUs, faster 24 Gbps memory, stacked Infinity Cache (192 MB), 450W power targets, or even dual-GCD designs. However, they probably don't think consumers want $1200+ GPUs right now considering the economy. Instead, they want to focus on power efficiency and the mid-range market.

    • @SirSleepwalker
      @SirSleepwalker Рік тому +19

      TBH I wouldn't call $1k cards midrange; they're really enthusiast cards, and the 4090 at $1700 is basically pro range with a handful of gamers going for it as well.

    • @SettlingAbyss96
      @SettlingAbyss96 Рік тому +1

      I agree with most of this, but I think they're doing it because they would rather wait to see if Nvidia comes out with a 4090 Ti, so they don't get one-upped like they have before.
      AMD knows Nvidia is OK with creating a huge GPU that they will probably sell at a loss just to hold the performance crown - mindshare is the main thing keeping Nvidia ahead. Most of the midrange GPUs aren't good enough at RT, and that's not a realistic selling point anyway because the top games played on Steam today generally don't even have ray tracing.

    • @TdrSld
      @TdrSld Рік тому +1

      @@SettlingAbyss96 Yep, as much as Nvidia wants RT to be a thing across the board, it's not. Right now RT is a "gimmick" to say "look at the shiny, please buy our product", but it's a huge performance killer for not much overall graphics uplift to the eye. Just some better shadows is all I have seen when playing on friends' rigs with 3000-series cards.

    • @shepardpolska
      @shepardpolska Рік тому

      @@TdrSld To be honest, there are games where RT is worth it. They just make up a small part of the market, and raster is still what will let you actually play a game, not RT.

    • @Rushtallica
      @Rushtallica Рік тому

      I heard months ago that an AMD engineer was more excited looking toward the next/post-7000 series gen. If so, this might be another stepping stone gen for AMD, though still with decent cards to fill in the gap. I'm curious to see how both companies will progress and whether NVidia will do something different regarding using monolithic dies.

  • @pentaborg871
    @pentaborg871 Рік тому +8

    One thing that this RDNA 3 architecture says to me is that if AMD wanted to, they could have made a GPU with a die size matching Nvidia's, beefed up the power to 450-500W along with the clocks, absolutely massacred Nvidia in raster, and been significantly closer in ray tracing.
    The 4090 die size is 608 mm². The 7900 XTX is 533 mm² (combining the 308 mm² GCD with six 37.5 mm² MCDs). For a die size 87.6% of Nvidia's 4090, and a much lower clock than the 3 GHz it's architecturally capable of achieving, the reference 7900 XTX still gets 90% of the performance (based on extrapolating the data from AMD's slides, TechPowerUp figures, and other review data).
    That's actually really incredible. It will be very interesting to see how much performance gain AIB partners will achieve over the reference.
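    A quick check of the die-area arithmetic above, using the commenter's numbers (the real GCD/MCD figures may differ slightly):
      gcd = 308.0             # claimed GCD area in mm²
      mcd = 37.5              # claimed per-MCD area in mm²
      navi31 = gcd + 6 * mcd  # total silicon in the 7900 XTX package
      ad102 = 608.0           # claimed 4090 die area in mm²
      print(f"Navi 31: {navi31:.0f} mm², {navi31 / ad102:.1%} of AD102")  # 533 mm², ~87.7%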

    • @VoldoronGaming
      @VoldoronGaming Рік тому +4

      AMD is offering more performance per shader than Nvidia, at lower power consumption (and thus lower electricity cost), for less money, and it fits in your potential customers' cases, unlike the competition. An incredibly smart business move and customer-friendly to boot.

    • @ronosmo
      @ronosmo Рік тому

      As Adored pointed out, the size of the compute die is actually more like 200 mm²! If that's true, AMD can produce a die with twice the performance for just 85 watts extra.

    • @HighYield
      @HighYield  Рік тому

      Yes, AMD could have made a much faster GPU. 600mm² GCD with eight MCDs and stacked 3D cache. The question is, does it have a market?

    • @pentaborg871
      @pentaborg871 Рік тому +1

      @@HighYield And this is an extremely good point. AMD knows that if they try to compete at the very top, people will just buy Nvidia. AMD is being extremely strategic by ignoring it for now: produce something fairly close to the 4090, but not a 4090 competitor, for way cheaper in order to win over market share, then bring out the big guns down the line... like a hypothetical 7970 XTX.

    • @starkistuna
      @starkistuna Рік тому

      @@HighYield I think they did not need to go that big to beat them, but they should have used more CUs than just a measly 20% increase. Their die could have been a bit bigger, and they could have delivered the real 70% uplift they are claiming, which right now is just marketing bullshit.

  • @Psychx_
    @Psychx_ Рік тому +14

    Since RDNA3 has omitted the old geometry pipeline (it's NGG only) and Wave64 is still supported (and used, esp. for geometry & RT, sometimes also pixel shaders), it should theoretically be possible to map 2 Wave64 to 4 SIMD32.

    • @crylune
      @crylune Рік тому +2

      Whatever any of that means

    • @HighYield
      @HighYield  Рік тому +1

      I'm waiting on more info about the RDNA3 architecture. NGG-only has been confirmed by AMD; the question is the true impact. Same with the new AI cores, really interesting stuff.

  • @theminer49erz
    @theminer49erz Рік тому +1

    First of all, I am so happy to see so many likes and comments so fast!! I saw this last night and before I noticed it was only a couple hours after it was released and saw 600+ likes I was like "oh wow did I miss this yesterday?". Nope you are just getting the attention you deserve! I know I have nothing to do with it and it doesn't reflect on me, but I am still really happy to see you doing so well! Not only does it feel good to know your work is paying off, it also shows that not everyone wants the crap most people spew on YT and there are more people who are capable of appreciating your approach which is comforting! It's always nice to know there are more intelligent people out there than you think lol.
    I am wondering if their RT performance architecture has changed at all? My 6900xt has a big jump on RT than even the 6800. AMD's "50%" jump in RT claim referred to the 6900xt, but if they scale differently than RDNA2 then it may mean that lower BIN cards have more of a boost too? I guess we will see.
    I have yet to find a game that couldn't do full RT at 4K on my 6900 XT. It may be slower on the bench by a bit, but in the grand scheme of things it's nothing significant. In fact, I did some hands-on experiments with my card and a 3080 Ti that I had for another build I was doing for someone, and the 6900 XT killed it in whatever games I tried. I didn't use DLSS or FSR, just native 4K. It may have been faster with DLSS if the games had it, but the more I try any upscaling, the less I care for it. I noticed both FSR and DLSS seem to make the animations appear "accelerated". Not like more FPS, but more like sped up/fast-forwarded a bit. Not a lot, but enough to notice imo. Like when you can just tell someone sped up a video a tiny bit to make it look like they did something faster than they really did. If that makes sense.
    One last thought I have is about AMD's future plans for this hardware. They are not the type of company to rush something through the pipeline. They could very well just be taking it step by step with this new design especially. It could be like building the biggest engine you can, then slapping it on a chassis and suspension that hasn't really been tested with similar smaller engines, then selling it. This run could provide extremely important data for future chiplet GPU designs and drivers for them, while remaining stable as it's out in the wild. Imagine if they had tried to put the pedal to the floor on their first chiplet-based cards and messed up! I see it as potentially being very responsible in a way, perhaps. There may also be a lot of untapped potential in the cards already that, after some time and driver feedback, could eventually be realized via new drivers. They could even end up trying a "3D" variant down the road before RDNA4. Any shortcomings in performance will in no way match the difference in price with Nvidia, and the performance that may be getting left on the table by going AMD isn't that much really as far as I can tell. It may even be some time before it is even noticeable, except in a VERY few games (if gaming is your thing).
    I hope I am right about this. If I am and they do that, it will show a lot of restraint on their part and that is rare in modern business, especially tech. With so many of the "biggest tech companies " out there like Tesla, Meta, starlink etc always promising the moon and never fulfilling it, it's nice to see a company that actually makes things and money doing so, not follow that "model". I will be getting a full "7000 system"(CPU and GPU) in about a year, but in the meantime, I am perfectly happy with what I got now.
    The market wasn't only for the highest-end stuff even when it was so tight the past two years (not counting mining). Many people couldn't get anything at all. Most probably don't even have a real 4K display (meaning maybe a TV), so it's not really necessary for a large chunk of people to have such a nice card. They may decide to upgrade displays in the future, so getting a reasonably priced card that can eventually drive the display they want may be a good path for a lot of people. That way, in the meantime, they don't have so much money sitting unused on the "table". If any of that makes any sense. I often wonder if I make any sense, since I often get a glazed-over "I have no idea what you are talking about" look when I discuss things with people. Granted, I know I talk about a lot of things many don't care to even know of at all, let alone be able to hold a conversation about, but I can't rule out that I may also be insane. I mean, if I was, I wouldn't know, right? Lol
    Anyway, keep up the good work man and please allow yourself to be at least a little proud of the reception your work is getting! You may be already, as you should, but just in case you are like me and can have a problem with "imposter syndrome"/being overly humble at times, I just wanted to encourage you to give it a whirl. As well as let you know it is worthy of pride imo. Cheers!

    • @HighYield
      @HighYield  Рік тому

      I think the RT discussion is skewed by the hard-hitting RT games with over-the-top settings, like Cyberpunk with Psycho settings or Metro Exodus EE, which is basically an Nvidia-sponsored tech demo. When I bought my RX 6800 I thought it wouldn't really be capable of running RT games, but in reality, in the few games that do support RT, it does a good job. Just finished Guardians of the Galaxy with RT set to high and had around 80 FPS most of the time.
      But then again, I feel like it was a bold move from Nvidia to introduce RT with Turing, reserving a large chunk of the GPU for an unproven and, back then, unsupported new technology. RDNA3 will achieve similar RT performance to Ampere, and a few weeks ago that was the best you could get, so it's definitely good enough. But we all want head-on competition, so Nvidia and AMD both have to compete on price.
      I think chiplets are here to stay, and at some point we will see multiple GCDs; AMD is on the right path with this one. The first iteration is never the masterpiece.
      And thanks for your support! I still don't get the YouTube algo, but seeing this video take off was really enjoyable :)

  • @ejkk9513
    @ejkk9513 Рік тому +5

    They're using a dual shader for each compute unit (just as Nvidia did starting with Ampere). Since stream processors are actually shaders, that means the 7900 XTX does, in fact, have 12K stream processors (aka shaders).

    • @HighYield
      @HighYield  Рік тому

      Yes, in theory there are that many. But the number of "shaders", as I said, is just a number anyway. More doesn't mean more performance.

  • @jaredj9099
    @jaredj9099 Рік тому +20

    AMD was afraid to showcase actual performance. We need to keep our expectations low and maybe we’ll be surprised

    • @sspringss2727
      @sspringss2727 Рік тому +3

      Afraid? I think not. Did you not see the PCWorld video where the guy clearly said something he shouldn't have? AMD is planning something monstrous and it just needs a bit more time it seems, maybe another 6-8 months.

    • @102728
      @102728 Рік тому

      The guy said it's competing with the 4080, IIRC, and there are no benchmarks for the 4080 so far, so there's no comparison with Nvidia that makes it look good at the spot it's in. It's weaker than the 4090, and the uplift vs. a 3090 (Ti) won't look jaw-dropping either. Plus their marketing focuses on efficiency and value, not dumping watts into the die until the number is bigger than the other guy's number. As always, just wait for reviews.

    • @sspringss2727
      @sspringss2727 Рік тому +1

      @Rudolf van Wijk I don't think you know what I'm referring to. That was Frank Azor who said that. The guy I'm referring to also had Wendell from Level1Techs telling him he shouldn't have said what he said. Something bigger is clearly in the works. AMD would be silly not to compete in the high end - Zen does, so why do it for RDNA2 and not 3? I believe they hit a wall and it needed to be addressed. It has been, and it just needs time to come to market. It'll be here mid next year if I'm correct... at or around a $1200 price tag at most.

    • @102728
      @102728 Рік тому

      @@sspringss2727 Ooh, it's on the Level1Techs channel? Haven't seen that yet.

    • @HighYield
      @HighYield  Рік тому

      Yeah, it was noticeable. With Zen 3 or RDNA2 they directly attacked Intel/Nvidia, now they are very conservative. But on the other hand, the 4090 is based on a much, much larger chip and should be almost twice as expensive (looking at the silicon).

  • @NickChapmanThe
    @NickChapmanThe Рік тому +2

    Another great breakdown and explanation! Keep it up!

  • @catalystzer0344
    @catalystzer0344 Рік тому +6

    The RTX 4090 would've cost me not just the card but also a larger PSU, which adds up quickly, and it not fitting in some ATX cases adds up too. Nvidia would be smart to show off the 4080 against the 7900 XTX very soon, just to prove that the extra $200 is justified. So far I'm buying a 7900 XTX if supplies are good, because of the power-adapter issues and PSU requirements.

    • @HighYield
      @HighYield  Рік тому

      The 4080 should launch next week, so we will know all about it very soon. Since it's based on the smaller AD103 die, it's also quite a bit cheaper. I'm already planning a video ;)

  • @JonathanFulton
    @JonathanFulton Рік тому +10

    Great video!
    AMD is being smart because let's be honest - DATACENTER is where they make and secure their future - not enthusiast/retail. You stated AMD is looking at efficiency and die space with this new RDNA3 architecture. AMD is winning Data Center share by leaps and bounds and that is because of power efficiency. On the enthusiast side they always give a SOLID choice to you that is within the top performers - but they always do it in a manner more efficient than their competitors. Pure performance peeps don't care about that, but over the long run - it's the most steady and performance-pushing approach.

    • @TheGuruStud
      @TheGuruStud Рік тому +1

      That's not an excuse. Datacenter uses CDNA.

    • @sudeshryan8707
      @sudeshryan8707 Рік тому

      @@TheGuruStud its the same base architecture, not 2 different ones.

    • @shepardpolska
      @shepardpolska Рік тому

      @@TheGuruStud CDNA is a compute focused fork of RDNA. They share the base tech

  • @garyhall3919
    @garyhall3919 Рік тому +22

    I had always expected AMD to be on par with 3080 ray tracing, which I am happy with at $600 cheaper than a 4090.

    • @lbpk5786
      @lbpk5786 Рік тому +13

      The 7900XT should roughly match 3080 Ti in ray-tracing, while the 7900XTX should match the 3090 Ti.

    • @-Ice_Cold-
      @-Ice_Cold- Рік тому +2

      @@lbpk5786 It's enough

    • @SirSleepwalker
      @SirSleepwalker Рік тому +5

      @@-Ice_Cold- Agreed, for me if RT works well @1440p60FPS native I'll be content.

    • @marshinz5696
      @marshinz5696 Рік тому

      Cope card for brokies

    • @VoldoronGaming
      @VoldoronGaming Рік тому +9

      @@marshinz5696 Uh. No. Smart shoppers look at what they will be getting for their money. With a 4090 you are only getting 10% more raster than the XTX, but at $600 more. Only a moron would make that purchase decision. The smarter buy is the XTX or XT, since they made the 4090 and 4080 irrelevant.

  • @05DonnieB
    @05DonnieB Рік тому +6

    RDNA3 might have some nice fine wine with this architecture. I get the feeling that over time drivers are gonna really increase performance with such a new architecture.

    • @0Joshua026
      @0Joshua026 Рік тому

      Agreed, this reminds me of the RX 5700 XT launch: competing with the RTX 2060, then the 2070, and now the 2080 in some games. This card gives me that underdog vibe, and the fine wine is real, not just some BS.

    • @HighYield
      @HighYield  Рік тому

      Especially since AMD has to optimize for a new type of shader now. The question is, can they FineWineTM RDNA3?

  • @cosaovidiu2838
    @cosaovidiu2838 Рік тому +1

    Excellent video! Good job! :)

  • @tomtomkowski7653
    @tomtomkowski7653 Рік тому +6

    They did the same thing Nvidia did, where FP and INT work is done by the same cores, so the theoretical speed is 61 TFLOPS, but in reality it depends on the workload, as the cores can't do both at the same time.
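    For reference, the usual peak-FLOPS formula shows where a ~61 TFLOPS headline number comes from and why it halves without dual issue (the ~2.5 GHz boost clock is my assumption, not from the comment):
      shaders = 6144        # physical stream processors in Navi 31
      boost_ghz = 2.5       # assumed boost clock
      # dual issue (x2) and FMA counting as two ops (x2)
      tflops_dual = shaders * 2 * 2 * boost_ghz / 1000
      print(f"with dual issue: {tflops_dual:.1f} TFLOPS")      # ~61.4
      print(f"single issue:    {tflops_dual / 2:.1f} TFLOPS")  # ~30.7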

    • @HighYield
      @HighYield  Рік тому

      Yeah, especially with Ampere Nvidia went all-in on multi-issue shader cores.

  • @mwork79
    @mwork79 Рік тому

    Hey, so my question on this "new stream processor": AMD said they were making them more compute efficient. So does each stream processor have an SMT-like approach to it? That would make it compute more efficiently and explain why there are double the theoretical shaders.

    • @HighYield
      @HighYield  Рік тому +1

      Yes, it's kinda like SMT. They can run two computations at the same time now, but only in some circumstances. It's actually a good comparison for understanding the concept.
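      A loose analogy in Python (my sketch, not AMD's actual scheduling rules): like SMT, the second issue slot only helps when a second, independent instruction happens to fit it.
        # Count cycles when pairs of independent "simple" ops (FMA/add/mul-style)
        # can co-issue and everything else has to go one per cycle.
        def cycles(instrs):
            i, n = 0, 0
            while i < len(instrs):
                pairable = (i + 1 < len(instrs)
                            and instrs[i]["simple"] and instrs[i + 1]["simple"]
                            and not instrs[i + 1]["depends_on_prev"])
                i += 2 if pairable else 1
                n += 1
            return n

        ideal = [{"simple": True, "depends_on_prev": False}] * 4
        print(cycles(ideal))  # 2 cycles for 4 ops -> the best-case "2x" throughput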

  • @Fractal_32
    @Fractal_32 Рік тому

    This was a very interesting video and in my opinion one of the best explanations of how RDNA 3 achieves this performance. (In first party benchmarks.)

  • @crazybeatrice4555
    @crazybeatrice4555 Рік тому

    Good job on the video.

  • @NootNoot.
    @NootNoot. Рік тому +2

    Been waiting on this!

  • @ElCineHefe
    @ElCineHefe Рік тому

    Does the RX7900XTX handle ray tracing really bad? Or really slow? I don't understand its RT deficiency because I've always been on Nvidia cards. How bad did AMD's last generation handle RT?

    • @HighYield
      @HighYield  Рік тому

      In RT the 7900 XTX is about on par with a 3090 (Ti), so not bad at all, but quite a bit slower than a 4090 or a 4080. I'd guess it will compete with a 4070 (Ti) in RT.
      AMD's last gen was about as fast as Nvidia's previous one: a 6900 XT performs close to a 2080 Ti in RT.

  • @gstormcz
    @gstormcz Рік тому +4

    1. If AMD had gone to compete with the 4090 directly with the reference 7900 XTX and the price went to, say, $1300-1400, then many customers would already consider the $1600 4090. TDP would go up, 3x 8-pin would already be there, efficiency would be strongly debatable, and AIBs would have no room left for OC. The EVGA exit story could then repeat, but this time for AMD AIBs. The cooler and heatsink would grow to 4090 size, with the same space problems in cases too.
    Many customers still weigh Nvidia features as better (I don't care).
    Not really sure what a 3-fan reference cooler can do with reasonable RPM and noise.
    I am interested in the upcoming budget and midrange RX 7000 cards; it's clear I have to wait, but pricing of the 6000 series can still change, and with all tiers of RDNA3 out there would be more options. (I guess it's worth the wait as I upgrade my GPU only once every 5 years.)
    As interesting as the 7900s are, the 7700 or 7600 could be too, if they aren't just a transfer of the 6600/6700 to 5nm.
    More efficiency or higher clock speeds would not hurt.

    • @HighYield
      @HighYield  Рік тому

      I think Navi32 might be the more interesting chip of the two, looking forward to the 7800 class cards!

  • @AD34534
    @AD34534 Рік тому +16

    The 7950XTX is going to be the real shocker to nvidia, I'm sure. Chiplets allow AMD to do all sorts of miracles not possible with monolithic designs, especially in a recession.

    • @imperiousleader3115
      @imperiousleader3115 Рік тому +2

      If they can get multiple GCDs to communicate without major latency issues in RDNA4 then AMD will have a massive advantage over monolithic designs (ie NVIDIA) which are now reaching their practical limit.

    • @scroopynooperz9051
      @scroopynooperz9051 Рік тому +3

      Watched AdoredTv too? Jim did a good breakdown of this

    • @AD34534
      @AD34534 Рік тому

      @@imperiousleader3115 Maybe large amounts of cache chiplets can solve the latency problem (microstutter) that we've seen on dual-GPU setups.

    • @imperiousleader3115
      @imperiousleader3115 Рік тому

      @A B Diminishing returns with increasing L0/L1 cache. If they are sharing the same information, yes, I could see that - but in general compute, where two cores are working on different subsets of data, I'm thinking less so. I did like the idea of one master with multiple slave cores that I saw posted somewhere... use enough raw increase in multi-chiplet compute power to cover the efficiency hit.

    • @AD34534
      @AD34534 Рік тому +2

      @@scroopynooperz9051 Just watched it and it seems pretty convincing. Nvidia will scramble for their 4090 Ti which should match the 7950XTX in performance, but it'll also be more expensive because of poor yields. This is where AMD should capitalize and lower prices in order to gain market share.

  • @chapstickbomber
    @chapstickbomber Рік тому +1

    Worth mentioning that Navi 31 has 192 ROPs running at a higher clock and much more bandwidth to feed them. Fill-limited workloads should be >60% faster.
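    Rough fill-rate math behind that ">60%" figure (the Navi 21 baseline of 128 ROPs at ~2.25 GHz and a ~2.5 GHz Navi 31 clock are my assumptions, not from the comment):
      def peak_fill(rops, ghz):
          return rops * ghz              # Gpixels/s, ignoring bandwidth limits

      navi21 = peak_fill(128, 2.25)      # ~288 Gpix/s
      navi31 = peak_fill(192, 2.5)       # ~480 Gpix/s
      print(f"~{navi31 / navi21 - 1:.0%} higher peak fill rate")  # ~67%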

    • @HighYield
      @HighYield  Рік тому +1

      Yes, you are right, quite a hefty step up there for AMD.

  • @CraigBlack123
    @CraigBlack123 Рік тому +1

    Solid analysis as usual.

  • @HazzyDevil
    @HazzyDevil Рік тому

    Great video :)

  • @clearlight293
    @clearlight293 Рік тому +6

    Let's be honest, AMD could have designed a bigger die with more CUs to compete with the 4090 or even beat it, but from a cost/performance pov it was not worth it to them, the ultra high end being just a small % of the market.

    • @SirSleepwalker
      @SirSleepwalker Рік тому +2

      Yeah, and they don't have enough consumer trust to sell many cards at $1500 while there's an established, trusted brand. I was an Intel guy for years and I was really unsure about going Ryzen; in the end I did it because Intel was offering crap at the time, and I'm very happy with AMD's platform, so I'll stick with them. This time I'm an Nvidia guy, but seeing what Nvidia released and for what money, I'll give Radeon a go, and I'm pretty sure I'm not the only one with that mentality. In general, if they were offering the same thing as Nvidia at a slightly lower price, I would buy Nvidia, because I've used it for years and there would be no reason to change.

    • @clearlight293
      @clearlight293 Рік тому +3

      @@SirSleepwalker Only used Nvidia as well, but if AMD gets their video encoder to Nvidia's NVENC quality, I'll buy an AMD GPU. Nvidia got it twisted thinking it can rip off gamers with miner prices; unfortunately for them, the mining party is over. I'm sure even the 7900 XT beats the 4080 in raster, and Nvidia will be forced to drop the 4080 price to $800 (what an '80-class GPU should have cost anyway) or come out next year with a Super refresh offering a better price/perf ratio.

    • @u13erfitz
      @u13erfitz Рік тому +1

      @@SirSleepwalker Drivers on AMD have been good for a while now. The only issue I have had is, honestly, that League doesn't like playing in full-screen windowed mode.

    • @HighYield
      @HighYield  Рік тому

      Agree, AMD just doesn't have the "premium" brand like Nvidia does, so I'm sure their market analysis is different.

  • @reptilespantoso
    @reptilespantoso Рік тому

    One of the best explanations on youtube.

  • @dennismeurs9736
    @dennismeurs9736 Рік тому +2

    Just waiting for the rx7950xt and the rx7970xtx...

    • @HighYield
      @HighYield  Рік тому

      By then you will be waiting for the 8900 XTX ;)

  • @j.w.grayson6937
    @j.w.grayson6937 Рік тому

    Nice analysis!

  • @joehorecny7835
    @joehorecny7835 Рік тому +1

    I think I'm going to try to grab a 7900 XT when they are released, if they are fairly easily obtainable. If they are gone and 7900 XTX cards are available, I might spend the extra $100 on that. I don't need the fastest GPU on the planet, I can settle for #2 or #3, and I want to use it for machine learning and other compute tasks besides gaming. I like that they use less power; running a machine learning task 24x7 for 3-5 days takes a hunk of electricity. Nice analysis as always!

    • @Rushtallica
      @Rushtallica Рік тому +1

      I agree, and if those cards end up being hard to obtain or end up being hiked, I'll likely skip this gen altogether. :)

    • @NootNoot.
      @NootNoot. Рік тому +2

      I don't know if you plan on using ML or other compute tasks professionally, but just be sure to check out reviews before you purchase. Just to save you some headache, it's a more involved process than just slapping in an RTX card. Unless you know this already - good luck with your purchase!

    • @HighYield
      @HighYield  Рік тому +1

      If you are in the market and don't have $1600+, but want a high-end card, it's between the 7900 GPUs and the 4080, but Navi 32 (7800) and the 4070 could also be interesting imho.

  • @angrygoldfish
    @angrygoldfish Рік тому +2

    I think the XT variant is to entice people to pay more for the XTX. If it's 10% slower while being 10% cheaper, many will just go for the XTX because the performance per dollar will be the same. If AMD has good yields on the XTX then their profit margins will be higher. Not by much, but it all counts.

  • @Austin1990
    @Austin1990 Рік тому

    To be fair about the shaders, it is similar to how NVIDIA doubled its CUDA cores with Ampere. But, he does a very good job going over this; most people mostly ignore this factor.
    I am disappointed too, but I was disappointed about Ampere's CUDA count being fake as well. At least the misunderstanding with RDNA 3 was from leaks and not AMD's own marketing.

  • @FLCLimaxxx
    @FLCLimaxxx Рік тому +6

    I think that RDNA3 would need to use a lot more power to beat the 4090, and AMD has to stick to their perf/W claims and efficiency goals. They can release a refreshed 7950XTX next year, maybe even on TSMC 4nm, with 1-hi stacked cache, more shaders and higher clocks, drawing 380-420W. RDNA3 has a power penalty with the chiplet design from moving data in and out of each MCD, and that's factored into the 355W TDP. That's probably why the card is clocked so low, and why AMD locked the BIOS that AIBs have right now, but they will likely let them run the cards at 400W on their own designs later. Right now, with their competitive price positioning against the 4080, it's a clear win for AMD anyway.

    • @anuzahyder7185
      @anuzahyder7185 Рік тому +1

      The 4090 can easily beat the shit out of this GPU even running at 350W. Check der8auer's video.

    • @nedegt1877
      @nedegt1877 Рік тому

      @@anuzahyder7185 Yes theoretical BS if you haven't seen 3rd party benchmarks yet. Anyone who claims any win or loss for the 4090 is just Full Of💩

    • @anuzahyder7185
      @anuzahyder7185 Рік тому

      @@nedegt1877 Dumbo. If you seriously believe in those numbers, then you should also believe that the 6950 XT has better RT performance than the 7900 XTX. Why? Because without RT the 7900 XTX was 70% faster in Cyberpunk; with RT, only 48%. So believe this as well. Those performance figures were total bull 💩.

    • @nedegt1877
      @nedegt1877 Рік тому +1

      @@anuzahyder7185 I don't think you know what you're talking about. But either way just wait for the actual launch and then we can discuss anything you like. But as long as we don't have 3rd party benchmarks, we can't know anything for certain. Just theories and fanboyism.

    • @anuzahyder7185
      @anuzahyder7185 Рік тому

      @@nedegt1877 I know what I'm talking about. What I'm saying is don't believe in those numbers. No settings were mentioned, you don't know which scene they were benchmarking, etc.
      Yes, wait for third-party benchmarks. The only thing I'm sure about is that it will not touch the 4090.

  • @themightyquinn1343
    @themightyquinn1343 Рік тому +1

    Tbh I really don’t care about ray tracing that much. I’d much rather just keep the extra frames. That being said I’m pretty hyped for the new cards. I likely won’t even buy one, since it’s just not time to upgrade yet, but I’m excited for what they can offer

    • @HighYield
      @HighYield  Рік тому

      I fully get your point, and RT is still in its early stages. Two years ago I had a similar feeling, but even my RX 6800 runs RT well enough to notice that it can be really awesome if implemented correctly. It's just that much better at fooling the brain, especially when it comes to lighting.
      That said, 3080-3090 RT performance is not bad at all, just quite a bit behind a 4090.

  • @RobBCactive
    @RobBCactive Рік тому +4

    That's still about 96 more RT cores than I use!
    It comes in about where I expected, I simply didn't think gaming performance would scale linearly. It looks like a huge generation leap in performance.
    Having a 300mm² GCD without memory bandwidth starvation suggests an easy way to follow up.
    The GPU size and power usage seem far more reasonable. The branding of 7900 XT(X) leaves Navi 32 looking like 7800-level, so I'm concerned that the 6nm monolithic memory bus/cache won't be increasing mid-range performance as much.
    The dual encode/decode engine and the emphasis on software improvements sound good; the RX 6000 Adrenalin software has come along nicely.

    • @HighYield
      @HighYield  Рік тому

      I'm not playing a lot of RT games either, but it can look really good if done right. I think it's the future of graphics, and at some point AMD needs to catch up.
      On the point of Navi 32, I think it might be the more interesting chip.

  • @gstormcz
    @gstormcz Рік тому

    What makes a GPU fast in gaming, then?
    Is there some formula for calculating gaming performance from GPU specs?
    Or is it about both GPU architecture bottlenecks and the game's calls for the actual workload, which make it either stutter or perform slower?
    Question no. 2: what makes a GPU fastest, given it's limited by the best node (4nm currently, or 5nm for AMD) and limited to a TDP that can reasonably be cooled with air?
    3. Is there a possibility to make a GPU fast in gaming but weak in cryptomining or AI?
    4. Are there any game engines making stable, easily programmable, fast, non-taxing, high-resolution games that don't prefer any particular GPU?
    How important is the fact that AMD and Nvidia use different scheduler types?
    I saw some synthetic gaming benchmarks; it seems they use some basic major types of calculations used by game engines, weight them by their utilization in major game titles, and compare that with the benched GPU's strengths in each category, right?

    • @HighYield
      @HighYield  Рік тому +1

      Games perform differently from compute; it's harder to fully utilize multi-issue shaders with games. RDNA/RDNA2 has really good gaming performance per TFLOP, Ampere and RDNA3 not so much.
      Cryptomining is mostly limited by memory bandwidth; sadly that's something games also really need. You could try limiting it in software, but it's hard. With Ethereum going proof of stake, though, mining is less impactful.

  • @545gaming9
    @545gaming9 Рік тому +2

    Still excited for the 4090 I ordered last week 💀

    • @HighYield
      @HighYield  Рік тому +1

      The 4090 is a really nice piece of tech, try running it at a lower power limit.

  • @IronGuardLegionnaire
    @IronGuardLegionnaire Рік тому +1

    As long as AMD doesn't gimp OC potential, we are gonna get >3 GHz AIB cards with three 8-pins. I'm hyped, even though I'm more interested in the reference design.

    • @HighYield
      @HighYield  Рік тому

      The question is, how does the efficiency scale?

    • @IronGuardLegionnaire
      @IronGuardLegionnaire Рік тому

      @@HighYield if you are pushing 3ghz you don't care about efficiency

  • @RobertJohanssonRBImGuy
    @RobertJohanssonRBImGuy Рік тому

    amazing tech that can evolve more than big designs

    • @HighYield
      @HighYield  Рік тому

      Yes, smart tech doesn't have to be big. I think chiplets are the future and this is just the first trial run.

  • @kusumayogi7956
    @kusumayogi7956 Рік тому

    RDNA 3 and Nvidia's Ampere/Ada Lovelace run two ALU paths per CU/SM, so it looks like 12,288 shaders but it's actually 6,144. The doubled ALUs are like multithreading on a CPU.
    Doubling them doesn't mean double the performance.

    • @VoldoronGaming
      @VoldoronGaming Рік тому

      Correct, doubled CUs don't mean double performance, but AMD has managed to increase performance per shader by a considerable margin over RDNA2. They are getting 55% more performance with only a 17% increase in shaders. RDNA2 had to double shaders from the 6700 XT to the 6900 XT to achieve that. Impressive IPC gain in RDNA3.
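      Using the numbers in the reply above, the implied per-shader gain is just the ratio of the two scaling factors:
        perf_gain = 1.55     # claimed overall uplift
        shader_gain = 1.17   # claimed increase in shader count
        print(f"~{perf_gain / shader_gain - 1:.0%} more performance per shader")  # ~32%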

    • @HighYield
      @HighYield  Рік тому

      Yes, the shader counts can't be compared directly. In theory they can do twice as many ops as before, so the power is there, it just isn't easily accessible.

  • @maxwellsmart3156
    @maxwellsmart3156 Рік тому +12

    I think the use of "disappointed" is farcical; did you expect a unicorn for the price of a horse? The 4090 is a ludicrously sized monster that doesn't fit well in many cases; it's an effective marketing ploy to make anything else look inadequate. Real-time ray tracing is not that important unless you're drinking Nvidia Kool-Aid. It looks really cool in marketing material, but it doesn't enhance gameplay, does it?

    • @KibitoAkuya
      @KibitoAkuya Рік тому

      And even then a card that can melt itself is not that good a marketing ploy anyways

    • @SirSleepwalker
      @SirSleepwalker Рік тому +4

      RT might enhance the experience (not the gameplay itself), and it might not. When I turned RT on in Doom Eternal I had to google where it was because I couldn't find any difference except lower FPS; Cyberpunk, however, looks much better.

    • @crylune
      @crylune Рік тому +2

      @@SirSleepwalker rt doesn't enhance shit for me, it would've a decade ago but raster graphics have caught up. Sorry leather jacket man, too late

    • @HighYield
      @HighYield  Рік тому

      Don't we all want a unicorn for the price of a goat?
      I agree with your point, but I still hoped for a larger GCD.

  • @phil1pd
    @phil1pd Рік тому +1

    The only thing I notice with raytracing on is a huge drop in performance.

    • @HighYield
      @HighYield  Рік тому +1

      Yeah, there are plenty of games where all it does is hurt performance. But a proper RT implementation can look great, I think its the future.

  • @PsychoDiablo
    @PsychoDiablo Рік тому +5

    Can't wait to pair an RX 7900 XTX with my R9 7900X.

    • @Nokturnal33
      @Nokturnal33 Рік тому

      How is the 7900X handling temps? Is it constantly peaking at 95°C, and if so, have you tweaked the voltage? I'm debating between a 7900X and a 7950X, but I'm worried about the stability issues reported on the new AM5 platform.

    • @SweatyFeetGirl
      @SweatyFeetGirl Рік тому

      @@Nokturnal33 Ryzen 7000 is rock solid, and temps hit 95°C only if you leave the processor at stock settings. You should manually set clocks and voltage, and temps will drop by 15-25°C at 100% load depending on the app. They designed it to go to 95°C just to be at the top of benchmarks.

    • @HighYield
      @HighYield  Рік тому

      Ha, I also thought it's funny that the CPU and GPU names match this generation!

  • @JayzBeerz
    @JayzBeerz Рік тому +1

    Let’s wait for reviews bro.

    • @HighYield
      @HighYield  Рік тому +1

      Ofc, but we can still look at the architecture before.

  • @Psychx_
    @Psychx_ Рік тому +2

    As I understand Anandtech's explanation, "dual issue" seems to be something different than a 2-cycle issue or the 4-cycle issue that GCN had.
    We now have 8 blocks of SIMD32 units per CU, but those behave more like 2x4 blocks. Assuming that we have 4x32 work items requiring the same instr., we can utilize 4 of these blocks (Wave32) and this takes one cycle to finish - the other 4 blocks do nothing.
    Half utilization, but full IPC.
    Wave64 would be similar, just with 4x64 work items (SIMD32, hence twice the cycles for twice the work). Half utilization, half IPC.
    If we happen to have 2 groups of 4x32 work items and there are no dependencies between the 2 groups, then we can fill all 2x4 SIMD32 blocks and have them finish calculating within one cycle, yielding 128 results from instr1 (group 1) + 128 results from instr2 (group 2).
    Full utilization, full IPC.
    Wave64 would look like this: 2x(4x64) work items, two cycles to compute 2x256 results from two groups. Full utilization, half IPC.
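    A small model of the four cases described above (my reading of the comment, taking the 2x4 SIMD32 blocks per CU as given):
      BLOCKS, LANES = 8, 32          # 2x4 SIMD32 blocks, 32 lanes each
      PEAK = BLOCKS * LANES          # 256 results per cycle at full utilization

      def run(groups, wave):
          # groups: independent instruction groups (1 or 2); wave: 32 or 64
          cycles = wave // 32                 # Wave64 needs two passes on SIMD32
          results = groups * 4 * wave         # 4 wavefronts per group
          return results, cycles, results / (cycles * PEAK)

      for groups in (1, 2):
          for wave in (32, 64):
              r, c, u = run(groups, wave)
              print(f"{groups} group(s) of Wave{wave}: {r} results, {c} cycle(s), {u:.0%} utilization")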

    • @HighYield
      @HighYield  Рік тому

      AMD still has to go into more detail how this "dual-issue" exactly works. I guess at launch we will get more info.

  • @Psychx_
    @Psychx_ Рік тому +6

    RDNA4 will definitely come with another, massive command processor overhaul.

    • @Humanaut.
      @Humanaut. Рік тому +1

      Can you explain?

    • @Psychx_
      @Psychx_ Рік тому +1

      @@Humanaut. RDNA3 CUs are only partially doubled compared to RDNA2. There are twice as many SIMD units per CU, but the logic around them (RT cores, texture units, etc.) has not been doubled.
      It may be that AMD will complete that scaling process and/or halve the number of SIMD units per CU again, but instead increase the total number of CUs, if lithography allows for that.
      That would bring utilization benefits, but it needs lots and lots of internal bandwidth and a command processor that can keep the CUs fed.
      The command processor already seems to be a limiting factor (hence it's clocked higher), so it needs to change to allow the architecture to scale further.
      Should AMD disaggregate the shader engines into separate chiplets, that would need a rework of the command processor as well.

    • @Humanaut.
      @Humanaut. Рік тому

      @@Psychx_ Okay thank you very much for making the effort!
      I'm just starting to really get interested in the architecture of GPUs since the announcement of RDNA3 (I've been watching mooreslawisdead).

    • @HighYield
      @HighYield  Рік тому +1

      We still have very little info on RDNA4 atm; it could be a slight RDNA3 overhaul, or a complete re-design. I'm looking forward to the first credible leaks.

    • @Humanaut.
      @Humanaut. Рік тому

      @@HighYield I insta subbed after only watching one of your vids, I hope you keep going! : )

  • @imperiousleader3115
    @imperiousleader3115 Рік тому +3

    Thank you for the video. I do not totally agree with everything in this video... I _think_ there is a sizeable uplift in performance, but just like the change from Turing to Ampere for NVIDIA, the definition of what is a CU/shader core has become so fluid that it leaves TFLOP/compute comparisons almost meaningless. As an end user, I have given up trying to wade through marketing BS where either company decides to unilaterally change definitions.
    Disclaimer: I am ignoring Ray Tracing in this post.
    As an OpenCL/compute user:
    One of the issues for AMD has always been that their cores slow down if you mix workloads - and thus NVIDIA always creamed them when programmers mixed INT and FP workloads. Code sent to AMD GPUs that did not mix data types in a workload was OK - noting NVIDIA has always had an edge in INT32 - and I read somewhere that NVIDIA can also dual-issue INT32 instructions (maybe someone here can confirm this please?). I get about 50% faster INT32 throughput on a 3080 Ti vs a 6900 XT, for example... I'm not 100% sure if this reflects dual instruction issue or not.
    I can see that with RDNA3, if you have a lot of floating-point hardware-accelerated FMAD calls, for example, then they will not be performant on the INT32 cores; but for the mixed-datatype workloads seen in a lot of use cases - including games - let's say using INT for memory addressing and FP for graphics drawing, the general uplift will be excellent.
    From what I basically understand to date regarding RDNA3, based on the slides shown:
    For the most basic code doing bitwise operations only, there should be a greater than 2x uplift (general compute), as the FP/INT32 cores can do both,
    For hardware fused/accelerated functions such as FMAD the performance increase will be 20% as these can only function inside FP cores,
    For mixed workload calls the uplift will be probably 1.5-1.7x - which seems in line with the performance graphs shown by AMD.
    For the technically knowledgeable out there... does this seem broadly correct?
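    A crude sanity check of those three cases (my own Amdahl-style model, assuming ~20% more CUs at similar clocks, with f being the fraction of issue slots the second ALU path can fill):
      def uplift(f, cu_scale=1.20):
          # estimated gen-on-gen throughput gain for a kernel where a fraction
          # f of instructions can co-issue on the second ALU path
          return (1.0 + f) * cu_scale

      print(f"bitwise-only (f=1.0): {uplift(1.0):.2f}x")  # ~2.4x, i.e. >2x
      print(f"FMAD-heavy   (f=0.0): {uplift(0.0):.2f}x")  # ~1.2x, i.e. +20%
      print(f"mixed        (f=0.3): {uplift(0.3):.2f}x")  # ~1.56x, in the 1.5-1.7x band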

    • @UnworthyUnbeliever
      @UnworthyUnbeliever Рік тому

      Can you/someone verify if bitwise compute performance is doubled?
      Plus, as far as I know (I am not a GPU compute guy), most ML workloads use FP32, so it seems we are still not getting a consumer card capable of GPGPU? Then what was that 'game AI' engine they talked about in the announcement?

    • @imperiousleader3115
      @imperiousleader3115 Рік тому

      @@UnworthyUnbeliever I can only offer a basic/newbie opinion, but I do GPGPU on AMD and NVIDIA products - in particular INT8- and INT32-based work. If you use bitwise operations, FP32 and GIOPS are essentially identical. What gets lost/reduced are hardware-fused operations, for the most part. Most times "AI" is mentioned it seems to refer to Tensor cores. I will be looking to see if there are any specific INT8/FP16 speedups in the whitepaper, which might be indicative of an AMD-equivalent speedup using Xilinx FPGA IP. Having dual pipelines for FP32/INT32 issue will be most similar to Turing (ish), at first glance.

    • @UnworthyUnbeliever
      @UnworthyUnbeliever Рік тому

      @@imperiousleader3115
      Nice. More people should get into the habit of reading whitepapers/developer manuals/documentation.
      I personally cannot rule out the possibility that they used Xilinx IP in RDNA 3, but I think the first product where they really leverage the FPGA designs they have is probably CDNA 3 (XDNA?).
      Just one small question: do you use OpenCL on AMD hardware or their ROCm/HIP stack? Or that thing 'Vulkan Compute' (?!) that I heard of somewhere? (Is that even a thing?)
      For a beginner, which one do you think is a better start?

    • @imperiousleader3115
      @imperiousleader3115 Рік тому

      @@UnworthyUnbeliever I mix and match Nvidia and AMD GPUs, so I've kept it simple and use plain old OpenCL, which works perfectly for my basic compute kernels and lets me deploy very quickly. The flip side is the lack of familiarity with tensor and RT cores. I have been tempted to go down the HIP pathway to maintain a cross-vendor approach to accessing all hardware, but it is overkill for my use case. What are you wanting to do? Basic compute, or access tensor/RT cores?

    • @UnworthyUnbeliever
      @UnworthyUnbeliever Рік тому

      @@imperiousleader3115
      Well, I have an interest in heterogeneous computing/GPGPU, but I am completely clueless in this field for the time being; I am planning to start learning something along the lines of OpenCL as a hobby. (I'm not a fan of closed standards.)
      I don't like the AI/ML fields (the top-level programming aspect, not the concept), so if anything, library writing/optimizing or compiler optimizing for underlying implementations would be my goal, but as a hobby.
      I know it's kinda odd, people usually don't aspire to become implementation optimizers, but here we are.

  • @gruensein
    @gruensein Рік тому

    The dual-issue shaders are not so different from Nvidia's approach. Their doubling of core counts with Ampere was not achieved by actually doubling all core elements. Instead, the preexisting INT ALU was extended to be capable of processing FP as well, so the GPU now has to decide whether it wants to use it for FP or INT. As a result, the theoretical FP32 throughput increased enormously without it ever reaching that number in actual use.

    • @HighYield
      @HighYield  Рік тому

      Its pretty close to what Nvidia has been focusing on since Ampere, you are correct.

  • @zntei2374
    @zntei2374 Рік тому

    Maybe the XT will be sold at MSRP, but the XTX (especially AIB models) may reach higher prices - think $1049-1099+.

    • @HighYield
      @HighYield  Рік тому

      For the AIB models you could be right.

  • @johnvs7169
    @johnvs7169 Рік тому

    The biggest unanswered question is the lower clocks - all leaks pointed to 3+ GHz clocks (a leaked AMD slide too) and we got far lower than that. Waiting for independent reviews and benchmarks. If performance is near what they claim (85% of a 4090), it will be a decent contender - most people "buying consciously" understand RT in current-generation games is a gimmick. Overall my feelings are mixed.

    • @HighYield
      @HighYield  Рік тому

      If we look at RDNA2, the smaller chips always had the higher clock speeds. Navi32 and Navi33 might clock close or even above 3GHz. I think 2.5GHz boost (in games it could be higher) is decent.

  • @L2Xenta
    @L2Xenta Рік тому

    Cool story bro 👍.
    Also, I think you should watch the latest AdoredTV video if you didn't already. His theory is interesting, to say the least.

    • @HighYield
      @HighYield  Рік тому

      Ofc I already watched it. Have been watching Jim for years now!

  • @TrueThanny
    @TrueThanny Рік тому

    I'm disappointed in the dual-issue decision myself, but at least they're not lying about the core count like nVidia is. Ampere and Lovelace cards are advertised with twice the actual CUDA core count.

    • @HighYield
      @HighYield  Рік тому

      TBH, I'm kinda surprised they kept it real with the stream processor count.

  • @framedthunder6436
    @framedthunder6436 Рік тому

    1:25 Now the Vega 64 can challenge the 1080 Ti in some games.
    Vega had poor drivers when it launched.

    • @HighYield
      @HighYield  Рік тому

      Yes, the whole Vega architecture has aged pretty well vs Pascal.

  • @davidgunther8428
    @davidgunther8428 Рік тому

    I think the 7900XT price is so close because the 7900XTX was going to be priced higher but moved to under $1000 late. The performance difference seems to be more than the price difference, which makes the XTX the more appealing choice.
    I think about $1100 to 1200 was the initial target.

    • @janbenes3165
      @janbenes3165 Рік тому

      It might also just be that AMD is trying to convince people to spend $100 more than they would otherwise do. Hoping that people will think "I'm already spending $900, might as well go for the flagship $1000"

    • @Astravall
      @Astravall Рік тому

      Well, if that was a last-minute decision for the 7900 XTX, why not lower the price of the 7900 XT too?

    • @demoth
      @demoth Рік тому

      I think it's the opposite. The XTX was always $1k, and the XT was maybe $800-850. But when they saw the 4080 12 GB being cancelled, they saw an opportunity to fill that gap and bumped the XT to the same pricing as that card. Lack of competition often brings greed, and right now a new-gen GPU in the $900 range has no competition until Nvidia announces something.

    • @HighYield
      @HighYield  Рік тому

      Could be, but why wouldn't they lower the XT price at the same time?

    • @davidgunther8428
      @davidgunther8428 Рік тому

      @@HighYield I'm not quite sure, maybe the same reason they priced the R9 7950X so much better than the R9 7900X?
      Very similarly to Raphael, the RX 7900 XT and XTX cost about the same to make since the parts are the same (except RAM); one just performs better.

  • @technicallyme
    @technicallyme Рік тому

    The problem with Vega 64 is that they could never fill up the 4 SIMD units. RDNA halved the SIMD units and the result was efficiency. The second thing is that the work units went from 128 to 256, so while the CU counts are the same, the math is not.
    Hot Chips has an article on the Xbox graphics and how they essentially overlapped stream processors to save space. This is why the PS5 and Xbox chips are a similar size even though the Xbox has 40% more CUs.
    30- and 40-series GeForce cards are doing SMT on CUDA cores; they didn't physically double the core count - that would have required a huge shrink, even without counting the cache that was added.

    • @HighYield
      @HighYield  Рік тому

      Yes, RDNA3 follows in the footsteps of Ampere.

  • @ColdRunnerGWN
    @ColdRunnerGWN Рік тому

    The cynic in me says the $100 difference in the 7900XTX vs 7900XT is done for OEMs. Pre-built buyers will simply see a computer that has a similar number with a lower price tag. Not only will they get a $100 break on the GPU, but it will allow OEMs to pair it with a cheaper CPU so the consumer will notice a significant price reduction, and won't have a clue how good it performs.

    • @HighYield
      @HighYield  Рік тому

      And maybe to get people to say "heck, just gonna buy the XTX".

    • @ColdRunnerGWN
      @ColdRunnerGWN Рік тому

      @@HighYield - On the DIY side, yes that would definitely happen. However, the 7900XT is likely a binned version, so they will need to sell them somewhere.

  • @quajay187
    @quajay187 Рік тому

    If only AMD could put the same effort into their drivers.

  • @eugkra33
    @eugkra33 Рік тому +1

    If it's 50-60% more RT performance, and that's with 20% more RT cores, that's only about 25-35% more RT per CU. On top of that, there is the question of why AMD rounded all their numbers. Are all the 50% numbers really just in the 45-47% range? And the 70% claims are all more like 65%? They usually list actual percentages. A lot of shady stuff going on.
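    Quick check of that per-CU figure, dividing the claimed total RT gain by the claimed increase in RT core count:
      rt_gain_low, rt_gain_high = 1.50, 1.60
      rt_core_gain = 1.20
      print(f"per-CU gain: {rt_gain_low / rt_core_gain - 1:.0%} to {rt_gain_high / rt_core_gain - 1:.0%}")
      # -> roughly 25% to 33% more RT throughput per CU, before clock differences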

    • @HighYield
      @HighYield  Рік тому

      AMD showed 80% more performance in one RT game iirc, but ofc it will heavily depend on the game. I agree, the presentation omitted a lot of things.

  • @blackstar_1069
    @blackstar_1069 Рік тому +1

    I hope the new smart video thingy is on par with Nvidia's 30 series, I don't wanna give my money to the green giant.

    • @HighYield
      @HighYield  Рік тому

      AMD needs to step up with software support, the hardware seems capable.

  • @DrivinginNewYorkCityNYC
    @DrivinginNewYorkCityNYC Рік тому

    So the million-dollar question is: will there be a 7950 XTX, and when?

    • @HighYield
      @HighYield  Рік тому

      I think there's a good chance we will see an RDNA3 refresh, but I don't think we will see more shaders.

  • @SteusNC
    @SteusNC Рік тому

    Laid it all out beautifully, respect... ))

  • @pyroromancer
    @pyroromancer Рік тому

    Given how unenergized even the AMD presenters were during the reveal, this video makes sense.

    • @HighYield
      @HighYield  Рік тому

      I'm glad I'm not the only one who noticed, it felt like something was off.

  • @XeqtrM1
    @XeqtrM1 Рік тому

    In short, PCWorld had an interview with AMD where they asked why there was no comparison, and it's because the 7900 XTX is meant to compete against the 4080 16 GB. That's why they didn't show one.

    • @HighYield
      @HighYield  Рік тому

      Makes sense, but still felt kinda off, not comparing it to the competition.

  • @joepiazza3756
    @joepiazza3756 Рік тому +2

    The 7900 XTX seems to be in line with a 3080 Ti in ray tracing and the XT with the non-Ti. For me, I'll take a 3080's ray tracing for the next few years.

    • @JoeBlow-ub1us
      @JoeBlow-ub1us Рік тому

      Even then, RT performance is still pretty disappointing, and that's with the few cherry-picked, FSR-ridden graphs they showed for RT. Although RT performance is probably more important to me for my next upgrade than it is for most people. I'm still cautiously optimistic though.

    • @joepiazza3756
      @joepiazza3756 Рік тому

      @@JoeBlow-ub1us If a 3080's ray tracing is an issue for you, then get a 4090 and nothing else. If you want some ray tracing but don't care that much, then a 7900 XT is probably good enough, since a 3080 is fine.

    • @HighYield
      @HighYield  Рік тому

      It's not bad, it just still lags behind Nvidia, and they didn't close the gap.

  • @douglasmurphy3266
    @douglasmurphy3266 Рік тому

    Performance is scaling with memory. All of these different processes going on need their own real estate to live in memory.

    • @HighYield
      @HighYield  Рік тому

      They increased the on-die L0, L1 & L2$ and have a much larger 384-bit memory interface. I think bandwidth wise, RDNA3 should be good.

  • @nedegt1877
    @nedegt1877 Рік тому

    To be honest, Nvidia really needs the performance crown to stay relevant. Without their own platform it's going to be very hard to make money in a large portion of the computer market; the high-end alone is not enough to survive on. Sometimes I think AMD doesn't want to beat Nvidia, because RDNA could be much faster with minor changes.

    • @HighYield
      @HighYield  Рік тому

      Nvidia has a huge name to defend, "GeForce" is the #1 premium GPU brand and even if AMD would be faster, it would take some time for ppl to notice. Just like Ryzen took a couple of years to become a household name.

    • @nedegt1877
      @nedegt1877 Рік тому

      @@HighYield That is true of course, mindshare is very important, but mindshare is mainly driven by marketing, which Nvidia is pretty good at. Channels like LTT and J2C are also important for "what to buy".
      I'm certain that if AMD manages to beat Nvidia in ray tracing, it will change a lot of people's minds. Nvidia's reputation isn't that great these days; it's just the performance that keeps people buying GeForce cards.

  • @FLCLimaxxx
    @FLCLimaxxx Рік тому

    Just like with Ampere and Ada, the shaders themselves are slightly slower in RDNA3 compared to the previous gen.

    • @HighYield
      @HighYield  Рік тому

      Yes, more but slower, same as Nvidia.

  • @vulcan4d
    @vulcan4d Рік тому +1

    Can't wait for the 7950xtx!

    • @HighYield
      @HighYield  Рік тому

      I'm waiting for the 8900 XTX ;)

  • @mtunayucer
    @mtunayucer Рік тому +2

    1:26 1080 is pascal fam

    • @HighYield
      @HighYield  Рік тому +1

      Pascal was a huge leap, maybe even GOAT.

    • @mtunayucer
      @mtunayucer Рік тому

      @@HighYield Yeah man, I'm still using a GTX 1080.

  • @joakimedholm128
    @joakimedholm128 Рік тому +1

    Hold your opinions until AIB benchmarks are out! AdoredTV is pretty certain that the RX 7950 XTX will have two compute chiplets with 24,000 shaders combined.

    • @Decki777
      @Decki777 Рік тому

      Delusional AMD fanboy

    • @joakimedholm128
      @joakimedholm128 Рік тому

      @@Decki777 Adored is certainly NOT an AMD fanboy. He's a legend on YouTube and he gives his honest opinion, whether it's AMD, Intel or Nvidia!

    • @Decki777
      @Decki777 Рік тому

      @@joakimedholm128 Opinion doesn't matter, facts do, and not even AMD has said there will be a 7950 XTX. I thought the 7900 XTX and 7900 XT would be dual-GPU chiplet monsters, but they are not.

    • @joakimedholm128
      @joakimedholm128 Рік тому

      @@Decki777 Well, we don't know anything until AIB cards and benchmarks are out there. Until then it's an opinion, and that's a fact.

    • @HighYield
      @HighYield  Рік тому

      AMD has been pretty on point with their own benchmarks in the past and there is almost no chance we will see a dual GCD card this generation, tho this is clearly the future goal of GPU chiplets.

  • @heinzbongwasser2715
    @heinzbongwasser2715 Рік тому +2

    Why are you not growing faster? Good content.

    • @SirSleepwalker
      @SirSleepwalker Рік тому

      Because of the last phrase. If he was talking bollocks he would grow faster. :D

    • @HighYield
      @HighYield  Рік тому

      Growing fast enough, already too many comments to handle. And thanks for the support!

  • @raduv1619
    @raduv1619 Рік тому

    There is still a rumour about a 7950 XTX/XT which will have a higher number of shaders.

    • @HighYield
      @HighYield  Рік тому

      Honestly, currently I'm very skeptical.

  • @Zapzockt
    @Zapzockt Рік тому +2

    It might be that AMD is sandbagging a bit here, so as not to raise expectations too high or provoke too many reactions from Nvidia upfront. And since the architecture changed, I expect the drivers are just not fully matured yet; maybe AMD themselves don't know exactly how fast the cards will end up once the drivers are further fine-tuned at release or later. I think we should wait for the real benchmarks before we judge this generation. Aside from the drivers, I see options for AMD to release a two-GCD version later, maybe even with 3D-stacked elements, as a 4090 competitor.

    • @HighYield
      @HighYield  Рік тому +1

      I think AMD might be a bit conservative, but I don't think they are actually sandbagging.

  • @AwesomeBlackDude
    @AwesomeBlackDude Рік тому

    What I haven't seen yet on YouTube is a reviewer or critic talking about why AMD's high-tier card isn't worth $1,000.

    • @u13erfitz
      @u13erfitz Рік тому +1

      Good point. I do think the 7900 XT is bad value though. If you are spending $900, spend the $1,000 and get something way better.

    • @HighYield
      @HighYield  Рік тому

      GPU prices have been broken since the mining hype; sadly I don't think there's a way back...

  • @fluffycrepe4057
    @fluffycrepe4057 Рік тому +2

    I see it as a serious competitor. It's like 40% cheaper with, what, 80% to 90% of the performance, without having to melt your GPU or your wallet. I was waiting for the 4090, but then I saw the price; a $400 increase is too much, it's ridiculous. And to the people who bought one: you're just teaching them to abuse the market, because you'll pay for it. Same kind of people who buy cars $10k above MSRP. You're just fools.

    • @HighYield
      @HighYield  Рік тому

      I think it will have better performance per dollar (maybe aside from RT).

  • @dariusz6601
    @dariusz6601 Рік тому

    Is a 7950 XT or Radeon Pro version real?

    • @HighYield
      @HighYield  Рік тому

      Radeon Pro cards are like Nvidia Quadros, and a refresh is likely, but IMHO only after next summer.

    • @dariusz6601
      @dariusz6601 Рік тому

      @@HighYield Do you have any info about the 7950 XT or XTX?

  • @DoObs
    @DoObs Рік тому

    Oh yay...more Speculation. YAWN

    • @HighYield
      @HighYield  Рік тому

      Not speculating here, everything is based on information AMD released.

    • @DoObs
      @DoObs Рік тому

      @@HighYield My point exactly. Those numbers can't be used as any basis because they are never accurate. When users have a physical card in their hands for an actual comparison, then we can talk. But parroting exactly what has been shown is speculation: opinion based on what you see, not what you have actually tested yourself.

  • @johnmellinger6933
    @johnmellinger6933 Рік тому +1

    This is a huge win for AMD in performance. For so long they have been in the shadow of Nvidia. Right now I see AMD ahead of Nvidia in performance per watt, which I haven't seen happen since the Maxwell days. Nvidia being stuck on monolithic dies is hurting them, and it shows with the 4000 series. But now we just have to wait another month and about a week to see just how well the 7900 XTX does at its $999 USD price point vs the 4080 at $1,199.

    • @dante19890
      @dante19890 Рік тому +1

      That's not really an advantage they have. The 4090 is actually even more efficient: if you power-limit it to 350W it still outperforms the 7900 XTX in rasterization and runs circles around it with RT.
      Price is the only thing they can compete on... well, and form factor.

    • @SirSleepwalker
      @SirSleepwalker Рік тому

      @@dante19890 I see you benchmarked and reviewed a card that releases in over a month.

    • @johnmellinger6933
      @johnmellinger6933 Рік тому

      @@dante19890 Nobody who spends $1,700+ is going to limit the 4090 to 350W...

    • @dante19890
      @dante19890 Рік тому

      @@johnmellinger6933 It's only around a 7 percent loss for a substantial decrease in wattage.

    • @dante19890
      @dante19890 Рік тому

      @@SirSleepwalker We already know what kind of performance it's gonna have.

  • @joakimedholm128
    @joakimedholm128 Рік тому

    Navi 21 has 26,800 million transistors and Navi 31 has 58 billion!!

    • @Fezzy976
      @Fezzy976 Рік тому

      *billion

    • @joakimedholm128
      @joakimedholm128 Рік тому +1

      @@Fezzy976 That's what I thought too, but TechPowerUp claims million. They got it wrong lol

    • @HighYield
      @HighYield  Рік тому

      Yes, a huge increase, though not as huge as Nvidia's AD102 with its 76bn transistors!

    • @joakimedholm128
      @joakimedholm128 Рік тому

      @@HighYield Nvidia has the cache on-die though.

  • @ska4dragons
    @ska4dragons Рік тому

    I am not disappointed at all. I think AMD has a smarter goal. Go for efficiency, go for margins, go for price to performance, while the world is in recession and an energy crisis.
    Could they have fought Nvidia? I think they could have. But why? The vast majority of their competitive top end cards sat unsold until prices collapsed. At the highest end people still buy Nvidia even at a terrible price.
    Let's say AMD could make a card that beats the 4090 right now, and that they could launch a 4090 Ti competitor when it comes out. They push power, costs, and price to the breaking point, and people still buy Nvidia? They'd lose money and waste time.
    They make their process more efficient and cost efficient and down the road, maybe a refresh, maybe next gen, they have untapped performance ready to go for cheaper than Nvidia can compete with for better margins.
    Maybe. I think AMD is working smarter not harder.

    • @HighYield
      @HighYield  Рік тому

      I'm not disappointed by the product, I just wish AMD had done "more" with RDNA3. But it's for sure a smart design, fully agree!

  • @Knorrkator
    @Knorrkator Рік тому

    So Navi 33 (7600xt) is gonna be especially bad. It adds no compute units/shaders at all compared to its predecessor.

    • @NootNoot.
      @NootNoot. Рік тому +3

      We could be seeing 6700 XT performance, which generationally is consistent (new "budget" range is equal to last gen mid range).

    • @theoriginalmikee
      @theoriginalmikee Рік тому +5

      N33 should clock higher than the other Navi dies. I'd expect it to match/beat Navi 22 while using less power, so it should be fine. Just don't expect 6800xt levels of performance (as some people were suggesting)

    • @malathomas6141
      @malathomas6141 Рік тому

      Are you forgetting the much higher clock speeds?

    • @silverback2773
      @silverback2773 Рік тому

      Did you even watch the video?

    • @HighYield
      @HighYield  Рік тому

      Yes, it seems like the same number of CUs and thus also RT cores; just more capable dual-issue shaders and maybe higher clock speeds.

  • @TheStaniG
    @TheStaniG Рік тому

    We ever gonna get a 7950 XTX or what?

    • @HighYield
      @HighYield  Рік тому

      There's a good chance for a refresh, but not before next summer.

  • @jeffreymelton2200
    @jeffreymelton2200 Рік тому

    I am surprised that you didn't delve into the chiplet design even more and question why AMD would have only one graphics chiplet and not at least two. By having only one graphics chiplet, it is basically defeating the overall purpose of the chiplet design in the first place.

    • @HighYield
      @HighYield  Рік тому +1

      Don't worry, there will be another video in the future. For this one I just focused on the new shader architecture, because I felt like most tech YouTubers missed it (AdoredTV uploaded a video on this today).

    • @TdrSld
      @TdrSld Рік тому

      I would say it's about testing and leaving room to grow into a multi-graphics-chiplet design. It's like how they did the Ryzen 1000 series: all of those were a single compute chiplet at first, with bad units turned off to make the 6- and 4-core models. Once they have the design refined, we will most likely see multi-graphics-chiplet cards that work like the old dual-chip GPUs back in the day.

    • @shepardpolska
      @shepardpolska Рік тому

      It's about cross-GCD latency and scheduling two GCDs in a way that doesn't cause stuttering and screen tearing, I would guess.

    • @jeffreymelton2200
      @jeffreymelton2200 Рік тому

      @@HighYield Awesome, I look forward to the future content

  • @JoeWayne84
    @JoeWayne84 Рік тому

    Everyone remember the moral outrage all over the internet when Nvidia released two different cards with the same name?
    The entire internet tech media cried the exact same thing: Nvidia is confusing the consumer...
    AMD announced two different cards with the same name one month later, and no one mentions anything.

    • @HighYield
      @HighYield  Рік тому

      The outrage was not over the naming, but over the performance difference. The 4080 16GB is based on the Nvidia AD103 chip, while the "4080 12GB" is based on the smaller Nvidia AD104 chip. Big performance difference.
      7900 XT & XTX are based on the same chip.

  • @GraveUypo
    @GraveUypo Рік тому +1

    I just want the affordable cards to not be a fraction of the power, like Nvidia seems to be doing.

    • @HighYield
      @HighYield  Рік тому

      I think Navi 32 could turn out really nice, let's see. It's my mid-range hope this gen.

  • @mrshinobigaming8447
    @mrshinobigaming8447 Рік тому +1

    The 7900 XTX is:
    up to 80% faster in RT than the 6950 XT
    54% faster in raster
    by napkin maths

    • @hustleacc8075
      @hustleacc8075 Рік тому +1

      wtf he didn't even have the card to properly test it

    • @mrshinobigaming8447
      @mrshinobigaming8447 Рік тому +2

      @@hustleacc8075 AMD said 50% more RT performance per CU.
      The 6950 XT has 80 CUs, so assume 80 x 100 = 8,000.
      The 7900 XTX has 96 CUs: 96 x 150 = 14,400, which is 80% more than 8,000.
      So basically it's up to 80% faster in RT than the 6950 XT,
      and on average around 60% faster in most games, still behind Ampere.

    • @HighYield
      @HighYield  Рік тому

      Yes, let's see how close the napkin is to the reviews.
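
The napkin math above can be written out explicitly; a minimal sketch assuming AMD's "up to 50% more RT per CU" claim and the published CU counts, ignoring clocks and memory, so it gives an upper bound rather than a prediction.

```python
# Napkin math from the comment above, not a benchmark.
cu_6950xt, cu_7900xtx = 80, 96   # published CU counts
per_cu_uplift = 1.5              # AMD's "up to 50% more RT per CU" claim

relative_rt = (cu_7900xtx * per_cu_uplift) / cu_6950xt
print(f"Up to {relative_rt - 1:.0%} more RT throughput than the 6950 XT")
# -> "Up to 80% more RT throughput than the 6950 XT"
```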

  • @lilcheesyfry
    @lilcheesyfry Рік тому

    So basically it's an RDNA2 refresh.

  • @rocksfire4390
    @rocksfire4390 Рік тому

    The 7900 XTX isn't the final chip; there is still going to be a 7950 XT and 7950 XTX. The 7900 XT/XTX is for the 4080 (both crush it), and the 7950 XT/XTX are for the 4090/Ti. Seeing how close the 7900 XTX already is to the 4090, the 7950 XT is going to crush it. The 7000 series has a lot of headroom (way more than the 4000 series does).
    Also, ray tracing isn't as popular as you seem to think it is. The vast majority of people DO NOT CARE about ray tracing, and as such focusing on it is really silly and out of touch with what gamers want. There is a select group of people (20-25% of buyers) that want that stuff. The 7950 XT and 7950 XTX will be for those people, and the 7900 XTX and below is for everyone else. Which is fine, because again, most people just don't care about that stuff. The 7900 XTX is a very good deal; I'm sure the budget cards are going to have a tough time being a better deal, beyond not wanting to throw $1k at a GPU, which is 100% fair.

    • @rafawroblewski8900
      @rafawroblewski8900 Рік тому

      Yep, it's always like this: if you can't have it, don't care / hate it. RT is the best thing to happen in graphics in many years and the only way to go forward. It's so funny to see that every AMD fan DOES NOT CARE about RT. And the reason is so obvious: they just can't have it. I can't afford a Ferrari so I don't care about Ferraris, as simple as that. If AMD were king of RT, every AMD fan would scream how fantastic RT is (and it would be ok, because it really is).

    • @rocksfire4390
      @rocksfire4390 Рік тому

      @@rafawroblewski8900
      RT is only in games that support it, so if you don't play those types of games there is actually no reason to pay for RT. Don't get me wrong, RT looks amazing, but the cost to enable it is a lot more; it's easier to just not have it at all.
      We would feel the same way regardless: AMD buyers are money conscious, we wouldn't drop that kind of money on things.
      Like, I'm not going to drop $600 (actually $1.1k - $1.5k) more on a GPU just so I can have slightly higher fps with RT on.
      Even if I had millions of dollars just sitting around, I wouldn't buy stuff like that.
      I still have a GTX 970, so that should explain everything you need to know. The 970 is still amazing; Nvidia does make good shit, but their price increases are just disgusting.

    • @rafawroblewski8900
      @rafawroblewski8900 Рік тому

      @@rocksfire4390 Many games support RT now, and in games like CP2077 or DL2 it's a game changer. But yep, prices now are crazy.

    • @HighYield
      @HighYield  Рік тому

      While there's a good chance we will see a "7950" refresh at some point next year, I think not before summer and not as a 4090 Ti counter.

  • @Herbertti3
    @Herbertti3 Рік тому +1

    Somewhat disappointed. AMD really had a chance to mop the floor with Nvidia.

    • @HighYield
      @HighYield  Рік тому

      They could have gone all out; the question is if it's worth it for AMD. They don't seem to think so.

  • @NaumRusomarov
    @NaumRusomarov Рік тому

    the price-performance ratios we had in the past don't work any more. i'd have to wait for the proper reviews to understand what's happening. this is a middle-range card with the price of a high-end card, while nvidia's high-end has moved toward the $2k limit.

    • @HighYield
      @HighYield  Рік тому +1

      The crypto craze damaged GPU pricing forever.

  • @REgamesplayer
    @REgamesplayer Рік тому +3

    RDNA 3 is an amazing architecture. I'm not sure what you expect from AMD if their big wins are just a disappointment to you.

    • @HighYield
      @HighYield  Рік тому

      It's not a disappointment for me, I just hoped for a more all-out chiplet design.

    • @REgamesplayer
      @REgamesplayer Рік тому

      @@HighYield I think their architecture does not scale well at the moment, and an all-out design would end up in the same position as the previous generation: it would beat Nvidia's flagships in rasterization and power consumption, but would be an overall loser. Though it is just my assumption, based on my guess that AMD's engineers thought about it and came to the most rational solution themselves.
      The market has shown that people do not want a runner-up when they are buying a flagship product for over a thousand dollars. AMD also does not have the mindshare for people to prioritize them over their competitor for a similarly performing product.

  • @theonlyjumpyhumpy
    @theonlyjumpyhumpy Рік тому

    Try focusing on what it is, instead of what it is not. It IS calling out Nvidia for their pricing bullshit, as well as reading the room that is the current global market and playing the long game. If Navi 31 were better than the 4090, you would be paying for it, no doubt about that. $1,600 for a GPU is just stupid.

    • @HighYield
      @HighYield  Рік тому

      Pricing is much better, but $1K for a GPU is still absurd imho. Ever since the mining craze, pricing has been broken.

  • @ssaini5028
    @ssaini5028 Рік тому

    comparing TFLOPS from two different architectures is pointless

    • @HighYield
      @HighYield  Рік тому +1

      That's the exact point of this video. And since RDNA3 changed the shader architecture, we can't compare its theoretical numbers with RDNA2.
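
To illustrate the point, here's a minimal sketch of how the paper TFLOPS figures come about, using the published shader counts and approximate boost clocks (treat the clocks as rough assumptions). Counting the dual-issue ALUs puts RDNA3's theoretical number at roughly 2.6x RDNA2's, while the gaming uplift AMD itself showed is closer to 1.5-1.7x.

```python
# Theoretical FP32 throughput: 2 FLOPs per ALU per cycle (FMA) * ALU count
# * clock in GHz gives GFLOPS; divide by 1000 for TFLOPS.
def tflops(alus: int, clock_ghz: float) -> float:
    return 2 * alus * clock_ghz / 1000

# 6950 XT: 5,120 ALUs at roughly 2.31 GHz boost.
# 7900 XTX: 6,144 dual-issue ALUs, often quoted as 12,288, at roughly 2.5 GHz boost.
rdna2 = tflops(5_120, 2.31)
rdna3 = tflops(12_288, 2.5)

print(f"6950 XT:  ~{rdna2:.1f} TFLOPS")
print(f"7900 XTX: ~{rdna3:.1f} TFLOPS ({rdna3 / rdna2:.1f}x on paper)")
# ~23.7 vs ~61.4 TFLOPS, about 2.6x on paper; the second issue slot only counts
# when the compiler can actually pair two independent FP32 instructions.
```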

  • @natsu78999
    @natsu78999 Рік тому

    I'm excited for the RX 7900 XT; the $899 price is a big win for my buck.

    • @HighYield
      @HighYield  Рік тому +1

      It's a decent price in today's GPU market, but still a lot of money for a graphics card.

  • @rain4279
    @rain4279 Рік тому

    AMD, with its MCM approach, has way more ways to increase performance compared to Nvidia:
    They can double the size of the GCD, as it's only ~300mm^2.
    They can jump to a smaller node, which is available at TSMC.
    They can use faster VRAM (GDDR6X).
    On the other side, Nvidia is already going all out.
    Maybe they could make a bigger die, or have an MCM design hidden somewhere.

    • @shaolin95
      @shaolin95 Рік тому

      Sure sure...

    • @HighYield
      @HighYield  Рік тому

      Yes, in the future a chiplet design offers more potential to increase chip size without having horrible yields. Though GDDR6X is an Nvidia exclusive.

  • @Kintizen
    @Kintizen Рік тому +1

    What's going on is that the 4080 was cancelled. So the RX 7900 XTX has no competition other than the 3080 Ti and 3090, which the 7900 beats in price/performance. AMD's strategy was to beat Nvidia in cost and efficiency. Now a number of AMD employees have come out and said AMD is playing it conservative with RDNA 3. We should expect to see a 7950 and 7950 XTX that are competitive with the 4090 (maybe even a 7999, just for number play). Also, Nvidia is at its limit in terms of design for the RTX models; it's why the 4090 is power hungry and a cinder block that most can't fit or keep cool enough, and folks are returning them. We will see huge AMD market share gains, because Nvidia has to redesign their entire RTX line for the next generation. AMD employees have been talking about skipping RDNA 4 and just doing RDNA 3+ instead, since they have been so conservative. We will most likely get RDNA 3+ next December, and at the end of 2024 just get RDNA 4+ or 5.

    • @NovaDoll
      @NovaDoll Рік тому

      I'm annoyed that AMD held back. But I understand they just screwed over Nvidia with their good pricing.

    • @HighYield
      @HighYield  Рік тому

      Only the 12GB 4080 was cancelled; the "normal" 16GB 4080 will still release next week.

  • @sbacon92
    @sbacon92 Рік тому

    If you didn't show your face talking I'd have continued thinking it's a text-to-speech talking.
    Now I'm concerned it's a rendered meta face AI talking.

    • @HighYield
      @HighYield  Рік тому

      0111011100011010110101011101

  • @KrunchyTheClown78
    @KrunchyTheClown78 Рік тому +2

    I think AMD is holding back. They are probably gonna unleash a 2 GPU die 7950 that will crush Nvidia.

    • @HighYield
      @HighYield  Рік тому

      I honestly don't think so, but it would be a nice surprise!

  • @crazyelf1
    @crazyelf1 Рік тому

    Yes, I am disappointed with this design decision, although I understand why it was done. I would have preferred a larger die for the flagship and a bit more emphasis on ray tracing. To me, the smart strategy is to go for performance per mm2 for the "mid-range" and "mid-high-end" dies (so think dies like the older GTX 1060 and 1080), but then for the flagship to go as big as possible, even close to the reticle limit, which is what Nvidia has done.
    The mid range and mid high end dies will emphasize cost efficiency, while the flagship dies are for people who have the higher budgets. Also, having the "halo" product drives a lot of sales for lower end GPUs. AMD right now has a "mind share" deficit, partly because Nvidia has the halo products and because of the history of drivers. There are other issues of course - CUDA for workstation adoption is a big barrier for AMD.
    Essentially, AMD needs a 4090 competitor and a 4090Ti/Titan competitor when Nvidia releases one. Long story short, the 4090 has 16,384 out of 18,432 shaders enabled on AD102. The flagship needs to be able to compete.

    • @HighYield
      @HighYield  Рік тому

      I think one argument is that Nvidia still has the better "brand", and even if AMD were faster, it takes some time to build up a brand (see Ryzen). So maybe it's not worth it for AMD?