RDNA3 - what went wrong?

  • Published 12 Nov 2024

COMMENTS • 525

  • @mtunayucer
    @mtunayucer 1 year ago +87

    4:00 i guess the transistor count on ga102 is wrong here

    • @HighYield
      @HighYield 1 year ago +85

      Of course it is, that's the effing density number. Good catch! The transistor count is 28.3bn.

    • @mtunayucer
      @mtunayucer 1 year ago +23

      @@HighYield yay my first pinned comment! thanks!

    • @gstormcz
      @gstormcz 1 year ago +2

      Even if AMD's own expectation was to ship faster GPUs at release, once it missed that performance level the cards should have been priced a bit lower than announced.
      Idk if AMD's marketing team knew the typical set of games reviewers use for testing; they surely had enough options to get that info.
      Maybe the 7900 doesn't deliver Nvidia 90-tier performance, and it's tough to rename it as a 7800; that wouldn't be good marketing, knowing the GPU's potential. But buyers usually decide based on current performance. Maybe that's the reason for growing 4080 sales, but I guess many of those $1000+ GPU buyers were waiting on Navi31's overall performance, ray tracing included. I get them: at that level of performance they expect all features maxed in-game.
      But certain improvements, like a chip revision or fixes to things inside the chip (the shader prefetcher?), won't help the 7900s already released.
      I'm a bit worried that GPUs with this much raw performance would do well for cryptomining, if it returns.
      Isn't there a possibility that both AMD and Nvidia designed the new gen to perform great mainly at things other than gaming? Or is gaming performance worse than the raw numbers because of both poor game-engine optimization and each engine's specific emphasis on certain features/compute?
      (The new RDNA3 architecture could also explain the big difference in performance among the games tested.)
      But I guess AMD can improve with updates, at least to some level. Surely they know what games need, as long as AMD influences game development tools, and DirectX too, right?
      What makes me happy is that AMD delivered a nice surprise, beating the 4090 in some competitive online games; it could be a win for them... at least if that proves true for Navi31 too.

    • @HighYield
      @HighYield 1 year ago +8

      You noticed first, you got it right, you get the pinned comment! That's how it works here :D

    • @mtunayucer
      @mtunayucer 1 year ago +2

      ​@@HighYield ringing the bell helps mate :D

  • @kispumel
    @kispumel 1 year ago +132

    Do not forget, Nvidia actually jumped 2 nodes. Ampere was on Samsung's 8nm, which is just a refined 10nm. They now skipped 7nm and went straight to 4N, which is a refined 5nm. Previously with RDNA2 AMD had about 1 node advantage. Now they are ~half a node behind.

    • @MechAdv
      @MechAdv 1 year ago +32

      Nvidia jumped 2 process nodes and 3 generations of lithography machinery. It's currently the cutting edge of mass-production-capable lithography tech. That's why Lovelace wafers are reportedly 4 times the price of Ampere's. However, they massively cut die area for the 4080 vs the 3080, so it shouldn't be THAT much more expensive to manufacture gen on gen. The obnoxiously huge coolers for a 300W max chip add quite a bit of BOM cost, and the markups are huge. I still think the 4080 should have released with its own cooler and a $700-800 price tag.

    • @ismaelsoto9507
      @ismaelsoto9507 1 year ago +2

      @@MechAdv If it was released at that price it sure would be praised

    • @MechAdv
      @MechAdv 1 year ago +9

      @@ismaelsoto9507 For sure. I honestly would have bought one. I only upgrade when I can get 50% more perf for the same tier of card. I upgraded from a 980 to a 2080 to a 3080. Not willing to pay more than $700-800 for a GPU though, so it'll probably be another generation before I can get my 50% uplift at the price I'm willing to pay.

    • @ejkk9513
      @ejkk9513 1 year ago +13

      That's not true whatsoever. We need to stop using nanometers as a measurement for nodes; they started lying about nanometers back in 2009. TSMC's "5nm" node actually measures between 22-36 nanometers depending on how you measure it. We need to focus on transistor density. The 6900 XT's Navi 21 chip has a density of 51 million transistors per square millimeter; the 3090 Ti's GA102 has a density of 45 million transistors per square millimeter. As you can see, it is nowhere near an entire generational jump; a generational jump is generally 2x-3x the transistor density. So while TSMC's "7nm" node is better than the supposed "8nm", it's not by much.
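
      As a quick sanity check of those density figures, here is the division spelled out (a sketch in Python; the die areas of ~520 mm² for Navi 21 and ~628 mm² for GA102 and the 26.8bn/28.3bn transistor counts are assumed from public spec sheets):

          # Transistor density = transistor count / die area (assumed public figures).
          navi21 = 26.8e9 / 520   # 6900 XT's Navi 21
          ga102 = 28.3e9 / 628    # 3090 Ti's GA102
          print(navi21 / 1e6, ga102 / 1e6)  # ~51.5 vs ~45.1 MTr/mm^2, nowhere near a 2x-3x node jump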

    • @fissavids8767
      @fissavids8767 1 year ago

      @@MechAdv For real. As a water cooling enthusiast, the cooler is just something that will end up on a shelf.

  • @Psychx_
    @Psychx_ 1 year ago +178

    This is so frustrating. All of GCN had power draw issues. Those finally seemed fixed with RDNA1 and 2. Now we're back to square one. IMO AMD should have waited with the release until they fixed whatever is wrong here.
    Also, their driver team needs more people! Drivers being rough around the edges at every launch doesn't need to happen anymore with the profit AMD has been making during the last few years. This would also allow them to start thoroughly testing the HW way earlier, potentially allowing them to resolve issues before they ever become public.

    • @thana3270
      @thana3270 1 year ago +17

      AMD's drivers have always been a problem,
      which sucks cuz I'm looking to leave team green

    • @timothygibney159
      @timothygibney159 1 year ago +42

      As a former AMD user I have to say the drivers have drastically improved in the past 5 years

    • @ImaITman
      @ImaITman 1 year ago +27

      @@thana3270 I've got a 3080 right now, but AMD drivers don't really have those issues anymore. If anything I have more trouble with "Day 1" drivers Nvidia launches all the time. Makes me want to turn off auto update.

    • @atavusable
      @atavusable 1 year ago +13

      There are a few things that seem to have made AMD release anyway:
      - Pressure from investors
      - Soon-incoming tariffs in the US on electronics
      Considering that they plan to make the driver team work through the holidays, they intend to leverage whatever they can with the drivers.
      Btw, AMD's lies before launch were pretty disgusting.

    • @timothygibney159
      @timothygibney159 1 year ago +15

      @@atavusable Like how Nvidia's 4080 was going to be 300% faster than the 3090 Ti 😂. It's barely 15% faster, and crickets. It's only bad when AMD does it, as it would hurt the feelings of reviewers and fanboys who purchased them

  • @TheDaNuker
    @TheDaNuker 1 year ago +41

    The 1st CPU chiplet design, Zen 1, had plenty of weird performance dips if you weren't running all-core workloads, and there is probably something in RDNA3 that just isn't tuned properly yet. AMD probably had to shove this out the door now to please shareholders, and customers who buy now are going to get a sub-optimal version of RDNA3. Maybe we see a proper respin next year? But always hoping that ATI/AMD will do better next generation versus the competition is getting tiring by now.
    Lastly, it really feels to me that AMD should not have given these cards the 79xx top-tier labeling and price; they should have done what they did with the 5700 XT generation and labeled them one tier below for what they are.

    • @JoeWayne84
      @JoeWayne84 1 year ago +2

      Sure would have made more sense than milking the fanbase, who all just threw a fit because a 4080 AIB model was $1,400.
      So AMD decided they would release their 4080 equivalent at a $1,300 AIB price, and the AMD fanboys rejoiced!!!
      Anyone making excuses for a billion-dollar company releasing inferior products and raising prices is a moron.
      What happened to that whole "this card is going to be so efficient that it's basically going to compete with a 4090 at 250 watts"??? Haha, man, I heard some major-league stupidity from the AMD fanbase over the last 6 months, even after the presentation where they talked more about the I/O of the card than the performance haha... all those DisplayPort 2.0 fanboys at least got themselves a card now... for all the 8K 240Hz gaming they're gonna be doing.

    • @Klar
      @Klar 1 year ago +1

      I agree

    • @HighYield
      @HighYield 1 year ago +11

      Calling it 7800 XTX and 7800 XT would have been much better imho, but then the $1k and $900 price tags would have seemed odd, I guess AMD does like money.

    • @-Ice_Cold-
      @-Ice_Cold- 1 year ago +3

      @@HighYield ''AMD does like money.'' - Wow, really? You're Captain Obvious

  • @yogiwp_
    @yogiwp_ 1 year ago +38

    As always with significant arch change, I'll wait for the 2nd version or at least a refresh.

    • @HighYield
      @HighYield 1 year ago +2

      Not a bad attitude, but it also means you might be missing out ;)

  • @jackr.749
    @jackr.749 1 year ago +41

    My view on this: the 7900 series, being the first chiplet-based GPU, is a HUGE beta test, kind of like Zen 1 was. As people use these new GPUs and AMD gets feedback, they will change and fix things as they redesign the chiplets for RDNA4. With this in mind, if AMD goes the way I think they are heading, next gen could be VERY interesting: multiple chiplet-based GPUs (just like how Zen evolved). That's when AMD might have a wild performance swing on the positive side. But until then, I agree, the 7900 XTX is not meeting my expectations, so I'll just hang on to my 6900 XT and 3080 Ti until next gen.
    On the Nvidia side, agreed, the 4080 is way overpriced and so is the 4090, but the 4090 is the fastest GPU at this time, even if it commands a ridiculous price.
    Nvidia has come full circle: the 3000 series were power pigs (I know my FTW3 3080 Ti is) and now they are much better, whereas the AMD 6000 series was power efficient and now they are the power pigs.

    • @sunderkeenin
      @sunderkeenin 1 year ago +5

      Heck, I'm actually interested in what RDNA3 will look like 6-12 months from now after a lot of the teething issues peter out. The potential of the 7900XTX is already showing up in a replicable manner in rare cherrypicked scenarios. I don't think it's ok that it's going to take that long for something even close to that performance to show up, however as someone who is simply sitting and watching I think it'll be interesting to watch.

    • @robojimtv
      @robojimtv 1 year ago +1

      Tbh I don't think the chiplet part is what's causing the issue. In fact, when they eventually move to multiple GCDs I'd imagine we'll see even more issues.

    • @hardwarerx
      @hardwarerx 1 year ago +4

      I'm using a PowerColor 7900 XTX Red Devil LE, and the stability of the card is better than I expected. I use it for gaming (CoD MW2), video editing and browsing the internet, and so far it hasn't crashed once. Video encoding is crazy fast. The only problem is that multi-monitor idle power is quite high, at ~110W; single monitor doesn't have this problem. I'm waiting for a driver to fix it.

    • @disadadi8958
      @disadadi8958 1 year ago +4

      If it's truly a beta-test product to scout the chiplet design and its potential, then you should sell them as beta-test products. Right now the customers pay big bucks and expect a refined user experience like Nvidia offers; currently AMD falls short of what customers would expect from a $999 GPU.
      And the performance per watt is just horrendous, especially when they boasted about efficiency during the launch show.

    • @Dell-ol6hb
      @Dell-ol6hb 1 year ago +1

      @@disadadi8958 What do you mean it falls short of what customers expect from a $1000 GPU? It performs at about the same level as the 4080, which costs $1200 or more, so I think customers would be fine with the performance they're getting at that price point in comparison to competitors. Don't get me wrong, I think it's ridiculous to be spending so much money on a fucking graphics card, but that is what the market is like at the moment

  • @joehorecny7835
    @joehorecny7835 1 year ago +43

    It wouldn't surprise me if there are multiple small issues/bugs in the chip/driver. I work in software, and often the QA folks test A, and they test B, and they test C, and all three might pass, but when they combine A+B+C it doesn't go so well. Often, too, there is a push from the top to meet the "date". Everyone knows it's an artificial date, and does it really matter a whole lot if it's released on Monday or Friday? In most cases it doesn't, but from the Wall Street side it does seem to matter, and they really want to make that date. The TSMC capacity is an interesting point: if I was paying for that capacity, and had a choice of getting nothing for my money or something sub-optimal, I think I would take the sub-optimal instead of nothing myself. I'll bet they were hoping/praying that they would solve the issues between the announcement date and launch date; I've seen behavior similar to that in my career. If there are issues, I hope AMD can identify and correct them quickly.

    • @HighYield
      @HighYield 1 year ago +10

      Software could indeed be behind the majority of the problems. I'd be super happy if AMD would be able to greatly improve RDNA3 with upcoming drivers.

    • @misterj2253
      @misterj2253 1 year ago

      Dude you just described AMD for the last decade

    • @osopenowsstudio9175
      @osopenowsstudio9175 1 year ago

      @@HighYield It would be really good if AMD can fix this fast enough, like Intel fixing their Arc GPUs, the A750 and A770

  • @Typhon888
    @Typhon888 10 months ago +2

    I can actually confirm all of this. I’ve been on RDNA3 testing for fun and found RDNA2 more stable and was left scratching my head.

  • @trickyok
    @trickyok 1 year ago +36

    Very surprised how few subscribers you have given the quality of your videos. Objective reviews are always much better than subjective ones, in my opinion.
    Please keep making content like this; it's only a matter of time before you gain the audience you deserve.

    • @HighYield
      @HighYield 1 year ago +10

      I’m going to continue making videos regardless of how many subs (tho it’s more fun with an audience). Thanks for your nice comment!

  • @Psychx_
    @Psychx_ 1 year ago +19

    Power management having issues is something AMD's GPU division is no stranger to. Maybe some fundamental mechanism there (e.g., clock gating) is broken, so that unutilized parts of the GPU can't be shut down in some cases. As for the interconnect between the GCD and MCDs, this is said to consume around 20-30W.

    • @HighYield
      @HighYield 1 year ago +4

      It could be a couple of things. Maybe the voltages are fine, but it's higher-than-expected chiplet interconnect power draw or power gating problems. It also could be mainly a software issue; let's see if we find out at some point.

    • @misterj2253
      @misterj2253 1 year ago

      @@HighYield Personally I think it's a BIOS issue. Reminds me of when a default Vista driver for a hard drive would send 12 volts when it was supposed to send 9 volts; FYI, that issue was not fixed until Win7 and several BIOS updates

  • @TheDrTopo
    @TheDrTopo 1 year ago +73

    Thanks for making this video! It's really refreshing to have this kind of talk instead of fully dissing a product like the other youtubers out there!

    • @kamata93
      @kamata93 1 year ago

      GamersNexus and Hardware Unboxed fanboys incoming 😂

    • @TheDrTopo
      @TheDrTopo 1 year ago +4

      @@kamata93 Tbh I was thinking more of Coreteks. Don't get me wrong, I understand his point, but I still think it was too harsh

    • @HighYield
      @HighYield 1 year ago +4

      I think knowing the limits of my knowledge really helps. Thanks for the support!

    • @kicapanmanis1060
      @kicapanmanis1060 1 year ago +3

      I mean AMD tricked users with marketing, I think some blowback is only fair. If AMD themselves overhyped a product, it's only fair they get some PR damage from it. It's not behaviour people should be encouraging.

    • @kicapanmanis1060
      @kicapanmanis1060 1 year ago

      @@kamata93 Gamers Nexus review of the xtx was pretty good overall I recall. HU too despite the thumbnail.

  • @Morden97
    @Morden97 1 year ago +33

    Nvidia conceded a penalty kick and AMD sent it to the stands... didn't even hit the post

  • @ytmish
    @ytmish 1 year ago +8

    Your opinion seems the most balanced on the subject, from my point of view (I've watched other youtubers also).
    New technology (chiplets) will always have issues at first; that's normal. What's not normal is to waste so much reputation and trust, gained over many years, just to cover up the mistakes.
    As a solution, I don't know the best route, but if I were AMD I would somehow compensate the buyers of RDNA3.

  • @superscuba73
    @superscuba73 1 year ago +10

    If Radeon hadn't lied about performance in their reveal the reviews would be much more in Radeon's favor and there would be much less hate from the community.

  • @pieluver1234
    @pieluver1234 1 year ago +8

    The comparison in pure transistor count is not very fair. A significant portion of the die is dedicated to cache, which doesn't scale much in density from 7nm to 5nm, and the 7900 XTX has the same total die size while needing to dedicate a large portion to cache.
    Looking at the GCD: the 6950 XT has 80 compute units (5120 shader units), while the 7900 XTX has 96 compute units (6144 shader units). That's a 20% increase in CU count for a >30% increase in performance.
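
    For reference, the ratios in that comparison are easy to check (a quick sketch; the shader counts are the ones quoted above, at RDNA's 64 shaders per CU):

        # CU/shader scaling from the 6950 XT to the 7900 XTX GCD.
        cu_gain = 96 / 80 - 1          # +20% compute units
        shader_gain = 6144 / 5120 - 1  # +20% shader units (64 per CU on both)
        print(cu_gain, shader_gain)    # 0.20, 0.20 -> a >30% perf gain implies per-CU improvements too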

  • @zocher1969
    @zocher1969 1 year ago +16

    Good video, dude.
    I too think that Navi 31 is likely broken, but I think the largest miss is not performance but efficiency. I like my cards and CPU low-power and efficient, and the clearest sign of this die being unfinished or broken is the messed-up high power use when more than one monitor is in use. Something is up here, but performance, to me, is actually fine: it beats the 4080 and costs less. If there are driver updates that increase perf, that's just a bonus to me, but efficiency should be the focus

    • @b130610
      @b130610 1 year ago

      The high idle power usage isn't unique to rdna 3. I know that at least some rdna2 models will have elevated idle power consumption when using certain display configurations (esp multiple monitors with high or mixed refresh rates). It's not as high as rdna3, but still well above expected power consumption. My suspicion is this is a recent driver issue that happened to coincide with the rdna3 launch.

    • @zocher1969
      @zocher1969 1 year ago +1

      @@b130610 Optimum Tech has a video where he shows the numbers, and the power consumption is literally more than twice that of RDNA2, at about 100 watts

    • @b130610
      @b130610 1 year ago

      @@zocher1969 yeah, that's why I said rdna 3 uses more: rdna 2 can idle down at 10 to 15 watts, but will suck down over 40 with some monitor configurations which is not normal behaviour. That's all I was getting at.

    • @Dell-ol6hb
      @Dell-ol6hb 1 year ago

      try undervolting, AMD cards are usually really good undervolters in my experience and you can make them a lot more efficient and run cooler with the same performance

  • @theevilmuppet
    @theevilmuppet 1 year ago

    You invite discussion - that's wonderful and appreciated!
    Your analysis is great. You lean into the specific statements made, don't engage in baseless speculation and offer supporting evidence for all of your assertions. The "I look forward to knowing whether I'm right or wrong" statement is all too rare and wonderful.
    Love the fact that you called out the power impact of going to an MCM architecture - most reviewers didn't touch on this at all when comparing the 7900 series to Ada.
    My opinion is that, as was the case with Zen and Zen+, AMD opted to launch RDNA3 as-is and defer some features and architectural uplift to RDNA4. AMD have a history of doing this, as TSVs were in place on Zen 2 but not leveraged until Zen 3 for stacked cache dies.
    Once again, please keep going!

  • @robertwilliam5527
    @robertwilliam5527 1 year ago +6

    35% more PERFORMANCE than the previous generation for the SAME PRICE is a good generational improvement if you ask me.

  • @asplmn
    @asplmn 1 year ago +13

    AMD confirmed that the silicon is fine re: shader prefetch. The fact that they nailed it on the first (A0) tape-out is a testament to the skills of their hardware engineers.
    The issues lie with their drivers.

    • @stuartandrews4344
      @stuartandrews4344 1 year ago

      Drivers, or maybe even a firmware update might solve the issues..

    • @HighYield
      @HighYield 1 year ago +3

      Yes, I mentioned it in the video too, but kept the previously recorded parts because it was an interesting rumor imho.
      A0 is great, especially for such a revolutionary chiplet design. Still, there are usually a lot of different bugs in any A0 silicon; it doesn't mean they have to be that impactful.

  • @Themisterdee
    @Themisterdee 1 year ago +11

    Simplistic, I know (but I like simple):
    If you class the GPU structure as a mechanism, AMD have suddenly gone from 1 moving part in N21 to 7 in N31.
    Thus my take would be that the problems they are facing are the silicon equivalent of a mechanical tolerance stack-up,
    where instead of dimensional tolerances the stack could be more along the lines of binning tolerances,
    leading to discrepancies in performance between the components, even though on paper each part fits perfectly.
    Of course, getting drivers to accommodate this silicon stack variance would be difficult, as pinpointing the errors and combinations would be an uphill struggle.

    • @ochsosfollies
      @ochsosfollies 1 year ago +2

      Good video, thanks.
      Yeah, the silicon lottery multiplied by 6 or 7: simulating that many unknown variables, you end up with different empirical problems than the paper stats suggest.
      Keeping up the mechanical analogy:
      AMD have moved from a single-cylinder engine straight to a V6 in one leap, so it's pretty much certain they've run into issues.
      Yup, and these issues only really appear once the engine is built and up and running.
      There appears to be a misfire somewhere.
      They could well have firmware issues (coil/alternator),
      and the drivers (ECU) could help smooth out the power curve.
      Balancing the whole package to peak performance may be one step too far (camshafts).
      The reference card shroud wasn't really designed for so much power draw and possibly overheats (radiator).
      I'm sure a V-Cache addition could act like a buffer (flywheel), but that's a different chip.
      Maybe the delayed Navi 32 has resolved some of the problems and could end up with similar performance.
      Heck, if it beats it... what do AMD do?

    • @HighYield
      @HighYield 1 year ago +7

      Usually, semiconductor companies are really cautious. Intel's tick-tock, never changing architecture and process node at the same time, was the benchmark for years.
      RDNA3 has a new architecture on a new process node and is using a completely new chiplet design. AMD is playing daredevil.

    • @Themisterdee
      @Themisterdee 1 year ago +2

      @@HighYield Very true. All journeys start with a single step; AMD have taken their first, albeit a stumbling one.
      New ideas will always lead to new problems, which require new approaches to solve or avoid them.
      As mentioned, what if Navi 32 has resolved any architectural problems and begins to tread on the toes of Navi 31?

  • @forsgsrekylgevr2113
    @forsgsrekylgevr2113 1 year ago +4

    It feels like even though they don't meet their promises, the cards still perform well, especially compared to the other options out there at the moment

  • @ChiquitaSpeaks
    @ChiquitaSpeaks 1 year ago +2

    I think it was pretty good timing how the consoles landed on getting RDNA 2; it was a well-executed generation. Zen 3 would've been a plus too, but oh well

  • @SSpider41
    @SSpider41 1 year ago +3

    The problem is their slides say they tested it against the 6900 XT, while the guys on stage said 6950 XT when you look at the test systems they used. They weren't even on the same page within the company 🤦🏽‍♂️

    • @JoeWayne84
      @JoeWayne84 1 year ago +2

      The slide during the presentation showed it compared to the 6950 XT... the clip is in this video?!
      The problem is in the memory controller: the Infinity Fabric having a ~100 watt idle power draw is something that might not be fixable... I'm almost certain there's no way they could be so incompetent as to not only be 20% slower than their lowest promised performance metrics, but also overlook "hey, our card is running at 30% power draw while it's on the sleep screen, guys"... like, wtf are their engineers doing?

    • @SSpider41
      @SSpider41 1 year ago +1

      @@JoeWayne84 Yeah, I guess it was changed when they posted them to their website lmao. It says 6900 XT now.

  • @jellowiggler
    @jellowiggler 1 year ago +4

    The 7900 XTX isn't bad, but the price of everything isn't great. All of the currently released 'next-gen' cards are missing dollar-per-frame gains; nothing is actually performing better per dollar. The industry has stagnated, and until they drop prices back to normal it will stay stalled. These new cards should be $200 less than they are, or more in the case of the 4090.

  • @HappyDrunkGamer
    @HappyDrunkGamer 1 year ago +2

    I have a Sapphire 7900 XTX reference card and it's been interesting. At stock, the junction temps hit 110°C in most games after 5-10 minutes; however, by capping the GPU clock to 2500 MHz and reducing voltage from 1150 mV to 1100 mV, only RT games push the junction temp beyond 75°C. What is really odd about this: in Horizon Zero Dawn I do lose 2 fps when capped (4K, Ultimate quality), but at stock the clocks were only 2650 MHz anyway; capped, it averages 2520 MHz. The cap also reduces board power from a peak of 346W (which it hit often) to 303W, which is only hit occasionally during the benchmark run.
    In the Heaven benchmark at stock it takes less than 4 minutes for the junction temp to hit 90°C, while capped it sits at 73°C without issue after 15 minutes. The only game I can't fix yet is Metro Exodus EE (4K, Ultra RT, 2x VRS): the clocks don't go above 2500 MHz at stock, never mind capped, the card still hits 346W board power and the junction temps hit 90°C pretty quickly; however, they don't ramp beyond 100°C like they do at stock. PS: this is NOT an airflow issue; when the junction temps hit 110°C the core temps DROP as the fans hit 100% (and my PC sounds like it's about to take off 🤣)

    • @HighYield
      @HighYield 1 year ago +2

      Hm, the temp could be a thermal paste problem, right? Have you checked if other ppl or reviews have similar temps?

    • @HappyDrunkGamer
      @HappyDrunkGamer 1 year ago +1

      @@HighYield That is what it was in the end, I think. On Thursday when I got the card I ran into the 110°C junction temp issues straight away; I DDU'd the driver, did a clean install, etc., and decided to repaste the card. After my return was approved I decided to check the screws after removing the card, just in case, and I noticed 2 of the 4, both on the right side of the leaf spring, were a little loose (around half a turn at most). I have now tightened them and it's sorted. In every game tested at stock I'm seeing 90°C max junction temps, apart from Metro Exodus EE, but even there they don't go above 94°C. Plus, after lowering the RT from ultra to high, my FPS went up and the junction temps went down, so I think this is a software issue (either the AMD driver or the game, most likely a combo of the 2), and frankly high RT still looks great. Capped to 60fps, my card is now running much cooler and quieter while actually staying at a locked 60. For now I'm gonna play some less demanding games from my backlog.

  • @External2737
    @External2737 1 year ago +1

    I'm commenting for the algorithm. You deserve far more subscribers.

  • @bslay4r
    @bslay4r 1 year ago +2

    Imo most of the problems can be traced back to the MBA (Made By AMD) card being not well put together.
    1. The factory default voltage is too high on these cards; with undervolting the cards can reach 2.9+ GHz clock speeds even on the MBA card
    2. The factory default power limit is holding these cards back
    3. The 2x8-pin connector is not enough for the card to stretch its legs; 3x8-pin AIB cards boost to much higher clock speeds
    4. The cooler on the MBA card is not sufficient when it boosts at default voltage
    TPU showed that with huge coolers, 3x8-pin power connectors, undervolting and a higher power limit, the AIB cards are beasts.
    AMD should've shipped the MBA card with a lower default voltage; it would've reached 2.8-2.9 GHz without high temperatures and without throttling (the reason why the fps results are all over the place) and everyone would be much happier, so to speak.
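
    For intuition on why undervolting buys so much here: dynamic power scales roughly with voltage squared times frequency, so a small voltage drop frees up disproportionate power headroom. A rough sketch (illustrative numbers only, not measured values for any specific card):

        # Rough dynamic-power scaling, P ~ C * V^2 * f, with the capacitance term folded
        # into the stock operating point. Real cards add static leakage and other rails.
        def scaled_power(p_stock_w, v_stock_mv, f_stock_mhz, v_new_mv, f_new_mhz):
            return p_stock_w * (v_new_mv / v_stock_mv) ** 2 * (f_new_mhz / f_stock_mhz)

        # 355W at an assumed 1150mV / 2500MHz, undervolted by 50mV at the same clock:
        print(scaled_power(355, 1150, 2500, 1100, 2500))  # ~325W, ~9% saved from a ~4% voltage cut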

  • @rootugab
    @rootugab 1 year ago

    Just stumbled upon your channel and I love it so much, thank you for making these informative videos!

  • @TorqueKMA
    @TorqueKMA 1 year ago +3

    People are putting way too much thought into this. My 2 cents: the release graphs were accurate given in-house testing, while the review drivers and first release have problems.
    We are talking about a tech company with loads of middle management and dozens of sub-teams within sub-teams. Even Linus said they were incredibly late to release the review drivers, so your analysis that a bug was discovered at the last second and they had to crunch to get something out is most likely the case...
    The architecture turn is the other side of the equation; lots of new means more unknowns... "But Nvidia's is better"... Nvidia took everything they already had and just cranked it up to 11, threw a ridiculously oversized cooler on it and called it a day. Then they scream that Moore's law is dead and charge crazy prices. AMD has the challenge of competing with that on a radically new architecture...
    I bet by February we will see AMD get much closer to that 50% improvement per generation. Next generation may be even more interesting if AMD perfects this architecture, since Nvidia can't rely on their coolers getting any bigger.

    • @mjkittredge
      @mjkittredge 1 year ago

      Reminds me of that April Fools video showcasing the 4090 and 4090 Ti ua-cam.com/video/0frNP0qzxQc/v-deo.html ; they're running out of road with the monolithic designs. With the MCD architecture AMD can finally leapfrog ahead of NVIDIA instead of playing from behind. And I totally agree: I think the refreshed cards in a year (or the rumored 7990 XTX) will see the technology perfected in hardware and software, so we can see the full potential of RDNA3

  • @Ericozzz
    @Ericozzz 1 year ago +30

    The prices are actually just as frustrating as the lower-than-expected performance improvement. I just feel like the cards are 200 dollars above what they should be; Nvidia did even worse in that regard. Guess I'll stick to my 6700 XT for a couple of years and skip this generation.

    • @HighYield
      @HighYield 1 year ago +11

      Completely agree with your point. Nvidia is leading the way and AMD just follows. The 7900 XTX should be $800 and the XT maybe $679 or something.

    • @TheDemocrab
      @TheDemocrab 1 year ago +4

      Pretty much this, it feels like AMD had lower than expected performance but then just kept pricing where it was originally planned to be despite that.

    • @Ericozzz
      @Ericozzz 1 year ago

      @@TheDemocrab I think both AMD and Nvidia were still hoping on the crypto boom to keep the prices up. Hopefully now you can get a $150 5700xt and play anything 1080p60 high settings and 1440p60 on mid settings. I still regret getting my 6700xt for $450.

    • @joshtheking1772
      @joshtheking1772 1 year ago +1

      @@Ericozzz I wouldn't regret it at all; that's where it should be. The reason I say that is 4K is NOT dominating the market yet. A friend explained this to me and it rings in my dome whenever I build a system: a basic gaming computer is good at everything, including grocery shopping online. An enthusiast computer has it where it counts for great gaming performance. A high-end gaming computer is for when gaming is YOUR JOB. Knowing where you are on that list is key to building a computer at your cost level. Averaged across ALL KNOWN METRICS, 90% of people are in the first and second categories. The other 10% are just whales and overspend because it's their JOB to overspend.

    • @Ericozzz
      @Ericozzz 1 year ago

      @@joshtheking1772 absolutely

  • @joaomiguelxs
    @joaomiguelxs 1 year ago +9

    Greed is the straight answer as to what led AMD to this mess and why they sacrificed consumers' trust. Like you, I had high hopes for the new architecture, irrespective of performance comparisons, especially to Nvidia. I'm very thankful to the independent reviewers who called out the huge discrepancies, and far from seeing that as a hate bandwagon, I see it as appropriate accountability. What really gets to me is AMD's vanity. They are in such a weak position now with regard to market share and public perception. I sincerely hope AMD can do something to fix and improve performance for those $1k+ parts and advance innovation with the chiplet design. I also wish they would price their products accordingly, but we can see that's a very tall order for Lisa's minions =)

    • @HighYield
      @HighYield 1 year ago +4

      For me "greed" is more on a personal level. AMD is a huge publicly traded company, its more pressure from share holders I think. So yes, greed by proxy. Still no excuse and in the end I think it will hurt AMD more than help.

    • @john-xo9mg
      @john-xo9mg 1 year ago

      They should have priced the XTX at £850; people may have been more forgiving and understanding. £1k cards, no thanks

  • @TheCivicsiep3
    @TheCivicsiep3 1 year ago +6

    I actually think the specifications can't be directly compared. I think the 6144 shader count makes much more sense, and if you calculate TFLOPS with this number in mind the numbers really do all add up. That 61.4 TFLOPS figure assumes the GPU can run dual-issue FP32, which obviously most games cannot. This is why in some games the 7900 XTX is faster than a 4090, while in others it barely matches a 4080; this is likely the expected performance. It also makes sense why Nvidia, with the 30 series, didn't hesitate to list the dual FP32 shaders as normal shaders, whereas AMD is choosing to say 6144, not 12288. Likely something in the design requires specific conditions to utilize the RDNA3 design fully. I don't think drivers will fix this, and I think people should just accept this is what we got. They aren't bad GPUs by any means, quite the opposite, and I am sure that with time stability and some performance will increase. I just wouldn't sit here waiting for some massive turnaround, because it's never gonna happen. I'd gladly take an AIB 7900 XTX and run it at 3 GHz, and I'd be more than happy with that level of performance. Who knows, maybe this time we will actually get an RDNA3+ next year with tweaks to better utilize the shaders. But honestly I think the GPU is running at the performance you should expect out of 6144 shaders and 31 TFLOPS.
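
    The arithmetic behind those two TFLOPS figures checks out (a quick sketch; the ~2.5 GHz boost clock is the commonly listed 7900 XTX spec):

        # FP32 TFLOPS = shaders * 2 ops per FMA * clock (GHz) / 1000.
        shaders = 6144
        boost_ghz = 2.5
        single_issue = shaders * 2 * boost_ghz / 1000  # ~30.7 TFLOPS without dual-issue
        dual_issue = single_issue * 2                  # ~61.4 TFLOPS, the headline number
        print(single_issue, dual_issue)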

    • @dralord1307
      @dralord1307 1 year ago +1

      Aye, the dual-issue SIMD doesn't seem to be working as one would expect. That alone should increase perf a lot, but most of the perf uplift seems to be coming from all the other changes.
      Nvidia did the same thing last gen, saying they doubled their FP32, but it was shared with INT and the cards could never reach what would be expected from the doubling.

    • @skilletpan5674
      @skilletpan5674 1 year ago +1

      PCWorld was saying that there's an AIB 7900 XTX card that's about $100 more than AMD's reference 7900 XTX and around 20% or so better, because it's clocked near 3 GHz. I'd be rather happy to buy that, as I need to upgrade from my RX 570.

    • @zdenkakoren6660
      @zdenkakoren6660 1 year ago

      The RX 7900 XTX is just an RTX 3090 Ti TITAN edition: a new process node with higher clocks and a full die.
      The RX 7900 XTX should have been named and priced like an RX 7800 XTX at $399,
      and the RTX 4080 should have been priced at $499

    • @TheCivicsiep3
      @TheCivicsiep3 1 year ago +2

      @@skilletpan5674 Any of the 3x8-pin 7900 XTXs seem to clock around 3 GHz, so if you can find any of them, buy it. I currently have a 6900 XT Red Devil in an open loop; I'd really like to get the ASRock Aqua because it comes with a water block

    • @skilletpan5674
      @skilletpan5674 1 year ago

      @@TheCivicsiep3 Yeah, I'm broke at the moment. I may be able to get something in Feb/March.

  • @andreyzhavoronkov6746
    @andreyzhavoronkov6746 1 year ago +3

    That's all great, of course. But what about all the folks who already bought a 7900 XTX? RMA and wait until summer?

  • @rvs55
    @rvs55 1 year ago +14

    I believe the clock speed/power issue can be fixed with firmware/drivers, but AMD has a piss-poor history of quick and proper fixes.
    The performance deficit, however, is permanent, unless we're talking about boosting power consumption to 4090 levels. But even then, a 4090 with its power limited to 355W already outperforms the 7900 XTX at that wattage. AMD is basically second tier this generation.
    And then they keep using that adage, "ages like fine wine", which to me implies they launch a subpar product and have to fix it over its 2-year life cycle, before the next gen launches.
    That's a cop-out to me. It should fly right out of the gate, and not take part of or its entire lifecycle to reach its potential. I'm not sure how some people can even defend that notion.
    They should have launched this card as a 7800 XTX and priced it as such. As it stands, they are as bad as Nvidia in misrepresenting this card as a top-tier performer. And it's definitely priced the same as the overpriced 4080.

    • @HighYield
      @HighYield 1 year ago +1

      I'm really hoping it does turn out to be mostly driver related, then we still have a chance for decent competition this gen!

    • @opinali
      @opinali 1 year ago

      To be fair on the performance comparison, the 4090 has 30% more transistors, and it achieves that with less dense libraries (total die size only 15% larger), which is good for clocks/temps. It would be shocking if the 4090 wasn't proportionally faster, and that's even before talking about chiplet overheads. It just wasn't AMD's strategy to make a bigger GCD this time (or last time either); they know they're still second place in branding and in some features, and they have to compete on price.
      And I agree on the rushed release with apparently low-quality software, but I don't agree that this is a pattern, at least not recently. On the contrary, as a current owner of a 6800 XT, driver quality has been very strong through the entire lifecycle of RDNA2. Very good track record on other software pieces too, like the Radeon app, Ryzen Master (now so good I stopped using Hydra), and FSR. You get some bad bugs in non-beta drivers, but you get plenty of those for RTX as well; if the tech forums I follow are any indication, they had a couple of stinkers this year.
      Here's my theory: AMD didn't scale the software team sufficiently for this launch. GPU drivers have ramped up in complexity, and AMD already scaled that work during the RDNA2 cycle with improved quality and support for game-specific optimizations, catching up on ray tracing, and releasing surprising improvements for both DX11 and OpenGL in 2022. They couldn't handle the extra squeeze of a new architecture, with part of that work during the pandemic crisis. Nvidia of course has the same challenges but bigger R&D capacity.

  • @28Soul
    @28Soul 1 year ago +4

    Good input! Like it! I got myself a 4080; I wanted an XTX on December 13th, but then things turned around and I got myself a new GPU. I think, just for myself, I made the right decision, but I still think both Nvidia and AMD make good GPUs!

    • @HighYield
      @HighYield 1 year ago +1

      The 4080 is an amazing card if you have the money; its only fault is its high price. But performance- and efficiency-wise it's great.

    • @thomasfischer9259
      @thomasfischer9259 1 year ago

      imagine paying over 1k for a graphics card lol

    • @28Soul
      @28Soul 1 year ago +3

      @@thomasfischer9259 Imagine someone else caring about the way you spend your money

  • @Sevastous
    @Sevastous 10 months ago

    What a nice channel, man. Keep it up; I'm enjoying the speculation on these pieces of silicon.
    I still see Navi31 as an ongoing investment for AMD to chiplet-ise their whole stack and make it bug-free in the long run against Intel and Nvidia, both on monolithic dies, with ARM racing alongside to solve interconnect latency and power inefficiencies. I hope to see a revolution like Ryzen 3000-5000, unless Nvidia (I don't see Intel prepared, after seeing 14th gen) has a perfectly working chiplet GPU or server chip lineup somehow kept secret, which I doubt

  • @NaumRusomarov
    @NaumRusomarov 1 year ago +8

    I honestly think that's all RDNA3 could give. I don't expect some magical RDNA3-rev2 products that will be 20%-30% more performant than what we're seeing now.

    • @HighYield
      @HighYield 1 year ago +1

      I think AMD was expecting more, and I also think drivers will get better. But yes, there won't be a magic refresh, especially now that we know the shader prefetcher works.

  • @ToeCutter0
    @ToeCutter0 1 year ago

    Cheers for an excellent video that presents some well-organized information. It's abundantly clear that AMD made compromises with Navi 31. It's interesting how Nvidia provided a gaping hole that AMD felt they could fill even with a subpar offering. Taking AMD's own comments into consideration, it's clear that they either planned on underclocking Navi 31 or were forced to in order to ship by Q4 '22. AMD might release an iterative version with a faster clock speed, but that will do little to redeem their once-sterling reputation.
    My take on this parallels the comments made by "High Yield". I'm guessing that AMD knew they had to bite the bullet to introduce the very first GPU chiplet design. Because power requirements for Nvidia's Lovelace architecture were so insanely high, AMD sought to offer a GPU that was somewhat below most folks' expectation of a 50% improvement over Navi 2, while also consuming far less power than Nvidia's new GPUs.
    AMD likely assumed their lower power consumption combined with much lower retail pricing for Navi 3 would make it a no-brainer, at least until benchmarks showed the actual performance, which is *always lower* than what manufacturers show during keynotes. I mean seriously, have community benchmarks ever exceeded keynote expectations?
    That said, when I consider how much additional engineering went into AMD's first chiplet-based GPU design, there's likely some extra performance hiding in Navi 3. It's difficult to assume otherwise when you consider how rushed Navi 3 appears at first glance. Requiring any processor to go off-die to access cache will impact performance, but AMD has bet the farm that their chiplet approach will work even for high-bandwidth ops like graphics processing. AMD will discover both the good and the bad of what this architectural shift provides. As AMD continues developing drivers and, probably more importantly, revving hardware with process improvements, Navi could far exceed its targeted 3 GHz clock to provide some serious performance improvements. There's likely some headroom built into Navi 3 that will take time to unlock as AMD continues to refine what's a brand new approach to GPU packaging.
    Phew! Sorry for the novel-length comment! I really enjoyed the video and appreciate all the time and effort that went into making it. Well done! 👏 👏👏

  • @lefthornet
    @lefthornet 1 year ago +4

    I expect a slightly better performance increase from drivers than the 6900 XT and 6950 XT saw, maybe 10-15% or so. I hope I'm wrong and AMD can squeeze even more performance out of these GPUs

    • @TerraWare
      @TerraWare 1 year ago +5

      The best thing about RDNA 3 could be what may lead to RDNA 4 as they work out the issues they've been facing. That said the consumer shouldn't have to pay a premium price to essentially be a beta tester imo.

    • @HighYield
      @HighYield 1 year ago +1

      I'd like to see that too, but my rule is to never buy a product on promised features.

  • @maynardburger
    @maynardburger 1 year ago +1

    8 months later and all RDNA3 GPUs are out now: same lack of performance and efficiency across all of them. I'm thinking there is some fundamental design problem with RDNA3 at this point.

  • @BigHeadClan
    @BigHeadClan 1 year ago +2

    Not having watched the video yet, my reasoning is below.
    RDNA3 performance is less than expected, but it's worth keeping in mind that a 30-40% improvement generation over generation is typically what we see. Personally I think AMD will make up the performance in a revision, but we will probably need to wait for that to manifest in a 7950 XT and XTX, just like how it wasn't Zen 1 but Zen 2 that refined and improved the new design.
    That said, I imagine the reasons below are why performance isn't where most expected; they are in order of what I consider impactful, but it's the combination of all of them that delivered the results we've seen.
    1. Cache size & speed: RDNA3 only has about 6MB of total L2 cache (vs 98MB on AD102). AMD then needs to go off-chip to access its larger L3 cache (Infinity Cache), which, while a good size, is going to be much slower compared to L2 and likely can't transfer data between the GCD and the Infinity Cache without crippling performance. I suspect this is why AMD heavily improved the transfer rates between its various levels of cache, to remove as much bottlenecking as possible.
    2. Driver optimization: the new RDNA3 architecture is an entirely new design and is likely leaving a fair bit of performance on the table due to early driver support. I suspect we could see up to a 10% uplift over the next couple of months and likely a few extra percent beyond that as it ages.
    3. Power targets: at only 100-200 MHz higher clock speeds and nearly identical board power, AMD is clearly not getting the most out of their node shrink. We've also seen videos from places like Jayz2Cents that show very little gain from GPU overclocking; when pushed heavily, the cards appear to reduce shader performance to maintain stability, although this could simply be a software bug in the GPU or the tuning software.
    4. Time: AMD didn't launch at the unveiling of the 7900 XT and XTX in November as many expected, but instead waited until early December, likely to have a new product on sale for Q4 earnings reports and the holiday season. This lends some credence to the rumor that RDNA3 may be an early version of the silicon (probably A0), which isn't technically a bad thing; it means the first batch of chips was hitting the majority of its targets and is totally workable, but it may also signify they didn't have time to tweak or optimize as much as they'd have liked to meet that Q4 deadline, where they needed to ramp production with what they could manufacture immediately.
    This could help explain the many rumors of how amazing RDNA3 was performing in recent lab leaks, and even AMD's own slides.

  • @Pillokun
    @Pillokun 1 year ago +1

    From what I gather, the perf uplift seems to come from the wider memory bus; the additional compute performance is not really in effect like it was on Ampere/Ada. We don't see the uplift we saw with Ampere and its INT/FP ALU units.

    • @HighYield
      @HighYield 1 year ago +1

      The additional IPC from the dual-issue shaders is well below what I expected; maybe it really is the drivers?

  • @hypothalamusjellolap8177
    @hypothalamusjellolap8177 1 year ago

    I like the uplift of the 7600 vs the 6600: FP32 of 8.9 TFLOPS vs 21.75 TFLOPS, at 123W vs 146W, with some 20ms spikes at 249W vs 186W power consumption (numbers taken from TechPowerUp). Looks like there are ~30% more CUs in the 7600 along with the RDNA3 generation update
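
    Spec-sheet arithmetic suggests most of that headline TFLOPS jump is the dual-issue accounting rather than extra CUs (a sketch; the shader counts and boost clocks below, 1792 / 2491 MHz for the 6600 and 2048 / 2655 MHz for the 7600, are assumed from TechPowerUp's listings):

        # TFLOPS = shaders * 2 ops per FMA * boost clock (GHz) / 1000; RDNA3 doubles it on paper.
        def tflops(shaders, boost_ghz, dual_issue=False):
            t = shaders * 2 * boost_ghz / 1000
            return t * 2 if dual_issue else t

        print(tflops(1792, 2.491))                   # RX 6600: ~8.9 TFLOPS
        print(tflops(2048, 2.655, dual_issue=True))  # RX 7600: ~21.75 TFLOPS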

  • @IgoByaGo
    @IgoByaGo 1 year ago +2

    With the cost of current GPUs, I don't think I will upgrade from my Red Devil 6900. I do think there are some teething issues with this card and its drivers (no, I do not believe AMD releases sub-par drivers; I have never had an issue), but because of the new memory setup it is going to take some time to solve them. Kind of like my first-gen Ryzen.

    • @Yandross
      @Yandross 1 year ago

      Hello, any coil whine from your Red Devil?

    • @IgoByaGo
      @IgoByaGo 1 year ago +1

      @@Yandross no, but I ended up getting a 4090 because of its vr performance.

  • @brianemmons1206
    @brianemmons1206 1 year ago +1

    An interesting insight and conversation about the new card. For the usage I have, I am looking at getting the older 6750 XT, as it meets and actually exceeds my requirements

  • @MassimoTava
    @MassimoTava 1 year ago +2

    AMD made a card much faster than the 3090 Ti for $899/999. Even at today's prices that seems reasonable, but it did not live up to the huge hype.

  • @caldark2005
    @caldark2005 1 year ago +2

    I am more than happy with the uplift over my 6900 XT. Big boost in fps in all the games I play, and not only that, they are a lot smoother as well. I would need to do some side-by-sides with my old card, but I'm not sure I can be bothered; still more than happy with the performance, especially in 4K

    • @swarmvfx2818
      @swarmvfx2818 1 year ago

      For the price, the upgrade is not even worth it.

    • @caldark2005
      @caldark2005 1 year ago

      @@swarmvfx2818 It has been worth it for me; the improvements at 4K make my gaming a hell of a lot better now.

    • @HighYield
      @HighYield 1 year ago

      35% is definitely not bad, but everyone expected more and even AMD teased more.

  • @forest6008
    @forest6008 1 year ago

    I love this style of video; they are very informative, and I love hearing about the underlying architecture of these things.

  • @Fine_i_set_the_handle
    @Fine_i_set_the_handle 1 year ago +3

    The funny thing is that their only redeeming feature over Nvidia is the cost savings. Except they're so power inefficient that an Nvidia card will make up for the extra cost in electricity after 1-3 years, depending on your electricity cost and usage, and it will do this while completely destroying AMD in FPS meanwhile.
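
    The electricity claim is easy to put numbers on (a sketch; the extra draw, hours, and rate below are all assumptions, not measurements):

        # Added running cost of a less efficient card; every input here is an assumption.
        extra_watts = 100      # assumed extra draw under load vs the competing card
        hours_per_day = 4      # assumed daily gaming time
        usd_per_kwh = 0.30     # assumed residential electricity rate
        yearly = extra_watts / 1000 * hours_per_day * 365 * usd_per_kwh
        print(yearly)  # ~$44/year, so roughly $130 over 3 years under these assumptions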

  • @lugaidster
    @lugaidster 5 months ago

    My guess is they determined that RDNA3 was good enough and their flaws in RT and AI makes then as competitive as they were going to be without requiring more R&D money. If RDNA4 info ever comes out, I hope it's more competitive architecturally.

  • @ippothedestroyer
    @ippothedestroyer 1 year ago +2

    I hope Nvidia drops the price of the 4080 just to put heat on the 7900 XTX. Even a drop to $1,100 would make it worth it against the 7900 XTX

  • @jwkelley
    @jwkelley 1 year ago

    Rumors are that there was an artifact appearing in games after an hour or so of run time, which required AMD to put a workaround in the drivers that impacts performance.

  • @LG-jn5fx
    @LG-jn5fx 1 year ago +5

    35% is around the average for a generational leap. With the radical design change to chiplets, it is actually impressive they achieved this.
    Yes, they should have been more honest with performance expectations; losing goodwill and trust is not good.

    • @HighYield
      @HighYield 1 year ago

      It's not a bad increase, but it lags well behind Nvidia, and this gen AMD did change a lot, so it's disappointing, especially given what they claimed.

  • @RafaGmod
    @RafaGmod 1 year ago +1

    I think there's another point to make: real heavy users (corporations) do not use AMD for AI, deep learning, etc., because the majority of libraries rely on CUDA and the ROCm translation is not really good right now. This puts Nvidia WAY ahead in the market that actually buys flagship chips, so fighting for the top makes no sense right now. Maybe AMD will fight on price and performance with the lower-tier 4000 cards in the consumer market and use the experience to refine ROCm and the chiplet design.
    I'm a researcher/engineer in electrical engineering and computing. The amount of optimization done for Intel CPUs and Nvidia GPUs makes it literally impossible to choose anything other than these brands. Actually, with the 5000-series Ryzen the CPU performance is equal, but on the GPU side it's not even close.

    • @HighYield
      @HighYield 1 year ago

      You are right, Nvidia's software advantage can't be overstated. CUDA and cuDNN for AI are basically 50% of Nvidia's offering.

  • @3dkiller
    @3dkiller 1 year ago +2

    Lisa Su needs to focus on their Radeon department.

    • @Weather_Nerd
      @Weather_Nerd 1 year ago +2

      Gpus are far from their most profitable venture. Especially high-end cards. I don’t disagree though

  • @NootNoot.
    @NootNoot. 1 year ago +4

    Your opinions on RDNA3 are more in line with mine as well. After staying away from all the tech talk and rumors until the launch, I was disappointed with what AMD produced too. My first reaction was more akin to "AMD just ruined their product line by naming the 7900 XTX as an RX #900 card": not being at parity with the 4090 and, even at its worst, performing the same as a 4080 (ironically losing on efficiency as well).
    Then, taking a step back and looking at all the reviews and benchmarks, this whole controversy had me more invested in tech than ever before, lol. Just looking at benchmarks, it was obvious that there were HUGE inconsistencies. Games like MW2, Cyberpunk, and RDR2 etc. were more or less what AMD "promised," while the rest were really hit or miss. This may be another Fine Wine situation, which is no excuse for a subpar launch. AMD has usually been "honest" (while still cherry-picking) with their benchmarks, and now it seems they have lost the public's trust.
    Then clock speeds: reviewers were disappointed with the reference cards and recommended AIB cards. Rumor had it that AIB cards were going to bring very little extra performance; in reality, it turned out quite the opposite. TechPowerUp's Cyberpunk OC (+UV and power limit) benchmark made the rounds around the internet. And on Twitter I saw a tweet stating that in Blender an AIB card can clock up to 4 GHz (at 450W, btw). These cards are so interesting, but gamers (who are THE main consumer, because the software stack is still severely lacking: CUDA, media encoding, AI?? for now) will have to make a hard decision.
    Then the whole hardware bug. Before launch there was talk about a bug that kept them from clocking any higher; now it's attributed to the silicon: A0. "Was this rushed? Is it impossible to fix?" All of these discussions are really interesting, as is figuring out whether this was done on purpose. Your historical analysis of the GTX 285 to 580, as well as AMD's Vega, was really interesting. Q4 is already over, so we know AMD HAD to release something, but were they too optimistic about the silicon? Ian Cutress had a positive take on this, quite the opposite of the discussions making it seem like a fatal flaw. AMD seems confident in the silicon, and I don't imagine they would launch something that is impossible to fix in software later on.
    This now comes full circle with the rumors and AMD on record assuring that performance targets would be hit. Before the announcement everyone was hopeful the top SKUs from both teams would be head to head, raster within +-10% of each other, with AMD losing on RT while gaining on price-to-performance. Then the announcement hits: a $1000, 355W card called a "7900 XTX", with no benchmark comparisons. Red flag; you know the rest. A botched launch, which sort of makes the $1000 card less favourable (if performance never improves, the price should come down), making the 4080 worth it even at $200 more (cuz you're spending 1K anyway for early-adopter tech).
    Although with all this said, I'll gladly take another hit of copium. There IS some potential in these cards, and I am hopeful, not just because I am a "fanboy" but because of competition, and for Nvidia to bring down these absurd prices they are making insane margins on.

    • @HighYield
      @HighYield 1 year ago +1

      The strange thing for me is that it doesn't feel like a failed gen. Something does feel off here; the cards clearly have potential, if only I knew where it was hiding!

    • @NootNoot.
      @NootNoot. 1 year ago

      @@HighYield The RDNA3 case continues...
      Hoping for a follow-up on the topic, whether it ends well or badly, as well as more insights into the architecture and chip analysis, either AMD's or NVIDIA's, like in your other videos. I've read a whole bunch of technical talk on Twitter about the possible problems, and I don't know if it's worth covering, since I don't really understand it and it may just be speculation. Could be an interesting discussion point though; I didn't really understand things like A0 silicon until watching your content!

  • @theminer49erz
    @theminer49erz 1 year ago +1

    I will preface this by acknowledging that I am, and have been, an AMD/ATI fan for over 25 years. However, I think most of this is driver-based. I have a feeling they noticed they were having issues and may have dialed back performance in exchange for stability until they can get more real-world data. I was expecting something like this, considering it is the first chiplet-based GPU. The pressure from tight chip availability and declining prices could have been a corporate stress point, and the launch could have been forced through. Either way, I have learned that my patience with them over the years has always paid off. When a company focuses more on innovation than on just throwing money at the problem, there are going to be hiccups. I can see Nvidia having this same problem when they finally decide they have to move to chiplets to remain cost-competitive. I guess we will see.

    • @HighYield
      @HighYield  1 year ago +1

      Drivers would be the best-case scenario, because then we'd still have a good shot at a competitive GPU market next year! I'm also curious where Navi 32 will land. If it's closer to the 7900 XT than expected, the hardware bug rumors might be true.

    • @theminer49erz
      @theminer49erz 1 year ago

      @@HighYield I am clearly an optimist, lol. I am definitely not ruling out hardware malfunctions either, but that is what Intel seemed to be suffering from on top of drivers, and reviewers found much more wrong hardware-wise, if I remember correctly; yet Intel then started making a good bit of headway with drivers. If Intel can do it on their first foray, AMD can with what seems like a much better piece of hardware.
      I was hoping their initial estimates were conservative and that they were trying to avoid a rollout like Intel's, so they could eventually say "oh btw, the new driver gives you an __ increase in performance!".
      However, I have engineered things that should have worked just fine, and did in testing, only to get some wacky, seemingly gremlin-like real-world results. I'm sure it can happen to the best minds.

  • @liberteus
    @liberteus 1 year ago +2

    What went wrong? Expectations. Expectations from hopeful people believing leaks, and then AMD setting expectations during their presser in October.
    If you have no expectations, you're not disappointed. I had NONE, because PR and leaks are not a final product review.
    Never board the hype train, never believe marketing PR. Both will get you disappointed.
    That said, I'm fairly sure AMD missed their own perf target by a third: they announced +50%, they reached +35%, so to reach +50% they'd have to improve A LOT. Still, I'm not disappointed :D I'll see what the product stack looks like, because I may replace my 3080 or my 6600 XT at some point soon (although I love the 6600 XT).

    • @liberteus
      @liberteus 1 year ago +1

      Watching the video now, and very good job explaining the technical side of things! Yes, AMD probably messed something up. The good thing is that with chiplets they can actually change the core without changing the rest, or keep the core and change the MCDs, so it's much easier to fix things since it's modular. Let's hope they'll refresh something and an RDNA3.1 will be out in less than a year!

  • @fungo6631
    @fungo6631 3 months ago

    RDNA3 works really well for iGPUs. The latest AMD APU can run Cyberpunk with ray tracing at sort-of-playable framerates.

  • @Carnage_Lot
    @Carnage_Lot 11 months ago

    In November of 2023, my 7900 XT is awesome. No issues, and a seamless switch from Nvidia after a DDU wipe.
    Very happy with my team change, lol, especially for how much of a beast I got for the money.

  • @36cores
    @36cores 1 year ago +5

    There is nothing wrong with this card other than the price. It's a $699 card, and I hope we see that price by the fall of next year. Like every AMD release, ignore the reference models. As for performance, drivers will squeeze out another 15% in the near term. Hardcore enthusiasts (water) will have a field day with >3100MHz clocks, 2800MHz VRAM and

    • @bryzeer5075
      @bryzeer5075 1 year ago +2

      Remember when people were shitting on Nvidia for power draw? And here we are. The 4080 is far more efficient: it uses less power, runs much cooler, and has no coil whine, all while leading overall performance by around +15% (before RT and DLSS 3).

    • @ertai222
      @ertai222 1 year ago +1

      @@bryzeer5075 copy and paste

    • @bryzeer5075
      @bryzeer5075 1 year ago

      @ertai222 Hey, guy. Come talk with the grownups when you can actually grow facial hair.

    • @ertai222
      @ertai222 1 year ago +1

      @@bryzeer5075 good one👍🏻

    • @ShaneMcGrath.
      @ShaneMcGrath. 1 year ago +3

      @@bryzeer5075 And they cost way too much, at least in my country!
      Both cards are garbage until we return to pre-crypto pricing with just the inflation added.
      Inflation may be 20-30% on some things, but it's not the 100-200% they were charging during mining.

  • @yungnachty4474
    @yungnachty4474 1 year ago +3

    AMD really, and I mean really, absolutely must not keep messing up these launches. RTX 50, aka Blackwell, will have its flagship built to the reticle limit of TSMC N3, which would be an ~850mm² die; the 4090 is only ~600mm², and the 2080 Ti was ~754mm². A nearly 50% larger die with a 35-40% transistor density jump, plus actually cranking the TDP to around 650W (a ~50% power jump on a ~30% more efficient process), and you are looking at another 3090-to-4090-sized jump, but to the 5090 (compounded in the sketch at the end of this comment). I don't think most people realize how fast a 4090 is either: it is roughly 3.5 RTX 3070s worth of performance, 2 3090s, or 6 GTX 1080s. It is obscene.
    AMD needs to get this right, and then release a 7950 XTX using a 550mm²+ GPU die in addition to the MCM approach. Nvidia keeps saying they are monolithic-only, but the reticle limit on TSMC N2 is only ~450mm², so Nvidia will likely move to the same approach AMD is currently using, which will still be a huge net gain, as cache currently takes up a significant share of the die on the 4090 and likely will on the 5090.
    Worst-case scenario, though: Nvidia releases an N3 MCM GPU where the 850mm² die is just the compute die; at that point AMD would be unable to contest Nvidia's performance for at least 2 or 3 years.
    This isn't even addressing AMD's absolutely dogwater ray tracing performance. It does not matter that many forms of RT in games are Nvidia-based methods. There are AMD methods out there; they look bad compared to Nvidia's RT and are not even worth using compared to existing shadow methods, especially against modern solutions like UE5's Lumen. AMD's approach is only good for reducing the complexity of the GPU compute die; they build with the same one-size-fits-all ethos as their CPUs. The difference is that a CPU, whether it has 2 cores or 16, can still run all the same stuff; the only difference is speed. Their approach to ray tracing with RDNA is like trying to run a game's physics on the CPU.
    It is so fundamentally flawed because the GPUs do not have dedicated RT cores, so until AMD can figure out how to run RT asynchronously via the FP32 + FP32/INT architecture they added with RX 7000 (which probably isn't even possible), it is always going to carry an enormous performance penalty, because you are not only running more work, you are also taking cores away from raster and giving a significant number of them to RT.
    Perhaps now that AMD has an MCM design, they could add a dedicated RT chiplet to the setup. That would alleviate the issue. But it is not acceptable that, 4-5 years on, RTX 20 series cards will be able to outperform some RX 7000 cards in RT.
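    For scale, here is the commenter's compounding as a minimal Python sketch; every figure below is their rumor or estimate, not a confirmed spec:
    # Compounding the speculative numbers for a reticle-limit Blackwell
    # flagship against the 4090. All inputs are rumors/estimates from above.
    die_area   = 850 / 600    # ~42% larger die
    density    = 1.375        # midpoint of the claimed 35-40% density jump
    power      = 650 / 450    # ~650W vs the 4090's 450W
    efficiency = 1.30         # claimed perf/W gain of the newer process

    print(f"Transistor budget: ~{die_area * density:.2f}x a 4090")      # ~1.95x
    print(f"Power-limited ceiling: ~{power * efficiency:.2f}x a 4090")  # ~1.88x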

  • @KingFeraligator
    @KingFeraligator 1 year ago

    Very informative and entertaining video! The purpose was to provide a plausible chain of events that fits the facts and rumors at the time, and you succeeded. With that being said, how does your story change now that AMD claims the prefetcher is fine?

  • @samghost13
    @samghost13 1 year ago

    I was looking into the GPUOpen documentation on RDNA3, but I have to dig deeper and that takes time. My Sapphire Nitro 7900 XT should arrive soon; through some testing I hope to find out more. Bye

  • @GreyDeathVaccine
    @GreyDeathVaccine 2 months ago

    What does 'GPU binning' stand for? (English is my second language)
    BTW, I am a big fan of your channel and of the fact that you can take a different angle on a product.

    • @HighYield
      @HighYield  2 months ago

      Very few semiconductor chips are without flaws. If you have a GPU with 10,000 cores, it's very likely that some of them are defective. Because companies don't want to throw away partially defective chips, the bad cores are disabled and the chips are sold as lower-tier parts.
      The same goes for CPUs: a Ryzen 7 9700X has all 8 cores active, a Ryzen 5 9600X has two disabled. That's called "binning": selling chips with small defects.
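      As a toy illustration of the idea, here is a Python sketch with made-up defect numbers (not AMD's actual process):
      import random

      CORES_PER_DIE = 10_000
      DEFECT_RATE = 0.0005   # hypothetical chance a single core is faulty

      def bin_die() -> str:
          # Count the cores that survived manufacturing, then pick a SKU tier.
          good = sum(random.random() > DEFECT_RATE for _ in range(CORES_PER_DIE))
          if good == CORES_PER_DIE:
              return "flagship (fully enabled)"
          if good >= 9_000:
              return "cut-down tier (defective cores fused off)"
          return "salvage tier or scrap"

      print([bin_die() for _ in range(5)])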

  • @duladrop4252
    @duladrop4252 1 year ago +1

    Well, I am an AMD fan, and I really want them to humble Nvidia like they did Intel. As for RDNA3's performance shortfall: yes, a lot of people are complaining about it, but there is a big BUT. AMD did deliver on the RDNA3 architecture, right? Like adding 2 AI accelerators to every CU of the GPU; these accelerators were missing on their previous-gen GPUs, and while they added to the cost of the GPU, for what it's worth AMD didn't rip our wallets apart over it. Many of you are also aware that DLSS 2 has better image quality than FSR 2, because DLSS does its processing on dedicated hardware with the help of AI cores, while FSR 2 is a software-only solution. I am not a hardware enthusiast, but I do know that AMD doesn't waste parts of a chip for no reason; I believe those parts are there to support every other element of the GPU and make it run efficiently, like powering the AI cores and the 2nd-gen ray tracing processors.
    With all these parts incorporated in this GPU, where the hell is the performance? Let's go back to FSR 2's biggest limitation: it doesn't use hardware AI cores/accelerators. So technically the AI cores present in RDNA3 aren't running yet.
    BUT things will change when FSR 3 is ready. That will be when RDNA3's AI accelerators start cranking, plus Fluid Motion Frames plus HYPR-RX; I believe that's when the Radeon 7000 series will redeem its reputation. So to people who own a Radeon 7000 card: be patient, the time will come when your card repays you for the hard-earned money you spent on it. Just wait for FSR 3 and HYPR-RX.
    Oh, one important thing: don't overwork your card; disable ray tracing for now, wait for FSR 3, and then start tweaking your ray tracing.

  • @Shibalba
    @Shibalba 1 year ago +2

    The 6950 XT isn't the flagship; it came out a while after the 6900 XT. Good information though, thank you for this.

  • @itzzz_killzz5720
    @itzzz_killzz5720 1 year ago +2

    Lol, this video didn't age well: those driver updates are already improving performance. And don't forget about undervolting + OC. The 7900 XTX beats the 4080 in most rasterized workloads, and can almost catch the 4090 if it's undervolted and OC'd.

  • @Sakosaga
    @Sakosaga 1 year ago +2

    The issue is that AMD refused to delay the lineup, and they should have. They even took months to fix a lot of things in drivers, and in reality that wasn't enough. I hope they learn their lesson, tbh, because they have been doing great in terms of performance per watt and per dollar. The 7900 XTX is a good card, with problems it shouldn't have.

  • @Nordmende01
    @Nordmende01 1 year ago +1

    There is a problem where performance goes down if you overclock the GPU.
    Some YouTube channels have reported on that.
    One thing about it catches my attention:
    the core clock goes up and the memory clock goes down.
    I think the memory controller stops getting data requests from the GPU and lowers its clock.
    Maybe I'm totally wrong, but could it be that there is error correction inside the GPU? If the clock goes up, more and more errors happen, and the data must be calculated again and again.
    If it is like that, that would explain the low activity on the RAM:
    it would all happen internally, inside the GPU and its caches.
    What do you think?
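    For illustration, a minimal Python sketch of that hypothesis: if the error rate climbs with clock speed and failed work must be redone, effective throughput can peak and then fall (the error model here is invented purely for illustration):
    def effective_throughput(clock_mhz: float) -> float:
        # Hypothetical error probability that grows quickly past ~2800 MHz.
        error_rate = min(0.95, max(0.0, (clock_mhz - 2800) / 1000))
        retries = 1.0 / (1.0 - error_rate)   # expected attempts per unit of work
        return clock_mhz / retries

    for clock in (2400, 2800, 3000, 3200):
        print(f"{clock} MHz -> effectively {effective_throughput(clock):.0f} MHz")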

  • @VoldoronGaming
    @VoldoronGaming 1 year ago +1

    What went wrong is that all the leakers were wrong about the shader count. They were right about the RTX 4090 but wrong about the 7900 XTX and XT. One has to wonder if it was purposely misleading, given that they were right about the 4090. Also, AMD cherry-picked benchmarks way more for this release than they did for the RX 6000 series.
    As for the 5700 XT, it was targeting the 2070, but many reviewers put it against the 2070 Super anyway. It was indeed faster than the 2070 and $100 cheaper.
    Hmm, about the transistor count: the RDNA2 Navi 21 die has 5120 shaders, the RDNA3 Navi 31 die has 6144 shaders. Does that sound like double the transistors to you? It would mean their CUs got LARGER on a new node, if all 6144 shaders account for those 57 billion transistors... maybe some of the die is disabled.

    • @HighYield
      @HighYield  1 year ago +2

      The rumors were indeed way too optimistic. As for the transistors: AMD added AI cores for the first time, the RT cores got bigger, there's a much improved display engine, a dual video decoder that now also supports AV1, a new encoder, and a 50% larger memory interface, and while there aren't that many more shader cores, they are now dual-issue. Of course I don't know for sure, but I think there are no disabled parts.

  • @celdur4635
    @celdur4635 1 year ago

    AMD recently said they have expanded the set of machines they test their Radeon (should really be called ATI) products on, for the first time to more machines than Nvidia. But we won't see those gains until later in 2023 or 2024, since they are in the middle of testing.
    Not to mention management improvements, testing strategy, software, etc.

  • @abhishekchakraborty63
    @abhishekchakraborty63 1 year ago +1

    Never trust AMD leaks, never trust AMD fanboys on YouTube, never judge a product based on hypotheticals.

  • @ShailenSobhee
    @ShailenSobhee 1 year ago +1

    35% better than the previous generation, and the video's title is "what went wrong"? My take: that's a massive improvement.

  • @hrod9393
    @hrod9393 1 year ago +1

    The 6950 XT would need a 7950 as its counterpart, though. I'm an Nvidia supporter, but I'd just like to point that out.
    I'd also point out that their lower-cost chiplet design would be more apparent if they had a 12GB 7900; it'd be 400-500 bucks. VRAM and the supporting bandwidth are what make a card expensive these days.

    • @HighYield
      @HighYield  1 year ago

      The 6950 XT is just a slightly higher-clocked 6900 XT, so I think it's OK to compare them. And yes, VRAM cost is a large factor; 20GB and 24GB are not cheap at all. Nvidia has the cost advantage with 16GB on the 4080 and 12GB on the 4070 Ti.

  • @maral04
    @maral04 1 year ago +4

    7900 XTX owner here. I'm enjoying my awesome GPU that draws 110W at idle on the desktop :D (the previous generation also had issues with high-refresh monitors, drawing up to 50W at idle)
    While gaming I see a 30% performance uplift at the same wattage, compared to a 6900 XT LC (XTXH + 18Gbps).
    One more thing I noticed: if you force the GPU to stay at high clock speeds, let's say 3000MHz, it's going to sacrifice memory speed, resulting in a lower framerate (a 2400MHz core gives higher FPS than 3000MHz). Could it be that there's not enough power on the board to maintain high memory and high core clocks at the same time?

    • @Mr-Clark
      @Mr-Clark 1 year ago

      A 3rd 8-pin may be needed. But then AMD would have to eat their words, given their earlier claims.

    • @HighYield
      @HighYield  1 year ago

      It's a really interesting GPU, and I'm sure better drivers will fix a lot of the early quirks.

  • @PRIMEVAL543
    @PRIMEVAL543 1 year ago +1

    To be honest, I don't think they were lying; I just think their final tuning didn't work out as well as they had hoped.

    • @HighYield
      @HighYield  1 year ago

      Well, if you are about 6 weeks from launch, you need to be more careful with your claims, especially if your honesty is really valued by your customers.
      I also think AMD was trying to squeeze out more performance in time, but they should know better than to play this game.

  • @squirrel6687
    @squirrel6687 1 year ago +1

    As I've been saying since I joined the discussion, the performance gap between Nvidia and AMD was smaller in past generations, and it came down to process. This time Nvidia chose to go with a better process from TSMC, and that proved AMD's architecture was never actually better.
    TSMC is the winner here.

  • @maxwellsmart3156
    @maxwellsmart3156 1 year ago +1

    You have to go with the product you have and not the product you want. When the original Phenom (Barcelona architecture) processor came out, it had a TLB bug, and the BIOS fix could cost up to a 20% performance hit. At the time, AMD made a comment about the difficulties of marrying a design to a process node. There are always errata, and AMD is definitely not alone. If they can't patch it with firmware or drivers, they can spin up another batch with a fixed design. Will AMD refresh with a patched design, or just move on to RDNA4 instead? AMD is in a position to significantly undercut Nvidia, because their design should be much cheaper to produce. I'm hoping AMD will make the 7900 XTX a $699 card, not because of these issues, but because they can still make reasonable margins (to satisfy those investors) and drag Nvidia well into the deep end.

  • @vanceg4901
    @vanceg4901 1 year ago +2

    I'm interested in RDNA 3 integrated graphics for a budget laptop.

    • @HighYield
      @HighYield  1 year ago +1

      Phoenix is coming in the first half of 2023 and will have an RDNA3 iGPU!

  • @redko79
    @redko79 1 year ago

    Super video! It came right on time, just as I was deciding to switch from Nvidia to AMD for my next PC.

  • @ventyll1897
    @ventyll1897 1 year ago +5

    Like any new AMD architecture, it has its problems. The next driver versions, as well as much higher clock speeds of 3.0GHz+, will bring a significant advantage over the competition. I've never owned any Nvidia product; I've been using ATI/AMD products since the early 90s. In terms of price and quality, the new architecture leaves no doubt as to who won the race. Yes, as Linus said, it's AMD time.

    • @HighYield
      @HighYield  1 year ago

      RDNA2 had a much smoother launch; I was really hoping AMD would stick to that level of quality over time.

  • @Boorock70
    @Boorock70 1 year ago

    Loved the video. An RX 7X50 sounds a lot closer now. Thanks.

  • @abritabroadinthephilippines
    @abritabroadinthephilippines 1 year ago +2

    A 7950 XTX OC is what we're waiting for. 👍✌️

    • @HighYield
      @HighYield  1 year ago +2

      By then, I'll be waiting for the 8900 XTX ;)

    • @abritabroadinthephilippines
      @abritabroadinthephilippines 1 year ago

      @@HighYield rrr the circle ⭕ of GPU life. 🤣👍✌️🇬🇧❤️🇵🇭👨‍👩‍👧💯✅☮️

  • @simonecoggiola3128
    @simonecoggiola3128 1 year ago +1

    I returned a 6950 XT (XFX) just one month ago. Unfortunately, it was faulty, and the main issue was power. When it worked it was impressive, but then came a lot of green screens and reboots. It couldn't pass 303W (instead of the 321W/340W full power at stock settings). No way. A streamer had a similar problem with a Gigabyte 6950 (stuck at 328W instead of 350W or more). I'm starting to think some old manufacturing mismatch from RDNA2 is still present in RDNA3.

    • @HighYield
      @HighYield  1 year ago +1

      I bought an RX 6800 in early 2021 and haven't had any issues so far; RDNA2 is very stable overall. Sorry you got a faulty one!

  • @viktortheslickster5824
    @viktortheslickster5824 1 year ago +1

    If you think about it, the 7900 XTX was never meant for greatness. This is how I see it, comparing it with the 6900 XT:
    The 7900 XTX has 20% more compute units (96 vs 80), an average in-game clock that is around 13% higher (2650MHz vs 2350MHz), and the individual WGPs in RDNA3 have an IPC advantage of around 17% according to AMD's slides. So let's do the math (spelled out in the sketch below):
    1 * 1.2 * 1.13 * 1.17 = 1.586, or +58%
    The only factor this analysis doesn't take into account is memory bandwidth, and we'd expect an additional smaller boost there, because I think many people agree the 6900 XT was slightly bandwidth-starved at 4K due to its 256-bit bus, and the 7900 XTX is very impressive in the memory bandwidth department.
    In some games we do see a +60% improvement of the 7900 XTX over the 6900 XT, and my thinking is that this is where the game engine/driver can capitalize on the new architecture's 17% IPC improvement. I have to admit I was disappointed that AMD doubled the number of FP32 units per shader and added larger registers/L1 cache, only for a meager 17% increase. In some games it looks like the new architecture doesn't offer anything above RDNA2, and that will be a challenge for the driver team. That is my interpretation; always enjoy your videos!
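    The compounding above, as a minimal Python sketch; all three factors are the commenter's estimates, not measured values:
    cu_scaling    = 96 / 80        # 20% more compute units
    clock_scaling = 2650 / 2350    # ~13% higher average in-game clock
    ipc_scaling   = 1.17           # ~17% per-WGP IPC gain, per AMD's slides

    expected = cu_scaling * clock_scaling * ipc_scaling
    print(f"Expected 7900 XTX over 6900 XT: +{(expected - 1) * 100:.0f}%")
    # prints roughly +58%, before any memory-bandwidth effects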

    • @Pillokun
      @Pillokun 1 year ago

      Wait, wait, wait, it must be the coffee, but do you mean the 6900 XT ran at 2350MHz? I had two 6900 XTs and both ran at about 2540MHz without my doing anything. So the game clock is a worst-case figure, like Nvidia's stated boost clock, a minimum that every GPU exceeds by a wide margin.
      To me it seems like everything is in order, i.e. as it should be, if we talk about the increased memory bandwidth and shader count over the 6900 XT. But if the new RDNA3 GPU is really built like Ampere/Ada, then we should get an additional perf uplift of at least 50%, which we don't see. So something is up with the FP performance. And let's be honest, future games will probably not use that many integer operations, so having INT/FP-capable ALUs is the future; why would you have unused die area?

    • @viktortheslickster5824
      @viktortheslickster5824 1 year ago +1

      @@Pillokun You should check out an article by ComputerBase where they did a clock-for-clock comparison between RDNA3 and RDNA2. They found the rearchitected RDNA3 compute unit only gives a 9% performance increase on average, so definitely a long way from 50%!

    • @Pillokun
      @Pillokun 1 year ago

      @@viktortheslickster5824 That means it does not work as it should. If you make the previously INT-only ALUs able to execute float as well, and there are as many of them as there are FP32 ALUs, you suddenly gain that much FP performance: from 6K FP32 units we now have 12K float-capable ALUs.
      Now, games still need integer calculations, but at most a game does around 25 to 33% integer work relative to the total float work. So of the 6K gained FP units we have to remove at least 33% for integer: 6K FP32 x 0.66 = just a bit less than 4K additional FP32 units. Then we remove a bit more for microarchitectural deficiencies and for games not utilizing or scaling 1:1 with the available float units (rough numbers in the sketch below).
      So we should at least see gaming performance that equates to 9K FP32 units, and that is not what we see, because something is up with RDNA3.
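      The dual-issue arithmetic above as a small Python sketch; the 33% integer share and the final loss factor are the commenter's assumptions, not measurements:
      base_fp32  = 6144    # FP32 ALUs, RDNA2-style
      dual_issue = 6144    # second issue slot, now FP32-capable as well
      int_share  = 0.33    # assumed fraction of work that stays integer

      extra = dual_issue * (1 - int_share)    # ~4K additional FP32 slots
      print(f"Paper total: ~{base_fp32 + extra:.0f} effective FP32 units")
      # ~10K on paper, ~9K after further real-world losses, which is far above
      # what the benchmarks actually indicate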

  • @quantumdot7393
    @quantumdot7393 1 year ago +3

    Something that has to be said, and was touched upon in this video, is that AMD is not getting the performance per die area people claim. If you look at the charts AMD shared, cache only shrinks about 10% going from 6nm to 5nm, which means a monolithic 7900 XTX would have been around 480mm². The 4080 has a die size of about 380mm². With less than 80% of the theorized 7900 XTX die size, the 4080 matches it in raster and destroys it in ray tracing/upscaling, all while being more energy-efficient. I understand that the 4080 is more expensive, but putting aside the prices companies assign to their products and just analyzing the technology, Lovelace is a far superior architecture to RDNA3, and AMD lost badly. Maybe they can do better with a new release, but I don't think this can be mitigated with drivers alone.
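    The area comparison worked through as a quick Python sketch; the 480mm² "monolithic Navi 31" figure is the commenter's estimate, not an official number:
    navi31_monolithic_mm2 = 480   # estimate: GCD + MCDs rescaled onto one 5nm die
    ad103_mm2 = 379               # RTX 4080 die

    print(f"AD103 is ~{ad103_mm2 / navi31_monolithic_mm2:.0%} of that area")
    # ~79%, yet it matches the 7900 XTX in raster: the perf-per-mm² point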

    • @mjkittredge
      @mjkittredge 1 year ago

      Going by price to performance, AMD did not lose badly. I wouldn't say far superior technology either. It's just that RDNA3 was rushed out before they perfected it on the software side, and that's very fixable. When fully overclocked, the 7900 XTX matches the 4090 for $500 less, and that's before the driver updates that are sure to come. Go look at the benchmarks.
      Lovelace is superior in RT and efficiency, but overall no. RDNA3 has higher potential at much better value.

    • @quantumdot7393
      @quantumdot7393 1 year ago +1

      @@mjkittredge If we are going to compare overclocking, then also overclock the 4090. If you do that, the 7900 XTX will be a bit behind the 4090, which really gives you the same result as the comparison with the 4080, because the 7900 XTX is a fully unlocked die and the 4090 isn't. If you count only the enabled part of the die, the 4090 is effectively around 540mm², slightly bigger than the 7900 XTX and slightly better in raster, and again it destroys it in ray tracing, upscaling, and efficiency. So: similar conclusions. Not to mention the 4090's extra cores don't scale linearly compared to the 4080; it is really too big for games to take full advantage of. It has 60% more cores than the 4080 but gets only 30% more performance.
      And as I said, drivers will not fix this. If you look at the games where RDNA3 allegedly shows its true potential, you will notice that RDNA2 is also doing better than its average there. These are games that simply run better on AMD hardware; they are not a window into RDNA3's true potential. And I would say the AIB models are already working around RDNA3's silicon flaws by just giving it more power. If AMD fixes the silicon, I imagine the gains compared to those models will be in efficiency, not performance.

  • @vsuarezp
    @vsuarezp 1 year ago

    16:51 to 16:52 - At some point you can hear a very low "Perfect" from the Street Fighter II announcer. Pump up the volume and you will be able to hear it. How did it get there??? Who knows?

    • @HighYield
      @HighYield  1 year ago +1

      OMG, you are right, and I just freaked out a bit: how did it get there?
      But I think I figured it out: it's part of the background music. I run the background music through noise reduction against my voice, so it never overpowers it. But the quick "Perfect" was too fast and too loud to get caught by the algorithm, so it sticks out.
      Really cool that you noticed!

    • @vsuarezp
      @vsuarezp 1 year ago +1

      @@HighYield I only noticed because I am using a very good headset today, and it reproduces every little detail, even at low volume. And I played SFII like crazy back in the 90s. Let's say it was a nice easter egg. LOL.

    • @HighYield
      @HighYield  1 year ago +1

      Maybe it's there just for you ;)

  • @twoeggcups
    @twoeggcups 1 year ago +1

    Nothing went wrong. The RX 7900 XT gives >50 TFLOPs of FP32 and the XTX gives >60 (see the math below)! The previous gen was absolutely hopeless for productivity. The problem here is that you're comparing against the 4090, which is a freakish GPU that is really unsuitable for gaming. The fact that NVIDIA had to build such a ludicrous cooler suggests they were running scared.
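    Where those headline FP32 numbers come from, as a short Python sketch; the shader counts are the official specs, while the boost clocks are approximations:
    def fp32_tflops(shaders: int, boost_ghz: float) -> float:
        # An FMA counts as 2 ops per clock; RDNA3's dual-issue doubles that again.
        return shaders * 2 * 2 * boost_ghz / 1000

    print(f"7900 XTX: ~{fp32_tflops(6144, 2.5):.0f} TFLOPs")   # ~61
    print(f"7900 XT:  ~{fp32_tflops(5376, 2.4):.0f} TFLOPs")   # ~52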

  • @ImaITman
    @ImaITman 1 year ago +1

    Are we comparing flagships, though? Isn't there going to be a 7950 introduced at some point?

    • @HighYield
      @HighYield  1 year ago

      We don't know yet; the 7900 XTX is for sure AMD's flagship atm.

  • @TheGrizz485
    @TheGrizz485 1 year ago +1

    So is the 7900 XTX a GPU with hyper-threading enabled?

  • @Machistmo
    @Machistmo 1 year ago

    35% over the 6950, say that again and think... The next gen of AMD CPUs and GPUs will be FANTASTIC. This was them dipping their toes into a new architecture.

  • @lamjay6656
    @lamjay6656 1 year ago

    Have all these flaws and issues more or less been corrected in the upcoming Navi 32 (RX 7800 XT) release?

  • @Kernoe
    @Kernoe 1 year ago

    Such great information. I wonder why the more popular hardware channels are hardly covering this; they are too busy with the products being thrown at them.

  • @BTAT2101
    @BTAT2101 1 year ago +1

    In the supplementary keynote info from AMD's November presentation, you can see that they used a 6900 XT instead of a 6950 XT for the benchmark comparison, along with some voodoo 7200MHz DDR4 RAM. Since the 6950 XT is on average 15% faster than the 6900 XT, it could be that they messed up the presentation, and here we go with 35% instead of 50% (quick check below).
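    A quick Python check of that mix-up hypothesis, using the commenter's 15% figure (their estimate, not a measured average):
    uplift_vs_6900xt = 1.50    # AMD's claimed generational gain
    gap_6950_vs_6900 = 1.15    # the commenter's estimate of the 6950 XT's lead

    print(f"Implied gain vs 6950 XT: +{(uplift_vs_6900xt / gap_6950_vs_6900 - 1) * 100:.0f}%")
    # about +30%, in the neighborhood of the ~35% reviewers measured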

    • @HighYield
      @HighYield  1 year ago +1

      In the slides they say "vs 6950 XT", so that's what I expected.

  • @marsflee3815
    @marsflee3815 1 year ago

    Hopefully driver updates can take advantage of the chiplet design and increase its performance.

  • @thiagodelgado3128
    @thiagodelgado3128 1 year ago +1

    I believe they need to improve performance game by game, not with the generic overall DX11 fixes they've been doing.

    • @HighYield
      @HighYield  1 year ago

      Yes, the dual-issue shaders need a lot more game-specific optimization.

  • @albtein
    @albtein 1 year ago

    It's common for tech companies to release unfinished products and sell them to get the money to fix the issues.

  • @fletcher9328
    @fletcher9328 9 months ago

    What about the cache hierarchy, which hasn't been doubled over RDNA2? RDNA3 is choking on its own memory bandwidth, and this explains both the performance and power issues.

  • @K9PT
    @K9PT 1 year ago +1

    So for you, price is not important, and neither is the problem the RTX 4090 has with melting power cables... NICE