Intel Engineer explains BAD Arrow Lake Performance, Battlemage, AMD Zen 5 Turin | Broken Silicon 279

  • Published Dec 14, 2024

COMMENTS • 755

  • @MooresLawIsDead · 1 month ago +11

    [SPON: Support MLID by Downloading Filmora 14 for FREE: bit.ly/47zm388 ]
    [SPON: Use "brokensilicon" at CDKeyOffer to get Win 11 Pro for $23: www.cdkeyoffer.com/cko/Moore11 ]
    #Filmora14 #videoeditor #MoreAI_LessClicks #EditYourWayToSuccess

    • @henrythegreatamerican8136 · 1 month ago

      If I got $1 for each time he said "uhhhhh" or "ummmm" I'd be able to buy enough INTEL stock to own the company.

    • @JackWse · 1 month ago

      Hey, for what it's worth, anybody I've seen try to max out the ring bus clock hasn't really managed to get a lot of performance scaling from it... so they may actually just have it set there for reasons other than, lol, a future investment in an RMA department.

  • @MechAdv · 1 month ago +255

    $300 is a wide-open market for Intel in the GPU space. If they offered something that consistently beat Nvidia's 60 series on value and anchored themselves as the best option at that price point, they would move units.

    • @HXRDWIREDGaming · 1 month ago +15

      Yes, but in the scheme of datacenter and larger clients, the "gamers" are such a horrendously small subset that they usually get hand-me-downs. Intel will go full throttle into AI and won't go into death throes, but it's going to be shaky for 5 years or so (IMO).

    • @SirSomnolent · 1 month ago +12

      They kind of need the architecture and performance anyway for iGPU, or they're toast. I don't see stretching into discrete GPU as a bad play.

    • @Fk8td · 1 month ago +6

      @HXRDWIREDGaming Intel GPUs are extremely good for creators at their price compared to Nvidia. If they make a GPU with a larger amount of VRAM, it will likely dominate with creators in the future.

    • @HXRDWIREDGaming · 1 month ago +2

      @Fk8td Nvidia sees the writing on the wall. You see 16 GB and say wow, that's kinda low, but when it performs like AMD's 24 or even 32 GB, that's what they're after. AMD won market share by chance and shoved in as much horsepower as they could. Too bad their driver team for the entire company is like 10 people lol.
      We're talking about a sub-billion-dollar market vs. a potential of billions to trillions; there's no comparison.

    • @christophermullins7163 · 1 month ago +7

      Personally, if there is a single game I enjoy that doesn't run, the GPU is worth a third of what its "normal" performance would suggest. Running 98% of games is a fail; you need 99.7% minimum. That's why I have Nvidia. 🤷 Not worth my time to get a game installed and be let down even once.

  • @Brent_P · 1 month ago +213

    Intel fails to break into the discrete GPU market due to a lack of patience. No one dominates the market upon their first release.

    • @i486DX66 · 1 month ago +14

      Their best opportunity was in 2012 with project Larrabee, although maybe the Larrabee architecture was not the right approach. The bean counters killed the project.

    • @draconian6692 · 1 month ago +2

      Agreed

    • @Brent_P · 1 month ago +2

      @i486DX66 I remember reading about Larrabee back in 2008.

    • @Smartcom5 · 1 month ago +1

      @i486DX66 What a nonsensical take …
      Intel had already been trying to step into that market while avoiding any 3D rendering (after they had already failed at 3D), going after the very GPGPU route you're referring to with their many-core architecture of a bazillion tiny lightweight x86 cores thrown together.
      At first it was called Larrabee, which failed. Their second try was called Xeon Phi, which failed again.
      -
      Those two stillborn children, Larrabee and Xeon Phi, were *doomed from the very beginning*, as they tried to compete with AMD's and Nvidia's highly parallelized graphics by brute-forcing their way into the market with a non-specialized x86 many-core architecture.
      It was literally a brute-force attack, using a multitude of simple general-purpose x86 cores against highly integrated, highly specialized graphics IP. It was doomed to fail from the start, since you just can't beat a GPU's thousands of stream processors with ordinary x86 cores; that's just impossible. Trying it is stupid, to say the least, and a traditional GPU with its refined ALUs will always come out on top with ease, no matter what.
      -
      The problem is, Intel always offered x86 as the one and only panacea, the industry's universal remedy for every problem (of x86) arising: just replacing one evil with another. That's like trying to cast out devils with Beelzebub. It doesn't work, and never has, not even once.
      Their x86 as a substitute for a graphics card failed hard (Larrabee). Then they tried rehashing the leftovers of Larrabee on their Many-Integrated-Core architecture (MIC) as Xeon Phi. Needless to say, it never got past the trying, again. Next up were the mobile markets, with their Atoms against ARM offerings: subpar at best. Then their x86-based modems, Quark cores (even slower than Atoms), and so on.
      Intel only ever had x86 at their disposal as the lone solution to every problem arising … well, apart from the one time they didn't and shipped the infamous Itanium, their iconic Itanic. After being sunk (at massive cost, as a fundamental fallacy) into insignificance, it had nothing left but to go down in history as the industry's single worst and longest dead-on-arrival (or at least comatose) microarchitecture that has ever existed to date.
      -
      Then again, guess who was in charge of Larrabee … Exactly!
      Your bad ol' friend Gelsinger, living in la-la land back then too, when he claimed that GPUs had no future and that Intel's non-starter Larrabee project was about to take over the market, just as he recently claimed that Nvidia's position was pure luck … He's out of his mind, a maniac.
      arstechnica.com/gadgets/2007/04/clearing-up-the-confusion-over-intels-larrabee/
      Gelsinger was manic and daft enough not only to spearhead Larrabee (and consequently Xeon Phi), but to think he could beat highly specialized ASICs (ALUs, for that matter) at their own game using a mere general-purpose core (x86). Larrabee was Gelsinger's baby, and it still is. The worst part is that, deluded, he dragged Intel down not only with the Larrabee/Xeon Phi supercomputer push, but even with the later Aurora at Argonne National Laboratory (Intel got slapped with a $600M fine over that project; they made huge losses).
      All in all, Gelsinger and his delusion are single-handedly responsible for the whole megillah, the bad string of consequences from Larrabee, Xeon Phi, everything Aurora, and the folly that spawned everything Ponte Vecchio. His stubbornness has cost Intel billions to date on Aurora alone, never mind the awful mess of their DG1/DG2/Xe and finally Arc graphics. Consequences of Gelsinger alone. He is plain ill.

    • @jonissesmarchadesch7025 · 1 month ago +2

      Never underestimate the brainwashed NVIDIA fanboys

  • @Meinhardt.Erschwans · 1 month ago +36

    Dude, let your guests talk! You're talking more than the people you're interviewing, and then also constantly interrupting them. What's the point of bringing in guests then?

    • @asdfghjkl1755 · 1 month ago +2

      The guy had nothing interesting to say.

  • @moist_ointment · 1 month ago +175

    Intel *has* to keep GPU alive. They need good iGPU in mobile. They need GPU for datacenter. At that point, releasing a desktop card isn't a ton of extra work.

    • @fleurdewin7958 · 1 month ago +12

      Datacenter GPUs and gaming ones are very different. That is the reason AMD has RDNA and CDNA for separate use cases, and Nvidia has its own datacenter GPUs.
      Discrete gaming GPU margins are very small; you need very high volume to be profitable. So it is tough for Intel in gaming dGPUs.

    • @moist_ointment · 1 month ago +19

      @fleurdewin7958 They're not *very* different. AMD is introducing UDNA to unify the two.
      Nvidia uses the same dies in their RTX gaming GPUs, their A-series workstation GPUs, and their L-series datacenter GPUs: the L20, L40, A5000 Ada, A6000 Ada, 4090, etc. are all AD102.
      Not to mention the synergies the NRE brings. See how ARL desktop has ray tracing cores in its iGPU? Because the Alchemist+ design uses scalable IP blocks, where the end product adds more or fewer of these units, rather than being a different design that adds/removes specific IP.

    • @GreenJalapenjo · 1 month ago +3

      Why do they need GPUs for datacenter? Are they realistically going to be competitive wrt datacenter GPU compute against AMD and NVidia?

    • @moist_ointment · 1 month ago +14

      @GreenJalapenjo Idk, why does Nissan make a pickup truck? You think Intel should just give up on competing in one of the most important market segments because Nvidia is better? Do you think AMD's position in 2nd place is secured long term?
      I'm sorry, but it's just a nonsense question. People who tell Intel they should just focus on x86 CPUs are specifically arguing that Intel should intentionally make themselves irrelevant long term instead of adapting to market conditions.

    • @Nanerbeet · 1 month ago +1

      Actually, the desktop card is the most difficult thing to do. Making and optimizing the drivers to be competitive is nearly impossible.

  • @reinhardtwilhelm5415 · 1 month ago +218

    Intel could be saved by Nvidia’s greed here, just throwing that out there - remember, no matter how fast the RTX 5090 is, the RTX 5060 is unlikely to be more than about 20% better than the RTX 4060 per dollar.

    • @cykablyatman5758 · 1 month ago +31

      It's prolly gonna be 20% faster overall and cost more, so prolly like 5% faster per dollar, if that (1.20x the performance at, say, 1.15x the price works out to only 1.20/1.15 ≈ 1.04x per dollar).

    • @l0lum4d · 1 month ago +19

      @Dryloch That's a shockingly ignorant comment.

    • @glalih · 1 month ago +18

      @l0lum4d Did you misread 4090 as 3050? Because a 4090 is going to last him a long time. And as a published game dev... I can safely say that the current crop of devs working with brand-new engines think of optimisation as "turn DLSS on"... I'm really struggling to find the ignorant part of his comment.

    • @travisjacobson682 · 1 month ago +6

      @Dryloch Games run poorly because they target the consoles, and not enough resources are invested in ensuring performance scales properly on higher-end hardware.

    • @DELTA9XTC · 1 month ago

      @travisjacobson682 And because companies hire based on racist, sexist DEI "qualifications" and not on the only real qualification: MERIT. They are more interested in pushing a message/an agenda to their audience than in giving the audience what it really wants. They actively make male characters fat, or feminine, and obviously also gay at the least; they make female characters look worse and ugly, possibly fat, less sexy, more masculine; they gender- and race-swap existing and beloved characters; they only care about virtue signalling. But you know what's a good thing? As we can see with those ludicrous games like Concord, Dustborn and, I hope, Dragon Age, the majority of people don't need every sexuality on this planet in their game; they just want a good game that doesn't push a specific political message. And the gaming journalists, the "big papers", do exactly the same as the big dev companies (Ubisoft, Blizzard, etc.); they all have the same opinions.
      As soon as a company doesn't care about the mainstream gaming journalists, like Game Science for example (Black Myth: Wukong), those gaming "journalists" come out with a fake story about the supposed "sexist comments" of that dev company's CEO (which were completely mistranslated and, again, handled absolutely ridiculously by those "journalists"), trying to destroy the success of the game before it even gets released. Then they write articles that Wukong is bad because there are no females in the game. I'm sure you'll find articles criticizing that there are no alphabet people in it as well. Those people are INSANE, and they are destroying gaming. Companies like SBI (Sweet Baby Inc) literally destroy gaming with their unfathomably racist and sexist beliefs and services.

  • @thepuff791 · 1 month ago +12

    It is so cool to have an actual Intel Engineer on this. Love hearing from the actual team that designed something (good product or bad). Very interesting.

  • @Tharlyn7435 · 1 month ago +151

    I like this channel a lot, but I did not like this interview at all. You bring in a guest with very unique experience and insights, and you do not let him talk! I am through the first half hour, and all I hear are your observations and theories, as if you mostly wanted to hear their confirmation from your guest. Please let your guest do the talking; it will make the interview much better. Thank you.

    • @Met1900 · 1 month ago +18

      Well, he just wants to bash Intel, that's all. He talks like Arrow Lake is complete trash, but the thing is, it has the best multicore performance and beats the 9950X in everything. This guy is so AMD-biased it's sick.

    • @andersjjensen · 1 month ago +21

      @Met1900 Benchmarks aren't out yet, and the slides Intel showed were "interesting" when it came to which power targets they were using for which slides. So hold your horses on the assumptions. And the Intel guy seemed pretty on board with Tom's assessments.

    • @TAH1712 · 1 month ago

      @Met1900 If you look at and compare the overall results for the 9950X / i9-14900KS / Ultra 285K across the average CPU suite metrics posted in the PassMark CPU benchmark score table, the 285K wins in 3 of the 9 characteristics tested but is last in the other 6. The 9950X wins in 5, is second in 3, and is last in only 1. The 14900KS still wins in 1, is second in 7, and last in 1. It's looking difficult not to be disappointed in the 285K so far. Listen, I'm no expert; wait and see more from CPU commentators like Level1Techs or der8auer EN.

    • @ridleyroid9060 · 1 month ago +8

      @Met1900 I do, and likely will continue to, use AMD CPUs, but yeah, this is correct. Why are people shitting on Arrow Lake when it is a genuine, logical step forward? There is only so much performance headroom you can push before you go full Nvidia and require rocket fuel to power your CPUs and a physical relocation to the Antarctic to cool them.
      Pushing more efficient products that still give slightly more performance, while being more stable, is the correct step forward imo.
      Will it make tech reviewers cream themselves when they hyper-overclock it to like 8 GHz, render 10 videos at the same time in 3 seconds, and get like 1000 fps at 4K in Horizon Forbidden West? No, of course not. But if the price is right and it lowers your electric bill, that is something to be appreciated, and it may pan out long term.
      Again, I don't think I'm switching to Intel; I'm just too comfortable in the AMD ecosystem and familiar with what I get from their products (I have a 5600 and can slot in a 5800X3D if I really need to, so I'm good for a few years at least). But Arrow Lake being painted as the death knell of Intel is overblown af.

    • @HanSolo__ · 1 month ago +21

      @Met1900 He is not AMD-biased. He laughs all the time at kids squabbling over which company is the best. He has called AMD's marketing stupid multiple times. He does not take any company as better than the other, because he is a grown man.
      The problem, as mentioned above, is not letting the guest talk.

  • @JusticeGamingChannel · 1 month ago +148

    This was more of an interrogation than an interview, IMO. The presenter led the narrative of the conversation, telling the guest what he thought he was going to say, rather than letting the guest just say what he was going to say and then responding or reacting to that. The presenter spouted opinions and beliefs more than he let the guest speak and present their own narrative and take, IMO. This seemed more like a mouthpiece for knocking Intel than openness to what the guest had to say.

    • @zorbakaput8537 · 1 month ago +23

      True to form for this channel imo - if you have watched more than one episode.

    • @DatAsianDude · 1 month ago +16

      Eh, the guy is an engineer still in his relatively early years. Unless they're a higher-up, younger people struggle to speak confidently. Just be amazed that he was even willing to talk to his supervisor and ask permission to give his two cents on behalf of his company to a tech influencer.

    • @FrancisBurns · 1 month ago +5

      Remember that Tom needs to validate all the leaks he makes. It would be hilarious if someone made a compilation of Tom saying "which I/this channel leaked X time ago", or of him trying to explain away a bad leak by saying "I never said X".

    • @ridleyroid9060 · 1 month ago +13

      Welcome to Moore's Law is Biased asfk. It's honestly part of the entertainment to me; I don't take him too seriously.

    • @Met1900 · 1 month ago

      @JusticeGamingChannel Bro, wake up. This guy will do anything to make Intel look bad. If you listened to him, you'd think Intel is bankrupt, sold off, and the workers are getting killed. I think he holds a ton of AMD shares and wants Intel to look as bad as possible on his channel. I watch just for entertainment.

  • @YogyBear · 1 month ago +71

    Obviously I haven't listened through the podcast yet, but I have to say that it's always incredible when you manage to get talented people in the industry like this on the show. Glad to see them coming back on, and I'm sure this will be a great episode!
    Keep up the great work!

    • @luxemier · 1 month ago +2

      It's always interesting how Intel made a much bigger improvement than AMD this gen in FPS-to-wattage ratio, and he still calls it 'BAD' in the title lol.

    • @justinvanhorne8859 · 1 month ago +1

      @luxemier It is very difficult for companies like Intel, which have gone bankrupt repeatedly (Tom's best sources and claims, not mine), to release anything. Let's give them some credit!

    • @abdfahim1144 · 1 month ago +7

      @luxemier If you run a race in 10th position, then try hard and reach 5th, that's not good, it's just alright... But if, on the other hand, you're in 3rd and then grab 2nd, yeah, your improvement is smaller, but it's still way better than 5th... I hope that answers why it's called bad.

    • @luxemier · 1 month ago +2

      @justinvanhorne8859 $55 billion in revenue and going bankrupt? lol, you're funny.

    • @brugj03 · 1 month ago +3

      @luxemier Well, it was absolutely dismal before, so I guess there was plenty of room for improvement.
      AMD has so much better efficiency it's not even funny. No wonder they're doing worse, I guess, but they're still doing it.

  • @devonmoreau · 1 month ago +85

    An Intel engineer?! I really appreciate him coming to give us his insight!

    • @syedshaqutub613 · 1 month ago +17

      I have to be honest: this engineer doesn't have even an ounce of a clue about what's happening on the GPU side.

    • @Alex-wg1mb · 1 month ago +1

      Let's hope he doesn't end up like a Boeing whistleblower.

    • @thepunish3r735 · 1 month ago +20

      @syedshaqutub613 Separate division... Intel is massive and global... he's a peon, if anything, and knows nothing of the company as a whole.

    • @syedshaqutub613 · 1 month ago +1

      @thepunish3r735 That is precisely the point I want to make.

    • @thepunish3r735 · 1 month ago +7

      @syedshaqutub613 I have talked to engineers working on the new N.A. EUV machine... they have nothing but great things to say... and they don't say "Intel", they say "we".

  • @NAMOR5000 · 1 month ago +3

    Rooting for INTEL to stay afloat and truly phoenix in both the CPU and GPU areas. Let them come out with a 385 that will truly do what they had in mind before, and do even better on the CPUs. They were on a good path with the Arc GPUs and what was supposed to come after; we NEED more competition and affordable options. They have proven they can design it, and I thought it was awesome how much the software engineers were able to improve performance with updated drivers, in my view proof that they do have the capability. Not even addressing all the 'other' out-of-the-box thinking. Keep the faith, you can do it Intel 🤛

  • @predabot__6778 · 1 month ago +15

    Regarding the Zen 5 IO die: Wendell at Level1Techs has done some testing, and the new IO die that Epyc uses (and presumably Threadripper would) makes a massive difference; memory performance is massively improved, latency is improved, and on and on. So yeah... had AMD shipped a new IO die for desktop, they could perhaps have delivered on their performance claims.

  • @kyraiki80 · 1 month ago +89

    It's Dan with a voice changer 😁

    • @Jason_Quinn · 1 month ago +24

      The tell will be if he says "exasperate" when he means "exacerbate."

    • @rednammoc · 1 month ago +1

      Dan is OK

    • @geoffreystraw5268 · 1 month ago +6

      Funny, but the breathlessness of the voice is different from Dan's, who is more nasally.

    • @christophermullins7163 · 1 month ago

      @Jason_Quinn That was mean. 😢

    • @BellJH · 1 month ago

      100%. He has a lot of the same mannerisms.

  • @dronecz19 · 1 month ago +4

    I'm the mysterious engineer from Intel. I've said everything, even what I didn't want to say. It's great here.

  • @MaximusTruth · 1 month ago +59

    *"You Know"* - every time Tom utters this phrase, take a shot. *WARNING:* you will be extremely drunk... dangerously so.

    • @GREG_WHEREISTHEMAYO · 1 month ago +7

      If you want to get even more wasted, take one every time one of the two says "uhm" 😂

    • @timothyvaher2421 · 1 month ago +1

      "You know, that's like having Uber deliver you a half gallon in the parking lot after walking out of the treatment center."

  • @mytech6779 · 1 month ago +33

    Arrow Lake is clearly just an emergency stopgap; they needed to fill the hole in the product line and rebuild customer confidence until the new fab is operational. 14th gen would have been milked for a while longer, but... yeah, it's no longer market-worthy.

    • @winstonrhock9021 · 1 month ago +3

      This

    • @J-Kimble · 1 month ago

      Intel has been waiting for the next big thing for the past 10 years. All they have is stopgap products and big projects not panning out. It was supposed to be Alder Lake, then Raptor, then Rocket, then Meteor, and god knows what. The fabs are the same story: the next breakthrough is always just around the corner with the next process node, which eventually gets cancelled.
      IMHO they are just appeasing shareholders so they don't get sued into oblivion. They are in big trouble.

    • @johnscaramis2515 · 1 month ago +3

      It's more than a stopgap, in my opinion. They were behind AMD+TSMC in nodes, which, together with the monolithic design, meant higher costs and lower yields. At some point Intel needed to introduce their tile design in the mainstream market. It's not like AMD's first chiplet design was optimized; they were simply lucky to get a top-notch TSMC node to compensate for the initial drawbacks of the chiplet design.
      Which might also be a problem Intel is facing: up to now they developed their new CPUs in combination with their own new production node. The optimizations that can be done there on both sides, implementation and manufacturing, are surely better than now, with Intel having to use a predetermined, predefined node from TSMC.
      But that's a lot of guesswork on my side.

    • @Hugh_I · 1 month ago +1

      While that sounds plausible performance-wise relative to RPL: no, it's not just a stopgap. Or at least, it was never intended to be one. It was clearly developed as the next big step toward their new core design, a generation that's at least as big a move forward as Alder Lake was. It also IS on a better node than RPL; there was no last-minute backporting or anything. Intel did not just decide yesterday to quickly cook up a refreshed core here; they were planning for much more, and once again missed their targets.

  • @ctjmaughs · 1 month ago +26

    They lost a lot of talent when Pat left the first time

    • @PsychomikeIV · 1 month ago

      Why did he leave the first time? What made him come back?

    • @ctjmaughs · 1 month ago +3

      @PsychomikeIV He wasn't doing anything the first time around.

    • @Smartcom5 · 1 month ago

      @ctjmaughs Gelsinger being failed, er, off to VMware back then happened for the simple reason that Pat, in his delusion of grandeur (which lasts to this very day, I might add), honestly thought he could be Intel's next CEO in line and inherit the role from Otellini, for no reason other than having been mentored by former Intel CEO Andrew Grove, and he effectively demanded it.
      Gelsinger being brushed off on purpose was only done to get rid of him, since he had become deluded enough to think he could take over from Otellini and saw himself as Intel's next CEO just because he fronted the Intel IDF and had Larrabee as his personal pet project (which later spawned Xeon Phi, and the whole string of consequences of all of it: Aurora, Ponte Vecchio, in turn Intel's DG1/DG2/Xe graphics, and eventually their Arc, plus their $600M contractual fine for delayed execution on Aurora).
      The problem with all this was that Intel's two stillborn children, Larrabee and Xeon Phi, were effectively doomed from the very beginning, as they tried to compete with AMD's and Nvidia's highly parallelized graphics by brute-forcing their way into the market with their non-specialized x86 many-core architecture.
      It was literally a brute-force attack, using a multitude of simple general-purpose x86 cores against highly integrated, highly specialized graphics IP. It was doomed to fail from the start, since you just can't beat a GPU's thousands of stream processors with ordinary x86 cores; that's just impossible. Trying it is stupid, to say the least; a traditional GPU with its refined ALUs will always come out on top with ease, no matter what. Everything about Larrabee was outright dumb from any technological standpoint.
      -
      Yet the overarching problem is, and always was, that Intel offered x86 as the one and only panacea, the industry's universal remedy for every problem (of x86) arising: just replacing one evil with another. That's like trying to cast out devils with Beelzebub. It doesn't work, and never has, not even once.
      Their x86 as a substitute for a graphics card failed hard (Larrabee). Then they tried rehashing the leftovers of Larrabee on their Many-Integrated-Core architecture (MIC) as Xeon Phi. Needless to say, it never got past the trying, again. Next up were the mobile markets, with their Atoms against ARM offerings: subpar at best. Then their x86-based modems, Quark cores (even slower than Atoms), and so on.
      So yes, he _was_ doing something the first time around. It was Larrabee, which he pushed heavily as his personal favourite.
      arstechnica.com/gadgets/2007/04/clearing-up-the-confusion-over-intels-larrabee/

  • @bassamatic · 1 month ago +44

    Guys... I bought an A750 for half the price of a 3070. It's a good-value card. I love the negativity though. The card has the most beautiful box and will become a collector's edition for sure, with muted blue lights and a clean look. It does a great job at video capture. It has been fantastic NOT having GeForce Experience installed... what a useless piece of software that was. I have never experienced a blue screen or a crash to desktop from a 3D game yet. It does NOT perform well in most new games... but I don't care, because those games suck. Ray tracing doesn't mean it's a good game LOL

    • @bmqww223 · 1 month ago +17

      I'll be honest here: I personally think this youtuber has no clue about Battlemage. He says random BS and gives info that gets contradicted by later news or leaks. In his previous video he said, yeah, Battlemage is dead, Arc has pulled down the shutters and closed up shop. Now he comes out, quotes some imaginary viewer saying Arc cards were supposed to beat the 3070, and says Battlemage will target the budget GPU range. Wait a minute, didn't he say Arc is dead and there would be no Battlemage? He even says no Battlemage cards have gone out for tests and no engineering samples have been created, yet magically we get Geekbench results via reputable hardware review websites, while he tells us there is no development or news in the Battlemage/Arc sector... We already know they will target lower-end models, and AMD probably realizes this threat; that's why they are singing a new tune about affordability, wanting to focus on market share, etc. I personally think Nvidia will do whatever it wants at the high end and AMD and Intel won't care much, like they haven't for the past few gens, while AMD and Intel fight over the budget range. And it will be a tough fight; otherwise a company like AMD wouldn't play a defensive game, given that they didn't hesitate for a minute to price as close to Nvidia as they could. Competition is the only factor making them think otherwise.

    • @C-M-E · 1 month ago +5

      I've been handcuffed to CUDA for non-gaming use for the last 14 years, and just recently adopted a 7900XT because they were stupid cheap and users finally got a toolkit to make use of the higher-end cards. I looked at Intel's cards for a long time though, and if they had better integration for use cases outside of games, those cards are bargains for what they can do outside of gaming. Drivers and top-end performance hold them back in gaming, we all know that, but they have great potential for tasks where parallel processing excels, like simulations. The big problem is that Nvidia has such a stranglehold on that whole segment of the market that you're forced to use an Nvidia card because of CUDA, and options dwindle like a candle in the wind when you stray from there.

    • @jeremymerry7967 · 1 month ago +4

      Which is why Arc can only really succeed at the low to lower-mid-range end; otherwise the value isn't there.

    • @did00p · 1 month ago

      I bought an A750 for $180 new, got AC Mirage for free, and paired it with a 5900X on a ROG Strix B450 mobo (lol, what a combination). I'm using it as a workstation display adapter for 2x WQHD + 1x FHD without any issues, but lately I've been playing some games, for example War Thunder and MGS V, which worked flawlessly, and now I'm even trying Witcher 3: it's actually 45% faster than a 4060 and only 20% slower than a 3070 with RT ultra at WQHD, XeSS quality. Also, you can OC it pretty easily.
      I had no expectations, but I am very satisfied.

    • @Slyons89 · 1 month ago +9

      I just want to point out that GeForce Experience is not required for using an Nvidia card. I have never installed it on my current system; you can just download the driver package without it, and everything else still works. Anyone who knows what they are doing can set their own graphics settings in games, and will use OBS for screen capture instead of Nvidia's ShadowPlay, which is ass.

  • @markldevine · 1 month ago +11

    I was going to sincerely attempt to squeeze a workstation workload into a desktop, AM5 + 9950X. After hearing the discussion at 1:06:51, I'm hitting the brakes. TR with the new IO die. I hope I don't have to wait long for the TR/TRP lineup.
    Great discussion, Tom & Anon.

    • @kevinerbs2778 · 1 month ago +8

      Level1Techs is showing that Zen 4 Epyc gets better memory bandwidth than desktop, even with RDIMMs. It looks like bandwidth is what's holding Zen 5 back on desktop currently.

    • @markldevine · 1 month ago

      "Process Node: The Zen 5 I/O die uses a more advanced process node, which allows for higher transistor density and improved power efficiency.
      Memory Support: Zen 5 I/O die supports higher RAM speeds compared to Zen 4, which can enhance overall system performance.
      Interconnect Improvements: There are enhancements in the interconnects within the Zen 5 I/O die, which help reduce latency and improve data transfer rates between the CPU cores and other components."
      I think AMD was too zealous in choosing things to cut down with Zen 5 desktop. My consumer opinion of the result is "kneecapped", but to each his own.

  • @thanosaias2717 · 1 month ago +43

    That info about Zen 5 server using a new IO die was epyc ;)

    • @kevinerbs2778 · 1 month ago +2

      That would also include Threadripper 9000 then, since it uses Epyc's I/O die.

  • @romaniachin6751 · 15 days ago +1

    Great video. Hope Intel survives; competition is good.

  • @zvonimirkomar2309 · 1 month ago +12

    A barely watchable video. Tom, you're putting your own constructions about things (interesting and informed, sure, but that's not the point of this kind of video) way too much into the mouth of your interviewee, who has a very relevant job for the subject at hand and probably unique insights. Plus all the ummmmms, aaahhhs and other repeated phrases, together with really bad sentence structure... just a waste of time. I looked forward to this video but learned nothing from it.

  • @JuanGarcia-lh1gv · 1 month ago +4

    I know other people are saying similar things, but Intel has the 300 dollar market and below. I see no reason why they can't do what AMD did with Zen. Put out a good product with a good price, gradually improve performance, efficiency and features every generation and focus on growing market share. It takes patience. Lisa Su seems to understand that.

  • @davidfell5496 · 1 month ago +5

    That was really interesting because it was honest and very informed. Thank you.

  • @dwu9369 · 1 month ago +19

    Arrow Lake is a full year too late to market. It was meant to be the 14th gen, not the 15th.

    • @philipppuchner1115 · 1 month ago +3

      Like how 11th gen should have been released as 7th gen, on the 10nm node, instead of as a 14nm+++ backport.
      Even under tick-tock, after 6th gen Skylake (tock) and the 7th gen Skylake refresh (tick), it should have been the next tock.
      Instead we got 8th, 9th, and 10th gen simply being Skylake with more and more cores to keep up with AMD's Ryzen, plus pedal-to-the-metal frequency maxing, no matter the (thermal and electricity) cost.

    • @MrQuashu · 1 month ago +2

      Yep, one year ago it would have been good.

    • @Gattberserk · 1 month ago +1

      Instead they came out with their refresh-of-a-refresh nonsense, which we got so sick of during the Skylake 14nm++++++ era with its four refreshes.

    • @happycube · 1 month ago

      @philipppuchner1115 Cannon Lake was basically halfway between 7th and 11th gen on 10nm, but it couldn't be produced in volume. It did have AVX512, at least...

    • @philipppuchner1115 · 1 month ago

      @happycube Yep, just small doses of Cannon Lake, and those quickly vanished... from the market, or just out of my mind, I don't know :)
      The first "real"(?) 10nm Intel CPU was 11th gen Tiger Lake for notebooks, while Rocket Lake was the 14nm+++ Sunny Cove backport for 11th gen desktop.

  • @GeraltofRivia5150 · 1 month ago +34

    Intel and Nvidia lost me years ago due to their anti-consumer/anti-competition behavior. I am glad Intel is struggling, but I am sad that their workers have to lose their jobs due to the terrible leadership that put Intel in its current state. Fire the idiots at the top.

    • @CyberneticArgumentCreator · 1 month ago +2

      What CPU are you using in your home desktop computer?

    • @Smartcom5 · 1 month ago

      Intel has been in a state where the bulk of its employees have eaten up the very delusions of their praised false god of pretended leadership.
      Most are blind fanboys of the blue brand anyway and deserve to be laid off. Sounds harsh, but most of them don't add any greater value to the company as a whole; they solely praise it to hell and back, and are even proud of that. It's cultural, though.

    • @thorwaldjohanson2526 · 1 month ago +4

      I want them to be competitive to keep the other players in check.

    • @ridleyroid9060 · 1 month ago

      There is no big tech company that isn't starkly anti-consumer to its bones. Not one. Not AMD, not Intel, not Nvidia. All of them will, and have, screwed the consumer over whenever they could get away with it.
      They can only be kept in check when they HAVE to win consumer trust with good competitive behavior. Currently Intel has to do that, because they're kind of losing on both fronts, CPU and GPU. Whether they will, well, I hope so. We need all 3 companies at each other's throats to have a healthy, consumer-friendly market.

    • @rotmistrzjanm8776 · 1 month ago +1

      Oh yeah, and AMD has been any better since the Zen 2 XT launch?

  • @biomagic8959 · 1 month ago +52

    I detect Asian descent in your anonymous guest 🤓

    • @AnEyeRacky · 1 month ago

      You CSI?

    • @MrTweetyhack · 1 month ago +8

      Thanks to you, we found him and fired him. We are shipping you a prize. Do hold your breath.

    • @pf100andahalf · 1 month ago

      He wasn't trying to hide his voice.

  • @MrHav1k · 1 month ago +7

    Wild how you got the Intel engineer on THIS WEEK of all weeks. This is the week they do the layoffs...
    I don't know if this week or next would have been better timing. It might have been titled "Disgruntled Ex-Intel Engineer" then lol. Great stuff Tom.

    • @dROUDebateMeCowards · 1 month ago +1

      It's Tom. Might as well be titled "a bunch of horseshit Tom credulously accepts and/or makes up". I just fully don't buy that he has an engineer in a position to actually know enough to speak meaningfully on this who is willing to risk torching his career.

    • @Hugh_I · 1 month ago +2

      @dROUDebateMeCowards From what he claimed, he was neither risking his career nor a disgruntled ex-engineer. Did you listen to the podcast? He clearly stated that he was on with permission from his supervisor, and there are several moments where he's choosing his words carefully to stay within what's okay to talk about. He didn't claim, nor appear, to be there to leak any inside info that would get him into trouble.

    • @dROUDebateMeCowards · 1 month ago

      @Hugh_I The only evidence I have for that claim is that notorious bullshitter Tom vouches for him.

  • @smakfu1375 · 1 month ago +3

    The problem for Intel in the inference space (on the model-execution side) is that if I'm running a serious workload, Nvidia GPU hardware, even GPU hardware that's several generations behind, is massively more performant and capable. I don't think I've yet run into any instance where an NPU does anything particularly well outside of very basic things (i.e. stuff users aren't going to care about or pay a premium for). Furthermore, it's pretty obvious that we're encroaching upon a "peak silicon" situation where we simply aren't going to get significant further process improvements, transistor density, power-handling improvements, or clock bumps. That leaves increasing die area via chiplets/tiles.
    The upshot is that this video's assessment, that Intel should first and foremost be focused on creating the best CPU cores possible, is absolutely correct. Not just because it's what they do well, but also because CPU cores and CPU uArch still matter significantly (and will as long as we have CPUs). Notably, there is still a plethora of uArch and ISA improvements we could benefit from, including (gulp) a revisiting of VLIW (from a non-religious perspective, this time), along with hardware-based nested loop flattening and dynamic recompilation (which would partially address the issues around VLIW optimization).
    What Intel needs to stop doing is chasing fads or markets they don't understand. Since the late 90's, they've had a serious problem with getting religious about chasing stuff that isn't core to, or that undermines, the CPU business. VLIW is potentially great, but their implementation in IA64 took such an uncompromising approach that they ceded x86 ISA leadership (permanently) to AMD64 (which, whether they like it or not, is what all 64-bit x86 CPUs are based on) and lost the 64-bit race. Then they blew wads of cash on successive attempts at MIC (which they tried to pitch as a GPU), from Larrabee onwards. Then they tried again with GPUs, and now they want to pitch NPUs, etc., etc. Meanwhile, they ignored the process improvements at TSMC and AMD's resurgence until all hell broke loose.

    • @lucasrem · 1 month ago

      GPU computing is the future; they are on the right path!
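
For readers unfamiliar with the nested-loop-flattening idea floated in the thread above: in software terms it means collapsing a multi-dimensional loop nest into a single induction variable, so there is one loop branch instead of two. A minimal C sketch of the transform itself (R and C are arbitrary illustrative dimensions; the comment speculates about hardware doing the equivalent dynamically, which this snippet does not model):

```c
#include <stdio.h>

enum { R = 4, C = 3 };   /* arbitrary example dimensions */

int main(void) {
    /* Nested form: two counters, two loop branches per iteration. */
    for (int i = 0; i < R; i++)
        for (int j = 0; j < C; j++)
            printf("nested (%d,%d)\n", i, j);

    /* Flattened form: one counter; the 2-D indices are recovered
       with a div/mod, leaving a single loop branch. */
    for (int k = 0; k < R * C; k++)
        printf("flat   (%d,%d)\n", k / C, k % C);

    return 0;
}
```

Compilers can already do this statically (OpenMP's collapse clause is the explicit form); the comment's suggestion is that hardware could perform the equivalent on the fly.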

  • @NZRanger · 1 month ago +2

    I'm going to remind people that once upon a time AMD was the underdog on the brink of bankruptcy, and look at them now! Things CAN change. Lisa Su not only turned AMD around but has now also just teamed up with Intel... something she would not have done if she thought Intel was done! Obviously there are things going on behind the scene we have no clue about.

  • @wallertdwp · 1 month ago +44

    Do you guys think Intel will eventually make CPUs with 3D cache?

    • @MooresLawIsDead · 1 month ago +68

      Yes - I have leaked it for Nova Lake, it's called "eLLC" and we talk about it briefly in this episode.

    • @steezegod2768 · 1 month ago +3

      @MooresLawIsDead Nova Lake is leaked to be on LGA 1851, correct?

    • @petern.8357 · 1 month ago +20

      They already did once, with the Broadwell generation. For example, the i7-5775C had a 128 MB L4 cache, and that CPU could compete with the later i7-7700K and more.
      The cache was shared with the Iris iGPU, BUT when you deactivated or simply didn't use the iGPU, the CPU used it all as L4 cache, as said.
      The clock speeds were quite low. Nevertheless, I once owned one, OC'ed to 4.4 GHz, and it was really awesome!

    • @Razzbow · 1 month ago +3

      What about adamantium?

    • @Decki777 · 1 month ago +1

      @Razzbow Adamantium? What is that, Wolverine's claws?
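
A side note on the Broadwell L4 mentioned a few replies up: a cache that large shows up clearly in a pointer-chase microbenchmark, where average access time plateaus at each cache level and jumps at each capacity boundary; on an i7-5775C you would expect an extra plateau between L3 size and roughly 128 MB. A rough, illustrative C sketch (POSIX timers, compile with gcc -O2; the sizes and step count are arbitrary choices):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Average time per dependent load over a working set of n pointers.
   Sattolo's shuffle turns the identity array into one random cycle,
   which defeats the hardware prefetcher. */
static double ns_per_access(size_t n) {
    size_t *next = malloc(n * sizeof *next);
    if (!next) return -1.0;
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {        /* Sattolo: one big cycle */
        size_t j = rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    size_t idx = 0;
    const size_t steps = 1u << 24;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t s = 0; s < steps; s++) idx = next[idx];  /* serial loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile size_t sink = idx; (void)sink;     /* keep the chase alive */
    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;
}

int main(void) {
    /* Sweep working sets from 256 KB to 512 MB; watch for plateaus. */
    for (size_t bytes = 256u << 10; bytes <= 512u << 20; bytes *= 2)
        printf("%7zu KB: %6.1f ns/access\n", bytes >> 10,
               ns_per_access(bytes / sizeof(size_t)));
    return 0;
}
```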

  • @beyondearth6418 · 1 month ago +9

    Time for AMD to shine!

    • @fastedwarrior7353 · 1 month ago

      Nope, time for Apple to shine even brighter!!!!!! At this rate AMD and Intel are both done for!!!!!!

    • @iLegionaire3755 · 1 month ago +1

      @fastedwarrior7353 AMD is fine; it's Intel that is on the ropes.

  • @bugleboop · 1 month ago +10

    After the little news about the Nova Lake socket size, I'm more inclined to go AMD, as I don't want to pay £500 for an Arrow Lake/Raptor Lake refresh only for Nova Lake to move to a completely new socket.

    • @lucasrem · 1 month ago

      @bugleboop Why do you care about the socket? Is it bottlenecking the GPU? Then upgrade the CPU!

    • @bugleboop · 1 month ago +2

      @lucasrem The reason isn't bottlenecking, it's future-proofing. If LGA 1851 lasts only a year while AM5 has another 3 years, then it makes more sense to go AMD, as I can upgrade the CPU in 3 to 5 years to last me another 3 to 5 years.

  • @romanpul · 1 month ago +2

    On the matter of the IO die holding back Zen 5: if I remember correctly from Wendell's first-impressions video on it, he tweaked the Infinity Fabric and memory clocks on his 9700 and got some pretty good uplifts from that in gaming.

  • @Dudewitbow · 1 month ago +2

    In a niche gamer sense, I think an implementation of triple-channel RAM for iGPU-based systems would be more exciting than gaining more cores.
    More cores only matter when the main consoles in development use the same thread count (e.g. 8 threads in the PS4/Xbox One era, 16 threads in the PS5/Xbox Series era), so developers have an incentive to thread more.

    • @nostrum6410 · 1 month ago

      I would think these high-end CPUs have to be memory-starved, especially if you are stuck using DDR5-6000. Triple-channel memory seems a reasonable solution.
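
Back-of-the-envelope numbers for that suggestion, using the standard DDR arithmetic of 8 bytes per transfer per 64-bit channel ("channel" here meaning the full 64-bit DIMM interface, ignoring DDR5's split into 2x32-bit subchannels), with DDR5-6000 since the comment above mentions it:

\[
6000\,\tfrac{\mathrm{MT}}{\mathrm{s}} \times 8\,\mathrm{B} = 48\,\tfrac{\mathrm{GB}}{\mathrm{s}}\ \text{per channel}
\;\Rightarrow\;
\text{dual} \approx 96\,\tfrac{\mathrm{GB}}{\mathrm{s}},\qquad
\text{triple} \approx 144\,\tfrac{\mathrm{GB}}{\mathrm{s}}\ (+50\%).
\]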

  • @RyordanPanter · 1 month ago +1

    Let your guest speak!!!

  • @oappi4686 · 1 month ago +4

    I still think Intel went into the GPU business the wrong way. Their target should have been one dGPU somewhere between the 3060 and 3050, made extremely cheap. For that generation they should have focused on making the drivers work with a minimal feature set, but work in as many games as possible. Kinda like: "we know we cannot compete and our drivers are lacking, but you get a pretty good card once we get the hang of it." Then start building on that foundation.
    The biggest reason I did not go for an Intel GPU was that the drivers were bad to the point where a small discount didn't really matter. They shouldn't have gone in guns blazing, but I guess Intel management didn't really understand how bad their iGPU drivers were and how much foundational building they actually needed. Instead they went with "we already have huge market share in iGPU, let's slap that thing on a PCIe card and make it bigger." The other way to go about it would have been to make the iGPU drivers good first and then break into the dGPU market. Now they've just wasted a huge amount of cash and have egg on their face.

    • @Hugh_I · 1 month ago

      To be fair, they kinda tried that approach with their DG1 GPU. Alchemist is technically their 2nd gen, which they probably expected would be in a better state thanks to a first low-end run; that's probably why they had to delay the actual release as long as they did, because the drivers weren't ready. The time they had with DG1 was obviously not nearly enough, and they went way too fast into typical "we're Intel, of course we'll be #1" mode, agreed.

  • @arkama67 · 1 month ago +1

    Thank you for the Adamantine question! I had been wondering about that since last year.

  • @SitWithAnkit · 1 month ago +6

    The A770 is great price-to-performance for productivity tasks. If Battlemage can provide similar price and performance without driver issues in gaming, it will sell a lot.

  • @JoJoDramo-ih7qk · 1 month ago +3

    AMD will fork the Zen lineup into X3D for gamers and Zen 5c for servers and laptops. The focus on battery was because of servers, and they knew Intel couldn't do it. Now AMD has servers, consoles, laptops, Steam Deck clones, and PC. Jesus. It's a great strategy.

    • @damara2268 · 1 month ago +1

      Intel is dead in 5 years... bad for us, because that means AMD will jack up prices.

    • @kevinerbs2778 · 1 month ago

      @damara2268 Neither company can die, because of the cross-licensing agreement they have with each other: one for x86 for Intel, another for 64-bit for AMD.

    • @andersjjensen · 1 month ago

      @damara2268 Need I remind you that AMD was declared dead for a decade in a row? They had just sold their fabs for $4 billion but were still $4 billion in debt when Zen launched. Intel can still take one hell of a pounding without croaking.

    • @damara2268 · 1 month ago +1

      @andersjjensen The problem is AMD had plans for the future. Intel has nothing, only marketing promises about new products which in the end turn out to be no better than last gen.

  • @coolcat23 · 1 month ago

    It was great hearing from someone with actual insider knowledge. I just wish we heard more of him and fewer leading questions and opinions and ramblings from the interviewer.

  • @zodwraith5745 · 1 month ago +31

    90 minutes of Tom telling an Intel engineer how much Intel sucks. What was this supposed to accomplish exactly? 😆

    • @jeremymerry7967 · 1 month ago +8

      You must be watching a different podcast.

    • @CyberneticArgumentCreator · 1 month ago

      @jeremymerry7967 At least give that sack a wash before you gargle it in your mouth like that, dude.

    • @zodwraith5745 · 1 month ago +4

      @jeremymerry7967 Please point out *ONE* place he said a positive thing about Intel over the entire 90 minutes. I'll wait... You must be an AMD fanboy.

  • @ichemnutcracker · 1 month ago +1

    Memory bandwidth, controller latency, and core-to-core/chiplet-to-chiplet transfer speeds are the things that need to improve the most in the coming years if higher-core-count CPUs are going to be a real value proposition for the home user. There really isn't much reason to go beyond 8 performance cores until those issues are addressed.
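
The saturation effect this comment describes is easy to demonstrate with a STREAM-style triad, which streams three large arrays across the memory bus. A minimal OpenMP sketch (illustrative only, not the official STREAM benchmark; compile with gcc -O2 -fopenmp): on a dual-channel desktop the reported GB/s typically stops climbing after a few threads, which is exactly the bandwidth wall that makes extra cores a poor value.

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1 << 26)   /* 64M doubles per array, ~512 MB each: far too big for cache */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Test thread counts in powers of two up to the machine's maximum. */
    for (int t = 1; t <= omp_get_max_threads(); t *= 2) {
        omp_set_num_threads(t);
        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];   /* triad: two reads, one write */
        double dt = omp_get_wtime() - t0;
        /* Three arrays cross the memory bus (ignoring write-allocate traffic). */
        double gbs = 3.0 * N * sizeof(double) / dt / 1e9;
        printf("%2d threads: %6.1f GB/s\n", t, gbs);
    }
    free(a); free(b); free(c);
    return 0;
}
```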

  • @corey2232 · 1 month ago +29

    "Ummmm, ahhh, uh aaaahh, um and uhhhh, yeah uhhhh omm and yeah"

    • @james...cardinal · 1 month ago

      The editing really added some punch to what you were saying... good job.

    • @corey2232 · 1 month ago

      @james...cardinal Thank you. I didn't want autocorrect to change the "ummms" and "ahhhs" to actual words.

    • @Bassalicious · 1 month ago +15

      He's an engineer. Cut him some slack; it's not his job to be good at public speaking.

    • @corey2232 · 1 month ago +3

      @Bassalicious This isn't public speaking. It's about as anonymous as you can get without voice distortion. But you're right, he's an engineer.
      It was just SO distracting... I actually said "OH MY GOD!" out loud while driving because it was all I could hear. Sorry, I was just frustrated. Apologies to our engineer friend.

    • @jamiehalford1345 · 1 month ago +12

      It's hilarious that Google offers to translate this comment to English.

  • @markkoops2611 · 27 days ago +2

    What we need is for AMD and Intel to get together on x64 optimisation and decide that x64 isn't done until Nvidia won't run.

  • @MazdaChris · 1 month ago +1

    The increased focus on gaming performance in the DIY space has always seemed like such a strange thing to me. When you see benchmarks showing x% uplift over whatever other CPU, it's in a benchmark specifically designed to demonstrate the peak performance of the CPU without being GPU-limited (as far as possible), which means running older titles at low resolution with lower quality settings on the highest-end GPU available. It's not a realistic representation of how people play games, and reviewers acknowledge this. But the reality is that outside of some very specific edge cases, as long as you're GPU-limited with a decent amount of CPU headroom, the CPU doesn't make much difference to the overall performance of the game.
    If you're CPU-bottlenecked while playing a AAA title, chances are you either under-specced the CPU, over-specced the GPU, or under-specced your monitor.

  • @arrdubu · 1 month ago +3

    Intel's "We don't make keychains" went the same way as Google's "Don't be evil".

  • @JukkaX · 1 month ago +1

    A 5-minute "best of" video of this would be good.

  • @gandralf · 1 month ago

    Lunar Lake was a stunning success. I was surprised by AMD's answer to Qualcomm, and even more surprised by Intel's answer.

  • @TheGuruStud · 1 month ago +6

    All you have to do is spend 30 seconds in the BIOS for +15% Zen 5 gaming performance. The best overclockers have tutorials.

  • @Freudian.Kickflip · 1 month ago +11

    Really interesting. Let's hope Intel keeps a solid position in the game, for a healthy market.

    • @danielkenz1 · 1 month ago +1

      nah, fk intel

    • @auritro3903 · 1 month ago +1

      @danielkenz1 'fk intel' means you don't want competition?

    • @danielkenz1 · 1 month ago +1

      @auritro3903 Intel plays dirty games; can't support them anymore.

    • @Freudian.Kickflip · 1 month ago

      And you think the others would not, once they control the market?

    • @jasonvors1922 · 1 month ago

      @danielkenz1 Kinda like the Ryzen 9000 series being a flopped launch from the get-go. If you can't have the software up to snuff, don't release it.

  • @TheKazragore · 1 month ago +2

    It's such a shame AXG went the way it did, because prior to that turning into a money pit, Intel had great products in networking and storage, and now even both of those have been jettisoned because they needed the cash on hand from selling those businesses.

  • @2ndtlmining · 1 month ago

    Kudos for coming on during a tough time for Intel.

  • @ArouzedLamp · 1 month ago +4

    This is a big one folks! Truly one for the books.

  • @remmyredd · 1 month ago

    You guys need to stop this bullcrap

  • @geoffreystraw5268 · 1 month ago +2

    Definitely thinking about the Zen 5 + new IO die idea. Very cool.

  • @JamesMCrutchley · 1 month ago +1

    I'm pretty sure Nvidia has realized that you can charge the same rate for performance from one generation to the next, so a 5 percent increase in performance means a 5 percent increase in cost. They can charge whatever they want; they have a captive audience and no real competition.

  • @angeluorteganieves2539 · 1 month ago +3

    I think inflation has made people forget that $20 is the new $5.

  • @zahirkhan778 · 1 month ago +23

    This is just mostly rambling. Nothing concrete was shared.

    • @CyberneticArgumentCreator · 1 month ago +4

      First time on this channel?

    • @sgfan-jj1kf · 1 month ago +3

      MLID is a joke; just see what people say about him on Twitter/X and Reddit. Still, it's fun to watch what cooked-up things he brings up in each video. 😁

  • @AugmentedGravity · 1 month ago +1

    Intel Core Ultra speaking here.

  • @singular9 · 1 month ago

    The solution for Intel is to focus Battlemage on handhelds and laptops.
    If they can deliver better performance per watt and a reasonable price point for next-gen handhelds, it would be kick-ass.
    Intel really needs to analyze the history of GPU-making at both Nvidia and AMD and why they are in the positions they are in.
    Nvidia has notoriously been big-die, big-GPU focused. They "cut down", and that is how they have done it for a long, long time. If you literally examine the dies and the microarchitecture, you will see how scaling from top to bottom is quite hard for Nvidia; this is why there is no 4050 anymore, and why the 3050 was quite hard to achieve at a price point that was palatable for consumers. They had a few good archs that did scale down quite well, but that was at a time when the top-end die wasn't pulling more than 300 W max, which means the floor was also lower. The 4090 is a fantastic example of Nvidia just embracing its large-die engineering prowess, where the difference between the 4090 and the 4080 is so enormous that it's almost like two different generations.
    Then look at AMD and how they have repeatedly tried to go high-end. Ever since ATI and their engineers, they have been much better at designing a fundamentally sound microarchitecture in terms of perf/watt, but scaling it UP has always been very difficult, simply because of where their experience and expertise lie. There is a fundamental reason why Ryzen APUs are so strong at the very low end and in handhelds, and it's not just because AMD is "good at it". The fundamentals of the first Navi chips were always a medium die that could be cut in half, with the smaller die staying quite close to the larger one, simply because their design philosophy has always worked this way. Each type of design principle has a certain "sweet spot".
    Alchemist did launch behind the "competition", but Battlemage should have been the moment Intel really focused on what they are targeting. If they were targeting the high end, they should have hired the people who are good at it. Instead they had Raja, whose track record has always been making the most of an inferior node in a price-competitive market segment; think RX 480. He doesn't seem to focus on "efficiency" so much as on squeezing the best performance out of inferior fundamentals. This is NOT the way Intel wants to go, and do you seriously think someone who has been doing the same thing for a decade would somehow magically figure out how Nvidia does things? No.
    Thus, in conclusion, Intel needs to stop being Microsoft, chasing the competition, and instead be first or early to market in new segments such as handhelds, which will be HUGE and in a way already are. They can always have a team focused on scaling this concept up to as high a level as they can, and then simply price it correctly, which is how AMD did things. If Intel wants to hit the high end out of the gate even with its C (3rd) generation, I don't see how they get it done.

  • @chriskaradimos9394
    @chriskaradimos9394 Місяць тому +2

    great video thanks

  • @DeliciousFood69420
    @DeliciousFood69420 Місяць тому +1

Intel is just going to have to suck it up and go through the hard times. The days when everything was easy because they were first are long gone. Their competition has balls: they went through their hard times, suffered, sacrificed, and crawled for years before they could walk. I really hope Intel doesn't fold and instead focuses, works harder than their competition, and fights back, not only for themselves but for us as well.

  • @HablaConOwens
    @HablaConOwens Місяць тому +13

I demand a minimum of 10 minutes dedicated to the PS6 topic at least every month.
You are in compliance.

  • @thepunish3r735
    @thepunish3r735 Місяць тому +2

I remember Microsoft making fun of Apple and their new phone launch. Nobody knows the future, and people acting like they do is cringe to me.

  • @ChristianHowell
    @ChristianHowell Місяць тому

With what happened with 24H2 on Win11, the bigger problem was optimization, which has always been a problem... Between the new AGESA and the updates to Win11, Zen 4/5 are getting 20-30% perf increases... And since Zen 5 has more IPC, it's still faster, mostly...
Everyone needs to move to the new Win11 and AGESA for testing... It shouldn't hurt Intel...

  • @ArtofServer
    @ArtofServer Місяць тому

When a large company has great ideas but can't execute, there's a serious issue with management. Why did they hire Koduri after his failure at AMD? How did they piss off Jim Keller? The people who hired Koduri or pissed off Keller need to be fired, along with any other incompetent executives.

  • @halrichard1969
    @halrichard1969 Місяць тому +13

Interesting to listen to a guy with a genius-level IQ struggle to communicate with us regulars. :D

    • @JohnWalsh2019
      @JohnWalsh2019 Місяць тому +1

      How do you know he's a genius? He's likely above average though.

  • @novantha1
    @novantha1 Місяць тому +2

I'm not sure that Intel has the right approach to capitalizing on the AI inference market. They currently have some of the best libraries and integrations for CPU optimizations, but they're also focusing heavily on integrating NPUs. One of the huge issues with NPUs is that on a CPU platform you have a limited amount of memory bandwidth, so for low-context Transformers you end up in a situation where the NPU doesn't really matter much, because you're limited by memory bandwidth.
In contrast, where an Intel or AMD platform maxes out around ~100GB/s of memory bandwidth, you can hit anywhere between 200 and 800GB/s on Apple platforms (albeit at a price high enough that it might instead justify a used Zen 4 server with 400GB/s of bandwidth).
I personally think that rather than NPUs (well, okay, they make sense on laptops), the right approach is to find some way of bolting together two CPUs on a single package (or a single motherboard, even; I'd even take a special networking cable), such that you can run them in tensor-parallel operation to merge the bandwidth of the two chips (having two full x16 PCIe slots would be a nice bonus!). Even if it was goofy, or buggy at first, or looked stupid, whatever; it's the only approach that makes sense to me.
AI is inherently a parallel operation, but we're seeing Intel and AMD design CPUs as though it's a serial one that you have to get perfect in a single chip.
This is the only approach that I think might make Intel CPUs interesting in an AI inference (or even fine-tuning) context, and they need to just get a random group of optimistic engineers together in a skunkworks project and get something like that out now. Just use whatever you have available off the shelf and use it creatively.
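A back-of-envelope roofline sketch of the bandwidth argument (model size, precision, and bandwidth figures are assumptions for illustration; the ~100 and 800 GB/s points echo the numbers above): during decoding, a transformer must stream essentially all of its weights once per token, so tokens/sec is capped by bandwidth divided by model size, no matter how much NPU compute sits next to it.

```python
# Rough upper bound on LLM decode speed when weight streaming dominates.
# Model size, precision, and bandwidths are illustrative assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, params_billions: float,
                       bytes_per_param: float = 2.0) -> float:
    """Each generated token reads all weights once from memory."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# A hypothetical 7B-parameter model in fp16 (~14 GB of weights):
for bw in (100, 400, 800):  # desktop platform vs mid/high Apple-class bandwidth
    print(f"{bw} GB/s -> ~{max_tokens_per_sec(bw, 7):.1f} tokens/s ceiling")
```

Merging two ~100GB/s sockets with tensor parallelism would, in this simple model, roughly double the ceiling, which is the commenter's point.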

  • @NoSpamForYou
    @NoSpamForYou Місяць тому +18

Almost no one wants a 128-bit GPU. 192-bit is already budget.

    • @Slyons89
      @Slyons89 Місяць тому

Bit width is only half the equation; if the memory is fast enough, it doesn't matter that the bus is narrower. Not that a 128-bit card would be a performance champ by any means, but in the sub-$300 price range it could still be reasonable.

    • @kevinerbs2778
      @kevinerbs2778 Місяць тому

@@Slyons89 That's only true if you have something like Infinity Cache to overcome that problem. Large L2 caches won't help it.

    • @StephenMcGregor1986
      @StephenMcGregor1986 Місяць тому +1

      I can't speak for everyone but personally I want bigger numbers for speed and bandwidth and smaller numbers for latency and heat.

    • @FirestormX9
      @FirestormX9 Місяць тому

@@StephenMcGregor1986 bigger number bigger better

    • @Sami-rp1xk
      @Sami-rp1xk Місяць тому

@@Slyons89 Don't forget that the bus width also sets the VRAM amount you can go with: a 128-bit card could at most have 8GB of VRAM (unless they clamshell it, but that would be expensive), and tbh 8GB is not acceptable anymore for $250 to $300 cards.
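A quick sketch of the arithmetic behind both replies (the data rates and module density are generic GDDR6 assumptions, not any particular card's spec): bandwidth is bus width times per-pin data rate, so faster memory can offset a narrower bus, while capacity is pinned to the number of 32-bit channels unless you clamshell.

```python
# Back-of-envelope GDDR6 math; all figures are illustrative assumptions.

def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps   # bits -> bytes, times per-pin rate

def capacity_gb(bus_bits: int, module_gb: int = 2, clamshell: bool = False) -> int:
    channels = bus_bits // 32              # one 32-bit channel per GDDR6 module
    return channels * module_gb * (2 if clamshell else 1)

print(bandwidth_gb_s(128, 20))             # 320 GB/s: fast memory on a narrow bus...
print(bandwidth_gb_s(192, 16))             # ...approaches 384 GB/s of a slower 192-bit bus
print(capacity_gb(128))                    # 8 GB with common 2 GB modules
print(capacity_gb(128, clamshell=True))    # 16 GB clamshelled, at extra cost
```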

  • @MdMonsurulHuda
    @MdMonsurulHuda Місяць тому +5

This guy doesn't let the guests talk; he wants everyone to say yes to his opinions and then move on to his next opinion. Why do people come on his podcast, I wonder. Another thing: at Intel, nobody knows everything about what's going on. Even within a product team nobody knows the whole story; the work is very segmented and compartmentalized. If this guy really works at Intel, he shouldn't be coming on this guy's podcast. And I have been warning about Intel's problems for the last 5 years.

  • @fepethepenguin8287
    @fepethepenguin8287 Місяць тому +8

    "Uh"
    Loved the chat, nice to hear from an insider

    • @jeevejavari8461
      @jeevejavari8461 Місяць тому +2

One doesn't have to be eloquent on a podcast, but this anonymous engineer seems sort of... uhh, not insightful? Is it just me? I'm not saying he's a fake, but Tom has had sharper guests on.

    • @FirestormX9
      @FirestormX9 Місяць тому +2

@@jeevejavari8461 Maybe he was making sure, with each sentence, that he didn't accidentally say something that could get used against him. 🤷🏻‍♂️

  • @JerryFlowersIII
    @JerryFlowersIII Місяць тому +3

Honestly, I would like an Intel GPU.
I like their goals and their openness, and the cards are beautiful.
The thing is, I wouldn't want it in my main PC; I'd like to have it in a smaller secondary PC, for the living room perhaps.
I hope they don't go away. It's an uphill battle to catch up, which means they are hungry to do something cool.
They did improve the Arc cards a lot post-launch, and I'm looking forward to seeing how they learned from it and put those lessons into Battlemage.

  • @fleurdewin7958
    @fleurdewin7958 Місяць тому +1

One of the reasons people buy AMD CPUs is the legendary longevity of the AM4 platform. Intel should learn this from AMD: good motherboards these days are blardy expensive, and people want to keep them longer to get back the platform investment they made.

    • @nostrum6410
      @nostrum6410 Місяць тому +1

That isn't a big deal for most people. If you are spending the cash on a new CPU, why not benefit from new chipset features as well?

  • @jrherita
    @jrherita Місяць тому

41:00 - Adamantine is cool, but Intel was executing so poorly for so long that it seems reasonable to cut a high-end feature to focus on basic execution.

  • @magfal
    @magfal Місяць тому +2

If Intel did a GPU that could run 6 or 8 monitors, with drivers stable enough to run as an eGPU alongside an iGPU from another generation, I'd buy one.

    • @magfal
      @magfal Місяць тому

I'd even pay quite a bit for a very large VRAM configuration. It doesn't have to be fast for my workloads, but being capable of loading a serious chunk of data into memory would be nice.
Approximately RTX 3070 speed and a large quantity of relatively slow memory would be perfect.
My 4090 can handle gaming and high-intensity workloads.

  • @jonahhekmatyar
    @jonahhekmatyar Місяць тому

One thing I find interesting is that Intel only saw a 5-10% performance increase from extra cache, but like the guest was saying, they didn't have a robust team testing performance. I wonder if Intel could have found double-digit performance gains from their cache in games, like AMD has.

    • @MM-gd3be
      @MM-gd3be Місяць тому +2

I believe the gain from having a large cache is lower on Intel CPUs because they have the memory controller on the CPU die, so accessing memory carries less of a penalty on Intel than on AMD. Having the memory controller on an IO die increases latency when accessing memory; that's probably why AMD CPUs get huge gains from a large L3 cache.
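A toy average-memory-access-time (AMAT) model illustrates the point (all latencies and hit rates below are made-up illustrative numbers, not measurements of any real chip): the same cache improvement buys less when misses were cheap to begin with.

```python
# Minimal AMAT sketch: amat = hit_time + miss_rate * miss_penalty.
# Latencies and hit rates are illustrative assumptions, not measurements.

def amat_ns(hit_ns: float, hit_rate: float, mem_penalty_ns: float) -> float:
    return hit_ns + (1.0 - hit_rate) * mem_penalty_ns

# Same cache upgrade (L3 hit rate 0.80 -> 0.90) under two memory penalties:
for label, penalty in (("on-die memory controller", 70.0),
                       ("separate IO die (higher penalty)", 90.0)):
    before = amat_ns(10.0, 0.80, penalty)
    after = amat_ns(10.0, 0.90, penalty)
    print(f"{label}: {before:.0f} ns -> {after:.0f} ns "
          f"({(before - after) / before:.0%} faster)")
```

The design with the higher miss penalty gets the bigger relative win from the same extra cache, which is the mechanism the comment describes.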

  • @theworddoner
    @theworddoner Місяць тому +3

The problem with prioritizing inference devices is that yesterday's training GPUs are today's inference devices.
I could fine-tune LLMs using my 3090, and I have. But if I want serious training done, I'd rent the latest and greatest for faster compute. That wasn't the case when the 3090 first launched.
At some point these training devices can't compete anymore with newer offerings and will mainly be used for inference.
Inference itself will also become more computationally expensive as we use chain of thought, etc.

  • @PsychomikeIV
    @PsychomikeIV Місяць тому

    Good video. Good guest

  • @jondonnelly3
    @jondonnelly3 Місяць тому

    When it comes to the Ring, gonna ease it in slowly but after a while it will come good.

  • @tiavor
    @tiavor Місяць тому +2

If it were only a 10% win... but AMD still uses only half the energy for the same performance.

  • @johnphamlore8073
    @johnphamlore8073 Місяць тому

I don't get it. Maybe 3 years ago I said on this channel that Intel was in incredible trouble due to not being able to get ASML's EUV lithography machines working, and that was hardly an original idea. It was explained by, I think, Charlie Demerjian of SemiAccurate that, in contrast to previous lithography, Intel could never hope to repeat the multi-patterning tricks that let it extend its nodes the previous decade. So basically, once Intel fell as much as a decade behind TSMC, they were totally screwed with no path to ever catch up. Intel lost what, $7 billion on its fabs in 2023? How is Intel supposed to ever produce an efficient design if they are going to have to keep switching fabs over the next few years, if they ever get theirs going? And now Intel does not have the financial clout to be a priority for TSMC's fabs. Another thing I mentioned from the start 3 years ago is that Intel's corporate culture of stack ranking may have doomed it a decade ago, because in a stack-ranking system there is no path for anyone good to stay at the company and get the ASML machines working. It's not possible if you are being stack-ranked every year: you are going to land in the bottom 10% if you stop working on the immediately profitable products to invest in next-generation technologies.

  • @arrdubu
    @arrdubu Місяць тому +1

    Make Intel Great Again, Please.

  • @firerx
    @firerx Місяць тому +1

Remember, Intel came back from the single-edge MMX CPU mess. I expect them to come back from this.

  • @mikelay5360
    @mikelay5360 Місяць тому +12

An Intel engineer roasting Intel speaks volumes about their motivation; he's not even afraid enough to mask his voice. Pat is truly a garbage CEO. Maybe Gordon Moore himself could have saved Intel, but unfortunately, RIP. Intel, this is the lowest of lows. Disappointed as an Intel fan, but still buying that 285K though!

    • @TheShmrsh
      @TheShmrsh Місяць тому +2

      😂

    • @Razzbow
      @Razzbow Місяць тому +4

Jim Keller would be a good CEO.

    • @francishallare204
      @francishallare204 Місяць тому +12

Intel trying to do everything at once is what's really hurting them. Buying companies like Mobileye, Habana, Altera, and other premature startups was a waste of money that could have been spent on hiring more fab engineers.

    • @mikelay5360
      @mikelay5360 Місяць тому +1

      @@Razzbow Better than great.

    • @mikelay5360
      @mikelay5360 Місяць тому +1

@@francishallare204 They should have retained Jim Keller.

  • @aladdin8623
    @aladdin8623 Місяць тому +1

Intel being allegedly proud of AMD and announcing an x86 ecosystem advisory group against threats like Arm must be a joke. I cannot count the times Intel screwed AMD with anti-competitive crimes.

  • @efx245precor3
    @efx245precor3 Місяць тому

5:15 He's talking about profits and margins more than performance and market dominance.

  • @DreadyBearBoi
    @DreadyBearBoi Місяць тому

Also, for 8+32, I'm sure the ring bus failing was a consideration, but E-cores also share L3 cache with the P-cores. At the point of having that many E-cores without a separate L3 dedicated to them, as Lunar Lake has, there would be tons of contention. Finally, as I've kind of laid out in my other responses, E-cores don't do as well as hyper-threaded P-cores in many fully loaded scenarios, at least in terms of efficiency and power, because of how they are designed. I just don't think that, for what you would be using that many cores for, you'd even want them to be E-cores. Intel probably saw it as an even more niche set of consumers compared to the current Ultra 9 offering and didn't think the extra effort to get that to work was worth it, a sentiment I kind of echo; it would be a very expensive CPU.

  • @iffy_too4289
    @iffy_too4289 Місяць тому

    My cat loved this episode.🐱🐱

  • @memadmax69
    @memadmax69 Місяць тому +5

    "Permission by his supervisor"
    I stopped the video right there.

  • @AugmentedGravity
    @AugmentedGravity Місяць тому +1

I only hear Error Lake

  • @Waldoe16
    @Waldoe16 Місяць тому

Tom, I think you should also have asked about IFS, and whether using TSMC is bad for trust in their foundries. Intel is pretty much behaving like a fabless company. Still, nice interview!

  • @kemaldemirel1714
    @kemaldemirel1714 Місяць тому

What I got from this: "Intel is cooked, and I don't think there will be 100,000 open positions at AMD and NVIDIA. Hopefully I will be one of those who finds a job."

  • @markcentral
    @markcentral Місяць тому +9

If Intel cared about competing in the gaming segment, it'd release an enthusiast 12P/0E CPU.

  • @mattpulliam4494
    @mattpulliam4494 Місяць тому +1

Intel needs to bring back the Blue Men and Der Kommissar.
If you can't beat 'em, then out-promote 'em.

  • @edge8616
    @edge8616 Місяць тому

Yes, Intel should focus on building strong APUs with Arc and enhanced XeSS support, and add 1-2 competitive 3xx and 5xx series discrete models for the $100-300 market segment with good bang per buck. But of course, most important are good and efficient CPUs for mobile, desktop, and especially HPC. It is going to be interesting to see how well they compete once 18A and 14A come into series production. With great efficiency, an Ultra 9 could also be offered with 10 P-cores while keeping the ring bus. That, together with 32 Atom cores, should be strong against the Zen 7 (+) Halo SKU. It would be great if that were possible within something like a 200-230W PL2, and maybe they could also bring a "gaming mode" with "only" a 125-150W PL2 and a maximum of 16 E-cores active. Adding some 3D cache SKU on top, maybe like a better KS, would clearly help here, I guess. Let's not forget the i7-5775C did a good job back in the day with 128MB of L4 eDRAM cache. With slightly lower clock rates, such a CPU could run at, for example, a maximum of 1.200V. Maybe 8C+16c with 3D V-Cache would be superior when keeping the ring bus, though.

  • @micbanand
    @micbanand Місяць тому

The GPU we really need is one with zero usage at standby and very, very little idle usage, with hardware support for ALL the codecs, so it can handle decent gaming in combination with a media server. 4050-level strength should be enough.

  • @RobinGething
    @RobinGething Місяць тому

If Intel fabricated a CPU/GPU/APU chiplet design with a largish footprint, a large L3 for the CPU, a separate GPU/APU cache, and all the power savings and massive latency savings that come with ditching discrete graphics cards, they would produce the future and they would lead it. This is the decade where we can't just keep pushing more power into monster cards that overheat and burn out the power delivery systems.

    • @lilnapkin462
      @lilnapkin462 Місяць тому

      You mean if Intel made Strix Halo?
They are way behind AMD when it comes to interconnects and heterogeneous systems architecture. And so is Nvidia.

  • @Diamond_Hanz
    @Diamond_Hanz Місяць тому

My favorite WCCF Tech drama comment reader