I don't have high hopes, but what are the chances of seeing AVX-512 again? I still can't believe they gave me cacheline-sized registers to play around with just to take them away again :(
@@catchnkill Not a very satisfying answer, but it looks like OpenSSL has a bunch of AVX-512 paths and is itself included in a lot of software. I'm also pretty sure I've seen TensorFlow report once that it used AVX-512 when a driver update broke GPU support. It would make sense for programs like Photoshop or Autodesk Fusion to use AVX-512 when available, but I have no proof of that.
@@LarsDonner That is exactly why Intel dropped AVX-512 in their consumer CPUs. You are telling me that it is for less than 0.0001% of the market. If they have a need, they can use a GPU to do the job, or even a specialized FPGA accelerator for the purpose. Just don't add die space for a function that 99.9999% of users never use. AVX-512 is not backward compatible with the earlier 256-bit AVX, and that is a killer for this technology. It was Intel's stupidity to include AVX-512 in consumer CPUs in the first place. That precious die area spent on AVX-512 execution could be used for other functions, say a larger first-level cache. To save power in laptops and notebooks, Intel required manufacturers to turn off AVX-512 by default in firmware. It is a defeat for Intel.
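Side note for anyone wanting to check what their own chip exposes: here is a minimal sketch (Linux-only, assuming the usual /proc/cpuinfo flag names) of the kind of runtime feature probe libraries like OpenSSL perform before picking a code path.

```python
# Minimal sketch: read the feature flags the Linux kernel reports for the CPU.
# Flag names ("avx512f" etc.) follow /proc/cpuinfo conventions; this is how
# you'd gate an AVX-512 code path at runtime rather than assuming the ISA.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx_vnni", "avx512f", "avx512bw"):
    print(f"{feature:10s} {'present' if feature in flags else 'absent'}")
```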
I still wish Intel would finally give us some more small-form-factor and media-station-oriented products. Now with the LPE-cores, I would like to see something like 4+8+4 with 96/128 EUs, but for DESKTOP for once. Why limit desktop to 24 EUs?
Of course things/tech get better with time... but it's truly impressive what engineers are designing, testing and producing at the bleeding edge of what's currently possible. I'm not an engineer, but I sometimes wish I'd had the drive and motive to follow through with my half-baked plans to study ME. Kinda funny... the last 2 of 3 gfs were engineering students, one EE and one ME. Really fascinating stuff. The workload was utterly insane tho, like 3-4 hours a day in homework/study time. I called them both the weekend girlfriends, and honestly it was often a once-a-week kinda deal.

I randomly met an AMD engineer a decade or so ago out exercising and gave him crap for AMD's (at the time) really bad power efficiency compared to Intel. He had been one of the top guys at Intel (from what I recall of a brief convo a long while back) and I think was recruited to help AMD figure out their power and efficiency issues. Well, it's good to see that his team (sure there were many others) succeeded in their efforts. Smart people (especially engineers) are just something else.

Gets me so excited and engaged to talk about crazy tech stuff with the people who are actually designing it, learning the basics of what goes into designing/implementing ideas. Remember that everything that exists was once just an idea, down to the seemingly minor details. I've got a lot of decent ideas, but taking them through to production won't happen. Wish I had an endless inventor fund for trying to design/test stuff; it would for sure be what I'd do most of the time. I guess that's what engineers do all day haha. Missed that boat unfortunately...
Just talked w/ a mechanical engineer, and they just do what works, not new things, for sure. I think it depends, but the workload, and then keeping up w/ the work, is also why I didn't go into CS. Now I regret it, but it's all due to every job needing you to keep up or just fall behind.
When you're thermally/power-budget throttled, power efficient == performant; even on desktops the CPU limits its clocks according to power budgets...
Meteor Lake is the most exciting consumer chip Intel has released in a decade. New Intel 4 node, tiles (chiplets), NPU (AI engine), new CPU and GPU architectures, a focus on power efficiency, etc. I have zero doubts that Meteor Lake will easily surpass Zen 4 mobile, not just in performance but in performance per watt and features.
Funny, I don't care about CPU performance as much anymore. It's like all SoCs nowadays have more power than we need. It's all the other bits like codecs, connectivity, AI, GPU etc. that I find interesting these days. Oh, and efficiency. So glad this has AV1 10-bit encode/decode.
CPU performance is very niche. I have an i7-12700K for music production; it has amazing performance per watt (for a desktop chip) and 8P+4E is an amazing core-count sweet spot for me. Music production is 100% bottlenecked by CPU; I wouldn't be surprised if the i7-12700K itself can outperform Meteor Lake. DAWs (digital audio workstations) love monolithic Intel CPUs with a Ring Bus; they don't like chiplets or tiles very much. My i7-12700K can beat a Ryzen 9 5900X and Ryzen 7 7700X, and it has beefy floating-point speed as well, and audio loves floating point.
Meteor Lake is insane for mobile. I hope they decide to launch it on desktop as an early i5 for the new LGA1851 socket, to get mobos ready for Arrow Lake, so we don't get the same shitshow that was the AM5 mobo launch.
I think maybe you need more 30-minute green screen deadlines to adlib content like this - this was top notch! Great balance of off-the-cuff and detail focused, with all the right post-production fact checking done out of band to not mess with your flow! Keep it up!
ARC performance integrated on mobile is a BIG DEAL. Even if its just A380 performance, that would be a massive change for laptops.
It also shows how bad the discrete graphics market is.
More like a GTX 1650, or even on par with the Radeon 780M, but I'm gonna see which one has lower power draw for laptops/handhelds.
AMD already did that with their 7040 APUs though.
I want someone to make a Steam Deck/Ally competitor with it
I think there's a title in there with a meteor dinosaur pun. Something like "Intel's new meteor lake destroys their old chips like the dinosaurs they are." It's probably too wordy.
Or it could be negative because with the amount of delays for this and previous chips, by the time it's out, it's already as ancient as the dinosaurs
Meteor lake causes Raptor lake extinction
Meteor lake destroys monolithic DUV dinosaurs.
The fact that meteorlake is supposedly gonna be using half the power of raptorlake while being the same performance makes this concept very believable. I would like to see AMD compare do this kind of efficiency
This was the best technical deep dive into Core Ultra (aka Meteor Lake) on YouTube, great work Doctor Ian.
I hope Intel EUV gets better over the time to compete with TSMC.
Intel and analysts are predicting that Intel 18A will surpass TSMC in 2024. As we've seen with the A17 Pro, TSMC N3 has been a bit of a dud.
@@__aceofspades Intel would need to finish the "Intel 3" process before claiming that its 18A process will surpass TSMC's 3nm. The Apple A17 on 3nm is shipping now!!!
Yield of Intel 4 is 50% better than TSMC N3B.
Intel 4 has same density as TSMC N3B.
Intel 4 has same performance as TSMC N3B.
@@HDRPC You need to compare with N3E though.
Intel already got backside power working.
13:30 This isn't about yields; rather, there simply isn't a GPU die in the package, from the sound of it. With the display controller on the SoC die, it may still be able to drive a display, but without any sort of 3D acceleration (codecs are still on the SoC die, so there could still be some acceleration there). This is an interesting cost/power-saving move, and I'm curious how the end products will play out here.
More interesting in terms of power consumption is if a system can boot with only the SoC + IO die in the package. This would be horrible for mobile/desktop usage but would be interesting in the embedded sector depending on the power consumption. I just haven't heard of anyone asking this, much less the answer.
19:00 Curious if this NPU works in the same data formats as Intel's AMX extensions found on their larger cores. Similarly if the Xe graphics can leverage the same data structures. The advantage here would be that properly formatted data in memory could be processed by either the CPU with AMX, GPU or NPU. Needing to convert the data structures between styles takes time (latency) and additional power not spent on the raw execution of said data.
The advanced binning techniques are what I was hoping to see. This should be a good leap for mobile users, as one of the dimensions Intel bins for is raw power consumption. I do feel for the low-end desktop users (with Lunar Lake, since Meteor Lake is not coming to desktop) who will be getting seemingly the worst of every metric, with potentially some units disabled. On the flip side, this should improve the high-end desktop a bit, as it'll be easier to pair up known-good dies that support high clocks. This also likely means the death of the KF line of SKUs at the high end. The phrase "you get what you pay for" will be in full play here.
Intel have now said that Meteor Lake is coming to desktop - PC World interview.
@@peanutnutter1 Weird, since they're also saying that it is a 2024 release, which would overlap with Arrow Lake, which is getting a desktop release as well. Perhaps they're doing some segmentation with Meteor Lake at the low end and Arrow Lake for the high-end desktop, with a simultaneous release? The general rumor about Meteor Lake on desktop is that it would have been mostly a lateral move, with higher IPC but lower clocks. However, Arrow Lake was to push IPC and clocks higher, making it a clear candidate for desktop.
@@powerpower-rg7bk yeah, it doesn't make sense really and they'd need a new socket too.
@@peanutnutter1 They do have a new socket planned, LGA 1851. Just everything using it was pushed back into 2024 for Arrow Lake's desktop debut. Before this flip flop, Meteor Lake was scheduled to use LGA 1851 in late 2023. So they have the infrastructure for the desktop ready, just things are now seemingly delayed.
Oh man, you could’ve done a meetup while at Malaysia. It’d be a banger of a time. We have plenty of hardware engineers here, and I’m sure there are many fans of the channel
The NPU is cool. I haven't looked at Intel's SDK for it, but having dealt with Red Hat and Terraform dropping open-source support this year, and pivoting because of it, my only worry is that they'll turn around later and ruin its attractiveness to work on.
so many corrections edited in, Steve is coming for you!
ha! I only had the *rented* studio for 30-35 minutes left when I started recording.
So it would cost me $500 to reshoot.
@@TechTechPotato Now THAT was funny.
The definitive Ian review of a new CPU/system? 🎉🎉
Nice to see that Intel still does testing right with the per chip tests
The only tech/hardware channel where the guy knows what he's talking about.
I know this isn't a constructive comment, but damn is it satisfying seeing all of these technologies we've been hearing about for so long, all coming together like this.
looks like Intel is getting interesting again
They arent
They're.. For all the wrong reasons
It is my humble opinion that Pat Gelsinger will be the person to lead Intel back
His salary certainly shows the hopes for that, I believe he was paid more in 2022 than Lisa Su, Jensen Huang and C. C. Wei combined 😃
@ChuckNorris-lf6vo Lisa Su, Jensen Huang, and C.C. Wei are paid more in stock options. They're compensated more when their companies do better. What's up with Intel?
@@tringuyen7519 The comparison includes bonuses and stock options and such. It's basically a rule of the stock market regulator to publish the true compensation. I looked it up on a site called Simply Wall St.
Intel lost its way because the suits, bean counters, and professional MBAs took over running the company. Tech companies only thrive when they are run by engineers who understand the tech. Pat is such a person. The accountants and managers are there to support the engineers, but it ended up the other way around at Intel for many years.
nice content, love the presentation style. 👍
OK, about that singulated die sorting: the key idea there, I believe, is that it ensures only perfectly good dies get soldered onto the interposer, or that correctly binned dies get soldered together. After a tile goes on an interposer, that process can't be reversed. Better testing ensures Intel doesn't join a tile where one memory channel is crippled to a compute tile that can hit really high clocks.
Yup, KGD (known good die) testing gets harder when it's stacked
Finally.. someone is taking a serious stab at power efficiency. lately things have had a trend of just *throw more power at it* for performance :/
Just want a desktop CPU/GPU that are not 300+ watts :|
Credits for that go to Apple’s chips though.
Would never sing apples praise as a company though. @@C4rb0neum
Good job to their engineers that are not sitting there working on ways to screw over repairability
👍@@C4rb0neum
So I assume these won't have AVX-512? Does that mean that we'll have to wait for AVX10.2 to be ready for client chips to jump back on?
Correct
So Redwood Cove and Crestmont are essentially refreshes of Golden Cove and Gracemont, but on Intel 4? Is it fair to say that if Intel 4 had been ready on time, we wouldn't have seen Raptor Lake? Speaking of desktop, did Intel mention Meteor Lake desktop or a Raptor Lake refresh?
Seems that the GPU is the biggest update in terms of performance. The NPU is new. And also seems like a dedicated focus on power efficiency for the CPU cores.
Hopefully the next Lake has thunderbolt 5.
It's probably ready, but it can't reach Intel 7 clocks yet
@@heickelrrx I see you're a fellow nerd as well, hekel heheh
Yes, Arrow Lake will support thunderbolt 5. But it’s delayed. Intel delayed its TSMC 3nm (used for iGPU on Arrow Lake) order for 6 months. Think 2025, not 2024.
@@tringuyen7519 Arrow Lake isn't delayed... They literally just reiterated yesterday that it's a 2024 launch, and showed it off on 20A silicon. They even showed a working Lunar Lake demo, the successor to ARL... Intel is ahead of schedule.
Great video Ian. I love this kind of content.
Great video on the details of the new architecture!
Base die is 22nm for the active one right? The one with L4
near future is looking good, thanks for sharing!
Crazy how fast and smart these machines are getting that the chip can 'sleep' between keystrokes while typing and between 16 frames of video while playing a video.
And I still can't get over how shocking the "5 nodes in 4 years" statement is. What a gangster move.
I am very excited to see what happens in the Angstrom era with CPUs, GPUs and the new hardware accelerators for AI made available in these coming chips.
Was very skeptical too on the '5n/4y' roadmap, but they delivered 7 and 4 on time, so it's much more credible now. To me, it's that ASML and Intel worked together somewhat on the maturity of EUV, so Intel could get the EUV tools working on its new nodes much quicker. The Meteor Lake project is massively complex, so I'd say Intel is on a good track in its comeback fight.
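For context on the video-playback claim above: 16 frames of decode-ahead is roughly two thirds of a second at cinema frame rates, which is where the "666ms" figure quoted elsewhere in these comments comes from. A quick back-of-envelope check (the 24/30/60 fps rates are assumptions about the content):

```python
# How much wall time does a 16-frame decode-ahead buffer cover? The chip can
# power-gate the media engine for this long between decode bursts.
frames_buffered = 16
for fps in (24, 30, 60):
    ms = frames_buffered / fps * 1000
    print(f"{fps:2d} fps: {ms:6.1f} ms of sleep opportunity per burst")
```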
was waiting for your perspective on it! Lets see intel in the EUV space, looking forward to the Intel TSMC battles ahead!
His perspective is to parrot the specs.
@@fluphybunny930 ...And give us his perspective on them. Ian's been around a long time and I value his insight into performance and potential mis-steps in a design.
best summary of the Meteor Lake that I have seen so far...
This is so interesting. I wouldnt buy a new CPU right now as I JUST upgraded but this is cool. Rumors say Intel will go really wild next gen and I'm waiting to see what the mad scientists have cooked up again (huge IPC gain, removal of HT, massive efficiency but also reduction in clocks??)
Meteor Lake, Arrow Lake and Panther Lake will be crazy, and they're all on the same LGA1851 socket.
@@MaxIronsThird let me tell ya, PGA like on my AM4 sucks. I believe my old i7 2600 was PGA, and holy hell am I happy that this arrangement is dead by now. I personally just hope Intel figured something out other than just throwing more cores at the problem; this is what kinda breaks AMD's neck right now. V-Cache is nice for gaming and I hope Intel has something similar coming. Removing HT can be risky but worth a lot; we sacrificed a lot of power to get HT in the first place. Hope Intel doesn't fall for the more-cores-more-better trap...
@@Kiyuja Panther Lake is supposed to introduce Rentable Units as a substitute to HT.
NPU on an M.2 would be a marketing genius move. It could literally move the needle for exponential growth in bandwidth needs to the endpoint device, creating a need for higher bandwidth devices at work and home. Not to mention the software development jump to utilize this new hardware. This could be one of those immediate market disrupters. Whereas having to wait to upgrade to a new PC/laptop would be a slower adoption and Intel could cede more market to competition.
There are NPUs/TPUs of different types on M.2 cards from Google.
Exciting to see Intel glueing CPUs together
Awesome video, I think Meteor Lake when it comes to Desktop will be the point I finally update my desktop. I just wish it offered more PCIe lanes that I will lose coming from x299.
It's not coming to desktop though.
Is Intel's Core Ultra available for desktops?
Are they going to use that tile design in desktop CPUs too?
I wonder what a hypothetical "15900K with Foveros tiles" would look like :o
They might be able to create a design with a second CPU tile, for 12P / 16E + 2 E-LP on Soc (assuming they can get the scheduling to work properly with two tiles). If the power efficiency is as good as they suggest that would make a great desktop processor. The power consumption on 12th and 13th Gen desktop was out of control, but the performance was there. I would be happy with similar P and E core performance, but with more reasonable power consumption. Throw in the new GPU and NPU and it would be a winner.
Even if they just released this design as an i5 I think it would do well.
Millijoules per frame is actually a great unit of measurement; I've never heard it used anywhere else
Digital Foundry has used it for a good while now. But yeah, that doesn't change that it's a very good and simple way to compare efficiency
Actually, I think they do the inverse and compare frames per joule 😅
Feels strange how long it is since we heard of a new process node from Intel. Wonder when we will hear the plans after 18A. Presumably they must have made decent progress on it by now.
They showed 3 more process nodes after 18A at this year's Innovation event, but the first of those three should only be manufacturing-ready at the end of 2026, if we expect them to stick with a 1-year cadence like the industry norm.
@@delongzhai4887 all I have found is next, next+ and next++
thanks ian
Great job of breaking the Core Ultra down, my friend!
Why do you think the GPU is still fabricated using TSMC 5nm instead of Intel 4?
That's what Intel told us. CPU tile on Intel 4, GPU tile on TSMC N5. It's not a thought, this is all information direct from Intel.
@@TechTechPotato I was just curious about your opinion on the matter; might that be because of low current production capacity on Intel 4 (and using all that capacity to CPU tiles)?
@@kayaogz Probably a combination of lack of Intel 4 capacity and the fact that Alchemist discrete chips are already made on N6, so it was likely an easier port. I'm more curious as to why the SoC and I/O tiles are N6 instead of Intel 7.
@@scottynoes good point
@@scottynoes If I had to guess, N6 is probably better optimized for very low-power use cases. Intel 7 is probably geared for high frequency; 14th-gen RPL-R will reach 6GHz on top SKUs.
Holy tasty spuds! A Meteor Lake presentation with fine detail that is understandable. 🤙🏽
You don’t have a video of the Xeon 5th Gen.
Love what they are doing with Battlemage - I have invested in Intel's graphics and hope they continue to provide a 3rd option that will rival team red/green.
However, will the Intel GPU require the Meteor Lake CPU to work efficiently?
You can use Arc GPUs with any CPU; however, if you use Arc with a modern Intel CPU, you can pair them (Deep Link) for faster encoding and performance.
I'm less interested in discrete Arc GPUs than the idea of Arc A380 power integrated into mobile CPUs. Imagine that much power integrated running on a laptop - on battery! This would be HUGE for PC Gaming. Still not quite good enough for raytracing but RT apps would at least run. XeSS could be much more meaningful for low power situations too.
3 types of cores per socket... I wonder which task scheduler is going to handle it on launch day (I hope Microsoft is on top of this). Linux is still struggling with simple P&E: as a result you get thread migration (with all the cache migration that entails), and the CPU branch predictors just throw in the towel any time the scheduler decides to move load from one type of core to another (or, even worse, to another tile). I know ARM managed to figure it out in mobile chips, but I keep my hopes low for MS, Linux, BSD...
sooner or later they will figure it out. This is the future and not just for Intel.
At this point, just program like the CPU is a distributed system of its own... Parallel algorithms might not work well.
@@eliadbu I've had 2 terrible years with Alder Lake, so that's what I'm afraid of... They'll launch it and figure it out on my time.
@@impuls60 I have Alder Lake; the only issue I have is the heat, everything else works just fine on Win 11.
7:41
A M O G U S
thanks man i got a better understanding of meteor lake chips
I write real-time DPDK code that depends on stable timing. I sure hope this OS-controlled thread promotion stuff can be bypassed for high-performance code.
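For what it's worth, the standard mitigation today is to pin latency-critical threads so the scheduler (and any Thread Director hints) can't migrate them. A minimal sketch, Linux-only; the core IDs 0-1 are an assumption, since on a hybrid part you would first look up which logical CPUs are P-cores:

```python
import os

# Pin the current process (pid 0) to two fixed cores so the OS cannot migrate
# it between core types mid-run. For hard real-time work like DPDK polling
# threads you would combine this with kernel-level isolation (isolcpus=).
os.sched_setaffinity(0, {0, 1})
print("pinned to logical CPUs:", sorted(os.sched_getaffinity(0)))
```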
Any idea if Meteor Lake will have proper 4 DIMM sticks of DDR5 support ? That is at normal speeds, not 3600-4400 MT/s (if you're lucky).
I'm still using (writing from it) a desktop-replacement laptop, 17.3", which is 2 months shy of being 7 years old. It has 64 GB of RAM, and boy, did I use them. I want my next one to be able to have 128 GB of RAM at decent speeds, so I can use that one too for at least 5 years, preferably 10. With the current generation, if you go for 128 GB in a laptop, you'll most likely have 3600 MT/s on both 13th gen and Zen 4.
Though, I know that Meteor Lake is much more about power efficiency and stuff. I don't know if we'll have workstation-level laptops with it. I have this feeling that most likely I'll get a Framework 13 laptop with Meteor Lake, to still have something in case this one dies, which I'll also be able to use for misc stuff. And I'll probably have to get a 15th-gen Framework 16 or something like that. Sigh... so much waiting... this 6700HQ still does the job, but I'm constantly drooling over modern 8-core/16-thread CPUs...
I'd be surprised if laptops didn't stay in SOC/low power mode all the time, except for initially opening apps. Esp with media engine in SOC.
The question is how much is in the low power mode, and can you go lower. Static power is a pain
Now, what will be interesting is when Intel ports this architecture to CPUs for x86 NAS. Synology and QNAP shifted to AMD for their current generation of refreshes, but once the SoC-tile LP E-cores + CPU-tile E+P-cores scheme gets good enough support on Linux and FreeBSD, Intel CPUs will make their great comeback on the NAS and homelab scene, because those machines are at idle most of the time.
Seems like this would be Intel's M2, judging by performance. Interesting whether it's any better than the 7840HS, and of course the M2, in terms of battery life and such.
It's good Intel is trying things "new to Intel", but that sure seems way more complex than Apple's silicon design. It will be interesting to see if it actually means a Dell laptop will get anywhere close to Apple's M2, which can run full throttle for an entire work day on battery.
The thing I really want to see is whether performance tanks when running on battery. I think the biggest advantage of Macs with M-series chips is that you get almost the full performance on battery.
There is no secret for any processor-core architecture to achieve any particular characteristic; you just have to choose which one is most desired. From there, you can band-aid your way to other characteristics, but not as optimally as if they were the original target. At any given "process node" (for example, Intel 4) there can be performance, density or low-power versions. Intel, and to a degree AMD, have always targeted max performance, suitable for desktop systems. These can be band-aided to lower power. Recently, AMD did a density version of Zen 4 in the c variant. I believe one reason AMD was able to do this is that they have "synthesizable" masks? In that translating the design logic into the actual layout is done automatically. In the past, Intel did manual mask designs, which required many hundreds of people working for several months? This allows squeezing out a little more density and performance? Not sure if they still do this? It made it impractical to do a second design variation of any architecture with a different optimization goal.
@@gloriouscat-fishLover I think consumption doesn't tank on all laptops with a total consumption of
Bruh, Apple M3 MacBooks are coming out Q1 2024. Will MTL beat an M3? Intel should be more worried about AMD’s Strix Point.
@@обычныйчел-я3е On older thin-and-lights with Intel 11xxx-series chips, performance tanks a lot. We've seen decodeAudio do an mp3 in around 1 second on power, dropping to 7-15 seconds on battery. I hope this architecture does better; Apple silicon certainly does better in this regard, although the MacBook Air also seems to be thermally throttled. It will differ a lot with the workload you give it.
What will the DDR5 memory bandwidth be? That will be the limiting factor while using the NPU, because of streaming in the quantized weights of large models from RAM. Cache sizes? Will the CPU cache be shared with the NPU?
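A rough illustration of why that question matters. Assuming the memory speeds talked about for this class of part (DDR5-5600 and LPDDR5x-7467, both assumptions here) and a standard 128-bit client bus, streaming the weights of a quantized large model eats the whole bandwidth budget:

```python
# Theoretical peak bandwidth and what it implies for weight streaming.
# Speeds and the 128-bit bus are assumptions, not confirmed MTL specs.
BUS_BYTES = 128 // 8                       # 128-bit client memory bus

def bandwidth_gbs(mt_per_s):
    return mt_per_s * 1e6 * BUS_BYTES / 1e9

model_gb = 7e9 * 0.5 / 1e9                 # 7B parameters at 4-bit = 3.5 GB
for name, mts in (("DDR5-5600", 5600), ("LPDDR5x-7467", 7467)):
    bw = bandwidth_gbs(mts)
    print(f"{name:13s} {bw:5.1f} GB/s -> at most {bw / model_gb:4.1f} full weight passes/s")
```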
A new third level of compute cores will only further increase latency in thread scheduling. Not that keen on more input latency. But yeah, we'll have to see, I guess...
Very good vid, Ian. In your opinion, how is this chiplet strategy going to evolve? Many more, smaller chiplets? Chiplets on more stacked levels? Different nodes mixed together? Curious about your answer.
I don't think more chiplets - as the TechTechPotato video shows, the 'landscape' (4 chiplets, Foveros) likely remains the same, while there can be changes within individual chiplets. For example, totally new functions can be placed in the SoC die without the need for a new chiplet.
Thanks@@SeanLi-i7n indeed, it offers a lot more modularity.
I wonder if the SoC tile's media decoders and display engines are copied directly from ARC. They're on the same 6N process, correct?
Excellent video, thank you! I’ve been hesitant about hybrid core designs, but it looks like going all-in properly and supporting various use cases within the design might just make them sing. Of course, the real challenge is how operating systems interact with them - a possible interview and consultancy idea is to talk with the scheduler engineers at MS or within the Linux kernel community 🤔 The other big takeaway for me as an engineer was the testing setups, I suspect this is going to be a big area of growth within the industry as this shifts left a lot of challenges - will be interesting to see how this affects working with external fabs, will the benefits of trust in product promote such arrangements, or will the external fabs take issue with heavy shift-lefting and come up against resistance when ultimately needing to shift all the way left back to the designers?
I'm now using a Microsoft Surface 9 as my main computer and I must say the performance is stellar. Similar to Apple M cpu powered laptops. It's probably because Microsoft is leveraging Intel's EVO system standards. The future looks good for Intel if they continue their current efforts around balanced system performance.
whoa this is a cool architecture! thanks for the break down
Recent Intel laptops are too hot and too power hungry for me. Hope this one can deliver a better experience.
is this going to be purely a mobile product or will there be a desktop release also? curious to see how that newer revision of alchemist does with desktop class cooling and memory vs the cards we've currently got
This current design seems to be aimed at mobile.
For desktop, I would regard some of the effort put into power saving as overkill when connected to a power plug. And on desktop (gaming), probably 8 P-cores are the target for high-performance CPUs, as this is what more and more games are optimized for.
@@ThorDyrden Plus, on desktop it may be more meaningful to put new threads on the P-cores and move down from there, so the latency of moving threads from slow to faster cores does not affect the desktop and workstation models.
@@ThorDyrden If there's one thing Intel desktop needs it's better power efficiency. 12th and 13th gen i7 and i9 were ridiculously power hungry and hard to cool, though the performance was good. Until Intel sorts out its power and heat issues though I wouldn't consider buying them at the high end.
uArch-wise this is 100% different. Massive changes from top to bottom. At this rate, just wait for 15th-gen desktop.
@@Pushing_Pixels No doubt - the power consumption, and therefore cooling complexity, was one of the reasons I chose team red this year.
So efficiency should be a goal for the desktop CPUs too... and I would expect the new manufacturing process to be one step in that direction.
But I stand by my initial statement that some of the measures to reduce power consumption we see in this currently leaked chip are too complex to be worth it on a desktop. What is the benefit of saving 10W on a desktop, when it increases the CPU price by $100?
That will be one of the benefits of this tile concept: Intel can combine different SKUs aimed at different purposes. E.g. the LPE-cores in the SoC probably don't have much use in a desktop and make the scheduling complex.
Also consider that a lot of these new measures are aimed at improving idle and low-load power usage, where current Intel chips are not bad already - Intel loses to AMD in high-load scenarios.
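The 10W-versus-$100 trade-off above is easy to sanity-check. Under assumed usage (8 hours/day) and an assumed $0.30/kWh tariff, the payback period is over a decade:

```python
# Payback time for paying $100 extra to save 10 W, under assumed usage.
watts_saved, price_premium = 10, 100          # figures from the comment above
hours_per_day, usd_per_kwh = 8, 0.30          # assumptions
kwh_per_year = watts_saved * hours_per_day * 365 / 1000
savings = kwh_per_year * usd_per_kwh
print(f"{kwh_per_year:.1f} kWh/yr -> ${savings:.2f}/yr -> payback in {price_premium / savings:.1f} years")
```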
I am wondering if there is more information regarding Intel's plan of 5 nodes in 4 years. They gave a brief presentation and said that they are on track with their plan, but it would be very interesting to know what the status is in their fabs. It would be very surprising if they can really manage to produce their 20A nodes by 2024, as there are some rumors that TSMC might even delay their N2 HVM to 2026.
After all, switching from FinFET to GAA is an architectural change, and it would be really surprising if they can manage such a technological shift in such a short time. It is also quite puzzling whether such a fast switch is financially beneficial.
Are you using a pre-release laptop from Asus or MSI? Hoping Intel's fabs turn out to be great; would love to see competition between TSMC, Samsung, GlobalFoundries, Intel and IBM.
How many people tap the screen to see how long the video was after he said if it's only 10 minutes I screwed up?!! 😂😂
I will build a crazy PC next year for both AI and 3D GameDev
Can't wait for the new CPUs and GPUs next year....
I am worried about the NPU. I hope Intel and AMD stay under DirectML with their implementations, because I would hate for there to be a whole CUDA-vs-OpenCL fight in the CPU space.
"You need an intel chip for this fuctionality in our software."
That would get me upset.
OpenCL allows for usage in linux.
@@mahirabbas3700 If I am not mistaken, you can run CUDA on Linux as well?
My experience is with Blender, for example - where the OptiX (RTX-based) API gives you the best performance, CUDA being next - and on the AMD side you now have HIP.
Before HIP you had OpenCL, which had terrible performance.
My point being that software devs have to tailor their software to multiple specific hardware capabilities. Depending on the install base, it can take a long time for a minority's hardware to be properly supported.
I would wish, for any AI hardware accelerators, that we don't go down the fragmented rabbit hole... Hopefully it will stay under one API.
Maybe, although I'm not keen on increasing Nvidia's market dominance at the moment. Anything but CUDA for a little while, until competition is healthier @@HexerPsy
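One existing middle ground worth noting: runtimes like ONNX Runtime already present the "one API, many backends" model this thread is hoping for, with the same session falling down a provider list. A hedged sketch - provider availability depends on which onnxruntime package is installed (e.g. onnxruntime-directml for the DirectML backend), and "model.onnx" is a placeholder:

```python
import onnxruntime as ort

# The same model file runs on whichever backend is present; ORT picks the
# first usable provider in the list, so the vendor split lives in the
# runtime, not in the application code.
print("available:", ort.get_available_providers())
session = ort.InferenceSession(
    "model.onnx",                               # placeholder model path
    providers=[
        "DmlExecutionProvider",                 # DirectML: vendor-neutral on Windows
        "OpenVINOExecutionProvider",            # Intel CPU/GPU/NPU route
        "CPUExecutionProvider",                 # universal fallback
    ],
)
```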
I really hope this can trade blows with the M2 Ultra and the upcoming M3.
If not, then Arrow Lake definitely will.
Waiting to see, on a function-by-function, power-per-task basis, whether this can get close to the M1/M2 line. I am not convinced this Thread Director works all too well. All-day battery life without trying would be ideal.
I wonder how the CPU will react when I start baking in Blender and then launch a light game in the foreground. It could slow down the main task since it's in the background, so I'd be wasting time running a game on the full cores while baking gets the slower ones.
Maybe you are running Blender on the graphics card and you don't have enough headroom for other stuff.
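One blunt workaround, assuming Windows and that the scheduler honors process priority (whether Thread Director respects it in practice is an open question): raise the bake process's priority yourself. A sketch using the Win32 priority API, with minimal error handling:

    #include <windows.h>
    #include <iostream>

    int main() {
        // Raise the current (bake/render) process so the scheduler is less
        // likely to shove it onto the slower cores when a game takes focus.
        if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS)) {
            std::cerr << "SetPriorityClass failed: " << GetLastError() << "\n";
            return 1;
        }
        std::cout << "Bake process priority raised.\n";
        // ... kick off the long-running bake here ...
    }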
So weird... I would expect them to target AI/ML workloads through the dedicated XMX 'cores' in their GPU, not a new 'architecture' of sorts. Still, this whole package sounds awesome, not only for laptops but also for PC-gaming handhelds like the Steam Deck. With Intel showing a very strong full package like this, their willingness to work with others in the industry (whether sending chips for production at TSMC fabs or through their IFS), and competitive performance, I think it is possible that the next generation of consoles adopts them (working together to develop unique solutions). I don't know how much money AMD makes from those partnerships, but the 'intangible' indirect benefits to every other segment, simply due to the sheer VOLUME of units moved, have to be great (and difficult to isolate in their earnings reports).
A lot will depend on how fast software starts supporting the new NPU.
Intel continues to have subpar naming. A proud Intel tradition, started after the success of 'Intel Inside' and 'Pentium'.
You can still buy a desktop Pentium for their newest socket.
With all the amazing tech inside Meteor Lake, I'm disappointed there is no Thunderbolt 5. Ian reported at AnandTech in August 2021 when Gregory Bryant, an executive at Intel, leaked a working Thunderbolt 5 prototype demo. It is not that complicated.
First TB5 silicon is coming out Q1 2024. That's the controller; not sure when it'll be embedded into the CPUs.
@@TechTechPotato Meteor Lake is mobile-first and most likely mobile-only. Imagine a situation where Thunderbolt 4 is integrated but you have to use a discrete controller to get Thunderbolt 5... fail for me.
Intel will probably have to wait until Arrow Lake Desktop to even use the new Barlow Ridge Thunderbolt 5 controllers.
Great, low-power video playback will be 666 ms behind; I am sure this is a feature everyone wanted.
Intel's attempt to catch up with AMD, while not very convincing, is a very welcome move in the right direction.
Pretty surprised that a new process node is being developed in Malaysia, or is it just the assembly? Also, hope you had a good time visiting Malaysia.
I do hope that these NPUs, and the AMD ones too, will have good software support. E.g. I am mostly looking for OBS plugins that support them natively. Currently background removal, for example, is really limited to Nvidia and requires a complex SDK. It would be nice to be able to use AMX (for simpler things) or the NPU (for more intensive stuff) for this instead.
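For what native support could look like: a minimal sketch, assuming OpenVINO 2.x and that the Meteor Lake NPU is exposed as the "NPU" device (earlier releases used different names); the background-removal model file is hypothetical:

    #include <openvino/openvino.hpp>
    #include <iostream>

    int main() {
        ov::Core core;

        // List devices so you can confirm the NPU is actually visible.
        for (const std::string& dev : core.get_available_devices())
            std::cout << "device: " << dev << "\n";

        // Compile a (hypothetical) background-removal model for the NPU,
        // with CPU fallback via the AUTO plugin if the NPU is missing.
        auto model = core.read_model("background_removal.xml");
        ov::CompiledModel compiled = core.compile_model(model, "AUTO:NPU,CPU");
        ov::InferRequest request = compiled.create_infer_request();
        // ... fill input tensors, request.infer(), read outputs ...
    }

The nice part of this style of API is that the plugin hides the device details, so an OBS plugin written this way would not need a vendor-specific SDK.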
According to AMD and others, gamers don't care much about efficiency for gaming. I mean, they may be asking the people who buy 4090s...
"500 mJ per frame" sounds a bit much to me: 0.5 J * 60 fps (Hz) is 30 watts, unless this comparison was done on a desktop card, since it was about "leveraging Arc software improvements".
If you want better battery life on a laptop, maybe run Apple silicon; Intel is still playing catch-up with Apple and TSMC.
Right, you'd go Apple if you've been sucked into that quagmire, or AMD otherwise. But it's good that Intel is trying; if they're back to competitive on power in a few years, that bodes well for consumers. Competition is good.
25:12 indeed it is.
on-CPU HBM for desktops/mobile when? :P
I for one would be ecstatic to have an NPU and Quick Sync on an M.2 card.
I have a weird use case where I need AMD for one software solution, but can't put another piece of software on it with proper support for AMD's VCE, so for best results I need 2 different boxes.
A 7950X and a 12100; but to get the NPU built in, it would be like having that Coral TPU I remember seeing a while back that I considered for my security cameras.
I don't have high hopes, but what are the chances to see AVX-512 again? I still can't believe they gave me cacheline-sized registers to play around with just to take them away again :(
Can you quote me a popular piece of software that people are using now that uses AVX-512?
@@catchnkill Not a very satisfying answer, but it looks like OpenSSL has a bunch of AVX-512 paths and is itself included in a lot of software.
I'm also pretty sure I've once seen TensorFlow report that it used AVX-512, when a driver update broke GPU support.
It would make sense for programs like Photoshop or Autodesk Fusion to make use of AVX-512 when available, but I have no proof of that.
@@LarsDonner That is exactly the reason why Intel dropped AVX-512 from their consumer CPUs. You are telling me that it is for less than 0.0001% of the market. Those who need it can use a GPU to do the job, or even a specialized FPGA accelerator. Just don't add die space for a function that 99.9999% of users never use. AVX-512 is not backward compatible with the earlier 256-bit AVX, and that is a killer for this technology. It was Intel's mistake to include AVX-512 in consumer CPUs, as it castrates them: that precious die area for AVX-512 execution could be used for other functions, say a larger first-level cache. To save power in laptops and notebooks, Intel required manufacturers to turn off AVX-512 by default in firmware. It is a defeat for Intel.
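For reference, this is roughly how software like OpenSSL copes with AVX-512 being optional: detect at runtime and fall back. A minimal sketch for GCC/Clang on x86-64; the vector-add kernel is just a toy example:

    #include <immintrin.h>
    #include <cstdio>

    // Compiled for AVX-512 via the target attribute, so the rest of the
    // binary still runs on CPUs without it.
    __attribute__((target("avx512f")))
    static void add_avx512(const float* a, const float* b, float* out) {
        // One 512-bit register holds a whole 64-byte cache line of floats.
        __m512 va = _mm512_loadu_ps(a);
        __m512 vb = _mm512_loadu_ps(b);
        _mm512_storeu_ps(out, _mm512_add_ps(va, vb));
    }

    static void add_scalar(const float* a, const float* b, float* out) {
        for (int i = 0; i < 16; ++i) out[i] = a[i] + b[i];
    }

    int main() {
        float a[16], b[16], out[16];
        for (int i = 0; i < 16; ++i) { a[i] = float(i); b[i] = 2.0f * i; }

        if (__builtin_cpu_supports("avx512f"))
            add_avx512(a, b, out);  // the wide path, where the silicon allows
        else
            add_scalar(a, b, out);  // e.g. on chips where AVX-512 is fused off

        std::printf("out[15] = %f\n", out[15]);  // 45.0 either way
    }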
Can't wait for the first remote exploit which turns the NPU against the local machine it is in for the purposes of on-device cracking.
Over the past two years, Intel's progress has picked up a very amusing description among us hardware enthusiasts in China: "electronic porn". It's just that exciting 😄
I still wish Intel would finally give us some more small-form-factor and media-station oriented products. Now with the LPE cores, I would like to see something like 4+8+4 with 96/128 EUs, but for DESKTOP for once. Why limit desktop to 24 EUs?
Of course things/tech get better with time, but it's truly impressive what engineers are designing, testing and producing at the bleeding edge of what's currently possible. I'm not an engineer, but I sometimes wish I'd had the drive and motive to follow through with my half-baked plans to study ME. Kinda funny: the last 2 of 3 girlfriends were engineering students, one EE and one ME. Really fascinating stuff. The workload was utterly insane though, like 3-4 hours a day of homework/study time. I called them both the weekend girlfriends, and honestly it was often a once-a-week kind of deal.

I randomly met an AMD engineer a decade or so ago out exercising and gave him crap for AMD's (at the time) really bad power efficiency compared to Intel. From what I recall of a brief convo a long while back, he had been one of the top guys at Intel and I think was recruited to help AMD figure out their power and efficiency issues. Well, it's good to see that his team (sure, there were many others) succeeded in their efforts. Smart people (especially engineers) are just something else. It gets me so excited and engaged to talk about crazy tech stuff with the people who are actually designing it, learning the basics of what goes into designing/implementing ideas. Remember that everything that exists was once just an idea, down to the seemingly minor details. I've got a lot of decent ideas, but taking them through to production won't happen. Wish I had an endless inventor fund for trying to design/test stuff; it would for sure be what I'd do most of the time. I guess that's what engineers do all day haha. Missed that boat, unfortunately...
Just talked with a mechanical engineer, and they just do what works, not new things. For sure I think it depends, but the workload, and then keeping up with the work, is also why I didn't go into CS. Now I regret it all, because every job needs you to keep up or you just fall behind.
So this Intel chip is just for laptops?
For now, yes
Thank you
When you're thermally or power-budget throttled, power-efficient == performant; even on desktops, the CPU limits clocks according to power budgets...
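Worked through with made-up numbers: under a fixed power cap, delivered performance is just efficiency times the cap, so the more efficient chip simply wins.

    #include <iostream>

    int main() {
        const double power_cap_w = 35.0;        // e.g. a laptop's sustained limit
        const double eff_a = 2.0, eff_b = 1.5;  // hypothetical GFLOPS per watt
        std::cout << "chip A: " << eff_a * power_cap_w << " GFLOPS\n";  // 70
        std::cout << "chip B: " << eff_b * power_cap_w << " GFLOPS\n";  // 52.5
    }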
That's very impressive
Meteor Lake/Foveros is going to be the long-awaited next BIG leap for Intel. Competition is good indeed!😉
Can Intel close the gap to Nvidia + AMD in 2 years?
Interesting, I thought each tile was from a separate wafer, but it looks like they're printed on the same wafer.
As long as this scales well to desktop users with better cooling, I don't have a problem with any of this.
I wonder how it's going to run on Windows 10 and Linux.
lol. Low power gaming usually does equate to low performance gaming (so you're probably correct).
Meteor Lake is the most exciting consumer chip Intel has released in a decade. New Intel 4 node, tiles (chiplets), NPU (AI engine), new CPU and GPU architectures, focus on power efficiency, etc. I have zero doubts that Meteor Lake will easily surpass Zen 4 mobile, not in just performance but performance per watt and features.
1272 maybe?
Intel's about to take the industry back over.
I hope some SKU of it supports my B760M DDR4 motherboard 😖
Funny, I don't care about CPU performance as much anymore. It's like all SoCs nowadays have more power than we need. It's all the other bits, like codecs, connectivity, AI, GPU, etc., that I find interesting these days. Oh, and efficiency. So glad this has AV1 10-bit encode/decode.
CPU performance is very niche. I have an i7-12700K for music production; it has amazing performance per watt (for a desktop chip), and the 8P+4E is an amazing core-count sweet spot for me. Music production is 100% bottlenecked by the CPU; I wouldn't be surprised if the i7-12700K itself can outperform Meteor Lake.
DAWs (digital audio workstations) love monolithic Intel CPUs with a ring bus; they don't like chiplets or tiles very much. My i7-12700K can beat a Ryzen 9 5900X and a Ryzen 7 7700X; it has beefy floating-point speed as well, and audio loves floating point.
Meteor Lake is insane for mobile. I hope they decide to launch it on desktop as an early i5 for the new LGA1851 socket, to get mobos ready for Arrow Lake, so we don't get the same shitshow that was the AM5 mobo launch.
We can thank AMD for Intel finally moving again... damn, this feels like living in the future.