I clicked in because I thought you were holding a 3.5 inch floppy lol
Way too small to be a 3.5”. More like 5.25”. And yeah, I thought the same thing.
I also thought it is a floppy drive. 😆
same lol, but I thought it was a 5 1/4 floppy.
same tbh
I also thought it was a floppy but a 5 ¹/₄
Thank you guys for doing this. Windows gets all the love, and as a recent adopter of Fedora in February who also likes hardware, knowing how well new hardware runs on day +1 is important to me.
Well said.
It's crazy that Windows throttles new chips from both Intel and AMD
Microsoft probably reasons that most compute should be done in their Azure cloud and same for storage really.
They want to own your digital you and get rent for accessing it at the same time.
The last thing they want is an optimal performing non-cloud solution for everyone.
@@TheEVEInspiration 110% this, look at how autosave is only to the cloud on office apps
@@TheEVEInspiration I wish so badly this weren't the truth. Why are they so evil. Why do they need more money? Just make enough for everyone to have a nice life. We don't need socialism, but we do need to put a lid on capitalism.
It's not that crazy. They've always been incompetent, but Wendell hasn't always had a youtube channel.
It's mainly because most of Microsoft's business is in the corporate world, and they intentionally tune the power plans and settings to save power. It's also to keep the state of California off their backs, because of the laws there.
Wendell! Just wanted to say I am enjoying your work. I'm a recent Windows refugee, and have switched to Kubuntu 24.04. Hoping I can learn more about Linux from this channel.
KDE great choice 👍🏼
Welcome to Linux! The more, the better for all. Ubuntu here, but nevermind, it is the same family.
I wonder if all that background telemetry, windows recall, ads all over the OS, online integration nobody has asked for, is impacting windows performance 😂
FBI! OPEN UP!!
DANG DANG DANG DANG!
I've seen a post on Twitter where someone disabled all but 1 P-core and performance in Cyberpunk actually went up, by like 10%. So I think it's scheduler related. (And I definitely don't want to defend Microsoft's data vacuuming.)
Not only that. The Windows scheduler is dumb. It will literally move software threads across the physical threads of your CPU for no reason at all. This is why the Ryzen 7 9700X sometimes does better than the Ryzen 9 9950X (which has 2 CCDs instead of 1).
Not to mention how poorly programmed those ad displays may be.
@@Moejoe647 Are you sure you're not mixing this up with someone disabling 1 P-core (core-0) and leaving the rest untouched? That's a post I remember seeing too. It's an old way to get better performance.
Not surprised this works better on Linux; Linux was the first platform to come out with fixes/optimizations for Intel's big-little design, ahead of Windows. Microsoft just doesn't seem like they know what they are doing anymore, smh
Really? The "Thread Director" was only available for Windows 11 at the launch of Alder Lake, and I heard bits over the following years about how the Linux kernel support got better. As far as I've read on Phoronix and elsewhere, the kernel didn't necessarily know when to use a P-core or an E-core at the beginning. I don't know how it is now, technically; I do know I've never experienced any problems with my NUC 12 Pro Alder Lake system on Linux (which I've had for a couple of years now).
I still would never use Windows personally, because as mentioned elsewhere, who knows what else it's doing in the background? "Windows is a service", as MS likes to remind us.
@@Vantrakter Thread Director is built into the CPU; Microsoft just needed to streamline support by optimizing the Windows scheduler to accommodate the whole system better. But they were failing to do this for a bit after Alder Lake first launched with the new Thread Director design with P and E cores. Meanwhile on Linux, kernel version 6.0, if I remember correctly, was when they implemented those optimizations and it just worked, while it would take Microsoft a few more updates to get their scheduler up to snuff.
@@cosmicusstardust3300 Everything I can read now, and did back around 2021 when ADL launched, says the Thread Director was supported via the OS scheduler in Windows 11 from the start, not so for Windows 10. If you read "Intel Posts Big Linux Patch Set For "Classes of Tasks" On Hybrid CPUs, Thread Director" on Phoronix from Sept 10th 2022, it describes how Intel was aiming to get the Linux kernel to correctly utilize little and big cores with a set of patches that looked destined for sometime in 2023. I'm not sure when they went into the kernel, if they have.
@@cosmicusstardust3300 I'm not sure why my last reply vanished. Anyway, Windows 11 was ready at the launch of Alder Lake; Windows 10 was not. Neither was the Linux kernel; sometime during 2023 Intel provided patches to make it determine when to use P or E cores. Phoronix has lots of articles on it.
I searched Phoronix for a review on these processors as I had a strong hunch that Windows was the culprit for low performance.
And ? What's the conclusion ?
@@Winnetou17
"The few Linux gaming tests showed the performance regressing compared to the Core i9 14900K, but with much better power efficiency. Across various other workloads there were decent generational improvements in raw performance for areas like code compilation, some HPC tasks in cases where AVX-512 isn't utilized, and various other creator and developer workloads. The 24 physical cores of the Core Ultra 9 285K were enough to outperform the 32-thread Ryzen 9 9950X in many code compilation tasks, some MPI workloads, and more, but in areas leveraging AVX-512 like AI and many creator workloads, the AMD Ryzen 9 9900 series continued to dominate."
GameModeRun shouldn't need any special adjustments for Arrow Lake, since the E- vs P-core detection is simply "look to see if there are differences in the max frequency among the cores and use the set that reports the higher number", so that should already work out of the box. At least with v1.8.2, since my initial implementation in v1.8.0 had a bug in the E- vs P-core detection (due to not having such a system, I could never test it).
Edit: OK, spoke too soon. I added a 5% safety limit in the detection code, and it appears that some individual cores on at least the 13900K can boost more than 5% over the other P-cores' max frequency, leading to GameModeRun pinning the game to only those 4 cores that can boost.
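For anyone curious, here is roughly what that frequency-compare detection looks like as a minimal Python sketch. This is not GameModeRun's actual code, just the idea described above, and it assumes the standard Linux sysfs cpufreq files are present:

```python
# Minimal sketch of the frequency-compare idea above, not GameModeRun's
# actual code. Assumes the standard Linux sysfs cpufreq files exist.
import glob

def detect_fast_cores(margin=0.05):
    max_freqs = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"):
        cpu = int(path.split("/")[-3][3:])          # ".../cpu12/cpufreq/..." -> 12
        with open(path) as f:
            max_freqs[cpu] = int(f.read().strip())  # reported in kHz
    if not max_freqs:
        return []
    top = max(max_freqs.values())
    # Keep every core whose max frequency is within `margin` of the top value.
    # With a tight margin, a couple of favored cores that boost higher than the
    # rest can end up being the only "fast" set: the pinning issue from the edit.
    return sorted(cpu for cpu, freq in max_freqs.items() if freq >= top * (1 - margin))

if __name__ == "__main__":
    print("fast cores:", detect_fast_cores())
```

With a 5% margin on a chip whose favored cores boost more than 5% above the other P-cores, only those favored cores pass the filter, which matches the pinning behaviour described in the edit.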
Light travels 7 times around the world in 1 sec, but in one period of a 5.7 GHz clock it only travels 5.3 cm (2") - and electrical signals travel slower than the speed of light. How CPUs function at these clock speeds is amazing!
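Quick sanity check of those numbers with standard constants (speed of light, approximate equatorial circumference of the Earth); a small Python calculation, nothing exotic:

```python
# Sanity check of the comment above: how far light gets per 5.7 GHz clock cycle,
# and how many times it laps the Earth per second.
C = 299_792_458                    # speed of light in vacuum, m/s
EARTH_CIRCUMFERENCE = 40_075_000   # metres, approximate equatorial circumference
CLOCK_HZ = 5.7e9                   # 5.7 GHz

laps_per_second = C / EARTH_CIRCUMFERENCE
distance_per_cycle_cm = (C / CLOCK_HZ) * 100

print(f"laps around Earth per second: {laps_per_second:.1f}")            # ~7.5
print(f"light travel per 5.7 GHz cycle: {distance_per_cycle_cm:.1f} cm") # ~5.3 cm
```

So roughly 7.5 laps per second and about 5.3 cm per cycle, consistent with the figures quoted.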
Correct me if I'm wrong, but wasn't the Ryzen 9000 series also sometimes better on Linux than Windows? Between similar performance at lower power, improved AI compute, and sometimes better Linux performance, do you think this is a consistent shift in the market strategy from the big manufacturers (perhaps trying to stay competitive with the rise of ARM?) or is it just a matter of economics (silicon yields down, geopolitics goofy, etc.)?
I'd say this is at least partially optimization. Linux optimizes better and faster, sometimes years before Windows does.
Open source, when widely adopted, is clearly a superior solution. Linux is more optimized in general (gaming suffers mainly due to low adoption, but it's still mostly pretty good).
@@MrFatpenguin
I only game on Linux now and i can attest to that with NVIDIA.
Except where Anti-Cheat lies and also some Unreal Engine games (Icarus being a big one).
Intel also provides code fixes and optimizations for Linux
@cajonesalt0191 I don't think it's part of any new strategy. The server market has always been dominated by Linux, and business customers have been their biggest customers. They would have been fools to ignore their needs, and they didn't. You can look through the x86 instruction set for dozens of oddly specific single instructions where it's hard to see why they need to be implemented in hardware. These were requests from their big customers, because they improved performance or energy efficiency at scale.

I think the discrepancy in performance between Windows and Linux lies in their thread schedulers. Windows moves threads around to different cores more than it needs to and thus wastes time doing nothing in those nanoseconds, which adds up over thousands of threads. This made the tech news when earlier Ryzen chips started running faster on Linux a few years ago. It's hard to know if anything has changed in the time since then. Meanwhile Linux has experimented with a few different thread scheduling algorithms and has multiple available as options depending on your needs.

Microsoft doesn't have the impetus to create a high-performance kernel for Windows, because their big customers that care about performance are running Linux on Azure already. Windows users who do care about performance are a captive market: they're tied to Windows by the software they use but don't spend enough money with Microsoft to have any influence there. Could they close the gap? Yes. Will they? No, because high performance doesn't seem to be a priority for the Windows team right now. It's more about pushing Bing, telemetry, advertising, and "AI" features. As far as they care, the thread scheduler is good enough.
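As a side note on the migration point: on Linux you can at least see and constrain which cores a process is allowed to run on, so the scheduler cannot bounce its threads elsewhere. A minimal Python sketch using the Linux-only affinity APIs; the CPU set {0, 1, 2, 3} is just a hypothetical choice for illustration:

```python
# Illustration of core pinning on Linux: restricting a process to a fixed
# set of logical CPUs so the scheduler cannot migrate its threads off them.
# os.sched_getaffinity / os.sched_setaffinity are Linux-only Python APIs.
import os

pid = 0  # 0 means "the calling process"

print("allowed CPUs before:", sorted(os.sched_getaffinity(pid)))

# Pin to the first four logical CPUs (hypothetical choice; on a hybrid part
# you would pick the P-core siblings instead).
os.sched_setaffinity(pid, {0, 1, 2, 3})

print("allowed CPUs after: ", sorted(os.sched_getaffinity(pid)))
```

Tools like taskset, and the GameModeRun core pinning discussed elsewhere in this thread, build on the same underlying mechanism.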
The hanging M.2 is cursed
It's even better that it's an Optane; that module is becoming irreplaceable.
@@evildude109 There goes my dream of having ultra-high-speed, ultra durable (not Gigabyte) M.2 on my laptop...Optane died so soon
@@ruikazane5123 I took Wendell's advice when he made his first RIP Optane video, whenever that was, a couple years ago. I got a P1600X, 118 GB, for around $100. It's awesome; I use it as the boot drive for my laptop. They've almost doubled in price since then, and the used market is drying up.
I am glad Steve introduced me to your content. I wish it happened sooner. You have such amazing information! I can see why you two linked up.
Thanks for your balanced opinion - I value it.
Very nice to hear about the CPUs from a Linux standpoint. I'm more focused on the laptop variants
The new E-cores are kinda exciting. You can do so much with an N100 mini-pc now. Can't wait til the replacement hits.
The N100 is great - but it has one and only (but big) problem… single-channel memory.
If that thing had dual-channel memory capabilities, it would be such a great thing.
Sad thing? Its new brother - the N150 - is still single channel only…
@@Karti200 and only 9 pcie lanes.
I've been waiting for this to drop since you did the Windows review on the main channel. Thank you as always, Wendell.
Will be interesting to see these CPUs in NUCs, Asus now but NUCs have always been interesting.
Now that I have my 14900K running stable and with stable RAM, I will be staying put for now.
Hey Wendell, great work! Just a small correction: the memory latency is not an inherent problem of the Foveros packaging, but of the architecture layout of Arrow Lake. In Arrow Lake the memory controller is separated from the compute tile with the cores, thereby necessitating movement over the fabric to access memory. The better way, in the short term, is to include the memory controller in the compute tile, like they do in the Xeon 6 parts. A disaggregated memory controller is better in the long term, but Intel's current implementation has problems.
I'm so old. I thought Windell was holding up a 5.25" floppy disc in the thumbnail.
ngl, looks like a floppy
So old that people still measured things in inches!
keen to see some Optane SSD benchmark comparisons with 14th gen
Thanks so much for the video; most of the time I found only useless Windows videos for it.
Nice to see that so much works well, also as hardware on its own. I expected it to run well on Linux anyway, since Linux is often better with new hardware than Windows, but it's still nice to see confirmation. With new hardware it's always good to see how well and how stably it works, and nice to see some proper tests of this CPU and motherboard family, as no one has really covered it properly.
Curious to know how the iGPU behaves on Linux
+
Have you guys done a video on Linux performance with lunar lake, e.g. 288V?
I am very interested in an ECC support comparison between current Intel and AMD. Please, most likable person on YT!
05:24 that old Arctic cooler :D impressed that it works in the new socket
Wendell!
A better experience on Linux does not surprise me. I may choose Intel for my next PC for code compilation.
Intel All Access described how this generation of Thread Director would use E-cores first, and only move to a P-core if the workload was too big for an E-core.
I see that Nobara wallpaper ;D
I'm always in the market for new server hardware to refresh my development servers.
The number 1 problem is always: once I configure and price up a base-level offering, I head over to that auction site and have a look at what's available for around the same price in barely-used mobo+CPU+RAM combos. Then I end up in a vast rabbit hole of comparing cheap maxed-out Epyc combos, with limitless cores and a tonne of ECC RAM... and waste the next 2 days playing what-if. Then I give up and get back to work.
The best takeaway from this video was the existence of lstopo (or hwloc).
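If you don't have hwloc installed, a rough approximation of what lstopo reports can be pulled straight from sysfs. A minimal Python sketch, assuming the usual Linux topology files are present:

```python
# Rough approximation of what lstopo/hwloc report, pulled straight from sysfs:
# which logical CPUs share a physical core (SMT siblings) and which package
# they sit in. Assumes the standard Linux topology files are present.
import glob
from collections import defaultdict

packages = defaultdict(lambda: defaultdict(list))

for topo in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology")):
    cpu = int(topo.split("/")[-2][3:])            # ".../cpu12/topology" -> 12
    with open(f"{topo}/physical_package_id") as f:
        pkg = int(f.read())
    with open(f"{topo}/core_id") as f:
        core = int(f.read())
    packages[pkg][core].append(cpu)

for pkg, cores in sorted(packages.items()):
    print(f"package {pkg}:")
    for core, cpus in sorted(cores.items()):
        print(f"  core {core}: logical CPUs {sorted(cpus)}")
```

On a hybrid part you can see at a glance which core IDs have two logical CPUs (SMT-capable P-cores on 12th-14th gen) and which have one.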
Does the GCC code path optimization include support and/or tuning for the wider ALU execution units per core?
Zen 2/3/4 have 4 integer ALUs, and Intel's Core designs also had 4 ALUs until Alder Lake; 12th through 14th gen have 5 integer ALUs.
Both Zen 5 and Arrow Lake have 6 ALUs.
I feel that both uarchs will require compiler optimizations to take advantage of the wider execution paths per core, but Zen 5 less so due to its SMT implementation. Arrow Lake will need even tighter compiler optimization to utilize the wider ALUs.
Any optimizations Intel may push for will benefit Zen5 as well.
I've seen it where r cores give a big uplift in gaming, along with CU-DIMMs and a cache overclock.
Great video as usual thank you
Will get it to run dual RTX 5090s on ProArt boards with dual x16 PCIe slots. 2x32GB will finally make for a reasonable AI platform on a budget while being totally awesome.
I didn't know you had a Linux channel 😱
This is an interesting step for Intel.
When it excels, it does really well, but also uses about the same power as the previous architecture.
When it performs okay, it's more efficient but nowhere near AMD at times.
Then sometimes it just seems bad.
I think it is a step in the right direction, but still needs work both on the software and hardware sides. 🤔
The same thing happened with Ryzen 1000.
@@tommyking626 Kinda, but not that close. I mean, Ryzen 1000 was miles ahead of previous AMD CPUs in everything, both performance and efficiency.
Meanwhile Intel 200 trades blows with 14th gen and is somewhat more efficient.
Still, it is true that both Ryzen 1000 and Intel 200 mark a change for each company. Let's just hope Intel has similar success with the following generations; otherwise AMD is just going to keep increasing prices and offering meh performance improvements each gen, just like Intel did from the 3000 to 7000 series.
It wasn't ready for release and has a lot of bugs.
Ryzen 1000 was 8 powerful cores for $350 (we bought a 1700X), and Intel would sell you only 60% of that for $350.
What do you think of Jayz2Cents' overclocking, and also of the alleged (I didn't see it un-cropped) 1P+16E-core gaming result that was faster?
how are your kernel or other compile times, any uplift over RPL-R or Zen 5?
On a 14700K it can do Chromium in 1 hour, 50 minutes and 14 seconds.
I'm curious too.
On my i7-6700HQ this stupid Chromium takes 14 hours, if I'm not doing anything else on the computer.
I know you weren't interested, I just wanted to complain. How Firefox can be literally 10 times faster (between 1.0 and 1.5 hours) is beyond me. Stupid, stupid Chromium.
@@Winnetou17 bloat, simple. Bloat and probably lots of hidden back doors few actors know about
@@Tudorgeable I wonder what the chances are that that is a coincidence?
Thanks for the unbiased and objective reviews. I really dislike how many tech reviewers are shitting so hard on this generation launch. Personally, I think it's impressive how good it is for such a different architectural layout. It's a 1st gen of its type and drivers are probably still being worked on. Performance will probably improve in the next few months. That being said, I can't wait for the 9000X3D launch.
So many channels are just gaming benchmark and test numbers channels. Few actually look into (or even understand) the actual HARDWARE and what it offers as a piece of technology.
I'll give Gamers Nexus a pass though, since they do other in-depth hardware tests around what they're testing, but the context of their presentations doesn't do them many favors.
Yeah, just seemed like everyone was being overly negative for clicks and views. There will eventually be some deals and I’ll take the power reductions in this generation even if it’s iso-performance.
Some YTers seemed overly hostile towards this CPU when it came to gaming, while only lightly touching on the power efficiency and productivity improvements. It almost seemed like they were forcing themselves to say that BS for clicks.
It's six cores for the Ultra 5; wish it was 5, I would have gotten it instantly!
JayzTwoCents actually found one of the biggest issues with the 285. Some performance issues got fixed by using CU-DIMMs. I think the issue is having a task that takes a certain amount of time, splitting it into a pipeline, and then running it at a way lower frequency than it is designed to handle; thus if you run traditional DDR5 you have extra latency that you wouldn't have if you used a CUDIMM.
Yup, and it's completely logical despite all the early kneejerk bagging of Intel. Next-gen memory controller, next-gen RAM.
It's not CUDIMM, it's the memory frequency. Jayz used RAM at 8400; you don't need CUDIMMs for such a frequency. Any higher than that might even harm performance a lot because the memory controller would run in Gear 4.
As a complete aside, I was just pondering why they chose to go with SMT on the P-cores in 12th→14th gen. My gut tells me they should have had SMT on the E-cores and non-SMT P-cores if they really wanted the best single threading and latency for foreground tasks and efficient use of die area for background tasks.
Probably because modern games allocate 12-16 threads by default, and disabling HT for the P-cores would have issues with E-core threads being utilized in the wrong applications, causing stutters.
What's the SR-IOV functionality like on this platform?
i love this channel ❤
linux mentioned 😎
A possible Primagen mentioned?
Wendell, you are way too optimistic @11:20. It's going to result in better shareholder outcomes, not lower prices for us consumers.
Time for AMD and Intel to start trying out the Apple thing and introduce really fast RAM chiplets/tiles.
We have the technology. Imagine the potential performance advantages. Cheers!
I never heard about PCI Express ECC before.
Actually, PCI Express encoding already has built-in forward error correction. I suspect this got implemented because of CXL, which requires PCI Express to behave like ECC memory. This means extra bits need to be transmitted for ECC memory over CXL to the CPU to work, but it will add overhead.
Maybe it's able to prevent retransmission in case of bit errors?
Wait, can you run an M.2 without screwing it down?
Some laptops are not screwed down, just pushed down by the case.
When you insert it into the slot, there is a noticeable bite where the contacts "clip in". Just don't wiggle it around too much
Does CUDIMM improve gaming? On Windows it seems to be that way, as per Jayz2Cents' video.
So this is Foveros? Lots of separate systems "working together" in one chip? Seems too complicated to work without headaches… Intel's worked on this for some time and it appears to be beta.
"Chiplets are the future"... as evidenced by AMD making all their CPUs that way for the better part of a decade at this point. They're very much the present. Intel has just been plagued with so many other problems they've finally found the time to get with the program.
I was hoping for more charts but it's ok
What is the CPU frequency scaling doing? What happens to webXPRT if you `cpupower frequency-set -g performance`?
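A quick way to see which governor each CPU is actually using, before and after running that command; a minimal Python sketch reading the standard sysfs cpufreq files:

```python
# Small helper to see which cpufreq governor each CPU is using, e.g. before
# and after running `cpupower frequency-set -g performance`.
# Reads the standard sysfs cpufreq files; paths assume a typical Linux setup.
import glob
from collections import Counter

governors = Counter()
for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
    with open(path) as f:
        governors[f.read().strip()] += 1

for gov, count in sorted(governors.items()):
    print(f"{count} CPUs using governor '{gov}'")
```

If everything reports `powersave` (common with intel_pstate defaults), results for bursty browser-style benchmarks like WebXPRT can look quite different from a run under `performance`.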
The disadvantage of spacing the P-cores out is latency, which Arrow Lake suffers a lot from.
more vids on hardware on linux, plz
I think you misspoke when you said the Core Ultra 5 CPU has 5 P-cores; it is 6. But on to my question: how many compute dies does Intel make for this series? Is it only one, with binning deciding what bracket it ends up in, or is it more than one die?
What do you mean about the P-cores? This clearly has 8 P-cores and everybody has known that for months.
@@Winnetou17 0:49 in talking about the core 5 variant
@@afre3398 Oh, ok, sorry, missed that.
Interesting that both Ryzen 5 and Arrow Lake do better on Linux than Windows.
How about running 4 DIMMs of RAM?
So is it a stretch to say that Microsoft has a hard time following all the changes coming from the CPU manufacturers?
I wonder how Microsoft does it. Do they allow Intel engineers to write kernel code for NT, or do they write a spec that Microsoft has to implement? Open source must be a lot easier for the hardware producers to contribute to.
Hmm, I have noted a lot of work in Intel's Linux development, so I guess it's paying off. I'm just a casual user anyway. I wonder if, for now, making sure that the future of Intel works well on Linux is something that can sway some share away from AMD? Currently I feel that Intel has a lot of good future tech and ideas, but many need some tuning to work optimally; after that it's the balance of power/performance.
I'm stuck on a NUC 12 Extreme compute module. I'm guessing an upgrade to the 245K would work for me, considering I can't hit past 4.5 GHz due to cooler limitations, and I expect multitasking would likely work better on these with Linux, so I would not be losing anything. I'm trying to aim for a more efficient build overall; from the sounds of it, I think CUDIMMs are the optimal way for me to go as well.
Any Linux benchmark?
I recently ran some tests in WSL2 on Windows on my 14th-gen rig and got predictably 20% worse performance than about 2-3 months ago. On native Linux I have the same performance as 3 months ago. I suspect that MS broke something in the recent updates, since they don't describe in detail the full list of changes in the updates.
Soooo, they’ve made a more complicated chip- how many years to tune it?
When HDR is officially supported on Linux, I'll never ever ever look back at Windows.
I need a beginner Linux distro. Windows is sucking my performance
@@Pizzahutbaby Linux Mint
@@Alex-ii5pm Absolutely the best distro for someone who wants things to just work.
@nov3316 KDE Plasma has HDR support now on Wayland. Getting software to actually output HDR to KWin is hard currently, though.
You missed something. If you want to get the best performance out of these new CPUs, you will also need new memory.
Wendell... timestamps, timestamps!
Why not run the latest kernel (6.12-rc4) instead of such an old kernel?
I knew it, Windows is just a dead weight at this point. I am all AMD, and most likely will stay like that forever, but I knew the reason why those new cpus were behind was Windows being terrible. I will never update to Windows 11, and I can't wait for some programs such as AMD Adrenaline to be available on Linux so I can fully make the jump there.
Windows always was, is and will be crap compared to GNU/Linux (not "Linux", which is not an operating system, but a kernel).
Watching windows users constantly complain about windows being a pile of utter rubbish never gets old. As a non gamer that has very little interest in gaming outside of a couple indie games I find arrow lake quite nice.
wish it was hyper.
For gaming it's a pass, but I'm still on a 12900KF on Linux. I would probably go 9000 series if I were to build again today, mostly for gaming, with the 9800X3D around the corner, or a 9950X/9950X3D depending on use case.
I'm also on 12900K, but won't build AMD until they widen their DMI
what is VID?
Thanks, just me, I'll stick with the TR1950X as I have a solar farm that powers the CPU I have.🤭
Please, secure that M2 connection 5:24 .
Windows 10 is a big step up from Windows 11.
0:46 core 5 is 5 performance cores, you said. haha
imagine shrinking x2 and not gaining x2 in perf like it was before....
They (intel) never claimed 2x performance
@@PixelatedWolf2077 so they wasted shrinkage
@@uhohwhy No, not at all. Ryzen was like this in some fashion, too.
@@PixelatedWolf2077 go fap on defective 285k LOOLLL
Gamers might be disappointed by it, but I think this is Intel trying to compete against the ARM CPUs.
Aaahh, Linux with its 8.9 billion different distros, and still 99.9999999% of the population have never even heard of it. Good for tools/utilities, but other than that, you can keep it.
I must be really old, cause the thumbnail keeps making me think you are holding an original Nintendo game cartridge
I really appreciate your comment about the NPU not being leveraged at the moment and that most comparisons focus on the CPU part.
Yes, the performance compared to previous generations and platforms is meh at best; however, when the software catches up and games and applications take advantage of the NPU, it will make a world of difference... maybe a bit scary too.
Companies need to innovate and build the platforms to develop the next generation of applications... and indeed people need to understand what their workload requires, what platform is most suited for it, and at what point to make the upgrades.
At least for now there are new tools coming to the fore, and it may make sense to make this point a little louder in commentaries. 😎
what about this instability in core chips with graphics enabled?
Man I hope this isn't a widespread problem… we can't have Intel fumbling all the time, AMD is already stagnating…
@@roccociccone597 They just aren't giving their toys away to play with... can't find the bugs if no one has them yet. Makes me wonder what they are really going to do with these.
@@roccociccone597 Bold claim, with Zen 5 X3D all but confirmed to have flipped the script on 3D cache, allowing 10%+ higher clock speeds, the AVX-512 improvements, and the lower power draw. Gaming is latency limited. There aren't going to be significant gaming performance improvements anymore until we move beyond the DIMM form factor and/or slap even more massive cache onto the CPU package.
Wendell .. you have got contacts to Intel, right? Could you ask them just one thing: "Why?"
Meanwhile 7800x3d in windows:” aham.., cool”
You never did another video after Intel's "final" microcode for 13th-14th gen. Could you? Did it fix all the servers? Sorry, I don't have any questions about the 200S series; I don't care about it, and you don't seem to either. Your non-enthusiasm for this unstable platform (it seems yours is unstable, with the memory errors) is clear; all you could really do was show their slides and talk about Intel's slides... and show video of a broken computer on your end.
It's like the question is "Is the new Intel platform good?", and your answer is
"Linux is different from Windows and I work on servers and that's cool".
I'd still wait for the 9800X3D before judging what the best gaming CPU of this year is.
I mean, if the 9800X3D /isn't/ the best gaming CPU of this year, it's a pretty severe problem.
Gaming is literally the most useless metric ever used to measure CPU performance. Oh no, my game doesn't run at 200 fps, I must now sacrifice everything else just to reach a barely noticeable change in frame time latency. What an investment.
@@roccociccone597 I agree. I was fuming over Zen 5 reviews and to a degree Arrow Lake ones (there was more regression there). Gaming is nice and quite useful as a chaotic benchmark of CPU functionality, but also takes ages to update. If it even is updated. I made this comment just in case someone who watched Wendell's video forgot that there will be 9800X3D dropping in 8 days.
@@RotaryJunkie For the gaming community, certainly. And for engineering as well. Though most games don't need the super insane FPS that currently only a handful of monitors can even take advantage of. An experienced player with a 120 Hz monitor will beat a hype-gamer with a 500 Hz monitor.
Don't get me wrong - fluidity is nice, but seeing CPU's only through lens of FPS is a bit ridiculous (I'm not saying you do that).
I expect mild teething issues with the flipped cache to pop up. And we will see if it was a good idea to give users the option to OC. It needs to be made super clear that with OC of an X3D part the warranty is VOID.
So... smashed by 9950x, got it :P Glad I went with a big-boy pseudo-workstation chip instead of this abomination.
Didn't Jay just show a huge jump in gaming with CU-DIMMs?
I wonder how well it will run on Russian Linux
Thread thrashing is real on Windows ...
Intel makes good server CPUs like Xeon. Only it's called Ultra now?
interesting you say memory latency is worse - Jayz2C was looking at overclocking and noticed that putting new CUDIMMs in gave it a 15% uplift before any OC happened.
Memory latency is worse compared to RPL, if you run AIDA64 memory test the latency shown in nanoseconds is sadly higher than RPL. That’s why going with CUDIMMs helps because with apples to apples UDIMMs the same kit on ARL has higher latency than RPL.
This is a big topic of discussion on the overclocking forums.
Even with what Jayz2C did, and even though I didn't see his numbers, I saw others; IIRC der8auer and somebody else had some memory latency figures, and even after all the OC and ring bus OC and tuned memory, it was still higher latency than Raptor Lake. I think the lowest I saw was 70-something nanoseconds, while on RPL it was 60-something nanoseconds. And that 70 was miles better than stock being at over 100, IIRC.
Arow in the lake
*But, WHERE ARE THE GAMING ON LINUX BENCHMARKS?*
ECC support again locked? So another useless CPU from Intel.