Correction: The 5% geomean loss comparison was against the 265K, not 285K, while the "on par" statement remains accurate against the 285K. The first part has been corrected with the trim tool. This correction comment will remain in place to transparently document the change, although the error will no longer be present. The on-screen correction card has been added as well. If you missed it, check out our deep-dive on the 12VHPWR & 12V-2x6 spec problems here: ua-cam.com/video/Y36LMS5y34A/v-deo.html Watch our Intel CPU fab tour here: ua-cam.com/video/IUIh0fOUcrQ/v-deo.html Working on testing all the CPUs again right now!
I'm wondering if they really need to look at moving from 12v to 24v to start handling this power need increase. Do you think that would make a difference in the long-term?
Another correction... The term "Glue" in chip manufacturing/packaging is over 45 years old AFAIK, and has always been (still is!) used to talk about chiplet comm traces routed through the organic substrate (the glass-fiber package baseplate). I still hear it all the time. The main downside to this is degradation of signal integrity, and HUGE power losses on high-frequency lines. We're talking about 3-5 THOUSAND percent higher power losses than if you can route something directly on the chip. Hence the huge problems with information sharing, cache sharing, and thread relocation on AMD's chiplet designs, which all use what is very correctly termed "Glue" to interconnect them. Using a silicon interposer instead of routing all the way down through the organic baseplate can decrease power losses by 90+ percent, while increasing the stable line frequency by 50% and line packing density by several hundred percent.
@@AJBtheSuede That's not a correction. We are referencing an old Intel diss on AMD where they used the term in a derogatory way. It was a joke about Intel's bad takes.
I wonder if the slide you removed from the video (Intel's "A Cooler and Quieter Flagship Experience" 265K slide), was an Intel mistake, typo, or if perhaps they are being intentionally misleading. Intel's slide included a link for more details, but that URL doesn't lead to ANY 265K specific gaming performance, temperature, or power draw information. It did, however, include such information for the 285K. If it wasn't a mistake by Intel they've, so far, failed to publish the relevant details. Meaning that your testing will be all the more relevant. If Intel doesn't provide those details for the 265K, and leaves us with only their 285K information, I hope your testing is done with both the 265K and 285K and compares your findings with what they have shared.
The naming could have been really clean - Ultra 9 290K, Ultra 7 270K, Ultra 5 250K. But they decided "NO! Take 5 off all those!". Maybe that's where the 5% lower performance comes from.
Some made fun of "Zen 5%", but at least it was a gain of "5" and not a loss of "5". That being said, this new architecture from Intel is all but surely an improvement for them going forward due to the better power efficiency (albeit still not at Zen levels) and hopefully temps, as well as possibly solving the degradation issues that were the inevitable result of jacking their 13th/14th-gen voltage and power consumption up to such obscene levels to squeeze out minuscule gains.
Intel is taking memos from the Sony Xperia naming scheme: confuse the customer so they have no idea which generation of what they're looking at, then give up and go to a different brand
"Oh yeah I love my sony xperia 1 VI, I just upgraded from my Sony Xperia 10 IV. I also like listening to music with my Sony WH-1000XM4, but they are kinda bulky, so I will wait for the WF-1000XM5"
Nvidia is incapable of increasing performance without a significant node jump, so they had to do an old-school Intel and juice the 5000 series to the max. This shows they are not as capable as people think. They only look good because they are compared to the joke that is AMD Radeon.
@@Ignisan_66 no, you're changing the definition of AI. We've had AI since like the 80s. AI is just any algorithm that mimics decision making; things like minimaxing have been around for like 40 years. Marketing conflated AI and ML, so now to the general public they are interchangeable, but they aren't. ML is a form of AI, but AI is not limited to ML. Any decision tree is AI.
New PC specs are bad, new AAA games are bad (as always), is there any reason at all to upgrade even from older hardware? I am still running an i7 7700k with GTX 1080 ti and have not yet seen problems with the games I play at least, no matter how heavy they are.
Lmao, everything sucks. PS5 Pro, AMD leaving the high end, Nvidia going after $$$, and Intel bringing lower performance. But tbh, Intel looks like the most modest one. Half the energy for the same performance is actually good.
@@aleksandarlazarov9182 I agree. I have a 4080 paired with a 13600k, but my 'old' system was a 6700xt with an i7 9700k, and I could still play everything on my 'old' system.
@@f.9344 Zen 5 was that, same perf for half the power, and people called it dumb... so I don't know, it seems that the average consumer wants power-hungry chips for an additional 10%... it's stupid
You know you're getting old when you're more interested in power efficiency than raw performance. If the power claims are true then that means lower power bills, quieter PC's and a whole new range of CPU's for the SFF space.
Yeah. Comments like "GaMeRs DoN'T CaRe AbOuT eFfIcIEnCy" evoke the image of someone who still lives in Momma's basement and who doesn't pay the utility bill.
Computers don't use that much electricity, just a couple of lightbulbs' worth. And if you're using sleep or turning it off when not in use, it's even less. The real concern is cooling, both the package and the VRM on the motherboard.
Ahh yes, let marketing fl0gs decide naming so they turn it into something OTHER than what naming is meant for (clear and simple identification, and communication of that identification). Marketing: obfuscation and deception since its inception.
Those are things nobody understands. I make fun of my mom's neurologist all the time. They have no cure or way of slowing down any neurological disorder, from bipolar disorder to dementia. They've got zip..😅
@@GamersNexus Intel is doing the naming, correct? So we get the 285 in 2024. In 2026 we get the 287H. H because 2026 will be the year of the Horse of course.
Genuinely shocked they are so dumb they didn't drop the two-decade-old Core branding. If they wanted to con people into thinking it was something new, they should have just called it the 'Ultra'.
I have my 14700KF undervolted and loadline-adjusted to keep max stock clocks in Cinebench at a max of 247W. Roughly 35600 multi and 2170-ish single. Temps at max hover around 75-80 on P-cores and 65-70 Celsius on E-cores. I cannot undervolt any further, since booting becomes unstable. VID values hover around 1.22-1.27v on average. In gaming, power draw is roughly 50-90W depending on the title, with temps of 45-55 degrees Celsius. Arctic III 420mm AIO. I tend to play Star Wars Outlaws, World of Warcraft, CS2, Overwatch 2, and Cyberpunk from time to time. Paired with a 4070 Ti Super, playing at 1440p with graphics maxed out.
I have undervolted my new 14700K, as I don't need its bleeding-edge performance at the moment. It's only lost a small amount of performance from the out-of-box scores, and uses much less wattage (around 57 watts for light use, around 100 for higher loads, iirc?). I game at 4K and the CPU stays under 45C at 30% use (most games), and at a 100% burn it's steady at 66C… On Cinebench it's cleanly faster than previous-gen AMD chips, and of course, Apple Ultra M1/M2 :P This is much cooler than my previous 8700K, and seems pretty good compared to what AMD would claim for operating temps, so I don't really see any improvement or benefit with the new Intel Ultra CPUs. Maybe the next version of Arrow Lake will be meaningful, but this first one seems a take-it-or-leave-it choice. I'm impressed with how well my i7 14700K works. It seems to "cruise" through tasks smoothly. And it's not melting my rig 😂
@@willwunsche6940 if your bacteria are 1000x smaller than you, then they are the size of large cockroaches that you apparently are filled to the brim with
Rebrand: it's like moving to a new house for them, to get away from any bad criticism of the Intel Core iX; it just takes time to understand it. They started with 200 for the debut instead of 100 because Meteor Lake wasn't as exciting as they hoped. So there are reasons for why Intel does everything the way Intel does. Hopefully they make better decisions moving forward, less confusing ones.
Lol, so if you are a gamer, you are paying a lot more for the new Intel CPUs while getting less performance? What a joke. The 9800X3D is the only way to go. I couldn't care less about productivity benchmarks.
Meanwhile the X925 core in the Dimensity 9400 has better single-core integer/float perf than the HX 370. 😅 Maybe Qualcomm and MediaTek will make exciting Arm chips for desktop in the future.
Considering Raptor Lake Refresh was manufactured in Intel 10 nm and Arrow Lake in TSMC 3 nm while performing this close to each other (power consumption not included) something must have gone pretty wrong while designing the microarchitecture. Makes you wonder if it would have been better to port/adapt Raptor Lake to TSMC 3 nm…
Do you know your history? Intel introduced that process as Intel 10 nm and when they got stuck on it they renamed it to “Intel 7” to appear more competitive on paper.
@@abavariannormiepleb9470 Or (which is more likely) it is a quick redesign of a chip that was made on 10nm to use the new process, without actually utilising many of the positive aspects of the new architecture. I think it was just rushed out, and they will release actual next-gen chips (15th gen) later. After all, there should be a reason to change their naming scheme.
I noticed the ad policies url banner during the sponsor spot and for whatever reason, decided to read through - I really appreciate that it's in plain English and not lawyer speak, and the clear boundaries! Great use of the exploding PSU box pic too.
Why didn't they start with Ultra 9 190KF, Ultra 7 170K, Ultra 5 150K and Ultra 3 130K? Instead we get these confusing numbers for us to explain to customers.
@@GamersNexus Well, if you test by searching for the most hardcore overclocker mobo that pushes everything to 11 by default then yeah, there will probably be next to no difference, up to the new CPU even using more power if the process is more mature and can stand more power before melting.
Gaming benchmarks vs 9950X Productivity benchmarks vs 7950X3D That's how you know that Intel has nothing worthwhile to show edit: it was the 7950X3D not 7800X3D, my bad. The point remains.
Plus there's no comparison of efficiency against any AMD products. And the price isn't good either. I mean, they improved a little, finally making chiplets instead of one big die, but that's all 😢
I just got a 5700X3D which I will probably be using until the end of the decade. If you prioritize framerate (as most modern gamers do), the 5700X3D performs as well as a 14900K and, by intel's own admission here, performs about the same as the Core Ultra 285K. Since I don't make a lot of Zoom calls, one of these new intel CPUs would be a waste of money for me😝
I'm ditching intel. I had a 12600k intending to upgrade to a later gen on the same socket later, but intel ruined my plans. I'm so done with them I'm planning to go with a ryzen 7 7700 build and sell my intel pc.
@@Veyrah64 Ok, it never really affected me since CPU improvement really slowed down from 2011 onward and motherboards died on me, but I do agree it sucks. So you mean you are unhappy that the socket changed, and you'd get the exact CPU equivalent from AMD for what reason exactly? I would at least change when there is an actual improvement. When it is time to upgrade, I will take whatever is the best bang for the buck at the level of performance needed; both companies weren't always great.
@@GamersNexus sticking up. A part is the whole chopsticks sticking up, incense for worship. Also a tripping hazard, probably from a final destination movie
@@cl4ster17 I honestly don't really care about Intel's naming scheme here. If you are an enthusiast, you should know that naming tiers don't really matter when it comes to performance. The 14900K was basically equivalent to the 13900K. The Zen 5 CPUs are barely better than previous gen. Ryzen 8000 laptops are mostly just 7x4x laptops with an NPU. All these names are just useless in a vacuum. You need actual benchmarks to know how good a CPU is. And you know what tech-illiterate people frequently use? They say they have "an i5 CPU", only for me to find out they have some old ass 3rd gen CPU. The only problem I have with this new naming scheme is the same problem I have with current CPUs - if you search for 12600, for example, it will find all the 12600Ks long before it finds the locked CPUs. I wish Intel used numbers for the iGPU and lock specifiers instead of suffixes. But I guess then the names would not be as pretty...
One thing I liked about the 12000, 13000, 14000, etc, is that it made it easy to see what the generation was at a glance compared to previous. Once I figured this out, it made shopping and placing different models in an arrangement easier. I could also pick out a SKUs release year by simply looking at the CPU naming convention. Changing this is dumb, but whatever, it already happened with 14th gen.
I cannot stress enough how GN has kept me sane these last years. Please, everyone at GN, stay healthy whilst doing these videos. It shouldn't be like this, but I depend on this quality and down-to-earth level being real to keep me going against all the BS going around, INCREASINGLY! Cheers, everyone.
Could have named them 243, 245, 247 and 249, first two digits for the year of release and the last digit for the performance tier, but no, let's confuse the consumers more...
Don't want a product name to start with a "2" forever but it's a good idea overall, let's just use incrementing numerals instead and the tier indicator at the end. Well, for additional SKUs let's add two digits at the end. And a more marketable overall class indicator, like 3, 5, 7, 9 at the beginning as a prefix, separated with a hyphen. Then, a letter or two to signify the category as a suffix, like "H" or "KF". And maybe instead of "Ultra" let's use something shorter, something that is reminiscent of the brand itself, so maybe the letter "i"? How does Core i7-2650K sound? I think - pretty good.
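The old scheme really was machine-parseable, which is part of the joke above. As a toy illustration (the regex and field names here are my own assumption, not anything Intel documents), a few lines of Python can split an old-style name into its parts:

```python
import re

# Toy parser for the old "Core iX-NNNN(S)" naming scheme.
# Assumes: 1-2 digit generation, 3-digit SKU, optional 0-2 letter suffix.
NAME_RE = re.compile(r"Core i([3579])-(\d{1,2})(\d{3})([A-Z]{0,2})")

def parse_old_name(name: str):
    m = NAME_RE.fullmatch(name)
    if not m:
        return None
    tier, gen, sku, suffix = m.groups()
    return {"tier": f"i{tier}", "generation": int(gen),
            "sku": int(sku), "suffix": suffix or None}

print(parse_old_name("Core i7-2650K"))
# {'tier': 'i7', 'generation': 2, 'sku': 650, 'suffix': 'K'}
```

The point being: generation and tier fell out of the name for free, which is exactly what the new scheme throws away.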
Ironically enough, that's roughly (accidentally) how Intel CPUs have been, just shifted down by 10; every Intel CPU for the past 14 years has roughly lined up with the year.
How about the utter hilarity of the 'V' following the processor number on the laptop side of things? Because calling a chip "285V" is *really* that good an idea when you're known for being power hungry...
The tricky part is that it doesn't take much space. It has AI in the name but is literally the dumbest processing part of the CPU; it doesn't take much space for the huge numbers it puts up.
@@s-x5373 If you look at Intel’s materials, it is a very significant amount of die area dedicated to their NPU. This makes sense because you don’t get tens of trillions of multiply-add operations per second without some sacrifices. Any amount of space is too much for my preference. Any AI models I run will be on a GPU, or non-local. More importantly, Intel needs to focus their efforts on making their core product better if they want to stay competitive.
CPU ≠ only gaming. Many people use these for office work and in laptops without a GPU; the NPU will be helpful for AI assistance, and most applications now have built-in AI features.
I was honestly looking forward to this announcement because I want the NPU for running a self hosted AI voice assistant in my home server. That said the NPU in these is kinda disappointing (the one in the Lunar Lake laptop chips is much better) and overall I’m not as excited as I expected to be.
Why would Intel delete its strongest selling point from the name? People outside of the PC enthusiast space think that "i7" is the equivalent of a good CPU. The new naming scheme is more complicated AND deletes the recognition for previous buyers. So now people who just bought Intel because they always had Intel need to do some research, and will probably consider AMD too.
So the flagship is not going to see much "gains" (probably loses even), but at lower power. Me using a manually tuned 12900K for years: Puts wallet away.
@@MaryannLynch-z9c I was definitely open to the idea of 9000X3D, but the non-X3D has my expectations way down now. We will see, I guess; all I know is it's not going to be an Intel purchase this gen.
Yeah, I was hoping for a nice normal speed boost this gen to finally replace my 9600K, since at this point it would have been worth it to me. But with both teams doing the exact same strategy for some bizarre reason, almost as if colluding, giving us nothing faster at all this gen, just efficiency, I'll pass. I just got a 7900XT from the Amazon Prime sale to finally touch 4K on my LG C3, so I guess I'm not upgrading my CPU until next gen, unless the 9800X3D surprises us all, which I doubt.
Intel optimizing for power... hell just froze over. Then again, considering how they wasted it until now, just applying some sanity leads to great improvements. Good to see that change!
what I'm most interested in seeing with arrow lake is how well they perform when you limit PL1+PL2 to lower values like 85w, 65w, and 45w. pretty sure it'll be fine for gaming at all of those, so it's low power productivity testing that'll be really interesting. even if it still loses to ryzen at low power, as long as it's close enough, the (presumed) massive _idle_ power savings will push it over the edge for me
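On Linux, dropping PL1/PL2 for a test like this doesn't even need a BIOS trip: the kernel's RAPL powercap interface exposes the limits in microwatts under sysfs. A minimal sketch (root required; the `intel-rapl:0` package-domain index is an assumption and can differ per system):

```python
from pathlib import Path

# Package power domain; the ":0" index is an assumption, check your system.
RAPL = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")

def watts_to_uw(watts: float) -> int:
    # The powercap framework expresses limits in microwatts.
    return int(watts * 1_000_000)

def set_power_limits(pl1_watts: float, pl2_watts: float) -> None:
    # constraint_0 is the long-term limit (PL1), constraint_1 the short-term (PL2).
    (RAPL / "constraint_0_power_limit_uw").write_text(str(watts_to_uw(pl1_watts)))
    (RAPL / "constraint_1_power_limit_uw").write_text(str(watts_to_uw(pl2_watts)))

# e.g. set_power_limits(65, 85) to test a 65 W scenario (needs root)
```

Handy for scripting a whole 85 W / 65 W / 45 W benchmark sweep without rebooting between runs.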
Imagine if they never stopped using the Pentium naming? We'd be at the Pentium 24 or something by now. It would be so much easier for people to understand... Oh I have a Pentium 13? My computer is old!
The 12900k had hyperthreading...to get the same-ish level of performance with the same number of cores but without HTT is a pretty good improvement, not massive but still.
@@bracusforge7964 In contrast to AMD CPUs that actually did fry themselves, resulting in exploding CPUs that Steve did cover, there has not been one single report of a fried Intel CPU; degraded yes, but that's not fried.
@@terrylaze6247 Intel's issue is much bigger. AMD's issue wasn't due to a defect by AMD itself, but the motherboard manufacturers pushing way too much voltage. People say this is the reason Intel is dying, but it's not the only reason. Intel had contamination issues in their fabs, and there is also a fatal flaw with their ring bus. Not even close, since only 20 AMD CPUs were affected while hundreds of thousands of 13th and 14th gen chips have had to be RMA'd.
These look very promising, I hope Intel can deliver. They're absolutely setting up for a special edition Ultra 9 386K, 2025 is 40 years since the 386 launched!
TL;DW just wait for the 9800X3D if you're a gamer. :P Edit: to be clear I still enjoyed learning about Arrow Lake and GN always does a good job balancing thoroughness and being concise. However since gaming is my primary use case it is hard to not be disappointed to see Intel advertising only performance parity with their previous generation especially when the 7800X3D was already faster than 14th gen at a lower power draw.
@@zee-fr5kw Depends on what you're currently using. It's probably not worth it for AM4 and 12th-14th gen owners to upgrade to a new platform. But if you're on something older and are primarily gaming then the performance gains might be worth it.
@@zee-fr5kw It depends on what games a given person is playing. There are CPU-heavy games where these older CPUs will struggle. Plus if you want to play at a high refresh rate the CPU is often what ends up being the bottleneck.
That ILM distribution contour chart looks exactly like my 12900KF did before the contact frames started releasing everywhere. My IHS had an hourglass shape for thermal contact, throttled at ~150W before I swapped over, then I could run it at 318W without throttling on a loop after swapping.
Roman aka der8auer likened the new clamp to taking an aspirin every day for a headache rather than going to a doctor to find out what is causing the headache.
Why is it that just as you're getting comfortable with one naming scheme companies feel the need to jump over to another? Sure, the i-naming convention wasn't perfect by a longshot, but at least it was both informative and intuitive once you figured it out.
@GamersNexus You must have significant issues if it takes a decade to figure out what a name means. Or maybe you just like to complain about trivial things and have to find something, because glass-half-empty comments get subscribers.
@@Tugela60 Deliberately confusing consumers is not trivial, it speaks to a marketing strategy that defrauds the average consumer who isn't perfectly informed. To be clear, it's not just Intel that's guilty of it, it's been a problem in the PC space since its foundation; it's equally petty when AMD decided to call its chipsets B350 after Intel's B250 motherboards. An uninformed consumer would get their new Intel processor, know it needs a new motherboard, and know their last one was a B250, then go to the store and buy a B350 motherboard, not realizing that it's for the wrong CPU. It's nothing but a waste of time, money, and is a disservice to the people these companies allegedly try to serve. Also if you really dislike Steve and the GN team's commentary, stop giving them views and boosting their algorithm lol.
Regarding NPUs: like other accelerated math functions, there is potential for some software to get a significant boost by leveraging them, like math co-processors, embedded encryption acceleration, and GPUs have done. iGPUs are a good example, embedding functions like decoding/encoding, etc. This will of course require applications to use these cores, but provided there are approximately standard interfaces, professional software and games both leveraging accelerated algebraic functions in parallel just makes sense. For basic examples: CPU offload for interactive physics that can't be entirely passed to the GPU, or convolution functions that can be offloaded from the CPU prior to shader processing on the GPU.
Shhhhh, AI = bad. The hivemind has decided as such. There is no AI workload that Joe, in a previous post, personally runs today. That means the NPU is useless for everyone for the rest of time.
Yeah, forget what it's named and what they're advertising it as, hardware matrix math sounds interesting. Tons of gaming stuff is just matrix multiplication, for example. Having dedicated hardware to offload that onto could be nice. ("What? Run the game's AI in the AI processor? No that doesn't make any sense, it wouldn't work at all! We need it to do VECTOR MATH!")
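For what it's worth, the "tons of gaming stuff is just matrix multiplication" point is easy to demonstrate: batching a whole set of object positions through one transform. Plain NumPy here, just to show the math; nothing NPU-specific:

```python
import numpy as np

# Rotate a batch of 2D points 90 degrees counterclockwise with a single matmul.
theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

points = np.array([[1.0, 0.0],   # -> ( 0, 1)
                   [0.0, 1.0],   # -> (-1, 0)
                   [2.0, 3.0]])  # -> (-3, 2)

rotated = points @ rot.T  # one call transforms every point at once
print(np.round(rotated, 6))
```

Dedicated matrix hardware eats exactly this kind of batched workload, regardless of whether the workload has anything to do with "AI".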
Changing naming style is a great way to hide the fact that they pushed older CPUs to the point of failure :D Strangely they still are keeping a high Turbo power figure of 250W. Some companies never learn :/
They stated, three different times, that they increased prediction windows, that disabling hyper-threading increased efficiency, and that they decreased max turbo boost. It's basically a 14000-series CPU with hyper-threading disabled and fixed turbo boost behavior.
First it was AMD making fun of intel for gluing CPUs together. Then it was intel making fun of AMD for gluing CPUs together and now we're back to AMD making fun of intel for gluing CPUs together. No, wait, snapping them together like a bunch of Legos.
With the loss of hyperthreading, I really just wish we had all P-cores, with 10-12 cores. Then they could actually make the Xeon more mainstream and bring the E-cores into the Xeon CPUs for creatives needing the parallel compute power. 8 P-cores just seems too low nowadays.
If I recall correctly, the die area for 1 P core is supposed to be the die area for 4 E cores, so they really could have done 12 P cores in the same die area, but then they would not have anyway to compete with AMD on core count. That said, 12 P cores would have been making what people want, but this is Intel. They make things and then expect people to buy them anyway.
@@richardyao9012 Not only that, but Intel couldn't get those high Cinebench numbers with just 12 P-cores. The very benchmark Intel called "fake performance numbers" that the public should ignore, until they added those E-cores.
Gaming doesn't significantly benefit from more than 8 cores, or realistically 6. All you e-core haters are asking for is for Intel to make the CPU slower in Actually Multithreaded Code for no reason.
@@Vegemeister1 That's wrong. Games like Battlefield, PUBG, and many other new multicore titles like many more cores. The problem is you can't test that, because current CPUs don't have 12 P-cores on one ring bus with enough RAM bandwidth.
I'm now convinced that all these companies sit down together and collectively decide to name their products by smacking each other's heads on their keyboards and pick whatever weird amalgamation of characters shows up.
I am actually kind of impressed. They ditched Hyperthreading and still kept the same performance. Sure, it's not really much of an upgrade right now, but I fully expect the next couple of generations to be big upgrades. I bet Hyperthreading comes back once they figure out how to make it stable on this architecture. They also have a lot of power headroom. If they can make it scale with that power there is huge performance improvements there.
It's not the same performance though; in Battlefield, PUBG, and other optimised multicore games it'll regress by up to 20%. They never should have removed HT without upping the number of P-cores. Why can't we have anything nice anymore?
@@impuls60 I completely understand what you are saying and wouldn't fault anyone for skipping this generation. I am on a 12700k and won't be upgrading for at least another couple years. I am just impressed that they can cut such a huge feature while not losing performance on average. Every major architectural change will have applications that do not scale well. I am mainly just excited for the future. This whole presentation looked to me like they were basically saying that this is a foundation, and there will be tons of headroom to build on. We are at a point in transistor history where getting more performance is becoming extremely difficult. Engineers are working with transistors that are barely bigger than atoms, with tens of billions on a single chip. Although it is not fun for the consumer, we have to be honest with ourselves: the days of getting big performance uplifts from one generation to another are gone. I am happy Intel is trying something new and not just sticking with their previous model of pushing a chip to its absolute max and then doing that again for the next gen and calling it something new. We also have all of the performance most people really need. Nobody really needs to run games at over 200 FPS, we just want to see the bigger number. And if this new generation isn't for you, then the 14000 series will still be there and the prices should become cheaper.
But wait, they've undervolted the crap out of the old CPUs, will there even be an overclocking scene at this point? Overclocking has pretty much just been undervolting to keep boosting longer, but now even that is done at the factory.
Yay! You're gonna be the wizened greybeard of YT hardware... That means I'm going to grey soon, because I'm sure we're almost the same age...... Steve, you got keys, is that why you're showing the wisdom wizard hair? Keep it coming man! Don't get burned out; you and your team are a gem!
The amount of PCIe lanes itself is incredible. We're finally done with this ridiculous 24 PCIe lane cap. I wonder if Intel will run HEDT this generation too, considering their 4677/7592 are upwards of 60 lanes. These have the same amount of lanes as Broadwell-E. No longer do we need to run 10-year-old hardware to run NVMe arrays, or pay $5,000-plus for server-grade Threadrippers, Epycs, & Xeons (barely affordable 1st/2nd gen).
If tech companies could just get rid of their damn artificial limitations we would actually see innovation. Intel actually deserves credit here for doing this.
The two things I want to happen atm: I want 40-60 PCIe lanes per CPU. Everything needs PCIe now, and having to pick and choose what will run and what won't run on a board is stupid and confusing. Everything on that motherboard should work: x16 at x16, x4 at x4, and all SATA ports should work. The second thing is that they make a new form factor, taller than ATX, where when you plug in a GPU it doesn't cover a PCIe slot. If you have 2x GPUs, you've now taken up 4 of your 4-6 PCIe slots that you probably needed.
I expect it to be BS. It's gonna be an x16 slot and maybe 1 or 2 NVMe drives straight from the CPU. The rest is gonna be eaten by USB, and we will rely on the chipset for more expansion.
I just went to the conclusion, saw that Steve's spirit is more crushed than before and got all the info I needed from this video. Hang in there, Steve. It'll get better. Maybe.
8:20 they have the 9950X power usage up at 250W in that graph; I thought its maximum draw was 200W in Cinebench 24? The graph is probably also misleading, as the 14900K goes up to 330W to compete; somehow I don't think the "relative" gap there can be that large with an up to 15% perf-per-watt improvement.
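A quick sanity check on that intuition, using the 330 W figure from this comment and assuming the ~15% perf-per-watt claim applies uniformly at that operating point (which it may not; efficiency curves are nonlinear):

```python
# If ~330 W buys a given score, how much power would a part with 15%
# better perf-per-watt need to hit the SAME score?
old_power_w = 330.0
perf_per_watt_gain = 0.15  # figure from Intel's claim, per the comment

same_perf_power_w = old_power_w / (1 + perf_per_watt_gain)
print(round(same_perf_power_w))  # ~287 W, nowhere near the gap the graph implies
```

So a uniform 15% efficiency gain alone would only shave ~43 W off 330 W, which is why the relative bar chart looks suspicious.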
Based on the details Intel gives, honestly Arrow Lake still can't touch the 7800X3D in gaming, whether on power draw or performance. I think Intel should market it as a very good CPU for work rather than gaming.
Finally a sensible comment besides the sea of people complaining about a name. I see ARL as the CPU for content creators and people who do not care to have the maximum gaming performance. AMDs new X3D might be more compelling for gaming at top tier performance but I still think ARL with its single thread performance will be better in content creation.
Aye, and the market for all-rounders, aka working and sometimes gaming, is the overwhelming market. That surely beats AMD. Why do you think AMD released the power-efficient Ryzen 9000 with no AI? For efficiency, but it hurts when meeting Intel with AI. That means Intel can do AI while saving more energy, while AMD relies on an external dedicated GPU. And in terms of security, that means AMD most likely doesn't activate Microsoft Copilot, unlike Intel.
The only specific thing I would like benchmarked is the iGPU hardware encoder speed: how fast it can run AV1 at 4K 10-bit, and the highest resolution it can handle at 60fps, for example.
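For anyone wanting to eyeball this themselves before reviews land, the pass/fail math is simple. The actual encode could be driven by something like ffmpeg's Quick Sync AV1 encoder; that part is an assumption about tooling, and the numbers below are made up. This snippet only shows the throughput calculation:

```python
# Given frames encoded and elapsed wall time, report fps and whether the
# encoder keeps up with a 60 fps real-time target.
def encode_speed(frames: int, seconds: float, target_fps: float = 60.0):
    fps = frames / seconds
    return fps, fps >= target_fps

# Hypothetical run: one minute of 4K60 footage (3600 frames) encoded in 45 s.
fps, realtime = encode_speed(frames=3600, seconds=45.0)
print(f"{fps:.1f} fps, real-time at 60 fps: {realtime}")
```

Anything at or above 1.0x the source frame rate means the iGPU can handle that resolution for live capture, not just offline transcodes.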
As someone with home, media and game servers and gaming needs in the same machine (space and connection based restrictions), idle power draw and temps from both default and more optimized settings would be very interesting to see comparisons around. Comparing power draw when gaming is imho not the most important metric since most consumers use their machine at that load
Technically, the 7800X3D trades overall performance for gaming performance. The 285K looks to be about 2-2.5x faster in all-core full load, even at 125W. Plus the 7800X3D is a previous-gen CPU; I think they are just butthurt over it and feel it's unfair, since they can't beat it in gaming.
Yes, Steve. If possible, I would like to see if the 285K will be able to run 4x24 at 6400MT/s. If so, I would also like to see how much performance loss or gain there is by going 2x48 in the 8000MT/s area vs 4x24 at the maximum JEDEC-rated speed. I remember a few years ago in one of your videos you were saying that you can gain up to 10% performance when running your system in a 4x16 configuration.
people thought AMD's latest was a flop... this is the most underwhelming release by Intel in a long time. I get they are trying to lay a foundation for the next gen, but between the frying-of-their-chips fiasco and this lame release, Intel is making AMD look good.
I think some of the biggest improvements this generation are actually in the platform. Last gen only had 20 PCIE lanes with this gen now supporting up to 48? That is a massive increase and something you would normally find on like a workstation platform. I will seriously be looking into this chip for upgrading my home server.
Looks like they focused on everything EXCEPT performance. Which I actually like. Could have had one or two slides dedicated to a renewed focus on stability, and not just in their chips, but all of their drivers. I think though it's kinda clear they're focusing on that in the background. Or at least, they certainly better be...
CPU will have 24 lanes: 16 + 4 (M.2) PCIe 5.0 like the previous gen, plus 4 PCIe 4.0. The chipset will have 16 + 8 (M.2) PCIe 4.0. Definitely better, but I'd still like more (-8
@@arnox4554 There is nothing wrong with other areas getting the focus, but a complete lack of performance increase just makes the chip boring to most. I am glad they didn't keep throwing stupid amounts of power at their chips, but not budging the performance needle at all is going to equal no buy.
@@utubby3730 Eh, I am going to disagree with this one. Most people don't upgrade their computers every generation. It is more like every 5-10 years depending on your preference. If I were looking at this or a 14900K right now to upgrade from a 9700K, I would definitely choose this or an AMD X3D chip. If I was coming from a 2700X, I would actually just see if I could update my BIOS and throw in a 5800X3D. My point, though, is that they don't have to deliver 10-20% better performance year over year. They just have to be better than the last gen, and everyone but the weird people with nothing better to do with their money will be happy with it.
@@utubby3730 Even in the performance department though, they were HEAVILY hinting that these chips would be extremely overclockable, so if performance is really what you want, you should definitely be able to push the envelope with this architecture. Or so Intel says anyway.
I would hope they have 2 targets with this: 1) get enough power headroom for chips with more P-cores, 2) laptops. I am hoping for the pro laptop market to be updated so regular work on a gaming laptop can push the battery to a solid 7-8 hour mark.
Seeing Intel 'humble' themselves, actually work on an unconventional architectural change (e.g. dropping Hyper-Threading), and release chips that are somewhat competitively priced is definitely very good for us consumers. But the real challenge for Intel, in my opinion, is keeping the same 'type' of motherboard and socket for more than 3 generations; that would make them a significant contender to AMD's 'forever-lasting' motherboards.
So let me get this straight... Intel's new CPUs are going to require a new socket (aka new motherboard), to get at best on par performance, and in some cases less (probably almost always less as well since these are Intel's numbers), just to gain power efficiency (a low bar for Intel). Power usage still being higher than AMD at full usage. AMD's Zen5 was disappointing to people because they only gained some efficiency and performance uplift was fairly small. However, at least you don't need a whole new motherboard, it's still AM5. Was really actually hoping Intel would put out something good again. With their melting chips, higher cost, higher power usage, lower performance CPUs, AMD has been the only logical choice. They needed a win here... That's not because I'm rooting for either company, but because competition keeps prices low (look at the GPU market for example).
That's not the point. ARL is basically a whole architectural redesign, so of course the physical layout of the pins has changed. And to be honest, same performance (in gaming); I'm sure these new chips will beat 14th gen in content apps. But same performance at almost half the power is a W in my book. Plus these chips might surprise us in gaming once people start messing with overclocking, since there is now a good amount of thermal headroom, and RAM speeds up to 9000+ should give a big boost in performance. Why are people so quick to judge without proper testing?
@@thetheoryguy5544 We don't know what the headroom for overclocking is yet. Intel says the E-cores could go higher, but nobody knows about the P-cores. Overall performance is ok, if you are not on 13/14th gen, but something older. It's basically a fixed version of those CPUs. If you are on 13th/14th gen or the AMD equivalent, there's no point in paying full MSRP to gain no performance, just efficiency.
@@LinusBerglund yeah, probably.. So the memory controller in the CPU understands ECC, but you're gonna need an 'enterprise grade' chipset to enable it.. I hate them.
These benchmarks will be interesting. Nothing in the past 10 years has been worth upgrading to for my use case for varying reasons, but I like where Intel is going with all these low level architecture changes. Seems like a really futuristic design.
Correction: The 5% geomean loss comparison was against the 265K, not 285K, while the "on par" statement remains accurate against the 285K. The first part has been corrected with the trim tool. This correction comment will remain in place to transparently document the change, although the error will no longer be present. The on-screen correction card has been added as well.
If you missed it, check out our deep-dive on the 12VHPWR & 12V-2x6 spec problems here: ua-cam.com/video/Y36LMS5y34A/v-deo.html
Watch our Intel CPU fab tour here: ua-cam.com/video/IUIh0fOUcrQ/v-deo.html
Working on testing all the CPUs again right now!
I'm wondering if they really need to look at moving from 12v to 24v to start handling this power need increase.
Do you think that would make a difference in the long-term?
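The intuition behind the 12 V vs. 24 V question can be sketched with simple arithmetic: at the same power, doubling the voltage halves the current, and resistive loss (I²R) drops by 4x. The 600 W load and 5 mΩ path resistance below are purely illustrative assumptions:

```python
# Why higher rail voltage helps with high power delivery:
# same power at double the voltage means half the current,
# and resistive loss scales with the square of the current.
# The 600 W / 5 mOhm figures are hypothetical, for illustration only.

def current_amps(power_w: float, voltage_v: float) -> float:
    return power_w / voltage_v

def resistive_loss_w(current_a: float, resistance_ohm: float) -> float:
    return current_a ** 2 * resistance_ohm

i_12 = current_amps(600, 12)              # 50 A
i_24 = current_amps(600, 24)              # 25 A
loss_12 = resistive_loss_w(i_12, 0.005)   # 12.5 W lost in the path
loss_24 = resistive_loss_w(i_24, 0.005)   # 3.125 W, a 4x reduction
print(i_12, i_24, loss_12, loss_24)
```

So in the long term, a higher-voltage rail would mostly buy thinner cables and cooler connectors for the same delivered power, which is exactly the pain point the 12VHPWR saga exposed.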
It's the 2nd gen of Core Ultra series, because 1st gen is mobile. Derp.
Another correction... The term "Glue" in chip manufacturing/packaging is over 45 years old AFAIK, and has always been (still is!) used to talk about chiplet communication traces routed through the organic substrate (the glass-fiber package baseplate). I still hear it all the time. The main downside of this is degradation of signal integrity and HUGE power losses for high-frequency lines. We're talking about 3-5 THOUSAND percent higher power losses than if you can route something directly on the chip.
Hence the huge problems with information sharing, cache sharing and thread relocation on AMD's chiplet designs, which all use what is very correctly termed "Glue" to interconnect them.
Using a silicon interposer instead of routing all the way down through the organic baseplate can decrease power losses by 90+ percent, while increasing the stable line frequency by 50% and line packaging density by several hundred percent.
@@AJBtheSuede That's not a correction. We are referencing an old Intel diss on AMD where they used the term in a derogatory way. It was a joke about Intel's bad takes.
I wonder if the slide you removed from the video (Intel's "A Cooler and Quieter Flagship Experience" 265K slide), was an Intel mistake, typo, or if perhaps they are being intentionally misleading.
Intel's slide included a link for more details, but that URL doesn't lead to ANY 265K specific gaming performance, temperature, or power draw information. It did, however, include such information for the 285K. If it wasn't a mistake by Intel they've, so far, failed to publish the relevant details. Meaning that your testing will be all the more relevant.
If Intel doesn't provide those details for the 265K, and leaves us with only their 285K information, I hope your testing is done with both the 265K and 285K and compares your findings with what they have shared.
The naming could have been really clean: Ultra 9 290K, Ultra 7 270K, Ultra 5 250K. But they decided "NO! Take 5 off all those!". Maybe that's where the 5% lower performance comes from.
they did something similarly confusing with Pentium D
Some made fun of "Zen 5%", but at least it was a gain of "5" and not a loss of "5". That being said, this new architecture from Intel is all but surely an improvement for them going forward, due to the better power efficiency (albeit still not at Zen levels) and hopefully temps, as well as possibly solving the degradation issues that were the inevitable result of jacking up their 13th/14th-gen voltage and power consumption to such obscene levels to squeeze out minuscule gains.
@@juniorjunior8494 i missed when AMD copied intel and released AMD 387 :(
@@juniorjunior8494 why did you answer it like you work for Intel?
You're lucky they didn't use decimals 😅
I'm still gonna call it the 15900K
This is the kind of protest the world needs more of: Pure pettiness!
As someone who was military and is now a truck driver, I love the pettiness and I'm behind it 100%
It is a 15900K that technically ties at best 14900K. We should call it a 15900K and be done with the naming. ;)
Might be in the market for a 15700K myself, although I like the shorter 265K naming. I just wish they kept the i5/i7/9.
You're not the only one, brother😅
Intel is taking memos from the Sony Xperia naming scheme: confuse the customer so they have no idea which generation of what they're looking at, then give up and go to a different brand
stop giving sony ideas they may up their confusion game
As a Sony fan 😂😂😂
"Oh yeah I love my sony xperia 1 VI, I just upgraded from my Sony Xperia 10 IV. I also like listening to music with my Sony WH-1000XM4, but they are kinda bulky, so I will wait for the WF-1000XM5"
Now all they need to do is announce and release their confusingly named CPUs in 6 month intervals and clog their product stack!
@@Lollllllz What do you mean? Sony is the biggest offender in this shithole naming game.
The S stands for Steve. Thanks again.
You and me
Me and you
And your friend Steve!
Tu tu dududu
Steve!
the S stands for "scam". As in you pay to get no performance improvement over last gen.
steve's hair really went and doped those chips?
no, it stands for Slaves
Intel: "We cut the power consumption by half. BY HALF!"
Nvidia: "We tripled the power consumption of our next graphics card line."
I don't trust anyone that repeats themselves two times three times in a row
Nvidia is incapable of increasing performance without a significant node jump, so they had to do an old-school Intel and juice the 5000 series to the max. This shows they are not as capable as people think. They only look good because they are compared to the joke that is AMD Radeon.
@@ytctuser Radeon was so close to killing GeForce with the 9000 series vs the FX/5000 from nVidia. Never happened again...
I'll believe the 600W TDP of the 5090 when I see it - if that's what you're talking about.. :)
450w to 600w....hmm, I'd say that's about triple.
-"How much has it improved in gaming?"
-"AI"
And it's not even AI, it's just machine learning. AI is a marketing term. True artificial *intelligence* doesn't exist.
Ay ay ayyyyy
@@Ignisan_66 That's what infuriates me the most.
@@Ignisan_66 yes "AI" is an oxymoron. similarly, so is a "quantum" computer.
@@Ignisan_66 no, you're changing the definition of AI. We've had AI since like the 80s. AI is just any algorithm that mimics decision making; things like minimax have been around for like 40 years.
Marketing conflated AI and ML, so now to the general public they are interchangeable, but they aren't. ML is a form of AI, but AI is not limited to ML. Any decision tree is AI.
Zen 5%
Arrow Lake -2.85%
RTX $5090
2024 PC Space is Wild.
New PC specs are bad, new AAA games are bad (as always), is there any reason at all to upgrade even from older hardware? I am still running an i7 7700k with GTX 1080 ti and have not yet seen problems with the games I play at least, no matter how heavy they are.
Lmao, everything sucks ass. PS5 Pro, AMD leaving the high end, Nvidia going after $$$, and Intel bringing lower performance.
But tbh, Intel looks like the most modest one. Half the energy for the same performance is actually good.
This is more than sufficient for more than 99 percent of all games and the new energy efficiency is great!
PCs will be cooler and less noisy.
@@aleksandarlazarov9182 I agree. I have a 4080 paired with a 13600K, but my 'old' system was a 6700 XT with an i7 9700K and I could still play everything on my 'old' system.
@@f.9344 Zen 5 was that, same perf for half the power, and people called it dumb... so I don't know. It seems the average consumer wants power-hungry chips for an additional 10%... it's stupid
I love it when Steve just casually drops TNG references into the videos.
You know you're getting old when you're more interested in power efficiency than raw performance.
If the power claims are true then that means lower power bills, quieter PC's and a whole new range of CPU's for the SFF space.
AMD still offers better performance/W and performance/$.
So now, intel power draw is just ridiculous instead of completely insane.
Yeah. Comments like "GaMeRs DoN'T CaRe AbOuT eFfIcIEnCy" evoke the image of someone who still lives in Momma's basement and who doesn't pay the utility bill.
Computers don't use that much electricity, just a couple of lightbulbs' worth. And if you're using sleep or turning it off when not in use, it's even less. The real concern is cooling, both the package and the VRM on the motherboard.
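The back-of-the-envelope math behind this efficiency debate can be sketched. All figures below are hypothetical placeholders (a 250 W vs. a 125 W gaming load, 3 hours per day, $0.15/kWh), not measurements from the video:

```python
# Rough yearly electricity cost of a CPU under load.
# All inputs are illustrative assumptions, not measured values.

def yearly_cost_usd(watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    """Convert a sustained wattage into a yearly energy bill."""
    kwh_per_year = watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# Hypothetical: last-gen chip at 250 W vs. a half-power chip at 125 W.
cost_hot = yearly_cost_usd(250, 3, 0.15)
cost_cool = yearly_cost_usd(125, 3, 0.15)
print(round(cost_hot, 2), round(cost_cool, 2))  # -> 41.06 20.53
```

Under these assumptions the difference is around $20 a year, which supports the point that cooling and noise, rather than the bill itself, are usually the bigger practical concern.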
Thanks Steve
Back to you Steve
... with sunglasses!
I look for this comment or post it on every video and like them all lol
Things I will never understand:
- Advanced Neurobiology
- Naming scheme of tech companies for their products
ahh yes, let marketing fl0gs decide naming so they turn it into something OTHER than what naming is meant for (clear and simple identification, and communication of that identification).
Marketing - obfuscation and deception since its inception.
Those are things nobody understands. I make fun of my mom's neurologist all the time. They have no cure or way of slowing down any neurological disorder, from bipolar disorder to dementia. They've got zip..😅
@@liahfox5840 That's quite alright, I have to imagine the neurologist is getting a good laugh outta your mum.
Apple’s naming convention is simple enough.
I’m a neurobiologist, but I still don’t understand Intel or NVIDIA naming schemes.
Imagine telling a computer nerd in the 80s that in 2024 you can buy Intel's all-new two-eighty-five CPU 😂
hahaha. Just wait for the new 286!
@@GamersNexus Intel is doing the naming, correct?
So we get the 285 in 2024. In 2026 we get the 287H. H because 2026 will be the year of the Horse of course.
@@GamersNexus omg this was all a marketing plan to resurrect the Pentium
@@chaddesrosiers1107 "the year of the Horse, of course of course. " There, fixed it for you.
@@terrylaze6247 It's been so many years I forgot to double it up. lol o7
Genuinely surprised they didn't go with Core ai9/ai7/etc.
Genuinely shocked they are so dumb they didn't drop the 2 decade old Core branding. If they wanted to con people into thinking it was something new, they should have just called it the 'Ultra'.
@@Lurch-Bot what next? Intel 9 Ultra pro max? 😂
Bonus points for the Picard "THERE ARE.. 4.. LIGHTS!" reference.
Now someone can benchmark the 14th gen under-volted to compare
I have my 14700KF undervolted and loadline-adjusted to keep max stock clocks in Cinebench at a max of 247 W. Roughly 35600 multi and 2170ish single. Temps max out hovering around 75-80 on P-cores and 65-70 Celsius on E-cores.
I cannot undervolt any further, since then even booting becomes unstable.
VID values hover around 1.22-1.27 V on average.
In gaming, power draw is roughly 50-90 W, depending on the title.
Temps are 45-55 degrees Celsius. Arctic III 420mm AIO.
I tend to play Star Wars Outlaws, World of Warcraft, CS2, Overwatch 2, and Cyberpunk from time to time.
Paired with a 4070 Ti Super, playing at 1440p with graphics maxed out.
This would be a cool content piece… undervolted Intel v AMD CPU benchmark comparison…
I have undervolted my new 14700K, as I don't need its bleeding-edge performance at the moment, and it's only lost a small amount of performance from the out-of-box scores, and uses much less wattage (around 57 watts for light use, around 100 for higher loads, iirc?)
I game at 4K and the CPU stays under 45C at 30% use (most games) and at 100% burn it’s steady at 66C… On Cinebench it’s cleanly faster than previous gen AMD chips, and of course, Apple Ultra M1/M2 :P
This is much cooler than my previous 8700K, seems pretty good compared to what AMD would claim for operating temps, so I don’t really see any improvement or benefit with the new Intel Ultra CPUs.
Maybe the next version of Arrow Lake will be meaningful, but this first one seems a take-it-or-leave-it choice.
I’m impressed with how well my i7 14700k works. It seems to “cruise” through tasks smoothly. And it’s not melting my rig 😂
vs the 200S gen undervolted.
Do HX cpus count?
"We're very cultured here" - I knew I was here for a reason 🧐
Very cultured like bacteria.
@@user-vk2nr2yh7x Yeah, we are outnumbered 30 trillion to 1 in our own bodies
There. Are. Four. Lights!
@@NJ-wb1cz It's closer to 1:1 in numbers I think 😂. Bacteria are a x1000 smaller tho so they make up like 2lbs of our bodyweight
@@willwunsche6940 if your bacteria are 1000x smaller than you, then they are the size of large cockroaches that you apparently are filled to the brim with
The disappointment tour of 2024 is gonna be a blanket at this rate
New shirt defaults to 3XL as we couldn't fit it on the regular sizes.
9800X3D might be pretty good
My 2024 has been really nice!
..oh yeah, computer parts! Shitshow.. luckily I got my 4TB NVMe drive just before those got 50 €/$ more expensive.
It's going to be a quilt, like this grab bag of chiplets.
Maybe the best of 2024 will fit in a t-shirt...
Love that steve can't hide his contempt whenever he mentions the buzzword "AI"
There is no AI - it's the David Copperfield of code.
The S is for Spongebob, and the S is for Sandy (bridge is now old)
who at intel approves this weird ass naming
All it does is confuse everyone, and whoever came up with that name scheme probably makes $500,000 a year 😭
it's rebranding to sell you sht. It means this generation is meaningless.
@@avonire This naming is really confusing
It's a rebrand; like moving to a new house for them to get away from any bad criticism of the Core iX era. It just takes time to understand. They debuted at 200 instead of 100 because Meteor Lake wasn't as exciting as they hoped. So there are reasons why Intel does everything the way it does. Hopefully better, less confusing decisions moving forward.
How is it bad though? It's two digits shorter and fundamentally unchanged, just reset from 15 to 2. Xx600 > X45, xx700> x65
Zen 5% vs Arrow late -2.85%.
We're somehow at worse than skylake gains.
Don't forget Nvidia RTX $5090 😂
its not competing with the 14900k, if it was it would be called the 290k not 285k
With the big loss from hyperthreading it's impressive that they didn't lose a lot more performance.
Lol, so if you are a gamer, you are paying a lot more for the new Intel CPUs while getting less performance? What a joke. The 9800X3D is the only way to go. I couldn't care less about productivity benchmarks.
Meanwhile the X925 core in the D9400 has better single-core integer/float perf than the HX 370. 😅 Maybe Qualcomm and MediaTek will make exciting ARM chips for desktop in the future.
Considering Raptor Lake Refresh was manufactured in Intel 10 nm and Arrow Lake in TSMC 3 nm while performing this close to each other (power consumption not included) something must have gone pretty wrong while designing the microarchitecture. Makes you wonder if it would have been better to port/adapt Raptor Lake to TSMC 3 nm…
Or maybe Intel 7 was actually an appropriate name? Being wrong for the sake of being petty is a poor strategy.
Do you know your history? Intel introduced that process as Intel 10 nm and when they got stuck on it they renamed it to “Intel 7” to appear more competitive on paper.
@@abavariannormiepleb9470 Or (which is more likely) it is a quick redesign of a chip that was made for 10nm to use the new process, without actually utilizing many of the positive aspects of the new architecture. I think it was just rushed out, and they will release actual next-gen chips (15th gen) later. After all, there should be a reason to change their naming scheme.
I noticed the ad policies url banner during the sponsor spot and for whatever reason, decided to read through - I really appreciate that it's in plain English and not lawyer speak, and the clear boundaries! Great use of the exploding PSU box pic too.
Having the Fractal North XL case (literally the best case I've ever had) and seeing it be an approved as a sponsor on Gamer Nexus brings me joy
I don't know why I'm watching both your and HUB's coverage of a press release back to back, but I'm enjoying every minute of it. Keep up the good work.
Because the only thing better than one Tech Steve is two of them!
Looks like the 7800X3D will be the 1080 Ti of CPUs at this pace of progress from AMD and Intel.
May even be better, considering the volume of people being able to afford it. It led performance, topped charts, and sold best.
Too bad you can’t buy one that’s not 600 usd or find it anymore.
100%
No. The 1080 ti will forever remain the GOAT, nothing else compares with it.
I'd argue it's the 5800x3D. I put it in my five-year-old AM4 motherboard and I probably won't need to upgrade again until the AM6 socket.
Why didn't they start with Ultra 9 190KF, Ultra 7 170K, Ultra 5 150K and Ultra 3 130K?
Instead we get these confusing numbers to explain to customers.
Because 2 is bigger than 1, therefore better. Just wait until they invent a number bigger than 2. We're not sure what it'll be yet.
@@GamersNexus 4? I heard 8 is pretty big too but those may just be rumours at this stage
@@GamersNexusRumors are they’re working closely with Valve to discover this alleged number that’s bigger than 2
@@Isndar They said that about Radeon too, going from RX 500 to RX 5000
@@IsurusOxyrinchus If that trajectory continues, we'll be at 14900 again in no time!
These are certainly new cpus... and that's about all there is to say.
Hope to see Battlemage soon though!
Loved the 'There are 4 lights' reference.
We ARE cultured here ;)
As a guy who started PC gaming with the AMD Athlon II X4 750K, it's interesting to see how Intel now has the wacky naming.
wouldn't be hard to get an efficiency improvement vs 14th gen
You'd hope!
don't you mean 13th gen, which in and of itself is an efficiency patch for 12th gen?
We got an engineer over here⬆
@@G4RYWithaFour I thought it was just a higher frequency / new memory controller patch
@@GamersNexus Well, if you test by searching for the most hardcore overclocker mobo that pushes everything to 11 by default then yeah, there will probably be next to no difference, up to the new CPU even using more power if the process is more mature and can stand more power before melting.
Gaming benchmarks vs 9950X
Productivity benchmarks vs 7950X3D
That's how you know that Intel has nothing worthwhile to show
edit: it was the 7950X3D not 7800X3D, my bad. The point remains.
Intel basically told us to check again in 2 years
Plus there's no comparison of efficiency against any AMD products. And the price isn't good either. I mean, they improved a little, finally making chiplets instead of one big die, but that's all 😢
"we can beat team red in gaming and productivity!! Just don't ask which CPUs we beat"
Bullseye Bro!
Is any gamer still seriously contemplating a new Intel CPU for their gaming PC after the 13th/14th gen debacle? You would have to be mad.
Yes, I am....once I feel like my Intel CPU is lagging behind
Yeah, and bulldozer was pretty terrible, so you should not use amd either.
I just got a 5700X3D which I will probably be using until the end of the decade. If you prioritize framerate (as most modern gamers do), the 5700X3D performs as well as a 14900K and, by intel's own admission here, performs about the same as the Core Ultra 285K. Since I don't make a lot of Zoom calls, one of these new intel CPUs would be a waste of money for me😝
I'm ditching intel. I had a 12600k intending to upgrade to a later gen on the same socket later, but intel ruined my plans. I'm so done with them I'm planning to go with a ryzen 7 7700 build and sell my intel pc.
@@Veyrah64 Ok, It never really affected me since CPU improvement really slowed down from 2011 and motherboards died on me but I do agree it sucks.
So you're unhappy that the socket changed, and you're getting the exact CPU equivalent from AMD for what reason exactly? I would at least switch when there is an actual improvement.
When it's time to upgrade, I will take whatever is the best bang for the buck at the level of performance I need; both companies weren't always great.
Thanks Steve! please NEVER stop using that clip lol
Intel: I used to be a cpu manufacturer then i took an arrow(lake) to the knee
Lmao, good one
oh man... if it really flops the memes will be abundant
Intel aimed the Arrow at AMD but their AI targeting system missed and caused it to land in their foot instead.
Haven't read this classic in a looong time
@@FacialVomitTurtleFights Broken arrow!
That pencil is making me nervous
I'm not sure why, but I am hearting this to find out.
@@GamersNexus It's sticking up. Part of it is the whole chopsticks-stuck-upright thing, like incense for worship. Also a tripping hazard, probably from a Final Destination movie
I've seen a magic trick that started like that.
@@GamersNexus I'm guessing The Dark Knight Joker's magic trick scene?
Joker's pencil trick
Rest in peace the i3s, i5s, i7s, and i9s you will be missed
i3/5/7/9 replaced by ultra 3/5/7/9. Wow how different!!!
Not really.
@@lycanthoss Give it some time and people will shorten it to u3/5/7/9, making it look like Intel just shifted their keyboard.
i9 15900k, monster hunter 7 on & on
@@cl4ster17 I honestly don't really care about Intel's naming scheme here.
If you are an enthusiast you should know that naming tiers don't really matter when it comes to performance. The 14900K was basically equivalent to the 13900K. The Zen 5 CPUs are barely better than previous gen. Ryzen 8000 laptops are mostly just 7x4x laptops with an NPU. All these names are just useless in a vacuum. You need actual benchmarks to know how good a CPU is.
And you know what tech-illiterate people frequently use? They say they have "an i5 CPU", only for me to find out they have some old ass 3rd gen CPU.
The only problem I have with this new naming scheme is the same problem I have with current CPUs - if you search for 12600 for example, it will find all the 12600Ks long before it will find the locked CPUs. I wish Intel used numbers for the iGPU and lock specifiers instead of suffixes. But I guess then the names would not be as pretty...
One thing I liked about the 12000, 13000, 14000, etc, is that it made it easy to see what the generation was at a glance compared to previous. Once I figured this out, it made shopping and placing different models in an arrangement easier. I could also pick out a SKUs release year by simply looking at the CPU naming convention. Changing this is dumb, but whatever, it already happened with 14th gen.
I cannot stress enough how GN has kept me sane these last few years. Please, everyone at GN, stay healthy while making these videos. It shouldn't be like this, but I depend on this level of quality and down-to-earth realness to keep me going against all the BS going around, INCREASINGLY!
Cheers, everyone.
Could have named them 243, 245, 247 and 249: first two digits for the year of release, last digit for the performance tier. But no, let's confuse the consumers more...
Don't want a product name to start with a "2" forever but it's a good idea overall, let's just use incrementing numerals instead and the tier indicator at the end. Well, for additional SKUs let's add two digits at the end. And a more marketable overall class indicator, like 3, 5, 7, 9 at the beginning as a prefix, separated with a hyphen. Then, a letter or two to signify the category as a suffix, like "H" or "KF". And maybe instead of "Ultra" let's use something shorter, something that is reminiscent of the brand itself, so maybe the letter "i"? How does Core i7-2650K sound? I think - pretty good.
they can add more CPUs with Ti after the name; then you won't know whether you are buying a CPU or a GPU
Did you not get the memo?
Ironically enough, that's roughly (accidentally) how Intel CPUs have been, just shifted down by 10; every Intel CPU for the past 14 years have roughly lined up with the year.
damn that woulda been tight ngl
Nice of Intel to use the wattage as their SKU names now.
well if that's the naming convention, we're gonna be in the 1000s again pretty soon the way it's currently going
@@theemperor4040thankfully, physics doesn’t allow that.
How about the utter hilarity of the 'V' following the processor number on the laptop side of things? Because calling a chip "285V" is *really* that good an idea when you're known for being power hungry...
@@theemperor4040 bro, Arrow Lake halves the energy usage
The mental gymnastics Intel's marketing department tries to pull is incredible
I would gladly pay a little bit less for a CPU that doesn’t have wasted die area on a “neural processing” feature.
the tricky part is it doesn't take much space. It has AI in the name but is literally the dumbest processing part of the CPU; it doesn't need much area to put up huge numbers.
Microsoft wants it. I don't know of any other source asking for it.
@@s-x5373 If you look at Intel’s materials, it is a very significant amount of die area dedicated to their NPU. This makes sense because you don’t get tens of trillions of multiply-add operations per second without some sacrifices. Any amount of space is too much for my preference. Any AI models I run will be on a GPU, or non-local. More importantly, Intel needs to focus their efforts on making their core product better if they want to stay competitive.
CPU ≠ only gaming
Many people use these for office work, and on laptops without a GPU it will be helpful for AI assistance; most applications now have built-in AI features.
I was honestly looking forward to this announcement because I want the NPU for running a self hosted AI voice assistant in my home server. That said the NPU in these is kinda disappointing (the one in the Lunar Lake laptop chips is much better) and overall I’m not as excited as I expected to be.
Great video. Helped me get updated on the changes.
Why would Intel delete its strongest selling point from the name? People outside the PC enthusiast space think that "i7" is the equivalent of a good CPU. The new naming scheme is more complicated AND deletes the recognition for previous buyers. So now people who just bought Intel because they always had Intel need to do some research, and will probably consider AMD too.
I can not wait to buy an 295KF Ultra High 4K XD Processor (I have constant crashes because Intel did not tell me they still had voltage problems)
They think Ultra 7 is better-er. I'm not crazy about the new naming scheme.
@@thatboiswoy648 you missed the AI
Core ultra baby
It works on me
@@thatboiswoy648 Intel 295ULTRA HIGH HD 4K IA (it's a pentium with 1 core and 2 threads, and it melts itself)
So the flagship is not going to see much "gains" (probably loses even), but at lower power.
Me using a manually tuned 12900K for years: Puts wallet away.
Same, my 12900KF is safe. When I do replace it, it will likely be with an X3D on AM5. From my understanding, this Ultra 9 maybe ties the 14900K at best.
@@MaryannLynch-z9c i was definitely open to the idea of 9000 X3D, but the non X3D has my expectations way down now. We will see I guess, all I know is its not going to be an intel purchase this gen.
Yeah, I was hoping for a nice normal speed boost this gen to finally replace my 9600K since at this point it would have been worth it to me.
But with both teams doing the exact same strategy for some bizarre reason, almost as if colluding, giving us nothing faster at all this gen, just efficiency, I'll pass.
I just got a 7900XT from the amazon prime sale to finally touch 4k on my LG c3, so I guess I'm not upgrading cpu until next gen, unless the 9800x3d surprises us all, which I doubt.
Lol, so if you are a gamer, you are paying a lot more for the new Intel CPUs while getting less performance? What a joke. The 9800X3D is the only way to go. I couldn't care less about productivity benchmarks.
Wow, this sucks. Who should buy these?
0:16 the "S" stands for Squality
💀
I can't decide whether to laugh or cry.
@@navi2710yes
S for SPEEEEEEEEEED!!
S stands for Smelting, and it doesn't bode well for the CPU
Intel optimizing for power... hell just froze over. Then again, considering how they wasted it until now, just applying some sanity leads to great improvements.
Good to see that change!
what I'm most interested in seeing with arrow lake is how well they perform when you limit PL1+PL2 to lower values like 85w, 65w, and 45w. pretty sure it'll be fine for gaming at all of those, so it's low power productivity testing that'll be really interesting. even if it still loses to ryzen at low power, as long as it's close enough, the (presumed) massive _idle_ power savings will push it over the edge for me
I wish they got rid of the core branding entirely tbh
Its been more than 15 years at this point, we know its made of cores
Do we, though? Do we *really*?
Imagine if they never stopped using the Pentium naming?
We'd be at the Pentium 24 or something by now. It would be so much easier for people to understand... Oh I have a Pentium 13? My computer is old!
@@volvo09 I prefer my Intel 2986 DX96
Made of cores? Lol I guess you don't know very much then.
it's*
Man it really feels like intel is just re-releasing the 12900k over and over again.
The 13900K was like 40% faster than that in multi core, certainly wasn't a re-release.
You're right, one didn’t fry itself like the other.
The 12900k had hyperthreading...to get the same-ish level of performance with the same number of cores but without HTT is a pretty good improvement, not massive but still.
@@bracusforge7964 In contrast to AMD CPUs that actually did fry themselves, resulting in exploding CPUs that Steve did cover, there has not been one single report of a fried Intel CPU. Degraded, yes, but that's not fried.
@@terrylaze6247 Intel's issue is much bigger. AMD's issue wasn't due to a defect by AMD itself, but motherboard manufacturers pushing way too much voltage. People say this is the reason Intel is dying, but it's not the only reason: Intel had contamination issues in their fabs, and there is also a fatal flaw with their ring bus. Not even close, since only around 20 AMD CPUs were affected while hundreds of thousands of 13th- and 14th-gen chips have had to be RMA'd.
My i9-10900K lives another year...
My i9-9900KF is still going strong.
Still too early for you lad
I'm very happy with my 9900KS as well. Next CPU will be Ryzen.
7700k creeping along here. This is fine.
Stand proud, you're strong.
These look very promising, I hope Intel can deliver. They're absolutely setting up for a special edition Ultra 9 386K, 2025 is 40 years since the 386 launched!
Thank you for all the work you do my dudes.
TL;DW just wait for the 9800X3D if you're a gamer. :P
Edit: to be clear I still enjoyed learning about Arrow Lake and GN always does a good job balancing thoroughness and being concise.
However since gaming is my primary use case it is hard to not be disappointed to see Intel advertising only performance parity with their previous generation especially when the 7800X3D was already faster than 14th gen at a lower power draw.
And get 10fps better in games? New CPUs are worthless
@@zee-fr5kw Depends on what you're currently using. It's probably not worth it for AM4 and 12th-14th gen owners to upgrade to a new platform. But if you're on something older and are primarily gaming then the performance gains might be worth it.
@@zee-fr5kw It depends on what games a given person is playing. There are CPU-heavy games where these older CPUs will struggle. Plus if you want to play at a high refresh rate the CPU is often what ends up being the bottleneck.
@@zee-fr5kw 1% lows are the real gain.
@@zee-fr5kw Or you can look at how these companies release their CPUs. First release is high power. Second release is efficiency. Then it loops.
That ILM distribution contour chart looks exactly like my 12900KF did before the contact frames started releasing everywhere. My IHS had an hourglass shape for thermal contact, throttled at ~150W before I swapped over, then I could run it at 318W without throttling on a loop after swapping.
Roman, aka der8auer, likened the new clamp to taking an aspirin every day for a headache rather than going to a doctor to find out what is causing the headache.
@@GregM Well, it's the best we have right now.
Why is it that just as you're getting comfortable with one naming scheme companies feel the need to jump over to another? Sure, the i-naming convention wasn't perfect by a longshot, but at least it was both informative and intuitive once you figured it out.
Gotta keep consumers confused!
"Baffle 'em with bull****!!!"
@GamersNexus You must have significant issues if it takes a decade to figure out what a name means. Or maybe you just like to complain about trivial things and have to find something because glass half empty comments gets subscribers.
>"once you figured it out "
Literally the same for this naming scheme 😂 what's the confusion exactly?
@@Tugela60 Deliberately confusing consumers is not trivial, it speaks to a marketing strategy that defrauds the average consumer who isn't perfectly informed. To be clear, it's not just Intel that's guilty of it, it's been a problem in the PC space since its foundation; it's equally petty when AMD decided to call its chipsets B350 after Intel's B250 motherboards. An uninformed consumer would get their new Intel processor, know it needs a new motherboard, and know their last one was a B250, then go to the store and buy a B350 motherboard, not realizing that it's for the wrong CPU. It's nothing but a waste of time, money, and is a disservice to the people these companies allegedly try to serve. Also if you really dislike Steve and the GN team's commentary, stop giving them views and boosting their algorithm lol.
You have been given a thumbs up automatically for mentioning my Captain.
Regarding NPUs: like other accelerated math functions, there is potential for some software to get a significant boost by leveraging them, the way math co-processors, embedded encryption acceleration, and GPUs have done. iGPUs are a good example, embedding functions like decoding/encoding, etc.
This will of course require applications to use these cores, but provided there are approximately standard interfaces, professional software and games both leveraging accelerated algebraic functions in parallel just makes sense. Basic examples: CPU offload for interactive physics that can't be entirely passed to the GPU, or convolution functions offloaded from the CPU prior to shader processing on the GPU.
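To make the point concrete, here's a toy sketch of the kind of workload being described — a 1D convolution is just a pile of multiply-adds, which is exactly the operation NPU hardware accelerates. This is plain illustrative Python, not any real NPU API; a real application would hand this loop to a vendor runtime instead of running it on the CPU:

```python
# Toy 1D "valid-mode" convolution: each output element is k multiply-adds,
# the primitive that NPUs (and GPUs) exist to batch up in hardware.

def conv1d(signal, kernel):
    """Slide kernel across signal; purely illustrative, no NPU involved."""
    n, k = len(signal), len(kernel)
    out = []
    for i in range(n - k + 1):
        acc = 0.0
        for j in range(k):
            acc += signal[i + j] * kernel[j]  # one multiply-add
        out.append(acc)
    return out

# An edge-detect-style kernel over a ramp signal:
print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2.0, -2.0]
```

The win the comment describes is that millions of these multiply-adds per frame could run on dedicated hardware while the CPU and GPU do other work.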
Shhhhh, AI = bad. The hivemind has decided as such.
There is no AI workload that Joe, in a previous post, personally runs today.
That means the NPU is useless for everyone for the rest of time.
Yeah, forget what it's named and what they're advertising it as, hardware matrix math sounds interesting. Tons of gaming stuff is just matrix multiplication, for example. Having dedicated hardware to offload that onto could be nice. ("What? Run the game's AI in the AI processor? No that doesn't make any sense, it wouldn't work at all! We need it to do VECTOR MATH!")
Changing naming style is a great way to hide the fact that they pushed older CPUs to the point of failure :D
Strangely they still are keeping a high Turbo power figure of 250W. Some companies never learn :/
They stated three separate times that they increased prediction windows while disabling hyper-threading,
and increased efficiency while decreasing max turbo boost.
It's basically a 14000-series CPU with hyper-threading disabled and fixed turbo boost behavior.
This is exactly what it is.
We won't know for sure until they launch, especially with the new E Cores existing.@@s-x5373
20:43 (infamous "glue" statement)
Ouch, that hurt. Those old slides really show the darkest side of "arrogant Intel" back when Zen just came out.
Didn’t they do it again when AMD released the X3D technique?
First it was AMD making fun of intel for gluing CPUs together. Then it was intel making fun of AMD for gluing CPUs together and now we're back to AMD making fun of intel for gluing CPUs together. No, wait, snapping them together like a bunch of Legos.
Near spat my drink out with that "Thanks Steve" with the shades.
Cool, can't wait to buy in +2 years
sticking with the star trek reference. I knew there was a reason I subscribed to the channel 😆😆
Is that pencil Steve's way of telling us he's going crazy? Now we just have to wait for him to make it disappear
Wanna see a magic trick?
@@GamersNexus I really enjoy living, so no thank you lmao
@@GamersNexus Athlon XP/MP? ^^
@@GamersNexussmacks lips and does a weird tongue thing
Intel's new naming scheme must be getting to him 😂
Looks like Intel got their new naming inspiration from monitors.
😂😂😂
With the loss of hyperthreading, I really just wish we had an all-P-core part with 10-12 cores. Then they could actually make Xeon more mainstream and bring the E-cores into the Xeon CPUs for creatives needing the parallel compute power. 8 P-cores just seems too low nowadays.
If I recall correctly, the die area of 1 P-core is supposed to equal that of 4 E-cores, so they really could have done 12 P-cores in the same die area, but then they would not have any way to compete with AMD on core count. That said, 12 P-cores would have been making what people want, but this is Intel. They make things and then expect people to buy them anyway.
@@richardyao9012 Not only that but Intel couldn't get those high Cinebench numbers with just 12P cores.
The very benchmark Intel called fake performance numbers that the public should ignore, until they added those E-Cores.
@@LaurenceBlunt yeah it's just that we all don't need rendering performance. Gaming and CAD type workloads would be better off with the 10-12 P cores
Gaming doesn't significantly benefit from more than 8 cores, or realistically 6. All you e-core haters are asking for is for Intel to make the CPU slower in Actually Multithreaded Code for no reason.
@@Vegemeister1 That's wrong. Well-threaded games like Battlefield, PUBG, and many other new multicore titles do like more cores. The problem is you can't test that, because current CPUs don't have 12 P-cores on one ring bus with enough RAM bandwidth.
Whatever it is, we're sure that stonks will go up....... my god i love it
-steve at 2:42 "more better"
-me as a spanish speaker "más mejor"
-my mum "INCORRECT" (slaps me)
I'm now convinced that all these companies sit down together and collectively decide to name their products by smacking each other's heads on their keyboards and pick whatever weird amalgamation of characters shows up.
Exactly right! They call them the "marketing department". Bunch of overpaid and underworked fl0gs.
I am actually kind of impressed. They ditched Hyperthreading and still kept the same performance. Sure, it's not really much of an upgrade right now, but I fully expect the next couple of generations to be big upgrades. I bet Hyperthreading comes back once they figure out how to make it stable on this architecture. They also have a lot of power headroom. If they can make it scale with that power there is huge performance improvements there.
It's not the same performance though; in Battlefield, PUBG, and other optimised multicore games it'll see up to a 20% regression. They never should have removed HT without upping the number of P-cores. Why can't we have anything nice anymore?
performance in low threaded workloads*
@@impuls60 I completely understand what you are saying and wouldn't fault anyone for skipping this generation. I am on a 12700k and won't be upgrading for at least another couple years. I am just impressed that they can cut such a huge feature while not losing performance as an average. Every major architectural change will have applications that do not scale well. I am mainly just excited for the future. This whole presentation looked to me like they were basically saying that this is a foundation and there will be tons of headroom to build on.
We are at a point in transistor history where getting more performance is becoming extremely difficult. Engineers are working with transistors that are barely bigger than atoms, with tens of billions on a single chip. Although it is not fun for the consumer, we have to be honest with ourselves: the days of getting big performance uplifts from one generation to another are gone. I am happy Intel is trying something new and not just sticking with their previous model of pushing a chip to its absolute max and then doing that again for the next gen and calling it something new. We also have all of the performance most people really need. Nobody really needs to run games at over 200 FPS; we just want to see the bigger number. And if this new generation isn't for you, then the 14000 series will still be there and the prices should become cheaper.
Did you not notice that they added more cores to compensate for lack of HT?
The new focus on power efficiency just feels like spin to me. "We haven't figured out how to push this architecture yet so lets call it efficient."
Weren't we all just complaining that Intel used too much power to win benchmarks? Hold up
I don’t care about efficiency at all as long as Mr. Scott isn’t calling me from the reactor room
But wait, they've undervolted the crap out of the old CPUs, will there even be an overclocking scene at this point? Overclocking has pretty much just been undervolting to keep boosting longer, but now even that is done at the factory.
They focused on performance but Intel P core team flopped so marketing needed to change their tune.
It's good for laptops tbh, they might have been worried about Qualcomm
Yay! You're gonna be the wizened grey beard of YT hardware... That means I'm going grey soon because I'm sure we're almost the same age... Steve you got keys, is that why you're showing the wisdom wizard hair? Keep it coming man! Don't get burned out, you and your team are a gem!
First time watching with my new OLED monitor...man is that work mat GREEN! Great job again btw.
All the branding nomenclature trouble Intel could have avoided had they just kept the Pentium numbering scheme, we'd be at like Pentium 18 by now.
I just commented that somewhere else!
It would be so much easier for non tech people....
It would be like iPhones "iPhone 8 or higher required"
Continuing the Pentium scheme by introducing the all new Intel Decaoctium
No, no, These are Pentium Pro CPUs🤡
I was so expecting the "There! Are! 4! Lights!" reference... And was not disappointed!
The number of PCIe lanes itself is incredible. We're finally done with this ridiculous 24-lane cap. I wonder if Intel will run HEDT this generation too, considering their 4677/7592 parts are upwards of 60 lanes. These have the same number of lanes as Broadwell-E. No longer do we need to run 10-year-old hardware to run NVMe arrays, or spend $5,000-plus on server-grade Threadripper, Epyc & Xeon parts (barely affordable in 1st/2nd gen)
Threadripper 7000 started at ~$1.5K, not $5K. But yes, it's very expensive.
If tech companies could just get rid of their damn artificial limitations we would actually see innovation. Intel actually deserves credit here for doing this.
The two things I want to happen atm: first, I want 40-60 PCIe lanes per CPU. Everything needs PCIe now, and having to pick and choose what will run and what won't run on a board is stupid and confusing. Everything on that motherboard should work: x16 at x16, x4 at x4, and all SATA ports should work. The second thing is a new form factor taller than ATX, where plugging in a GPU doesn't cover a PCIe slot. If you have 2x GPUs, you've now taken up 4 of your 4-6 PCIe slots, which you probably needed.
I expect it to be BS. It's gonna be an x16 slot and maybe 1 or 2 NVMe drives straight from the CPU. The rest is gonna be eaten by USB, and we will rely on the chipset for more expansion.
@@xfy123 Yeah, these thunderbolt ports will eat up into pcie a lot.
yup, just even more happy I switched to Zen5 and that nice low power consumption
I just went to the conclusion, saw that Steve's spirit is more crushed than before and got all the info I needed from this video. Hang in there, Steve. It'll get better. Maybe.
8:20 They have the 9950X power usage up at 250W in that graph; I thought its maximum draw was 200W in Cinebench 24? The graph is probably also misleading, as the 14900K goes up to 330W to compete; somehow I don't think the "relative" gap there can be that large with only an up-to-15% performance-per-watt improvement.
I thought that was total system power, not just cpu alone.
Based on the details Intel gives, honestly, Arrow Lake still can't touch the 7800X3D in gaming, whether in power draw or performance. I think Intel should market it as a very good CPU for work rather than gaming.
Finally a sensible comment besides the sea of people complaining about a name. I see ARL as the CPU for content creators and people who do not care to have the maximum gaming performance. AMDs new X3D might be more compelling for gaming at top tier performance but I still think ARL with its single thread performance will be better in content creation.
Aye, and the all-rounder market (working, sometimes gaming) is the overwhelming market. That surely beats AMD. Why do you think AMD released the power-efficient Ryzen 9000 with no AI?
For efficiency, but it hurts when meeting Intel on AI.
That means Intel can do AI while saving more energy, while AMD relies on an external dedicated GPU. And in terms of security, that means AMD most likely can't activate Microsoft's Copilot, unlike Intel.
0:26 "suggested eTail price"? :D
The only specific thing I would like benchmarked is the iGPU hardware encoder speed: how fast it can run AV1 4K 10-bit, and the highest resolution it can handle at 60fps, for example.
As someone with home, media and game servers and gaming needs in the same machine (space and connection based restrictions), idle power draw and temps from both default and more optimized settings would be very interesting to see comparisons around.
Comparing power draw when gaming is imho not the most important metric, since most consumers don't run their machine at that load most of the time.
I love that you're still using that "Thanks Steve" meme😂
Now 100% cooler 😎
Now 100% cooler
Kind of weird (or smart) to compare gaming benchmarks to the 9950X instead of the 7800X3D
Technically, the 7800X3D trades overall performance for gaming performance
The 285k looks to be about 2-2.5x faster in all core full load even at 125w
Plus the 7800X3D is a previous-gen CPU; I think they are just butthurt over it and feel it's unfair, since they can't beat it in gaming.
Zen 5% VS Arrow Flop, nice!
be interesting to see how well ARL overclocks, especially on memory.
Floppy Arrow. The names just keep getting better. Unlike their naming
Eh I’m still buying arrow lake cpu
Arrow Lake 2.85K%
@@NotMrToast Why?
@@JamesRussoMillas 2850%??? thats a lot of performace
Yes, Steve. If possible, I would like to see if the 285K will be able to run 4x24 at 6400MT/s. If so, I would also like to see how much performance loss or gain there is going 2x48 in the 8000MT/s area vs 4x24 at the maximum JEDEC-rated speed. I remember a few years ago in one of your videos you said you can gain up to 10% performance when running your system in a 4x16 configuration.
Just hear me out: this gen is gonna be gold for enthusiasts like us.
People thought AMD's latest were a flop... this is the most underwhelming release by Intel in a long time. I get they are trying to lay a foundation for the next gen, but between the frying-chips fiasco and this lame release, Intel is making AMD look good.
If the new chips don't self-destruct, then they are an improvement over the previous gen.
I have been hearing that from Intel for the last 5 years, since 11th gen
@@BainesMkII It's the same chips with hyperthreading disabled on TSMC's 3nm node; there are no real improvements
@@s-x5373 If the new chips still self-destruct, then does it even matter if they are otherwise better or worse?
How many generations of CPUs will be compatible with the same motherboard on this new architecture?
“Yes!”
1-2. Even AMD isn't going to do another AM4.
I think some of the biggest improvements this generation are actually in the platform. Last gen only had 20 PCIe lanes, with this gen now supporting up to 48? That is a massive increase, and something you would normally find on a workstation platform. I will seriously be looking into this chip for upgrading my home server.
Looks like they focused on everything EXCEPT performance. Which I actually like. Could have had one or two slides dedicated to a renewed focus on stability, and not just in their chips, but all of their drivers. I think though it's kinda clear they're focusing on that in the background. Or at least, they certainly better be...
The CPU will have 24 lanes: 16 + 4 (M.2) PCIe 5.0 like the previous gen, plus 4 PCIe 4.0. The chipset will have 16 + 8 (M.2) PCIe 4.0.
Definitely better but I’d still like more (-8
@@arnox4554 There is nothing wrong with other areas getting the focus, but a complete lack of performance increase just makes the chip boring to most. I am glad they didn't keep throwing stupid amounts of power at their chips, but no budge on the performance needle at all is going to equal no buy.
@@utubby3730 Eh, I am going to disagree with this one. Most people don't upgrade their computers every generation; it is more like every 5-10 years depending on your preference. If I were looking at this or a 14900K right now to upgrade from a 9700K, I would definitely choose this or an AMD X3D chip. If I was coming from a 2700X, I would actually just see if I could update my BIOS and throw in a 5800X3D. My point, though, is that they don't have to deliver 10-20% better performance year over year. They just have to be better than the last gen, and everyone but the weird people with nothing better to do with their money will be happy with it.
@@utubby3730 Even in the performance department though, they were HEAVILY hinting that these chips would be extremely overclockable, so if performance is really what you want, you should definitely be able to push the envelope with this architecture. Or so Intel says anyway.
I would hope they have 2 targets with this: 1) get enough power headroom for chips with more P-cores, and 2) laptops. I am hoping for the pro laptop market to be updated so regular work on a gaming laptop can push the battery to a solid 7/8-hour mark.
Seeing Intel 'humble' themselves, actually work on an unconventional architectural change (e.g. dropping Hyper-Threading), and release chips that are somewhat competitively priced is definitely very good for us consumers. But the real test for Intel, in my opinion, is whether they can keep the same socket for more than 3 generations; that would make them a significant contender against AMD's 'forever-lasting' motherboards.
So let me get this straight... Intel's new CPUs are going to require a new socket (aka new motherboard), to get at best on par performance, and in some cases less (probably almost always less as well since these are Intel's numbers), just to gain power efficiency (a low bar for Intel). Power usage still being higher than AMD at full usage.
AMD's Zen5 was disappointing to people because they only gained some efficiency and performance uplift was fairly small. However, at least you don't need a whole new motherboard, it's still AM5.
Was really actually hoping Intel would put out something good again. With their melting chips, higher cost, higher power usage, lower performance CPUs, AMD has been the only logical choice. They needed a win here... That's not because I'm rooting for either company, but because competition keeps prices low (look at the GPU market for example).
That's not the point. ARL is basically a whole architectural redesign, so of course the physical layout of the pins has changed. And to be honest, same performance (in gaming) at almost half the power is a W in my book; I'm sure these new chips will beat 14th gen in content apps. Plus, these chips might surprise us in gaming once people start messing with overclocking, since there is now a good amount of thermal overhead, and RAM speeds up to 9000+ should give a big boost in performance. Why are people so quick to judge without proper testing?
@@thetheoryguy5544 We don't know what the headroom for overclocking is yet. Intel says the E-cores could go higher, but nobody knows about the P-cores.
Overall performance is ok, if you are not on 13/14th gen, but something older. It's basically a fixed version of those CPUs. If you are on 13th/14th gen or the AMD equivalent, there's no point in paying full MSRP to gain no performance, just efficiency.
16:38 ECC support? OMG. Is that 'real' ECC, or the marketing kind of ECC (on-die ECC) that is basically an essential part of any DDR5 memory module, I wonder.
This would be a killer. I could finally retire my old dell server hogging 100w at idle.
Answering again: it will be reserved for enterprise platforms 😢
@@LinusBerglund Yeah, probably. So the memory controller in the CPU understands ECC, but you're gonna need an 'enterprise grade' chipset to enable it. I hate them.
No change from 12th/13th/14th gen. The CPUs support real ECC, but you need a C-series chipset.
“S” stands for Steve!
These benchmarks will be interesting. Nothing in the past 10 years has been worth upgrading to for my use case for varying reasons, but I like where Intel is going with all these low level architecture changes. Seems like a really futuristic design.
I have no doubts TSMC will do magic for Intel - these CPUs should be significantly better