So, I'm kind of glad I bought 10th gen at this point, with the intention of using that rig's platform on its own for the next 7-10 years, with maybe an 11th gen Xeon swapped in at some point to put it into lite-server work mode instead. (Could do the same with the 10th gen Xeons, but the 11th gen ones are cheaper right now I think, and probably will be for a while yet.)
It's probably going to take up to 10 years from that point of original purchase, I figure, before Intel is able to catch up with itself in terms of time to cook its products. They're releasing too fast, so they can't work out all the small kinks and issues that are popping up post release. An unfortunate side effect of competition as it were, not that competition is a bad thing overall. Intel went from being like a slow service restaurant, carefully preparing the food to ensure maximum quality (ideally), to being more like a fast food place, where you aren't getting what's shown on the menu, and you'll be lucky if the order is even prepared properly.
It's going to take time for them to recover from that problem. And it's not like AMD is going to quit trying to take any market share Intel has remaining on a YoY basis.
AMD, however, could probably benefit greatly from a general slowdown of CPU releases, as could the global community as a whole, in my honest opinion. I think we need to be given some time to come to terms with the general performance of what we have right now, and put it all to its most effective use first, before we start advancing further, at least in terms of pure performance. Instead, they should (IMO) swap to efficiency-oriented designs again that keep the performance of the best releases out right now, or at least incrementally increase performance when efficiency gains allow for it.
The way I see it, the more these products get released, the faster resources seem to be gobbled up by programmers and developers with their projects and products, which just feeds back into the loop necessitating better releases overall. Meanwhile, while not a cheap rig to build, we have the tech right now to produce pretty convincing virtual worlds to the point of uncanny realism. At the current release scheduling, it won't take long to bring this to a point where even the cheaper rigs can manage it. That's not necessarily a bad thing, but I don't see it as a good thing either, with the constantly increasing resource requirements to run many of these things. I.e., I think the bloat we deal with from a lot of programs and games today is mostly due to the lack of any need to optimize anymore. There's always a 'supposed to be better' tech release on the horizon, so they aim for that instead. I believe this does us a great disservice in the future, when we are trying as a global community to reduce the amount of power our tech consumes for the tasks being run.
So I think we need them to take a step back for a while and release slower, with a focus on efficiency again over performance. We have performant enough tech right now to do a lot of things. We need to get better at how we use it first, I believe, before we continue the march upwards in performance and resource availability. In some ways at least. PCIe lanes, for example, could be fleshed out better, but that's more of a motherboard-side problem as I see it.
RAM amounts could be increased, but right now 128GB, plus or minus some amount, is probably enough for the general needs of most people. For those who really need more, there are specialized platforms for that. GPU VRAM amounts are approaching numbers that necessitate some rather beefy rigs RAM-wise, for balance reasons. And that's just the gaming GPUs; the workstation ones double that at the top end. Like, CPUs aside, none of these things are exactly "cheap". And then by the time you have finally built one of these systems, if you are piecemealing it for cost reasons, the next release is already out, with the one after that set to release on the horizon only a year later, if that. At that kind of rate, it's only going to take another 3-5 years before everything can do 4K at 240fps and 8K at 60fps, max settings otherwise for textures and such, from the gaming side of things. Who knows how ungodly the workstation and server side of things will be at that point. 512 cores per CPU, before taking into account any SMT/Hyperthreading, which may not be a thing anymore by then? 1GB of L1 cache? 16GB of L3? Might seem ridiculous, but that's kind of the point. We are going in that direction, and it won't be cheap in either upfront cost or power costs afterwards. And a lot of that could be a good thing for a lot of other reasons. But with the quality of the tech being released as of late getting worse and worse, it seems... is it really going to be worth it, chasing this dragon as it were? Should we maybe instead be focusing on maximizing the capable use of the current tech available, with slower releases that are given more time to cook, so as to reduce the potential of wasted time and resources on garbage product releases? I think the answer is yes. But for those who took the time to read through, even if you disagree: say what you want at this point. I've said my opinion on the matter.
I tabbed out for The Age of Aquarius. Good stuff. Outlook not so attractive for Intel CPUs. I hope they get it together sooner rather than later, because I don't want AMD to get complacent.
If anything, it's making me think about 14th generation, since I can keep my 128GB of DDR4 RAM. I just bought that RAM last year and I want to keep it a while. Actually, I'll probably just keep my 5900x because I just don't want the risk of my CPU dying early. The 14th gen Intel CPUs kind of scare me.
Well, coming from an ex-Intel employee, I can tell you guys that there is a lot of stress around releasing a new CPU, and sometimes corners are cut for different reasons. Mostly to reduce the complexity of the technology, allowing the engineers to understand it and onboard the technology more easily. This approach is necessary in large companies because it's harder to coordinate all the teams working on the new platform. But this process is the same for all large companies, except for startups. Basically they are doing small incremental improvements. And when the technology changes this much, then it's good practice to roll back on some stuff, like hyperthreading, which I would imagine caused more power consumption (and die space) than it's worth. But I agree with everybody! They botched it up with the gen 13 and gen 14 bugs, though in hindsight I can understand why it happened: it was a corner case (a very common one, where cooling is adequate), but nonetheless a corner case that wasn't represented well enough in Intel's development labs. After this huge problem with the gen 13 and gen 14 CPUs, and the fact that everybody else is outperforming them (AMD and Apple), they should have lowered the price much more and should have treated this product as a first-gen low-cost CPU, especially since it requires a motherboard change YET AGAIN!
New motherboards every 2 releases is a big reason I switched to AMD 7000 last year. I have some faith that I'll be able to reuse the board when I upgrade the CPU in 4 years. I don't want to e-waste a $100-200 board every time.
I never see enough generational CPU uplift to upgrade parts on a mobo/socket (nor keeping the CPU and changing to a newer mobo). So the socket thing is mainly just a matter of better production economies for stuff like coolers and chip packaging. Also keep in mind AM4 and zen1→zen3 is an extreme outlier situation. But I also start with the CPU class that meets my needs right from the start and run machines for more years than most people. As opposed to getting an economy CPU now and a flagship replacement on the same socket in the next generation, which is usually rather poor economy. (Especially once factoring in differences in mobo features and such, and my old flagship will still be about as good as the newer econo CPU.)
@@nickeshchauhan5661 Requiring a new motherboard with no guarantee of another CPU upgrade is Intel's biggest sin. Arrow Lake launched slower than the competition and Intel's previous gen, with no future recourse. What incentive is there to buy into this platform?
I was under the impression that these big tech companies were way ahead on their real research than we get to see. Like they could build an i9 19900k if they wanted to, but instead just make incremental generation upgrades to ensure that they can make their money back with each generation and ensure product improvements during times with slow development progress. But this proves I was completely wrong.
@@korcommander They meant incremental as in: while their actual knowledge as of this moment extends to building something as far in the future as an i9 19900k, what they actually release in the present always lags behind, because it's more cost efficient than trying to build that mythically advanced CPU. Intel has for sure made advancements in CPU tech, but as a business they currently lag behind AMD really hard at delivering good products. If they had that knowledge roadmap set out for them already, like 10-15+ years in advance, how are they fumbling right now?
It takes quite a few years to turn their research into a product you can buy, especially when it requires a clean slate design. That's why it takes a long time to turn a company around when it's not doing well. For example, if AMD could've replaced Bulldozer with Zen any faster, they would've done so. Instead, they had to incrementally improve on that architecture while they made sure Zen actually worked. Rushing things to market is more of a problem than holding things back, because it can go disastrously wrong.
Rather than all these thread manipulation programs, why can't Intel and AMD work with Microsoft to update the Windows Scheduler for upcoming launches? Why is that not an option?
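For context on what those thread-manipulation utilities are doing under the hood: it is mostly ordinary Win32 affinity steering that any application could apply to itself. A minimal sketch follows; the 0xFF mask standing in for "the P-cores" is a made-up assumption here, and a real tool would query the actual topology (e.g. via GetLogicalProcessorInformationEx) instead of hard-coding it.

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Hypothetical mask: pretend logical processors 0-7 are the P-cores.
           A real tool discovers core types from the CPU topology instead. */
        DWORD_PTR pCoreMask = 0xFF;

        /* Restrict the current thread to those processors; the OS scheduler
           will no longer migrate it onto the (assumed) E-cores. */
        DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), pCoreMask);
        if (previous == 0) {
            printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
            return 1;
        }
        printf("Pinned; previous affinity mask was 0x%llx\n",
               (unsigned long long)previous);

        /* ...latency-sensitive work would run here... */
        return 0;
    }

The catch, and presumably why per-game profiles like APO exist at all, is that the right pinning depends on the workload, which is exactly the kind of knowledge a generic OS scheduler lacks.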
APO is too much of a temptation to fiddle scores IMO; cynicism says it will rarely be iterated on unless a new flavor-of-the-month benchmark pops up. Same with the "AI" thread and power governors. Intel really needs to earn some good karma.
0:50 whispers "Wow that's a really old reference". I assume most people will assume you're referring to the 1967 song from the musical "Hair", but by definition, the Age itself lasts for 2,160 years, on average. It is one 25,920-year period of precession, or Great Year, divided by 12 zodiac signs to equal an astrological age. So, yeah, "REALLY old!" lol!
I always run insider previews! Canary Channel, it is FAR more stable than the release versions because Microsoft fixes all the issues MUCH more quickly.
@@mikeree8543 in gaming……..absolutely………..it’s a gaming chip. That’s what it’s for. If you would have read my post you may have realized I said “gaming”.
@@mikeree8543 OP clearly said "in gaming". The 7800X3D is indeed better in gaming. Obviously in MT heavy production workloads, the 9950X is the current desktop king.
Where have I heard that before, recently. Let's be real; the shoe is on the other foot, for now. And I hope it stays there for a while. Least till AMD gets complacent.
@@SpecialKLSX Pay attention to the background stuff. Intel is stuck because its old P-core runs out of steam. Now it's shifting its effort to beefing up the E-core, whose potential we already saw in just one gen. If that perf leap can be largely retained in the next gen or two, taking the lead after two years seems likely. On the other hand, AMD's advance slowed down quite a bit in its Zen 4 and 5 releases; unless AMD can wow people in its next gen, which I doubt, I'll bet on Intel leaping ahead.
Looks like they still don't have the power gating and tile interconnect figured out properly. I think they are more on the stability than performance side of things at the moment. Let's hope future BIOS updates will improve the situation. Also your Geekbench score is interesting, Geekbench web shows scores of 3000+ in single core ... unfortunately the system information is terrible so any comparison is impossible to do.
Some more details & Info here
forum.level1techs.com/t/intel-core-ultra-9-285k-core-ultra-5-review/218917
9:45 - there's a typo here, the column is supposed to say "OS" not "CPU" lol... whoops
Those aliexpress mobile chips look more appetizing every day
They're actually a really good deal lol we have a video on one of the Erying mb + cpu combos from a year ago if you're interested. ~ Amber
@@BBWahoo I bought them and the chip might be okay, but the motherboard and BIOS are a third-world mess...
I want a 9600X3D
Is it just me, or at the 1:20 mark, did you say 'MSRye' instead of 'MSI'?
"Have you tried turning yourself off and on?" - fucking killed me.
I never laughed so hard watching a tech video, an absolute classic!
Every time I have to get in bed, I get turned off now
But did it solve the problem?
That was great ahahah
Off is easy, back on is the hard part.
Wendell, I'm afraid that you are 100% right: I'm not interested in this CPU.
I always say that, but that's just because my upgrade cycle is so slow
@@xXx_Regulus_xXx same here, statistically I'll wait on the 23rd generation to think about it
yup, it gives Zen1's vibe. Interesting but I won't buy it.
You will be once you see what the AI Core does with cpu temps
🙂
Yeah, I would only use it if and only if I got it in a laptop for free with a decent low-end GPU.
This channel does a number of things very different from pretty much ALL other review outfits, but the absolute best two differences are:
1. The 'level' of actual deep knowledge of the tech is unmatched and
2. There is zero negativity expressed. Any downsides to a product are discussed in context with the rest of the product, and there is no shit-talk for the sake of Drama.
Thank you Wendell and team for what is refreshing content on YouTube. It's hard to articulate just how rare all this has become.
Lastly, the 'Show me where the CPU hurt you' skit just about had me piss myself
Yes, absolutely!
Absolutely! There are no negatives when it comes to product design, there is almost always a reason as to why it was designed that way.
Yeah yeah whatever. I'm just here for the sweet jazz background music :P (love you Wendell)
You could only find a somewhat similar YouTube channel like der8auer, but he is German
@@hanfman1951 Sorry, his video wasn't exactly a hit either.
The 'show me where' skit had me rolling! Amazing sense of humor. Level1 and GN are my absolute favorite tech channels because they both have a good sense of humor - and manage to keep it tech related. Can't wait for the x3d chips from AMD!
Agreed. So much nicer than the "that's what she said" nature of that one popular tech YouTuber
GN is a speed talking mumble mouth.
@@JohnWalsh2019 Maybe you should be checked for auditory processing issues.
Steve & Tim from Hardware Unboxed are not too bad either, with their Aussie-style sarcastic humor! 😂
@@dare2liv_nlove agreed!
They mockingly called AMD's infinity fabric glue some years ago. Guess who's embracing the glue 😀
Dude, Intel was sticking 2 dies in their CPUs for Pentium D and Core 2 Quad. Almost 20 years ago.
And there was no magical "glue" - aka interconnect fabric - just plainly using the simple capabilities of the northbridge, which comes at high interconnect latency. And of course Intel eventually went AMD's way - monolithic die with an interconnecting bus - with the first gen Core-i-crap (Nehalem). What's happening now is deja vu - Intel's engineers are creativity-less morons; let's not forget what happened to RAMBUS, Fully Buffered memory, or others of their "creations".
They called it glue because glue logic is circuitry that connects two different chips.
It wasn't a dig. It was a description.
YouTubers like HUB and GN blew it out of proportion for clicks and comments and you bought it hook, line, and sinker.
Uh oh
@@tim3172 Is it hard to believe the comment to be a dig, when they both throw shots back and forth with little reason? AMD was laughed at before, now it's Intel's turn; life is a circle lol.
@@tim3172 Intel called it a glueD-together CPU.
Big difference.
The phrase was "we at Intel do not produce glued-together CPUs", or something along those lines.
I find that I enjoy commentary pieces a lot more than just a wall of benchmarks when it comes to YouTube, and this one is top notch. Looking forward to the rest of your coverage on ARL.
In short:
"It wasn't ready to launch"
Well, the sooner it launched the better for Intel and everyone else: more feedback on what needs work, and why, and for which use case. Also, if Windows hinders the performance and could be optimized for a better gaming or other experience, then it was ready to launch; the OS just wasn't ready for it. I think it was Jay that said "do you remember the early multi core days?" - kinda like that.
What I mean is maybe this actually is Intel's long-awaited step forward to actually catching up to AMD. I mean, the Cinebench multicore result is a beast. There are shenanigans that need some work, yeah, but that was the Zen 1 problem as well. What I don't like at all is that everything is done by TSMC. Don't get me wrong, nothing is wrong with TSMC, but the very fact that every chip maker depends on them is not good at all for the industry as a whole and for the consumer, because now they are the ones that dictate chip prices and production priorities, which is bad for us as consumers.
they probably rushed to "cover" their previous geneations "flawed lifespans' then again its a bit cynical to think that way considering they are pretty much deleting performance to not kill CPUs(i9 and maybe i7) kinda hard to trust them after getting burned and u now have to deal with them letting customers be beta testers too...Yikes Intel taking the dump after the dump lol from CPU heaters to non-leading performers lol.
@@inkredebilchina9699 We're even MORE dependent on ASML, a Dutch company spun off from Philips. They are the only company making the machines that TSMC makes the chips with. And it's only American political leverage that keeps them from selling these machines to China. If you look deeply enough you'll see TSMC's monopoly is just one cog in a very delicate socio-political arrangement.
sadly.. yes
The segment with the psychologist made me laugh more than I should have. Was that editor Autumn? 😆
Yes it was! ~ Amber
@@Level1Techs lol, sorry Amber.
xD I meant yes it was Autumn, I was behind the camera ~ Amber
@@Level1Techs lol. Gotcha. It gets a little confusing.
I get called Amber all the time lol ~ Editor Autumn
@Level1Tech, 5:42 is truly comedy gold... SNL reference and a good one at that.
I work in IT for a non-profit company, and recently had to have a meeting with Dell to get more end-user devices. I made it a point to say I want AMD laptops, not Intel. I also pointed out I wasn't happy that their lineup of devices is lacking AMD. Pretty sure they were sued over this before. They wanted to know why I don't want Intel, and I basically said I wasn't impressed with the performance compared to price and thermals. Also not a fan of their name change, confusing consumers more than they already were.
people have been complaining about the names for years now, why are they so reluctant to improve in that area?
Well, as if AMD laptop SKUs don't have confusing names? lol
Also, as for the lack of AMD laptops, aside from Intel paying off OEMs in some cases, sometimes AMD just can't provide enough chips for all the vendors. And if an OEM like Dell can't secure enough CPU stock for an entire lineup, they just won't bother making more product lines for AMD chips.
Go ThinkPad. At my company we used Latitudes, and they always ran hot, and honestly were just OK. Tried a colleague's Ryzen-powered ThinkPad and it was much better.
@@napowolf AMD is just not a big enough buyer to get priority at TSMC over Apple. So they struggle to meet demand. The laptop/embedded division is such a small part of their business that it is where the most shortfalls occur.
I had the same problem at my company. Have been evaluating both HP and Lenovo since that conversation. At the time I looked at their Latitude lineup, the best I could get was 2 performance cores and the rest were efficiency cores. We have some software that was running faster on 3 and 4 year old laptops than the new laptops I was getting in.
"I don't think this platform was ready to launch"
No kidding. Imagine this launching after Ryzen 9000 X3D, though.
Where did intel touch you Wendell? Show us on the mobo lol love it
I can't think of a single reason to buy anything from this generation. It might not start randomly dying like 13th and 14th gen, but that is not something to be proud of.
100%. Zen 4 was a great leap forward; anyone interested in building a PC should use it. Zen 5, Raptor Lake, Meteor Lake and Arrow Lake are all complete garbage. There's the 12000 series, which was pretty good too, but I guess you don't really consider it for a new system.
It is the best consumer CPU on the market for content creation.
Not across the board, no. The 9950X is about just as good and uses a lot less power.
The moment you said 'Aquarius' I had the melody running in my head. Yeah, that's an old reference!!
The 7950X3D and the 7800X3D are going to be the new 1080 Ti GOATs of this decade for CPUs
What about the new Gen?
I built a new PC on Saturday with a Ryzen 9 7950X3D, after seven and a half years on an Intel Core i7-7700K.
I think that 1080TI GOAT medal has already been given to the 5800X3D. But the 7000X3Ds sure are worthy successors already.
Steve even went as far as saying the 7800X3D at its current inflated price still offers better value for gaming; they need to reduce Core Ultra pricing more if they want consumers to consider buying their product.
I've got the new GOAT / old GOAT combo in my system right now. 7950x3d + 1080ti
5:42
More like this please. 😆
Nearly painted my monitor with coffee here 😂
That came unexpected 😂
I need to restart myself more often after fresh installs :)
At these prices, and for my needs, I'm sticking with my Intel 12th Gen Core i9 setups (desktops and laptops) as they still do everything I need them to do, and then some, with ease, even with DDR4 RAM.
The only one from this lineup I care about at all is the 245K to use for a small form factor, cool, quiet and snappy low power office PC that can live off of UPS power for hours on end.
I think a nearly 40% improvement in memory latency is a bit better than "slightly better" - still, 80+ ns is nothing to write home about.
A shame that once again going to a chiplet architecture incurs a latency penalty (very important for real-time audio work). Looks like it's Raptor Lake for me for my next upgrade.
Then make sure you update BIOS and check voltage settings so your 14th gen CPU won't fail.
@@Hullbreachdetected I'm going to get one of those "Mutant" laptop i7s retrofitted for LGA1700 off of AliExpress (13850 HX) and undervolt it and probably disable Hyperthreading anyway.
9:45 Wendell sums it up perfectly.
"Good job Intel?"
"This CPU represents the dawn of a new age for Intel!"
The bankruptcy era?
the dark ages (part 2)
Well, they apparently lost some of the funding from power companies since they lowered the power usage... but the performance dropped greatly too... so there is that...
You'll see what's coming up in two years. Any BK talk is just ignorance of what's really been going on in the past few years. Simply put, Intel has been bogged down by its foundry and the unwieldy P-core. Now both are changing, with 18A and the reorientation towards the E-core. We already saw the great potential of the E-core in Lunar Lake.
More of the same. We will see if Intel's stock can wait two years.
google "intel 18a" that stuff is in testing right now...
"...the future is going to be interesting..."
That's what's so exciting about product launches nowadays... that the NEXT launch might bring some improvements...
Thank you, Wendellman! 🙏🏼👍🏼
I like this review the most because it acknowledges the shortcomings but also talks about the cool parts and the potential it offers for the future.
yep I see great potential here, it will be worth a revisit in a few months for sure
great video quality by the way - nice camera work....🍎
OMG, Ultra 9 285K loses to 7800X3D on every game except Final Fantasy & Forspoken! $600 Intel CPU loses to $450 AMD CPU? & the 9800X3D launches next week.
oops
It also ties the 5700X3D in most games. A $200 CPU on the AM4 platform.
Right, but then it crushes the 7800X3D in every single way outside of games. You have an ENORMOUS trade-off with the 7800X3D.
@@CyberneticArgumentCreator Thats where the 7950X3D and the 9950X3D come into play. AMD has it covered.
Intel loses more and more. Just sad.
I laughed at the therapy skit. Thank you Wendell
Intel and AMD need to just take a beat, give us a year of no rushed products, and just focus on solid R&D.
@5:43 LOL at "reset yourself" !
Sometimes you have to go sideways to jump the tracks to a new way ahead.
If there’s anything Wendell likes more than new tech, it’s underdogs trying new things and promising a push to the future.
My main interest for a new cpu would be a low power NUC that can run Planet Crafter and FFXIV at 1080p or even 4k.
I think it can be an interesting CPU next year, when it comes down in price and has had some updates to iron out the flaws. I also think this will be more interesting for business desktops.
I kinda like what Intel has done here, to an extent. Now don't get me wrong, this generation is not all that great, but I think it's a step forward. It somewhat reminds me of the Ryzen launch, although a smaller leap in Intel's case: not amazing CPUs, but a lot better than Bulldozer. I think the 385, or whatever they call it, will be the CPU that you actually want to buy. Now if only they would price this generation well...
Arrow Lake is a "gap filler" until Panther Lake on Intel's own 18A process is released.
On the plus side it has increased the number of PCIe lanes and introduced tiles.
I didn’t even realize we were at the point of measuring cpu processes in angstroms now. I still remember when building my first pc that 45nm was still the new hotness I think. I feel old.
So, who buys those "gap fillers"?
@@rifa.3307 if the price was right, I would buy one for my homelab server
The Puget Systems bench looks absolutely solid.
Looking to upgrade my photo/video editing workstation, and the 285K simply crushes all competition with healthy margins. Considering how dirt cheap all these CPUs are now compared to HEDT solutions, it's a complete no-brainer.
Intel reminds me of Kodak:
it didn't want to innovate at first because its monopoly was strong, and when it finally tried to change due to digital cameras, the time had already passed.
AMD is hitting hard. I had a Ryzen 2700, then bought the 5600, and now I have the 5700X3D. Everything works great, and it offers excellent value.
I mean, it's not like they're not trying. Their CPU division was held back a long time by their foundry problems. They've spent countless billions trying to fix it. They went to TSMC this gen to fill the gap.
Also, Intel is genuinely innovating: E-cores, being able to power P-cores off and on, removing multithreading, and especially, Intel is still updating their x86S architecture that will finally replace x86 and remove some of the old stuff.
@@Blackfatrat E cores aren't anything new. Mobile CPUs have been doing this for over a decade. Independently powering cores isn't new either, removing hyperthreading is part of the reason why Arrow Lake is so dreadful.
Intel quite literally sat on its ass during the 14nm++++++++ era, refused to adapt to EUV and to Ryzen's own performance increases and it is now biting them in the ass because their own process nodes are terrible as a result. Doesn't help that Gelsinger actually cancelled the one thing that Intel could've pioneered in rentable units and Beast Lake.
And mind you Arrow Lake is on a better node than Zen 5 and yet fails to best it while using much more power than it does.
This launch is a CF. The new chips are an improvement in power utilization, but they're considerably more expensive than 14th gen, and performance, while generally close and better in a few cases, is slightly lower overall. Any way you look at it, this CPU wasn't quite ready for launch. Not sure if they fumbled this worse than AMD fumbled the Zen 5 launch, but it's definitely a race to the bottom. Even with TSMC's process technology, they didn't manage a great improvement in efficiency.
That aside, it's not a bad chip, but it's not what Intel needed. I'm actually surprised that Intel stock went up today, especially given Qualcomm's announcement yesterday. Of course, ARM's legal actions might have muted the effects of Qualcomm's announcement.
This was the best review! BIG UP!
Not been having the best days lately. 5:41 made me laugh really hard. You're the man, Wendell.
When I test the Core Ultra 9 285K... I mean, when I was trying to test the Core Ultra 9 285K, I had several blue screens of death and Windows corruption, and I only got a stable OS when I took an NV2 SSD from an AMD server that I have here and installed it on the Z890; then Windows did not crash anymore.
Believe me, I was as surprised as you guys, because nobody from the hardware benchmarking side of YouTube that I know of can explain that either. 🤦🏻♂️🤦🏻♂️🤦🏻♂️🤦🏻♂️🤦🏻♂️🥲
Incredible film sir Wendell
Thank you for this review. Your insights are invaluable.
I'm holding out for Panther Lake, where BIG changes ARE coming: PowerVia, RibbonFET, a node shrink, an upgraded Arc iGPU.
I hope by then Windows will be updated like you say it needs to be. Intel NEEDS this, man!
For Intel's sake, I better hope so. However, people have been saying the same thing about Arrow Lake and here we are.
Luckily I snagged the 5800X3D before AMD stopped production and shops ran out of stock, as I can now go for a few years without an upgrade (I managed to make use of my FX6300 system for 6 years before jumping onto Ryzen, with 2nd gen being the starting point), and most games don't require all that much in the way of CPU.
1:38 It's more complicated than that, since there is a second tiny filler tile next to the GPU that is not even shown on this slide and the GPU is on a separate tile that isn't needed for the KF models. So depending on how you want to count and the model, it's anywhere between three (without GPU, filler and base tile) and seven.
Wendell you're awesome, I won't let them hurt you again!!
Hahaha this is the most entertaining review I've seen in a while (and informative too).
Honestly, there's a lot of interesting technical questions here. I wonder how the higher memory speeds interact with the die-to-die clock. Considering AMD's Infinity Fabric had so many considerations when it came to the ratio of the memory speed and the fabric clock, it would be interesting to see Intel's quirks with Foveros between the I/O tile and the core tile.
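For a rough sense of the ratios involved, a back-of-the-envelope example using AMD's side (the Intel-side clocks are exactly the open question here): DDR5-6000 runs its memory clock at 3000 MHz, since DDR transfers twice per clock. AMD's usual sweet spot keeps the memory controller clock 1:1 with that 3000 MHz while the fabric clock sits around 2000 MHz, a 3:2 ratio. Push the memory to 8000 MT/s without raising the fabric and the ratio stretches to 2:1, so transfers queue at the clock-domain boundary and latency can get worse even as bandwidth goes up. If the Foveros die-to-die clock has a similar fixed relationship to memory speed, the same tuning trade-off would apply on Arrow Lake.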
REDDD!!!!! Haha, sorry... I like it. Informative, sensible video as well, as usual
I had no idea that you also brought big humor. Nice.
In a fair amount of gaming benchmarks @1080p, the 285K was clocking to 4.8GHz, whereas the 14900K was holding 5.7GHz. Which certainly seems odd…
The 7800X3D is way too good, and really made Intel realise their CPUs are just way inefficient by comparison. So Intel focused on making Arrow Lake more efficient with similar performance. They didn't succeed, rather they miserably failed, and if the 9800X3D will be the next X3D king, then no doubt AMD will gain massive market share in the CPU/APU sector.
And with this poor gaming performance from Intel, how exactly do they think they will convince me to pay for their new processors + new motherboard + new RAM? 🤔 Sorry Intel, but after so many years with you, now I will go to AMD and the 9800X3D. ☺
Hasn't the gaming thing kind of settled out anyway? I mean, how many channels on here talk about "Nobody is buying graphics cards anymore!"
^..^~~
Great Video. Two Thumbs Up
Wow, your camera has amazing quality! You look so crisp and life like on my 4K HDR600 32" Viewsonic
I probably won't, Wendellman, but I *do* have high hopes for the Bartlett Lake lineup. 🤗
Bartlett Lake probably won't be for consumers. It's focussed on networking and edge microservers.
@@samuelrodgers2742 That wouldn't stop me from trying to get my hands on one (or two) to play with. 😁
@@samuelrodgers2742 Supposedly they are reworking it for consumers, on the LGA 1700 platform. It's rumored to have a 12 P-core model
@@renerant I can hardly wait. (I'm not lettin go of LGA1700 without a fight! 😝)
@2:30 Do you remember when Intel slagged AMD for its chiplets being "glued together"? Tiles are the same thing. Pepperidge Farm remembers...
You’re right… I’m still on my 5800x3D! :)
7600X user here, I'm happy with what I have :)
As had been said by many, Intel really ought to release a desktop gaming focused product - All P-Core, 0 E-Core
It's interesting that when talking about upgrades most people compare it to the 14900K, but obviously it does not make much sense to buy +1 generation, whatever it is - Intel, AMD or iPhone. I have a 9900K and I'm really looking forward to buying the latest Arrow Lake once it's available, because it's a new motherboard and RAM anyway, and unlike the 14900K there is a chance that I may later upgrade the CPU only, if needed. I think I will definitely see a great improvement over my current PC anyway. Since the 14900K is an ideal spec for Microsoft Flight Simulator 2024, and this 285K is close to its performance with much lower power consumption, that sounds promising.
Ex-13900K user, currently on a 7950X. I'm not going back to Intel until they pay me back for the 2 years of constant crashes I suffered. 9950X3D, my wallet is ready.
Oh boy you had to buy a whole new motherboard just to switch eh? Dang that's a lot of dough
5:40 That was out of left field and awesome! 😆🤣
It does seem promising. I think as systems grow into fully utilizing the current CPUs, the Ultra 9 285 will be a golden piece of used hardware over the next 5-7 years. The high-end 13th and 14th gen will be landmines of possible degradation, and that leaves what on the Intel side? A 3-years-older 12th gen, which is very solid but will age out first. Unless the next generation released is just out-of-this-world incredible, I suspect that 285 owners will have pretty good resale value or a lot of incentive to repurpose these chips.
Also, the iGPU on this is amazing. Battlemage when!?
The easy-anticheat bug is a known Windows 24H2 bug and has NOTHING to do with Intel's new CPUs. It will do that on any CPU when using 24H2 along with Easy AntiCheat.
Intel at least is starting to show some innovation. Next versions will hopefully be much better
Once task schedulers get on board with the tiles, we are looking at a sexy power efficient beast.
There's a facet of this that I don't think has been noted as much: these CPUs are built on TSMC 3nm. AMD 9000 series is built on TSMC 4nm. When you note that, the efficiency gains it has don't seem that big of a deal, especially since they still can't beat Zen processors on efficiency from two generations ago, and its performance is often worse than their own last gen.
Just on node improvement alone, we should expect much, much better.
2:55, I'm confused. Why are you getting such low write and copy speeds while running, I assume, 8200MT/s memory? I get better results with 6800MT/s on Intel 13th gen, not even talking about latency. Something must not be right here...
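As a sanity check on what those numbers should look like, a back-of-the-envelope estimate (assuming a standard dual-channel DDR5 setup, 64 bits per channel): peak theoretical bandwidth is 8200 MT/s × 8 bytes × 2 channels ≈ 131 GB/s. If reads land reasonably near that while writes and copies fall well short, the bottleneck would plausibly sit between the cores and the memory controller, i.e. the die-to-die fabric, rather than in the DRAM itself.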
This is on a better manufacturing node than Zen 5. The cost to manufacture these chips must be higher than Zen 5's. The profit margin of Arrow Lake could be a problem for Intel.
Since they outsourced to TSMC they won't have the same margins as internal Intel Fabs, so they can't cut cost as much as they normally do.
It's so sad to see how far Intel has fallen. I remember when they launched the Core 2 Duo, and they were just the best for years. They seemed invincible. Now AMD, Nuvia, Apple and even MediaTek are ahead of them.
Sad
Although I think this might have been inevitable, because they stuck with x86 and were unable to transition to a clean new 64-bit architecture like Itanium.
After all, Apple had switched to Intel CPUs at the time of the Core 2 Duo; they were customers, not competitors. If Intel had transitioned to a less power-hungry, less heat-producing architecture long ago, they wouldn't be in this position now.
Happens to a lot of companies. Start off with the best products. Then sit on their laurels rather than investing and innovating.
The current situation is a result of them sticking with 10nm for so long and doing +++++++++++ rather than moving on.
It's what happens when you try to build factories with low profit and ever larger debt; you then start ignoring other products just to focus on manufacturing technology. Now Intel has neither the product nor the manufacturing edge.
@@squirlmy Well, since the Pentium era, x86 CPUs have been RISC internally with micro-op conversion circuitry. They really missed the opportunity to ease a transition to a clean new RISC ISA by allowing programs to bypass the CISC interpreter, which would have been mainly used to remain compatible with the large x86 software collection. Then after a decade or so, they could have transitioned further to a full RISC CPU combined with some software emulation for legacy x86 binaries (like Apple forced twice on their captives).
To me, such a bypass instruction should have been featured many years ago, at least for enterprise products.
Is it the case that this chip is the beginning of a new platform and achieving parity with its previous design is something of an accomplishment?
I think total power consumption is way more telling than just CPU consumption in tests like this. The platform and its interaction with storage media, PCIe, etc. is an important aspect that most testers ignore. Of course, this requires testing multiple motherboards and RAM configurations to get a clear picture.
Reacting to your opening reference: are you saying Intel has entered the Fifth Dimension?
I think this trend of attacking Intel has become a stupid way of getting views. It's hard to judge Arrow Lake right now. I really appreciate their effort to lower power consumption, which is something they will benefit from in the next generation as well. We have more than enough power for most applications compared to what we had 10 years ago. The real problem today is limiting the heat a PC puts out; it causes multiple issues at least 6 months a year.
Legit question: do people actually buy high-end products for gaming specifically? At this point I get the feeling that high-end products only matter because of energy consumption and sheer performance gains, no matter how small. I wish there were some deep analytics on what gamers actually do, because I have a hard time believing people really want everything maxed, even in the competitive multiplayer genre.
Newegg gives good statistics. The most popular CPUs are the 5800X3D/7800X3D and the 14700K; the 7600X/5700X3D and 12700K are also popular. So basically the midrange CPUs. I don't give a shit about the CPU as long as it has at least 6 cores and was made in the last 3 years. I put all my money into a better GPU LOL
8:00 From my findings (watching reviews), APO seems to make a bigger difference in the 0.1%/1% lows, but in both directions.
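For anyone unsure what those figures mean: one common way tools derive "1% lows" is to average the worst 1% of frametimes and convert that to FPS, though exact methods vary by tool. A minimal sketch with made-up frametime data:

```python
# Sketch of a "1% low" calculation from frametimes (values are made up).
import statistics

frametimes_ms = [6.9, 7.1, 7.0, 7.3, 6.8, 14.2, 7.0, 7.2, 21.5, 7.1] * 100

n_worst = max(1, len(frametimes_ms) // 100)          # worst 1% of frames
worst = sorted(frametimes_ms, reverse=True)[:n_worst]
print(f"average: {1000 / statistics.mean(frametimes_ms):.0f} fps")
print(f"1% low:  {1000 / statistics.mean(worst):.0f} fps")
```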
That skit was so funny. Do more of them xD
That's funny! Not the performance, etc., but that iGPU/add-in driver order issue has been a problem since Intel started putting iGPUs on their chips! I think they may have fixed it at some point (though I don't know for sure), but it's a long-standing problem that I would have hoped wouldn't still be an issue at this point.
So I'm kind of glad I bought 10th gen at this point, with the intention of using that rig's platform on its own for the next 7-10 years, with maybe an 11th gen Xeon swapped in at some point to put it into light-server work mode instead. (Could do the same with the 10th gen Xeons, but the 11th gen ones are cheaper right now I think, and probably will be for a while yet.)
It's probably going to take up to 10 years from that point of original purchase, I figure, before Intel is able to catch up with itself in terms of time to cook its products. They're releasing too fast, so they can't work out all the small kinks and issues that are popping up post-release. An unfortunate side effect of competition as it were, not that competition is a bad thing overall. Intel went from being like a slow-service restaurant, carefully preparing the food to ensure maximum quality (ideally), to being more like a fast-food place, where you aren't getting what's shown on the menu, and you'll be lucky if the order is even prepared properly.
It's going to take time for them to recover from that problem. And it's not like AMD is going to quit trying to take any market share Intel has remaining on a YoY basis.
AMD, however, could probably benefit greatly from a general slowdown of CPU releases, as could the global community as a whole, in my honest opinion. I think we need some time to come to terms with the general performance of what we have right now, and put it all to its most effective use first, before we start advancing further, at least in terms of pure performance. Instead, they should (IMO) swap back to efficiency-oriented designs that keep the performance of the best releases out right now, or at least incrementally increase performance when efficiency gains allow for it.
The way I see it, the more these products get released, the faster resources seem to get gobbled up by programmers and developers with their projects and products, which just feeds back into the loop necessitating better releases overall.
Meanwhile, while not a cheap rig to build, we have the tech right now to produce pretty convincing virtual worlds, to the point of uncanny realism. At the current release cadence, it won't take long to bring this to a point where even cheaper rigs can manage it. That's not necessarily a bad thing, but I don't see it as a good thing either, with the constantly increasing resource requirements to run many of these things. I.e., I think the bloat we deal with from a lot of programs and games today is mostly due to the lack of any need to optimize anymore. There's always a 'supposed to be better' tech release on the horizon, so they aim for that instead. I believe this does us a great disservice down the road, when we as a global community are trying to reduce the amount of power our tech consumes for the tasks being run.
So I think we need them to take a step back for a while and release slower, with a focus on efficiency again over performance. We have performant enough tech right now to do a lot of things. We need to get better at how we use it first, I believe, before we continue the march upwards in performance and resource availability. In some ways at least.
PCIe lanes for example could be fleshed out better, but that's more of a motherboard side problem as I see it.
Possible RAM amounts could be increased, but right now 128GB, give or take, is probably enough for the general needs of 'most people'. For those who really need more, there are specialized platforms for that.
GPU VRAM amounts are approaching numbers that necessitate some rather beefy rigs, RAM-wise, for balance reasons. And that's just the gaming GPUs; the workstation ones double that at the top end.
Like, CPUs aside, none of these things are exactly "cheap". And by the time you have finally built one of these systems, if you are piecemealing it for cost reasons, the next release is already out, with the one after that set to release only a year later, if that. At that kind of rate, it's only going to take another 3-5 years before everything can do 4K at 240fps and 8K at 60fps, at otherwise max settings for textures and such, on the gaming side of things.
Who knows how ungodly the workstation and server side of things will be at that point. 512 cores per CPU, before taking into account any SMT/hyperthreading, which may not even be a thing anymore by then? 1GB of L1 cache? 16GB of L3? Might seem ridiculous, but that's kind of the point. We are going in that direction, and it won't be cheap in either upfront cost or power costs afterwards.
And a lot of that could be a good thing for a lot of other reasons. But with the quality of the tech being released as of late getting worse and worse, it seems... is it really going to be worth it, chasing this dragon as it were?
Should we maybe instead be focusing on maximizing the capable use of the current tech available instead, with slower releases that are given more time to cook, so as to reduce the potential of wasted time and resources on garbage product releases?
I think the answer is yes. But to those who took the time to read through, even if you disagree: say what you want at this point. I've said my opinion on the matter.
I tabbed out for The Age of Aquarius. Good stuff.
Outlook not so attractive for Intel CPUs. I hope they get it together sooner than later because I don't want AMD to get complacent.
that couch felt a lil more casting than therapy....
If anything, it's making me think about 14th generation, since I can keep my 128GB of DDR4 RAM. I just bought that RAM last year and I want to keep it a while. Actually, I'll probably just keep my 5900x because I just don't want the risk of my CPU dying early. The 14th gen Intel CPUs kind of scare me.
He got me again with that box, I clicked on it again thinking it was a floppy disc
5:55 Gosh I love that red shirt
Well, coming from an ex-Intel employee, I can tell you guys that there is a lot of stress around releasing a new CPU, and sometimes corners are cut for different reasons. Mostly to reduce the complexity of the technology, allowing the engineers to understand it and onboard the technology easyer. This approach is necessary in large companies, because it's harder to coordinate all the teams working on the new platform.
But this process is the same for all large companies, except for startups. Basically, they all do small incremental improvements.
And when the technology changes this much, it's good practice to roll back on some stuff, like hyperthreading, which I would imagine cost more in power consumption (and die space) than it's worth.
But I agree with everybody! They botched it with the gen 13 and gen 14 bugs, but in hindsight I can understand why it happened: it was a corner case (a very common one, where cooling is adequate), but nonetheless a corner case that wasn't represented well enough in Intel's development labs.
After this huge problem with the gen 13 and gen 14 CPUs, and the fact that everybody else is outperforming them (AMD and Apple), they should have lowered the price much more and treated this product as a first-gen, low-cost CPU, especially since it requires a motherboard change YET AGAIN!
New motherboards every 2 releases is a big reason I switched to AMD 7000 last year. I have some faith that I'll be able to reuse the board when I upgrade the CPU in 4 years. I don't want to e-waste a $100-200 board every time.
I never see enough generational CPU uplift to justify upgrading the CPU on the same mobo/socket (nor keeping the CPU and changing to a newer mobo). So the socket thing is mainly just a matter of better production economies for stuff like coolers and chip packaging. Also keep in mind that AM4 and Zen 1 → Zen 3 is an extreme outlier situation.
But I also start with the CPU class that meets my needs right from the start and run machines for more years than most people.
As opposed to getting an economy CPU now and a flagship replacement on the same socket the next generation, which is usually rather poor economy. (Especially once you factor in differences in mobo features and such, and my old flagship will still be about as good as the newer econo CPU.)
@@nickeshchauhan5661 Requiring a new motherboard, with no guarantee of another CPU upgrade, is Intel's biggest sin.
Arrow Lake launched slower than both the competition and Intel's previous gen, with no future recourse. What incentive is there to buy into this platform?
easier* You worked for Intel?
Anandtech and Techgage both seem to have gone down and are sorely missed.
I was under the impression that these big tech companies were way ahead in their real research compared to what we get to see. Like they could build an i9 19900K if they wanted to, but instead just make incremental generational upgrades to ensure they make their money back each generation and can still show product improvements during times of slow development progress. But this proves I was completely wrong.
Incremental? Both Intel and AMD have made leaps in generational engineering. It's gonna pay forward for both of them.
@@korcommander They meant incremental as in: while their actual knowledge at this moment extends to building something as far in the future as an i9 19900K, what they actually release in the present always lags behind, because that's more cost-efficient than trying to build that mythically advanced CPU. Intel has for sure made advancements in CPU tech, but as a business they currently lag really hard behind AMD at delivering good products. If they had that knowledge roadmap set out 10-15+ years in advance, how are they fumbling right now?
It takes quite a few years to turn their research into a product you can buy, especially when it requires a clean slate design. That's why it takes a long time to turn a company around when it's not doing well. For example, if AMD could've replaced Bulldozer with Zen any faster, they would've done so. Instead, they had to incrementally improve on that architecture while they made sure Zen actually worked. Rushing things to market is more of a problem than holding things back, because it can go disastrously wrong.
Overclocking is done with the new CPUs; they're already set to race.
Rather than all these thread manipulation programs, why can't Intel and AMD work with Microsoft to update the Windows Scheduler for upcoming launches? Why is that not an option?
APO is too much of a temptation to fiddle scores, IMO; cynicism says it will rarely be iterated on unless a new flavor-of-the-month benchmark pops up. Same with the "AI" thread and power governors. Intel really needs to earn some good karma.
0:50 whispers "Wow, that's a really old reference". I assume most people will assume you're referring to the 1967 song from the musical "Hair", but by definition the Age itself lasts 2,160 years on average: one 25,920-year period of precession, or Great Year, divided by the 12 zodiac signs equals one astrological age. So, yeah, "REALLY old!" lol!
I always run insider previews! Canary Channel, it is FAR more stable than the release versions because Microsoft fixes all the issues MUCH more quickly.
Is it really a memory controller advantage, or is it more likely the loose coupling of the chiplets to the I/O die over the Infinity Fabric?
So what do you think the theoretical performance loss is from the removal of hyperthreading?
Why all the gaming benchmarks and not tests with productivity software?
Because then it makes Intel look good
Happy to see the 7800x3d crush Intel’s new CPU in gaming. Keep working Intel.
Pat Gelsinger: * nervously adjusts his rear view mirror
@@Hugh_Ipat "wh.. wh- where is AMD!?"
But the X3D is very slow in Cinebench, Blender...
They are 50% slower.
You think your x3d is better than 9950x?
@@mikeree8543 In gaming... absolutely... it's a gaming chip. That's what it's for. If you had read my post, you may have realized I said "gaming".
@@mikeree8543 OP clearly said "in gaming". The 7800X3D is indeed better in gaming. Obviously in MT heavy production workloads, the 9950X is the current desktop king.
Great video. Arrow Lake is a stepping stone. It may take another two years for Intel to beat AMD on the PC end.
Where have I heard that before, recently.
Let's be real; the shoe is on the other foot, for now.
And I hope it stays there for a while. At least till AMD gets complacent.
@@SpecialKLSX Pay attention to the background stuff: Intel is stuck because its old P-core has run out of steam. Now it's shifting effort to beefing up the E-core, and we can already see the potential in just one gen. If that perf leap can be largely retained over the next gen or two, taking the lead after two years seems likely.
On the other hand, AMD's advance slowed down quite a bit with its Zen 4 and Zen 5 releases. Unless AMD can wow people with its next gen, which I doubt, I'll bet on Intel leaping ahead.
Looks like they still don't have the power gating and tile interconnect figured out properly. I think they are leaning more to the stability side of things than the performance side at the moment. Let's hope future BIOS updates improve the situation. Also, your Geekbench score is interesting: the Geekbench browser shows scores of 3000+ in single-core... unfortunately, the system information there is terrible, so any comparison is impossible.