Intel was pushing the voltage for years and last GEN they killed there own CPU's. Now they need to get more afficient and that kills performance. AMD was on the afficient horse much sooner thats why they are better at the moment.
Tech YES City. please compair new intel ultra5,7 14600-14700 with new ryzen 9600x-9700(pbo max , ram 6400)and new 7600x3D-5700x3d on windows 11 24h2 (in benchmark charts always write priceUSD too)
They only performs well in a select few applications compared to AMD. So it is good CPU`s only for a few people that uses the applications they perform well in a lot.
@@PowellCat745 nice point, so Intel dropped all sanity in thinking just using a new node would give them the advantage, as always? Just buy it and be happy?
@@gertjanvandermeij4265 Not true unless you are only using it for gaming. If you don't use it for gaming and look at it on Linux, you see what scheduler improvements will improve it greatly (also microcode improvements)
It's basically the price of the b650 and 32gb plus 7800x3d bundle in OZ kinda crazy that they got such poor perf from the newer expensive node from TSMC
Exactly! They've been doing this since skylake 9 years ago.... Until eventually the highest end, Ks SKU don't even have any margins left for overclocking (running at thermal limit and power limit already, at default settings)
Stability feedback is what is missing from most tech reviewers. What's the point of a fast computer, if it crashes all the time leading to loss of work, or being unable to enjoy the game.
underrated comment tbh, I personally love talking about stability now and yeah the AMD products have come a long way in my personal testing, nothing unstable on their latest stuff.
@@techyescity I'll gladly sacrifice 10% of my fps in games for a smooth, seamless experience on the desktop. I envy people who "can't feel the difference".
@@arc00tai swear so many people are ignorant. Windows 11 is less snappy and an overall HORRIBLE experience if you like a fast PC and yet people said "it's better than win10". People have never mentioned how much better a ryzen system is once you tune the ram+ ESPECIALLY the infinity fabric. from what I can see most people are just console plebs who can afford a gaming PC and can't actually tell the difference between 60hz and anything higher, they just look at the fps counter and placebo their way into the comments.
Yes, because it draws more from the 24pin, at least his MSI board did. Which, given MSI's recent spat with AMD...might have been intentionally misleading. That's tin-hat nonsense tho.
I'm still rocking an i7 8700 on my video editing rig and it's doing just fine. My daily-drive for online web-browsing is a third gen i5 and (thanks in part to Linux) it's also doing just fine. I'm not a gamer so I don't need to live on the cutting edge and suffer the bleed-out that often results.
I'm running out of Intel gens to SKIP, i felt awful smug about waiting on 13th/14th gen after that debacle... waiting patiently with my 12900K for a worthy successor...looks like I'll be waiting a WHILE.
@@saricubra2867 Single thread cinebench scores =/= real life use Even if those scores reflected gaming performance X3D for Zen 5 just drops in a few weeks, so what is your point?
@@saricubra2867 Single thread Cinebench scores =/= real life use. Even if those scores reflected gaming performance, X3D chips for Zen 5 drops in just a few weeks, so what is your point?
@@haukionkannel Which makes is strange as the NPU isn't strong enough to official be used with the new AI stuff from Microsoft (MS says you need an NPU with a minimum of 40 TFLOPs, while the new Core Ultra tops out at only 13).
I miss slot A Pentium III era. Moore's law was really kicking ass back then. True doubling every generation. I guess it makes sense that generation leaps decrease over time, but greed is like sand in the gears of progress and this is not the way to make progress.
Imagine the outrage when AMD got a 5% improvement on average when releasing the new CPU generation, but Intel coming up SHORT in performance of a THREE GENERATION OLDER CPU 😂😂😂😂
This has happened a few times , Intel P4 was slower than Pentium 3 Coppermine. AMD did it a few times also . A lot of it is due to poor management and trying to please shareholders.
Only in gaming though; in fully-threaded productivity workloads it's about 50% faster (than the 12900KS you're referencing), while Zen5 wasn't more than 5% faster compared to Zen4 anywhere, so the unsatisfaction with the latter was quite deserved. That said, I agree with Brian's assessment and I wouldn't rush out to buy Arrow Lake either. At the very least it seems prudent to first wait for 9800x3d (and I guess 9950x3d as well), then go from there.
@@HenrySomeone Zen5 is actually upwards of 30-40% faster than Zen4 in INT8/AI based workloads, and AVX512 is also up by 14-16%, so yes it absolutely was more than 5% faster somewhere. It's just that where Zen5 is significantly faster doesn't matter to the vast majority of desktop use cases. But this also isn't too surprising, as Zen has always been first and foremost an enterprise/server focused architecture, as that's where the real money is, and Zen desktop is just the leftover scraps given to us plebs.
Frametime stability was Intel's strong suit with Alder and Raptor. The glue dip hits hard this generation. I wonder if the lack of HT played a role in it too. i9 9700K was also a mess when it came down to frametimes. I'd love to see a 285K vs 14900K HT off (gloves off) comparison if possible.
In the vast majority of titles (of its day), the 9700k was far from a mess, both in average framerate AND the 1% lows. There were just a couple of outliers, the others were (much) later releases.
My old 9700k is still in use by a friend of mine and still runs great, never noticed any frametime issues with it when I had it. HT doesn't make nearly as big of a difference as influencers and intel would like you to believe because the HT's don't support the entire instruction set and never has, hell p4 HT only supported TWO instructrions every other instruciton that HT was 100% idle. Remember when Linus made a huge deal about how good HT on the P4 was and how it improved performance "across the board"... yeah it could only add or subtract it couldn't poke move or do anything else it literally improved performance when add and subs were used which is objectivley nowhere near "across the board". It was a scam at first it's probably still mostly smoke and mirrors.
That freaking latency issue is killing me on 13600k. It happens unexpectedly and lasts for 2-3 seconds. Sometimes won't happen for days other times it happens twice day.
Are you sure it's not SSD latency? I've had some spikes like that which were drive related. Try also m.2 vs sata. Clonezilla your m.2 drive to a data one and use that for a week see what happens.
Yeah, the latest cpu that is completely without this issue (at least inherently, without deactivating the e-cores), is the 12600 (non-k), which of course isn't exactly a top performer...although there are some motherboards (with an external clock generator) like MSI Mortar Max that do allow overclocking.
These CPUs are not designed for consumers. They are designed for the server and data center market because that’s where the profit is. They care about stability, security and power consumption. These CPUs without hyper threading are more secure, the new architecture is more stable and the power consumption is much lower than the previous three generations. For a company with lots of servers, this CPU family is very attractive.
Should hold up and perform pretty well for the next while at least, I've got a 12700f and 3060 12g and the way it performs at everything I'm sure it's good the next 3-5.
@@jeremyniels I have a dedicated Linux Mint machine and it’s so much better than any Windows OS in that it just keeps going. Haven’t had an issue with it in years.
Been watching the sales of the socket 1851, and nothing is moving its all just sitting around, waiting for reviewers to test stability - Just shows how much confidence consumers have with Intel right now..
Intel pricing these chips like people have the stupidity and cash to invest in a slower chip and a more expensive platform upgrade, they're delusional to think it!!
Tile-to-tile latency as well. That's why 7800X3D is a hair faster than 7950X3D in gaming despite having half the cores and lower clock speed. That single CCD design allows it to behave almost like a monolithic CPU.
It all comes down to perception and marketing. AMD kept supporting AM4 well into AM5. Can you imagine Intel making a 3D vcache version of the 14600K now? I don't think so. They need to get creative with their products, cause if this is their best foot forward, I see their stock price dropping even further. 😅
Motherboards start at $200+ with full features held at ransom for $500+ CUDIMMs - Only ones for sale thus far are GSkill 48GB-9600 kits for $400 Looks like 48GB may be the standard for these as all of the reviewers received 48GB kits from various vendors. Your $500 motherboard probably won't get 9600MT, nor would you want it to because of asynchronous performance dropdowns... Your motherboard would need to be able to run a regular UDIMM at 8600-8800MT for a clock driver to punch it up to 9600MT.
@@douglasmurphy3266 Which begs the question, who did Intel design these chips for ? Certainly not gamers. Maybe for productivity ? For the price it would be a better option to get the cheapest Threadripper system as the cost wouldn't be that much more and the potential of having a lot more I/O than desktop parts is worthy of consideration. Both AMD and Intel desktop parts are closer to console/ mobile chips than they are to workstation/ server parts and they are priced way to high for how gimped they are by comparison. If one of those two companies could just come out with a CPU that could use all four memory channels at full spec and offer at least 32-48 PCI-e gen5 lanes direct from the CPU for around $500 I think there would be a demand, especially if it was clocked decent and had something similar to 3D-Vcache.
@@douglasmurphy3266 Very interesting pricing discussion. One of my fears this generation in beforehand was that Intel would pull something like that video of Linus in the rain at computex back from many years ago, where he was complaining that, I think they made it so that even when you bought their super expensive motherboards for HEDT you had to buy a hardware key and plug in to that motherboard if you wanted to unlock more features like hardware RAID or something. Anyways, point being that they would lock features into higher pricing motherboard segments, and we saw the "higher end ILMs", and I feared that CUDIMM or CAMM2 or whatever would be locked in to super expensive motherboards. I saw some cheaper kits of CUDIMM DDR5 7200, but then it turned out they were 24GB kits lol. Or then on the other hand, the G skill 8200 kit was €400 when I checked. But you can't even buy it yet, it comes into stock in 2 weeks, begging the question why launch now when it clearly isn't ready and you can't buy the RAM they want you to use with the CPU yet. All this makes me wonder, would it have been cheaper to do, let's say, quad channel DDR 6000, with 4 dimm slots rather than trying to get 9600 on 2 modules and dual channel? Obviously quad channel requires a more expensive IMC but you require such insanely high quality memory dies and other stuff to get those modules to run at those speeds. I think AMD/Intel have tried to replace HEDT with high end regular parts, and in my opinion they are running up against a memory wall with what you can achieve on dual channel for the highest core count SKUs (maybe AMD moreso than Intel). I think it was funny when I had my X58 chipset and i7 920 that it was triple channel DDR3, maybe Intel/AMD should explore options like that. The y-cruncher guy wrote an interesting article about AVX-512 on Zen5 desktop, and that, if you use the 9950X with AVX-512 instructions on regular dual channel memory, you have to do roughly 300 vector instructions per memory operation to not be bandwidth bound, highlighting how silly dual channel desktop platforms at over 5GHz are for those CPUs, where they in the server world would be running maybe 3 GHz on 12 channel memory making the bandwidth equation more favorable.
It would've been nice to pull one of those AMD products in your test to put in the 14900K, I'd like to know where it stands. Also maybe the 14900K with hyperthreading turned off because the 285 doesn't have hyperthreading on the chip. You always have awesome videos.
I was waiting for these reviews to come out and boy am I glad. I am upgrading from a 8700k and clearly not going intel now. Debating between 9700x or 7800x 3d
Was ready for a 285k build in a week or so. Now... going AMD. Crosshair X870E Hero, 64GB 2x32 Corsair Titanium 6400 @ CL32 and waiting till January to grab the 9950X3D.
@@BeastMortThe yeah im considering going 9700x and B650 ROG board and 32gb ddr5. I got a 4070ti super a couple weeks ago for a steal ( $650 NZD ) and sold my 2080 super for $500 nzd lol
Arrow Lake is so disappointing! Even a 146€ 7500F or a 197€ 5700X3D offer the same (or higher) gaming performance. Edit: 5700X3D is only 174€ right now.
I agree with your intro completely. Even though I use it for non-game use sadly even though it is ok for that it's a Beta test for their new chiplet style fabs. I assume if INTEL can survive this the next version will IMHO be exceptionally good just like the way AMD went after they changed to chiplet same Beta route from them. Try clocking the Efficiency cores that's where you will get bigger boost clocking the Performance cores has a negative effect because of the restrictions on them. It's a Beta so it could be a good buy at a Beta price of £490 or 968.26 aus
For most people the difference in frame rate from the top end performer vs the core ultra cpu is negligible. Most people😅 wouldn't notice the difference of 10 to 40 in frame rates, particularly when above 60 fps. Additionally for most people good performance and stability is preferred over bleeding edge performance and instability.
absolutely this.... imagine stating with all seriousness thats this is a terrible cpu because it runs 40 fps lower..................................... while pulling +300 hahaha LMAO this ppl are not real
The issue probably relate to optimizations in the sheduler and the branch prediction engine, which has been made a lot deeper than previous processors.
Seems that with Cu-Dimm at 8000 and some overclock the 285K can give better results and sometimes it can beat 7800x3d. However I feel disappointed, this cpu may show some muscles after bios and windows 11 scheduler will improve.
Will you be testing the latency of these Core 5, 7 or 9 Ultra chips like you did with AM5 and 12th and 13th gen CPU's? I am very curious if these new chips are at the 10th gen level like you prefer.
Just wanting to check, was this done using APO or with All core enhancer enabled? Intel cores are split between groups of different performance, such as performance cores and efficiency cores. By default Windows does not understand the cores are of different speeds so window's built in task sceduler will place programs on random cores that are based on lowest usage by percentage. This is bad as that means games very often get put on the efficiency cores and not on the performance cores resulting in them running at a far slower multiplier. APO basically takes over a part of the process or is meant to so that ether it detects or you specify what needs to be ran on high performance cores. If that fails you can also enable in a lot of motherboards something called "All Core Enhancement" which brings the efficiency cores up to the same multiplier as the performance cores. (This does drastically increase heat so make sure your setup can handle that) But without any of that the efficiency cores are just that, they are designed for efficiency meaning their single thread speed could never even contend with AMD as they are not meant to, but have yet to see a single person say they got an improvement using APO and no ones touched all core enhancement. My Intel 10980XE started of at 4.8GHz for the first 4-8 cores then it drops as far as 3.2GHz for the performance cores at the end. but I had 18 cores and 32 Threads which meant I didn't have to worry about the slower cores so long as I used ether Intel Turbo 3.0 tool or All Core Enhancement as that allowed me to get to 4.8GHz on all cores and even overclock all cores to 5.1 or selective cores to 5.2 so long as everything except 8 cores was at 4.0GHz and 3.2GHz with all core enhancement off, but then that required manually telling Process Lasso to take all applications off the first 8 cores to dedicate them to what I wanted to use them for, in a sense doing manually what APO and Intel Turbo 3.0 was meant to do. But my performance would skyrocket by doing that stuff.
Since Intel can chop and change tiles around their die, it would be very interesting if you could run the same test for the 15th Gen core ultra series that you did with the Intel 11, 12, 13 & 14 Gen to see what you discover !
The productivity and power efficiency specs of this CPU are very good though... and gaming performance is pretty much on a par with the best CPUs out atm. Im just baffled why Intel doesnt release a gaming version with high power draw. Its almost like they've given up on gamers
The only issue I have found with AMD longevity is that it's almost impossible to sell your old AMD cpu at a good price as people only wish to upgrade their old systems to the latest CPU. Which, of course, makes the older cpu's available for low cost builds for new buyers.
The 285K and the 9950X are both designed for professional uses. They are both appreciably better for companies who care about saving a few minutes on a render and overall power use. Neither is intended for gamers. Intel isn't even bothering to create a gaming SKU for Arrow Lake. For the 4080 and below, everything from a 13600K/7600X to a 7800X3D or a 285K has roughly the same performance in average FPS at 1440P and higher. There are almost certainly places where a CPU makes a difference (turn time in the late game of stellaris/civ, FPS in huge RTS maps with a huge number of units, world updates in flight sims), but I rarely see those being benchmarked.
yeah, would love to see some of those strategy titles getting tested with late game lag, like Stellaris and other PDX titles and not to forget X4 and civ as you mentioned, a cpu that can master those titles in late game would be my instant purchase without hesitation.
@@ashamahee With FS2024's "ideal" PC being a 7900X and Civ 7 Ultra config being a 5950X, the games I like seem to be moving from 8 cores being ideal to 12 to 16 cores being ideal.
they dont have the BALLS to pin your comment, I had to slam the table and scream "THANK YOU" because all this gaming performance is BS, and even with latency lows whatever, you put this on a mid to high end GPU above 1080 in high settings and all motherfkng cpus perform within a 5% margin.... FFS
@@arch1107 i meant good as in big price cuts, at the right price these CPUs will also be good for someone. Only problem would be motherboard prices. AM5 and even AM4 still makes more sense.
@@ashenwattegedera lets put the idea in numbers how much would you pay for a deal on this new cpu, ram and mobo? 500 dollars, 400 dollars? i would pay for the combo 100 dollars thinking that t might have a problem i just dont know yet but i will eventually discover and sell used or give it away at that price, no one wins, intel will not survive another 2 years, selling on discount, mobo partners will stop selling new models it sounds exaggerated, but if you look at amazon, the model is selling the most from intel is 12th gen i9 at 300 dollars and that is expensive, it is just a old intel product
Isn't anywhere near trash. You idiots do know that gamingbis a niche and pc gaming is ever less relevant. Intel 13700k and 14700k are equal to zen 4 and 5. The only cpus that pull away, amd Included are x3d and even that isn't by much in 90% of games. To call them trash just shows how delusional you are with the amd cult shit
I think that Intel, like AMD, should get some slack knowing that this is the first generation of a new architecture for both. For non-gamers this CPU seems fine, those are the majority of the market, certainly if you include laptops. But gamers should skip on this generation. At least AMD had some progress for gamers, around 13%. The great thing of the first round of a new architecture, in the 2nd and 3rd round you usually get a lot of progress because there is a lot of low hanging fruit. You also saw this for example with Intel Core (1000 and 2000 series) and Ryzen (Zen2 and Zen3).
I hope (for Intels sake) that the gaming performance issues will be resolved soon. It would be great if You could do some testing to see if the latency issues that you found on the previous generations are still present on the 200 series. Anyways, it doesen't look like there is any reason to replace my 13700K any time soon 🙂
This is unrelated but I had the displeasure of meeting one of the most disrespectful Intel Employee in person. He was getting coffee at my local parlor. Not only are they not up to par with AMD but their employee do not know respect for others. Happy I switch to AMD..
Since Intel exchanged my 14900k it has been rock solid. Even more then my old 12900k. Although I will probably switching to the 9950x3D as the Ultra can not beat my current 14900k in gaming. I think Intel should have launched Arrow Lake in the XEON space would have been very good and nobody would have complained about its gaming power
I had a nice 285k system ready to build next month and now I'm just frustrated. I've been Intel all the way since getting into PC's over 16 yrs ago, but it's time I try AMD. I'm actually gonna be grabbing a Crosshair X870E Hero, along with 64GB of Corsair Titanium 2x32 6400 @ CL32 in a week and then wait till January for the 9950X3D.
@@BeastMortThe I had trouble with some ryzen systems but ever since ryzen 7000 those systems all work without a problem. I also already have a r5 7600 at home that works perfectly. Now that Intel is really out of the picture, I only need to wait for the x870e ace and 9950x3D to launch. Unfortunately there is absolutely no reason to with Intel anymore now that also the gaming performance is that far off.
Sounds like unnecessary expenditure to me. The 14900k is still a gaming beast (more than this PoS) and I'd personally be waiting for a more substantive upgrade in another gen or two.
@@sammyflinders Exactly, fully tuned it actually beats 7800x3d in most titles (with the Ryzen chip also tuned as far as it will go, mind you). Check Frame Chasers channel for proof.
I wasn't, been expecting a Rocket Lake 2.0 level performance. This is making my 12900KF look like a champ. Still rocking with 4090. I only use P cores, turbo boost off, went Linux, 48gb DDR5, am stable. Each Intel has been making me spend more time in the BIOS than I care to past few years. When I helped my friend with 5800X build in 2020 it was really no fuss.
Last real innovation was 2600k that was 2011, ever since then they did those 3% increments between generations, and now they probably lost all talents who could make positive change.
I would dare to say X99 was the last big jump we had with cpus overall, AMD noticed this and decided to base Ryzen on this to take the lead since 6th gen intel Skylake was still the 4c/8t standard.
Absolutely this. I finally upgraded from an Intel i7-2600K that I built in 2011 to an AMD Ryzen 7 9700X on X870 whilst watching this Intel shit-show unfold. The technology itself would get a pass IF it was priced accordingly - which it's not.
If it was the same price as the previous gen, it might make sense as a transitional product to the new socket, but charging a premium price seems ridiculous.
Intel prioritized power consumption because of the 13th and 14th gen heat issues along with the CPU Degrade issue, so they wanted to make sure they took care of that.
in case you didn't know, "AVX512" isn't invented by Intel or AMD, but rather X86 Organization. On AMD, AVX512 is written as is But on Intel, 13th 14th 15th Gen, it is written as AVX VNNI or AVX 10.1 Both are just Alias name that contains AVX512 if you don't believe it, just try to run any Benchmark Tool that specifically ask for AVX512 and refusing to run if the CPU is not capable of AVX512. Needless to say, 13th 14th 15th Gen intel will run AVX512 Benchmark immediately
I had a chuckle years ago when Intel criticized AMD's "glued-together" processors, even as the writing was on the wall for the monolithic approach. Turnabout is fair play, eh Intel?
Intel likes to take advantage of consumer's typically poor memory. Intel originated gluing chips together with the Pentium D (2 single core dies on a package), after AMD released the actual dual core Opteron. It was immediately funny to me that they made that statement, because they literally did it themselves. Intel often tries to downplay the competitions winning strategy just long enough before eventually needing to copy it.
I'm waiting for some sells/promotions before I do my new build for 4k content. I'd certainly wait and see if some windows updates, BIOS, or microcode could adjust performance. Compared to productivity the gaming performance just seems off like its either not optimized for the new architecture or lacking some updates. AMD had a similar issue right outta the gate with the Ryzen 9000x launch, and being nerfed by poor optimization in Windows. I agree with some others that Intel may have jumped the gun on their launch, and the CPU is poorly refined/optimized atm. I wouldn't expect and phenomenal gains, but hopefully in a month or two, or by New Years maybe we'll see a little uplift. Otherwise I still expect AMD 9000X3D to slaughter it in 1080p/1440p gaming content, in most aspects. Theres still a fair few titles that still prefer Intel or regular AMD 9000/7000X chips. As for 4k and 8k gaming it's more GPU dependent, the performance margins for CPU's typically averages 2% spread overall in top ten charts. It's my main problem with reviewers and gaming benchmarks, most only show tests in like 10-15 games 90% of which I don't play, while there are hundreds of other popular titles, and thousands of games available. If I were to do a true broad benchmark of gaming performance averages, I'd refer to some charts like Steam or others for the top 100 most played games. Quite a few of which are F2P and would only cost a reviewer some time. A vast majority of the people on my discord, aside from most being in our 30's to 40's, play a lot of older casual titles like Minecraft, 7DTD, LoL, WoW, Terraria, Core Keeper, Palworld, and many RPGs like Elden Ring, Elder Scrolls, Baulders Gate, Fallout, and many RTS/TBS and Roguelite/Rougelike games too.
Alright, my honest thoughts are that intel's 2xx Ultra series is a tactical lineup, and that the prices are on point regarding UK at least, 285K is £550 and for the category it makes sense, no way AMD's 9800x3d is going to be less than £450, even though it will be better. But i often defend Intels motherboard lineup as most the time i can never find something to specification at a decent price on AMD (and other issues i have personally) I feel this is good to know as its saying the "intel premium" does not exist and they want the numbers with the 245k something that seems to be the ideal buy compared to the 9600x As for the performance, we can hope it gets better, but i feel its restricted by design, in both power and temperature, that might be due to the drama of intel lately, or it may be the fact they already know they cant really push much other than speeds and on die fixes, its another standby product, but lets not forget, that hyper threading is not a thing now and most games at least knew it existed even though probably 1% of games can actually use it properly. All things considered, its a new direction that can lead anywhere, its more respectable as an option, personally im gonna look at how the T series comes out, but also might have my eyes on the x3d release, it all seems underwhelming regardless.
36 cores? i doubt windows can handle such ammount on games properly, perhaps on productivity but at those numbers, just buy a threadripper, you get those cores and double the threads
13 and 14k possessors are fine if you limit your PL1 to 125W and the PL2 to 253W and set the Voltage limit in the bios to 1400. This keeps the VIDs fixed at a max of 1400. My 14900k runs like a dream over 1 year old (Pc used for 2D and 3D rendering). Don't use the intel defaults microcode fix its a waste of time. The 14900k has a life of 10 years easy if you control the settings. Intel's default settings can often be too conservative for many users, limiting their CPU's potential. By carefully tweaking settings like power limits and voltage, I've achieved significantly better performance and lower temperatures without compromising stability.
there is nothing fine on those two generations of cpus, that is why the company and the new products are where they are. no use should enter in bios to do any of the crap you are saying to avoid the cpu destroy itself, never.
@@arch1107 Intel's decision to overclock their CPUs can be a double-edged sword. While it provides immediate performance gains, it can also lead to stability issues and increased power consumption for less experienced users. As a skilled user, I've been able to manually tune my PC using XTU to achieve optimal performance and stability. By adjusting PL1, PL2, and core voltage offset, I've tailored my system to my specific needs. However, this level of customization can be daunting for less experienced users.
You should know by now that first gen of anything is auto skip.. CPU, GPU, Cars and so on. First RTX cards, first gen Ryzen. By the second, third iteration it will be good and stable.
@@ZackSNetwork whilst you are right about the latency and other issues, The fact that pc had the longest lifespan out of any I have owned as i went from 1700 to the 2700x and is currently rocking the 5600, all whilst using the original hardware minus the GPU, that is a quality of its own and by time I got the 2700x it made any previous issues worth it. Also thanks to the success of the first gen Ryzen we have the performance we got today as intel was and still is complacent.
would be interesting to know what ddr memory type you used for overclocking alder lake for cyberbunk benchmark, just for that I would purchase one, but it may not be future proofing for value.
intel saying, just forget we make 13th & 14th gen, we're restarting at 12th gen but with a new chip design. Im moving to AMD, first time since 2012 as i've been on intel since 2014.
Man could I ask you to do some test? You're the only one person that is able to do this with the transparency that has always distinguished you. 285k overclock E cores Vs E cores stock gaming. Vs 13/14700k P core stock /e core stock no HT VS 13/14700K p core stock / e core overclock + no HT benchmark on 1080p, competitive FPS games (warzone, apex, valorant, cs2)
The new chips should have been the 14000 series. Intel now is 2 steps behind. There is some hope tho. Power consumption did go down, that was a move in the right direction. Now Intel needs to keep power consumption low while making the chip competitive.
Wow so turning off E cores this go around also reduces cache? Thats brutal. Won't see some doing that now as most games need that cache. Wonder if it's helpful just reducing the E core number count instead of disabling them all.
The architecture is really for data centre and enterprise, with laptops a 2nd thought and gaming about 9th on the list. The only reason Intel released it for desktops was to amortise the R&D costs over a wider product stack and keep OEM contracts happy. AMD have done the same with the 9000 series, gamers will be sticking with the 7800x3D for a while
No, this is a CPU for no one. Why would any data centre or enterprise buy this thing, when it crashes, has inconsistent performance, hybrid cores (hell for optimization and scheduling), AND costs the same as 9950X. Nobody sane would buy this over Epycs or 7950X / 9950X for servers and enterprise applications. I work in an enterprise environment and I can tell you that with the exception of ~20% legacy machines (Skylake-gen Xeons which we rent out for low-cost VMs) absolutely everything is AMD now.
@@frantavopicka5259 sure, they wouldn't buy this CPU. The aim from Intel is that they would buy the server CPU that is based on this architecture, that is tweaked for that market. Will it work? Well, let's see when they come out
@@frantavopicka5259Obviously enterprise would not buy the 285K. You are speaking cluelessness for a person that claims to work in enterprise. The 285K sucks at gaming however it is a great workstation CPU that has improvements over the 14900K and 9950x.
@@ZackSNetwork I'm sorry but you are the one who is clueless, mr fanboy. You spew bullshit on the internet. In reality, enterprise buys proven, high performance, high efficiency CPUs in bulk. Because you don't want early access hardware in business, a.k.a. the entire 200 series. I remember when we learned about the bugs that were present in 13 and 14 gen CPUs and we immediately began replacing the workstations with 7950X's. Compared to 14900K, a 7950X completes UE5 compilation pass about 5% slower, however, it doesn't crash. So, if you think that 285K has any value in a workstation, instead of, let's say, 9950X, just because it scores 5-6% higher in SOME tests, then I hate to break it to you but it absolutely doesn't. It doesn't matter what you do - programming, rendering, whatever. You want stability first and foremost, and predictable performance. That's why AMD is killing it in enterprise space, too, and that's why Intel is finished, unless Samsung, Apple, or Qualcomm buys them and drives them towards practicality for a change.
Given its performance, it should have been marketed as an affordable workload chip selling for 300-350€, basically a reverse 7800X3D. Consumers don't care about TSMC prices and 300€ is still infinitely more than 0€, which is realistically how much they're going to earn from this. If I worked for Intel, I would have called it the i5 250k Pro and had a marketing slogan such as "i9 in workloads, i7 in gaming". They could release an i7 and i9 version too in the next 6 months to become the market leaders in the reopened HEDT market and finally, after everything being ironed out gaming-wise release the regular desktop variants a year from now. Because it's sold as a regular desktop i9 that gets beaten in many games by my now 2 gen old i7 13700k, it's laughably bad.
With the results we're seeing from this CPU being all over the map I have a feeling its pretty undercooked at the moment and we'll see performance drastically improve. Will it become faster than AMD's X3D and Intel's 14900? No. But it will likely become a much more consistent performer at, slightly below, 14900 benchmarks.
Maximus over hype 9 285 by the looks of it, also Brian dude hope you are well, you seem to have some skin iritation, be carefull out there dude!, also this name is hilarious, going to be for sure using it as a meme for awhile, Maximum over Saiyan 9 285 hahaha.
@@Tugela60 Nope, you are somewhat correct, but you cannot disable HT and not have some kind of penalty in some software. This is reflected in the awful performance of these CPUs.
@tonep3168 Disabling HT allows single threaded performance to increase, that is where the IPC increases in the processor mostly came from. The performance issues most likely come from the sheduler being thrown off by the absence of HT together with the much deeper branch prediction in the ultra 2 processors. When the branch prediction algorithms in Windows were undated in 24H2 the changes were specific for the zen 5 improved branch prediction, and that likely hurt Intel's upgraded branch prediction engine in the ultra 2 chips. These things will probably be corrected in future windows updates. Both AMD and Intel appear to be suffering from lags in the OS development cycle with their new processors.
They should make each e-core have a correct cache structure and ditch the huge and power hungry P-Cores which have an actual performance regression vs Raptor.
*HEY Bryan , I would love to know your best thought between the 'ASRock B650 Pro RS' & 'ASRock B650 STEEL LEGEND' & 'ASRock B650E PG Riptide'* Just because, I can't make up my mind between these 3 ! ( I only use my PC 90% of the time, for Flight Simming ( X-Plane 12 ) *PS Money isn't the issue ! I just want the best !* (Incl. your best X3D CPU for it) Thanks !
My 7900XTX with a normal OC (2900mzh) achieves a minimum of 180 fps but is far from the average with 207fps and a maximum of 250fps.... Taking into account that it cost me €800 more than a year ago. (cpu 7800x3d) cyberpunk
Competition makes all PC hardware better. Every true tech enthusiast has been hoping Arrow Lake would be a major triumph; only an AMD fanboy hoped for Arrow Lake to fall on its face.
how were you running into crashes and problems with the 285k i hear this all the time intel failing etc etc and i can't understand it. i've never had a failure once with intel i have both ryzen pc and intel pc but i just wonder if it's just ppl don't know wth they're doing
intel made lots of changes and every reiewer had bios, ram and settings problems and some had drivers problems i invite you to watch techtestesters review and kitguru, that one had the most problems, motherboard would not boot, it required a bios upgrade, on a new motherboard and socket... intel is not the company you used to know, it is on the bottom and ready to be sold in pieces
@@arch1107 valid but surely these same folks are savvy enough to work out their build before they face these problems it's just logic & understanding underlying technology vendors etc etc. specifications they have to be facing these issues because they're self inflicted. for eg: i talked to someone who didn't even understand vrm phases on a mainboard and just was like really??
But later cut versions will be released, which will have to replace the 12100, 12400, 13400, etc. It looks like they will be a real nightmare at their prices.
My opinion is that Intel is going back to basic not worrying about being on top why because intel have not lost grounds, Intel give you a CPU that works both ways in performance . You have a CPU that do productivity as well as game. I myself as well as a lot of PC users do not play games in 4K 1080p and 1440p is the threshold. Not everyone in the USA alone wake up 3am to play Call of Duty for 48hrs no stop. It only depends how the average person look at it on a price point for all hardware. No one has the money to be throwing away in upgrading every time new hardware come out. If I had to due it over again to play my retro games, edit, stocking buying and selling stocks then I will use the ultra 7 CPU on the PCIe Gen 5 LGA 1851 with I hope an affordable GPU RTX 5090 / 5080 /5070/ 5060
This is a bit worrisome for intel. Even with 8600mhz ram and top single core performance in cinebench, you can't even match the 12900ks? Of the performance isn't really as "bad" as media claims it is. But when you consider the price... It shouldn't have "weak spots". Maybe at 450-500 usd. But at 600 is a no go.
Keeping my 13700kf for next few years lol. Was a bit unstable, but with recent undervolt (-0.075 offset, core voltage) it works wonderfully with similar performance.
@@moebius2k103 only if you have 7800X3D, yet I don't want to change all the system for it, it's not that much faster in 1440p high end high refresh rate gaming to bother me.
@@moebius2k103he doesn't know that of he had to under volt to stabilize his system, it's already too late and his cpu will fry itself like other 13/14 gen intel cpus, That means he'll probably be on Amd soon
I just bought an Ultra 9 285K CPU and an ASUS Z890 motherboard. After installing Windows 11 24H2 on this brand-new setup, it kept crashing and rebooting repeatedly right from the start. At first, I thought my RTX 4070 GPU was faulty. Eventually, I had to downgrade to 23H2 to resolve the issue, wasting several hours of my time. Such an obvious issue, and Intel couldn’t catch it during testing? The decades-long image of Intel + Windows as a stable and highly compatible combination has been completely ruined for me. Next time, I’m going with AMD.
These results are very strange. Overall underwhelming but with inconsistent glimmers of hope. Admittedly I'm not sidegrading to hopes and dreams. 9800X3D it is. The only problem is that I'm not alone in this. If perchance I've got leftover money I'd go for the i5 or whatever it's called this time around. It would be a pure Intel only build meant to satisfy my morbid curiosity.
My feeling right now, disappointed :(. Why do you guys think Intel is falling so far behind AMD now?
IT appears gaming cores are E; the latency in some tests on X/twitter drop -22ns if disabling P core.
Intel was pushing the voltage for years and last GEN they killed there own CPU's. Now they need to get more afficient and that kills performance. AMD was on the afficient horse much sooner thats why they are better at the moment.
Tech YES City. please compair new intel ultra5,7 14600-14700 with new ryzen 9600x-9700(pbo max , ram 6400)and new 7600x3D-5700x3d on windows 11 24h2 (in benchmark charts always write priceUSD too)
@techyescity Say my name !
Greed.
And yet the prices for these chips in Australia are through the roof. Intel is on meth if they think these products are priced to sell.
I mean they sort of can’t lower the prices too much…Their computer tile is on the most expensive N3B node.
They only performs well in a select few applications compared to AMD. So it is good CPU`s only for a few people that uses the applications they perform well in a lot.
Intel and Ngreedia are greedy fuckers but its the stupid gullible people keep paying for crap at crappy prices.
@@PowellCat745 nice point, so Intel dropped all sanity in thinking just using a new node would give them the advantage, as always? Just buy it and be happy?
there is not a single product that stays the same price at launch even it is not the best performer.
$840 Canadian for the 285K.
That's a definite HARD pass.
Not really ! You shouldn't buy it period ! Even if it was 420 !
u need a new mb too
@@gertjanvandermeij4265I agree to buy it for 250$ and another new motherboard for 150$, not a penny more.
@@gertjanvandermeij4265 should be 404 value not found, like 265K is
@@gertjanvandermeij4265 Not true unless you are only using it for gaming.
If you don't use it for gaming and look at it on Linux, you see what scheduler improvements will improve it greatly (also microcode improvements)
It's basically the price of the b650 and 32gb plus 7800x3d bundle in OZ kinda crazy that they got such poor perf from the newer expensive node from TSMC
Eleven hundred AUD for that cpu 😂
@@phatminhphan4121 yep, and the bundle with a 7800X3D, b650 aurous elite ax v2 and 32gb of gskill trident 6000 cl 30 is $1163
These CPUs are lower so next gen when we get back to same performance as the 12900k they can claim a big upgrade.
bingo
But next gen CPUs will likely be against Zen 6, which are likely to improve, so they'll have more ground to catch up, making them an even worse value.
Exactly! They've been doing this since skylake 9 years ago.... Until eventually the highest end, Ks SKU don't even have any margins left for overclocking (running at thermal limit and power limit already, at default settings)
That Cyberpunk performance is nasty.. my 10850K is within scratching distance of the 285K there.
Lies
@@ZackSNetwork If I prove it, what will you do?
No need to prove it, it sounds correct. Arrow Lake is massively under performing in that game
@@thecatdaddy1981probably believe
@@ZackSNetworklook you skin
Stability feedback is what is missing from most tech reviewers. What's the point of a fast computer, if it crashes all the time leading to loss of work, or being unable to enjoy the game.
underrated comment tbh, I personally love talking about stability now and yeah the AMD products have come a long way in my personal testing, nothing unstable on their latest stuff.
@@techyescity I'll gladly sacrifice 10% of my fps in games for a smooth, seamless experience on the desktop. I envy people who "can't feel the difference".
@@arc00tai swear so many people are ignorant. Windows 11 is less snappy and an overall HORRIBLE experience if you like a fast PC and yet people said "it's better than win10".
People have never mentioned how much better a ryzen system is once you tune the ram+ ESPECIALLY the infinity fabric.
from what I can see most people are just console plebs who can afford a gaming PC and can't actually tell the difference between 60hz and anything higher, they just look at the fps counter and placebo their way into the comments.
@ yeah definitely. My 9900X is stable over 24 hours of karhu with 2200 fabric and very tuned 6400 ram and it still feels like a dog in 11.
I think gamer nexus said that the power draw isn’t correct… and it’s actually higher than the system reports…
Its on asus mobos that use the 24 pin to power 4 phases
Yes, because it draws more from the 24pin, at least his MSI board did. Which, given MSI's recent spat with AMD...might have been intentionally misleading. That's tin-hat nonsense tho.
@@ridleyroid9060 was it msi or aSUS
@@AbbasDalal1000ASUS, but the software package power draw isn’t real…It was estimated with VID levels.
Intel said the TDP is 125, but the reality is way above 200 !
Intel can just go F themselves !
I'm still rocking an i7 8700 on my video editing rig and it's doing just fine. My daily-drive for online web-browsing is a third gen i5 and (thanks in part to Linux) it's also doing just fine. I'm not a gamer so I don't need to live on the cutting edge and suffer the bleed-out that often results.
I'm running out of Intel gens to SKIP, i felt awful smug about waiting on 13th/14th gen after that debacle... waiting patiently with my 12900K for a worthy successor...looks like I'll be waiting a WHILE.
Hmm yes, if only there was something else other than Intel to pick from....
@@ZimpanXZen 4 has worse IPC than Alder Lake and Zen 5 is irrelevant without X3D.
@@saricubra2867 Single thread cinebench scores =/= real life use
Even if those scores reflected gaming performance X3D for Zen 5 just drops in a few weeks, so what is your point?
You have to wait at least another 2 years then. And hope that Nova Lake won't just be another dud aswell.
@@saricubra2867 Single thread Cinebench scores =/= real life use.
Even if those scores reflected gaming performance, X3D chips for Zen 5 drops in just a few weeks, so what is your point?
Absolutely love my i9 12900KS such a beast of a CPU 🔥🔥
Ultra .... Crap
These will sell well!
People buy much more work PCs than gaming PCs, so this is exactly what Intel needs!
@@haukionkannel Most work PCs are pretty low end. You don't need an i9 to run Office or do some videoconferencing.
@@nossy232323 Yes, you don`t!
So the 245 is the most important chip to Intel!
ULTRA Bulldozer ! 😜
@@haukionkannel Which makes is strange as the NPU isn't strong enough to official be used with the new AI stuff from Microsoft (MS says you need an NPU with a minimum of 40 TFLOPs, while the new Core Ultra tops out at only 13).
I miss slot A Pentium III era. Moore's law was really kicking ass back then. True doubling every generation.
I guess it makes sense that generation leaps decrease over time, but greed is like sand in the gears of progress and this is not the way to make progress.
mnyeah
We're close to hitting a hard technological wall. Material properties are immutable.
@@MandoMTL I know eh, stupid physics getting in the way of our progress.
@@fanofentropy2280 maybe stupid humans, I dunno... :P
The race to 1 GHz ❤❤
Imagine the outrage when AMD got a 5% improvement on average when releasing the new CPU generation, but Intel coming up SHORT in performance of a THREE GENERATION OLDER CPU 😂😂😂😂
Intel -2.85%
AMD deserved that cause they outright lied about the CPUs and then tried to gaslight everybody.
This has happened a few times , Intel P4 was slower than Pentium 3 Coppermine. AMD did it a few times also . A lot of it is due to poor management and trying to please shareholders.
Only in gaming though; in fully-threaded productivity workloads it's about 50% faster (than the 12900KS you're referencing), while Zen5 wasn't more than 5% faster compared to Zen4 anywhere, so the unsatisfaction with the latter was quite deserved. That said, I agree with Brian's assessment and I wouldn't rush out to buy Arrow Lake either. At the very least it seems prudent to first wait for 9800x3d (and I guess 9950x3d as well), then go from there.
@@HenrySomeone Zen5 is actually upwards of 30-40% faster than Zen4 in INT8/AI based workloads, and AVX512 is also up by 14-16%, so yes it absolutely was more than 5% faster somewhere. It's just that where Zen5 is significantly faster doesn't matter to the vast majority of desktop use cases. But this also isn't too surprising, as Zen has always been first and foremost an enterprise/server focused architecture, as that's where the real money is, and Zen desktop is just the leftover scraps given to us plebs.
Frametime stability was Intel's strong suit with Alder and Raptor. The glue dip hits hard this generation. I wonder if the lack of HT played a role in it too. i9 9700K was also a mess when it came down to frametimes.
I'd love to see a 285K vs 14900K HT off (gloves off) comparison if possible.
9700K was goated for its time.
In the vast majority of titles (of its day), the 9700k was far from a mess, both in average framerate AND the 1% lows. There were just a couple of outliers, the others were (much) later releases.
@@HenrySomeone I and people I know never had a problem.
My old 9700k is still in use by a friend of mine and still runs great, never noticed any frametime issues with it when I had it. HT doesn't make nearly as big of a difference as influencers and intel would like you to believe because the HT's don't support the entire instruction set and never has, hell p4 HT only supported TWO instructrions every other instruciton that HT was 100% idle. Remember when Linus made a huge deal about how good HT on the P4 was and how it improved performance "across the board"... yeah it could only add or subtract it couldn't poke move or do anything else it literally improved performance when add and subs were used which is objectivley nowhere near "across the board". It was a scam at first it's probably still mostly smoke and mirrors.
@@bdhale34 Not to mention the added latency of HT. Article from Intel in 2017.
That freaking latency issue is killing me on 13600k. It happens unexpectedly and lasts for 2-3 seconds. Sometimes won't happen for days other times it happens twice day.
Those inconsistencies would drive me crazy.
You can stop that by going into your bios and deactivating E cores. Then reinstall Windows and then activate E cores.
Are you sure it's not SSD latency? I've had some spikes like that which were drive related. Try also m.2 vs sata. Clonezilla your m.2 drive to a data one and use that for a week see what happens.
Yeah, the latest cpu that is completely without this issue (at least inherently, without deactivating the e-cores), is the 12600 (non-k), which of course isn't exactly a top performer...although there are some motherboards (with an external clock generator) like MSI Mortar Max that do allow overclocking.
No latency issues for me. Running Windows 10 Pro.
These CPUs are not designed for consumers. They are designed for the server and data center market because that’s where the profit is. They care about stability, security and power consumption. These CPUs without hyper threading are more secure, the new architecture is more stable and the power consumption is much lower than the previous three generations. For a company with lots of servers, this CPU family is very attractive.
Bought a 12600k/Mobo combo for $240 about 18 months ago, glad I did.
Should hold up and perform pretty well for the next while at least, I've got a 12700f and 3060 12g and the way it performs at everything I'm sure it's good the next 3-5.
Glad ? No ! You would be happier with an 5700X3D/B550 combo
@@gertjanvandermeij4265 no he wouldnt
@@gertjanvandermeij4265abt 18 months ago....
@@gertjanvandermeij426518 months ago no 3d existed
It will be fun to see the price drop eventually.
Even if it was FREE , I still don't want it !
You're review was the best because you did more positively recognize the value in productivity. I won't buy it for that, but someone will. :)
Windows is in such a disastrous state for gaming i swear….. gotta teat 10 different versions now for gaming. Bring us back windows 7 please Microsoft
Switched to Linux mint. Its easy to use and absolute dream compared to Windows lol.
@@jeremyniels I have a dedicated Linux Mint machine and it’s so much better than any Windows OS in that it just keeps going. Haven’t had an issue with it in years.
Windows is dead. It just doesn't realize it yet.
Give POP OS a shot.
I only gave up with W7 this year as Steam stopped working. W11 is a nightmare in comparison.
Been watching the sales of the socket 1851, and nothing is moving its all just sitting around, waiting for reviewers to test stability - Just shows how much confidence consumers have with Intel right now..
Intel just can't stop taking L's this year.
Intel pricing these chips like people have the stupidity and cash to invest in a slower chip and a more expensive platform upgrade, they're delusional to think it!!
yep. apparently memory latency is a big factor.
Tile-to-tile latency as well. That's why 7800X3D is a hair faster than 7950X3D in gaming despite having half the cores and lower clock speed. That single CCD design allows it to behave almost like a monolithic CPU.
I can't wait for the latency test for this. I might consider it as I'm a 3D artist and After effects user. Keep up the unique content boss!
It all comes down to perception and marketing. AMD kept supporting AM4 well into AM5. Can you imagine Intel making a 3D vcache version of the 14600K now? I don't think so. They need to get creative with their products, cause if this is their best foot forward, I see their stock price dropping even further. 😅
What about total cost of mobo, cpu and RAM?
I think that kind of ram will be pretty expensive.
Motherboards start at $200+ with full features held at ransom for $500+
CUDIMMs - Only ones for sale thus far are GSkill 48GB-9600 kits for $400
Looks like 48GB may be the standard for these as all of the reviewers received 48GB kits from various vendors.
Your $500 motherboard probably won't get 9600MT, nor would you want it to because of asynchronous performance dropdowns...
Your motherboard would need to be able to run a regular UDIMM at 8600-8800MT for a clock driver to punch it up to 9600MT.
@@douglasmurphy3266 Which begs the question, who did Intel design these chips for ? Certainly not gamers. Maybe for productivity ? For the price it would be a better option to get the cheapest Threadripper system as the cost wouldn't be that much more and the potential of having a lot more I/O than desktop parts is worthy of consideration.
Both AMD and Intel desktop parts are closer to console/ mobile chips than they are to workstation/ server parts and they are priced way to high for how gimped they are by comparison. If one of those two companies could just come out with a CPU that could use all four memory channels at full spec and offer at least 32-48 PCI-e gen5 lanes direct from the CPU for around $500 I think there would be a demand, especially if it was clocked decent and had something similar to 3D-Vcache.
@@douglasmurphy3266 Very interesting pricing discussion. One of my fears this generation in beforehand was that Intel would pull something like that video of Linus in the rain at computex back from many years ago, where he was complaining that, I think they made it so that even when you bought their super expensive motherboards for HEDT you had to buy a hardware key and plug in to that motherboard if you wanted to unlock more features like hardware RAID or something.
Anyways, point being that they would lock features into higher pricing motherboard segments, and we saw the "higher end ILMs", and I feared that CUDIMM or CAMM2 or whatever would be locked in to super expensive motherboards. I saw some cheaper kits of CUDIMM DDR5 7200, but then it turned out they were 24GB kits lol. Or then on the other hand, the G skill 8200 kit was €400 when I checked. But you can't even buy it yet, it comes into stock in 2 weeks, begging the question why launch now when it clearly isn't ready and you can't buy the RAM they want you to use with the CPU yet.
All this makes me wonder, would it have been cheaper to do, let's say, quad channel DDR 6000, with 4 dimm slots rather than trying to get 9600 on 2 modules and dual channel? Obviously quad channel requires a more expensive IMC but you require such insanely high quality memory dies and other stuff to get those modules to run at those speeds. I think AMD/Intel have tried to replace HEDT with high end regular parts, and in my opinion they are running up against a memory wall with what you can achieve on dual channel for the highest core count SKUs (maybe AMD moreso than Intel). I think it was funny when I had my X58 chipset and i7 920 that it was triple channel DDR3, maybe Intel/AMD should explore options like that.
The y-cruncher guy wrote an interesting article about AVX-512 on Zen5 desktop, and that, if you use the 9950X with AVX-512 instructions on regular dual channel memory, you have to do roughly 300 vector instructions per memory operation to not be bandwidth bound, highlighting how silly dual channel desktop platforms at over 5GHz are for those CPUs, where they in the server world would be running maybe 3 GHz on 12 channel memory making the bandwidth equation more favorable.
32GB 7600 A-die Viper Xtreme $106
if you are thinking in price, am4 is for you
It would've been nice to pull one of those AMD products in your test to put in the 14900K, I'd like to know where it stands. Also maybe the 14900K with hyperthreading turned off because the 285 doesn't have hyperthreading on the chip. You always have awesome videos.
I was waiting for these reviews to come out and boy am I glad. I am upgrading from a 8700k and clearly not going intel now. Debating between 9700x or 7800x 3d
Was ready for a 285k build in a week or so. Now... going AMD. Crosshair X870E Hero, 64GB 2x32 Corsair Titanium 6400 @ CL32 and waiting till January to grab the 9950X3D.
@@BeastMortThe yeah im considering going 9700x and B650 ROG board and 32gb ddr5. I got a 4070ti super a couple weeks ago for a steal ( $650 NZD ) and sold my 2080 super for $500 nzd lol
The correct answer is 9800x3d, just wait a few more weeks.
none, wait for 9800x3d
I am going 285.
THE REVIEW IVE BEEN WAITING FOR!
Arrow Lake is so disappointing! Even a 146€ 7500F or a 197€ 5700X3D offer the same (or higher) gaming performance.
Edit: 5700X3D is only 174€ right now.
I agree with your intro completely. Even though I use it for non-game use sadly even though it is ok for that it's a Beta test for their new chiplet style fabs. I assume if INTEL can survive this the next version will IMHO be exceptionally good just like the way AMD went after they changed to chiplet same Beta route from them.
Try clocking the Efficiency cores that's where you will get bigger boost clocking the Performance cores has a negative effect because of the restrictions on them.
It's a Beta so it could be a good buy at a Beta price of £490 or 968.26 aus
Intel continues to embarrass themselves
and insult all customers with their prices. It's so sad, I really hoped to see some competition again.
And so is NATO in proxy war with Russia
For most people the difference in frame rate from the top end performer vs the core ultra cpu is negligible. Most people😅 wouldn't notice the difference of 10 to 40 in frame rates, particularly when above 60 fps. Additionally for most people good performance and stability is preferred over bleeding edge performance and instability.
absolutely this.... imagine stating with all seriousness thats this is a terrible cpu because it runs 40 fps lower..................................... while pulling +300 hahaha LMAO this ppl are not real
The issue probably relate to optimizations in the sheduler and the branch prediction engine, which has been made a lot deeper than previous processors.
Seems that with Cu-Dimm at 8000 and some overclock the 285K can give better results and sometimes it can beat 7800x3d. However I feel disappointed, this cpu may show some muscles after bios and windows 11 scheduler will improve.
I'm actually pretty excited about that revelation. Mainly because it's also the best workstation chip and has much lower idle power draw than AMD.
Will you be testing the latency of these Core 5, 7 or 9 Ultra chips like you did with AM5 and 12th and 13th gen CPU's? I am very curious if these new chips are at the 10th gen level like you prefer.
It will probably be worse
Just wanting to check, was this done using APO or with All core enhancer enabled?
Intel cores are split between groups of different performance, such as performance cores and efficiency cores.
By default Windows does not understand the cores are of different speeds so window's built in task sceduler will place programs on random cores that are based on lowest usage by percentage. This is bad as that means games very often get put on the efficiency cores and not on the performance cores resulting in them running at a far slower multiplier.
APO basically takes over a part of the process or is meant to so that ether it detects or you specify what needs to be ran on high performance cores.
If that fails you can also enable in a lot of motherboards something called "All Core Enhancement" which brings the efficiency cores up to the same multiplier as the performance cores. (This does drastically increase heat so make sure your setup can handle that)
But without any of that the efficiency cores are just that, they are designed for efficiency meaning their single thread speed could never even contend with AMD as they are not meant to, but have yet to see a single person say they got an improvement using APO and no ones touched all core enhancement.
My Intel 10980XE started of at 4.8GHz for the first 4-8 cores then it drops as far as 3.2GHz for the performance cores at the end. but I had 18 cores and 32 Threads which meant I didn't have to worry about the slower cores so long as I used ether Intel Turbo 3.0 tool or All Core Enhancement as that allowed me to get to 4.8GHz on all cores and even overclock all cores to 5.1 or selective cores to 5.2 so long as everything except 8 cores was at 4.0GHz and 3.2GHz with all core enhancement off, but then that required manually telling Process Lasso to take all applications off the first 8 cores to dedicate them to what I wanted to use them for, in a sense doing manually what APO and Intel Turbo 3.0 was meant to do. But my performance would skyrocket by doing that stuff.
Since Intel can chop and change tiles around their die, it would be very interesting if you could run the same test for the 15th Gen core ultra series that you did with the Intel 11, 12, 13 & 14 Gen to see what you discover !
Would be cool to see you include Space Marine 2 in benchmarks!
The productivity and power efficiency specs of this CPU are very good though... and gaming performance is pretty much on a par with the best CPUs out atm. Im just baffled why Intel doesnt release a gaming version with high power draw. Its almost like they've given up on gamers
The only issue I have found with AMD longevity is that it's almost impossible to sell your old AMD cpu at a good price as people only wish to upgrade their old systems to the latest CPU. Which, of course, makes the older cpu's available for low cost builds for new buyers.
The 285K and the 9950X are both designed for professional uses.
They are both appreciably better for companies who care about saving a few minutes on a render and overall power use.
Neither is intended for gamers.
Intel isn't even bothering to create a gaming SKU for Arrow Lake.
For the 4080 and below, everything from a 13600K/7600X to a 7800X3D or a 285K has roughly the same performance in average FPS at 1440P and higher.
There are almost certainly places where a CPU makes a difference (turn time in the late game of stellaris/civ, FPS in huge RTS maps with a huge number of units, world updates in flight sims), but I rarely see those being benchmarked.
yeah, would love to see some of those strategy titles getting tested with late game lag, like Stellaris and other PDX titles and not to forget X4 and civ as you mentioned, a cpu that can master those titles in late game would be my instant purchase without hesitation.
@@ashamahee With FS2024's "ideal" PC being a 7900X and Civ 7 Ultra config being a 5950X, the games I like seem to be moving from 8 cores being ideal to 12 to 16 cores being ideal.
Lows are trash on the 285k due to latency.
This affects even lower end cards.
they dont have the BALLS to pin your comment, I had to slam the table and scream "THANK YOU" because all this gaming performance is BS, and even with latency lows whatever, you put this on a mid to high end GPU above 1080 in high settings and all motherfkng cpus perform within a 5% margin.... FFS
Very good review. I am even happier with my Ryzen 5 7600.
Thankyou very much Sir ❤
I feel like we'll see some good discounts for these very soon and isn't unknown for doing that.
good for who, the store, intel, you? no one wins here
@@arch1107 i meant good as in big price cuts, at the right price these CPUs will also be good for someone. Only problem would be motherboard prices. AM5 and even AM4 still makes more sense.
@@ashenwattegedera lets put the idea in numbers
how much would you pay for a deal on this new cpu, ram and mobo?
500 dollars, 400 dollars?
i would pay for the combo 100 dollars thinking that t might have a problem i just dont know yet but i will eventually discover and sell used or give it away
at that price, no one wins, intel will not survive another 2 years, selling on discount, mobo partners will stop selling new models
it sounds exaggerated, but if you look at amazon, the model is selling the most from intel is 12th gen i9 at 300 dollars and that is expensive, it is just a old intel product
Intel is total trash these days.
Isn't anywhere near trash. You idiots do know that gamingbis a niche and pc gaming is ever less relevant. Intel 13700k and 14700k are equal to zen 4 and 5. The only cpus that pull away, amd Included are x3d and even that isn't by much in 90% of games. To call them trash just shows how delusional you are with the amd cult shit
I think that Intel, like AMD, should get some slack knowing that this is the first generation of a new architecture for both. For non-gamers this CPU seems fine, those are the majority of the market, certainly if you include laptops. But gamers should skip on this generation. At least AMD had some progress for gamers, around 13%. The great thing of the first round of a new architecture, in the 2nd and 3rd round you usually get a lot of progress because there is a lot of low hanging fruit. You also saw this for example with Intel Core (1000 and 2000 series) and Ryzen (Zen2 and Zen3).
I hope (for Intels sake) that the gaming performance issues will be resolved soon. It would be great if You could do some testing to see if the latency issues that you found on the previous generations are still present on the 200 series. Anyways, it doesen't look like there is any reason to replace my 13700K any time soon 🙂
This is unrelated but I had the displeasure of meeting one of the most disrespectful Intel Employee in person. He was getting coffee at my local parlor. Not only are they not up to par with AMD but their employee do not know respect for others. Happy I switch to AMD..
Weird I met an Amd employee and he called me the n word. So I guess you support racism
Lion Cove P-cores used in the Core Ultra chips are a REGRESSION in gaming and OVERCLOCKING vs the Raptor Lake/Alder Lake ones.
Ultra 5 would be great video also I've heard add 2-300 mhz on ecores mad a bit of difference
I really hope my 13900k will survive atleast 5 more years edit : year one right now no issues
Since Intel exchanged my 14900k it has been rock solid. Even more then my old 12900k. Although I will probably switching to the 9950x3D as the Ultra can not beat my current 14900k in gaming. I think Intel should have launched Arrow Lake in the XEON space would have been very good and nobody would have complained about its gaming power
I had a nice 285k system ready to build next month and now I'm just frustrated. I've been Intel all the way since getting into PC's over 16 yrs ago, but it's time I try AMD. I'm actually gonna be grabbing a Crosshair X870E Hero, along with 64GB of Corsair Titanium 2x32 6400 @ CL32 in a week and then wait till January for the 9950X3D.
@@BeastMortThe I had trouble with some ryzen systems but ever since ryzen 7000 those systems all work without a problem. I also already have a r5 7600 at home that works perfectly. Now that Intel is really out of the picture, I only need to wait for the x870e ace and 9950x3D to launch. Unfortunately there is absolutely no reason to with Intel anymore now that also the gaming performance is that far off.
Sounds like unnecessary expenditure to me. The 14900k is still a gaming beast (more than this PoS) and I'd personally be waiting for a more substantive upgrade in another gen or two.
@@sammyflinders Exactly, fully tuned it actually beats 7800x3d in most titles (with the Ryzen chip also tuned as far as it will go, mind you). Check Frame Chasers channel for proof.
@@sammyflinders I know but I always pass down my old CPU so it stays in the family where there is always need. ;)
I wasn't, been expecting a Rocket Lake 2.0 level performance. This is making my 12900KF look like a champ. Still rocking with 4090. I only use P cores, turbo boost off, went Linux, 48gb DDR5, am stable. Each Intel has been making me spend more time in the BIOS than I care to past few years. When I helped my friend with 5800X build in 2020 it was really no fuss.
I hear ya, makes my 12900K look like a beast too! I think it's a bummer, expected more.
Price hike for new 1851 platform is sky high. Way too pricey. Decent motherboard cost arm and a leg.
Last real innovation was 2600k that was 2011, ever since then they did those 3% increments between generations, and now they probably lost all talents who could make positive change.
And the 2700K , it was the LAST Intel CPU, I was happy with !
For now it is AMD, and AMD ONLY !
I would dare to say X99 was the last big jump we had with cpus overall, AMD noticed this and decided to base Ryzen on this to take the lead since 6th gen intel Skylake was still the 4c/8t standard.
Absolutely this. I finally upgraded from an Intel i7-2600K that I built in 2011 to an AMD Ryzen 7 9700X on X870 whilst watching this Intel shit-show unfold. The technology itself would get a pass IF it was priced accordingly - which it's not.
Disagree. Alder Lake was insanely good. Everything from the last 2 years is utter garbage.
@@epeksergastis You do know 5800X3D was already released back then, so this CPU was way way slower, so how it was good again?
Really looking foward your latency test and how does it compare to 10th gen intel in your workflow.
i like your videos and also watching from 4 years
If it was the same price as the previous gen, it might make sense as a transitional product to the new socket, but charging a premium price seems ridiculous.
Intel prioritized power consumption because of the 13th and 14th gen heat issues along with the CPU Degrade issue, so they wanted to make sure they took care of that.
Thanks for the review, three generations in a row now. 9800x3d next month, intel falling even further behind.
That will be a bloodbath for the blue ❤😂
and the AI cpu without AVX512...
It has AVX 10.1 which incorporates a 256bit implementation of AVX512.
@@gymnastchannel7372 The intel website does not list AVX 10.1
@@gymnastchannel7372 Still garbage.
in case you didn't know, "AVX512" isn't invented by Intel or AMD, but rather X86 Organization.
On AMD, AVX512 is written as is
But on Intel, 13th 14th 15th Gen, it is written as AVX VNNI or AVX 10.1
Both are just Alias name that contains AVX512
if you don't believe it, just try to run any Benchmark Tool that specifically ask for AVX512 and refusing to run if the CPU is not capable of AVX512. Needless to say, 13th 14th 15th Gen intel will run AVX512 Benchmark immediately
@@gymnastchannel7372it doesn’t
I had a chuckle years ago when Intel criticized AMD's "glued-together" processors, even as the writing was on the wall for the monolithic approach. Turnabout is fair play, eh Intel?
I think it’s safe to say that Intel has some catching up to do in the CPU space.
Intel likes to take advantage of consumer's typically poor memory. Intel originated gluing chips together with the Pentium D (2 single core dies on a package), after AMD released the actual dual core Opteron. It was immediately funny to me that they made that statement, because they literally did it themselves. Intel often tries to downplay the competitions winning strategy just long enough before eventually needing to copy it.
I'm waiting for some sells/promotions before I do my new build for 4k content. I'd certainly wait and see if some windows updates, BIOS, or microcode could adjust performance. Compared to productivity the gaming performance just seems off like its either not optimized for the new architecture or lacking some updates. AMD had a similar issue right outta the gate with the Ryzen 9000x launch, and being nerfed by poor optimization in Windows.
I agree with some others that Intel may have jumped the gun on their launch, and the CPU is poorly refined/optimized atm. I wouldn't expect and phenomenal gains, but hopefully in a month or two, or by New Years maybe we'll see a little uplift. Otherwise I still expect AMD 9000X3D to slaughter it in 1080p/1440p gaming content, in most aspects. Theres still a fair few titles that still prefer Intel or regular AMD 9000/7000X chips. As for 4k and 8k gaming it's more GPU dependent, the performance margins for CPU's typically averages 2% spread overall in top ten charts.
It's my main problem with reviewers and gaming benchmarks, most only show tests in like 10-15 games 90% of which I don't play, while there are hundreds of other popular titles, and thousands of games available.
If I were to do a true broad benchmark of gaming performance averages, I'd refer to some charts like Steam or others for the top 100 most played games. Quite a few of which are F2P and would only cost a reviewer some time. A vast majority of the people on my discord, aside from most being in our 30's to 40's, play a lot of older casual titles like Minecraft, 7DTD, LoL, WoW, Terraria, Core Keeper, Palworld, and many RPGs like Elden Ring, Elder Scrolls, Baulders Gate, Fallout, and many RTS/TBS and Roguelite/Rougelike games too.
i noticed the clocks are very low in your tests of the i9 285k vs other cpu's whats the windows 10 performance settings/bios clock settings ?
is there a way you can cap the speed?
Alright, my honest thoughts are that intel's 2xx Ultra series is a tactical lineup, and that the prices are on point regarding UK at least, 285K is £550 and for the category it makes sense, no way AMD's 9800x3d is going to be less than £450, even though it will be better. But i often defend Intels motherboard lineup as most the time i can never find something to specification at a decent price on AMD (and other issues i have personally) I feel this is good to know as its saying the "intel premium" does not exist and they want the numbers with the 245k something that seems to be the ideal buy compared to the 9600x
As for the performance, we can hope it gets better, but i feel its restricted by design, in both power and temperature, that might be due to the drama of intel lately, or it may be the fact they already know they cant really push much other than speeds and on die fixes, its another standby product, but lets not forget, that hyper threading is not a thing now and most games at least knew it existed even though probably 1% of games can actually use it properly.
All things considered, its a new direction that can lead anywhere, its more respectable as an option, personally im gonna look at how the T series comes out, but also might have my eyes on the x3d release, it all seems underwhelming regardless.
they need a 12P/24E CPU for that price to make sense, i can get a 7800x3d + mobo for the CPU price alone
36 cores? i doubt windows can handle such ammount on games properly, perhaps on productivity
but at those numbers, just buy a threadripper, you get those cores and double the threads
13 and 14k possessors are fine if you limit your PL1 to 125W and the PL2 to 253W and set the Voltage limit in the bios to 1400. This keeps the VIDs fixed at a max of 1400. My 14900k runs like a dream over 1 year old (Pc used for 2D and 3D rendering). Don't use the intel defaults microcode fix its a waste of time. The 14900k has a life of 10 years easy if you control the settings. Intel's default settings can often be too conservative for many users, limiting their CPU's potential. By carefully tweaking settings like power limits and voltage, I've achieved significantly better performance and lower temperatures without compromising stability.
there is nothing fine on those two generations of cpus, that is why the company and the new products are where they are.
no use should enter in bios to do any of the crap you are saying to avoid the cpu destroy itself, never.
@@arch1107 Intel's decision to overclock their CPUs can be a double-edged sword. While it provides immediate performance gains, it can also lead to stability issues and increased power consumption for less experienced users. As a skilled user, I've been able to manually tune my PC using XTU to achieve optimal performance and stability. By adjusting PL1, PL2, and core voltage offset, I've tailored my system to my specific needs. However, this level of customization can be daunting for less experienced users.
You should know by now that first gen of anything is auto skip.. CPU, GPU, Cars and so on. First RTX cards, first gen Ryzen. By the second, third iteration it will be good and stable.
As someone who used the Ryzen 1700 for many years I disagree with that part as that CPU was amazing stable, solid performance and very affordable
2080 ti was good😊
@@k1er4n544I disagree with you Zen 1 sucked. The latency was horrible, motherboards were meh and IPC was trash.
@@sperrex465Raytracing and DLSS sucked that generation.
@@ZackSNetwork whilst you are right about the latency and other issues, The fact that pc had the longest lifespan out of any I have owned as i went from 1700 to the 2700x and is currently rocking the 5600, all whilst using the original hardware minus the GPU, that is a quality of its own and by time I got the 2700x it made any previous issues worth it.
Also thanks to the success of the first gen Ryzen we have the performance we got today as intel was and still is complacent.
would be interesting to know what ddr memory type you used for overclocking alder lake for cyberbunk benchmark, just for that I would purchase one, but it may not be future proofing for value.
intel saying, just forget we make 13th & 14th gen, we're restarting at 12th gen but with a new chip design. Im moving to AMD, first time since 2012 as i've been on intel since 2014.
Man could I ask you to do some test?
You're the only one person that is able to do this with the transparency that has always distinguished you.
285k overclock E cores Vs E cores stock gaming.
Vs 13/14700k P core stock /e core stock no HT
VS 13/14700K p core stock / e core overclock + no HT
benchmark on 1080p, competitive FPS games (warzone, apex, valorant, cs2)
The new chips should have been the 14000 series. Intel now is 2 steps behind.
There is some hope tho. Power consumption did go down, that was a move in the right direction. Now Intel needs to keep power consumption low while making the chip competitive.
if you go to intel core ultra page, you can clearly see its not intended for gamers... like the coreX serie
Wow so turning off E cores this go around also reduces cache? Thats brutal. Won't see some doing that now as most games need that cache. Wonder if it's helpful just reducing the E core number count instead of disabling them all.
The architecture is really for data centre and enterprise, with laptops a 2nd thought and gaming about 9th on the list. The only reason Intel released it for desktops was to amortise the R&D costs over a wider product stack and keep OEM contracts happy. AMD have done the same with the 9000 series, gamers will be sticking with the 7800x3D for a while
No, this is a CPU for no one. Why would any data centre or enterprise buy this thing, when it crashes, has inconsistent performance, hybrid cores (hell for optimization and scheduling), AND costs the same as 9950X. Nobody sane would buy this over Epycs or 7950X / 9950X for servers and enterprise applications.
I work in an enterprise environment and I can tell you that with the exception of ~20% legacy machines (Skylake-gen Xeons which we rent out for low-cost VMs) absolutely everything is AMD now.
@@frantavopicka5259 sure, they wouldn't buy this CPU. The aim from Intel is that they would buy the server CPU that is based on this architecture, that is tweaked for that market. Will it work? Well, let's see when they come out
@@frantavopicka5259Obviously enterprise would not buy the 285K. You are speaking cluelessness for a person that claims to work in enterprise. The 285K sucks at gaming however it is a great workstation CPU that has improvements over the 14900K and 9950x.
And doesn't even support AVX512
@@ZackSNetwork I'm sorry but you are the one who is clueless, mr fanboy. You spew bullshit on the internet. In reality, enterprise buys proven, high performance, high efficiency CPUs in bulk. Because you don't want early access hardware in business, a.k.a. the entire 200 series.
I remember when we learned about the bugs that were present in 13 and 14 gen CPUs and we immediately began replacing the workstations with 7950X's. Compared to 14900K, a 7950X completes UE5 compilation pass about 5% slower, however, it doesn't crash.
So, if you think that 285K has any value in a workstation, instead of, let's say, 9950X, just because it scores 5-6% higher in SOME tests, then I hate to break it to you but it absolutely doesn't.
It doesn't matter what you do - programming, rendering, whatever. You want stability first and foremost, and predictable performance.
That's why AMD is killing it in enterprise space, too, and that's why Intel is finished, unless Samsung, Apple, or Qualcomm buys them and drives them towards practicality for a change.
Given its performance, it should have been marketed as an affordable workload chip selling for 300-350€, basically a reverse 7800X3D. Consumers don't care about TSMC prices and 300€ is still infinitely more than 0€, which is realistically how much they're going to earn from this. If I worked for Intel, I would have called it the i5 250k Pro and had a marketing slogan such as "i9 in workloads, i7 in gaming". They could release an i7 and i9 version too in the next 6 months to become the market leaders in the reopened HEDT market and finally, after everything being ironed out gaming-wise release the regular desktop variants a year from now.
Because it's sold as a regular desktop i9 that gets beaten in many games by my now 2 gen old i7 13700k, it's laughably bad.
With the results we're seeing from this CPU being all over the map I have a feeling its pretty undercooked at the moment and we'll see performance drastically improve. Will it become faster than AMD's X3D and Intel's 14900? No. But it will likely become a much more consistent performer at, slightly below, 14900 benchmarks.
What about Latency and Snappyness during Premiere Pro? Does it also poop the bed?
is good to have a video of tuning 5.2 all cores
Maximus over hype 9 285 by the looks of it, also Brian dude hope you are well, you seem to have some skin iritation, be carefull out there dude!, also this name is hilarious, going to be for sure using it as a meme for awhile, Maximum over Saiyan 9 285 hahaha.
Nobody buying a 285K is going to Run 1080P low.
Ditching HT and keeping E cores was a fundamental miscalculation.
There is not an issue with the architecture, it is just a question of optimization in windows. It will probably improve as windoes is optimized.
@@Tugela60 Nope, you are somewhat correct, but you cannot disable HT and not have some kind of penalty in some software. This is reflected in the awful performance of these CPUs.
@tonep3168 Disabling HT allows single threaded performance to increase, that is where the IPC increases in the processor mostly came from. The performance issues most likely come from the sheduler being thrown off by the absence of HT together with the much deeper branch prediction in the ultra 2 processors. When the branch prediction algorithms in Windows were undated in 24H2 the changes were specific for the zen 5 improved branch prediction, and that likely hurt Intel's upgraded branch prediction engine in the ultra 2 chips.
These things will probably be corrected in future windows updates. Both AMD and Intel appear to be suffering from lags in the OS development cycle with their new processors.
They should make each e-core have a correct cache structure and ditch the huge and power hungry P-Cores which have an actual performance regression vs Raptor.
No HT, no multiple types of core in the Desktop, no tiles, no BS.
At least performance power usage has become more reasonable. but man those motherboard prices are anything but.
*HEY Bryan , I would love to know your best thought between the 'ASRock B650 Pro RS' & 'ASRock B650 STEEL LEGEND' & 'ASRock B650E PG Riptide'* Just because, I can't make up my mind between these 3 ! ( I only use my PC 90% of the time, for Flight Simming ( X-Plane 12 ) *PS Money isn't the issue ! I just want the best !* (Incl. your best X3D CPU for it) Thanks !
My 7900XTX with a normal OC (2900mzh) achieves a minimum of 180 fps but is far from the average with 207fps and a maximum of 250fps.... Taking into account that it cost me €800 more than a year ago. (cpu 7800x3d) cyberpunk
Everything from TYC is welcome
Intel messed up when they went TSMC.
Competition makes all PC hardware better. Every true tech enthusiast has been hoping Arrow Lake would be a major triumph; only an AMD fanboy hoped for Arrow Lake to fall on its face.
how were you running into crashes and problems with the 285k i hear this all the time intel failing etc etc and i can't understand it. i've never had a failure once with intel i have both ryzen pc and intel pc but i just wonder if it's just ppl don't know wth they're doing
intel made lots of changes and every reiewer had bios, ram and settings problems and some had drivers problems
i invite you to watch techtestesters review and kitguru, that one had the most problems, motherboard would not boot, it required a bios upgrade, on a new motherboard and socket...
intel is not the company you used to know, it is on the bottom and ready to be sold in pieces
@@arch1107 valid but surely these same folks are savvy enough to work out their build before they face these problems it's just logic & understanding underlying technology vendors etc etc. specifications they have to be facing these issues because they're self inflicted. for eg: i talked to someone who didn't even understand vrm phases on a mainboard and just was like really??
@@johntipeti4597 no, most reviewers got units a week or 10 days before, the problems were there
about vrms you are trying to cover too much here, so.
16:45 Oh, the Guile stage theme!
But later cut versions will be released, which will have to replace the 12100, 12400, 13400, etc. It looks like they will be a real nightmare at their prices.
My opinion is that Intel is going back to basic not worrying about being on top why because intel have not lost grounds, Intel give you a CPU that works both ways in performance . You have a CPU that do productivity as well as game. I myself as well as a lot of PC users do not play games in 4K 1080p and 1440p is the threshold. Not everyone in the USA alone wake up 3am to play Call of Duty for 48hrs no stop. It only depends how the average person look at it on a price point for all hardware. No one has the money to be throwing away in upgrading every time new hardware come out. If I had to due it over again to play my retro games, edit, stocking buying and selling stocks then I will use the ultra 7 CPU on the PCIe Gen 5 LGA 1851 with I hope an affordable GPU RTX 5090 / 5080 /5070/ 5060
This is a bit worrisome for intel.
Even with 8600mhz ram and top single core performance in cinebench, you can't even match the 12900ks?
Of the performance isn't really as "bad" as media claims it is. But when you consider the price... It shouldn't have "weak spots".
Maybe at 450-500 usd. But at 600 is a no go.
Keeping my 13700kf for next few years lol. Was a bit unstable, but with recent undervolt (-0.075 offset, core voltage) it works wonderfully with similar performance.
Your standards are too low.
@@moebius2k103 only if you have 7800X3D, yet I don't want to change all the system for it, it's not that much faster in 1440p high end high refresh rate gaming to bother me.
@@moebius2k103he doesn't know that of he had to under volt to stabilize his system, it's already too late and his cpu will fry itself like other 13/14 gen intel cpus,
That means he'll probably be on Amd soon
I just bought an Ultra 9 285K CPU and an ASUS Z890 motherboard. After installing Windows 11 24H2 on this brand-new setup, it kept crashing and rebooting repeatedly right from the start. At first, I thought my RTX 4070 GPU was faulty. Eventually, I had to downgrade to 23H2 to resolve the issue, wasting several hours of my time.
Such an obvious issue, and Intel couldn’t catch it during testing? The decades-long image of Intel + Windows as a stable and highly compatible combination has been completely ruined for me.
Next time, I’m going with AMD.
These results are very strange. Overall underwhelming but with inconsistent glimmers of hope.
Admittedly I'm not sidegrading to hopes and dreams. 9800X3D it is. The only problem is that I'm not alone in this.
If perchance I've got leftover money I'd go for the i5 or whatever it's called this time around. It would be a pure Intel only build meant to satisfy my morbid curiosity.
Finally a review done by a professional who shows what we ordinary mortals can understand...