Spark Notes: - By using cudimm ddr5 and overclocking the 285K. It can now "sometimes" keep up with the 7800X3D. - Jay is testing the 9800X3D already. - Nov. 6 at 6am. The 9800X3D reviews will release. - The 9800X3D will make "all of these charts look small". THANKS JAY!
9800X3D will be like 10% above a 7800X3D on average, that's not nothing but when your fps is already 300 is another 30 going to matter? Not really. Intel may not win gaming 90% of the time but its close enough while also offering significantly more multithreaded performance. I'm on a 12700k and want to upgrade to ddr5 and a newer platform BUT getting an 8 core X3D part is a side grade. If AMD could release the 9950X3D with cache on both CCDs it'd making going AMD an easier choice, but when intel is offering 20 real cores for $400-450 and AMD is giving you 8 cores and 16 threads with SMT for that price its a much more difficult decision.
@@Malinkadink maybe its similar likes, u have 8 worker with very quick performance much better than u have 20 worker with slow performance. much core is always good but useless if slow than less core with good performance.
@@Malinkadink Bro 5800X3D is the side grade to the 12700K. It's not a bad CPU, just lets be honest here. If you're production its the 16 core part or TR, gaming it's the X3D every day of the week. Nothing wrong with the 12 series, better that than 13 or 14 series but time to be fair to benchmarks, the 12 series goes up against the 5 series.
That's if Windows don't choke The 9800X3D. I personally don't pay attention to benchmarks and charts because those don't relate to real world use. AMD does make some damn good gaming processors, but they suck azz at VR.
microsoft is speedrunning the demise of windows as a product and are too buissy mopping up every VR/AI/GAMING IP they can get their hands on and or anything proprietary thing that "might" make them more money than god in the future.. current customers be damned.. after win 11 whatever they release people will throw themselves at it in hopes that is no where near the dogshit 11 is.. as proof by the continued support for 10 for 30 bucks a year :D
So let me get this straight, in order to get the real performance of the 285K you need to: - spend extra on the motherboard - spend an additional extra on CUDIMM memory - you also need to overclock your CPU all that just to get close to a 7800X3D that runs out of the box with standard DDR5 memory. How GREAT!
Ya and you can still optimize memory timings for a 7800x3d to pull ahead even further for very little money. A B650 board with a 120 Dollar Kit can gain another 5+% performance for very very little money on the 7800x3d. The value of 7800x3d is staggering in comparison to Arrowflop. And when we go all out with money on the 9800x3d like this video did with Intel then this will be a FPS massacre in favor of AMD.
Calm down guys, this is intels Ryzen 1000, they are learnin new stuff about CPUs after like 10 years, next ones gonna be better, than even better, just like AMD 😀but yeah, this a shit launch and intel has a shit CEO
I don't know man, 7:58 doesn't look very close to me. As a matter of fact, I am not sure if 7800x3D and 285K can even see each other from the opposite sides of the chart OMEGALUL
@@PeterCencul Almost as if 285K is not intended for people who only play games. In those cases it doesn't make sense to buy anything else than x3D at this point, no argue with that.
Doesn't matter really, 9800x3D is NOT a competitor since it's performance for other workloads outside of gaming is not even in the same ballpark. Also, when up to Ryzen 5000 series, AMD users always boasted about application performance and how important it is (since it was winning in those), now all I hear everywhere is frickin gaming, which doesnt even make sense, since with ANY OTHER GPU OTHER THAN 4090 YOU WILL NOT SEE A DIFFERENCE, ESPECIALLY IN 1440P AND BEYOND. God people are stupid.
@@Vtarngpb Pretty much. I love Jay and love GN. The channels are geared toward different crowds. GN I go for deep dive technical details that will put me to sleep, but super useful. (Its why I'm a patron.) But I love Jay's casual attitude, layman explanation, and frankly how he builds PC's. I really wish Jay would start up a patron though as I would support the crap out of this channel. (Already have with all the shirts I've purchased and mats I've picked up.
When Jay started showing gaming scores, I was like what is this going to look like with the 9800X3D? Then he just causally, and also crypticlly, says sminedehacehundredsmedee.
Jay didn't overclock the Intel CPU. This video was about more compatible ram that clearly the Intel CPUs need to work how they were intended. But I like many still won't buy Intel if I gotta spend an insane amount for CU dimm.
@@Awaken2067833758 Absolutly not! 9900X is not faster in gaming. It is also pretty slow. If you buy the correct ram it will be faster then the 9900X. 9800X3D probably will be faster. And it already is proven that the motherboards have an option with Asus were DLVR can be turned of and it saves a lot of power. This is the first cpu on the market were every single core can it's own voltage profile. The DLVR is bringing 1.5 volts down to about 1.1 and then up losing in multicore about 80 watts!!!! With a good cooling, undervolt you can get 40K R23 with 150 watts or less! It is not optimized at all atm.
But almost all the people who would consider buying one of these is not on Linux. They're on Windows. If it's broken on Windows, it's broken. If it has to be OC'd to perform, it's broken. If it needs a $300 RAM kit to perform, it's broken.
@@rangersmith4652 I understand what you are saying, but also it doesn't make much sense. It is designed to work best with that $300 RAM. So that is when it works best. That's like buying a car that offers a 4 cylinder engine and a 6 cylinder engine. "But having to pay $5000 more for the 6 cylinder engine means the car is broken". Like no, it still works, but if you don't buy the best for it to run the best, it wont run at it's best. Or smaller scale, "when I put 86 octane in my car it has knock and ping and gets terrible gas millage, but if I get 91-93 for $0.50 more it runs great, the car is broken". No the car is designed to run best on 91-93 octane, so if you buy cheaper gas, you get less performance.
I saw somewhere where someone tested some titles with only a single P core running and saw a large uplift. This gives an indication that a new scheduler and/or Windows driver/modificaiton to account for Arrow Lake's HW based thread director.
@@MrMarrok657I honestly think intel made the right choice with switching their architecture cause the “just throw more power at it” approach was really reaching it’s limits with 14th gen so to get “similar” performance at lower TDP is better in the long run
We did kinda have a similar situation with AM5 as well back in 2022 where due to having a new architecture where no one knew how to properly take advantage of and Microsoft being Microsoft with windows the new CPUs lost to even older AM4 processors but as time went on with updates on both windows and bios side so I’m expecting something similar to happen with 15th gen… I MEAN core ultra 2! But Intel didn’t stick the landing as well as they had hoped and they’re on step one again
Not really, when you consider intel has effectively copied amd and moved to a chiplet style design. We've known for a long time how important ram speed is to the zen architecture and the arrow lake architecture looks to be similar. Very few people will be buying cudimms at launch though as it again is similar to the am5 ddr5 only situation where memory was very expensive compared with going ddr4 on intel.
@@bionicgeekgrrl I would accept your argument except AMD is actually slower when you get RAM over the sweet spot speed, and it has been this way for the last few AMD generations.
The Intel and AMD SOC architectures are quite different. Even the Intel approach to "tiles" are quite a bit different than AMD's chiplets. Besides not being monolithic, the two aren't very similar at all.
JAAAAAAY. YOUR TEST FAKE,,,,you disable raytrace and dont enable PBO on all amd(they run in minimum not turbo),,,,all of us know your are intel fan boy and your tests not accure(only tomshardware,hordwareunbox are fair,accrue),,,,if you fair go enable RT raytrace and and PBO then new intel goes to down of all charts,,,,,this new CPU from Intel same old shit (not made in TSMC) after 3 month give blue screen in games
I'm loving my 265K. 7600mhz XMP working great. I was already gaming in 4K on my 10600K with great frame rates but this fixed my performance in Cities Skylines 2. One thing that keeps me on Intel is QSV, I converted a 2hr 4K movie to 1080p in 9 minutes with Handbrake, 1.3GB result. I recommend AMD machines to friends that ask me but I'm just not into it myself
I think its an overall great CPU as well, and it hopefully serves you well long term. pretty much all the newest mid-level and upwards CPU's are great actually, there isn't much point in argueing between small percentabges in these benchmarks. I think the pricing is not good though, YET, hopefully they adjust it to where it should be and everyone wins. And I want to wait one more generation away from 13th/14th gen for Intel to earn back some trust.
@@goinpoztal Just like people had to figure out how best to set up the new AMD cpu's, people are figuring out how to get the best out of these new Intel cpu's.
Yeah how are you going to use different memory and overclock the intel chip and not use the same memory type . But do nothing with the amd cpu but leave them stock so these results are flawed in a 1 vs 1 comparison. Even if you are using the updated memory for the Intel chips. You still need to undervolt x3d and overclock the X chips for AMD. So there fore these results are invalid period.
@@ThorntonWillie Different architecture. I'm no expert but I know that AMD chips work best with specific ram speeds. There are sweet spots. So putting in the fastest ram available won't help AMD cpu's and its been that way for a little while. As for under volting and so on, everybody's not going to do that just like most people aren't going to overclock their E cores in Intel, but people with the dough for a 285k will probably be able to come up with the money for faster ram.
The AMD chips have very little overclock headroom and also cannot use the frequencies these CUDIMM memory kits afford without running the IMC in gear 4 which would actually hurt performance. Arrow Lake also doesn't have much overclock room on its P cores but it can seem to OC the E cores quite a bit along with the interconnects. Finally it is able to run CUDIMM DDR5 8800 in gear 2 which no other CPU currently does.
its huge dejavue with this amd 3dnow similar idea vs pentium 4 RAMBUS early 2000;s ....now its x3d vs CU-DIMMS....amd fans did mention something about zen5 is memory bandwidth starve while arrow lake is well fed by better memory sticks and still more headroom as we speak...it can peform badly on old ddr5 standard to cu-dimm gaming level performance
Intel was pretty up front with the fact that these aren't going to set any performance records and are a stepping stone for later. Still, the E cores have come a long way and seem to have a good bit of headroom. I could see them relying on the E-cores a lot going forward to get core counts higher while keeping the P cores in the 8-12 core range.
I think everyone is aware that the main issue with non-monolithic chip designs is latency. I genuinely wonder if this generation of CPU was supposed to have higher clock speeds on the cache and ring but due to what happened with 13th and 14th gen, they had to lower the speeds for safety.
to quote The Beatles "It's All Too Much". and i'm pretty hardcore.the fact that this launch is a hot mess at a dog show is the beginning. now you need a new mobo, the CPU is overpriced with bad out-the-box performance, and now you have to pony up for CUDIMMs ? the fail is in allowing any compatibility w/ UDIMM and burning early adopters. all that and you still have to tweak the bios/OC? i don't think so
It reminds me of the 7000 launch. CPU's were overpriced. Motherboard were expensive and you need DDR5 which was hard to come buy and expensive. A lot of people were making excuses until the launch of the 7800x3D
@@yonghominale8884Um.... _Almost_ correct. 7000 series wasn't dependent on the new RAM, the releases just happened to coincide with each other, at the drop of AM5, which was to be paired with DDR5. But 7000 series by itself could be either AM4/DDR4 or AM5/DDR5. Easy thing to confuse, but distinct enough difference to be called out... Specifically because it (7000 series) brought upon performance uplifts *by itself* that were noteworthy over its predecessor without the need of some other hardware (like CUDIMM) to get out of it like the 285k apparently does. (BTW, I said "noteworthy", not "amazing").
So, in short, Intel's new architecture is heavily affected by faster RAM, the way Ryzen's chips were/are? History doesn't repeat, but it sure does rhyme.
Thank you so much for your content. I have the Ryzen 7 7800 x3d, so im happy, but I do like Intel CPU too. Im in a happy place though. I have the Rx 7900 Xt paired with the 7800x3d so im good. Your results are good to see though. Thanks again.
@@zCaptainz They kind of do 4x1/2 channels, but it's a bit academic, since you could maybe leverage it in some specifically written software. Not general purpose one. There was a video on it by Der8auer, prior to embargo. And for both AMD and Intel - yes, true 4 channel memory would boost the performance. Zen cores especially are so starved that Alexander Lee (from y-cruncher) calculated that to fully utilize them you'd need something like DDR5-21000 or thereabouts.
Finally caught the 7800X3D overclocked to the max. Too bad the 9800X3D about to drop and once again shit all over Intel. Sad, I was hoping the could reign in Amd.
Please consider including the 7700x in your CPU tests, since it seems to be performing incredibly well in games (in Cyberpunk it's even close to the 7800x3D), while only costing around 275€ since many, many months now
Imagine if the car industry worked the same as the PC industry works today. You bought a Porsche, you paid for a Porsche, but before you drive off the lot the dealer spins a wheel and it lands on "Sorry, your car will never perform better than a Corolla unless you use the right air freshener, better luck next time".
Actually, a car where you have to pay for a subscription service to unlock top speed sounds like *exactly* the kind of dicking around that car manufacturers are doing right now.
Yeah, no. That's not a remotely close comparison to the situations. Try actually understanding the video and the information before writing out ignorant ass comments...
More like you bought a car and you have to buy tires for it. If you cheap out and get crappy tires, your performance will be subpar. If you buy good tires (good traction, speed rating, etc.) then you'll get better perrformance.
just disable all P-cores, leave 1 P-core & all E-cores enabled, you will see a lot better performance. better than any one revewed it untill now. they need to fix Thread director again EDIT: the benefits i talked about are only in Gaming.
@TheFPSPower nah bcz E-cores are in between P-cores. See the architecture in 14th gen and 200 series gen. Windows doesn't work well with the new thread director in ultra 200 gen. And there's a lot of latency from P-core to P-core and from all P-cores to IMC. And a lot more to it
They won't get credit for it this time because everyone loves to use the chip makers as punching bags(and both new gen CPUs are still underwhelming) - but it's really starting to sound like Microsoft really borked BOTH of these new CPU launches.
@@RyTrapp0 Seems MS is not really working on the core components of Windows anymore, it's more like packing stuff on top to get people into the cloud and get their data. Microsoft surely could get some good programmers to get a new scheduler (if they don't have them on their own), but it seems they are not really interested except there is some public pressure. And you can program much better schedulers, the Linux results are quite clear about that.
someone on another channel demonstrated this by installing an insider build so once microsoft drop it into updates may see this improvement in near future
@@johntipeti4597 there is absolutely something wrong with arrow lake with software side but at this point I think that Intel making that on purpose ,maybe when AMD launched 9800x3d they will give update
If there are 100k Zen 5 chips and only 10k sold, that didn't sell so well. If there are 10k Intel LUltra chips and only 9k sold, that's a huge success. In this hypothesis, AMD would've outsold Intel, but be treated as an abject failure because 90k chips are sitting in the warehouse. In reality, I don't know the figures, but I do know that everybody's waiting for the X3D version releasing soon. These releases are very close to Christmas so why splurge now?
@ProVishGaming You need expensive ram to make a new intel cpu obviously better performing than the cheaper previous gens. That shows promise, but a bit more performance for way more money is not the right direction. The ocing is cool and shows potential, but it's not really something to compare since cpus vary in oc quality, and it is warranty voiding. Previous cpus could also be oc'd. And AMD has the same issue - a smidgen of more performance for quite a bit extra money (compared to previous gens cpus current pricing).
@Jayztwocents I have an idea as to why their could be a small performance difference "just" from using the CU-DIMM's as seem @7:15 and it is down to the memory controller simply not having to do as much to get the memory clocks that high, and perhaps this is down to the reduction in power draw by the memory controller, the same thing was seen with Zen (3 I think) with overclocking, more performance was gained with lower memory clocks because it "allocated" more power to the cores. Could it be a repeat of that, which was also a chiplet related side effect.!
Found this… “This month, Nvidia's GPU Display Driver and related software updates address eight major exploits. All of them except one allow for code execution and open up vectors for escalation of privileges, data tampering, denial of service, and information disclosure. Impacted users of Nvidia GPU drivers and GPU software are advised to update as soon as possible.”(toms Hardware) Our newest GeForce Game Ready Driver is packed with support for new games, all featuring support for DLSS 3 with Frame Generation and Super Resolution. By installing our new driver, you’ll optimize your experience in Alan Wake 2: The Lake House, Call of Duty®: Black Ops 6, Dragon Age™: The Veilguard, Horizon Zero Dawn™ Remastered, No More Room In Hell 2, Red Dead Redemption, and The Axis Unseen. Additionally, there’s support for 32 new G-SYNC Compatible gaming displays, and 3 new Optimal Playable Settings profiles.
@blitzhacker6981 AMD user here. You've been having issues with ICUE as well? Mine just randomly never opens again then i gotta repair the installation. So annoying
I feel like this makes sense, given the same architecture is used on lunar lake which has ram embedded with the CPU. Anything that increases memory speeds and reduces latency will greatly improve the performance.
It's a thread scheduler/thread parking problem, right? - I've seen some data that showed limiting to 1P + nE cores in Cyber Punk helped - showing it wasn't using the P Cores properly...
Honestly I'm still saying it's just not worth it. I don't understand how they're all sold out. Did people miss the reviews? I don't see any recommendations for the CPU. That coupled with 1851 being a platform for maybe 2 years and then you'd need to fully upgrade the platform again, I'm just not seeing it. It's a shame because we could use some more competition again. Actual fights for market dominance, price cuts, innovations and more but I think we have reached a point where the general engineering behind chips has come to a little bit of a infrastructure problem because unless there is some major new process we can't keep going smaller and smaller when it comes to transistors etc. I'll upgrade next year and then wait a loooong time to see where all of this is going but surely I will not use an Intel CPU.
You have to understand that A LOT of people don't watch/read reviews. They see that new stuff is coming out and will just buy it. I sell computers, components, and peripherals for a living, and the amount of people who come it to buy parts and don't know how to put it all together and/or haven't done their research is staggering!!
@@marioStortuga Oh I do that but this video is about an enthusiast CPU. No non enthusiast would buy this after informing themselves. There is just no justification to go with Intel anymore. Their high end CPU is much slower than the 3rd highest CPU offered by the competition when it comes to gaming. That's why I said we need more competition. I want Intel to get back on their feet and I want them to make amazing products again so we all profit. Enthusiast only means I'm willing to spend large amounts on something that I probably don't need but want, but even then I'd love to save a lot of money too hehe
@@ABP8214 I just wonder how large the actual amount of people is that don't inform themselves and buy a 285k vs. people that buy CPUs in general because I can't imagine the uninformed buyer base having so much money to throw at something this expensive. Also as a salesman you can make the difference which I'm sure you do! Educating customers binds them but I don't need to explain that. As I said, right now I just don't see it, but I'm merely one man with two eyes, I can't know everything which is why I'm glad you guys reply and give me input.
Glad Jay made this video been looking at doing a upgrade but wanted to wait and see what happens with the 9000x3d chips. From what he says at @15:34 Im happy I waited. lol Also him trying not to give info out in the way he did was funny I loved it.
So basically what we learned here is that Intel's new flagship needs fancy/ more expensive ram, and overclocking to compete with a 7800X3D out of the box. And still loses in most games... Annnnddd the 9800X3D hasn't even entered the chat. 😅
@@PowellCat745 I bench a LOT in cp2077 and I have several saves made for it where I run a 5 minute logged session and then average them. The built-in tool really doesn't show what you would want to know for buying hardware.
To completely redo a large suite of tests and the like takes time, especially when you're only working with one available/usable benching system. To get info out at any sort of reasonable period of time, you have to note where to draw the line. By keeping it on the original BIOS, you need to do 2 passes of testing (285 w/ CUDIMM and 285 OC w/ CUDIMM) rather than 3 or 4 (w/o and OC w/o), and any gains the new BIOS gives `should` carry over and apply to all 4 situations.
@brysonshires9742 Not true. It's still used by a very large portion of PC gamers. Just like when you go to a steam hardware survey and see most people are still on older hardware in general. If i didn't have my current rig, my old rig with DDR3 would still be plenty to enjoy all the games i play. Most games up until 2022.
WAIT! WAIT! JAY--ARE WE GOING BACKWARDS IN TECH!?!? Way back when (I’m 69…). I remember the 8086, 8088, 286, 386, 486 CPUs from Intel. Now we go from Core i9 14th Gen backwards in time BEFORE the vaulted and hallowed 286 of the early 90’s to the NEW Intel 285 chip??? WHAT NEXT? MS-DOS 8 RUNNING WINDOWS 12?????
Yeah, when I saw their new naming for the first time, I was like “but you had 286 decades ago (I’m old, so please no one do the math, I don’t want to be reminded about how old I am.)
That is true,however how is it going to take the Intel processors to realize that performance.The other fundamental problem is that Intel’s Z platform normally has support for 24 months before a new chipset and LGA socket released.That is why I lament the loss of the X platform for HEDT. I Intel revived their HEDT platform for their new chiplet style architecture they would be able to match AMD with chipset support duration.
@@Awaken2067833758 seeing as how their main focus seems to be limiting high wattage/high temps right out of the gate I hope you’re wrong but we will see. I waned to give intel credit where it’s due. We need competition.
@ your 9950x *HAS* had time to mature is spitting distance from the intel chip which is why I personally think it’s impressive. This is the first time they’ve used architecture (at least in modern times) that wasn’t their own and still was able to get it within the rest of the cpu lineup in terms of performance. This coming from a strictly Ryzen user since the beginning. I’m just saying they did well on their first entry
This is great and all but really only for people who use productivity tasks for work and can write off the expense. The chip alone is over $600 and Cudimm costing double sometimes triple standard ddr5 means for most consumers this launch should be ignored. For the price of just the chip, you can get a 7800x3d, mobo and 32gb kit at microcenter, if you dont have microcenter add 150 or so for all 3. This is not enough of an uplift for anyone with something from the last 2 gens to upgrade to.
@muzegames most people just browse the internet and do very light tasks. Fewer people game and then fewer still need a chip like this for their specific tasks
I know, I'm so confused. A few months back, we saw all this stuff about this new CAMM2 memory for all sorts of performance improvements on next-gen platforms. Then all of a sudden it completely "disappears" and CUDIMM is everywhere...weird.
@@Spinelli__ I think the answer is quite simple: in case of CUDIMMs the board manufacturer can stick to his standard layouts with standard DIMM slots. And also not much changes from the RAM manufacturers. For CAMM however you need to change both mainboard and RAM.
You would not see a real difference in real world performance. Zen 5 did not improve the memory controller from Zen 4. Even Alder Lake has a better CPU controller than Zen 5.
Nothing will happen. The AMD CPUs can't run those high RAM speeds without resorting to gear 4 on the IMC, which would actually harm performance. Arrow Lake can do 8800 MT/S in gear 2 so it can actually take advantage of the faster ram.
I'd like to know what percentage of PC builders are doing it for the performance stats rather than because they actually need to run lots of high computation applications. i.e. it's more like building a dragster.
It's funny, if you go back exactly 5 years ago you'd find AMD fans saying that a 9900k was a bad value when you could get a 3900x with more cores. Source: I was one of them and have a 3900x.
CK dimms are not so much expensive how i expected. RGB --- That Gskill F5-8200C4052G24GX2-TZ5CRK Modell cost 291€ (48GB) | Works on MSI MEG ACE Z890 RGB --- That Gskill F5-8400C4052G24GX2-TZ5CRK Modell cost 350€ (48GB) | Works on MSI MEG ACE Z890 Non RGB Versions are 40€ less, so u get one for 249€, some daily deals for 220€. That okay for brand new product. That Z-Royal Series are more expensive
This disparity between some instances the 8400 losing or far beating it with stock can only be described by instability leading to corrupting performance. Nowadays you lose ALOT of performance way way before having any "visible" instabilities due to the design of ddr5.
@@StrixWar no, unless you do something that specifically requires extreme performance other than gaming. most other uses could be covered perfectly well by a ryzen 5 1600, let alone a 7800x3d, including what most people would consider productivity tasks. my office runs on q6600s to this day and that has never been a bottleneck. the use for these extremely threaded cpus is even more niche than gaming. youtubers tend to put a lot of emphasis on this shit because they're youtubers --- video editing is one of the big things about it and these cpus help with that.
@@TheSkiddywinks Not everyone wants Ryzen though is the issue. Some people stay with Intel either because they're fanboys or they like using stuff like Quicksync
Reminds me of the days of the Pentium 4 Prescott where Intel were running hot, while AMD were offering lower cost, more efficient processors. They then changed their architecture and strategy with the Core2 range.
15:46 "this all looks good for Intel" is not how I would describe it. You are comparing the pricey 285K + pricey CUDIMM + overclock with a (currently definitely overpriced) 7800X3D with standard RAM and no overclock and the Intel system is still below the AMD system most of the times.
It does look pretty amazing unless you're strictly gaming, 285K+CUDIMM manages to top most of the productivity charts. Of course it does not make sense for people who intend to only play games (in which case x3D is an obvious choice), but vice versa 7800x3D is absolute trash for productivity, in most of those charts it is dead bottom. Different designs, different uses.
@@Aquaquake Fully agreeing with your analysis. But I would assume most on the people do not use their private computers for content creation or "productivity", in those cases all those P- and E-cores are idling, e.g. simply are a waste of sand and energy. As always: you should first be sure about your personal requirements and then decide which part to buy. At least if you are not one of those who always need to have the biggest numbers and fastest parts. And that's why I decided after Threadripper 1920X and Ryzen 5900X (which I bought also for some private FEA) to now change to 9800X3D. Which probably still is overkill for my use.
I'll tell you more Jay. I am having this approach also with older 13/14th generation.The limit is that E cores don't have the massive gap they have now. If you overclock P cores you see in cinebench p core 96C° while E core top at 65 C°. So with the 13900k I did 54 all core P cores, 47 E cores, HT disable. Ring 4500. On cinebench you'll see a worse score (but with max 270w). But with games................. Try ;)
Yes. This is a rushed launch again. After the 13th/14th gen fiasco, Intel should have known better. Arrow Lake needed some time to iron out its kinks because it's doing a lot of interesting things, but the results matter at the eod.
@@Kapono5150 This isn't entirely an Intel issue. It's also a Windows issue. We've seen it with Ryzen 9000 already, the odds aren't low that because this is a completely new architecture with a new way of doing things, that Windows might not be using it correctly
stock 7800x3d hits higher than 16700 in r23, consistently over 18000. IDK about the other tests i don't waste time with them. You have a bad sample. I can undervolt and limit wattage to 75w and still hit over 18000 every time.
his tests always wrong,not fair he disable RT raytrace and dont enable PBO on all amd,,,if he enable Raytrace and and PBO and use 6400 ram on all (use and gpu) intels 50% crash down
This really should be compared with 14900K with fast RAM because otherwise it's just making the 285K look better than it should. It's kind've been known for a long time that tweaking Raptor Lake can get you better performance than a 7800X3D because AM5 is limited in its RAM speeds.
14900K doesn't support more than like 5600 unless cooled by liquid nitrogen. It's motherboards also do not support more than in the ballpark of 6400. You wouldn't see a difference because the speeds just aren't supported. It was like using 3200 in a i7 7700 build that was restricted to 1600-1800. It ran at 1600-1800
@@goldenhate6649 You are incorrect. The 14900K doesn't need LN2 to run ram faster than 5600. With an AIO or custom loop the 14900K can run 8000-8600. I haven't seen a 14900K that can't run 7200.
AM5 can just as easily handle 8000MHz RAM as Raptor lake. There just isn't much point going past 6400MHz in a 1:1 ratio. That's a good thing. Also, how about you show us your sources proving that a tweaked 14900K outperforms a 7800X3D in gaming instead of talking out your butt?
@@Nayah9 You don’t need his proof, look at Jayz Cyberpunk benchmark. If you overclock that 13900K it eeks out a win, combine it with DDR5 8000. It’s possible, whether or not it’s worth it is a different question that you weren’t asking in the first place. Intel provides so many knobs and levers to play with to OC that it’s fun to tweak. It’s like driving stick shift, not a practicality thing but enthusiast activity. Arrowlake has way more of these controls over Raptorlake as shown in the Skatterbencher videos.
CUDIMM isn't just about faster clocks, it's about way more stability. Right now to hit high mem clocks you need to jack up the SOC/IMC/PHY voltage which cause's all sorts of issues. With CUDIMM's you can get those higher memory speeds without needing to increase SOC/IMC/PHY voltage to drive the memory.
Just need some tinkering on intels side. It is amazing to see people criticizing tech, but nobody criticizing the tech isnt able to engineer nothing. Its a reason you are a consumer, and they are the creators
This was always going to be Intel's Ryzen 1000, the difference is Intel wasn't desperate and needed a Hail Mary from going under so could have worked on it longer. This is the tick cycle as is Zen 5 and I expect the Tock cycle of this and Zen 5 to improve a lot
@ai-aniverse that's the point. If you don't like the product don't buy it. But at the end of the day, the creators can create whatever they feel like creating, and the consumers can buy what they want and at whatever price
its huge step for this socket....nova lake is around the corner for stronger e-cores ....since INTEL competing with AMD with SMT off....without SMT on AMD ...it losses to Intel and amd has nowhere to go after zen5....power usage is gonna upwards in next update with zen6...300w
The gaming improvements all seem logical to me. The whole reason the x3d chips are awesome is because of the low latency and high bandwidth of the l3 cache. If you can improve main memory bandwidth and lower latency to access the memory, it will close that gap a bunch. Can you disable the E cores in Arrow lake easily? Would disabling the E cores and then overclocking the P cores while overclocking the RAM and cache yield competitive gaming performance? Probably not worth doing for normal usage cases, but, I'm curious. In any case, Im keeping my 7800x3d for now, and likely for the next several years unless 9800x3d surprises me. (EDIT: i should have finished the video before posting. I like the rhyming at 15:30 and at the end, Hahaha)
I already told at the release just to wait before burning Intel to the ground. There are a lot of good things about the new architecture. It can consume really low power if DLVR is disabled an lower voltage is selected. It is crazy effecient then. Also microcode updates and Windows updates together with game optimazations will put this cpu very high within 6 months. Only thing that is not good atm is the price of the platform. That is CPU, Ram and motherboard are very expensive. Give it some time and this cpu will be much better and a real competitor for the top.
I've been saying this on reddit for weeks, everybody overlooking CUDIMM. I was expecting to gain back the 5-15 fps in games when compared to 14900K tho, not a massive +50 fps boost.
Here's the problem: Intel Core Ultra 9 275K, $650, G.SKILL Trident Z5 CK 48GB (2 x 24GB) CUDIMM DDR5 9600MT/S $399, ASUS ROG STRIX Z890-A GAMING WIFI Motherboard $399 for a total amount of $1,448 USD, prices from Microcenter, Newegg, Amazon. While AMD with the Ryzen 7 9800X3D with any motherboard, past or recent gen with a moderate DDR5 kit ranging from 6000MT/s to 8000MT/s will decimate the intel combo for a lower total build price too.
Your comparison is horrible because it’s apples to oranges. An 8 core gaming CPU with entry level motherboard and DDR5 ram. To a 24 core Intel CPU with high end motherboard and ram. Obviously it will cost far more because it does beyond more better. Tuning the 285K will allow it to get even closer to a 9800x3d in gaming while destroying it in everything else.
Your comparison is horrible because it’s apples to oranges. An 8 core gaming CPU with entry level motherboard and DDR5 ram. To a 24 core Intel CPU with high end motherboard and ram. Obviously it will cost far more because it does beyond more better. Tuning the 285K will allow it to get even closer to a 9800x3d in gaming while destroying it in everything else. Also people that buy either of the CPU’s are not goi g to be gaming in 1080p anyway. So games with be beyond more GPU bound regardless. Viewers have to differentiate the difference between CPU testing and real work performance difference.
The CUDIMMs should come down in price fairly quickly. The only difference is a little clock driver chip, so they shouldn't be much more expensive to manufacture. They just don't really exist on the market yet. Motherboards are more egregious, but AM5 was also ridiculous on launch, so that seems to be more on motherboard manufacturers scalping the high end.
@@ryanspencer6778 they will not come down in price quickly, and it is not about the chip. Low demand means high production prices, depending on the demand it could take a year to a couple years for prices to start going down
Not a very honest comparison. There is no point in using the 9600 CUDIMM. Intel says 8000 is the sweet spot for ARL. Also, the 285K with fast memory can do a decent job at gaming but the 9800X3D is far far weaker than the 285K at workstation and productivity tasks. Apples to oranges comparison.
Spark Notes:
- By using cudimm ddr5 and overclocking the 285K. It can now "sometimes" keep up with the 7800X3D.
- Jay is testing the 9800X3D already.
- Nov. 6 at 6am. The 9800X3D reviews will release.
- The 9800X3D will make "all of these charts look small".
THANKS JAY!
Hmmm... It's not the size of your chart, but how you use it... 😏
9800X3D will be like 10% above a 7800X3D on average, that's not nothing but when your fps is already 300 is another 30 going to matter? Not really. Intel may not win gaming 90% of the time but its close enough while also offering significantly more multithreaded performance. I'm on a 12700k and want to upgrade to ddr5 and a newer platform BUT getting an 8 core X3D part is a side grade. If AMD could release the 9950X3D with cache on both CCDs it'd making going AMD an easier choice, but when intel is offering 20 real cores for $400-450 and AMD is giving you 8 cores and 16 threads with SMT for that price its a much more difficult decision.
@@Malinkadink maybe its similar likes, u have 8 worker with very quick performance much better than u have 20 worker with slow performance.
much core is always good but useless if slow than less core with good performance.
@@Malinkadink Bro 5800X3D is the side grade to the 12700K. It's not a bad CPU, just lets be honest here. If you're production its the 16 core part or TR, gaming it's the X3D every day of the week. Nothing wrong with the 12 series, better that than 13 or 14 series but time to be fair to benchmarks, the 12 series goes up against the 5 series.
That's if Windows don't choke The 9800X3D. I personally don't pay attention to benchmarks and charts because those don't relate to real world use. AMD does make some damn good gaming processors, but they suck azz at VR.
We found a fix. Spend 700$ on ram...
Always be upgrading.
😂😂
Having money is always the best solution 😅
For normal people on a mid tier gpu, every extra dollar on a better graphic card yields a ton more.
And a stupid amount of overclocking
We can shit on Windows but why don't Intel and AMD work with Microsoft to fix their shit BEFORE they launch new products?
microsoft is speedrunning the demise of windows as a product and are too buissy mopping up every VR/AI/GAMING IP they can get their hands on and or anything proprietary thing that "might" make them more money than god in the future.. current customers be damned.. after win 11 whatever they release people will throw themselves at it in hopes that is no where near the dogshit 11 is.. as proof by the continued support for 10 for 30 bucks a year :D
So let me get this straight, in order to get the real performance of the 285K you need to:
- spend extra on the motherboard
- spend an additional extra on CUDIMM memory
- you also need to overclock your CPU
all that just to get close to a 7800X3D that runs out of the box with standard DDR5 memory. How GREAT!
Ya and you can still optimize memory timings for a 7800x3d to pull ahead even further for very little money. A B650 board with a 120 Dollar Kit can gain another 5+% performance for very very little money on the 7800x3d. The value of 7800x3d is staggering in comparison to Arrowflop.
And when we go all out with money on the 9800x3d like this video did with Intel then this will be a FPS massacre in favor of AMD.
Calm down guys, this is intels Ryzen 1000, they are learnin new stuff about CPUs after like 10 years, next ones gonna be better, than even better, just like AMD 😀but yeah, this a shit launch and intel has a shit CEO
I don't know man, 7:58 doesn't look very close to me. As a matter of fact, I am not sure if 7800x3D and 285K can even see each other from the opposite sides of the chart OMEGALUL
@@Aquaquake and than suddenly gaming benchmarks kick in and it looks like a joke 😀
@@PeterCencul Almost as if 285K is not intended for people who only play games. In those cases it doesn't make sense to buy anything else than x3D at this point, no argue with that.
Interesting, but imagine paying over double on cpu and ram just to lose to a 9800x3d
Or a 58x3d
of course. maybe Intel needs to poach the AMD engineers that keep kicking intel's ass. And Intel STILL charges a WHOLE lot more.
Lost for words.. 😂 Ai plus 289k 454. 350. 351. 360. 460. Lincoln So what’s the K mean again? press F to continue..
the 9800x3d isnt out yet neither is the price or info on it so too soon for that
Doesn't matter really, 9800x3D is NOT a competitor since it's performance for other workloads outside of gaming is not even in the same ballpark. Also, when up to Ryzen 5000 series, AMD users always boasted about application performance and how important it is (since it was winning in those), now all I hear everywhere is frickin gaming, which doesnt even make sense, since with ANY OTHER GPU OTHER THAN 4090 YOU WILL NOT SEE A DIFFERENCE, ESPECIALLY IN 1440P AND BEYOND. God people are stupid.
The 9800x3d reference at 15:34 lol
7800x3ds and everything else are about to drop price in the used market if what jay says is true
"Everything's about to look really small" Hell yeah!
Really glad I just switched to AM5.
@@adelalatawi3363 The 7800X3D doesn't drop in price since it isn't even made anymore. :D
Please use the correct term; it's the schm't'lit'shlut CPU
6:31 thanks, Steve! 🙃
yeah......GN, meh
Thats a hell of a picture to use for GN Steve. I lol'd.
@@themice42 h8ers gonna hate
@@Vtarngpb Pretty much. I love Jay and love GN. The channels are geared toward different crowds. GN I go for deep dive technical details that will put me to sleep, but super useful. (Its why I'm a patron.) But I love Jay's casual attitude, layman explanation, and frankly how he builds PC's. I really wish Jay would start up a patron though as I would support the crap out of this channel. (Already have with all the shirts I've purchased and mats I've picked up.
@@Vtarngpb If he wasn't unbearable to listen to, the content would be great
$285 price tag will make 285k faster for sure
When Jay started showing gaming scores, I was like what is this going to look like with the 9800X3D? Then he just causally, and also crypticlly, says sminedehacehundredsmedee.
There's definitely something to read into his "just wait..." statement.
@@JDCheng the fact that the 7800x3d still tops the charts speaks volumes.
@@joncundiff1085 Not really. It IS a gaming oriented part with it's almost 100 Megabytes of cache.
@@PixelatedWolf2077 and the 265 was supposed to gaming oriented. But it was hot shit.
@@joncundiff1085If you only care about gaming at 1080p sure it’s the best lmao.
Anyone else looking at the unfinished benchmark in the background lol.
😂why is it taking so long to render that scene, i can't unsee it now
Great Results!!! Now overclock all the rest of the cpus and test again ....
There shouldn't be much of a difference, Arrow lake is "special".
Jay didn't overclock the Intel CPU. This video was about more compatible ram that clearly the Intel CPUs need to work how they were intended.
But I like many still won't buy Intel if I gotta spend an insane amount for CU dimm.
Oldman 7800X3d about to be dethroned by his Son Jay confirms and Grandpa 5800X3d couldn't be more proud.
like always, the Windows scheduler is trash
if you look at the linux benchmarks made by Phoronix, the new Intel CPUs are delivering a lot of perf/watt
Power consumption is better than last gen but It loses against the 9900x and it is very bad in games
@@Awaken2067833758 Absolutly not! 9900X is not faster in gaming. It is also pretty slow. If you buy the correct ram it will be faster then the 9900X. 9800X3D probably will be faster.
And it already is proven that the motherboards have an option with Asus were DLVR can be turned of and it saves a lot of power. This is the first cpu on the market were every single core can it's own voltage profile. The DLVR is bringing 1.5 volts down to about 1.1 and then up losing in multicore about 80 watts!!!!
With a good cooling, undervolt you can get 40K R23 with 150 watts or less! It is not optimized at all atm.
i saw a chnnel who used windows insider build for addressing the scheduler and they were getting mass gains on the core arrow
But almost all the people who would consider buying one of these is not on Linux. They're on Windows. If it's broken on Windows, it's broken. If it has to be OC'd to perform, it's broken. If it needs a $300 RAM kit to perform, it's broken.
@@rangersmith4652 I understand what you are saying, but also it doesn't make much sense. It is designed to work best with that $300 RAM. So that is when it works best.
That's like buying a car that offers a 4 cylinder engine and a 6 cylinder engine. "But having to pay $5000 more for the 6 cylinder engine means the car is broken". Like no, it still works, but if you don't buy the best for it to run the best, it wont run at it's best.
Or smaller scale, "when I put 86 octane in my car it has knock and ping and gets terrible gas millage, but if I get 91-93 for $0.50 more it runs great, the car is broken". No the car is designed to run best on 91-93 octane, so if you buy cheaper gas, you get less performance.
It would be nice to have a frame time graph since frame times are also an issue with 285k
I think I understood the secret message at 16:45: "Be sure to drink your ovaltine".
Roger that, Jay! Roger that!
Really? All I heard was yvaN ehT nioJ, but I can't figure out what it means.
I saw somewhere where someone tested some titles with only a single P core running and saw a large uplift. This gives an indication that a new scheduler and/or Windows driver/modificaiton to account for Arrow Lake's HW based thread director.
Ring bus and layout plus io die That's a lot of latency
Yes. Danny Z spoke to this as well. There are a lot of issues going on, Its not all on Intel.
@@NBWDOUGHBOY can still blame intel for trying to reinvent the wheel
@@MrMarrok657I honestly think intel made the right choice with switching their architecture cause the “just throw more power at it” approach was really reaching it’s limits with 14th gen so to get “similar” performance at lower TDP is better in the long run
We did kinda have a similar situation with AM5 as well back in 2022 where due to having a new architecture where no one knew how to properly take advantage of and Microsoft being Microsoft with windows the new CPUs lost to even older AM4 processors but as time went on with updates on both windows and bios side so I’m expecting something similar to happen with 15th gen… I MEAN core ultra 2! But Intel didn’t stick the landing as well as they had hoped and they’re on step one again
That's crazy how much the RAM is impacting this cpu.
Not really, when you consider intel has effectively copied amd and moved to a chiplet style design. We've known for a long time how important ram speed is to the zen architecture and the arrow lake architecture looks to be similar. Very few people will be buying cudimms at launch though as it again is similar to the am5 ddr5 only situation where memory was very expensive compared with going ddr4 on intel.
RAM speed affects all CPUs, more than most people think!
@@bionicgeekgrrl I would accept your argument except AMD is actually slower when you get RAM over the sweet spot speed, and it has been this way for the last few AMD generations.
The Intel and AMD SOC architectures are quite different. Even the Intel approach to "tiles" are quite a bit different than AMD's chiplets. Besides not being monolithic, the two aren't very similar at all.
@@bionicgeekgrrl now makes you wonder.. what if AMD released CUDIMM chips? Would they benefit Zen as well?
Where's the link to download my extra FPS?
You have to specify if you want to download more Ram or faster Ram
@@martinprince8253 Dedotated RAM
JAAAAAAY. YOUR TEST FAKE,,,,you disable raytrace and dont enable PBO on all amd(they run in minimum not turbo),,,,all of us know your are intel fan boy and your tests not accure(only tomshardware,hordwareunbox are fair,accrue),,,,if you fair go enable RT raytrace and and PBO then new intel goes to down of all charts,,,,,this new CPU from Intel same old shit (not made in TSMC) after 3 month give blue screen in games
@@martinprince8253 Instructions unclear, now I have a truck.
So, this is sort of an Early Access CPU, but I’m sure it will be worth it once they finish, and you buy all their new CPU DLC. 🤡🌎
Yes, you found performance gains... but at what cost? *insert Thanos meme*
6 - 18 months later Jay: Everything, it has an unrecoverable instability failure
Where did that bring you? Back to Steve.
A lot of power and a lot of money?
@@mastermind6000 Thanks, Steve.
12 months later, Jay to Intel, I don't even know who you are.
I'm loving my 265K. 7600mhz XMP working great. I was already gaming in 4K on my 10600K with great frame rates but this fixed my performance in Cities Skylines 2. One thing that keeps me on Intel is QSV, I converted a 2hr 4K movie to 1080p in 9 minutes with Handbrake, 1.3GB result. I recommend AMD machines to friends that ask me but I'm just not into it myself
So you just dont like the best products
QSV is cool and all but couldn't you do the same with likely way better performance with any discrete GPU, especially Nvidia ones?
@@ekifi I have a 4070 Ti but QSV beats it by a long shot. It's not even close.
I think its an overall great CPU as well, and it hopefully serves you well long term.
pretty much all the newest mid-level and upwards CPU's are great actually, there isn't much point in argueing between small percentabges in these benchmarks.
I think the pricing is not good though, YET, hopefully they adjust it to where it should be and everyone wins.
And I want to wait one more generation away from 13th/14th gen for Intel to earn back some trust.
11:50 Messed up test - 7950x3d is slower than regular 7950x?
@@TheEldarie His AM5 results are very low
@@PowellCat745his AMD results are always low
@@Riyozsu true
Props for using the goofiest possible GN Steve picture imaginable (6:31), lmao 🤣🤣
Starts to sound like Pentium 4 and RDRAM.
Now do these results vs a 9950x and a 7950x that is overclock. Oh and undervolt the x3d chips too.
Thank u ! Just like the original video they are really going out of their way to put the best light they can on this cpu.
@@goinpoztal Just like people had to figure out how best to set up the new AMD cpu's, people are figuring out how to get the best out of these new Intel cpu's.
Yeah how are you going to use different memory and overclock the intel chip and not use the same memory type . But do nothing with the amd cpu but leave them stock so these results are flawed in a 1 vs 1 comparison. Even if you are using the updated memory for the Intel chips. You still need to undervolt x3d and overclock the X chips for AMD. So there fore these results are invalid period.
@@ThorntonWillie Different architecture. I'm no expert but I know that AMD chips work best with specific ram speeds. There are sweet spots. So putting in the fastest ram available won't help AMD cpu's and its been that way for a little while. As for under volting and so on, everybody's not going to do that just like most people aren't going to overclock their E cores in Intel, but people with the dough for a 285k will probably be able to come up with the money for faster ram.
The AMD chips have very little overclock headroom and also cannot use the frequencies these CUDIMM memory kits afford without running the IMC in gear 4 which would actually hurt performance.
Arrow Lake also doesn't have much overclock room on its P cores but it can seem to OC the E cores quite a bit along with the interconnects. Finally it is able to run CUDIMM DDR5 8800 in gear 2 which no other CPU currently does.
The ecores are not supposed to be playing games to start with, they are for Background tasks not foreground.
6:31 Right. You could not find a more flattering picture of Steve. 😂🤣😂👍
its huge dejavue with this amd 3dnow similar idea vs pentium 4 RAMBUS early 2000;s ....now its x3d vs CU-DIMMS....amd fans did mention something about zen5 is memory bandwidth starve while arrow lake is well fed by better memory sticks and still more headroom as we speak...it can peform badly on old ddr5 standard to cu-dimm gaming level performance
LOL dude. Come on now. Your intel bootlicking is showing and its not a good look.
Intel was pretty up front with the fact that these aren't going to set any performance records and are a stepping stone for later. Still, the E cores have come a long way and seem to have a good bit of headroom. I could see them relying on the E-cores a lot going forward to get core counts higher while keeping the P cores in the 8-12 core range.
Let's break the code from 15:28 guys, we can do it, the future of the Earth depends on it.
I think everyone is aware that the main issue with non-monolithic chip designs is latency.
I genuinely wonder if this generation of CPU was supposed to have higher clock speeds on the cache and ring but due to what happened with 13th and 14th gen, they had to lower the speeds for safety.
That seems very plausible
to quote The Beatles "It's All Too Much". and i'm pretty hardcore.the fact that this launch is a hot mess at a dog show is the beginning. now you need a new mobo, the CPU is overpriced with bad out-the-box performance, and now you have to pony up for CUDIMMs ? the fail is in allowing any compatibility w/ UDIMM and burning early adopters.
all that and you still have to tweak the bios/OC? i don't think so
It reminds me of the 7000 launch. CPU's were overpriced. Motherboard were expensive and you need DDR5 which was hard to come buy and expensive. A lot of people were making excuses until the launch of the 7800x3D
@@yonghominale8884Um.... _Almost_ correct. 7000 series wasn't dependent on the new RAM, the releases just happened to coincide with each other, at the drop of AM5, which was to be paired with DDR5. But 7000 series by itself could be either AM4/DDR4 or AM5/DDR5. Easy thing to confuse, but distinct enough difference to be called out... Specifically because it (7000 series) brought upon performance uplifts *by itself* that were noteworthy over its predecessor without the need of some other hardware (like CUDIMM) to get out of it like the 285k apparently does. (BTW, I said "noteworthy", not "amazing").
So, in short, Intel's new architecture is heavily affected by faster RAM, the way Ryzen's chips were/are? History doesn't repeat, but it sure does rhyme.
21 seconds, I wish I could say this is the only time it takes me that long but I'd be lying
Thank you so much for your content. I have the Ryzen 7 7800 x3d, so im happy, but I do like Intel CPU too. Im in a happy place though. I have the Rx 7900 Xt paired with the 7800x3d so im good. Your results are good to see though. Thanks again.
Now comes the question, with this correct RAM setup, can quad channel RAM actually work and improve performance further?
These chips don't have 4 ram channels...
god i wish we had regular desktop chips with quad channel, i'd kill for 1dpc 4x48gb
@@zCaptainz They kind of do 4x1/2 channels, but it's a bit academic, since you could maybe leverage it in some specifically written software. Not general purpose one. There was a video on it by Der8auer, prior to embargo.
And for both AMD and Intel - yes, true 4 channel memory would boost the performance. Zen cores especially are so starved that Alexander Lee (from y-cruncher) calculated that to fully utilize them you'd need something like DDR5-21000 or thereabouts.
DDR5 is already quad channel tbh
@@HoLDoN4Sec sort of
Great testing. This makes sense and makes an argument for Intel. Now to see if game devs can leverage the NPU's to get more in game performance also.
Finally caught the 7800X3D overclocked to the max. Too bad the 9800X3D about to drop and once again shit all over Intel. Sad, I was hoping the could reign in Amd.
Wut. 7800 cant be overclocked, its locked. You been looking for something that cant be found
Please consider including the 7700x in your CPU tests, since it seems to be performing incredibly well in games (in Cyberpunk it's even close to the 7800x3D), while only costing around 275€ since many, many months now
Imagine if the car industry worked the same as the PC industry works today. You bought a Porsche, you paid for a Porsche, but before you drive off the lot the dealer spins a wheel and it lands on "Sorry, your car will never perform better than a Corolla unless you use the right air freshener, better luck next time".
Sounds about right...lol
Actually, a car where you have to pay for a subscription service to unlock top speed sounds like *exactly* the kind of dicking around that car manufacturers are doing right now.
Yeah, no. That's not a remotely close comparison to the situations. Try actually understanding the video and the information before writing out ignorant ass comments...
High octane gas, better Oil, better tires. Sports cars require extra costs for the extra performance.
More like you bought a car and you have to buy tires for it. If you cheap out and get crappy tires, your performance will be subpar. If you buy good tires (good traction, speed rating, etc.) then you'll get better perrformance.
just disable all P-cores, leave 1 P-core & all E-cores enabled, you will see a lot better performance. better than any one revewed it untill now. they need to fix Thread director again
EDIT: the benefits i talked about are only in Gaming.
If you said that back when 12th gen launches everyone would think you're joking, now I'm not sure because those E-cores are getting really spicy.
@TheFPSPower nah bcz E-cores are in between P-cores. See the architecture in 14th gen and 200 series gen.
Windows doesn't work well with the new thread director in ultra 200 gen. And there's a lot of latency from P-core to P-core and from all P-cores to IMC. And a lot more to it
They won't get credit for it this time because everyone loves to use the chip makers as punching bags(and both new gen CPUs are still underwhelming) - but it's really starting to sound like Microsoft really borked BOTH of these new CPU launches.
@@RyTrapp0 Seems MS is not really working on the core components of Windows anymore, it's more like packing stuff on top to get people into the cloud and get their data.
Microsoft surely could get some good programmers to get a new scheduler (if they don't have them on their own), but it seems they are not really interested except there is some public pressure. And you can program much better schedulers, the Linux results are quite clear about that.
@@alilokhd4638 Unfortunately this guy was not smart enough to OC D2D and NoC/NGU to reduce mem latency.
Let me spend an extra grand to get 2 frames... yup, getting right on that Intel.
Intel is going to need a windows scheduler update if gaming is better with boosted e cores. Likely P core isnt being utilized properly yet
Also, someone showed turning off all the P cores except 1 and run on all e cores gave a boost in some games
someone on another channel demonstrated this by installing an insider build so once microsoft drop it into updates may see this improvement in near future
@@johntipeti4597 there is absolutely something wrong with arrow lake with software side but at this point I think that Intel making that on purpose ,maybe when AMD launched 9800x3d they will give update
Step 1, don't buy it. Next.
Wait.. Zen 5 % did not sell at all.. but Intel Ultra 3% has sold like crazy? I don't understand.
@@BReal-10EC It’s a paper launch. That’s why motherboards are oversupplied.
If there are 100k Zen 5 chips and only 10k sold, that didn't sell so well.
If there are 10k Intel LUltra chips and only 9k sold, that's a huge success.
In this hypothesis, AMD would've outsold Intel, but be treated as an abject failure because 90k chips are sitting in the warehouse.
In reality, I don't know the figures, but I do know that everybody's waiting for the X3D version releasing soon. These releases are very close to Christmas so why splurge now?
X3D factor.
Not ultra 3% with CUDIMM. More like Ultra 20%. AMD truly delivered a terrible architecture.
@ProVishGaming You need expensive ram to make a new intel cpu obviously better performing than the cheaper previous gens. That shows promise, but a bit more performance for way more money is not the right direction. The ocing is cool and shows potential, but it's not really something to compare since cpus vary in oc quality, and it is warranty voiding. Previous cpus could also be oc'd. And AMD has the same issue - a smidgen of more performance for quite a bit extra money (compared to previous gens cpus current pricing).
Bro just said the 9800X3d about to slap.
Yeah for About $650
@@thetheoryguy5544$479
@@thetheoryguy5544 try 479 dumb dumb
@@SPG8989 Still too much for a chip only good at gaming.
@@thetheoryguy5544 Unfortunetly you are right, it costs way too much for a 8/16 cpu.
did you try to find like the worst picture of Steve you could or something? lol
That image is a classic!
😂
@@MaaZeus really?
@@MaaZeus he looks stoned or something. lol
I know I'm DIMM Jay, please stop reminding me with, "C, U DIMM"... :(
lol :)
@Jayztwocents I have an idea as to why their could be a small performance difference "just" from using the CU-DIMM's as seem @7:15 and it is down to the memory controller simply not having to do as much to get the memory clocks that high, and perhaps this is down to the reduction in power draw by the memory controller, the same thing was seen with Zen (3 I think) with overclocking, more performance was gained with lower memory clocks because it "allocated" more power to the cores. Could it be a repeat of that, which was also a chiplet related side effect.!
Found this…
“This month, Nvidia's GPU Display Driver and related software updates address eight major exploits. All of them except one allow for code execution and open up vectors for escalation of privileges, data tampering, denial of service, and information disclosure. Impacted users of Nvidia GPU drivers and GPU software are advised to update as soon as possible.”(toms Hardware)
Our newest GeForce Game Ready Driver is packed with support for new games, all featuring support for DLSS 3 with Frame Generation and Super Resolution.
By installing our new driver, you’ll optimize your experience in Alan Wake 2: The Lake House, Call of Duty®: Black Ops 6, Dragon Age™: The Veilguard, Horizon Zero Dawn™ Remastered, No More Room In Hell 2, Red Dead Redemption, and The Axis Unseen.
Additionally, there’s support for 32 new G-SYNC Compatible gaming displays, and 3 new Optimal Playable Settings profiles.
Also .. same update conflicts with ICue.. can't win anymore
@blitzhacker6981 AMD user here. You've been having issues with ICUE as well? Mine just randomly never opens again then i gotta repair the installation. So annoying
i came for the comments and was not let down.
cant believe you called 7200mhz slow LOL
slower is still slower, 8400 being 1200 more is absolutely silly, lmao
It's slow now.
@@lizardpeter ddr5 7200 is the new ddr4 2400, smh
Because he never has to spend the stupid amount of money to buy them
Been using 8000mhz for 2 years already here. 7200mhz is middle of the road stuff. 6000mhz is terrible for anything Intel 13th gen or newer.
I feel like this makes sense, given the same architecture is used on lunar lake which has ram embedded with the CPU. Anything that increases memory speeds and reduces latency will greatly improve the performance.
People coping hard thinking they will double the framerate going from 7800X3D to 9800X3D. It's about a 10% improvement in best case scenarios.
It's a thread scheduler/thread parking problem, right?
- I've seen some data that showed limiting to 1P + nE cores in Cyber Punk helped - showing it wasn't using the P Cores properly...
Honestly I'm still saying it's just not worth it. I don't understand how they're all sold out. Did people miss the reviews? I don't see any recommendations for the CPU. That coupled with 1851 being a platform for maybe 2 years and then you'd need to fully upgrade the platform again, I'm just not seeing it. It's a shame because we could use some more competition again. Actual fights for market dominance, price cuts, innovations and more but I think we have reached a point where the general engineering behind chips has come to a little bit of a infrastructure problem because unless there is some major new process we can't keep going smaller and smaller when it comes to transistors etc. I'll upgrade next year and then wait a loooong time to see where all of this is going but surely I will not use an Intel CPU.
Think outside of the bubble of enthusiasts
You have to understand that A LOT of people don't watch/read reviews. They see that new stuff is coming out and will just buy it. I sell computers, components, and peripherals for a living, and the amount of people who come it to buy parts and don't know how to put it all together and/or haven't done their research is staggering!!
@@marioStortuga Oh I do that but this video is about an enthusiast CPU. No non enthusiast would buy this after informing themselves. There is just no justification to go with Intel anymore. Their high end CPU is much slower than the 3rd highest CPU offered by the competition when it comes to gaming. That's why I said we need more competition. I want Intel to get back on their feet and I want them to make amazing products again so we all profit. Enthusiast only means I'm willing to spend large amounts on something that I probably don't need but want, but even then I'd love to save a lot of money too hehe
Being sold out is a matter of supply and demand. This wouldn't be the first launch with barely any supply.
@@ABP8214 I just wonder how large the actual amount of people is that don't inform themselves and buy a 285k vs. people that buy CPUs in general because I can't imagine the uninformed buyer base having so much money to throw at something this expensive. Also as a salesman you can make the difference which I'm sure you do! Educating customers binds them but I don't need to explain that. As I said, right now I just don't see it, but I'm merely one man with two eyes, I can't know everything which is why I'm glad you guys reply and give me input.
Glad Jay made this video been looking at doing a upgrade but wanted to wait and see what happens with the 9000x3d chips. From what he says at @15:34 Im happy I waited. lol
Also him trying not to give info out in the way he did was funny I loved it.
So you need extra expensive shit to make it perform? Ok.
The problem is we dont know CUDims coud be just as cheap as normal Ram or they want to marke it up . Its just to early .
Like everything else, good things are expensive
When DDR5 first came out it was outrageously priced. Now it's dirt cheap.
@@jondonnelly3 Yeah but when such ram becomes cheap this intel gen will be already old and not worth it.
@@xythiera7255 It has added stuff, so it has to cost a bit more at least.
I would still go with the 7800X 3D !
So basically what we learned here is that Intel's new flagship needs fancy/ more expensive ram, and overclocking to compete with a 7800X3D out of the box. And still loses in most games... Annnnddd the 9800X3D hasn't even entered the chat. 😅
And its on a dead end platform.. What a waste of time!
Doesnt the 7800X3D need 6000mhz ram to ve good? 😂
@@dankmemes3153 no lol 6000 isnt even that fast now anyway
We found a FIX, spend $600 on RAM and spend 44 hours overclocking your PC to the max...
LMAO in Cyberpunk you got a jump from 217 to 238 but still lost against stock 7800X3D
Losing in the non-CPU bound default benchmark as well
@@PowellCat745 I bench a LOT in cp2077 and I have several saves made for it where I run a 5 minute logged session and then average them. The built-in tool really doesn't show what you would want to know for buying hardware.
@@arc00ta exactly
Not really surprising or funny. If it's on par or slightly slower than 14th gen then why would it beat a hard pre gaming centric CPU?
7800X3D is made to be strong at gaming but not much else. So not surprised.
"I haven't updated the BIOS as it'll invalidate my tests" - RE-RUN YOUR TESTS THEN
To completely redo a large suite of tests and the like takes time, especially when you're only working with one available/usable benching system. To get info out at any sort of reasonable period of time, you have to note where to draw the line. By keeping it on the original BIOS, you need to do 2 passes of testing (285 w/ CUDIMM and 285 OC w/ CUDIMM) rather than 3 or 4 (w/o and OC w/o), and any gains the new BIOS gives `should` carry over and apply to all 4 situations.
I'm still happy with ddr4. I think i can get 5 years out of it
easy
I mean some people are still on DDR3 platforms.
If you're not wanting for more performance, you're good
@RealGreyGhost ddr3 is long dead
@brysonshires9742 Not true. It's still used by a very large portion of PC gamers. Just like when you go to a steam hardware survey and see most people are still on older hardware in general.
If i didn't have my current rig, my old rig with DDR3 would still be plenty to enjoy all the games i play. Most games up until 2022.
this entire am5 generation is easily skippable. i'm way more bothered by my i/o than cpu speed (5800x3d).
All I understood at the end was No Shave November starts at 6 am.
WAIT! WAIT! JAY--ARE WE GOING BACKWARDS IN TECH!?!? Way back when (I’m 69…). I remember the 8086, 8088, 286, 386, 486 CPUs from Intel. Now we go from Core i9 14th Gen backwards in time BEFORE the vaulted and hallowed 286 of the early 90’s to the NEW Intel 285 chip??? WHAT NEXT? MS-DOS 8 RUNNING WINDOWS 12?????
msdos 6 running windows 12.
(I'm 55) Waitwait, do you mean the 286 is a copro to the 285 and with the 287 you get yet another boost? Wow...
to the 385 - try to fit a '386 chip in there and there will be trouble!
The 485 will be the truly hot one, with a really LARGE on-chip cache! :P
Yeah, when I saw their new naming for the first time, I was like “but you had 286 decades ago (I’m old, so please no one do the math, I don’t want to be reminded about how old I am.)
It's almost like NetBust/RDRAM all over again.
A 2.4k increase in R23 is not bad at all. Once they’ve matured their architecture these might be potent little cpus.
back to the 300w little cpu
That is true,however how is it going to take the Intel processors to realize that performance.The other fundamental problem is that Intel’s Z platform normally has support for 24 months before a new chipset and LGA socket released.That is why I lament the loss of the X platform for HEDT. I Intel revived their HEDT platform for their new chiplet style architecture they would be able to match AMD with chipset support duration.
@@Awaken2067833758 seeing as how their main focus seems to be limiting high wattage/high temps right out of the gate I hope you’re wrong but we will see. I waned to give intel credit where it’s due. We need competition.
My 9950x gets over 43K on Cinebench with memory at 5200Mhz. This Arrow generation is a bad joke with 24 cores just to get there using overclocked ram
@ your 9950x *HAS* had time to mature is spitting distance from the intel chip which is why I personally think it’s impressive. This is the first time they’ve used architecture (at least in modern times) that wasn’t their own and still was able to get it within the rest of the cpu lineup in terms of performance. This coming from a strictly Ryzen user since the beginning. I’m just saying they did well on their first entry
Please record the crashing sound that thing makes when it lands in the Dumpster.
This is great and all but really only for people who use productivity tasks for work and can write off the expense. The chip alone is over $600 and Cudimm costing double sometimes triple standard ddr5 means for most consumers this launch should be ignored. For the price of just the chip, you can get a 7800x3d, mobo and 32gb kit at microcenter, if you dont have microcenter add 150 or so for all 3. This is not enough of an uplift for anyone with something from the last 2 gens to upgrade to.
If all you care about is gaming at 1080p sure go ahead.
@@ZackSNetwork gaming at any resolution
@@Awaken2067833758not true. Gaming at 4k there is very little difference between most modern CPUs made by AMD and Intel.
Not everyone games. Most people have talent and actually create for a living. That puny little 8 core CPU is a bottleneck.
@muzegames most people just browse the internet and do very light tasks. Fewer people game and then fewer still need a chip like this for their specific tasks
Thanks for the extra tip at the end of the video about new CPU's. I ordered fans and a case last week.
What happened to camm memory?
I know, I'm so confused. A few months back, we saw all this stuff about this new CAMM2 memory for all sorts of performance improvements on next-gen platforms. Then all of a sudden it completely "disappears" and CUDIMM is everywhere...weird.
Isn't CAMM2 for laptops ?
@ not exclusively. They have camm2 desktop boards as well.
@@Spinelli__ I think the answer is quite simple: in case of CUDIMMs the board manufacturer can stick to his standard layouts with standard DIMM slots. And also not much changes from the RAM manufacturers.
For CAMM however you need to change both mainboard and RAM.
@@Spinelli__ I think its more meant for laptops because its low power
the thumb would confuse someone
"285k dollars? what is that censored book?"
AMD X870E boards support CUDIMM.. lets see what happens when pairing 9800x3d with CUDIMM sticks
I would guess that it won't help much, because the whole point of large cache is to avoid going to the RAM. Also 7800x3d didn't scale much with RAM.
@@p4n0rz They will only work in bypass mode. CUDIMMs are useless on AM5. Also, are you really going to spend $350+ on the RAM alone?
You would not see a real difference in real world performance. Zen 5 did not improve the memory controller from Zen 4. Even Alder Lake has a better CPU controller than Zen 5.
@@ZackSNetwork False. Zen 4 can easily reach 7800MT/s. No Alder Lake CPU can run that.
Nothing will happen. The AMD CPUs can't run those high RAM speeds without resorting to gear 4 on the IMC, which would actually harm performance.
Arrow Lake can do 8800 MT/S in gear 2 so it can actually take advantage of the faster ram.
I'd like to know what percentage of PC builders are doing it for the performance stats rather than because they actually need to run lots of high computation applications. i.e. it's more like building a dragster.
Intel fanboi logic: Intel wins at gaming, gaming matters. AMD wins at gaming, gaming doesn't matter, it's all about productivity. 🤣😂
Good thing those people only exist in your mind.
@@Felalegood thing you don't read any comments on UA-cam videos or sites like reddit
@@Riyozsu Reddit is dead
It's funny, if you go back exactly 5 years ago you'd find AMD fans saying that a 9900k was a bad value when you could get a 3900x with more cores.
Source: I was one of them and have a 3900x.
I think it's similar on both sides.
CK dimms are not so much expensive how i expected.
RGB --- That Gskill F5-8200C4052G24GX2-TZ5CRK Modell cost 291€ (48GB) | Works on MSI MEG ACE Z890
RGB --- That Gskill F5-8400C4052G24GX2-TZ5CRK Modell cost 350€ (48GB) | Works on MSI MEG ACE Z890
Non RGB Versions are 40€ less, so u get one for 249€, some daily deals for 220€. That okay for brand new product.
That Z-Royal Series are more expensive
they did say this architecture had alot of room to grow
Intel seems to be going for "a pile a manure is room to grow".
@@familhagaudir8561 Well yeah, you can't grow a strong plant without fertilizer 😂😂
They said that about alder lake. And it grew up to be a space heater. That was such a disappointment.
@PandaP- too much fertilizer in the beginning kills the seedlings 🌱
This disparity between some instances the 8400 losing or far beating it with stock can only be described by instability leading to corrupting performance. Nowadays you lose ALOT of performance way way before having any "visible" instabilities due to the design of ddr5.
Remember, remember, the 7th of November, the 9800x3D being dropped. I see no reason why the 285k should ever be bought
unless you use your pc more than just gaming
@@StrixWar no, unless you do something that specifically requires extreme performance other than gaming. most other uses could be covered perfectly well by a ryzen 5 1600, let alone a 7800x3d, including what most people would consider productivity tasks. my office runs on q6600s to this day and that has never been a bottleneck. the use for these extremely threaded cpus is even more niche than gaming. youtubers tend to put a lot of emphasis on this shit because they're youtubers --- video editing is one of the big things about it and these cpus help with that.
@ I video edit and it made a noticeable difference going from a i5-8400 to R9 3900X to a 12900K
Which CPU is frozen in the background running Cinebench? Raptor Lake I presume?
Yea, well... pretty expensive extra costs, compared to some integrated X3D cache.
Boo Hoo, Intel is looking better
@@freedomearthmoon1 For the much higher adoption cost it needs to look better.
@@freedomearthmoon1hardly, if at all. And for the money it's a joke.
Jay even says at the end of the video to just wait for the new X3D chips.
@@TheSkiddywinks Not everyone wants Ryzen though is the issue. Some people stay with Intel either because they're fanboys or they like using stuff like Quicksync
@@TheSkiddywinks And he said intel chips need an update, just like AMD
Cool, you managed to reach almost parity with a 7800X3D in gaming with a new generation CPU. So impressive Intel! 🙄
Reminds me of the days of the Pentium 4 Prescott where Intel were running hot, while AMD were offering lower cost, more efficient processors. They then changed their architecture and strategy with the Core2 range.
15:46 "this all looks good for Intel" is not how I would describe it. You are comparing the pricey 285K + pricey CUDIMM + overclock with a (currently definitely overpriced) 7800X3D with standard RAM and no overclock and the Intel system is still below the AMD system most of the times.
It does look pretty amazing unless you're strictly gaming, 285K+CUDIMM manages to top most of the productivity charts. Of course it does not make sense for people who intend to only play games (in which case x3D is an obvious choice), but vice versa 7800x3D is absolute trash for productivity, in most of those charts it is dead bottom. Different designs, different uses.
@@Aquaquake Fully agreeing with your analysis. But I would assume most on the people do not use their private computers for content creation or "productivity", in those cases all those P- and E-cores are idling, e.g. simply are a waste of sand and energy.
As always: you should first be sure about your personal requirements and then decide which part to buy.
At least if you are not one of those who always need to have the biggest numbers and fastest parts.
And that's why I decided after Threadripper 1920X and Ryzen 5900X (which I bought also for some private FEA) to now change to 9800X3D. Which probably still is overkill for my use.
You forgot one important thing: a motherboard that can run the high frequency of the CUDIMM reliably.
I'll tell you more Jay. I am having this approach also with older 13/14th generation.The limit is that E cores don't have the massive gap they have now.
If you overclock P cores you see in cinebench p core 96C° while E core top at 65 C°.
So with the 13900k I did 54 all core P cores, 47 E cores, HT disable. Ring 4500.
On cinebench you'll see a worse score (but with max 270w). But with games................. Try ;)
I still can’t believe Intel released these processors in this state
Its for the OEM/System builders/Laptop Makers mainly, They'll still make the sales they need.
I can intel and AMD have been making some terrible decisions the last 4 yrs
Yes. This is a rushed launch again. After the 13th/14th gen fiasco, Intel should have known better. Arrow Lake needed some time to iron out its kinks because it's doing a lot of interesting things, but the results matter at the eod.
@@Kapono5150 This isn't entirely an Intel issue. It's also a Windows issue. We've seen it with Ryzen 9000 already, the odds aren't low that because this is a completely new architecture with a new way of doing things, that Windows might not be using it correctly
There's various benchmarks that Wendel runs on Linux that seem to back this up. Linux compute performance definitely seems better than on Windows
Recommended next video: "Stop wasting money on fast Ram!!" - JayzTwoCents - 8 Months ago. 😁
stock 7800x3d hits higher than 16700 in r23, consistently over 18000. IDK about the other tests i don't waste time with them. You have a bad sample. I can undervolt and limit wattage to 75w and still hit over 18000 every time.
same something isnt right.... i get 19000
I get 42000 so there. Yours cpu is slow
41k 14900ks here... Low for a ks but mines stable
his tests always wrong,not fair he disable RT raytrace and dont enable PBO on all amd,,,if he enable Raytrace and and PBO and use 6400 ram on all (use and gpu) intels 50% crash down
@@GeeDrummer its about right. Ecores make the biggest diff in cb23 handsdown. They are cinebench cores
Love it how that 2k motherboard screen is half blocked by a $20 exhaust fan, lol.
This really should be compared with 14900K with fast RAM because otherwise it's just making the 285K look better than it should. It's kind've been known for a long time that tweaking Raptor Lake can get you better performance than a 7800X3D because AM5 is limited in its RAM speeds.
14900K doesn't support more than like 5600 unless cooled by liquid nitrogen. It's motherboards also do not support more than in the ballpark of 6400. You wouldn't see a difference because the speeds just aren't supported. It was like using 3200 in a i7 7700 build that was restricted to 1600-1800. It ran at 1600-1800
@@goldenhate6649 You are incorrect. The 14900K doesn't need LN2 to run ram faster than 5600. With an AIO or custom loop the 14900K can run 8000-8600. I haven't seen a 14900K that can't run 7200.
@@goldenhate6649 say what? My 13900KS is using 2x24GB 8000 G.Skill right now, air cooled on the RAM. Where are you getting this information from?
AM5 can just as easily handle 8000MHz RAM as Raptor lake. There just isn't much point going past 6400MHz in a 1:1 ratio. That's a good thing.
Also, how about you show us your sources proving that a tweaked 14900K outperforms a 7800X3D in gaming instead of talking out your butt?
@@Nayah9 You don’t need his proof, look at Jayz Cyberpunk benchmark. If you overclock that 13900K it eeks out a win, combine it with DDR5 8000. It’s possible, whether or not it’s worth it is a different question that you weren’t asking in the first place.
Intel provides so many knobs and levers to play with to OC that it’s fun to tweak. It’s like driving stick shift, not a practicality thing but enthusiast activity. Arrowlake has way more of these controls over Raptorlake as shown in the Skatterbencher videos.
CUDIMM isn't just about faster clocks, it's about way more stability. Right now to hit high mem clocks you need to jack up the SOC/IMC/PHY voltage which cause's all sorts of issues. With CUDIMM's you can get those higher memory speeds without needing to increase SOC/IMC/PHY voltage to drive the memory.
cudimms work best because of arrows new memory controller
Just need some tinkering on intels side. It is amazing to see people criticizing tech, but nobody criticizing the tech isnt able to engineer nothing. Its a reason you are a consumer, and they are the creators
💯
This was always going to be Intel's Ryzen 1000, the difference is Intel wasn't desperate and needed a Hail Mary from going under so could have worked on it longer.
This is the tick cycle as is Zen 5 and I expect the Tock cycle of this and Zen 5 to improve a lot
Some very backwards logic but dont be surprised when consumers , who continue to be the reason you exist, have an opinion.
@ai-aniverse that's the point. If you don't like the product don't buy it. But at the end of the day, the creators can create whatever they feel like creating, and the consumers can buy what they want and at whatever price
@@ai-aniverse bruh u gettin rich from making ai naked women. I envy you 🫡
its huge step for this socket....nova lake is around the corner for stronger e-cores ....since INTEL competing with AMD with SMT off....without SMT on AMD ...it losses to Intel and amd has nowhere to go after zen5....power usage is gonna upwards in next update with zen6...300w
Impressed with the CUDIMM improvement, but way too expensive, I expect more improvements with Driver and BIOS update ove the next month
The gaming improvements all seem logical to me.
The whole reason the x3d chips are awesome is because of the low latency and high bandwidth of the l3 cache.
If you can improve main memory bandwidth and lower latency to access the memory, it will close that gap a bunch.
Can you disable the E cores in Arrow lake easily?
Would disabling the E cores and then overclocking the P cores while overclocking the RAM and cache yield competitive gaming performance? Probably not worth doing for normal usage cases, but, I'm curious. In any case, Im keeping my 7800x3d for now, and likely for the next several years unless 9800x3d surprises me. (EDIT: i should have finished the video before posting. I like the rhyming at 15:30 and at the end, Hahaha)
I already told at the release just to wait before burning Intel to the ground. There are a lot of good things about the new architecture. It can consume really low power if DLVR is disabled an lower voltage is selected. It is crazy effecient then. Also microcode updates and Windows updates together with game optimazations will put this cpu very high within 6 months.
Only thing that is not good atm is the price of the platform. That is CPU, Ram and motherboard are very expensive. Give it some time and this cpu will be much better and a real competitor for the top.
yup
I've been saying this on reddit for weeks, everybody overlooking CUDIMM. I was expecting to gain back the 5-15 fps in games when compared to 14900K tho, not a massive +50 fps boost.
exactly. and the synchronous issue
So - I wonder if using similar memory under AMD would also see a similar uplift.
AMD memory controllers want 6000 MT/s RAM, more MT/s is slower on AMD right now
@@jackthatmonkey8994 false. 8000 > 7800 >= 6400 > 6200 > 6000.
AM5 doesn't support CUDIMM.
@@Patrick73787 actually, there's upcoming support under the X870 chipset. It will be limited to the 8000/9000 series CPU's though
This Jay grinding to try and make this new gen look better is crazy..
Here's the problem: Intel Core Ultra 9 275K, $650, G.SKILL Trident Z5 CK 48GB (2 x 24GB) CUDIMM DDR5 9600MT/S $399, ASUS ROG STRIX Z890-A GAMING WIFI Motherboard $399 for a total amount of $1,448 USD, prices from Microcenter, Newegg, Amazon. While AMD with the Ryzen 7 9800X3D with any motherboard, past or recent gen with a moderate DDR5 kit ranging from 6000MT/s to 8000MT/s will decimate the intel combo for a lower total build price too.
Your comparison is horrible because it’s apples to oranges. An 8 core gaming CPU with entry level motherboard and DDR5 ram. To a 24 core Intel CPU with high end motherboard and ram. Obviously it will cost far more because it does beyond more better. Tuning the 285K will allow it to get even closer to a 9800x3d in gaming while destroying it in everything else.
Your comparison is horrible because it’s apples to oranges. An 8 core gaming CPU with entry level motherboard and DDR5 ram. To a 24 core Intel CPU with high end motherboard and ram. Obviously it will cost far more because it does beyond more better. Tuning the 285K will allow it to get even closer to a 9800x3d in gaming while destroying it in everything else. Also people that buy either of the CPU’s are not goi g to be gaming in 1080p anyway. So games with be beyond more GPU bound regardless. Viewers have to differentiate the difference between CPU testing and real work performance difference.
The CUDIMMs should come down in price fairly quickly. The only difference is a little clock driver chip, so they shouldn't be much more expensive to manufacture. They just don't really exist on the market yet. Motherboards are more egregious, but AM5 was also ridiculous on launch, so that seems to be more on motherboard manufacturers scalping the high end.
@@ryanspencer6778 they will not come down in price quickly, and it is not about the chip. Low demand means high production prices, depending on the demand it could take a year to a couple years for prices to start going down
Not a very honest comparison. There is no point in using the 9600 CUDIMM. Intel says 8000 is the sweet spot for ARL.
Also, the 285K with fast memory can do a decent job at gaming but the 9800X3D is far far weaker than the 285K at workstation and productivity tasks. Apples to oranges comparison.