Removing hyperthreading from the design is very different from turning hyperthreading off. When you remove it from the design, it allows you to put everything else closer together and frees up that die space.
Die space they're wasting with AI components.
@quintrapnell3605 Of course, why wouldn't they?
@quintrapnell3605 Facts, this is the reason they are getting rid of it: to make way for TPM. It's kind of like the self-sabotage of the auto industry to make EVs look more reliable. When will people realize they don't care about what the consumer wants; it's about the plan for globalization. Aka "you will own nothing and be happy."
@quintrapnell3605 Exactly. I would rather have more real cores, or more Xe engines, than any NPU. But they will figure it out soon enough.
To push iGPUs much further, support for higher bandwidth memory will be needed.
I would love to see this test on a 12600K or 13600K, to see if the lower core count changes anything. Many more people probably have i5s than i7s anyhow.
Agreed. It's possible that games that use many threads would still benefit from hyperthreading on an i5, while they don't on an i9 with its extra E-cores.
Hyperthreading hasn't really been necessary since we hit 8 cores, and has given ever-diminishing returns as core counts continued to rise. The 9700K, which had hyperthreading fused off, was really no slower than the 9900K. Any difference between them can be 100% chalked up to their clock speed difference.
But keep in mind that _removing_ HT altogether will be more effective than just disabling it, because you're removing a chunk of the pipeline, effectively shortening it, which is _always_ beneficial.
Honestly I hope Arrow Lake ends up being a home run for Intel. Not that IGAF about Intel, but they need a win after the beating the company and its stock have taken the last couple of years. Anyone that celebrates tens of thousands of people getting laid off needs to seriously check their fanboyism, and AMD is once again proving they're perfectly happy to raise prices the second they have no competition. Just look at Threadripper pricing.
"Hyperthreading hasn't really been necessary" i wouldn't be sure of that.
@nempk1817 There's a difference between "necessary" and "beneficial". As much as I love all my E-cores when doing compression and compile workloads, if I could get 15% better single-core performance in stubborn apps like emulators and sim games I'd happily sacrifice a few percent in multi. E-cores have made hyperthreading obsolete. Unless of course you're an AMD user. Then yeah, you need all the HT you can get, because you're slower core for core.
@nichronos That's inconsequential when he's got the same results we've seen for more than a few years. Moving to 1080p low would just make the results _more_ amplified, not less. There's a _reason_ many people suggest disabling HT. It rarely slows you down, but in certain games it can be beneficial.
mb
@nichronos Proof or it didn't happen? 😅
Keep in mind the way P- and E-cores communicate before Arrow Lake is also very inefficient. And the E-cores in Arrow Lake are up to 68% faster per clock, and clocked higher too.
HT being removed supposedly was/is a step towards a more flexible method of splitting tasks between cores (potentially even splitting P cores all the way down to 4 threads each). Supposedly we’re left with simply no HT on 15th gen because this method wouldn’t be ready in time and it wasn’t worth putting HT back in. Generally this has been referred to as “rentable units” the past few years but there’s a lot of pure speculation and we don’t have any details beyond rumors and a couple patents that have been found.
Could also be breadcrumbs that exist entirely to throw off competition, or something that hit a brick wall and was ultimately cancelled some time ago.
Rentable units was going to be the uarch of Royal Core/Beast Lake but they had to go and piss off Jim Keller so now AMD be eating their lunch and dinner.
E-cores are the new hyperthreading in Intel's case. I guessed this move when E-cores came out.
And Arrow Lake/Lunar Lake E-cores have the same IPC as 14th-gen P-cores, or slightly higher. The E-cores before Arrow Lake were super slow.
People forget that when HT is turned off you get extra room to overclock your cores by 200 MHz. Got mine from 5.6 to 5.8 GHz on a 14600KF.
Good stuff, but did it give you a measurable increase in FPS? If not, it's OK, because some people overclock just for the fun of it.
Does it beat a 7800X3D in gaming or a 14700K in productivity?
Danny you're a champ bro. I remember asking for this exact video: comparing E-cores vs hyperthreading directly, and with them both off. Fantastic video, and the only one showing these configurations.
I already shared this on the forums because it's a good test that deserves exposure.
So Hyper-Threading is bad and more threads are good?
Maybe 16 P-cores with no HT would be best, instead of 8P/16T + 16E?
Surely not with Arrow Lake; those E-cores will be fast and very space-efficient.
If it were bad they would remove it from the Xeons too; they are only removing it from the consumer parts.
Everybody must disable the E-cores. Not sure about SMT, because the general consensus appears to be that you need it. E-cores need to be shut down because they put stress on the ring bus, which leads to voltage spikes and latency. E-cores are useless for most people, unless maybe you are streaming.
Danny putting in work with these tests. Appreciate your attention to detail brother
Fr, love to see it
Thanks for your hard work and for putting the results together. It definitely gives us an idea of why Intel went with this strategy. With the scheduling optimizations they spoke about, it should be even better across the board. That said, E-cores make sense for laptop CPUs like Lunar Lake for maximizing battery life, but for desktop CPUs I wish Intel had focused on optimized P-core-only configs.
Really interesting results, looks like the 8P/8T + 16E variant shouldn't have any problems in multithreaded games at least.
It would be interesting to see how a simulated arrow lake "i5" would fare, so taking the 14900K and reducing it to 6P/6T + 8 E cores enabled, or even 4 E cores for the cheaper variants. I know it's a lot of extra work to run the benchmarks so I would understand if you feel like not doing it.
Using a 14900K to simulate an Ultra 5 245K wouldn't be a fair comparison as the 14900K has more L3 cache than the 14600K, which is the one he should use instead as it will have similar L3 cache size to the 245K.
HT was a great solution in the dual- and quad-core times more than a decade ago. Now it's getting redundant for 6-core and especially 8+ core designs.
thanks for this amazing test! must have been an insane amount of work.
Arrow Lake is going to be very interesting... there are strange leaked benchmarks that are showing the new Skymont E-cores have the same IPC as the P-cores... that'll be very impressive if it turns out correct. Not long to go before the official announcement (rumoured 10th of October).
Everybody that pays for "per-core" Microsoft SQL Server licensing, which counts hyperthreaded logical cores as if they were full-capability regular cores, is greatly relieved. How soon can they get it to market? Hyperthreading made sense in the dual-core days, giving a bit more oomph toward 4-core performance without paying for 4 cores, but these days the 24+ hyperthreaded partial cores you find on a modern server CPU are getting to out-of-control levels of ridiculous, especially when Microsoft charges you $20,000 a year to be able to use them.
Now I'm curious how my 12400 will do.
I may have missed it in the video, but I'll need to check how to turn off HT. Afaik the BIOS should have that option.
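If you want to double-check that the BIOS toggle actually took effect, comparing logical vs physical core counts does it. A minimal sketch in Python, assuming the third-party psutil package is installed (pip install psutil); on hybrid Intel chips the ratio won't be exactly 2:1 since E-cores have no HT:

import psutil

physical = psutil.cpu_count(logical=False)  # physical cores
logical = psutil.cpu_count(logical=True)    # logical processors (threads)
print(f"physical cores: {physical}, logical processors: {logical}")
if logical == physical:
    print("HT/SMT looks off: 1 thread per core.")
else:
    print("HT/SMT looks on: some cores expose 2 threads.")

On a 12400 (6 cores, no E-cores) you'd expect 6/12 with HT on and 6/6 with it off.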
This really shows why Intel went with a hybrid design. If Intel released a 16 P-core beast, it would destroy AMD in both productivity and gaming, but the power usage would be insane, especially if you turned on hyperthreading. Great video!
Nice hard work. Tried it on my 12900K with an undervolt, and now disabling HT works even better: less stuttering and less power draw. Power consumption is much lower than with HT enabled, only drawing a bit above 100W at full load, and that's how it should be. Very nice tweak for gaming. I agree with your point of view that the CPU should work right out of the box without the user needing to do a lot of tuning.
This inspired me to look up videos for disabling SMT on AMD CPUs (I have a 5800x3D). The results from Tech Yes City look promising, although he tested a much smaller sample of games. I'm tempted to give this a try 😉
Also with HT & E-cores off you'll achieve the best overclocking results. So difference between the setups could be even higher.
Really good video Danny. Loving the deep dives.
This proves how everything needs software control to park cores on a per-game basis. Intel Dynamic Tuning or APO comes to mind, but is it even getting support anymore?
SMT is ancient tech. While it still has its uses on servers, modern smartphones have already ditched it,
and Apple Silicon doesn't support it either.
If Intel decides to kill off SMT, it's a step in the right direction, since consumer hardware is moving away from it.
SMT is still acceptable for workstations, but for consumer hardware it's no longer necessary.
Keeping SMT just because the tech community traditionally wants to see 2x the threads is foolish.
I hope AMD can follow the same path, like having Zen 5 + Zen 5c on the consumer platform; that way scheduling can be streamlined on consumer OSes.
SMT is still alive and well. Heck, Power10 features 8-way SMT, that's 8 threads per core.
But you are right that most consumer hardware doesn't benefit from it, especially if the underlying architecture isn't optimized for it (ARM doesn't have it, and it's suboptimal on AMD64 in terms of efficiency).
I examined this for years on my channel.
There is a bigger thing to consider, not just FPS, but frame-to-frame latency.
Removing HT is a big no-no for desktop use, which actually loves HT.
But in games, HT off is the best boost you can give yourself, through use of Process Lasso.
Intel is likely hitting the limit of the improvements they want, so they are doing something gamers have done for years; the issue is that desktop performance will suffer.
It is not an actual improvement, but it will improve stutters and frame-to-frame consistency in games. Whether it will actually beat the 14900K in desktop (non-gaming) use is the question.
I would like to see the temperatures across the test cases, so I can trade off heat vs. performance.
Good freaking video Danny, and I haven't even watched past the intro. I myself keep HT off, as it allows a higher all-core OC (59x on my 14900K) and more headroom for a RAM OC (8200).
Thanks for the video; I've learned about core parking and can now disable core parking for all my cores! I'm using a 13700K and realized that by unparking all my cores, a few of my games run faster with hyperthreading both on and off.
You may want to look at core parking affecting hyperthreading as well. Great videos! Cheers!
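For anyone who would rather script the unparking than click through a tool: a minimal sketch, assuming Windows, an elevated prompt, and Python. CPMINCORES is the documented powercfg alias for the core parking "minimum cores" setting on most Windows builds, but verify what your build exposes with powercfg /q first:

import subprocess

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Keep 100% of cores unparked on both AC and DC power, then re-apply the scheme.
run(["powercfg", "/setacvalueindex", "scheme_current", "sub_processor", "CPMINCORES", "100"])
run(["powercfg", "/setdcvalueindex", "scheme_current", "sub_processor", "CPMINCORES", "100"])
run(["powercfg", "/setactive", "scheme_current"])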
Do you use Process Lasso? I've tested it a lot and it doesn't turn off HT, it just ignores the second logical core by parking it. I think it only really parks it in the C8 CPU idle state.
I like your proactive approach, you are my hero!!
Nice video with good information. Wish you had tracked power usage as well.
Amazing work, but I would have included Windows 10 in the benchmarks. I know it would have doubled the time and effort, but I still think Windows 10 is king for gaming on both Intel and AMD.
Thanks for the hard work!
Very well made; I'll be watching your channel for some time.
Yeah, I noticed that years ago, but I can't be bothered to restart the PC and go into the BIOS every time I switch games.
Great video Danny. Pretty interesting test. I too am looking forward to seeing how the new Intel CPUs will perform on a new process node. If the rumors of the Arrow Lake E-cores being as powerful as the Raptor Lake P-cores are true, that's a whole lot of performance on tap. We'll see how it goes though.
Results are invalid for E-cores disabled / hyperthreading enabled, because Windows 11 doesn't know how to schedule the threads with the E-cores disabled. Re-run the test on Windows 10 and you'll see that configuration coming out on top. Also, when you disable the E-cores the ring ratio automatically goes to 49, so you should raise the ring as well!
Make sure to install Windows 10 with E-cores off as well.
mb
@mikebruzzone9570 All good, king.
Nice video! You should test the stock config vs HT off / E-cores on in multitasking apps, bro.
What about heat? I feel this would've been a really good benchmark to see which setup runs the best thermals at the cost of performance, and so on.
Yeah, but for editing, photo and video, CAD, etc., those threads do a lot. I don't just game.
Finally some unique content; good work @danny, love this analysis.
What would the results be with P-cores turned off and just running on the E-cores?
You can't; you need at least one P-core on. I've run many tests on my 13900HX, and running tasks just on the E-cores (via Process Lasso) generates less heat.
The MW2 bench was probably so terrible because the renderer worker count would be wrong.
Thank you for this video! can you please add MSFS 2020?
Is this video worthless to me? I normally game while using Discord, OBS, and MSI Afterburner in the background; sometimes I also stream while gaming. I'm pretty sure my performance would TANK if I turned off HT or all the E-cores on my 13700K. This video is great for people that ONLY play games and nothing else.
Gaming, sure; everything else, not so much. Intel is desperate to save power and win the gaming crown. Nobody knows if they will.
Isn't it more about being datacenter-friendly? (This matches getting rid of HT for "security reasons", as that's a problem on shared hardware.) Gamers are not particularly concerned about power, are they? (Maybe in the EU, where we are told that "switching off the rig saves money".)
Gaming crown is a win they desperately need
lol they use the most power by a huge margin
My i7-13700K locked at 5.6 GHz on 1.375 V consumes around 120 watts during Warzone gaming, which is roughly as much as a Ryzen 7700X. Given that AMD is at 4nm and Intel at 10nm, I don't know what to call it. Shame on Intel for not putting correct boosting and voltage in the BIOS, or shame on AMD for only performing similar to Intel on a better node. Or shame on both. As for "nobody knows how Intel will fix the power issue": they're getting a node shrink next gen, which means lower power consumption, and they're removing hyperthreading, which also means lower power consumption. I can see the way. Maybe in your opinion those two factors don't matter. Your choice. One question: did you draw that "way worse Intel power consumption" conclusion by testing both platforms yourself or by watching YouTube videos?
@gametestinglab8861 What on earth are you talking about?
The 7700X is on 5nm and the 13700K is on 7nm, which Intel says is equivalent to 4nm, hence the Intel 4 process node.
The 7700X is not using 120W in Call of Duty; it's using less than 80W.
It's also irrelevant, because the 7800X3D will hammer your 13700K in nearly every game while using less than 65W.
Good test, but the real profit from disabled HT is not having it in the chip at all. That's why I think Arrow Lake will be great for gaming.
I guess you're also using the Extreme profile, or whatever the unlimited power option in the BIOS is called?
Good applications report. It would seem content providers did optimize for E-cores, but whether they compiled for parallel SIMD operations you'd have to tell us, Danny: that is, the E-cores doing the matrix math and vectors for some aspect of playback that can likely be seen in gameplay in a specific situation, drawing and shading that is not a refresh of a scene but a change in the pixels to display some specific aspect within the E-cores' capability. Or maybe it's audio playback, who knows, but they do something. What is it? mb
The report definitively shows the simple (effective?) compile for 8 P-cores. mb
Maybe they're just like a slug power plant adding to the main engine? I don't know, but I am curious; maybe you should query a few developers as a report topic. mb
Disabling HT is an interesting power optimization technique few others along my audit have mentioned, like Framechasers and Scatter Bencher, who I think got the optimizations down, but I don't recall them mentioning 'just disable HT'. mb
It would be even better if they completely abandoned their current P and E design and went full-on P-cores without F'ing around.
But I suppose that's what the new 5th and 6th gen Xeon HEDT is for.
HT never matters for gaming anyway. The 7800X3D proves that by beating the 14900K with any combination; just give it a ton more cache. Intel wouldn't focus on disabling HT for gaming alone. Probably they had to reduce TDP and thermals, and more E-cores will inflate their benchmark scores even more. But multicore performance will take a huge hit.
Not exactly true; what sort of impact HT has depends on the game engine. Some games will really drop the ball on 1% & 0.1% lows if you disable HT, others will run slightly better with HT off 😅
The 7800X3D loses in a lot of games to a stock 14900K, let alone a tuned 14900K with 8000 RAM.
Yeah.
Keeping E-cores on and hyperthreading off seems like the best choice.
I'm missing power draw in the data.
Me too, missing power draw.
Do the same with 3D rendering, and see how After Effects or Premiere Pro perform; you will see why HT is important.
Adobe definitely compiled to add E-cores. mb
ARL needs to beat X3D in gaming!
Lots of work for this one! Thanks!
Love this video. Please, next time can you add THE FINALS? Thanks man :)
Crazy, but is it possible to disable all P-cores and only leave E-cores? What can they do all alone? Obviously for a test only.
Lol yes disable the actual CPU power
You forgot to test power usage with each configuration for different games
Isn't Intel only disabling SMT on the consumer parts, while the Xeons will still use/support it? And this hybrid arch is only used on consumer parts too, while on servers they are all E-cores or all P-cores?
While the SMT implementations differ between Intel and AMD, just disabling it at the software level will not change much. So those tests only tell us which games/engines benefit with or without it, without even considering whether the impact comes from the way the OS is using it, since most games are not programmed to distinguish SMT logical cores from real ones.
The only real way to test this would be to have the same core with and without the SMT implementation at the hardware level.
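On the software-level point: Linux at least lets you hotplug the SMT sibling threads at runtime, which is a step closer to the metal than a scheduler hint, even though it still doesn't remove the HT circuitry from the core. A minimal sketch, assuming Linux with the kernel's SMT control files and root privileges:

from pathlib import Path

control = Path("/sys/devices/system/cpu/smt/control")
active = Path("/sys/devices/system/cpu/smt/active")

print("SMT active:", active.read_text().strip())  # "1" while sibling threads are online
control.write_text("off")                          # take every HT/SMT sibling offline
print("SMT active:", active.read_text().strip())  # now "0"; write "on" to undo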
Can't wait to upgrade this year from my 11900k
Should have tested Battlefield and PUBG. I'd think you'd see about 40% less FPS.
Any news on Bartlett Lake for LGA1700? I have a 12600K and I'll never buy a hybrid Intel CPU ever again. I had to disable the E-cores because of the stutterfest on my 4070 Super.
Huh? I don't have any issues, but I'm on a 12900KS. I had a 5800X before; on that I had a lot of stutters, on the i9 I've had only one ¯\(°_o)/¯ 6000 MT/s CL30.
@theanglerfish Use Superposition Benchmark at 4K and you might see them.
Go Win 11, it's better at working with E-cores.
As long as they improve all-core workloads by like 30% over the 14900K and use less than 200W, they will be fine. If anyone thinks that's unrealistic, it's basically what AMD did going from Zen 3 to Zen 4.
Nope. Zen 4 has 18% better perf. on average vs Zen 3.
Instead of all this FAFO, Intel should just give us a non-hybrid CPU. Nobody wants to test this shit. I'll take a monolithic 12 P-core CPU with no hyperthreading any day over having E-cores.
Is the MW3 benchmark the same as the MW2 one shown?
We shall see. The gold standard is the 9700X; if Intel can beat it while not consuming more than 88W (PPT) from the socket, then we're talking... I HIGHLY doubt it will happen though.
Intel: getting rid of hyperthreading saves power... lol. On AMD, hyperthreading uses like 0.25% more power in exchange for 30%+ gains.
30%+ gains is nonsense. The *theoretical* limit is about 25%, so in practice it's 10% to maybe 20% at best.
you've gotta be braindead to think it uses that little power
Not tested...
But I can vouch that if you disable HT and E-cores you can typically OC the P-cores a little higher. Direct die with liquid metal and an Iceman cooler water block.
12700KF, HT enabled, E-cores disabled: I can get 5.2 GHz all-core.
12700KF, HT disabled, E-cores disabled: I can get 5.4 GHz all-core.
A $175-200 CPU doing nearly as well as a 14900K in games. Maxes out both the 4080 Super and the 7900 XTX that I own.
Compare with a 9700K please.
I think the HT story is very old. Ever since HT has existed, it has depended on two factors: what compiler is used, and whether the task scheduler can deal with the code base.
Most programs are not carefully optimized and act strange.
Linux is better on this point: you can easily switch the scheduler to handle HT or E-cores better, without code optimization.
I think eventually they are heading toward a chip that is all simple, small E-cores in massive numbers, with NO P-cores.
So they will literally have individual compute units that come in large quantity but with simpler instructions and easy on power, and each can act as multiple threads combined together or as single individual threads. Flexibility.
But this approach will only benefit a very few generations, since silicon semiconductor tech itself has reached the peak potential of its electrical design. They will not be able to keep selling things that eventually have no more gains.
You know you just described Threadripper
@yonghominale8884 Threadripper is all big cores (P-cores in Intel terms) that suck a tremendous amount of voltage. Intel's E-core approach is similar to the GPU design approach: a lot of simple LOW-VOLTAGE cores that work in SCALABLE groups.
AMD Threadrippers have all P-cores working the way they normally would. It is NOT scalable; for example, they won't form groups to run specified threads. Each P-core takes its tasks and runs its own threads, unlike the new Intel approach, where 2 or more E-cores can "group" and act as a single compute unit doing multi-threading, or break apart into small single individual threads for short, quick compute. SCALABLE.
Great review Danny 🤩🤯🥰💪 and very informative!
I'd love to see you do a comparison with AMD Ryzen... the higher-end variants, with both non-3D and 3D V-Cache CPUs 🫡🙂↕👍
Aren't E-cores supposed to be used by the operating system and not by the game that's currently running?
Why isn't Diablo 4 tested? It's one of the most popular games out right now.
The decision to remove HT was made because of a tradeoff: the higher performance of the E-cores made it possible to allocate the area saved by removing HT to E-cores without compromising area efficiency, and the primary purpose is power savings.
The improvement in gaming from HT-off only occurs when two heavily loaded threads cohabit on a single core. There are only a limited number of titles where that happens.
Since the 12th generation, Intel Core has been optimized by Thread Director to use as little HT as possible, so there shouldn't be much difference between Arrow Lake and Alder Lake (that's what made HT's low utility one of the reasons for the decision to remove it, which was explained in the Computex 2024 keynote). There are still some games where disabling HT/SMT can improve performance, but most cases only happen before the 11th generation and on some Ryzen APUs.
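If you want to see where that cohabitation can even happen on your own machine, the kernel exposes which logical CPUs are HT siblings of the same physical core. A minimal sketch, assuming Linux sysfs paths:

from pathlib import Path

groups = set()
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    topo = cpu / "topology" / "thread_siblings_list"
    if topo.exists():
        groups.add(topo.read_text().strip())  # e.g. "0,16" = two siblings on one core

for g in sorted(groups):
    print("logical CPUs sharing one core:", g)  # a single entry means no HT sibling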
Would be great to test it on Windows 10 also. Great video!! But I see a problem: it should not affect performance when you disable HT. The problem is that maybe your thermal solution is not enough, so the turbo boost drops or the CPU downclocks to meet the VID requirement; if you custom liquid cool/delid you should not have this problem. And WTF is that result in MWII??? I have both; my 14900K is OC'd to 5.9 GHz with 7600 MT/s memory, and that's not the result vs E-cores on and HT. You must be doing something wrong mate; there is no way in life that disabling E-cores and HT loses you that much FPS. Sorry for the bad English.
Nice work. I don't think I have seen any of the other tech sources work on this!
This is very useful data to look at when the new chips come out, where the die is smaller and can help others make informed decisions to upgrade or not.
These results are unrealistic. Not even 1% of people sport such a PC configuration, with extreme RAM, an RTX 4090 and so forth. Do a real-world test with something that's affordable and doesn't cost a kidney or two.
That's why I just bought a 5900X (Zen 3). Zen 4 and Zen 5 are too expensive outside the US. I have plenty of CPU power with this 12-core CPU. It's pointless to buy faster, pricier parts.
What a great video
Thanks man
The Best configuration is called the 7800X3D.
??? Not true at all. To fully realize the 7800X3D's potential, you need at least a 4080 or 4090 playing at 1080p. According to the Steam hardware survey, only 2% of people own those GPUs. It makes zero sense to spend $450 on a 7800X3D if 98% of the population has a worse GPU. I have a 13600K with a 4070 Ti. Thanks to comments like yours, I know someone that paired a 7800X3D with a 4060 Ti. People literally don't need more than a 7600X/12700K if they're using a GPU like a 4070 Ti or lower and playing at a resolution of 1440p or higher.
^We spent the same amount of money on our builds btw, and obviously my build curbstomps his PC by 20-30% in games. Literally zero sense to buy a 7800X3D.
@ProVishGaming That's 100% fake news and misinformation. Hardware Unboxed proved in one of their latest videos that a good CPU is 100% important for good performance even at 4K, especially at high to ultra settings, since the settings are what use more CPU resources than GPU. Also, a lot more CPU-heavy games are coming out, like Dragon's Dogma 2 and Black Myth: Wukong. So this info is made up by a bunch of ignorant people that don't know what the hell they are talking about and are all "just trust me bro", with no data nor proof to back it up. The best configuration for gaming is called 3D V-Cache; anything else is trash or misinformation.
@ProVishGaming Your build is made-up trash misinformation called "just trust me bro" 🤡🤡🤡. The 7800X3D trashes, demolishes and absolutely destroys the 13900K and 14900K with less power, fewer headaches and no degradation. Sit down, fake-news chump.
@hdz77 You are the one actually providing misinformation. You provided zero numbers, so let me provide some. Watching HUB's 7800X3D vs 13900K video at 1080p, 1440p, and 4K, the 7800X3D was 5% faster at 1080p, 4% faster at 1440p, and 1% faster at 4K, all while USING AN RTX 4090. With any worse GPU the results would have been identical. Please stop spewing misinformation like an idiot. You bring up fringe titles where AMD X3D gets a win, but games like the Star Wars title and Hogwarts also favor Intel heavily, with double-digit improvements. If you take the average like HUB did in the video, it's very minimal (5% at 1080p). Kindly never speak about issues you don't know again.
Imagine owning an Intel 13th and 14th gen........... lol. Silly bugger.
My 14900KF runs better with HT off and 8P + 8E (8 of the 16 E-cores disabled) = 16T. Games are optimized for consoles, in other words 16 threads. Better temps, better energy consumption...
7800X3D has Simultaneous Multithreading (AMD's version of Hyperthreading) always enabled and sips power in full gaming workloads.
Instead of having an 8-core Ultra 3 205 (4P+4E),
I would rather have 6 P-cores / 6T (no hyperthreading).
🗿 It feels alright I guess.
Gotta try 8P+8E with HT off.
Danny, if you disable E-cores, you can raise the ring ratio.
If you disable hyperthreading you get 100 MHz faster. Add in the actual data, bud.
What game is playing at 2:17?
You cannot make up for the loss, because in all cases hyper-threading will benefit performance. Arrow Lake with lower clock speeds and ditched HT is revealing; I don't expect much from this release except lower power draw, and we've seen how a lower-power design has worked out for AMD's Zen 5.
A die shrink enables more transistors in a smaller space, but thinner dielectric material means more heat from leakage and dictates using lower voltage. A new architecture may circumvent some of this, but a smaller die size won't, IMO.
At this point pushing more 1's and 0's takes power, and the binary system probably needs a change soon...✌
Ok so HT on and E-cores on is basically the best for both gaming and productivity? Got it 😂
I've always disabled hyperthreading since the 9900K. I was always wondering why my 9700K ran smoother and better; then I found out that HT was the culprit. With the hybrid cores I still disable HT, but I do keep the E-cores running and just use Process Lasso to remove the E-cores' affinity from my games.
Yup, that's the way to do it. Also, in Process Lasso you can turn HT off for individual applications without having to disable HT outright in the BIOS.
Another thing you can do is schedule drivers across different CPU groups or individual cores to spread the load and lower latency or audio crosstalk. Usually Windows likes to lump multiple drivers on the same cores, which drives latency up for those cores. These have to be changed manually, then the results tested and analyzed in LatencyMon.
That's a tip I picked up from the DAW/audio workstation forums (multiple).
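The same per-app idea can be scripted outside Process Lasso. A minimal sketch using Python's psutil, where the process name and the CPU index range are assumptions for a 13700K-style layout (8 P-cores with HT as logical CPUs 0-15), not something from the video:

import psutil

GAME_NAME = "game.exe"         # hypothetical process name, change to your game
P_CORE_CPUS = list(range(16))  # assumed: logical CPUs 0-15 are the P-core threads

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == GAME_NAME:
        proc.cpu_affinity(P_CORE_CPUS)  # keep the game off the E-cores
        print("pinned PID", proc.pid, "to", P_CORE_CPUS)

To emulate HT off for just that app, pass only the even indices (0, 2, ..., 14) so each P-core gets one thread.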
Windows should be reinstalled after each change; it is known that the OS will produce problems when switching the number of cores without a clean install.
Nobody is gonna buy them and Intel is looking at enough lawsuits for bankruptcy soooooooo RIP
Intel: gets rid of HT in the hardware design, saving transistor space and using it to make a better P-core.
YouTuber: makes a video on an older generation with HT / E-cores disabled, for viewers who claim disabling HT gives better performance.
Basically you don't need to make them a video, because they are not the same thing.
it should be at least 12+12 cores
Their E-cores are their new hardware hyperthreading.
Intel inside - Idiot outside 😅
Best config is only p cores
Why have hyperthreading with E-cores? E-cores are much more efficient than hyperthreading; no need to test that.
800 views / 100(+) Likes
Good ratio !
Even without it, Arrow Lake needs to be at least 10% better than AMD's X3D parts in terms of TDP, price, and efficiency for gamers to even consider it. After the disappointment of Intel’s 13th and 14th Gen, no one trusts Intel anymore. Desktop gaming for Intel is essentially dead, and its data center prospects are also diminishing. This leaves only the laptop market, which will soon face strong competition from Strix Halo. Intel will likely end up using TSMC for better stability and efficiency, marking the end of Intel's own foundries. All the government funding will have been wasted on poor decision-making by CEOs.
?
AWS and Intel recently announced a multimillion-dollar deal: Xeons on Intel 3 and lots of things on 18A, 14A and 14AE. They've invested millions in Ohio, where the new fab is being constructed.
Intel needs to get its shit together. Degrading CPUs?