🛑 AMD CAN'T Compete... 😱
- Published 5 May 2024
► framechasers.org/ - Discord and Support
► framechasers.org/consulting - Consulting
► framechasers.org/ready-to-ship/ - Max OC Bundles
► / chasersframe - Twitter
► kick.com/framechasers - Stream
► jufesdebecket?i... - Instagram
Join the community over on Kick and Discord to talk tech and play some games with fellow frame chasers, Saturdays 12 PM PST.
#DDR5 #Intel #Warzone - Games
Digging the new and improved Kratos Intel Inside look.
Possibly, the lack of HT also means that Intel doesn't need to bake all the Spectre, Meltdown, and other side-channel/SMT exploit mitigations into silicon, the same mitigations that ate into potential performance in previous gens.
This might be the best comment I’ve seen in a while, great idea
you can disable them and it doesn't help on new CPUs
@@erisium6988 you can disable software-based mitigations, but chip vendors have been baking in hardware-based ones in an attempt to reduce the overhead/performance loss from the software ones. Note: "reduce" does not mean there is no impact from them at all. So by letting go of HT they might just get some more performance out of new chips if they also got rid of those old crutches.
It is just a theory, but a very compelling one when you stop and ponder the reasons behind Intel redesigning the whole chip to get rid of HT as we know it.
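(Side note for anyone who wants to check which of those mitigations are actually active on their own box: on Linux the kernel reports them in sysfs. A minimal sketch, assuming a Linux system with a reasonably recent kernel, plain Python 3, no extra packages:)

# Minimal sketch: print the CPU vulnerability mitigations the Linux kernel reports.
# These are standard sysfs entries on modern kernels.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    status = entry.read_text().strip()   # e.g. "Mitigation: ..." or "Not affected"
    print(f"{entry.name:>28}: {status}")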
Booting out HT would make scheduling simpler and easier to do, especially if they can just add more E cores. High load -> P cores. Everything else -> E cores.
Jufes cut the hair off. He's turning to the Dark side. Lol
He is a jew, so he was born into the dark side.
he is almost the tech version of Andrew Tate, in a good way...
The jewboy keeps deleting comments like they were Palestinians.
@@kasper-jw2441 there are so many more cool bald people you could've chosen...
@@kasper-jw2441 Jufes is better than Andrew Tate.
Eh. I don’t care who wins. I just want better prices per fps.
I love the title lol. Almost no AMD discussion. Gotta get those AMD clicks.
The struggle
@@FrameChasers why the hell is Intel dropping hyperthreading? Just to lose mindshare to AMD?
@@daltonmariano8010 Intel has talked about partitioning threads... basically breaking threads into parts and picking which core each part goes on. In short, the idea is to actually make good use of all those E-cores, even for things like gaming.
Will it work? Dude, I have no clue. It's not a complicated concept, but execution? Yikes.
@@daltonmariano8010
They're only getting rid of it because IPC is so high now that it makes hyperthreading pointless.
AMD, however, is talking about super hyperthreading where you have 4+ threads per core.
So a 9950X has 16 cores, but with 4 threads per core you have 64 threads.
They are talking about going up to 8 threads per core, so up to 128 threads on a 16-core AMD CPU.
AMD is talking about having 24- and 32-core CPUs with 4, 8, or more threads per core.
Which is bringing their server tech to the home user.
@@zagan1 AMD talks too much
the hair cut is epic.
Higher clocks with no HT on, and lower voltages. I think we're at a point where HT isn't needed; seems like the E-cores do all the work now while the P-cores handle the big dog stuff.
I'm on 13900KF and only going to get a new CPU when it comes with
Clickbait ass title but actually sane analysis
How is 7.30 GHz, running on an EKG Delta TEC 2 cryocooling custom water loop with Hyperthreading disabled on the 14900KS, clickbait or anything but a sane goal? By the way, the 14900K and this CPU are beasts. If Intel can match AMD's power efficiency and match or exceed Raptor Lake clock speeds with Beast Lake, that platform will be a solid competitor for Zen 6 and Zen 7. Arrow Lake and Panther Lake are going up against Zen 5.
Regarding the dip, Intel's chiplet design should be MUCH lower latency than what AMD's been using for their Ryzen CPUs. Time will tell of course.
Uummm, AMD redesigned their interconnect configuration and the latency advantage might be in their favor this time. AMD is so far ahead in chiplet design, Intel might not ever catch up. Remember, they called them glued together 😂😂😂
@@Manicmick3069 Yes I'm well aware that AMD has been improving their chiplet interconnects. But I can only evaluate what's currently available, since they haven't released any CPUs with new designs yet.
Menacing new haircut… Stone Cold Steve Austin
Much love from Detroit btw
Looks better!
I think it was time, bro was hanging on for dear life trying to hide the bald spot with that Rugrats haircut
Having a higher FPS decreases input lag. Locking your FPS to the monitor refresh rate when you can get 600fps is a bad idea
Let's do the math. Assume you have a 360 Hz monitor and are playing a game with the dumbest possible game loop, where it reads input as the very first thing when rendering a frame. If you run the game at 720 fps, you could cut the input lag in half, since it starts rendering the frame you'll see halfway into the refresh cycle:
(1/360)/2 seconds, in milliseconds ≈ 1.4 ms
If you believe that's what'll make the difference, knock yourself out. But you better be real careful when buying your mouse and keyboard, because they could easily wipe out your latency gains.
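For anyone who wants to plug in their own monitor and fps numbers, here's a minimal sketch of that same back-of-envelope calculation (360 Hz and 720 fps are just the values from the comment above):

# Minimal sketch of the input-lag arithmetic above: running at 2x the refresh
# rate lets the displayed frame start rendering halfway into the refresh cycle,
# halving this particular slice of latency.
refresh_hz = 360
fps = 720

refresh_interval_ms = 1000 / refresh_hz        # ~2.78 ms per refresh
saving_ms = refresh_interval_ms / 2            # ~1.39 ms best-case saving at 2x fps

print(f"Refresh interval: {refresh_interval_ms:.2f} ms")
print(f"Best-case latency saving at {fps} fps: {saving_ms:.2f} ms")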
@@necuz anyone buying a gaming rig is not cheaping out on the mouse or keyboard. Regarding frames, going from an 8.0 ms latency to 3.0 ms is a night and day difference, and this can be achieved by just increasing the fps regardless of the monitor's max refresh rate.
If the game doesn’t have reflex then capping your fps can actually significantly lower input lag IF your gpu is being pushed too hard (it depends on the game but I’ll generalize and say 99-100% gpu usage) when uncapped. Whether it’s noticeable or not depends on the player and frame count. Battlenonsense has plenty of videos on this. Theoretically more fps should mean less input lag but sometimes the pipeline can become too full and give a massive latency penalty.
@@t7274 Interesting, yeah, I never considered this. I would assume input lag is relative to CPU and GPU response time, and overloading the system could make input lag worse
Stay with our i9 13th and 14th gens... crappy 0.1% lows are coming
HyperThreading is worth 5k in Cinebench. 12700K HT on = 25k, 12700K HT off = 20k; 14900K HT on = 40k, 14900K HT off = 35k. HT off typically allows an extra +200 MHz on single thread. 12th gen = 5.4/5.6, 13th gen = 5.8/6.0, 14th gen = 6.2/6.4, except they gimped 14th gen and it typically falls 200 MHz short at 6.0/6.2 unless you have a golden sample or are direct die with a chiller
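Taking those Cinebench numbers at face value, the relative uplift from HT is easy to sanity-check; a minimal sketch using only the scores quoted in the comment above (illustrative figures, not new test data):

# Minimal sketch: percentage uplift from HT using the Cinebench scores quoted above.
scores = {
    "12700K": {"ht_on": 25_000, "ht_off": 20_000},
    "14900K": {"ht_on": 40_000, "ht_off": 35_000},
}

for cpu, s in scores.items():
    uplift_pct = (s["ht_on"] - s["ht_off"]) / s["ht_off"] * 100
    print(f"{cpu}: HT adds {s['ht_on'] - s['ht_off']} pts (~{uplift_pct:.0f}%)")
# Prints roughly 25% for the 12700K and 14% for the 14900K.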
facts
I think the 285K would have worked out better if Intel had ditched a few clusters of E cores so they could fit in 2 more P cores
Keep binning 13-14900k, even 700k maybe. No amount of imc buff can save next gen
Shaving off that hair was the best decision you’ve ever made! You look way better now . Well done 👏🏻!
Looking fresh Jufes, welcome to the Discount Vin Diesel Club. Glad to have you join us, brother
If rumors are correct, Arrow Lake is supposed to be an efficiency focused architecture. The chiplets' smaller dies improve wafer yields although packaging yields will eat into the improved wafer yields. If Intel figured out the latency question, then their IO die would be better than AMD's IO. Hopefully, Intel figured out how to reduce the energy costs associated with chiplets. If Intel can use IPC to make up for the lower clock speeds, then Arrow Lake could be competitive against 14th gen at half or two-thirds the power, which would be a game changer for power conscious customers. A lot of ifs. We will know when Arrow Lake reportedly launches by the end of 2024.
I love ya Jufuss but the video title is bait. Just like if you made it about Vcache and said Intel can’t compete. How am I supposed to find affiliate links with content titles like this?
Honestly, if we are purely talking about gaming, hyperthreading isn't doing anything most of the time, and in many titles it's actually hindering performance. Personally I'm using a 14700KF without HT and the only game seeing slower perf is Cities: Skylines 2. Battlefield gained roughly 20 fps on my end by simply disabling HT, Counter-Strike gained more. I don't play Warzone so I can't comment on that, but you could make a video about the topic; I can say my experience has been the complete opposite of what you said.
HT enabled in warzone/mw3 gives around 10% more fps
Hyper threading is being removed in preparation for a technology called rentable units that is coming in 2027.
There is a chance the new chips might have a bunch of overclocking headroom and run much cooler, plus potentially better memory support?
With Arrow Lake that's a real possibility.
I wouldn't trust any Intel 13th/14th gen K CPUs atm; they are failing left and right due to motherboard manufacturers using the wrong preset. Intel is about to release new default presets which reduce the wattage used, and I bet performance will drop as a result.
I'm all in for removing hyperthreading. It gives a lot of physical space on the chip to add more cache or bigger cores.
Do you think it'll be like the jump from the 10900K to the 11900K, where you get fewer cores but the cores you get have slightly better IPC? Ended up skipping that one altogether, but as I listened, the E-core improvement could be interesting. Wonder what the cache is going to be like; I think the gap could get closed with a V-Cache-type core chip. Probably going to buy it regardless lol
The 10900K is probably faster in most applications, even gaming; the latency is too high on Rocket Lake and the memory controller sucks. So if you have good RAM the 10900K is probably faster, as long as you don't need AVX-512 (which the 11900K has) or PCI Express 4 (which you need a Z590 to use). Watch Framechasers videos from a few years ago: 10th gen beats everything until Zen 5 and Alder Lake. Better 1 percent lows than the 5800X3D.
@@justinloftis2307 yea I moved over to 12-13-14th Gen and eventually 8200MT M-Die
Soo go AMD for futureproofing, drop a 7800X3D in it and wait for the 9800X3D lol
That's what I'm doing 😂
Hmm, what about 7950X3D or 9950X3D w/ HT off?! 🤔
Gotta love this mad Intel fanboy: "14900K best CPU" that just gets bent over by a 7800X3D at half of everything. But then there's users like me who have the expensive motherboards to put it at 5.2 GHz; yeah, no, the 14900K is not 4% slower in gaming anymore but more like 10.
Why would you upgrade 1 gen down the line for a Measly 10 to 12% IPC increase? That's just Stupid. Prime AMD Buyer right here boys. 🤣🤣
@@NBWDOUGHBOY dude, I am an Intel fanboy. But this time, for me, AMD wins... the 7800X3D is cheap as f.... Wait 1 year, sell the 7800X3D and buy the 9800X3D... which will also slap Intel in gaming
Why does warzone run higher avg FPS and 1% lows with hyperthreading off on 13 and 14 gen?
Is this Prison Break edition?
DannyzReviews did a good HT vs no HT comparison on a 13900k. HT on usually hurt a bit in games.
Love your discussion on hyperthreading, cores, and IPC. I was wondering whether hyperthreading has a direct impact on IPC, and if so, is it linear? I've never really considered how much performance hyperthreading adds to a CPU; maybe a potential video showcasing the performance difference? If they actually take away hyperthreading, this topic may be more important to look into, right?
Finally, you look great with short hair!
This is not the first time Intel has ditched hyperthreading. The whole Core 2 line didn't have hyperthreading, those CPUs beat anything AMD had to offer back then, and that architecture was a descendant of the older P6 architecture. I am sure Intel will figure something out. We will have to wait and see when they are released, but if AMD ends up on top then good for AMD too! I do miss playing on my old Core 2 Quad Q8400, good memories!
That's because Jim Keller and his crew left right after Hector took over AMD. The dude drove AMD into the ground. AMD was running with inept engineers for a few years before Conroe came out.
@@m8x425 Yeah, you are right! I had an Athlon 64 3400+ back then as well, when AMD was on top with their Athlon 64, X2, and FX lines. It was a sweet CPU, lots of good memories. Then once that Intel Core 2 line came out, man, that shook AMD for a decade before they caught back up, after they were done playing Phantom of the Opera with their Phenom 1 and 2 lines, and bulldozing and pile-driving whatever they were looking for with their Tonka trucks lol, but they made a good comeback eventually.
How big is the cache on this new CPU?
E-cores will be revised with up to 25% improvement, and there's talk of maybe a refresh going from 16 to 32 E-cores in '25. Not sure if there's any AI on this CPU, TOPS etc.
Aren't the new CPUs supposed to have 8 more E-cores?
Time to rock 13th/14th gen for some games and Zen 5 X3D for others. I want ArrowLake to be really good, but Intel just isn't doing anything to give me confidence.
If the E-cores are Raptor Lake cores but at lower clocks, then maybe this will be interesting.
This reminds me of the 9900K and the 11900K: it was the same everything, just an IPC uplift.
I could see the new chips being a lot better even with similar clocks and no HT, if they can get the power consumption (and thus heat) significantly down. It'll be fairly easy to beat the 14900KS if they can just make a similar one that doesn't thermal throttle no matter how you try to cool it.
Salam brotha nice Hitman HairCut.
Math looks good, but we don't know how E-cores will play into the mix
The 285K could effectively use the E-cores for multithreaded workloads, in which case the 285K would still be faster in games. Right now, you can basically turn off E-cores and see no difference in games because of that hyperthreading. 5.5 GHz could also be the factory spec, but we don't know what the boost will look like, and they might have made it slightly lower to rein in the perceived power usage even though most enthusiasts will crank it up.
his math is wrong because it doesn't show power usage.
Right now a 14900K unlocked can use 350 watts. Even if you lose 5% but only use 200 watts or less, that still has better performance per watt. That would be a ~43% drop in power for a 5% loss in single-thread performance. Which is still better.
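For what it's worth, that perf-per-watt arithmetic holds up. A minimal sketch using the numbers quoted above (350 W vs 200 W and a 5% performance loss, which are the commenter's assumptions, not measurements):

# Minimal sketch of the perf-per-watt comparison above. Wattages and the 5%
# performance loss are assumed numbers from the comment, not measured data.
old_power_w, new_power_w = 350, 200
old_perf, new_perf = 1.00, 0.95      # normalized: new config is 5% slower

power_drop_pct = (old_power_w - new_power_w) / old_power_w * 100
ppw_gain_pct = ((new_perf / new_power_w) / (old_perf / old_power_w) - 1) * 100

print(f"Power drop: ~{power_drop_pct:.0f}%")        # ~43%
print(f"Perf-per-watt gain: ~{ppw_gain_pct:.0f}%")  # ~66%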
@@kevinerbs2778 Man, I gotta say, do you know what this place is? I get it, so does everyone else here, for a lot of normal people performance/watt matters. That's just not this channel. Top performance. That's it. No one who tests tech/peltier coolers gives a rat's tush about power consumption. :)
@@MooKyTig Power is heat; more heat = lower scores, lower performance. Apparently you don't understand how clocks work on CPUs either, because you're dumb
E-cores don't need an IPC bump to gain perf.
There are 2 easy ways to bump the E-cores:
1) MHz
2) cache
A harder but still possible way is improved branch prediction, which would lead to an effectively improved IPC.
I have an Apex Encore, a 14900KS and 8400 MHz G.Skill waiting to be opened. Damn, Intel, what should I do? I don't know if I should return it to the store and wait for Arrow Lake, or open it right fukinggg now and gamble on my CPU not having the instability issues =\
Take that shit back and wait for AM5😂😂😂😂
Bald look is better than Asmongold with hair!
Hey Jufes, AMD just revealed Zen 5 and the 9950X, the Zen 5 CPU competing with the 14900KS, coming in July. Max clock speed is 5.70 GHz (same as the 7950X), with a 16% to 40% IPC increase. I still think the 14900KS is competitive with EKG Delta TEC 2 cryocooling, but if you buy a 9950X or 9950X3D for an RTX 5090 (which you SHOULD ;) ) you're investing in EKG Delta TEC 2 cryocooling anyways.
What's your opinion of Zen 5 versus 14th and 15th generation? I personally think the Intel Core i9 14900KS and the 14900K, with a proper delid, direct-die mounting, and an EKG Delta TEC 2 cryo custom water loop, are nearly as future-proof as AM5.
I love these videos, but I pretty much only play CS so I went with AMD anyways. Still, it's interesting to see Intel go with dual CCDs as well. Perhaps there is a way to reduce or eliminate the dip. Maybe AMD had a plan for it the whole time and it's just been slow developing (probably not).
I just switched to a 7800X3D from my 13700K. Paired it with a 7900 XTX, as I sold my entire previous PC with my 4090 in it. In the games I play, I literally get equal or better FPS than my 13700K/4090 build. I play at 1440p on an AW3423DWF, for reference.
The 7800X3D is good enough to make up the gap between the 7900 XTX and the 4090. To further FAFO, I bought/returned a 4090 and tested it in my current build, and got no increase in FPS (again, in the games I play). I was honestly surprised the 4090 didn't get higher FPS, but glad I could lay that question to rest and not try to get another 4090.
Off topic I know, but blew my mind and had to post 🤣
At least there's still a believer in Intel progress lol. Wait for AMD 9950x3D
The L2 cache is supposed to be bumped to 3 MB per core, which could yield a "significant" (I'm guessing 10% and above) uplift in gaming. This is also the worst case I see happening; they might just surprise us with actual IPC gains. This also makes sense if they want to beat previous gens' CPUs without HT, which I still assume they will. They also claim their Foveros chiplet tech has no downsides compared to monolithic, but the Meteor Lake showing has been kinda sad. But that's also maybe because they went with a "battery life first" approach where the shit cores are favored until performance is detected to be required, which I assume comes with a delay; latency is horrible there.
I loved how you just shaved your head and fired up the recording right away 😁
HT off is only a 10% reduction because E-cores don't have HT. Games don't use HT unless you run out of physical cores, but with 16 E-cores that's not going to happen. Games DO use E-cores. Including Warzone.
There have been repeated instances of games using the E cores by accident and it leading to stutters. Those games then had to be patched to outright ignore the E cores. Principally everything you want from a CPU is to turn in work to the GPU as fast as possible for the best frame times. The E cores will never be able to provide that due to the combination of them being clocked way lower on top of having lower IPC. It's fundamentally impossible for them to turn in work at the same rate as the P cores. Their real function for gaming is having system processes offloaded to them so that the P cores can spend as much time on the game threads as possible. That is outside of productivity tasks that aren't latency sensitive.
@@blkspade23 You have no idea man. Ecores are used extensively in games.
@@blkspade23 Not all workloads are the same. But it does depend on how the game engine is programmed.
But, for example, many games have the audio part as a separate thread. So, for example, the music doesn't interrupt when you change a scene or go to the menu and so on.
If the music is handled alone by a thread, that one can be done by an E-core running at 400 MHz. If it's also with sounds, a 2.0 GHz E-core can probably take care of all of it. Thus easing the load on the P-cores which can deal with the more heavy stuff smoother, with less context switches and interruptions.
@@Winnetou17 What you're proposing is more theoretical than practical. A separate thread for audio has probably been a thing since sometime after windows Vista. Software for Windows has generally been assuming symmetrical threading. Games would have to have a specific code path just for the possible presence of e cores that only been an option for 3 gens now. Never mind that few games are fully taxing 12-16 thread CPUs, that they have to go an now pick a purpose for the e cores. Everything you said still works with any otherwise unutilized P cores, as it's already what they've been doing. 1 game I've personally seen using ~19 threads of a 5950x was Star citizen. It really did not like e cores being in that mix. I see it maybe being more plausible when a 12600k is like minimum spec.
@@blkspade23 Fair enough
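The "offload background stuff to E-cores so the P-cores stay on the game" idea from this thread is something you can already approximate by hand. A minimal sketch, assuming a hybrid CPU where logical CPUs 16-31 are the E-cores and using the psutil package; the core indices and process names are placeholders, and this is just manual affinity, not how Thread Director actually schedules:

# Minimal sketch: pin background apps to E-cores so P-cores stay free for the game.
# Assumes logical CPUs 16-31 are E-cores, which varies per CPU; check your topology.
import psutil

E_CORES = list(range(16, 32))                 # assumed E-core logical CPU indices
BACKGROUND = {"Discord.exe", "obs64.exe"}     # example background process names

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] in BACKGROUND:
        try:
            proc.cpu_affinity(E_CORES)        # restrict this process to E-cores
            print(f"Pinned {proc.info['name']} (pid {proc.pid}) to E-cores")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass                              # some processes need admin rights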
@Frame Chasers why don't you test the 14900KS HT on vs off (E-cores on) as an experiment
I have many times
@@FrameChasers You may have, but why not make a video and call it a 15th gen simulation, best-case scenario, and test the 14900K HT off at 6.2 GHz vs stock (6000 MHz CL30 RAM) vs 6.2 GHz HT on? This can also work as a promotion for your awesome kits, showing how bad a stock 14900KS with 6000 CL30 is. P.S. you should start adding a stock base configuration to your max OC videos, because when people can't compare, they don't know how good your max OC is.
No time man, all that info is in my discord
@@FrameChasers Does your Discord have a direct link to this particular topic?
HT can give up to 50% of a P-core in mixed productivity workloads, so it's worth it for that IMO. Happy with my 6 GHz 14900KS for a good few years now. Definitely skipping next gen.
I wonder what copium tastes like?
Is it bitter? Does it leave an aftertaste?
Do you eventually grow accustomed to the taste, and even find yourself enjoying it?
I wonder.
it's summer
I'm no engineer, but I wonder if Intel's new chip will require Windows 12, which may have a reconfigured scheduler that utilizes E-cores in place of hyperthreading? If so, then the new chips will def be a big upgrade over the 13th/14th gen i9. Tbh I don't know, just speculation, since Windows 12 is coming out around the same time as 15th gen.
Aren't they using the 3nm node at TSMC for their next gen? If so they could be a lot better. Nvidia made a lot of gains going from Samsung 8nm to TSMC 5nm; for Intel it could be even bigger than that, fingers crossed.
Rocking the Max Payne 3 look now.
I got a hair cut today then come here and noticed Jufes has also had a haircut. Looks better on you man
Looking much better.
I'm hardcore Intel, but I may go with a 9000-series X3D AMD setup. I have a feeling it will be a lot faster in gaming. Also, there is a rumor AMD may be putting X3D on both CCDs for the 9950X3D.
Going bald looks bad, but this new style is much better... Thumbs up DUDE, Kratos incoming... beard is the new style :D
I'll look forward to your tuning and testing, especially the latency testing.
btw, Intel "glued the chiplets together"... what was it, 4 years ago, Intel? :D I am on my 12900K at 5.5 GHz, P-cores + HT only @ 1.315 V, no E-cores = 220 W max; in gaming 85-115 W, and I am chilling on it :D
yo the haircut is absolutely winning
Cinebench score doesn't matter that much for gaming
Hyperthreading off has more FPS in some games like Rust.
Lol. Zen5 is gonna stomp this shit.
This gen is the "last" good gen, only chiplets from now on; the dark ages have come
Yeah, for Intel. Their chiplets suck ass😂😂😂😂
@@Manicmick3069 Foveros is better than Infinity Fabric btw; you just don't know why monolithic CPUs are good and probably don't know how the chiplet architecture works
Ayye he cut it off. Nice cut, think you should keep it.
Chiplets are a worry. AMD has put a lot of time into chiplet latency issues, and Intel has less experience. I expect the worst but hope for the best.
Great comment, I'm glad I went with 14900K, and I think those of us who did so will be vindicated
Intel already made chiplets work for laptops, with very low idle power consumption despite being chiplet-based. I think Intel will figure out a solution that AMD hasn't thought of yet.
@@karl5010 From my understanding they are taking a different approach than AMD
@@user-tc4tz8ww1z I personally skipped both 13th and 14th gen due to the beyond horrible power efficiency. Intel has been trash for 2 gen now. What I want to see is Intel come back with at least equal to Zen 5 in singel core with better power efficiency to win me back.
@@laggmonstret Ryzen is more power efficient in certain specific scenarios but not across the board and not during chip lifetime. Tech Notice did a great video on that.
Intel just works... oops... but... amazing how some people try to hide the problem!
intel has stagnated. they're gonna get brutalized by AMD.
enjoy your ban buddy
They don't need hyperthreading with the new coding for how the cores will work..... Hyperthreading actually cost them performance because it complicated core utilization..... it's why they dropped it, plus the efficiency gains. I'm not saying it's going to beat the next AMD chip, but it will compete with it and have a lot better efficiency than previous Intel chips
Intel's the one increasing cores while AMD has been doing the same thing since the first Ryzen XD
@@dongzhuo7606 You should do some research before spreading lies. The first gen Ryzen were maxed at 8 core. The Ryzen 1800X was a 8 core 16 thread cpu.
Intel stagnating at 7 GHz with EKG Delta TEC 2 custom loop water cooling? 6.90 GHz all-core as well? I don't think so.
Arrow Lake will be initially slower than 14th gen, for a massive power efficiency gain for Intel on TSMC 3nm. I suspect Panther Lake and Beast Lake are when Intel will push the new tile and composer chipset design, to match or exceed Raptor Lake clock speeds.
Provided Intel can keep thermal headroom and performance per watt on parity with AMD’s Zen 5 processors.
The 9950X and 9950X3D are rumored to have a 46% IPC increase over Zen 4, with 16 Zen 5 performance cores and a max boost clock of 6.50 GHz.
I’m not anti AMD at all, I want to make video games one day with a Threadripper PRO build, but my primary machine remains Intel.
Not only do some games not care or even prefer if HT is disabled, you're also gaining more efficiency to the compute pipeline by having it removed completely. The 9700k was barely slower than the 9900k, but it only had HT _disabled,_ not removed. This is a swing and miss on this one, Jufe. While there may be a niche title here and there that benefits from HT, very few cared at all when the 9700 was up against the 9900, especially when OC'd to 9900 speeds. Not to mention the i9 had more cache which did almost nothing to stave off the i7 in the vast majority of games.
Hyperthreading became far less beneficial with 8 core CPUs, and completely obsolete with E cores. You may see the rare app that was designed with HT see a small decrease, but this should be greatly beneficial to those that don't care. And the vast majority of games still love single core performance. I'm still skeptically optimistic.
Why the heck did Jufes take hair from his head and put it on his face...
Jufes looking like a snack😋
Hardware Unboxed and Gamers Nexus (the two Steves) will run the 14900K at the fastest RAM it can support stable; I think that's 7400 MT/s, and AMD Zen 4 at 6000 MT/s, maybe higher/faster RAM for Zen 5. 15th gen will run at whatever Intel recommends to reviewers. I believe most reviewers including HUB will run the 15th gen at 7400 MT/s so we can see the CPU scaling :) The issue is that Intel will 100% set a default motherboard power limit of 125 W (to fix 13th/14th gen degradation issues), so they can gimp the 14900K to make their 15th gen look way better than it is... while making the 14900K slower than Zen 4 and Zen 5.
My mans looking so much better rocking that shaved look, hell yeah brother. Keep up the good work.
So it's the 10900K vs 11900K again, but now it's called 14900K vs 285K: for some things the new one is better, for others the older product is better.
The math you are doing only works if you assume the perf improvement is 15 percent IPC, but... we have no idea, because this CPU has no official info from Intel yet and no third-party testing. It's possible it will still have hyperthreading; it's also possible it will have 30 percent faster IPC. But I don't think it matters to judge it yet, because it's all based on leaks.
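For anyone who wants to redo that back-of-envelope math with their own assumptions, here's a minimal sketch; the IPC gain, clock speeds, and HT penalty below are placeholder assumptions in the spirit of the leaks being discussed, not Intel numbers:

# Minimal sketch: estimated relative gaming perf = IPC gain x clock ratio, with an
# optional penalty for losing HT. Every input here is an assumption, not a spec.
def relative_perf(ipc_gain, old_clock_ghz, new_clock_ghz, ht_penalty=0.0):
    """Return estimated new perf relative to old (1.00 = equal)."""
    return (1 + ipc_gain) * (new_clock_ghz / old_clock_ghz) * (1 - ht_penalty)

# Example: assume +15% IPC, 6.0 GHz -> 5.5 GHz, and a 5% hit in HT-friendly games.
print(f"{relative_perf(0.15, 6.0, 5.5, ht_penalty=0.05):.2f}x")   # ~1.00x, i.e. a wash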
Andrew Tech vibes
I had the same feeling when I read what Intel is doing. I didn't feel like being the guinea pig for their first cheaplet.... if I wanted cheaplet I would have gone AM5
Genuine question what's the negatives of a chiplet design? I haven't run into any latency issues in game with a 7800x3d. Is it mainly just causing issues with overclocking?
@@ezechieldzimeyor4541 Latency and stuttering, mostly. Over time those problems have been getting smaller. Zen 1 and Zen 2 were pretty horrible.
@@InnocentiusLacrimosa I agree that they have made improvements.
One of my problems is that chiplets were meant to make these processors cheaper to produce by utilizing more of the silicon wafer, iirc.... Where are the cost savings for the consumer? A 7950X3D, which is AMD's rough equivalent to the 14900K, costs the same.
@@user-tc4tz8ww1z there were cost savings originally for the consumer. Now AMD is just pocketing them all for themselves and tech media is eating it up without a complaint.
I think new i9 will have 32 E-cores
Zen 5 will change the platform with "AM5+"; the bridge chip they are using now is dragging them down (crappy DDR5 perf). I doubt they will give up HT... the big core / small core idea has never really worked since 1987; you can't depend on Microsoft to do the task scheduling and tell the cores which part is the heavier part. There are billions of apps out there, and unlike datacenter shit, games are delay-sensitive, even with the AI stuff to help. My suggestion is to grab the cheapest flagship now if you want to build a new machine now, and skip 2 generations to see what happens.
I know Intel has been dropping the ball a lot in the last ~5 years, but for some reason I think they'll be able to pull off that tile crap better than AMD pulled off chiplets. AMD is an indie company (lol), so they don't actually have the resources to make desktop CPUs; they just jury-rig defective server CPUs and sell them as desktop CPUs. Intel actually designs specific CPUs for each market, so I think we just gotta wait and see.
totally. AMD's problem is using two Core dies and the controller chip
That's a gross misinterpretation of their binning process. It's actually a far more intelligent strategy of manufacturing 1 die that can scale between markets. It's more cost effective, and produces less waste. The only thing that separates an 8-core Epyc die from a Ryzen one is that the Ryzen die operates at a higher nominal voltage. They are otherwise equally functional. Dies that are actually defective and have to have cores disabled still get used in the less dense server CPUs. The resulting products in both segments are still more energy efficient than what Intel has been producing, while being competitive if not better. If Intel's way of producing specific silicon per segment were actually superior, then they should be winning hands down. It is true that AMD doesn't have the manufacturing capacity of Intel, but it's a reach for you to try and spin that the way you did. Intel produces way more silicon than they can actually sell, and has to artificially cut down way more chips to fill lower-end SKUs. It's one of the many ways they are hemorrhaging money.
@@blkspade23 I don't care about Intel's or AMD's profits lol. All I care about is not having massive fps dips in the games I play, like Rust. While AMD reuses their infinity fabric garbage up and down the entire stack, Intel leaves their mesh architectures for servers and workstations where they belong.
I'm really not excited for this tile crap either, but I'm at least willing to wait and see.
(I have a negative opinion of AMD because I basically got scammed into buying a 5800X3D. It was AMDipping so hard in Rust it was giving me headaches and losing me fights. YouTube benchmarkers don't actually play games, and the games they benchmark are easy single-player crap like Tomb Raider where 99% of the game is just slowly jogging in a straight line.)
@@BrianCroweAcolyte The X3D CPUs are typically still better in most multiplayer games too. Rust is closer to being an outlier than the norm. I've finished top 3-8 in COD DMZ a number of times in a row on a 5950X, and wouldn't pin my wins or losses on my hardware. Games are fun, win or lose. They aren't making me money. I'd be more pissed about them refusing to run or crashing.
I care about their profits insofar as there needs to continue being at least two players in the space to have both relatively sane pricing and technological progress. It was an absolute tragedy that my 4790K OC had practically equivalent performance to a stock 7700K, and doing way more than just gaming on my PC needed more than 4 cores. Intel is adopting a chiplet-like method because it makes sense. Gamers aren't going to keep Intel afloat or in existence. Which is why E-cores exist. Even if you hate the things Intel is trying, at least appreciate that they are trying something new. Or new to them, but copied from AMD. The reality is that software has to adapt to the new way of doing things.
@@blkspade23 Even if Rust is an outlier, I'd still just take the monolithic Intel CPU even if I didn't play Rust. It might not be the fastest all the time, but you can never roll snake eyes like you can with AMD's laggy infinity fabric.
I think if Intel cant figure it out too, I'll just stay on 14th gen and skip a couple generations until the newer CPUs are so fast that the 1% and .1% lows are better even with the tile/chiplet dips.
Bro is looking like an OSRS woodcutting bot
Cool new look, you pull it off in a good way, Triple H style :)
Are you basing this on an old neutered sample from China? I seriously doubt the next intel cpu under performs 14th gen
There's literally no point to upgrade rofl. I think losing HT will be huge. Wait and see, but it will lose in Cinebench. It'll probably get techtube-tested at the Intel baseline/XMP profile on the RAM for 14th gen.
You really care about benchmark numbers. Even on Ryzen I disabled HT in the BIOS and my gaming is more stable. Don't care that Cinebench says 30% less when I'm feeling games run 10% better
nice shave
amd is rubbing their hands rn
14th gen is the last generation. Lol. Lmfho.
At least brush the hair off the shirt, man. lol looks good though 👍
Can you please test PUBG in first person perspective mode, 1728x1080p
damn my brain recognized you as andrew tate for a split second upon opening the video
Intel downvolting 13th/14th gen with the Intel Baseline profile to make Arrow Lake competitive. Intel dip lol.
I thought you were gaming without HT, at least from your earlier video where you were saying HT isn't needed. This one here is quite different from your standard. Hmm, weird.
240 Hz