Competition makes us all winners.
Of course, I hate when someone says "oh, AMD has the best APUs and mobile chips, Intel sucks, blah blah blah"... But you people must understand how bad it is when someone has a monopoly on the market. Look at NVIDIA and the prices of their GPUs year after year; it's a disaster for customers...
@@marcinlegacy9660 The reason why NVIDIA is so good at AI is because they provide the tools and libraries even for the open source community. AMD creates good hardware at lower cost, but fails to support the community. They simply expect people to develop libraries for their hardware. The truth is that AMD is simply clueless when it comes to marketing. Intel, on the other hand, is a marketing genius, and there is already better AI support on Intel GPUs than there ever was on AMD.
But there's a price to pay, sacrifice etc.
@@abaj006 The creator of Linux showed the middle finger to Nvidia over their "support". I don't know what you are smoking.
AMD is heaps better when it comes to support. Just look at FSR vs DLSS. Or maybe FreeSync vs G-Sync. Or Vulkan vs DirectX (yes, it's MS but Nvidia helped).
What Nvidia is better at than AMD is optimisation. The Nvidia GTX 980 humiliated the R9 390X, but years later Nvidia abandoned that GPU while AMD kept optimising, to the point that even the slower R9 290 now surpasses the GTX 980. Same story with the Vega 56 vs GTX 1080, the RX 5600 vs GTX 1660 Ti, and the RTX 2080 vs RX 5700 XT. And now the affordable RX 6800 vs the RTX 3070 Ti (massive VRAM issues).
@@ekinteko Bruh, FSR sucks compared to DLSS and XeSS
I really want to see the performance-per-watt improvements, especially when the P-cores disable themselves. And even more so since the E-cores have the same IPC as last gen's P-cores; I think it would be very viable for certain scenarios.
Excited af for Lunar Lake handhelds
How much of this can we take seriously when it's showing only benchmarking software? And doesn't the Z1 Extreme on the Ally get 60+ at 15-18 watts right now? Thanks, Mr. Phawx.
Very, very interesting video! Competition is good, and if Lunar Lake delivers even a touch better performance than Hawk Point with 3-5W less power, it could be a big win for handhelds anyway, in my opinion, especially for MSI. That said, the 8840U is still a strong low-power APU for gaming, and I'm sure Strix will be much better than LNL. It needs to be that way; it has 10- and 12-core variants and, at least for the 370HX, an even bigger GPU. Let's hope for the best in any case, and don't forget that FPS can also be boosted by upscaling technologies, which make a lot of sense on small 7-8" screens, where it's even more difficult to see imperfections than on bigger ones. And BTW, speaking of upscaling technology, the Xe2 graphics architecture on LNL now has a big XMX hardware accelerator for AI, and considering that XeSS runs better on XMX on Intel cards, it could be interesting to see the performance gains; I think that's a thing that should not be underestimated. I'll wait for the "not so good" video about LNL now 😁
4:38 this scene looks terrible. I can see Intel using their dirty graphics fidelity reduction technique and fake-ish frame skips to achieve a higher frame rate.
Just look at AMD's beautiful graphics fidelity and near-perfect frame pacing.
At what cost are we gonna keep waiting for Intel? Is it even worth the wait?
Nearly half the power usage
But still crappy graphics drivers
For old games, OK. But for new games that's not true. Intel matches AMD or does much better.
This is genius from Intel. Optimized power management will truly make handheld PCs and laptops portable.
Given the complete fiasco of the MSI Claw as shown by GN, Lunar Lake has got to be an improvement over MTL. Right???
Bro, these advancements in portable technology are scary good
That 15w tdp looks amazing
That's part of what makes the Steam Deck so great: it's so power efficient.
Your appearance on the MLID podcast was awesome. The content is appreciated!👍🏼
Thanks!
I love the fact that there is competition, especially in this space.
We will be getting handhelds 10x better than the Steam Deck before the decade is over
I really hope Intel puts in the work for the long haul. They've got great people on their driver team.
Lunar Lake is still 4 months out; they could improve the drivers and power management even further.
Imagine if Intel also gets better at making their GPU drivers. Damn, competition, I fcking love you.
The secret sauce is on-package DRAM and no more cores or threads than are needed for gaming with integrated graphics, leading to low overall power draw. Possibly also power-gating the unwanted core cluster when a real-time workload doesn't work well across both. That's my guess before watching past 45 seconds.
Thanks for the comparison. The 8840U is last year's CPU though; it's Phoenix. Meanwhile Lunar Lake is not even ready to release yet. 😵💫
I'll look into it and compare it with this year's CPUs when it's "released".
I'm so curious to see strix point vs lunar lake!
Like you said, I wonder how Intel's 'AutoTDP' functions and how it determines power/voltage for the fps it's able to achieve. For example, if there's enough GPU headroom available for 60+ fps, will it just default to an fps cap of 60 to get the most efficient power/voltage for the system? How do the drivers determine this?
I think it would be smart for AMD to implement a 'Radeon Chill+', much like 'Anti-Lag+', before people start to benchmark these two together and conclude that Lunar Lake is the best low-power gaming device (tbf, they are leading by a node over AMD now, N3B vs N4P). That said, if there isn't a 'Chill+', I'd still like to see a tuned/'AutoTDP' benchmark run of Strix vs LNL.
Of course, conclusions can be looked at another way if the 890M is just more performant in general across a variety of games.
Also: we now have Zen 5c cores. I'm very interested to see 4P+4E Lunar Lake vs 4x Zen 5 + 4x Zen 5c (for an apples-to-apples comparison). Does AMD have a workaround for not being on N3(B/E), using Zen 5c when a program doesn't need sustained high-frequency workloads or starts to throttle?
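Nobody outside Intel has documented how 'AutoTDP' actually works, but one plausible guess is a closed-loop controller that keeps nudging the package power limit toward the lowest value that still holds the fps target. A minimal sketch of that idea, assuming a toy fps-vs-power curve; `auto_tdp_step` and `toy_fps` are invented for illustration, not anything from Intel's driver:

```python
import math

# Hypothetical sketch only: Intel has not documented AutoTDP's internals.
# auto_tdp_step() and toy_fps() are invented stand-ins.

def auto_tdp_step(target_fps, current_fps, current_tdp_w,
                  min_tdp_w=6.0, max_tdp_w=17.0, gain=0.05):
    """Proportional controller: nudge the package power limit toward the
    lowest value that still sustains the fps target."""
    error = target_fps - current_fps        # positive -> we need more power
    new_tdp = current_tdp_w + gain * error  # simple P-term update
    return max(min_tdp_w, min(max_tdp_w, new_tdp))

def toy_fps(tdp_w):
    """Stand-in fps-vs-power curve with diminishing returns."""
    return 75 * (1 - math.exp(-tdp_w / 8))

tdp = 17.0  # start at the platform's full power limit
for _ in range(200):
    tdp = auto_tdp_step(60, toy_fps(tdp), tdp)

# The loop settles near the minimum TDP that still holds 60 fps (~12.9 W
# in this toy model) instead of burning the full 17 W.
print(round(tdp, 1), round(toy_fps(tdp), 1))
```

Whether the real driver does anything like this is an open question, but it matches the observed behavior: variable package power that tracks the frame-rate target from below.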
Please explain how you optimized AMD to use less power to hit the 60 fps target.
I think such guides are interesting on all platforms, even desktop.
I'd like to optimize my desktop system to hit 120 fps and minimize CPU and GPU power used (Intel CPU for now, and AMD GPU).
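The usual manual approach on any platform is an offline sweep: benchmark the same scene at each power limit and keep the lowest one that still sustains the target fps. A minimal sketch, with `fake_benchmark` standing in for a real measurement pass (the real numbers would come from a vendor tool or an fps overlay; nothing here wraps an actual API):

```python
# Hypothetical sketch: fake_benchmark() stands in for a real benchmark
# pass at each power limit; no real tool is wrapped here.

def lowest_tdp_for_target(measure_avg_fps, target_fps, candidate_tdps_w):
    """Walk power limits from low to high and return the first one that
    sustains the fps target -- i.e. the most efficient setting."""
    for tdp_w in sorted(candidate_tdps_w):
        if measure_avg_fps(tdp_w) >= target_fps:
            return tdp_w
    return None  # target unreachable at any tested limit

def fake_benchmark(tdp_w):
    """Toy curve: fps scales with power, then flattens at a cap."""
    return min(144, 12 * tdp_w)

# Find the lowest limit (in watts) that still holds 120 fps.
print(lowest_tdp_for_target(fake_benchmark, 120, range(5, 26)))  # -> 10
```

On a desktop, the same idea applies per component: sweep the GPU power limit with the CPU fixed, then the reverse, since the two targets interact only through the fps measurement.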
Seeing this makes me interested in where the MSI Claw refresh will end up in terms of performance
It is insane where Intel came from (though it's 3DMark, not games) and where it is now.
It is still lukewarm considering how much of an advantage Intel got with on-package RAM and 3nm (an actual node bump over 5nm, while 4nm is a small optimization of 5nm).
Lunar is, relatively, expensive thanks to said packaging.
I'm thrilled to see actual devices, third-party benchmarks, and price battles (which Intel probably can't afford right now).
Can't afford it, but they will still do it for no profit, or very little, for the good exposure... not just talk but action going on... they told Microsoft they would do the new-gen Xbox for the price of silicon and labor, no profit, to show off their new foundry systems that take ages to build.
Competition is good. I hope AMD and Intel keep holding each other's feet to the fire. Plus, if this works as advertised, it is a great feature whatever the case.
Any chance AMD gets this for 2025? I still prefer AMD.
Frametimes in the head-to-head also seemed much smoother, yet still stuttery compared to AMD. Xe2 (Battlemage) showing up and showing off? 🤔
I believe frame generation is enabled on AMD, making the slowed footage look much (miles) better
Lunar lake should be very interesting, but still dependent on Intel continuing to improve their drivers.
Hopefully it’s legit. Still gotta see it in real life
I'm not convinced that auto TDP was the factor with the new architecture here, because the CPU package power was variable on both platforms. What immediately stands out, by a mile, is how much less power Lunar Lake requires; seemingly a very good thing.
Whilst any form of auto variable TDP is good, I find myself asking why it has taken so many years for such an implementation to arrive. My cynical answer is that it is likely a good example of drip-drip tech advancement.
But with all that said, if we will be able to do more while drawing less power, then it is a positive.
But where auto TDP could really come into its own would be if you could go down to a setting where a program that only needs to sip juice would only draw that juice.
But what we see in reality is yet to come, so always a healthy amount of scepticism at such an early stage.
The problem with the MSI Claw is that it doesn't have enough RAM for Intel chips. If this 8-inch Claw 2 packs 32GB of RAM, then we can see the potential of Lunar Lake, but I guess the price is not going to be attractive. They might be selling this for over $1k, lol, at which point you might be better off getting a gaming laptop or desktop.
A 32GB LNL handheld with USB-C power in and video out that runs Linux well would be perfect! One-cable dock to the monitor on my desk (it's got 75W PD and a USB hub, so keyboard/mouse could stay plugged into the monitor), and just grab it and bring it to the couch or bed for gaming/movies. It does EVERYTHING
AMD has been too slow in mobile; they missed the chance to dominate like EPYC.
If the 6800U had been released in 2020/2021, i.e. before Alder Lake, they would have dominated, with Strix Halo already out in maybe 2022/2023.
Similarly with GPUs: the 5700 XT series should have been 2018, the 6900 XT 2019, and the 7900 XTX 2020/2021.
You could argue Zen 3's 3D V-Cache was also late; it should have been there at launch in 2020 (apparently it was ready then). Intel had NOTHING to compete with the 5800X3D back then.
Intel is stable at 60 fps; AMD sometimes drops to 50 fps
While I'm happy that Intel did manage to catch up with AMD, this STILL does not look like the preferred system to get.
It gives you the same frames per watt (or 10% better, maybe), and that's with RAM soldered onto the CPU/APU package.
And we are supposed to believe they managed to fix the driver issues. And we are comparing against last-gen AMD.
I'm happy that competition exists, but I'm still Team AMD =)
NGL the Mendocino 7320U was pretty good too in the Ayn Loki Mini Pro. Too bad they discontinued it. 😕
Strix point is not going to compete with Lunar Lake anyways. Completely different market segments.
Curious if we'll ever see a custom chip that's just e-cores for handhelds.
Yup, 8 E-cores should be enough for handheld gaming, and 24GB of DDR5 RAM at 8000MT/s, with something equivalent to an RTX 3050 Max-Q.
Can't wait to see their ultra thin laptops too
It's still going to come down to driver maturity and compatibility, even though this is looking promising. It's like the Nvidia-AMD rivalry for just about forever. People opted for Nvidia because they didn't want to wait to see if things worked or got better on AMD. I'm not sure people are even willing to play that wait-and-see game for an $800 handheld, even if it's close to AMD overall.
If Nvidia could figure out a way into this space then so begins the Spicy Wars
If Qualcomm can make Windows on ARM take off, Nvidia already has Tegra.
Lunar Lake looks like a stuttering slide show by comparison
That sweet 15W Power draw 😁
Where things get interesting is that one is called Meteor Lake and the other Lunar Lake 😂😂 Joking, man. Way less power usage; good on them 💯
Intel getting close to AMD's last gen is good, but this time Intel is using the N3B node, which is supposed to be better than AMD's, yet it isn't coming out ahead. At the same time, AMD's next-gen APU is coming; it should be better than last gen, and it's still on an N4 node. That shows AMD leads Intel in chip design, not process technology.
Prospects for a new Win Mini looking bright!
Not with that crappy iGPU
Absolute banger of a video! Though I’m surprised this isn’t 52 minutes long 😂
How can you compare such different processors? Compare processors on the same process node.
So MSI Claw 8 AI over ROG Ally X?
I said this before and a lot of people joked about it... MSI went with Intel so they could get their hands on Lunar Lake and beyond; it's not Meteor Lake they care about, hence they revealed a new handheld just 3 months after the Claw's release...
That's how business works; it's all in the contract, and MSI managed to get early development chips from Intel because of that. Now, if Lunar Lake becomes a huge success, no one will be laughing anymore
Do you think we will see the snapdragon stuff going into the gaming handhelds?
Yes, but the chip has to emulate x86 Windows software, so it takes a big power hit just from emulation; not good for pushing fps. The Snapdragon chip is currently for work programs, where it will enhance battery life, rather than for pushing polygons.
HELL YEA INTEL HANDHELDS FOR THE WIN
Go Phawx! go! I love your content!
I can't see it, sorry 😳
Can you give some comments on an Xbox One S Portable? I think this is more likely to be possible in the near term. Possibly even a One X docked mode with a custom memory configuration.
CPU power, wow
Auto TDP
> Intel first showing finally with a handheld chip
People seem to forget at this point, but Intel had Tiger Lake U in 2020-21, and it was really excellent for its time.
I could get it to show decent performance at 8W TDP, which was such a huge improvement over the previous Skylake generation, which couldn't get anywhere near that power budget.
It's unfortunate that it got mauled so soon after its introduction by Apple's M1. And as a 4-core design, it could not compete at max performance with AMD's 8-core CPUs.
But when it comes to handheld gaming, this thing held its own, as long as AMD was showing up with Vega-refresh iGPUs, which lasted for a while (up to the Steam Deck in 2022).
It's a pity Intel did not really follow up on that one, possibly as a consequence of losing Apple's business,
or maybe it was the 2+8 Alder Lake design that was supposed to follow up but ended up unimpressive.
What's up man, as always, great video
Hope we get a new ~6W CPU (E-core only?), and a new handheld with it ($250-350 max with shipping included), with a form factor like the Win600 or maybe Switch Lite-ish with a little bigger screen.
I don't trust Intel benchmarks, and Intel's GPUs just don't stand up to RDNA. I hope they prove me wrong so prices come down. You know this is gonna be costly.
Thanks to PCWorld? Nothing against them, but you're talking like there is nothing besides YouTube!
Clever power management won't matter much if Intel's drivers still give far more problems than AMD's.
An fps cap doesn't change power usage for AMD at all?
Strange. I don't have experience with Hawk Point, but I'm pretty sure setting an fps cap (30 or 60 depending on the game, or just enabling vsync) on the 6800U does have an effect on power usage, at least for some games.
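The reason a cap usually helps: uncapped, the GPU races ahead at full clocks rendering frames beyond the refresh rate; capped, it can drop clocks and voltage while frames wait, and since power scales worse than linearly with clocks, energy per displayed frame falls. The numbers below are purely illustrative assumptions, not measurements from any device:

```python
# Illustrative numbers only (not measurements): an uncapped run vs a
# 60 fps cap on the same hypothetical game.
uncapped_fps, uncapped_w = 85, 15.0  # GPU races ahead at full clocks
capped_fps, capped_w = 60, 9.0       # clocks/voltage drop between frames

# Energy per displayed frame (joules) is the fair efficiency metric.
j_per_frame_uncapped = uncapped_w / uncapped_fps  # ~0.176 J/frame
j_per_frame_capped = capped_w / capped_fps        # 0.150 J/frame
print(j_per_frame_capped < j_per_frame_uncapped)  # capped run wins
```

If a cap genuinely changes nothing on a given chip, the likely explanation is that it is already power-limited rather than fps-limited at that TDP, so there is no slack for the cap to reclaim.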
Now, AMD, do your magic and make proper auto TDP support like Intel.
AutoTDP already exists for AMD handhelds, just not first-party, but it works stellar and stable.
I like that Intel improved Lunar Lake's efficiency so much that it surpasses AMD's 8000 series, but it's still unknown whether Strix Point is better than this, and it probably is, so what are we really talking about?
Lunar Lake 200V vs AMD's AI 370 Strix Point is exciting for handhelds
I think someone needs to make a handheld with the new snapdragon elite cpu for battery life, which will be king.
I will never buy an Intel handheld with a crappy iGPU. Drivers matter
Auto-TDP isn't a big enough thing to make me leave AMD and go with terrible Intel drivers.
I'm more interested in the battlemage igpu rather than efficiency for lunar lake
Yes, but look at the video of Lunar Lake; it looks crappy, struggling
Intel using TSMC 3nm vs AMD using 5nm already tells you which one to buy 😅
Intel, of course. It's a beast, far ahead of Strix Point. You'll see soon
I hope Intel learned a lot from Iris Xe, because it was disappointing, but it had to start somewhere.
I won’t lie, I couldn’t figure out what I was supposed to look for lol
APU is still Intel's biggest challenge
It might be a major improvement over MTL, but that will probably put it on par with, or slightly above, AMD's Phoenix, which most probably costs less than half as much to make. Still, this is more than welcome, and I'm glad that the new Claw is a real viable product.
well MSI should have waited... the claw would have had a chance...
Honestly I’m interested in a potential snapdragon XElite handheld
That chip will be more powerful than the Z1E by a lot
Wonder if we’d be able to put this in our legion go with modding
Lunar lake can run as low as 2 Watt TDP
Yeah, the Z1 Extreme is too ass when it comes to power. Like, it only goes down to 7-8 watts when browsing the web.
It better
Let's hope this also fixes the random stutters in literally every game
You mean the MSI Claw 8 AI? best name ever
intel the canned benchmark king.
15 watts is the interesting thing
A new auto TDP?
RAM is using more power than the display and other components?
always has been
Oh this is EXCITING
I'm glad this guy never replies to me, because... I've been translating his videos into Spanish and uploading them to YouTube Mexico. Sorry bro, but I kept telling you to admit that the Steam Deck is underpowered and that Steam Deck fanboys are just power hungry but don't want to admit it. Bill and Rich from NerdNest and Fan The Deck are the biggest Steam Deck fanboys; they need to get off the Valve sack. The GO is the BEST handheld out of all 3. "Hello everyone, my name is The Phawx"
Intel should've used two HBM memory stacks instead of two 8533MT/s LPDDR5X modules. The higher bandwidth would make it perform at GDDR levels of VRAM speed. The iGPU would benefit greatly from that change alone.
Price
All they need to be is better than amd.
HBM is insanely expensive. The handheld's price would skyrocket to $1,500
Price + power usage
@@bulletpunch9317 Somehow I doubt they will be better than AMD, remains to be seen though.
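The back-of-envelope behind the HBM suggestion (and the cost replies) looks like this. The LPDDR5X figure uses the 8533 MT/s, two-module 128-bit configuration mentioned in the thread; the HBM2e figures are typical published per-stack numbers (1024-bit bus at 3.2 GT/s per pin), so treat both as rough theoretical peaks, not measured bandwidth:

```python
# Peak theoretical bandwidth = transfer rate * bus width in bytes.
def peak_gb_s(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

lpddr5x = peak_gb_s(8533, 128)   # two x64 LPDDR5X-8533 packages
hbm2e = peak_gb_s(3200, 2048)    # two 1024-bit HBM2e stacks @ 3.2 GT/s
print(round(lpddr5x, 1), round(hbm2e, 1))  # -> 136.5 819.2
```

Roughly a 6x gap, which is why the iGPU would love it; the replies about cost explain why no handheld ships it.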
Isn't auto-TDP active on the Asus ROG Ally, and soon coming to the Lenovo Legion Go?
Yes, but it maintains 25 to 30 watts
@@robone9978 thanks for the clarification here.
❤❤❤🎉
Lunar Lake says 60fps... but it doesn't look like 60fps. Not even close.
What do you expect from a 24 fps video capture?
@@K543 But both are on screen, seemingly shot via camera/phone, probably the same camera setup. Watch from 3:30 onward. I don't trust that fps counter on the right. What's up with that, Phawx?
@@alpaykasal2902 one is zoomed in, one is further away
Intel Arc has really come a long way, it has been a challenging journey, I'm glad it seems to be paying off.
I love the competition.
Starting at about 3:30 - why is the Lunar Lake images so stuttery, where the AMD one is smooth despite them both being 60fps?
The captured video is 30fps. I checked PCWorld's video, and it's a 30fps video. So that's why it looks that way.
There are way too many possibilities this time with all the architectural changes, from Memory Side Cache latency, the e-cores getting separated from the p-cores to another block (the "smartphone stutter"), auto-TDP creating bubbles where cores at lowered TDP can't clear instructions previously scheduled at a higher TDP fast enough, to any number of Xe2 architectural changes. Whatever it is, hopefully it's fixable in software by disabling the e-cores or auto-TDP or something. Intel still has a few months to figure it all out.
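Whatever the cause turns out to be, averages won't show whether later drivers fix it; stutter lives in the frametime tails, which is why benchmark tools report "1% lows". A small sketch of that metric on synthetic data (the frametime lists are made up to illustrate the point):

```python
# Stutter lives in frametime tails, not in average fps, so quantify it
# the way benchmark tools do: "1% low" fps over the slowest frames.

def one_percent_low_fps(frametimes_ms):
    """Average fps over the slowest 1% of frames."""
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)
    avg_worst_ms = sum(worst[:n]) / n
    return 1000.0 / avg_worst_ms

# Two runs with a similar ~60 fps average: one smooth, one with hitches.
smooth = [16.7] * 1000
stuttery = [15.0] * 990 + [100.0] * 10  # ten 100 ms hitches

# Averages look alike, but the 1% lows expose the stutter (~59.9 vs 10.0).
print(round(one_percent_low_fps(smooth), 1),
      round(one_percent_low_fps(stuttery), 1))
```

Capturing real frametimes per run (e.g. from an overlay's CSV log) and comparing 1% lows across driver versions would separate a capture artifact from genuine frame-pacing problems.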
No Thunderbolt 5 support is crazy. For someone who is holding out for a laptop with TB5, this is such a disappointment. OCuLink will be the path this year and next year for laptops with eGPUs.
Are you jetlagged? Did you bring back any communicable diseases from Taiwan?
Strix Point is not competition for Lunar Lake, Kraken Point is, until Kraken is out competition is Hawk Point.
Strix Halo should destroy this but we shall see
@@baysidejr nope
You mean Strix Point. And yes, in recently leaked benchmarks it destroys it. Outperforms it by more than 33%
@@dimitarivanov4719 no, I mean Strix Halo.
Strix Halo will support 40, 32, 24, and 20 CU variants for the GPU. And the GPU is apparently an improved architecture, something like RDNA 3.5 or RDNA 4
@@Jerome-iwnl Strix Halo with 40CUs is 120W. The only handheld option with Strix Halo is LP version with 20CUs
@@dimitarivanov4719 Exactly, I want that one. They're probably gonna call it Z2 Extreme or something. This is gonna be optimized for low wattage.
Hope it happens. Until then, I'm happy with my 6800u handheld.
@THE Phawx the mad scientist of gaming ⌨️ 🔭 🔬 ⚗️ 💻 🧪 🦾🧑🚀🥽🥼💡🔎📊🩻📡📚🎮