Tbf, if it's competing against the MacBook Pro it's decently priced (compared to Apple, since it could be upgradable), and it also has a touchscreen. The only things that would make it bad are if the battery life is even 2 hours shorter, or if it dies in sleep mode by morning; then no one will buy it.
@animecutscenes3414 this AI beast is a bargain. I know someone in our field who is planning to buy as many as he can get his hands on, hopefully at least a dozen. For the kind of AI performance these things are giving and the cost, they're practically free, and I'm pretty sure no gamer is going to be able to get their hands on these for at least a year, because we'll be buying up as many as we can
I was impressed until I realized they were comparing a 16-core/32-thread CPU to a 10-core CPU with no hyperthreading... Maybe that's why they didn't add the M4 Max chip
The M4 Pro is 14 cores, the M4 Max is 1.5x the price, and Apple Silicon is ARM, which doesn't (and probably shouldn't) have simultaneous multithreading.
The true value of MacOS and Apple Silicon is not just the performance of the chips, but the efficiency, reliability, overall ecosystem of apps and devices, all creating a platform (Apple) that all works beautifully together. No other tech ecosystem can offer it. The combination of Windows and Android is nowhere near as good.
This is for AI, not for budget ballers. I know people in the LLM community who are planning to buy for business purposes, literally a dozen or more depending on how many they can get. I don't think gamers will even remotely be able to get these for at least a year or so, because they'll all be bought up: at $2000 they're practically free compared to all the other chips that can accommodate such large LLMs
Can you do a battery test, both heavy and average usage? There's a review saying the previous models last only 2 hours, and the portability doesn't make sense if we can only use it for a short amount of time.
I really love the new design; making the slash glass design vertical on the back of the device makes it look much better... and I can't wait to see the full review of the final product
This thing is really impressive. I want the MBP-like laptop that has native x86 support, 128GB, and a keyboard and trackpad that are centered (not off-center like PC makers often like to ruin laptops with).
I'm much more interested in getting this chip in a standard laptop form factor. Why would I need it in a tablet? I know HP announced something but they don't even have a price or release date. Kind of looks like it will be expensive though. Why are there so few models using this...
@@arya_amg I'm sure it will eventually, it's just frustrating that so few products have actually been announced. It's literally just this and the HP. The HP one is a laptop but it seems to really be going after the "AI workstation" market, talking about stuff like 128GB of RAM configurations and will likely be very expensive...
@@KellyWu04 No it won't be. A unified SoC is cheaper: cheaper power delivery, a smaller and simpler motherboard, a less complex heatsink. It is more of an Nvidia killer than an M4 killer. Performance is near or above a 5060, so no one will use anything below that.
The M4 Max costs a lot more and also consumes more power. The crazy thing is that in its 60W CPU mode it'll be within 3% or less of the Max in most workloads.
@@mikeowentaylor They should have shown it just for reference then and said "look, our chip is just as good or better for less money and lower power". That they didn't probably means it isn't. We'll see when they're in the hands of customers; you should always take manufacturers' numbers with a grain of salt, regardless of who they are.
AMD Customers: "Ryzen AI Max Plus 395?! Your branding couldn't make less sense if you tried!" AMD: "Strix Halo" AMD Customers: "STRIX ISN'T EVEN YOUR BRAND!"
First and foremost you should clarify for your viewers that this chip won't beat the M4's single-core performance or the whole SoC's combined CPU & GPU performance per watt. Apple's ARM-based M4 is the most efficient piece of silicon on the consumer market, especially when it comes to performance per watt, period! This new chip from AMD can only compete in the multicore/multithread and GPU benchmarks, where it shows excellent efficiency as well, but all at the cost of extra power and faster battery drain. 1:25 speaks completely for itself and confirms the unmatched efficiency of the M4 chip, which is impossible (at least for now) to challenge with any x86 CPU/SoC. For comparison, the M4 Pro, meaning the whole SoC with CPU & GPU plus everything else, pulls at most 70 watts at maximum load! Also let's not forget that the M4, like previous Mx chips, performs the same with or without being plugged into power, which is something almost impossible for x86 chips to achieve (Lunar Lake is close). I am sure this "bad boy" Strix Halo will sooner rather than later thermal throttle in such a small form factor, no matter how awesome its vapor chambers are or how good its fans and liquid metal are. Cool design and a well done presentation won't hide basic and fundamental laws of physics. That being said, this is certainly one of the few x86 APUs/SoCs capable of keeping up with the excellent M4 (Pro) chip, at least in these specific benchmark comparisons, so still great job by AMD! APUs/SoCs are certainly AMD's great advantage, and delivering maximum performance per watt should be their main goal. Besides that they should also think about RISC architectures like RISC-V or ARM, as it's very hard for CISC x86 to compete when it comes to efficiency. I love x86 for its great compatibility, absolute freedom and legacy with vast gaming possibilities, but when it comes to performance per watt RISC is unbeatable and loses only to ASICs, which in a way Apple's Mx chips are 😉
@@mashirokobato5509 HAHA, you must be smoking some bad sh*t to compare this chip to the current best-of-the-best M4 Max, which is pure workstation-class silicon that even the largest 16" MacBook Pro has trouble cooling down and keeping from thermal throttling 👎 Don't smoke that stuff, it makes you hallucinate 😁
5:35 yeah it's sus, how were they even running a 70B model on a 24GB video card? That's literally impossible even at Q2_K. Edit: Apparently the testing was done with Llama 3.1 Nemotron 70B at Q4 quantization, which takes ±41GB in memory. So the testing was done with shared memory, and that usually yields ±3 tokens/s, so we're looking at ±6 tokens/s on the new Ryzen. For comparison, a 4090 running a model that fits its VRAM (32B) is approximately 10x faster.
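As a rough sanity check on that ±41GB figure, here's the back-of-the-envelope math; the ~4.5 bits per weight and the KV-cache allowance are assumptions for a typical Q4_K-style GGUF, not exact numbers for the Nemotron build shown on stage.

```python
# Sketch: estimated memory footprint of a Q4-quantized 70B model (assumed values).
params = 70e9                  # ~70 billion weights
bits_per_weight = 4.5          # Q4_K-style quants average a bit over 4 bits/weight
weights_gb = params * bits_per_weight / 8 / 1e9
kv_cache_gb = 2                # assumed: a couple of GB of KV cache at modest context
print(f"weights ~ {weights_gb:.0f} GB, total ~ {weights_gb + kv_cache_gb:.0f} GB")
# -> weights ~ 39 GB, total ~ 41 GB: far beyond a 24 GB card, so it spills into
#    system RAM, which is exactly where a big unified-memory pool helps.
```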
The AMD naming is confusing, but they're going after Nvidia with the xx70-style numbering and Apple with the AI Max and Max+ branding. Maybe it's weird for the American market, but for the rest of the world it's common sense.
I'd be interested in seeing how this does at super low power limits, I've been enjoying dropping power limits on my laptop and I want to see how it'll do in those cases
Can someone explain to me why AMD and NVIDIA use such legacy naming systems? Why haven't they caught on to the trend of naming things simply, like Model Y, X, S, 3 at Tesla?
AMD Ryzen - Model A1, A2, B1, B2, C1, C2
Apple - M1, M2, M3, M4
I want to see a competitor to the M4 Pro in an M4 Pro-like form factor. As much as I like this tablet, it's an ROG; its gamer aesthetics are just not my thing.
Sorry I can't help but make this one last comment. "Unified memory" is Apple's marketing term, as is "Max." Why is AMD copying Apple's marketing terms? Just make up your own - your chip is good enough now that you don't need to follow anyone else. Copying Apple's marketing feels like it cheapens the awesome chip you created (IMHO of course).
The part about “unified memory” isn’t correct. The term unified memory has at least been used before by NVIDIA, although in a different context. I’ve seen references to unified memory architecture even in some Intel based systems (e.g. Aurora supercomputer nodes have a unified memory architecture). As I see it now, it’s more of a technology term than a marketing term.
Thanks for the tour, Marquis. This is what I'm looking for, a creation/gaming laptop that converts into a tablet. I'm not down with 32 gigs of RAM soldered to the motherboard; my 7-year-old 2-in-1 has 32 gigs, so I need to wait for the 64 gig and higher release. When you do a full review, I'd love to see Android emulated on this hardware
It's mind-bendingly hilarious that AMD renamed their CPUs from Ryzen 9 xxxx to Ryzen AI MAX and talked about how it does "AI" with 50 TOPS, only for Nvidia to show up hours later with Project Digits, an actual AI supercomputer, which is just 800-900$. (32GB vs 128GB)
Apple killer my ass, it's not even beating the 14-core M4 Pro in MT and it's way slower in ST. Not even close, buddy. OH AND DON'T YOU EVER AGAIN USE R23 as an ARM64 vs x86 comparison, there is no amount of trying that will make AMD and Intel faster, they aren't, Apple Silicon is the fastest! What a clown, still trying with R23; try SPEC2017 or the still x86-biased CB2024.
The power packed into this thing aside, the port selection is really impressive imho considering the form factor, when many full-size laptops don't even come with such a solid selection.
Although I'm learning microsoldering, I really, really hope that modular components, like dedicated GPUs and motherboards with CPU sockets for example, will keep existing into the distant future
I like your videos and style of presentation. That said, at about 10 minutes in you talk about creatives and this being a good competitor to the MacBook Pro 14 inch; not at 500 nits it isn't. As we say in the UK, it's a country mile away.
Apologies for the error at 0:20. I meant to say 16 cores and 32 threads.
no worries! We all had a good ol' laugh :)
'pounds'? 'inches'? On a global content platform in 2025? Come on now, it is not the dark ages anymore, use standard units of measurement.
@@guderian557 - the weight of pure gold is _THE ONLY REAL MEASUREMENT,_ and C, the speed of "light" or data transmitting your verified BTC!
You must be dreaming!
An x86 is hardly going to be better than an ARM in tablets.
ARM is extremely energy efficient.
An ARM doesn't need a cooler.
This AMD processor will be limited in clock speed.
the fact that a gaming and production beast like this was marketed only as an AI product is nuts
How dare you insult AMD's brilliant marketing team, it's obvious that consumers buy products purely based on their name, and not price, performance or features. This seemingly meaningless jargon in a product's name, such as AI, Max+, Pro, XT and XTX, has clearly been shown to increase sales volumes. Look at how well the AMD Radeon GPUs have been selling and their market share, it's clear proof it's working.
You are right, they took one of a few angles when they could take like 3 or four! I can see it being great for CAD and all sorts. Strangely with AI being pretty much all A and little to no I, it's just more useless and mostly meaningless marketing BS.
I don't think it's marketed as strictly AI since it's in a Republic of Gamers product.
That's tech in general now.
Overrated AI😉
I am BEGGING AMD to unf*ck their mobile chip naming scheme. I know OEMs pushed them to do the "multiple recycled nodes in a single generation" thing for marketing's sake, but for god's sake, it feels like you need a PhD to actually know what architecture is in the processor of the laptop you're buying.
What a baby. Get over it dude, you're an adult... I guess in theory
it really doesn't matter what it's called. The main thing is price/performance and perf/watt. It takes literally 2 seconds to google a chip and figure out which architecture it's running..
and im begging AMD to actually put this sort of hardware on desktop sockets
People love complaining about unimportant things or jumping on the bandwagon about meaningless topics when the user experience, performance improvements, thermals, etc. are what's important. Way more than a company's naming decision.
I don't have a PhD and still I'm educated enough to know what I'm purchasing.
wow, 16 threads with 32 cores? this is some new tech right here
return of bulldozer lol
Half threads
"Hypo"-Threading unlocked
Meta threads for the imaginary workloads, to accelerate dividing imaginary numbers 😂
In an APU with this level of GPU? Yeah, this is a breakthrough in x86-64 chips.
My main problem with all the "MacBook killers" is that most of them make Apple look like a good deal. Most M3 MacBook Air killers cost 200-400 euros more, and even the ROG Flow Z13 costs 200 euros more than the M4 Pro MacBook Pro. I hope they really bring down their prices.
This.
This AMD mega chip won't be a Macbook killer if it costs nearly twice as much as the entry level MacBooks and MacBook Air, while still needing a dGPU to accelerate workloads even further.
Not yet. But it’s close.
They drop in price quite fast though compared to apple products
@rinsenpai135
The Ryzen 395 is a premium chip for premium laptops, it is not meant to compete with a $1k Macbook Air, it is way more powerful.
It will compete with the $2k MacBook Pro with the M4 Pro, which has similar performance.
Well nobody said they would be cheaper, just better
Bro using the old Samsung Galaxy Note 3 Wallpaper. Thats epic!
I thought that looked familiar!
came here to look for the OGs who recognized it in the comments
No it's not
@@maxweinbach3996 It's not what? Which part of his comment are you disagreeing with?
ok, I am not crazy then. I wonder where I can find it
16 threads, 32 cores? In the voiceover at 0:20.
mega core architecture! /s
It's the new dual core per thread architecture for dividing the throughput by 2.
Genius design
AMD Bulldozer/Piledriver Vietnam flashbacks
Hyperthreading with a new name@@hupekyser
Came here to say that, too :)
I think this was one of the most under-reported products at CES! This is a godsend for many researchers and developers, it's a game changer.
I hope so. I'd like a good ARM laptop with Linux for my coding work.
Finally an all-AMD gaming laptop in a tablet form factor, PERFECT for Linux!
Except for the price. I really hope that when it goes into a mini PC or another product the price is going to be reasonable, it has so much potential
@@darkness-j6f Yes, I'm also looking for a mini PC form factor with 128GB of RAM to run Linux.
SteamOS bro, it would double as a travel combo for SteamOS gaming, and with that external dock and a 5090 at home with an OLED TV... just link up a wireless controller and it's a living room 4K gaming PC too.
It's neat; the price will not be neat.
This is so close for my wants. If we get OLED and pen support, it would be nearly perfect.
Linux sucks unless you’re only working from a web browser.
Effectively a quad-channel 8000 MT/s, TDP-limited 9950X. It should beat the 9950X in some use cases with ~2.6x the memory bandwidth, especially if we're talking 120W vs 250W.
Running local AI will be so much easier
I would say it's less a TDP-limited 9950X and more like an OC'd one; previous Zen 4 laptop chips already came within 90% of the equivalent desktop chips while using like 70% of the power
9950x draws around 220 watts max, unless overclocked.
@@Son37Lumiere 200w total package power
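For anyone who wants to check the ~2.6x bandwidth figure from the comment above, the arithmetic is straightforward; the 256-bit LPDDR5X-8000 bus for Strix Halo and a dual-channel DDR5-5600/6000 desktop setup are the commonly quoted configurations, so treat them as assumptions.

```python
# Peak theoretical bandwidth = bus width in bytes * transfer rate in MT/s
strix_halo_gbs = (256 / 8) * 8000 / 1000   # 256-bit LPDDR5X-8000 -> 256.0 GB/s
desktop_gbs    = (128 / 8) * 6000 / 1000   # dual-channel (128-bit) DDR5-6000 -> 96.0 GB/s
print(strix_halo_gbs / desktop_gbs)        # ~2.7x; with DDR5-5600 it's closer to 2.9x
```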
This is the processor that made mobile PCs interesting to me again. And this device has me excited, since at least on their website they are going to have a 128GB version, which means it is a viable mobile workstation. Whether the AI performance is misleading or not depends on how you look at it. I've often run into GPU VRAM limitations on my 4090 when using AI to train on my own art for previs, so yeah, even if it's slower per step, the fact that it won't be memory bound means it will be faster overall, which is a good thing. Also, I think the microSD slot is fine. You can get a microSD to full-size SD adapter for $10-20 while maintaining the speed and use it to transfer from a camera. Yeah, it's a bit more work, but it's not the end of the world.
Both Apple and AMD seem to have left Intel in the dust. If you had told people 6-7 years ago that Intel wasn't even going to be the second-best CPU designer in a couple of years, most wouldn't have believed you.
Don't forget Qualcomm. They have premium, performance-focused processors for laptops that are competing with the Apple M4
Releasing this with only 32GB of memory is silly. It would probably cost them $50 to bump it to 64 and make it much more useful for developers who won't mind dropping $2500-$3000 on something like this.
32gb is usually fine
But for some workloads it may be too little
Your humble voice made me subscribe to this channel. Continue your work, I've seen your videos many times.
I hate the 6900HS + RTX 3050 Flow model; a lot of those laptops ended up with bad batteries in less than a year. I hope ASUS can do a better job in that department
There's literally no point, the 6900HS APU is close to a 35W laptop 3050
Asus is really bad with their service also.
is it that bad? I love my 5900hx with 6800m
@@micbanand very good
It's the fact the iGPUs are not the same; AMD did something very naughty by not giving equivalent iGPUs in all processors, whereas competitors like Intel do.
When you test this with DaVinci Resolve, please check whether it has hardware acceleration for H.265 10-bit 4:2:2 color. So many cameras now output this as their highest quality codec, and none of AMD's CPUs accelerate it. All of Intel and Apple's chips do accelerate it. Without acceleration, h.265 10-bit 4:2:2 edits poorly.
Is it because of 4:2:2 instead of 4:2:0?
All you'd have to do is enable GPU acceleration. I edit with a 5800X3D and a 7900XTX, and I edit a lot of Sony A1 4K H.265 4:2:2 10-bit footage. It edits like a dream, no proxy needed. So with the Ryzen AI Max having such a large GPU, why are you concerned with the CPU?
As said before, is it because in Intel's and Apple's case it's the iGPU that is doing that work 😅?
@@MKR3238 Yes
@@puertadlm163 Because H.265 is so heavily compressed that even the beefiest GPUs can stumble when editing 4K, 10-bit, 4:2:2. Let alone multiple streams of it.
Even an Nvidia 4090 system can't handle those types of files as well as a tiny Intel or Apple powered laptop that has actual hardware accelerated H.265 4:2:2 encode and decode.
Hardware encode and decode make a MASSIVE difference.
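If you want to check whether your own footage actually falls into this worst-case category before blaming the hardware, ffprobe will report the codec and chroma subsampling. A minimal sketch, assuming ffprobe is on your PATH; the filename is a placeholder.

```python
# Inspect a clip's codec, profile and pixel format with ffprobe.
import json
import subprocess

def video_stream_info(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,pix_fmt",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["streams"][0]

print(video_stream_info("camera_clip.mp4"))  # placeholder filename
# e.g. {'codec_name': 'hevc', 'profile': 'Main 4:2:2 10', 'pix_fmt': 'yuv422p10le'}
# hevc + yuv422p10le is exactly the H.265 10-bit 4:2:2 case discussed above.
```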
11:20 These are clearly cherry-picked benchmarks; Cinebench R23 does not have an Apple Silicon native version, it has to be emulated through Rosetta. Why aren't you using Cinebench 2024?
Cause it's AMD and not an independent review (yet)
@@xXDeltaXxwhotookit Fair enough, then this comment is for AMD
@@xXDeltaXxwhotookit actually it's an ASUS-sponsored review
The real advantage of Apple over everyone else is that it's not only offering incredible custom chips and hardware but also an entire OS and ecosystem around it all, complete with an App Store, cloud services and the full range of complementary accessories.
All that confidence and then they didn't benchmark it against the M4 Max. Why tho?
Show us what you got!
Because M4 Max is $1000 more
probably he's tied to an embargo
@@janickpauwels3792 You mean laptops with the Max are $1000 more. No idea what the chip costs; AMD also did not tell people what they are charging OEMs for their chips
@@hishnash well thats effectively what the price is then eh?
The Flow Z13 is limited apparently to 80 watts. The Strix Halo Max+ 395 supports up to 120 watts.
That means the Flow is unlikely to show off that monster APU's peak performance...
I want to see what that can do for Houdini and Nuke.
3:33
It still has great potential to match the RTX 4070 at full power if the TDP is set to 120 watts; going from 40 watts to 70 watts is quite a big increase, especially if it is maxed out at 120 watts
@@muhammadikhwannurrosyidin8371 We will probably have to wait until we see a ProArt P16 class system for that, but you are probably correct.
The question is why are they comparing a 30W chip to ones that run at 120W? The M4 isn't a 50W CPU, it's a 50W chip.
This.
The M4/Pro/Max are currently the best chipsets because they are the fastest, quietest, and last the longest for mobile devices.
He hasn't really shown how it actually compares properly. Like an apples to apples, by plotting the performance, battery life, etc etc.
Like you said, a 40W CPU is NOT the same as a 40W Chipset. Especially when that CPU wants to ramp up to 60W and we have some iGPU/NPU cores also begging for energy. That's not even mentioning the better union of software and drivers that a singular Apple supports compared to Microsoft/AMD/OEM dynamic.
I do think AMD narrowed the distance, only for Apple to pull further ahead. We probably need another full-generation (+3nm) to see the gaps close, so another 1.5-2.5 years wait.
@ there’s a reason why they used the M4 Pro and not the Max. That’s because the Max levels the new AMD chip whilst still using less power.
@@lordv1le859 You're probably right. But it's not journalistically honest. Most of us watched this video to see how AMD did catch up to Apple, but this does not seem to be the case.
@lordv1le859
First, the wattage figures of the AMD chips include the whole package i assume, not the CPU only. It is a 120 Watts limit for the whole package.
Second, the cheapest M4 Max laptop is $4k, while the Ryzen laptops will start at $2k.
It is like saying that a $1k iPhone matches a $500 Android in speed; it is a pointless comparison.
@@rj7250a the CPU alone on the AMD chips eclipses the die TDP of an M4 Pro. At full bore the entire die of the AMD chip maxes out at 120W. The M4 14-core is 31W all out (22 CPU, 9 GPU), and I can't find the 16-core variant but it's under 50W, likely under 40. The minimum configurable TDP of the AI Max + 1.21 Jigowatts is 45W, capping out at 120W. Which side of that range do you think they got their numbers from?
Which CPU and in what configuration are those 2K laptops? Also the M4 Max base configuration is more like 3K
I want to see this utilized by Minisforum.
Or Beelink
Their 8060 iGPU version is coming soon, because that is a good iGPU
3:30
Yeah, the performance is similar to the RTX 4070 at 71W, so I can't wait for AMD to release Strix Halo in the form of a desktop APU (G series), for example a Ryzen AI Max+ 395G, so it can fit into the ASRock DeskMini X600
Hopefully Apple gets some competitors, but man, when I got my first MacBook, an M2 Max MacBook Pro, I was blown away: all-day battery life, doesn't die when it's closed, and renders videos twice as fast as my 5800X3D or even more
I was very skeptical of getting an M4 Pro Macbook Pro because I thought it would be worse than my PC but Apple def cooked, and now pushing to add gaming to macs I feel like I won't even need my old amd laptop while travelling very soon.
@ the gaming thing isn't really true, they've been saying that for ages unfortunately
@@occasionalshredder The new Game Porting Toolkit 2 is doing great for me right now with some bugs here and there
@@occasionalshredder hopefully Toolkit 3 might launch this year with better support and fewer bugs
This new AMD chip will blow Apple out of the water in graphic design and gaming, and if you don't like Windows you can install Linux on it @@rahulnishadxd
It's interesting that they compare with the 14-core M4 Pro but not the 16-core M4 Max. Lack of confidence from AMD?
No, m4 pro just lacks the $1000 apple tax that apple slaps on their m4 max.
@@necroboxic8526 that would make them look even better. Would it not? There's a reason they didn't, and it's not because of price.
Not really? They compared it to the competitors product their product compares well against. That's like... Marketing 101.
This would be huge for on the go AI workloads if asus DOES NOT literally charge 5-10x the market rate for memory like apple. That's all they have to do. Get the chip into a thin 16" macbook competitor and give users 128GB *without being greedy*. The only issue would be software compatibility, but it's definitely solvable over the long term. And simply giving users more memory for a reasonable price would be a massive selling point.
Ya, without CUDA ai workloads are going to struggle for a while longer while the alternatives develop
I think I would use the NVIDIA digits superchip, which has 128GiB of unified memory
Chuck this in a zephyrus g 16
@@MrHamncheez Llama.cpp fully supports ROCM and Vulkan so running LLMs on AMD works great.
(Training is a different story though...)
5:45 FYI with a 256 bit memory bus, assuming it has support for both LPDDR5 and DDR5 (SODIMM or DIMM) there is a theoretical max memory capacity of 1TB assuming 2 dimms per channel and the 128GB DIMMs come to market at some point.
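Spelling out that 1TB ceiling, since every part of it is an assumption (DIMM support at all, two DIMMs per channel, and 128GB modules actually shipping):

```python
channels = 4            # a 256-bit bus is four 64-bit channels
dimms_per_channel = 2   # assumed
dimm_capacity_gb = 128  # assumed future modules
print(channels * dimms_per_channel * dimm_capacity_gb)  # 1024 GB, i.e. 1 TB
```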
Cool. Let us know when the actual review comes out.
What's the battery life like and also can it run bigger AI models fast enough ?
I'm not sure a device drawing 120w just from the SoC is really an "M4 killer".
It was competing at 10W less in that cinebench test. Wdym?
it has the option to run in a higher power mode, and it can beat the M4, is that what you're referring to? This is a tablet btw, how does the iPad Pro compete against this O_o
He mentioned the benchmark comparisons were done with similar power. If you go to 2 min 50 sec mark, you'll see the 40W AI Max is comparable to 50W M4 and the 60W AI Max outperforms the 50W M4.
This device can run Windows, Linux of any flavor, and emulated macOS. It is the most encompassing and modifiable machine, usable as a handheld, laptop, or docked system. Pretty much the end-game form factor for me.
I don't know why so many people are failing to understand this, but the SoC benchmarked here was capped at 55 watts. It can go up to 120W, which will provide far better performance. When you're plugged in and not limited by battery, it can go up to 120W for great performance, and when you're limited by battery, it will drop to lower power modes like 55W or even down to 40W for better battery life. I thought all of this would be obvious, but looking at the comment section, people clearly have not understood it.
What I like about this design, is that I don't have to worry about how hot my keyboard will be.
This is sooo efficient that the CPU alone draws the same power as a WHOLE M4 Pro chip. Not to mention the iGPU can be fed up to 60 watts.
Was running at 10W less mate in that cinebench test. Not to mention it's a tablet. Also it's still way more affordable and serviceable than a Mac. Also much more versatile.
yeah pay 3200 usd min for any system with a whole m4 pro chip 🤣 which gets owned by an equivalent priced min-maxed desktop anyways
Not even close M4 pro is just too weak
The M4 uses a way newer node; make this on the same node and see how different it would be
@@chinesesparrows you pay for what you get. And what you get is a chip that uses half the power of this AI Max 395 Plus BS
This is kind of something I could see myself using. Laptop use around the house, desktop gaming use at my desk, and a tablet for general browsing on the couch / bed.
@0:48 why is AMD comparing it to 12 and 14 core "Pro" parts, when Apple has an M4 Max variant with 16 cores? The Max is usually 20% faster in these workloads than even the 14-core Pro, so maybe that's why... They were scared to get smacked by reality. Real shady stuff...
Because a Macbook pro with M4 Max starts at $3200, that's why.
And one more thing: MacBooks cannot be used for gaming, while Windows can. And Indonesia will ban the distribution of the iPhone 16 and iPad, and maybe MacBooks, starting in 2024 because Apple is pushing for a 50-year tax holiday
cause price? apples to apples comparison should consider price? how is this shady lol
Bruh.. delete your comment now. You made yourself a fool 😂
@@janickpauwels3792 so the price disparity would make them look better. There's a reason they didn't include it, and it's not price.
Seems like everywhere I look, people say the M4 pro (14 core) has a TDP of 40-45 watts
while I see many different reviewers being able to push the M4 pro up to 46 watts
where did you get the 48w to 50w power rating?
It's not on the level of the M4 Pro 14c or M4 Max yet, but I believe AMD can pull it off in the next few years. It's gonna be an exciting upgrade in the next generations!
You don't have a clue
@CHutch-w2u I missed the part where the confusion comes from. This is the most bot name ive seen lol
@@nameless_stranger did you even watch the video?
0:47
Why didn't you compare this 16-core AMD SoC to the 16-core M4 Max? The charts showed the 14c one, which not only lacks 2 CPU cores but 8 GPU cores as well. I would love to see the real numbers.
When will you review the Thermalright Royal Pretor and Knight?
Love your voice! As a recent M4 Pro Mac mini buyer, it's great to have the one machine to which all the caveated others are compared, without any of the caveats, even the bottom power button, which I find genius. I've yet to dump my setup trying to pull it forward to shut it off via the back. It absolutely should not take 100 watts to power something Apple has proven it can do better with less than half that much. My local AI work screams! Can't wait to edit 8K video! Mostly I'm just saying, "I've used both platforms and there's clearly no comparison for what I need." It's good to see AMD stepping up, what with Intel failing. Two thumbs up on the quality of your video. You're really good at this! Keep rolling!
How does it compare performance-wise to the M4 Max with 128 GB of RAM?
The M4 max will have higher performance but is more expensive and not x86 compatible so no Linux. The performance is closer to M4 pro.
@@electrodacus multicore performance, single core it's barely beating a 5 year old m1.
@@electrodacus- Linux runs on ARM. The issue is that _particular_ SoC. Although, see: _Asahi Linux_ .
@@cacogenicist Yes Linux can in theory work on everything. But Linux is not supported by Apple the same way Intel, AMD and Nvidia support Linux.
Asahi Linux will not work on the M4, and even on older models it is not anywhere close to supporting all the SoC functionality.
@@electrodacus Yet. They're still more or less brand new relatively speaking and they're having to reverse engineer everything.
What naming scheme do you propose instead, specifically?
All the non-Mac laptops I’ve ever owned died due to issues related to overheating, so if they have a good cooling system they’re on the right track.
I’ve never had or known anyone who had a Mac laptop that had overheating issues, and their lifespan is significantly longer in my experience.
Some of the later Intel i9 MacBook Pro's ran kind of hot. They still managed to survive, but they weren't exactly quiet while doing it lol.
I was gonna get the M4 Pro MacBook Pro. Switching from windows for the first time. This year's macbooks have impressed me a lot. And now with the announcement of these AMD cpus, should i wait?
I'm only a third of the way through the video, but just looking at the 3d mark time spy test, there is a little foolery happening from either Asus or AMD. AMD said the gpu of this chip performs between a 4060 and 4070. That led me to believe a full powered 4060 or 4070, meaning running at at least 100w, not 70w. Don't get me wrong, it is still extremely impressive. It's just -- it's not equivalent to a 4060. That is like Apple saying the M4 Max is equivalent to a 5090 but not disclosing that the 5090 is running at 40w. It just makes the claim ridiculous, or at best, disingenuous.
I think we'll find way more issues with this once actual reviews come out. While gaming performance might be comparable to a power restricted 4060 as you said, it won't get near it in anything that can fit in the 4060s VRAM. 3D Applications, simulation software, even AI. Because they don't have any software support.
Sure, bigger LLMs will be faster simply because they can fit in memory, but then if you want to do that, get a MacBook. Their GPUs have better compatibility with most AI workloads and will be way faster than this.
I have a feeling a lot of people are going to be very disappointed by Strix Halo's real world performance. Especially for the price
@@_shreyash_anand For inference which is what most people will use this for (since training is really best done on datacenter GPUs, you can rent), the software support is pretty much there. ROCm supports all the tools which use llama.cpp and it even supports vLLM. So the software support at least for LLMs is there. They actually showed LM Studio (which uses llama.cpp backend) running a 70B model on stage at the CES.
Its probably more equivalent to the 4050 laptop gpu, maybe. We shall see. Good comment
@@_shreyash_anand can you game on a mac though?
meh, you get diminishing returns as you approach 100W anyways
In my opinion, you are paying the premium for the efficiency, not performance. But there are diminishing returns with going up in price also... (M4 Pro is also $2k when not on sale)
Also note that CPU power draw isn't included for the 4060/70, where it is for SH
Hard to say without standardized benchmark / same map, but Verge's article had fps in Helldivers, and it was in between 4060 and 4070 100W fps iirc, while presumably SH is running at 20W less, then maybe another 20W+ for the CPU
it will never be good value, but I mean the only other competition is literally Asus themselves with the previous Flow X13 / Z13 / ProArt PX13 with 4070 65W (I have videos of non-Z13 on my "channel"). Literally been 2 years and no one else wants to make dGPU 13" laptop. Or even Zephyrus G14 '23 competitor (not named Razer. I guess Apple is good alternative here if you don't need Windows)
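For anyone curious what the llama.cpp route mentioned a few replies up actually looks like, here's a hedged sketch using the llama-cpp-python bindings; the model filename is a placeholder, and the GPU offload only does anything if the package was built with ROCm/HIP or Vulkan support.

```python
# Sketch: running a quantized GGUF model with llama-cpp-python on a big unified-memory APU.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-70b-q4_k_m.gguf",  # placeholder path to a ~41 GB Q4 model
    n_gpu_layers=-1,   # offload every layer; no 24 GB VRAM wall on unified memory
    n_ctx=4096,
)
out = llm("Summarize the Strix Halo memory architecture in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```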
Over the years, we’ve been told of so many Apple product killers. None of them have succeeded in that. This won’t either.
These CPUs might be good as Mac mini competitors, but until we have battery life numbers, I don't see them as MacBook competitors at all
It has more battery life than a MacBook with the M4
@@anand.suralkar it has 10-hour battery life according to ASUS. That's not even close to the M4. Where are you getting your conclusion from?
@@anand.suralkar lol are u high?
I guess you could say that to some extent, but MacBooks are a complete non-starter for gaming and none have touchscreens, while this full PC works as a gaming machine plus tablet replacement. I've been tired of the iPad, basically an enlarged phone-app device lacking the software options available on a full desktop OS; this looks more versatile than a Surface to me (it can actually game hard, can run larger AI models, etc.)
As a video editor, this has always been my issue with Windows-based laptops. There is nothing (as of this moment) that compares with the MacBook's efficiency. Imagine rendering a 4K timeline away from the wall socket without a performance penalty.
When will reviews for ai max come out?
Quad-channel memory integrated with the GPU and CPU; it will be super fast in normal tasks. This may be a small breakthrough for Windows laptops, I can't wait for the first tests.
Well, Apple has had octa-channel memory in their laptops for a while (even the M1 Max had 400GB/s of bandwidth), and the differences are visible: even a fully stuffed Windows desktop freezes for a moment when you click on the timeline in a video editor, while even the cheapest MacBook just shows the next frame without any delay, even if you skipped 30 minutes ahead. Then again, on Windows this may just be buffering nonsense; it won't start playing the video until it has filled a 3-5 second buffer with data, in which case even faster memory won't show any difference.
when is the full review coming out then?
I went from an Intel MacBook Pro to a M1 Pro Max, this thing is an absolute beast, Photoshop, illustrator, Figma it eats it all up. Can't see myself getting anything else for a long long time!
Well, if you only need those types of workloads, a MacBook is more than enough; this type of CPU and system as a whole is aimed at even more demanding tasks.
Thanks for the overview video, and I think you are probably the only YouTuber who at least mentioned the unified memory access between the CPU and iGPU in Strix Halo. That is a pretty big thing and probably the first time in an x86 SoC with integrated graphics. Memory reads and writes are among the most latency-inducing tasks and significantly bottleneck CPU or iGPU performance. In all the past implementations, the shared system memory allocated to the iGPU uses a different memory address space, and whenever data is requested it has to go back and forth between the rest of system memory and the memory dedicated to the iGPU, resulting in twice the number of reads and writes for a single piece of work compared to a unified memory access (UMA) device, like Apple's M-series chips. AMD bringing UMA to its CPU and iGPU significantly decreases the number of memory operations and hence reduces the bottleneck.
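A rough way to picture that difference is sketched below in PyTorch: on a carve-out or discrete-memory design, every piece of data the GPU touches is an explicit copy into a separate address space and usually a copy back, while a UMA design can hand the GPU the same physical pages. The tensor size and device strings here are just illustrative, and how much zero-copy behavior frameworks actually expose on Strix Halo today is an open question.

```python
# Illustration only: explicit copies on a separate-address-space design.
import torch

x = torch.randn(8192, 8192)      # tensor in system RAM (~256 MB of fp32)

# Discrete GPU / carved-out iGPU path: copy into the GPU's address space,
# compute there, then copy the result back so the CPU can read it.
x_gpu = x.to("cuda")             # host -> device copy
y = (x_gpu @ x_gpu).to("cpu")    # compute, then device -> host copy

# On a true UMA device the runtime can in principle hand the GPU a pointer
# to the same physical pages instead, skipping both transfers.
```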
It's interesting how chip companies are trending toward unified memory for consumer AI workloads. Because of the slowing down of process nodes, shortening the distance between components seems necessary. Apple has M chips, Nvidia has Grace and Blackwell in Project Digits, AMD has Ryzen AI, and Qualcomm has always had it...
Does Intel even have a unified memory solution with a powerful GPU? They've only had low power iGPU unified memory for consumers.
That's not really the main reason they're transitioning to unified memory. We are simply at a place now where it's feasible to have memory either on the same interposer or close to the CPU using low-power LPDDR5. Consoles have been using unified memory for decades. LPDDR5 just happens to now be *good enough* for graphics workloads, which wasn't the case before, and it has the advantage of making a much cheaper system. But don't get it twisted: LPDDR5 is still a major bottleneck for, say, AI, due to its less-than-ideal bandwidth compared to GDDR.
It's just a very natural extension of silicon to slowly iterate toward unifying the entire compute platform. That wasn't possible until very recently because of density, chiplet and efficiency advancements. APUs are nothing new, integrating the PCH is nothing new, etc. What's new is that it's finally catching up to the software consumers want to run.
Their new Battlemage GPUs seem to be really good, since they can pull off a huge performance gain with way less hardware compared to their previous gen.
I hope they can scale it up or down depending on the application. The more competition, the better.
How are the speakers?
I would be most interested in the performance and battery life at a lower TDP setting on battery. 1080p 60 in Cyberpunk at 3+ hours would be incredible. Probably still a couple of generations off, but this is the point where things really get interesting, where compromise-free full AAA gaming is fully viable. Smaller handhelds are still several generations off.
Where is the Flow *X* 13? That's the more versatile one that you can use in both clamshell and tablet modes.
Just call it Ryzen 9 395
Oh, 03:38 and also *the real* Ryzen 4070 😂
My big question is why they didn't launch this with Thunderbolt 5 to take advantage of the new XG Mobile. That, and still not adding an OLED panel like the PZ13, are my two main issues with this. Otherwise it'd be perfect.
Dude, $2000 MSRP isn't bad at all. It could be $1800 in just a few months with discounts.
Don't say this shit, next year's AI Pro Max Plus FE 695+++ laptop will launch at $3k-plus for a 15% performance increase
Tbf, if it's competing against the MacBook Pro it's decently priced (compared to Apple, since it could be upgradable), and it also has a touchscreen. Now the only thing that would make it bad is if the battery life is even 2 hours less, or if it dies in sleep mode by morning; then no one will buy it.
@animecutscenes3414 This AI beast is a bargain; I know someone in our field who is planning to buy as many as he can get his hands on, hopefully at least a dozen.
For the kind of AI performance these things are giving at this cost, they're practically free, and I'm pretty sure no gamer is going to be able to get their hands on these for at least a year, because we'll be buying up as many as we can.
MacBook Pro is $1099. How is $2000 any good?
@@ken-oq9ig Base spec comparison
shouldn't their "MAX+" be compared to at least M4 Max instead of Pro? and was the benchmark run while the device was plugged in or on battery?
I was impressed until I realized they were comparing a 16-core/32-thread CPU vs a 10-core CPU with no hyperthreading...
Maybe that's why they didn't add the M4 Max chip
The M4 Pro is 14 cores, the M4 Max is 1.5x the price, and Apple Silicon is ARM, which doesn't (and probably shouldn't) have simultaneous multithreading.
Hyperthreading is necessary to compensate for the deficient x86 architecture. That's why ARM is so successful.
The true value of MacOS and Apple Silicon is not just the performance of the chips, but the efficiency, reliability, overall ecosystem of apps and devices, all creating a platform (Apple) that all works beautifully together. No other tech ecosystem can offer it.
The combination of Windows and Android is nowhere near as good.
It's not the chip, it's the OS (Windows)
I'm curious how Windows handles the unified memory. Does it change the CPU/GPU allocation dynamically?
that chip in a budget gaming laptop, I'm sold
This is for AI, not for budget ballers.
I know people in the LLM community who are planning to buy them for business purposes, literally a dozen or more depending on how many they can get.
I don't think gamers will even remotely be able to get these for at least a year or so, because they'll all be bought up; at $2000, the price makes it practically free vs all the other chips that can accommodate such large LLMs.
Can you do a battery test, both heavy and average usage? There's a review saying the previous models last for only 2 hours; the portability doesn't make sense if we can only use it for a short amount of time.
AMD 9800X3D Is crazy insane 🔥
I really love the new design; making the slash glass design vertical on the back of the device makes it look much better... and I can't wait to see the full review of the final product.
wow, every core has half the threads
peak technology
AMD hired this man
Hypothreading
This thing is really impressive. I want the MBP-like laptop that has native x86 support, 128GB, and a keyboard and trackpad that are centered (not off-center like PC makers often like to ruin laptops with).
Can this chip potentially be used for a PC handheld in 2025? Like a new Lenovo Legion Go, ROG Ally, Steam Deck, etc.?
I'm much more interested in getting this chip in a standard laptop form factor. Why would I need it in a tablet?
I know HP announced something, but they don't even have a price or release date. It kind of looks like it will be expensive, though. Why are there so few models using this...
If it runs in a tablet, just imagine how it can perform in a laptop!!
It's just a flex to put it in a tablet
It will be in laptops
@@arya_amg I'm sure it will eventually, it's just frustrating that so few products have actually been announced. It's literally just this and the HP. The HP one is a laptop but it seems to really be going after the "AI workstation" market, talking about stuff like 128GB of RAM configurations and will likely be very expensive...
It’s probably much cheaper for normal laptops to ship with a normal SoC and a discrete GPU.
@@KellyWu04 no it won't be
A unified SoC is cheaper: cheaper power delivery, a smaller and simpler motherboard, and a less complex heatsink.
It is more of an Nvidia killer than an M4 killer
Performance is near or above a 5060, so no one will use anything below that.
Title says M4, testing shows M4 Pro, most powerful laptop chip is actually M4 Max, and it’s nowhere to be seen in testing. Obviously ;)
M4 Max is in a completely different price league.
The M4 Max costs a lot more and also consumes more power. The crazy thing is that in its 60W CPU mode it'll be within 3% or less of the Max in most workloads.
@@mikeowentaylor They should have shown it just for reference then and said "look, our chip is just as good or better for less money and lower power." That they didn't probably means it isn't. We'll see when they're in the hands of customers; you should always take manufacturers' numbers with a grain of salt, regardless of who they are.
@@TalesOfWar they are not competing with max, how are you all so dumb
@@zorororonoraroro It's implied they're able to best them, while showing no numbers against them, which implies it clearly doesn't.
AMD Customers: "Ryzen AI Max Plus 395?! Your branding couldn't make less sense if you tried!"
AMD: "Strix Halo"
AMD Customers: "STRIX ISN'T EVEN YOUR BRAND!"
First and foremost, you should clarify for your viewers that this chip won't beat the M4's single-core performance or the combined CPU & GPU performance per watt of the whole SoC. Apple's ARM-based M4 is the most efficient piece of silicon on the consumer market, especially when it comes to performance per watt, period! This new chip from AMD can only compete in the multicore/multithread and GPU benchmarks, where it shows excellent efficiency as well, but all at the cost of extra power and faster battery drain. 1:25 speaks completely for itself and confirms the unmatched efficiency of the M4 chip, which is impossible (at least for now) to challenge with any x86 CPU/SoC. For comparison, the M4 Pro, meaning the whole SoC (CPU & GPU plus everything else), pulls at most around 70 watts at maximum load! Also, let's not forget that the M4, like previous Mx chips, performs the same with or without being plugged into power, which is something almost impossible for x86 chips to achieve (Lunar Lake is close). I am sure this "bad boy" Strix Halo will thermal throttle sooner rather than later in such a small form factor, no matter how awesome its vapor chambers are or how good its fans and liquid metal are. Cool design and a well-done presentation won't hide basic and fundamental laws of physics. That being said, this is certainly one of the few x86 APUs/SoCs capable of keeping up with the excellent M4 (Pro) chip, at least in this specific benchmark comparison, so still a great job by AMD! APUs/SoCs are certainly AMD's great advantage, and delivering maximum performance per watt should be their main goal. Besides that, they should also think about RISC architectures like RISC-V or ARM, as it's very hard for CISC x86 to compete when it comes to efficiency. I love x86 for its great compatibility, absolute freedom, and legacy with vast gaming possibilities, but when it comes to performance per watt RISC is unbeatable and loses only to ASICs, which in a way Apple's Mx chips are 😉
You're daydreaming; an M4 at full throttle can go up to 150 watts. What world are you smoking in?
@@mashirokobato5509 HAHA you must be smoking some bad sh*t by comparing this chip to the current best-of-the-best, top-of-the-list M4 Max, which is pure workstation-type silicon that even the largest 16" MacBook Pro has problems cooling down and keeping from thermal throttling 👎 Don't smoke that stuff, it makes you hallucinate 😁
I don't know what this guy is saying, but it's detailed enough so I agree
And you compare arm vs x86? What a shame.
5:35 Yeah, it's sus; how were they even running a 70B model on a card with 24GB of video memory? That's literally impossible even at Q2_K.
Edit: Apparently the testing was done with Llama 3.1 Nemotron 70B at Q4 quantization, which takes roughly 41GB in memory. So the testing was done with shared memory, which usually yields around 3 tokens/s, so we're looking at around 6 tokens/s on the new Ryzen. For comparison, a 4090 running a model that fits its VRAM (32B) is approximately 10x faster.
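For what it's worth, those numbers line up with a simple bandwidth-bound estimate; a rough sketch is below, where the bandwidth figures are assumed ballpark/theoretical peaks (real decode throughput will be lower).

```python
# Back-of-the-envelope decode-speed estimate for a memory-bandwidth-bound LLM:
# tokens/s upper bound ≈ memory bandwidth / bytes read per token (≈ model size).
model_size_gb = 41  # Llama 3.1 Nemotron 70B at Q4, per the comment above

bandwidths_gbs = {
    "dual-channel DDR5 (model spilling to system RAM)": 90,   # assumed ballpark
    "Strix Halo 256-bit LPDDR5X-8000 (theoretical peak)": 256,
}

for name, bw in bandwidths_gbs.items():
    print(f"{name}: ~{bw / model_size_gb:.1f} tokens/s upper bound")

# A 4090 (~1000 GB/s) can't hold the 70B Q4 weights at all, but on a ~19GB
# 32B Q4 model the same estimate gives ~50 tokens/s, roughly an order of
# magnitude faster, consistent with the "10x" figure above.
```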
This is great, but somehow I have a feeling Intel will still secure most of the laptop CPU market...
Is it possible the next socketed CPUs will ship with this kind of monster integrated graphics?
naming sucks real bad
Man, I was seriously thinking about getting one until I saw the price. Guess I'll stay with my M4 Pro laptop.
The AMD naming is confusing, but they're going after Nvidia with the xx70-style numbering and Apple with the AI Max and Max+.
Maybe it's weird for the American market, but for the rest of the world it's common sense.
I'd be interested in seeing how this does at super low power limits, I've been enjoying dropping power limits on my laptop and I want to see how it'll do in those cases
Can someone explain to me why AMD and NVIDIA use such legacy naming systems? Why haven't they caught on to the trend of naming things simply, like Model Y, X, S, 3, like Tesla?
AMD Ryzen - Model A1, A2, B1, B2 , C1, C2
Apple - M1,M2,M3,M4
This is sarcasm, yes?
Can we get these chips in a handheld?
I want to see a competitor to the M4 Pro in an M4 Pro-like form factor. As much as I like this tablet, it's an ROG; its gamer aesthetics are just not my thing.
Zephyrus g14
This is the device that I am most excited about in all of CES 2025
Sorry I can't help but make this one last comment. "Unified memory" is Apple's marketing term, as is "Max." Why is AMD copying Apple's marketing terms? Just make up your own - your chip is good enough now that you don't need to follow anyone else. Copying Apple's marketing feels like it cheapens the awesome chip you created (IMHO of course).
The part about “unified memory” isn’t correct. The term unified memory has at least been used before by NVIDIA, although in a different context. I’ve seen references to unified memory architecture even in some Intel based systems (e.g. Aurora supercomputer nodes have a unified memory architecture).
As I see it now, it’s more of a technology term than a marketing term.
AMD doesn't know how to be a market leader 😂
Do you genuinely think the word max is an apple marketing term? Where have you been for 100+ years?
Will it be released on desktop? I want to build one.
I want more battery life
I would rather have a built in stylus display.
@@KK-fi6ms Doesn't this have that capability?
Thanks for the tour Marquis.
This is what I'm looking for: a creation/gaming laptop converting into a tablet. I'm not down with 32 gigs of RAM soldered to the motherboard; my 7-year-old 2-in-1 has 32 gigs. I need to wait for the 64 gig and higher release.
When you do a full review, I'd love to see android emulated on this hardware
It's mind-bendingly hilarious that AMD renamed their CPU from Ryzen 9 xxxx to Ryzen AI Max and talked about how it does "AI" with 50 TOPS, only for Nvidia to show up hours later with Project Digits, an actual AI supercomputer, for just $800-900 more (32GB vs 128GB).
🔥
lol. supercomputer......
Brother Digits starts at 3000
@@ShashaParallax yes
@@ShashaParallax $3000 at 128GB vs $2200 at 32GB.
Apple killer my ass; it's not even beating the 14-core M4 Pro in MT, and it's way slower in ST.
Not even close buddy
OH AND DON'T YOU EVER AGAIN USE R23 as an ARM64 vs x86 comparison; there is no amount of trying you can do to make AMD and Intel faster, they aren't. Apple Silicon is the fastest!
What a clown, still trying with R23; try SPEC2017 or the still x86-biased CB2024.
AMD must be doing something right if it’s got you blowing a fuse over a tablet, lol.
@@theramennoodler7950 did you read the whole comment little boy? Only hard x86 copers use R23
@@PKperformanceEU I’m younger than you, but I at least don’t throw tantrums over chips, lol you’re a grown-ass man-baby.
The power packed into this thing aside, the port selection is really impressive IMHO considering the form factor, when many laptops don't even come with such a solid port selection.
Although I'm learning microsoldering, I really, really hope that modular components, like dedicated GPUs and motherboards with CPU sockets for example, will keep existing into the distant future.
Whoa, this looks insanely impressive, wtf; matching a 4070 without any dedicated VRAM is impressive.
How about the 16-core M4 Max version, so both have the same core count, even if the M4 includes efficiency cores in that count?
What I would like to know is whether the battery is user replaceable, and if so, how easy it is to remove.
I like your videos and style of presentation. That said, at about 10 minutes in you talk about creatives and this being a good competitor to the 14-inch MacBook Pro; not at 500 nits. As we say in the UK, it's a country mile away.