Razer Blade: Let's do some AAA gaming! 99% of people buy me for it. MacBook: *Leaves the chat* Alex Ziskind: MacBook is too powerful! Everybody: 🤦♂️🤦♂️🤦♂️
A GAMING laptop with the latest gen I9 should be fast as hell. The only difference between a gaming laptop and a non gaming one is the gaming one will have a much better GPU. Everything else will be the same.
Any chance you throw a Linux distro on the Razer and re-run the tests? Obviously the disk performance will be better for things like that compilation test (NTFS is awful with lots of small files), but I'd be curious to see the difference in the AI benchmarks. Oh, and if you want a real-world compilation test for .NET, why not the .NET framework itself?
@@AZisk hopefully at least some kind of competition. As much as I dislike macOS, this hardware more than makes up for the OS. And so far it's not even close.
And we're not even talking about the build quality, speakers, etc., and of course Windows itself. This is just embarrassing. It's really only "good" for gaming, and even at that it's bad: lots of noise and awful performance for the price, especially on battery. Might as well get a desktop otherwise.
I really enjoy your videos and appreciate the effort you put into them! I wanted to mention that the Razer 4090 and M4 Max GPUs don’t natively support 4-bit computation in the Machine Learning Benchmark. This might affect the accuracy of 4-bit inference tests in representing their performance in AI tasks. It would be interesting to see 8-bit or higher inference tests, as well as AI training scenarios, in the future. Additionally, the generated code during compilation might not fully reflect real-world use cases, so including variations of large-scale, real-world tasks could add even more depth to the benchmarks. Thank you for your great work!
@@zoscmengiste4990 Yea, and I'm a song lyrical typist, I play piano, organ, 20 stringed instruments, a cell phone and I have a recording studio with about 300 knobs, sliders and buttons and I use my messy hands to operate them all. It is called "skill"... 😀
This is a channel dedicated to developers, so it's not weird at all. Half the video was about compiling code, and the other was about LLM testing. It has covered the bases there. Several channels focus on photography and videography that show the MacBooks easily beating Intel, so watch them if that is your thing. Otherwise, what pro workload should be tested?
Apple had 128GB of RAM and Windows only 32GB. Compilers do a lot of caching, so more memory is better. Also, compilers don't use the GPU. So if you need a laptop for development, you would probably not buy a Razer. That Mac laptop is $6,000. A Windows laptop with 128GB of RAM will be less than $2,000, especially if you upgrade the RAM yourself.
You are the first one to review this. Subscribed. We need more Windows vs Mac content. I think the tech industry is on the edge right now, and Apple silicon, or ARM in general, is the future.
Well , you weren't testing it for Gaming which is what the Razer Blade is all about. But still pretty startling. I think both the energy/heat envelope plus the faster (and unified) memory on the Mac is the big differentiator. Can't be the raw CPUs and GPUs, can it?
Small detail regarding the ML test: compute increases approximately quadratically with the number of tokens. So if the output on the two machines is not identical in length, there's a small bias. Also, when you prompt again in the same context, more compute is needed.
In other words, when you output e.g. 900 tokens in one test and 1000 tokens in another test, there should be slightly more tokens per second in the former case.
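For anyone curious, here's a toy sketch of that effect (the constants are made up; only the shape of the relationship matters): per-token decode cost grows with the context already generated, so the longer run reports slightly fewer tokens per second.

```python
# Toy model (constants are hypothetical): time to emit token i grows with the
# context seen so far, so total time grows roughly quadratically with output
# length and average tokens/sec drops as outputs get longer.
def avg_tokens_per_sec(n_tokens, base_cost=0.005, per_ctx_cost=1e-6, prompt_len=200):
    total_time = sum(base_cost + per_ctx_cost * (prompt_len + i) for i in range(n_tokens))
    return n_tokens / total_time

for n in (900, 1000):
    print(n, "tokens ->", round(avg_tokens_per_sec(n), 1), "tok/s")
```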
I wonder what the benchmarks on the same machine would be like when running a Linux distro. I'm no Linux fanboy, it's just a way of seeing how much effect the OS has on the hardware. Windows has its place, but x86 is clearly struggling here as the VM running windows is quicker, and that fan noise would drive me insane.
I hate to ask, but are the drivers/firmware updated (BIOS, chipset, GPU, iGPU, network interface, etc.) and were Windows security updates installed? I still think the MacBook Pro would readily outperform the Razer even with all of the above optimized, but it would be worth comparing against a slightly optimized Windows laptop. Since Apple makes both the hardware and the software, they are more optimized out of the box.
Alex, quick question: why are you testing on these OSes? Hardly any computer science majors have used macOS or Windows since the launch of AI, like 3 years now. Test it on Manjaro, Kali or Bazzite; I would like to see the results. Another question: why haven't you removed all the bloatware from Windows 11 before running the tests?
@@chriswinslow if the laptop cannot perform to its fullest extent right out of the box without having to "set it up", then it's a scam. He clearly stated that he's using it right out of the box without tweaking anything.
@@p41n8 I don't know how you came to the conclusion that the laptop is a "scam"; the option is there to change the power mode. Many if not all Windows laptops will be set to "Balanced power" mode, since a laptop is typically expected to run on battery power. Judging from the results in "Balanced power" mode, I think the Razer did a decent job. As I said in another comment, "I dislike the Razer brand and the Mac ARM chips are impressive." Either way, I'd still expect an honest review; these results are flawed!
It is always a good idea to compare an Apple chip from 2024 with an Intel chip from 2023. That's the latest SoC from Apple, but Intel has already released two new generations. Very good comparison.
Well, the thing is, those are the only high-performance Intel chips that can be bought right now, as the laptops with the new Meteor Lake chips shown off at CES are not released just yet.
The 13th gen and 14th gen were basically the same, as was the GPU. Typical of Intel in the last ten years. Arrow Lake machines won't be on sale for a while, maybe in May, so this is effectively the most recent Razer Pro machine.
It's also great that the Mac runs a Unix-based OS, so installing packages is quite comfortable, whereas Windows becomes finicky. Though you can definitely dual-boot any Linux distro, and the performance will probably be better.
6:37 please don't knock the laptop for the external drive bogging down; the random reads/writes are really hard on the drive, but the interface and the laptop have shown they can handle the higher speed.
@5:08 you say that the write speeds for sequential on the Razer are much higher, but that’s not what the chart you posted says. For the Razer it says the write speed is approx 5k writes per second, while the Mac’s write speeds for sequential are nearly 8k writes per second. Am I missing something?
Yeah, Apple made the right decision to go their own way. Clearing bottlenecks, not just CPU speed, but the whole nine yards.
@@BeaglefreilaufKalkar efficiency
ARM architecture is doing that, not Apple per se.
@IoanBiTza Snapdragon X Elite also uses the ARM architecture, but they can't even release a fanless model because it's not efficient enough, losing half of the performance without a cooler.
😶
@@TamasKiss-yk4st exactly, it's Apple's special sauce that makes their products so superior, i.e. unified memory.
Mac running windows as a VM beat the Windows laptop💀💀
@ believe it or not even for engineering software it is way faster, I couldn’t believe it when I first tried and made the switch
LooooL
😶
Something wrong with that razer 😢. No way
@@rauleduardosantiestebanmor6928 have you tried MATLAB, CAD, KiCad and SolidWorks?
Razer 18 will keep you warm during the winter season....lol
@@oscarjeong9438 apple kiddo, wake up, it's 2025🤡
@@Muneebbbbbbb Just wait until you get yourself a bloated battery 😅
Pros of the m4 MacBook: long battery life
Cons of the m4 MacBook: it doesn't keep me warm during the winter
@@oscarjeong9438 that’s what the PS4 is for
😂
The things I enjoy most on the MBP are the total lack of fan noise and the fact that I never have to tweak performance settings. It just runs buttery smooth plugged in or on battery. And the battery life is actually great!
Changed my view about laptops completely. Made the switch to a MacBook M1 Pro almost 2 years ago and I'm still amazed to this day. The price is high but worth it.
I run Ableton on the 16-inch M1 Pro with multiple heavy plugins. Two instances of Knifonium would bring my 2017 top-of-the-line Intel MacBook Pro to its knees; I run 7 on the M1, no problem, plus many other tracks and plugins, and I've yet to hear the fans ramp up. Amazing machine.
This type of comment: if you don't push your MBP, you don't get fan noise. If you push your MBP, your battery will suffer and you will get fan noise. Pure and simple.
That's true, but compared to Intel MBPs and Windows laptops, it is drastically better now.
Apple Silicon may look to be one of the greatest ever technical moves. This is brutal
Apple silicon is mainly a marketing pitch. It is all thanks to ARM and its highly efficient chip architecture that Apple was able to make this leap. Over time, ARM has allowed major customers to tweak the ARM designs and put their logo on top, but it's still the ARM architecture underneath that makes this possible. And also ASML's chip machines, which enabled the huge breakthrough in much smaller chips. But you don't see Apple ever talking about ARM or ASML, so yes, it is mostly marketing with Apple silicon. More Windows machines might now start moving to ARM as well, and we will see huge steps forward for Windows machines. Saying this all as a happy Mac user who has recognized this great move to ARM since day 1.
@@Futurewise-Humanity nah. Of course ARM is great, but chip design goes well beyond the base architecture.
I can't wait to see you review Nvidia Project Digits working with a Mac.
@@keithdow8327 Digits will run Linux, won't it?
@@carstenli Linux
I want to see an M4 Ultra compared against the Project Digits! I think for inference, the M4 Ultra will win. Empty your pockets Alex, it's go time ;)
@ I am a subscriber, therefore he is emptying my pockets! He is worth every dollar though.
Do it; do it! :-)
Honestly, calling the 14900HX a "brand new processor" in 2025 is borderline disingenuous. It's basically a rebranded 13th gen CPU which is more than 2 years old. Also, the RTX 4090 launched in 2022, and the RTX 50-series was recently announced.
The newly announced AMD AI Max+ 395 (yes, awful name) chip would be a much more interesting comparison. It is an "all-in-one" chip, more similar to the Apple Silicon ones. They are expected to be decent value and power efficient too!
Yes. The AMD AI Max+ 395 can have 96GB of VRAM with a 256-bit memory bus.
@@passionatebeast24 You can't buy it now
can’t wait to get those in here to try
@@AZisk Yess!!! That is the matchup of the year. And NVidias Lenovo laptop when that comes out in Q4.
Hoping the HP G1a arrives soon!
@@AZisk I really think there is some issue with the ML benchmark; there's no way the M4 Max beats the 4090 in machine learning.
I’ve always been a Windows user, rocking an RTX 1070m 3080m, 3090. But when the time came to choose between the Lenovo Legion Pro 7 (AMD Ryzen 9 7940HX, Intel Core i9-14900HX) for $3,400, or the MacBook Pro 16" M4 Max (48GB RAM, 1TB SSD) for $3,229 at my college bookstore, I went with the MacBook.
At first, I was skeptical; I'd always thought Apple products were overpriced and overrated, especially based on things like AirPods Max and every flashy new iPhone. But oh boy, was I wrong. This MacBook is a powerhouse. It handles my 4K and even 8K drone footage in Final Cut Pro effortlessly, staying almost ice-cold and performing incredibly fast. Whether it's 3D modeling or programming in IntelliJ, this machine is a beast.
As a gamer with over 300 games on Steam, I was initially concerned about the limited macOS library with only half of all my games available, but I’ve been pleasantly surprised. Here are the games I’ve run on 4K ultra settings so far:
- **Metro Exodus**: 100-120 FPS
- **Dying Light**: 120 FPS (200+ FPS with VSync off)
- **Hogwarts Legacy** (via Crossover): 90-100 FPS
- **Elite Dangerous** (via Parallels Desktop): 70 FPS
- **No Man's Sky**: 120 FPS
- **Baldur’s Gate 3**: 100-120 FPS
- **The Witcher 3: Wild Hunt** (via Crossover): 90-100 FPS
And there’s more!
What's impressed me the most is how quiet the MacBook stays. Even under heavy loads, the fans barely spin up, and at times, they don't spin at all. The laptop remains completely silent with no FPS drops. Whatever Apple is doing with their hardware and software integration, they're absolutely on the right track, and I'm confident it's only going to get better from here.
I have zero regrets about making the switch.
Apple laptops are not overrated. Their phones, earphones, watches are.
Mac on Steam has Compatibility Mode/Proton IIRC, so you can play non-native games on there anyways...
Let's talk when it comes to repair or upgrades.
@@Mr_Beowulf this reads like chatgpt to me
@@Quarker picked up on that too
The reason that both the M4 Pro and M4 Max were similar in performance may be due to the Neural Engine being identical in both chips. If properly managed, most of the network, if not all, should run on the ANE, thus providing similar results. It would be very interesting to see where the model runs during inference with Asitop (which shows CPU, GPU and ANE usage). Great comparison!
Just to be clear, they weren't similar in performance. Using MLX-based runtimes, the M4 Max achieved 172 tokens/second versus 103 tokens/second for the M4 Pro. The difference is most likely due to memory bandwidth.
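A rough back-of-the-envelope way to see why bandwidth would explain it (assumptions, not from the video: the commonly quoted ~546 GB/s for the M4 Max and ~273 GB/s for the M4 Pro, and that decoding each token streams roughly the full set of quantized weights from memory). The resulting ceilings sit about 2x apart, which lines up with the roughly 1.7x gap measured.

```python
# Back-of-the-envelope sketch, not a benchmark. Assumed bandwidth figures and
# a 3B-parameter model quantized to 4 bits per weight.
def tokens_per_sec_ceiling(bandwidth_gb_s, params_billion, bits_per_weight=4):
    model_bytes = params_billion * 1e9 * bits_per_weight / 8  # weight footprint
    return bandwidth_gb_s * 1e9 / model_bytes                 # bandwidth-bound upper limit

for name, bw in [("M4 Max", 546), ("M4 Pro", 273)]:
    print(name, round(tokens_per_sec_ceiling(bw, 3)), "tok/s upper bound")
```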
Last time I checked, LLMs were not able to be run on the ANE. Maybe that changed by now, but there was something about Apple not providing 3rd party devs the necessary APIs to use it in full.
@ There is no limitation preventing LLMs from running on the ANE; it comes down to the architecture implementation and quantization. What is interesting though is that MLX currently does not support the ANE, at least according to the Python MLX package on PyPI. The ANE can be fully utilized using CoreML, even by 3rd parties (e.g. coremltools on Python). On my models, I can see significant speed boosts when running on the ANE with CoreML.
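A minimal sketch of what that looks like with coremltools (the tiny model and the file name are made up for illustration): convert a traced PyTorch module and ask Core ML to prefer the Neural Engine via compute_units.

```python
# Minimal sketch: convert a tiny PyTorch module with coremltools and request
# scheduling on the Neural Engine (falls back to CPU where the ANE can't run it).
import torch
import coremltools as ct

net = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.ReLU()).eval()
traced = torch.jit.trace(net, torch.randn(1, 256))

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=(1, 256))],
    convert_to="mlprogram",
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer the ANE, fall back to CPU
)
mlmodel.save("tiny_net.mlpackage")  # hypothetical output path
```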
Nobody is interested in an Intel laptop; please test a laptop with the AMD AI Max+.
yeah, can't wait for proper iGPU tests
AMD integrated has extremely poor power management in Linux. I have 780M.
The only reason I would even consider Intel would be if I wanted TB5 to run those new eGPU's.
Other than that, Strix Halo is the clear choice.
There is no way to compare the AMD AI Max+ 395's Thunderbolt 5 speed because it won't have Thunderbolt 5. It's impossible; it has only 16 PCIe lanes.
@@Berecutecu that was the whole point I made, the only reason you might even consider Intel now is if you NEED TB5.
Before Strix Halo, it could brute force multi-thread stuff slightly better than AMD because it had more cores... that's no longer the case.
As much as I'm a Windows user, I love that Apple silicon is pushing Intel, AMD and NVidia to do better. Competition is good for the consumer.
wait till the Ryzen AI Max+ 395 ☠
@@StreetDawg55 That's not an M4 Max competitor. That's an M4 Pro competitor, which is why AMD only showed comparisons against the Pro and the regular M4. As far as consumer SoCs go, the M4 Max is currently more powerful than anything from AMD or Intel. Strix Halo will probably still be very good in its class and be the most powerful x86 SoC you can get on Windows/Linux.
@@StreetDawg55 Wait till the M4 Extreme ☠️ (it was cancelled in favor of Apple Intelligence, only for that to fail miserably)
There's something screwy about that Razer. I ran Llama 3.2 on my ancient RTX 2080 laptop and I got 123 tokens/s. No way can a 4090 show only a 1-2% increase over a card that's two generations older and a lot less powerful. And that multicore score on Geekbench is abysmal.
What wattage does your laptop feed its GPU vs the wattage fed to the Razer's? Maybe that makes a noticeable difference?
😮
With my RTX4080 I got only 73 tokens/s :D
You have to set it to gaming or creative mode in the razer software 🎉
Also, the SSD write speeds are not OK. I have high-end and budget M.2 SSDs here and none of them drops in write speed nearly as much.
I've bought two 15-inch Razer laptops, 2020 and 2021 Advanced models; both had suboptimal cooling solutions for the components they pack and constantly overheated. The other bugbear was needing to run their Razer Synapse software in the background to be able to control keyboard lighting, fan speeds and performance modes. Coupled with their poor build & design quality, I honestly can't say I would recommend them.
Poor build quality? Razer has some of the best build quality of any Windows laptop maker. I have a RB14 2021 and it's still going strong, no issues.
@@migueljardim8177 I owned a 2022 model; the edges of that laptop were way too sharp and the keyboard is one of the worst I ever used. Now it's sold and replaced with a ThinkPad.
@@migueljardim8177 well, I had a 2020 Razer Blade 15 and had the same issues that @3amael mentions
@@migueljardim8177 I don't agree at ALL. They have zero quality control. So sometimes you get an awesome Razer lappy... and sometimes you go through RMA hell and it spends more time at Razer and in transit than it does at your house before the warranty runs up.
Ask me how I know 😤
They're very inconsistent and service is AWFUL.
That being said... If you're lucky enough to get one without issues I agree they use nice materials and have a premium chassis like no other.
Just imo... build quality is more about consistency than a nice chassis.
@@migueljardim8177 "imagine someone having problems with a different device by the same brand as your device you don't have problems with" gamer challenge level: impossible
Can't wait for you to test the recently announced NVIDIA Project DIGITS!!
May!!!
@@AZisk Please test/buy two of them!!!
TWO!!?? It's already $3k for just one!!
@@AZisk could you please explain FP4 vs FP16 in one of your future videos? Because if I understand correctly, NVIDIA Project DIGITS is around 1000 TOPS vs the RTX 5090 which is 3.1 TOPS, so the 5090 is around 3 petaflops? (BTW I'm not a native English speaker, I'm sorry for any mistakes)
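Not an authoritative answer, just the arithmetic that usually sits behind those marketing numbers: TOPS and "petaflops" are both operations per second (1000 TOPS is what vendors label 1 petaflop at FP4), and dense tensor throughput on recent NVIDIA hardware roughly doubles each time the precision is halved, so an FP4 figure is about 4x the FP16 figure for the same silicon. The baseline below is hypothetical, only the ratios matter.

```python
# Rough illustration only (the baseline number is made up, not a spec):
# halving precision roughly doubles throughput on recent NVIDIA tensor cores.
fp16_tops = 250                 # hypothetical FP16 dense throughput
fp8_tops = fp16_tops * 2        # halve precision -> ~2x throughput
fp4_tops = fp8_tops * 2         # halve again -> ~4x the FP16 figure
print(f"{fp4_tops} TOPS at FP4 = {fp4_tops / 1000} 'petaflops' in marketing terms")
```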
@@AZisk
Hi Alex, did you try changing the performance modes in the razer synapse software? That normally controls the tdp for both the GPU and CPU on Razer laptops. I would also recommend changing it to discrete only mode for the GPU as it would run the tests on the 4090
Did you hear the fans? I think this green beast took 3 times more energy than the apple.
I’d rather just get a Mac than do this. Windows users are tweakers
@@herrspitz6964 "green beast took 3 times more energy than the apple"💯💯🤣
This is exactly what he missed, but it is annoying that you have to tweak these settings in order to get good performance. It should work this way directly out of the box while plugged in!
@@HaiderAli-fh3bf doesn’t he literally check the GPU usage during one of these tests?
You might want to run the same benchmark on the Razer with Linux installed. In my own experience, it's the OS that makes a lot of difference.
Have you even seen that Windows inside the virtual machine performed faster than Windows on bare metal?
A Razer Blade with NVIDIA, Intel and Linux is not fun. This combination will take a lot of hours of bug fixing and workarounds.
I have an MBP and a 4-year-old Ryzen Linux laptop. The Windows VM on Linux boots my dev env app in 5 seconds; the MBP with ARM Windows boots my env in almost 58 seconds, and that's using paid Parallels... I can't stand my MBP from a dev perspective. The way keyboard "shortcuts" are implemented sucks, as they're not always supported in all apps.
While this is true, in the benchmarks I saw the M4 Pro beats full-blown desktop CPUs in a lot of cases. So there's no way in hell a mobile Intel CPU would have any sort of chance here. Apple is absolutely spanking Intel & AMD here overall. Intel & AMD have their work cut out for them.
@@Bigheadedwon Every gen, AMD's own mobile CPUs beat full-blown desktop CPUs in a lot of benchmarks :-). That's normal :-). I'm very seriously considering buying an AMD-based mini PC as my new Windows desktop. Apple still has an advantage by incorporating a full SoC, so memory bandwidth is very high and effective. And other things, of course. I also have an MBP for battery life and maximum reliability for work. I understand this euphoria about Macs. They did something that is hardly achievable (not technically) in the Windows world. Even MS with their Surfaces isn't capable of ordering a whole SoC and giving it full software support. MS can't even properly support today's Surfaces.
As a non-developer, I wonder what the point of a high-end Windows laptop is, except for gaming.
@@Dominus_Potatus just for the 4% of things that can't be done on a Mac, and that percentage is shrinking.
Engineering (CAD, architecture apps, etc), data science (GIS), & many more are not compatible with macOS
You can increase the RAM to 192GB and there are 3 SSD slots built in. Easy to repair.
@@AzVfL MATLAB works on Mac. Most CAD software works in a VM.
There is no point, game set match...
7:45 the sound of the Razer sizzling
Comparing the hardware of the Razer device with the benchmark results of its individual components that can be found online, your device seems either defective or power limited, as there is a significant discrepancy between the two. I wouldn't call your results valid or conclusive due to this issue. However, there is no denying how impressive a chip the M4 Max is, most of the time on par with if not slightly ahead of mainstream x86 hardware while being much more power efficient.
@@askkutay it's hard to believe his Razer is so slow. I own one, and mine isn't this slow.
He most definitely doesn't know how to work on a Windows machine. Most likely some eco mode is lowering the clock speeds. He also went with gaming instead of creator mode.
Yeah, there's some nonsense in this video.
A couple of things to note for the ML benchmark: first, I don't think LM Studio works well for NVIDIA chips. Usually we use vLLM or SGLang to serve the model, which is way faster than LM Studio. And the benchmarks would be better done with a bigger model to exhaust the power of the 4090, rather than just a 3B model, considering it's already 4-bit quantised. The second thing is about inference throughput. Apple MLX has never done good batch LLM inference, meaning if you have multiple queries, it's going to be slow to process them. I can easily reach 2k tokens/second on a mobile 4090 for a 3B model with batch inference.
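For context, this is roughly the kind of batched offline inference the comment is describing (a hedged sketch; the model id, prompts and batch size are just placeholders, and throughput will vary with hardware and quantization). vLLM schedules the whole batch together, which is where the big aggregate tokens/second numbers come from.

```python
# Rough sketch of batched offline inference with vLLM (placeholder model id).
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.2-3B-Instruct")   # assumed model id
params = SamplingParams(max_tokens=128, temperature=0.7)

prompts = [f"Summarize benchmark run #{i} in one sentence." for i in range(64)]
outputs = llm.generate(prompts, params)  # the whole batch is scheduled together

total = sum(len(o.outputs[0].token_ids) for o in outputs)
print("tokens generated across the batch:", total)
```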
The goal is not to provide a balanced analysis of what the machines can potentially do if you use them properly. It's all about shitting on PCs to make the Mac look like the superior choice no matter what. This is to lower the cognitive dissonance associated with buying very expensive hardware that isn't that much better in the real world.
Hence the sensationalist titles and whatnot. This guy keeps showing up in my recommended, yet every time his approach makes no sense apart from just shitting on PCs.
To be clear, I own a Mac/iPhone/Apple Watch. There are some good things about them, but the value proposition is out of whack...
@@clementcollier8432 but both fall under the category of very expensive hardware. It's hardly just about justifying the price.
Gonna be honest: I don't like the MacOS. I don't even have legitimate reasons, I just dislike using it.
But I do think that as things stand now, MacBooks are better for most people, at least as general-purpose laptops, but also for some very specific work tasks (video/photo editing, coding, audio work, etc). If you need the best GPU on the market and CUDA-accelerated applications, obviously that is only present on the Windows side. But looking at the device as a whole, it just looks like a horrible proposition to me: it's loud, it uses a ton of power as it's inefficient, it drops in performance by a significant margin when used on battery (which btw was very generous of him to use it plugged in basically the whole time), and consequently has poor battery life. Granted, it's a poor hardware choice and I would like to see a comparable AMD-based device with some of their upcoming offerings, but these Intel machines are pretty bad.
"There is some good thing about it but the value proposition is out of whack"....hm, not sure this criticism can be directed towards the Mac/Macbook lineup. They are pretty on par with the Windows offerings in terms of value. Like sure they ain't cheap, but on the other side, the premium Windows devices aren't cheap either. As soon as you hit any sort of overall equivalency in terms of build quality, battery, screen quality, and somewhat comparable performance, you're basically looking at the same price range. This can be more directed towards the smaller devices, namely iPhones. And there's also the new, very cheap but capable Mac Mini now. It's a lot of PC for the price, to the extent that if you don't intend to do any sort of gaming, it excels at the price point compared to the competition which doesn't have anything to offer in the price range in that form factor at that performance level.
And because of my aforementioned dislike of MacOS, I've been following the market closely for the past year or so to see if something will pop up to satisfy my desire for a new non-MacOS laptop but so far, everything that showed any promise turned out to cost the same as a comparable Macbook Pro, but also turned out to have some sort of flaw that would just annoy me. In some cases they even cost more. I'm not in a rush and will wait to give AMD a chance to prove me wrong, but at this point the internals aren't even that much of an issue, but the whole package. The thing is, I may dislike their OS, but if I get a Macbook, I at least know what I am getting. It's a safe bet. Windows laptops are so tricky to get a good read on, since the market offerings are way too granular and the product lines are very messy for all of the manufacturers.
@@clementcollier8432 Going hard on the Copium I see... The goal here was to compare two pieces of hardware with 2 suites of common/accessible tools.
Could someone make a super-optimized benchmark specifically for ? Sure. Does that demonstrate anything? Not at all. If a Windows laptop can't perform in the day-to-day tasks that people are doing, then they lose. Simple.
There's still plenty of reasons to pick Windows over Mac, or Mac over Windows. I'm writing this to you from my AMD 5950X / RTX 3080 PC right now. Doesn't mean I can't recognize that Mac has some strengths over PC right now, especially in the mobile/laptop segment.
I mean, you're complaining about not giving things a fair shake - the dude ran Windows in a goddamn VM on Mac and *still* outperformed the Intel machine. Were you under the mistaken impression that running things in a VM *improves* performance?
@@peter.dolkens I don't think you understood half the things he has written. The point he made is that you buy according to your needs; nobody is going to buy a dedicated-GPU laptop for tasks that won't benefit from that power. Comparing an APU to a CPU/GPU combo is stupid to begin with, and that too on hardware that is 2 years old with a CPU known to literally burn itself to death.
from your comment
"Could someone make a super-optimized benchmark specifically for ? Sure. Does that demonstrate anything? Not at all."
it demonstrates your inability to make proper choices. Besides, the Razer performed all the tasks in the video, just slower.
Exactly!!
Glad you're back Alex! We need you to tell us how AI is AI-ing on a daily basis
good to be back
Blade 18 owner here. You should use custom mode in Synapse and set the CPU to boost and the GPU to high to unlock the full performance Razer designed it for. After a BIOS update, they throttled Turbo mode performance and CPU scores tanked drastically for some reason.
For $4k I don't want to do any of that; on a Mac I don't need to. Why the hell should I first have to know about this set of actions and then do them? It's not a $500 Chinese laptop, after all.
@@LeGatoDarko dude, its literally the same thing as enabling high performance mode on mac LMFAO🤣🤣🤣🤣🤣
@@yuding237 fresh OS: find the app, install it, familiarize yourself with its UI, find the required switches... Yes, it's the same as clicking the battery icon and selecting a power mode on a Mac. Agreed.
@@yuding237 no such thing, you don't need to. Macs are in high performance mode straight out of the box.
@dreiiiguapo dude, you can't be so lazy that you won't touch a single button 💀 if you are, the Windows community doesn't accept you either
Did you adjust the power settings to max in Windows and Synapse software? That will greatly impact your benchmarks.
If only Macbooks could run CUDA apps... 😢
True
yeah. I know.
Use MPS in PyTorch, or MLX; both are Metal-based.
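Small sketch of the PyTorch side of that (the matmul timing is only an illustration, not a benchmark): pick the Metal (MPS) backend when it's available and fall back to CPU otherwise.

```python
# Pick PyTorch's Metal (MPS) backend on Apple silicon, fall back to CPU.
import time
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b
if device.type == "mps":
    torch.mps.synchronize()  # wait for the GPU before reading the clock
print(device, f"{time.time() - start:.3f}s", tuple(c.shape))
```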
I think that's the real bottleneck for current MacBook compared to x86 laptops with NVIDIA (for AI usage of course)
😢
I mean, Razer has its own proprietary software called Razer Synapse or something that lets you set quiet, balanced, performance mode etc., so maybe have a look there; that Windows performance setting is just useless for gaming laptops.
That will hurt performance massively for the razer.
Great comparison. But can you please test against a new x86 CPU like the Ryzen AI Max+ 395
When they come out, yes
@@AZisk I am waiting
I came here to say the same. With 256 GB/s memory bandwidth, and 40 CU GPU with a Geekbench OpenCL score right between an M2 Max and M3 Max, the 395 should be really really interesting to see, especially in LLM inferencing tests.
Something is wrong with that Razer 😢. Ain't no way it's performing like a high-end Snapdragon X Elite 😂. I refuse to believe this.
Because he chose tests that would favor the Max. Even a $1000 Lunar Lake laptop would do better in these tests.
@@revben true
@@revben that would make sense
@@revben also no gaming, which is the main use case of that laptop.
The one thing I'm interested in you testing is the new AI Max+ 395 chip when it comes out, 'cause that's honestly the direct competitor to the M4.
Windows on ARM: Am i Joke to you?!🤡
Windows on Mac: Nah i have de-bloated windows btw 🗿
Mac IS running on ARM btw
@creativeearthian1702 I'm speaking about windows 🗿
@srinivasanpalanisamy3013 yeah but windows on Mac is also windows on arm since Apple chips are arm chips
Well, the Razer notebook leans more on its GPU, since that's what makes it shine, while the CPU is mid since you can get it in a $1,700 notebook too. Still, it surprised me that the CPU wasn't even close to the M4 chip. Nice video by the way 👍
Thank you so much for this comparison! Was thinking about getting the Blade this week over the MAC
Glad I could help!
Now would not be a good time to get a Blade laptop since very soon they'll be refreshed with the Nvidia 50 series GPU's and new AMD and Intel CPU's. I'd wait personally, especially for the AMD Strix Halo chips.
@@migueljardim8177 When new gaming laptop models come out, I buy last year's model marked 80% off. FWIW I don't value AI frames as highly as rendered frames. The 50-series performance is numbers, not experience. Why pay 10X as much so the card can hallucinate that it is performing???
@@migueljardim8177 Or wait for the new Snapdragon chips in 2025, if you're not in a hurry.
The 1st generation of Snapdragon already seems better than the M3 Pro, even though the M3 Pro used TSMC's 3nm process while Snapdragon used TSMC's 4nm. The new Snapdragon will use 3nm like the M3 & M4, so based on the previous generation's results, the expectation is that it should outperform the M4.
@@migueljardim8177 the 50 series has to use AI to generate enough frames for a playable fps... you're literally getting less for more
I'm noticing a big divide between next-gen machines and everything else.
Fast RAM and unified memory, I think, are making a staggering difference. Plus, as you mentioned, storage speed and bad CPUs.
Mac is amazing.
I can't wait for the AMD AI Max+ 395; that might be the closest Mac competitor we'll see.
16 Zen 5 cores (32 threads) and 128 gb of RAM will be INSANE.
It will even destroy the 1 thing Intel was decent at (multi-core because they shoved so many cores in their silicon).
Unless you need TB5, the choice is clear.
Now, try generating an AI image using Flux. The MacBook will be crying in pain while the 4090 runs laps around it.
Yeah yeah, discrete graphics against integrated graphics 👀. Doesn't it look embarrassing for NVIDIA?
@@Open6a-fx4qf But the mac is the same price if not more expensive, and i'd expect it to keep up with a 4090
No shit, Sherlock; the company leading AI development right now is gonna make sure their components get the best performance.
Do you ever look back and realize how insecure you seem when you say stuff like that?
Like imagine watching a Lamborghini review and saying “well how many people can you fit inside? Guarantee my van will smoke it in that comparison”.
Legit, makes you sound like a loser. Btw if anyone’s mind is so nonfunctional as to extrapolate the wrong relationship from the paragraph above (and I know someone will) then consider taking up the tide pod challenge. I promise you’ll come out smarter 😉.
Wow, rare to see these kinds of benchmarks nowadays, great work. Also, great to see the mention of Windows laptops on battery vs plugged in. Very rarely do reviewers mention that, and it's even rarer to see an actual test of both.
Hi, maybe it makes sense to try the Razer on Linux?
I ran Arch Linux with KDE on a 2022 Razer Blade. It runs, but it is no pleasure. I don't recommend that.
@@herrspitz6964 what is the problem in your case?
@@Erwin_Anderson gonna go out on a limb and say it's arch btw
@ The main problem is missing drivers, of course. Some hardware components are not compatible with Linux, but the NVIDIA GPU caused the most problems. I had to load custom kernel parameters and even then it does not run seamlessly.
@ Of course you can try to run non-rolling-release distros with older kernels; good luck with that, moron. No driver, no functioning hardware. You only get the newest drivers in rolling-release distros, or you use "experimental" distro versions, but those are even more unstable than rolling systems.
As a DBA I can tell you memory latency is much more important than anyone realizes. It affects all of these tests, sometimes including the disk tests, because they copy to disk from memory. What's special about Macs is the amazing memory latency advantage of the unified architecture. Another lesson here is that running databases on ARM instances in AWS is pretty amazing for simple operations (standard loop joins) with Amazon's version of this kind of chip.
The best thing to do with your Razer laptop is return it.
I'd be interested to see this test done with Linux instead of Windows (on the Razer).
2:09 Something wrong with your Razer... Speedometer 3.0...? On my one year old Beelink SER7 7840HS I have: 22.8 ± 0.62
On my M2 (2022) I got 35.7 ± 2.9... this can't be very accurate?
I have a laptop with the same processor as yours (Ryzen 7 7840HS).
When I ran Speedometer 3.0, the result was only 9.0 ± 0.50.
Something felt seriously wrong here, because other lower-end processors got better scores than this 4nm one.
I tried it again, but this time I kept Task Manager open and found out that the test wasn't even utilising 3% of my CPU.
Could you tell me what I was doing wrong here?
@@WkwkwkkWkwkwkwk-h5c "Could you tell me what I was doing wrong here?" I don't know... all I know is that the results of this specific test depend on many factors: number of background processes, BROWSER used, amount and type of RAM, and so on...
@tomaszwaszka3394 I see
Aight then,
Thanks for giving a reply.
This is a fantastic video, Alex! I would be really excited and grateful if you could do a comparison video of how a top vanilla M4 (not Pro/Max) MacBook Pro does against a Ryzen AI 9 HX 370 laptop like the Zenbook S16. The latter could be any model powered by the Ryzen AI 9 HX 370; the only reason to suggest the Zenbook S16 is that that model also has fast RAM and a fast SSD.
6:27 So you used different TB5 cables and different SSDs, one of which was overheating, and that one had the terrible score? Colour me orange.
I've been looking for a proper reviewer who is more knowledgeable than me and doesn't just test some games and video editing, and this dude is that guy. New sub from me!
I really would love to see a comparison between the M4 Max and the 40x0 mobile for ML/PyTorch training. I am pretty sure you did a benchmark in one of your other videos, and I found it super useful. I know inference could be a proxy... but imo training can behave differently.
As always Alex, thanks for the video - great work as always!
I think that in selecting 4-bit small models so they fit in the 4090's VRAM you are teeing up the MacBooks to run on the NPU. The reason I think this is that the Pro and Max chips performed the same. ~40 TOPS is very capable for any 4-bit inference that will fit in memory.
For the 4090 to outperform the M4 you would need to do some 16-bit work and/or training. The other big advantage of the M4 is that it automatically sends instructions to the best core, and they all share the same unified memory. The i9 is doing nothing but managing the 4090. For 4-bit inference the i9 might be about as good as the 4090.
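For what it's worth, here is a minimal sketch of the kind of 4-bit setup being discussed, assuming the Hugging Face transformers, bitsandbytes and accelerate packages and a CUDA GPU; the model id is just an illustrative placeholder, not necessarily what the video used:

```python
# Hypothetical 4-bit inference timing on the CUDA side (the Razer half of the
# comparison). Weights are quantized to 4 bits by bitsandbytes; matmuls still
# run in fp16.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model id

quant_cfg = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_cfg, device_map="auto"
)

prompt = "Explain unified memory in two sentences."
inputs = tok(prompt, return_tensors="pt").to(model.device)

t0 = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=200)
dt = time.perf_counter() - t0

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / dt:.1f} tokens/sec")
```

On the Mac side the usual path is llama.cpp or MLX with Metal rather than bitsandbytes (which targets CUDA), so this sketch only covers the Nvidia half of the comparison.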
The problem with the Razer 18... Intel.
Excellent stuff. I was waiting for this.
👉🏻 I really would love to see a comparison between the Mac M4's and AMD AI Max Series machines.
yes, waiting for those
😮
There is no way to compare Thunderbolt 5 speed on the AMD AI Max, because it won't have Thunderbolt 5. It's impossible: it only has 16 PCIe lanes.
Now do these tests with Linux installed on the razer laptop. I'm curious what will happen
Also run Asahi Linux on the Mac.
I don't know if you have ever done this before, but I would love to see some comparisons running Linux. For some of these tests, I really wonder if Windows is a problem.
For the drive: does it show up on the PC at all? Check Disk Management (the format-drive app), and if it doesn't show up there, try using 'diskpart' from the command line if you haven't already; there's a rough sketch below. Most APFS formatting doesn't show up on Windows unless you go down to the diskpart level on the command line.
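If it helps, here is a minimal sketch of that check, assuming Windows, Python, and an elevated (administrator) prompt; diskpart and its /s script mode are the real tool, the wrapper is just for convenience:

```python
# Hypothetical helper: ask diskpart to list every disk and volume it can see.
# An APFS-formatted drive usually appears here even when Explorer and the
# format dialog ignore it. Must be run from an elevated prompt.
import os
import subprocess
import tempfile

script = "list disk\nlist volume\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name

try:
    # diskpart /s <file> runs the commands in the given script file
    result = subprocess.run(
        ["diskpart", "/s", path], capture_output=True, text=True, check=True
    )
    print(result.stdout)
finally:
    os.remove(path)
```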
Razer Blade: Let's do some AAA gaming! 99% of people buy me for it.
MacBook: *Leaves the chat*
Alex Ziskind: MacBook is too powerful!
Everybody: 🤦♂️🤦♂️🤦♂️
I mean as you said, it's a GAMING laptop - it's primarily meant for gaming.
Which mac still can't beat 4090
@@theo5675 in games none of them.
A GAMING laptop with the latest gen I9 should be fast as hell. The only difference between a gaming laptop and a non gaming one is the gaming one will have a much better GPU. Everything else will be the same.
@@Bigheadedwon It's not the latest processor, and it's insanely faster at gaming.
Any chance you throw a Linux distro on the Razer and re-run the tests? Obviously the disk performance will be better for things like that compilation test (NTFS is awful with lots of small files), but I'd be curious to see the difference in the AI benchmarks.
Oh, and if you want a real-world compilation test for .NET, why not the .NET framework itself? (Something like the timed build sketched below.)
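Along those lines, a minimal sketch of what such a real-world compile test could look like, assuming the .NET SDK is installed; the project path is a placeholder, point it at whatever large solution you want to use as the benchmark:

```python
# Hypothetical compile-time benchmark: clean, restore, then time a Release build.
# PROJECT is a placeholder path to a large real-world .NET solution.
import subprocess
import time

PROJECT = r"C:\src\some-large-solution"  # placeholder

def run(*args):
    subprocess.run(["dotnet", *args], cwd=PROJECT, check=True)

run("clean")    # start from a cold-ish state
run("restore")  # take package download out of the timing

t0 = time.perf_counter()
run("build", "-c", "Release", "--no-restore")
print(f"build took {time.perf_counter() - t0:.1f} s")
```

Running the same script on both machines (with the same commit checked out) keeps the comparison apples-to-apples, modulo filesystem and antivirus differences.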
Are you going to test the new Strix Halo processors?
hopefully
@@AZisk Hopefully at least some kind of competition. As much as I dislike macOS, this hardware more than makes up for the OS. And so far it's not even close.
@@heroofjustice3349 The 395 will be very close, within ±10% in GPU and CPU... and better in AI.
@ Yeah, I know, at least that's what they say. Fingers crossed it also won't sound like a jet engine, like unfortunately most Windows laptops do.
Will you be benchmarking the RTX 5000 series? I'm interested in benchmarks outside of gaming.
And we're not even talking about the build quality, speakers etc and ofc windows itself. This is just embarrassing. It's really only "good" for gaming and even at that it's bad, lots of noise and awful performance for the price. Especially on battery, might as well get a desktop otherwise.
The keyboard is the worst thing about this "laptop".
Hope you get 5 billion views like Linus!
Bro has the same amount of RAM as my SSD…
@@buddhaeyes Emotional damage 💀
Would have loved to see a DaVinci Resolve export test to compare speed and render times.
Awesome vid as always. Keep it up bro. Cheers
Thanks, will do!
Obsolete at this point. We have to wait for Lunar Lake and the 5090.
Excuse my noob question, but does the operating system (macOS vs Windows) have any influence on all these tests?
Great video, missed your uploads
Good to be back!
I really enjoy your videos and appreciate the effort you put into them! I wanted to mention that the Razer 4090 and M4 Max GPUs don’t natively support 4-bit computation in the Machine Learning Benchmark. This might affect the accuracy of 4-bit inference tests in representing their performance in AI tasks. It would be interesting to see 8-bit or higher inference tests, as well as AI training scenarios, in the future. Additionally, the generated code during compilation might not fully reflect real-world use cases, so including variations of large-scale, real-world tasks could add even more depth to the benchmarks. Thank you for your great work!
I can't own a laptop without touch and the ability to interface with a Windows workstation.
So you're like an 8-year-old who loves to type on screens with their messy hands.
@@zoscmengiste4990 Yea, and I'm a song lyrical typist, I play piano, organ, 20 stringed instruments, a cell phone and I have a recording studio with about 300 knobs, sliders and buttons and I use my messy hands to operate them all. It is called "skill"... 😀
How is it fair to compare a 128GB RAM machine to a 32GB machine using benchmarks that kinda rely on RAM?
"This razer is a gaming laptop with a 4090"
Only tests the CPUs and doesn't even mention graphics performance or run a Cinebench.
@@AyaWetts He used CUDA on the 4090 for the AI tests... but yes, he got a laptop designed strictly for gaming, then used it for anything but.
@@AyaWetts The 370 Max outperforms the 4090 on AI, not impressive.
Can't take this video seriously anymore after reading this. Yeah, this video only dumps on the CPU, and we all know Intel CPUs suck these days.
@ And? I was talking about what he used, not what's good... but yes, something 2.5 years newer is often better.
??? He ran the AI model on the GPU.
I'll bet you less than 1% of people watching this are running local LLMs.
Kinda weird, very workload-specific testing. Everyone watching this already knew that Apple Silicon would be better suited to LLM work.
This is a channel dedicated to developers, so it's not weird at all. Half the video was about compiling code, and the other half was about LLM testing. It has covered the bases there. Several channels focus on photography and videography and show the MacBooks easily beating Intel, so watch them if that is your thing. Otherwise, what pro workload should be tested?
Yeah bruv, we are fed up with watching Geekbench and Photoshop/Figma benchmarks. Kudos to @AZisk for doing something that actually applies to developers.
@andyH_England Engineering workloads. Programs like ANSYS, Fusion 360, Matlab etc
@@andyH_England Intel has sucked for the past five years. Try AMD.
Apple had 128GB of RAM and Windows only 32GB. Compilers do a lot of caching, so more memory is better. Also compilers don't use GPU. So if you need a laptop for development, you would probably not buy Razer. That Mac laptop is $6,000. A Windows laptop with 128GB of RAM will be less than $2,000, especially if you upgrade RAM yourself.
I know this might be a lot to ask but can you run diffusion models in your comparisons please ? 🙏🏻
You are the first one to review this. Subscribed. We need more Windows vs Mac content. I think the tech industry is on the edge right now, and Apple Silicon, or ARM in general, is the future.
ARM in general for laptops, manufacturer-independent.
For the MacBook, did you use the Mac-native binaries for VS Code? Mac-native binaries are better than universal binaries.
As a member of the channel, I am glad I am helping you waste money.
😂 thanks Keith
Well, you weren't testing it for gaming, which is what the Razer Blade is all about. But still pretty startling. I think both the energy/heat envelope plus the faster (and unified) memory on the Mac is the big differentiator. Can't be the raw CPUs and GPUs, can it?
Wow! Didn’t expect that to be such an emphatic victory for the M4 Max never mind the M4 Pro. Crazy!
Against 2 year old pc chips. Crazy indeed!
newer PC chips still suck (power, especially)
Which one is best, the MacBook Pro or the Razer? What is the conclusion?
It was a good comparison video.
Small detail regarding the ML test: compute increases approximately quadratically with the number of tokens. So if the output on the two machines is not identical in length, there's a small bias. Also, when you prompt again in the same context, more compute is needed.
In other words, when you output e.g. 900 tokens in one test and 1000 tokens in another test, there should be slightly more tokens per second in the former case.
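For anyone who wants to correct for that, a back-of-the-envelope sketch, assuming the crude model above that total generation compute grows roughly with the square of output length (so the average per-token cost scales linearly with it); the function name and numbers are purely illustrative:

```python
# Hypothetical normalization: if total compute for n generated tokens grows ~ n^2,
# a run that produced fewer tokens had cheaper tokens on average, so its raw
# tokens/sec is flattered. Rescale both runs to a common reference length.
def adjusted_tps(tokens_generated: int, seconds: float, reference_len: int) -> float:
    raw_tps = tokens_generated / seconds
    # average per-token cost ~ n, so tokens/sec ~ 1/n; rescale to reference_len
    return raw_tps * (tokens_generated / reference_len)

# Example: 900 tokens in 30 s vs 1000 tokens in 34 s, both normalized to 1000 tokens
print(adjusted_tps(900, 30.0, 1000))   # ~27.0 "equivalent" tokens/sec
print(adjusted_tps(1000, 34.0, 1000))  # ~29.4 tokens/sec
```

It's a crude model (per-token decode cost is only partly attention-bound), but it shows why identical output lengths, or at least a length-aware correction, make the comparison fairer.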
7:45 F-22 raptor take off
Did you disable Windows Defender? (Or other anti virus / malware stuff) ? Not arguing about the results, just the first thing I always do.
I wonder what the benchmarks on the same machine would be like when running a Linux distro. I'm no Linux fanboy, it's just a way of seeing how much effect the OS has on the hardware. Windows has its place, but x86 is clearly struggling here as the VM running windows is quicker, and that fan noise would drive me insane.
The problem is there are no working drivers for current Apple chips, and the drivers for Nvidia GPUs are bad. So this won't work.
128GB of RAM in a laptop? What do you need so much RAM for? Genuinely curious.
Nobody needs to tell me Windows is shit, and I've never even used a Mac.
And somehow Apple still barely licks Windows' boots globally... hahahahaha... call us when you get to 35% of the market... makes sense...
I hate to ask, but are the drivers/firmware updated (BIOS, chipset, GPU, iGPU, network interface, etc.) and are Windows security updates installed? I still think the MacBook Pro would readily outperform the Razer even with all of the above optimized, but it would be worth comparing against a slightly optimized Windows laptop. Since Apple makes the hardware and software, their machines are more optimized out of the box.
I just love your tests
thanks
Alex... quick question: why are you testing on these OSes? None of the computer science majors have used macOS or Windows since the launch of AI, like 3 years now. Do some tests on Manjaro, Kali or Bazzite, I would like to see the results. Another question: why haven't you removed all the bloatware from Windows 11 before running the tests?
What is most unbelievable is that there is no company able to replicate similar hardware quality for Windows. Apple hardware is just amazing.
How does the battery life compare?
Not even close. Intel chips just chew through power. ARM is infinitely more efficient.
Linus from LTT would like to formally and informally protest these results.
Why didn't you just use VS 2019? I'm on an M1 Max with Monterey, do the Sequoia machines not run 2019?
The Razer might be loud, with not-great battery, speakers, etc., but I would always choose a Razer or something else rather than Apple.
Besides that, I don't understand his point. Why compare a gaming laptop without a gaming benchmark?
Absolutely right. @@hermanstokbrood
I'm sorry, but I can't believe this test; there surely has to be something wrong with the Razer, it shouldn't be so slow.
Imo, Speedometer 3.0 is NOT representative at all.
11:56 The CPU is showing a clock speed of 1.5~1.7 GHz. I don't think you've correctly set up the laptop's power management to run in performance mode.
@@chriswinslow If the laptop cannot perform to its fullest extent right out of the box, without having to "set it up", then it's a scam. He clearly stated that he's using it right out of the box.
@@p41n8 I don't know how you came to the conclusion that the laptop is a "scam"; the option is there to change the power mode. Many if not all Windows laptops will be set to "Balanced power" mode, since a laptop is typically expected to run on battery power. Judging from the results in "Balanced power" mode, I think the Razer did a decent job. As I said in another comment, "I dislike the Razer brand and the Mac ARM chips are impressive." Either way, I'd still expect an honest review; these results are flawed!
It is always a good idea to compare an Apple chip from 2024 with an Intel chip from 2023.
That's the latest SoC from Apple, but Intel already released two new generations. Very good comparison
Well, the thing is, those are the only high-performance Intel chips that can be bought right now, as the laptops with the new Meteor Lake chips shown off at CES aren't released just yet.
The 13th gen and 14th gen were basically the same, as was the GPU. Typical of Intel in the last ten years. Arrow Lake machines won't be on sale for a while, maybe in May, so this is effectively the most recent Razer Pro machine.
Why, to get even more fan noise?
Are the newer versions from Intel even slower, since they are targeting better efficiency?
💀
It's also great that the Mac runs a Unix-based OS, so installing packages is quite comfortable, whereas Windows gets finicky. Though you can definitely dual-boot any Linux distro, and the performance will probably be better.
That first geekbench score is crazy, that’s literally the same speed as my phone 😂
Wow! Which phone?
Geekbench is not good for power-hungry laptops; even the Ryzen 370 has a better Geekbench score.
6:37 Please don't knock the laptop for the external drive bogging down; the random reads/writes are really hard on the drive, but the interface and the laptop have shown they can handle the higher speed.
Pretty sure you've got a huge issue in the OS (antivirus real-time scan?), or faulty hardware. A number of cheaper Windows machines get better scores.
How do the M4 chips compare with Nvidia GPUs?
@5:08 you say that the write speeds for sequential on the Razer are much higher, but that’s not what the chart you posted says. For the Razer it says the write speed is approx 5k writes per second, while the Mac’s write speeds for sequential are nearly 8k writes per second. Am I missing something?