With the BS prices being charged for MBs these days, Intel needs to re-enter the MB market. No RGB nonsense, just a solid base for an Intel system. CPU/MOBO/GPU bundles for the win!
I think Intel did exactly what we all hoped they would. They had a V1, which was rough, and they learned an absolute truckload about what they should do to make V2 a real contender. I am really looking forward to having a 3rd player in the graphics space. I wonder if we'll ever see a console powered by intel graphics... 👀
I'm kind of getting the sense that if the console market dries up, AMD will probably be hurting a bit in the GPU division, and Intel might eat their tail end on GPU sales.
Same here, though I have accepted the death of 5.25" with grace. The only people buying BD drives are rippers and the prices on those have gone from $40 to $80.
Please be more specific. Intel is certainly fudging things on the CPU side, but I'll absolutely believe their Gen 2 GPU is going to be radically better than Gen 1. Both because Gen 1 was just get something out the door, and because they quickly learned exactly what they were missing on the hardware side when working on the drivers.
Watch our AMD coverage of their new Zen 5 CPUs as well! ua-cam.com/video/Y1yubL0h46U/v-deo.html
Any news on new motherboards? I understand AMD chips are set to release in July. Have any motherboards been revealed for the X870E lineup?
You guys are legit legends.
is that the new sony xperia 1 vi?
"intel has to increase price" - nice shilling, Steve. no they have to drop prices because nobody is buying slow overpriced 0 progress GPUs.
Newbs are buying what they can afford - 4060s/7600, maybe a 4070/7800 XT - which are obsolete, too-slow, zero-progress, low-VRAM hardware. The smartest people get cheap used cards like the 6800/XT, because even the 4080-4090 can't run MODERN games on max settings anyway - 41/52 fps in Alan Wake 2 at 1440p for 1000-2000 bucks, lol, and it drops to 16 fps in CP2077 with path tracing. GPUs are way too slow & way too overpriced
I am praying for battlemage. Intel could really shift the market if the price is low for good performance.
And if they give us GPUs that don't stiff us on the VRAM lol
And graphics cards that don't need their own power plant or burn their plugs ..
@@hehe42069-k Already have; 16 GB is a lot for a card targeted at 1080p
I hope they manage to jump the final hurdle against nVidia in the features list....that it can actually competently play VR games.
Intel being good is good for everyone. The 770 seems fine now. The problem is that the general public doesn't care. It may be that the A580 is fine too. The A380 for its price is a great entry placeholder.
Don't get me wrong, I'm no Intel enthusiast and would love for them to crumble, but them gaining a foothold in the GPU market would be outstanding.
never thought id be rooting for intel in the gpu market, but here we are
What a crazy world we live in
How wild would it be if we wound up with a market where an AMD CPU and an Intel dGPU were a good combo for a mid-range gaming build?
Uno reverse card
I just bought two Arc A770's. They have been handling CAD & video editing... at half the price. Took a chance & it's paying off so far.🤞🏽
sure but you guys will still buy Nvidia only
It's crazy that over in the CPU realm we are tired of Intel's crap. And over here in GPU land we are rooting for them. What a time to be alive.
Companies aren't monoliths. Some parts do good things, some parts do bad.
For me, it's somewhat similar with AMD. I really like what they're doing in the CPU market, but think their GPU department has been dropping the ball for years, and it's honestly frustrating to see.
@@123Suffering456 Agreed. But whenever I point out AMD's GPU failures I get all the fanboys who think I'm some AMD hater, even though I love their CPUs right now.
Intel's mobile parts aren't power hungry
@@123Suffering456 Ironic considering the GPU department kept them from going bankrupt for years
Please, we need Battlemage to be good for gaming. I'm so fed up with nvidia pricing.
Battlemage isn't coming after Nvidia; they are coming for AMD
I'm kinda ready for Nvidia to just be done with gaming so AMD and Intel can duke it out properly
@@sentryion3106 But when Intel makes flagship cards, similar to the 4080 vs 7900 XTX situation, Intel's pricing will bring NVIDIA's prices down as well - which is why the 4080 went from $1200 to the 4080 Super at $1000.
I hate to break this to you, but BM isn't going to be going after Nvidia's halo or flagship products (xx90/xx80). At best they are going after the xx70. But really, what they are going after, as was said before, is AMD's offerings. Simply put, no one has anything that can touch Nvidia's upper-tier products, which sucks......a lot.
@@KellicTiger Honestly, I think it's because they stopped marketing their productivity cards (Titan etc.) as productivity cards and started marketing them as gaming cards, which moved the goalposts for them to stay on top. They made the 90 take the Titan's spot, and now with this gen the 80 is in that space as well.
Thank you so much for calling out the overuse of A.I. marketing terms, Drives me crazy too. Love your channel.
They say "a.i", I hear "look up script".
@@Retro-Iron11 Literally just calling any algorithm AI these days
It's just as bad as "VR ready" used to be.
Remember? VR Ready PSUs, VR Ready RAM, VR Ready cases, VR Ready keyboards.
@@Waldherz no, I have only seen "VR Ready" label on GPUs from 2016-17 and contemporary motherboards. it made sense on both of those.
Just drag out any press release from 20 years ago and replace terms like "web two point oh" or "convergence" or whatever with today's buzzword, AI. It doesn't even have to actually mean jack. Web 2.0 didn't. But Wall Street LOVES buzzwords. They demand AI so everybody who likes money happily obliges with their new AI toilet seats and AI shoehorn and their AI can openers. Meanwhile the fuzzy logic appliances from 30 years ago just sulk on the shelf at Goodwill.
So, when do we get video of Steve approaching the Asus booth?
did you get your paid answer?
@@AdiiS It's called a donation 😂
Yeah! Been waiting for that with popcorn 🍿 in hand
@@Teletha My bad, did he get his donated answer?
@@AdiiS Nerd on the loose, he's already on the street as you see in this vid. Another sweet hitpiece coming
BATTLEMAGE!!!
Can’t wait.
intel is so dead there is no way to ever compete with ARM
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Arc was complete trash. 2 years later it is still trash. Don't hold your breath. People falsely claim that Intel is new to graphics, that's why 😂
No. They have graphics on all of their chips so they already knew how it worked in theory. The real problem is they hired Raja, the reject AMD kicked to the curb because he couldn't develop an efficient and functional chip to save his life.
Intel kicks him and now they're making progress. I am all down for competition, just don't get your hopes up, because history says it won't be that great. Although AMD turned things around, so we shall see
@@andrewsolis2988 How is it trash if it gets adequate price to performance for games? The A750 costs quite a bit less than a 3060 and beats it… is it perfect? Absolutely not. But does it offer at least something? Yes.
@markwazowskinreal a turd on the sidewalk offers something too. Fertilizer!! But is it desirable? Not really!
I'm hoping for the best, but after two years of driver optimizations you still can't have a stutter-free experience a lot of the time. No thanks! I will gladly pay $75 more for an AMD equivalent at the moment. I'm a realist - Arc is trash, but I am rooting for them, FYI. Once they kicked Raja from the team I became optimistic about Battlemage, so time will tell
I love how Intel details every single instruction and component in their product. Looking forward to seeing Battlemage shine.
Because Intel is a very integrated solution for enterprise uses. When someone buys a ton of accelerators from them they will get WAY more performance and usage out of them than any game could get
I wonder if Battlemage will also give a massive boost to the formerly abysmal FP64 performance of their Alchemist cards. I know that FP64 is almost entirely used for in-depth simulations but... Maybe the reason it's not used very much is simply because many, many GPUs out now have bad FP64 performance. I dunno, I just know I'd personally really like to see it in an Intel card. It would also be something else they could use to make their cards stand out more from the usual AMD and Nvidia offerings. Hell, maybe even FP128 if they want to go really crazy.
Can't wait to see all the suckers complaining about the driver issues that will take INTC forever to resolve.
@@arnox4554 Desktop consumer cards on the lower end are not the right target for FP64. The silicon utilisation is purely targeted towards FP32 and FP16 ops for games etc. Of course, workstation and data centre options are where you'd be looking at FP64, which targets the HPC industry.
4AM sounds rough, you guys are killing yourselves for us. Real heroes out here!
Nah, they must be having fun
probably good to offset jetlag
Perfect time. 6AM for Finland. :D
Probably fine because they’re jet lagged to hell right now
4am at a dark corner in the park.. "Pssst....Yo, you want some of this street silicon?"
Everyone VS Nvidia
They (Intel and AMD) will just fight for the Nvidia scraps, amongst themselves, unfortunately and realistically.
Nvidia is the GOAT when it comes to AI, and the problem with Nvidia is the price; let's hope others like Intel catch up in AI technology.
@@shaufiqfauzi6640 Not the only problem.
IMO the bigger problem with Nvidia is the closed ecosystem and the restricted access to the hardware (like the vBIOS).
The one thing AMD WAS* doing that Nvidia and Intel are not: low-level access, hackability and openness.
However, that doesn't mean that Nvidia didn't and won't keep dragging its competitors' prices up in its wake. Over $1k is the norm now for GPUs and the glory days are behind us; we have been here before.
*Got reminded in a reply that AMD from RDNA3 (and onwards) has no low-level SW tools. You need a HW programmer to read-modify-write the vBIOS over the 8-pin; you do not own their cards anymore (briefly checked for myself). Currently I see no reason to buy anything other than Nvidia for modern GPUs. 💹
@@IcecalGamer Nah, the biggest benefit Nvidia has is a fully supported, seamless, functioning and STABLE development environment
@@shaufiqfauzi6640 naive child thinking they want to catch up in prices and sales 😂🤣
As a Graphic Programmer, I find the technical breakdowns to be interesting and well put together.
I have a good impression of Battlemage and the Arc team, and I hope they continue to share their progress with you and the larger tech community as they
continue on their path to building something great.
Thank you VERY much. It's been more than a couple decades since I was conversant with CPU architecture, but I really appreciated the level of detail you presented here, even though a lot of it is over my head now.
It's a lot of work digesting technical details, especially when claims often obscure specifics.
This was inspiring, if scary.
All those "wastes" means the initial cards had quite a lot of power and they still don't utilise that power fully. For a first iteration this is very good.
Reminds me a lot of Fermi, lots of power but it was used incorrectly and inefficiently, Kepler fixed it a bit, but Maxwell made it a lot better and Pascal perfected it.
They have been making iGPUs for a decade. The fact they did not have fast clears is actually baffling.
@@Salabar_ Their iGPUs were purely for base functionality and nothing else. It's still stupid that they weren't done efficiently, but if your beater car that's held together with duct tape gets you from home to work and back without reasonable expectation of explosion, do you _really_ need to fix it?
But yeah, it's still stupid lol
That’s what happens when you treat GPUs like CPUs, in the CPU world everyone bows down to Intel and the software is designed around the hardware. In the GPU world as far as Intel is concerned nobody on the software side will change their course for them. They had to throw a lot of hardware at it because they refused to follow suit and just wanted to do their own thing in their Intel vacuum. Now it seems like they realized, ohhh maybe we need to read the directx documentation and instructions and maybe talk to the game engine developers BEFORE we design the hardware and see what developers need and what shortcuts are available in the DirectX api. They were doing so many things that didn’t need to be done and having to emulate certain things, the overhead was phenomenal both on the GPU and the CPU. This gen looks like a distilled version of Alchemist. Considering AMD is just doing a refresh of 7000 series this time Intel might actually be competitive.
@@Salabar_ That's true. An iGPU without fast clears is baffling, considering the whole point is to save on power.
I wouldn't expect as much performance optimization, since some optimizations require higher power, but I'd expect stuff like fast clears and "is zero" flags to be in broad use just for the sake of reducing transistor switching (the main source of power draw and heat generation).
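Since a couple of comments here mention fast clears, here is a toy software sketch of the idea (purely illustrative and my own simplification, not how Intel or any real driver/GPU implements it): instead of writing the clear color into every pixel, the hardware just flips a per-tile "cleared" flag and only materializes real pixel data when a tile is actually touched.

```python
# Toy illustration of a "fast clear" (assumed 8x8 pixel tiles, made-up structure).
TILE = 8

class TiledRenderTarget:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.tiles_x = (width + TILE - 1) // TILE
        self.tiles_y = (height + TILE - 1) // TILE
        self.pixels = [[0] * width for _ in range(height)]
        self.tile_cleared = [[False] * self.tiles_x for _ in range(self.tiles_y)]
        self.clear_value = 0

    def fast_clear(self, value):
        # O(number of tiles) metadata update instead of touching every pixel.
        self.clear_value = value
        for row in self.tile_cleared:
            for tx in range(len(row)):
                row[tx] = True

    def read(self, x, y):
        if self.tile_cleared[y // TILE][x // TILE]:
            return self.clear_value        # tile never materialized: return the clear color
        return self.pixels[y][x]

    def write(self, x, y, value):
        ty, tx = y // TILE, x // TILE
        if self.tile_cleared[ty][tx]:
            # Materialize the tile with the clear value before the first real write.
            for py in range(ty * TILE, min((ty + 1) * TILE, self.height)):
                for px in range(tx * TILE, min((tx + 1) * TILE, self.width)):
                    self.pixels[py][px] = self.clear_value
            self.tile_cleared[ty][tx] = False
        self.pixels[y][x] = value

rt = TiledRenderTarget(64, 64)
rt.fast_clear(0x202020)          # touches 64 tile flags instead of 4096 pixels
rt.write(10, 12, 0xFF0000)
print(hex(rt.read(10, 12)), hex(rt.read(50, 50)))
```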
geez you guys are BUSY! thank you for keeping us updated but please! absolutely take things at a healthy pace!
The internet is full of such nice people.
I am more likely to hand Steve a Red Bull and controller at 3 a.m. and, in the voice of Godfrey from the movie Kingdom of Heaven, say “pick it up, let’s see what you’re made of”
4 am in Taiwan and Steve still out there getting the scoops! friggin awesome
RIP to your voices after this trip, godspeed
Removing hyperthreading makes total sense. Squeeze extra juice out of those 8 or so P cores, which is all games and single threaded tasks really utilize anyway, and stack the E cores for parallel workloads that need them. Win win. Cool to see these architectural changes, sounds promising.
Hyperthreading has been, historically, the single most effective way to squeeze extra performance out of your big cores.
@@tormaid42 Well, up until it gave you the security issues that are most common nowadays with some Intel products
"Removing hyperthreading to boost efficiency" reminds me of Bulldozer "fake" cores...
@@tormaid42 right but now there’s a way to squeeze even more performance out of the big cores for tasks that can only utilize them *and* retain the benefits of hyper threading with small cores
@@tormaid42 That is true, but there are two different kinds of "squeezing out performance" that are relevant here. Hyperthreading squeezes out more perf per core for parallelizable tasks that can scale up the number of threads, but does not give you faster per-thread perf. What Intel is trying to achieve here is to squeeze out maximum per-thread perf. I.e. give you fewer P-core threads, but maximize the execution of single-threaded (or lightly threaded) tasks on the P cores. Which should theoretically be especially helpful for high-fps gaming.
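To put rough, made-up numbers on that distinction (illustrative only, not measurements): if one thread alone gets 100 units of work done on a P-core, two SMT threads on that same core might together reach something like 120-130 units, but each of those threads is now running well below 100. Dropping SMT gives up that modest aggregate gain in exchange for keeping every P-core thread at full per-thread speed, which is what latency-sensitive workloads like games tend to care about.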
"Stealthily handed to us" Did tom bump into you at computex, say "Hi steve, looks like you dropped something" and passed you that with shifty eyes.
hahaha
This video goes hard being released the same time as the Intel own video announcement releases 😮
Yeah, was this a mistake by them? An hour early maybe?
One thing I love about Steve is his diligence and work ethic. Reminds me of how competitive Academia can be. Steve is smart and hardworking. You can tell he’s always been this good
@@eugkra33 Considering they were stealthily handed a prototype/demo motherboard, I wouldn't be surprised if they got priority, as an outlet.
@@eugkra33or the media embargo expired immediately after the talk, and intel was just a bit delayed in releasing their own announcement video (they didn't tell people the video would drop at a specific time, and short delays in those situations are common)
Was embargoed!
I really hope they can pull off Battlemage. They've been really stepping their game up with the drivers, and I'm hoping that enthusiasts hop on board with this generation and then within a couple more there's actual mainstream adoption. At least at the middle and low end; the super high-end is just completely obliterated competition-wise.
There's only two Battlemage models coming. One is budget, the other is midrange.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Their Intel® Graphics Driver 31.0.101.5522 is the best I have seen yet, really good release.
@@kcwilsonii Already on 5534. A bit ahead of this one.
Wow. Listening to those Battlemage improvements is like listening to new GPU architectures from around the year 2000.
That's precisely because they are. They unironically cite samplers being able to read/store more memory between caches than cards had a few years ago; they speak of culling for fixed pipelines, as if everything didn't switch to programmable pipelines about two decades ago; and not to mention the filtering nonsense everybody has been baking offline using whichever method they like, instead of computing at runtime using the one, incomplete, and mostly unused Vulkan extension for different sampler filtering techniques. Oh, wow, you fixed up your trashy depth merger stage? Congrats. I'm pretty certain Intel had that figured out the last time they tried to make an x86-based GPU, idk, half a decade to a decade ago. I'm pretty certain Microsoft's software rasterizer, llvmpipe, and SwiftShader too had figured out how to implement decent enough depth merging using regular ole SIMD 10-15 years ago. Congrats, Intel, you're really showing us who's the boss. But at least they now implement standard D3D10+ indirect dispatch features instead of acting like some CPU-bound OpenGL 1 card.
@@reecesx every GPU still does fixed function culling before fragment shading, IIRC RDNA2 or 3 had improvements to it too
And if your rant has any truth to it, the fact Intel got pretty close in performance with 20 year old tech promises great improvements for the future.
Appreciate the late-night effort and effective coverage, GN Team. Thanks!
Ordered an Arc A580, I’m excited for Intel GPUs!
You're not going to be disappointed
A750 user here, my intel GPU has been really good so far! Good on you for buying one!
@@ofbaran I plan to replace my 1060 with an A750, is it worth buying for under 200 euros?
@@Arejen03 Yup. This card so far outperforms the 3060. Over here in Europe it is a lot cheaper to get this card somehow, I am not sure why, but it has been a really great purchase. If you can find it at 200 euros or even less, don't miss that deal. I bought mine for about the same price
@@ofbaran Yes, I'm from Germany; I recently saw a discount of around 194 euros for this card
Great work, boy these videos get more detailed and technical from one to the next. Love to see it, great way to know something other than the number of new models, naming and PPT "chart colors" :D
I'm really happy to see Intel improving. I have been really impressed running Ubuntu on an N100. Those E cores are great.
the i3-n305 looks like the ultimate portable CPU.
Thanks for tying together the different contexts, Steve.
Nice, it is 4 am here as well. Can't sleep. Thanks for the bedtime story, GN :*
Thanks for the coverage on this GN, this is genuinely exciting news and I hope Intel is able to bring competition to both CPU and GPU markets!
Loved the hint for the simulation theory at 7:34! xD
GG Guys. I was curious about the setting and you're all going the extra mile!
"Buzzword warning for AI". Thanks, I appreciate it.
That kind of talk at 4 a.m. deserves respect! Super clear recap!
Thanks Steve
4 am is crazy... and you managed to push the video out in 7 hours?! Thank you for all you do in the hardware space!
Architecture changes sound promising. Hopefully these hardware changes translate to real-world performance, and hopefully you're right about it having better compatibility and out-of-box gaming. Consumers need a better Battlemage
Can’t wait for battlemage. Thanks for video as always!
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Hope it's available in large numbers
video lighting giving tropical hostage vibes lol much love to the GN team
This all sounds really impressive!
I can't wait to be underwhelmed by independent benchmarks :/
Yeah. I bet if they knew they had some super good and powerful but also cheap GPU coming out, they would certainly flex about it. What I gathered from the walls of super detailed information is that the new Intel GPUs are going to be a bit better than the previous Intel GPUs.
CPU market: We're done with Intel's BS
GPU Market: For the love of god, please save us.
To be fair Lunar Lake seems awesome if what intel showed and said is true. Even the E-Cores are great now.
I don't think you're well informed, the Intel Core 14th Gen series is great, with better performance, power efficiency and performance per watt!
And they don't run hot either!
@@rrrr3666 Well, considering the i7-14700HX is the only 14th-gen CPU that has changes to the E-cores...
Battlemage sounds cool! Intel seriously has an amazing team in their GPU department. I just hope that they can make things work out. They are stuck in a situation where game companies are usually too lazy to even add upscaling when it's built into the game engine. Intel has to make their cards work with games, whereas the other two companies are in the opposite situation.
I have no idea what 87% of the things said mean. But I’m still with it. Hoping watching all these videos will give me meaning contextually.
This was great coverage! I tend to skip on-site videos, but this was top notch!
We need more GPU competition. I hope they are great cards.
Thx for putting all the work in.
Steve's working overtime today!
Great review, best breakdown on the net I have seen. Looking forward to Arrow Lake and Battlemage.
Wow. Great reporting and sounds like some impressive engineering from Intel. Thanks GN
Does look like Intel is making rather large changes from mistakes/post Alchemist release growing pains. There is a benefit for being a latecomer which is quick gains from gen 1 to 2. Hopefully more info about Battlemage's (assumed improved) HW AV1 encoders/Quicksync, since those would be exciting for upload limited streaming, Steam In-home Streaming, and HTPCs/local streaming servers
I'd expect those gains even without being a latecomer. 1st gen is almost always "get it out the door." Plus, the driver team almost certainly made a list for hardware as they were going through things.
Not only is 2nd gen gonna be a huge step but 1st gen was already beating its competition in quite a few sectors
I actually understood what you were talking about
(3d graphics programmer in a past career)
Good gravy. The fact that they're measuring their uplift as a MULTIPLIER, rather than percentage uplift, is crazy. I hope their drivers are much better able to take advantage of the new hardware, compared to Alchemist.
their drivers are already good.
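For scale, with my own made-up figures rather than anything from Intel's slides: a 1.5x multiplier is the same thing as a +50% uplift, and a 3x multiplier is +200%, so vendors usually only switch from percentages to multipliers when the percentage number would be in the hundreds.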
and I was just thinking just how freaking impressive it is that Steve can process all this information and give a faultless presentation without even the hint of a note. Bravura performance!
Started with the little Arc A380 for 75 bucks open box at MicroCenter; built my niece a PC after Christmas and that is the GPU inside it. Then the 3-fan Arc A750 OC from Sparkle in a build for me. Got a new Arc A580 on a shelf. With deals, all three under 500 bucks. Because I am not an NV fanboy. The first NV GPU I bought years ago instead of ATI, a GeForce 256, worked for 5 minutes after install then crashed. Took it back and was charged a "restocking fee." Still have a few 3dfx Voodoo cards in my stash.
This was a really well-done summary of what was clearly a very information and buzzword-dense presentation.
I'm excited to see the Battlemage dGPUs.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
@brugj03 very weird.
Literally repeating this on every post that uses the word. You're the parrot.
@@noneyabizz8337 You noticed.....weird indeed.
@brugj03 almost as weird as you not knowing it's cracker, not cookie.
Fun fact about culling: more advanced culling can also skip rendering the part of an object you're not seeing. Take a sphere, put it in your view; normally the render engine has to render the whole sphere even though you're only seeing half of it. More advanced forms of culling can render only the front half of the sphere, meaning until you view the rear of the sphere directly, the game won't process those triangles. But like I said, this is a more advanced form of culling and I don't think many modern games today use this (though I could be wrong). I have seen this used mostly in animation software, like Maya and Blender, to save resources while animating.
back-face culling
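For anyone curious, here is a minimal sketch of the back-face test named in the reply above (a toy example of my own, not taken from any engine or from Intel's pipeline): a triangle whose normal points away from the camera can be dropped before rasterization.

```python
# Minimal back-face culling sketch: cull triangles facing away from the camera.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def is_back_facing(v0, v1, v2, camera_pos):
    """True if the triangle (counter-clockwise winding) faces away from the camera."""
    normal = cross(sub(v1, v0), sub(v2, v0))   # face normal from winding order
    to_camera = sub(camera_pos, v0)            # vector from triangle to camera
    return dot(normal, to_camera) <= 0         # facing away (or edge-on): cull it

triangles = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),   # faces +z, toward the camera
             ((0, 0, 0), (0, 1, 0), (1, 0, 0))]   # same triangle, opposite winding
camera = (0.2, 0.2, 5.0)
visible = [t for t in triangles if not is_back_facing(*t, camera)]
print(len(visible), "of", len(triangles), "triangles survive culling")
```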
It's crazy to think this is only Intel's 2nd discrete GPU release. Wild how fast they're progressing.
In terms of per-generation changes there's a lot, but until we see independent reviews we won't know how much that actually translates to real world improvement. It's also important to note that the interval between Alchemist and Battlemage is quite long, if they were releasing with equivalent timing to Nvidia/AMD then it should have already launched by now. At this rate 50 series GPUs will be starting to release by the time they actually land.
@@bosstowndynamics5488 AMD ? Do they make GPU's? Hadn't noticed :)
@@tilapiadave3234 AMD has made GPUs since they bought ATI back two decades ago
Intel has had several discrete GPU releases in their history…
The wildest part is that first-gen Intel GPUs have faster ray tracing than AMD's 3rd(?) try
Thank you, Steve and everyone at GN for your coverage of Computex. It's the most in-depth and consistent coverage!
You know you're well respected within the industry when companies like Intel give you exclusive looks at new products.
The main reason I love this channel so much and respect their opinions is their morals and ethical policies. It's so hard to trust what people say in this industry so thank you again.
Also, love my shirts from you, they are super soft and great quality!
Pat Gelsinger personally came down to Computex handed the Battlemage Gpu to GN and said "Thanks, Steve."
Fractal Torrent Case ad is so familiar, it's nostalgic to the channel now. Like watching Plinko on Price is Right. Warm and fuzzys
Ditching Hyperthreading is GOOD news, even if it will be a hard sell to the "Bigger number more better" consumers.
The second thread on a core with hyperthreading was ALWAYS slow under normal conditions.
If an E-core is 50-55% as fast as a P-core, the second thread on that P-core was operating at 10-35% as fast depending on the workload.
And that SMT feature takes up die space. I think about as much as an entire E-core does(I need to check this...).
Then you have problems with scheduling, SMT vs. non-SMT cores.
SMT was useful during a very specific age of computing.
That age has passed now that we have REAL 2-digit core counts.
I'd LOVE to see a CPU with 6x single threaded P-cores and 12-24x E-cores.
nah. SMT doesn't take anywhere near that much space. It's mostly just a bit more register file plus some additions to the scheduler. Not free, but nowhere near an E-core. And the point of SMT was never more compute, but higher utilization. Without it, it becomes much harder to keep the cores fed in memory bound scenarios, which is most scenarios. I'm curious how Intel will handle that.
@@squelchedotter Hrmm.
Well, the registers do take up a significant portion of a core.
And an E-Core is usually only 10%-20% the size of a P-core. So maybe the comparison isn't that bad.
After further reading, the biggest reasons seem to be...
1. Spectre/Meltdown mitigations have drastically reduced the effectiveness of SMT. So there is even less benefit to having it.
2. SMT is less power efficient at most tasks than the same performance worth of E-cores.
Thanks Steve, excellent job can’t wait to see the future architecture pieces, especially on arrow lake
I'm not super into upgrading and GFX cards and such, but I am so happy there's a 3rd player now who's not giving up and might shake things up a bit :)
that was a really great video. excellent in character and content. great job guys!
Hopefully the compatibility claims are true. If Intel Battlemage works as well as RDNA on the driver side, a $300 3080/4070 equivalent with 16 GB of VRAM would fly off the shelves.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
Battlemage, polly wants a cookie.
If it is good, it will be much more expensive… we have to remember Alchemist did not bring in money. Intel needs money from Battlemage, so it will be much more expensive than Alchemist…. if it is any good!
Probably not realistic pricing for that performance this year, 379 is the cheapest I could see
@@defeqel6537 Nvidia has trained you well.
@@MechAdv it's just a market reality: if the competition is much more expensive, Intel would just sell everything and then have nothing to sell, and thus lose money. It's more beneficial for them to sell less at a higher price
Thanks for the hardcore coverage this week guys!
Ditching SMT sounds like a real "Hail Mary" move. If the single-thread performance or efficiency gains don't match up with the lost performance, Intel's going to be in a world of hurt not just from AMD, but from all the other ARM licensees who would love to finally get some real share of the laptop market.
Gets rid of their security problems mainly
@@CD-vb9fi Hyper-Threading IS Intel's implementation of Simultaneous multi threading, it's just a marketing name, like how AMD calls their implementation of the x86-64 ISA "AMD64" and Intel calls theirs "Intel 64"
@@tuckerhiggins4336 That's exactly what it is. They claim it's about power or whatever but really HT continues to provide an endless stream of hardware security flaws and they don't know how to fix it.
@@tuckerhiggins4336 Only part of them. Out of order execution that doesn't need software mitigations also probably gives massive improvements.
However, it looks like they're taking the space used for HT and doubling down on even more prefetch/out of order logic.
If I'm remembering right, at its base HT is just a 2nd instruction decoder and set of registers, with some muxing logic so that if one part of the CPU isn't used by the 1st thread, or the 1st is waiting on a fetch, then the 2nd will use the freed resources.
@@_droid no one does, AMD is also affected by them.
But primarily, HT is not needed at all, it was a stopgap solution to a problem that last existed ages ago. Currently it makes no sense to keep it for the amount of die area it requires.
Great coverage. I was really waiting to buy a Strix Point laptop later in the year, but seeing all these amazing architecture improvements means Intel's laptops, I think, are back as an option once performance figures are out. It is going to be great to have all this performance on the go.
Lunar Lake is a godsend for Intel for handhelds, since it is the equivalent of their former U SKU.
Careful what you wish for younguns. Intel pioneered pricing like Nvidia does. Help us if they ever make the best gpu.
Highly unlikely that they will catch up to, or even exceed, AMD or Nvidia any time soon, but the more competition the better. Despite the prices, Nvidia's GPUs are kind of in a league of their own still.
Not wrong, but that's why we need to hope for them to be competitive, but not just run their competition out of business.
But you PC builder folks love to overspend.
strong competition is always good, if it were intel or amd in nvidia's place right now i'd wish for the same thing. ideally at least two of them are trading blows somewhere
nobody pioneered charging $1700 for a gpu, that was all nvidia greed. it probably costs them well under $500 to make one
One of the best things for me about Intel entering the GPU market is that I am learning a lot about how GPUs work.
Life of a triangle - NVIDIA’s logical pipeline. High level view of the NVidia’s GPU architecture
Triangles are precious. High level view of AMD’s GPU (GCN) architecture.
Understanding Modern GPUs
Those are good for a casual reader.
@@rattlehead999 I'd also add Render Hell to that list.
@@jcm2606 I concur, but that's a bit above a casual read.
This is so exciting! Happy that the rumors of them scrapping the GPU division were false, cause I want to own an Intel card one day.
GN.....You guys are killing it. Keep up the good work 🤙🏼
Sounds like AI really stands for "All In." All the companies we're used to talking about are becoming "all in" on AI to the point that soon there will be no PC parts we can buy. We'll just be leasing AI cores on an AI cloud to do AI tasks and play AI games.
tbf many AI companies, including OpenAI, just can't get accelerators fast enough to meet their growth, and the main bottleneck is advanced packaging (the step that joins the compute die and its HBM memory stacks into one finished part), not the silicon itself.
So much interesting and cool stuff coming out of all this, something to take our minds off the madness that is happening to us all across the world, on all levels of our lives!
5 obscene user profile pictures in
A new record? Really? Isn't that business as usual?
PC component parts sure turn some people on huh. 😅
excellent explanation, useful content, as always, thank you!!
Intel naming scheme is funny...
X^e
Ryzen AI is even funnier
Almost as funny as your naming scheme 😂
0:19
When even the big companies notice y'all doing fantastic work, and sneak you a little reward.
Steve keeps being the absolute GOAT
I’m actually REALLY fucking excited for this and have been waiting for sooo long for updates to Battlemage!!
Damn, INTEL GPU architecture out RTX'ing AMD... That's rough. Only to fall victim to driver issues just like AMD used to, and that's even rougher.
what a wacky timeline we're living in
Because the need for it doesn't exist yet; ray tracing will only really matter once it becomes the norm for console games.
It's coming, but slowly. The Radeon team already has a whole new architecture for RDNA 5 that integrates RT accelerators better.
With good RT performance relevant to just 2-3 AAA games a year, most of them bad games anyway, it's often not even worth turning RT on.
AMD was aiming to get closer to Nvidia in raw power, but then Nvidia made the jump to RTX. Intel came in when RTX was already a thing, so they could aim for that from the start, at the cost of poorer driver support. At least that's how I look at it.
AMD wouldn't have to worry about being out-anything'd by anyone if they didn't buy out ATi.
Nah dude, AMD drivers are fine. Otherwise Xbox and PlayStation would really be in trouble now. I really wonder where those rumors come from.
Thank you GN for putting this video together and once again protecting us from watching an hour of presentation where every second word is "AI".
AI killed the gaming golden age
Greed and botting, low inventory, crypto mining AND AI killed gaming golden age
@@sookmedic1959 let me correct myself: AI dealt the finishing blow to gaming
@@obeyobay9146 And lack of competition. Nvidia has been the underdog to Radeon for many years, until they weren't.
Radeon helped save AMD, which is great as we're not stuck with Intel as a monopoly pushing out +5% improvements every year.
Just imagine if AMD had been able to release the Vega 64, RX 590, and RX 570 like 6 months before Nvidia released the GTX-10 series. Now imagine they had released the Vega 7, Vega 56, RX 480, and RX 470 several months before the GTX-10 Series (1080Ti, 1070Ti) refresh. And imagine if AMD had then been able to release the RX 5700XT, 5600XT, and RX 5550XT again like 6 months before Nvidia released the RTX-20 Series. Nvidia would have been on their toes, and prices wouldn't have gone up but would actually have come down.
But that all hinges on AMD not just being able to produce good products, but doing so early, doing so with volume, and being able to advertise it effectively to capture notable market share. That's what matters, not theoretical ideas, but realistic conclusions.
The PC golden age was 1998 to 2011, the console golden age was 1996 to 2012
@@rattlehead999 To be honest, gaming on consoles was usually better than on the PC (from the NES to the PS2).
The PS3 era was the pinnacle for consoles, and things went downhill very fast after that. Call it the transition period. But during this era we did get the Golden Age of handheld consoles (OG GBA, DS Lite, PSP Brite, New 3DS, PS Vita Slim).
PC gaming really hit its "Golden Age" around 2011-2013 (think i5-2500K/GTX-660 or i7-3770K/HD7970) and up to around 2016-2018 (i7-8700K/GTX-1070 or R7-5800X/GTX-1080Ti). Things hit a transition period with the GTX-16 and RDNA-1 products, went downhill with the RTX-3000 and RX-6000 products being priced 3x higher from 2020-2022, and have gone further downhill with RTX-4000 and RTX-5000; it doesn't look like a correction is on the way. Maybe this can then become the Golden Age of phone gaming (see AAA games on the iPhone, lots of controllers, high-end graphics, and mature emulation).
Great no BS summary of some nice innovations.
With the BS prices being charged for MBs these days, Intel needs to re-enter the MB market. No RGB nonsense, just a solid base for an Intel system. CPU/MOBO/GPU bundles for the win!
Yup, no reason for a quality motherboard to be over $120-150
I remember when we announced the IBM System/360 Model 91 with out-of-order (instruction) execution, translation look-ahead buffer, etc. in *1967.*
way ahead of its time...
unfortunately, memory density back then wasn't high enough to fully exploit the potential of those technologies.
I think Intel did exactly what we all hoped they would. They had a V1, which was rough, and they learned an absolute truckload about what they should do to make V2 a real contender. I am really looking forward to having a 3rd player in the graphics space. I wonder if we'll ever see a console powered by intel graphics... 👀
AMD really needed the business so I would guess they gave Sony and MS a sweetheart deal. Is Intel ready to make an even sweeter deal?
Cool. Really enjoying my A750 and a ton of games I bought with the savings.
I'm kind of getting the sense that if the console market dries up, AMD will probably be hurting a bit in the GPU division, and Intel might eat into the tail end of their GPU sales.
Don't lose sleep over the console market. It isn't going anywhere, despite what those who cheerlead for Microsoft would have you believe.
Great work guys. Thanks!
I'm glad that Intel is entering the market. I am sick and tired of paying $2000 for a top-end video card; even $1000 is pretty ridiculous.
If Intel hits the same performance as the $2k card, what price is it worth? What is the premium Nvidia can gain based on their history?
Intel's GPU won't be top end.
Buy a cheaper one then. You don't HAVE to have the best.
Ez, I already got enough money for literally 100 5090 at 2K. It's peanuts.
Don't worry!
If Battlemage is good, it will be expensive. If it is bad… it stays cheap…
All in all I expect 2x the price compared to Alchemist!
THANKS STEVE
You could hear the chirps of Chinese Bulbuls in the background, which they only do in the early morning.
This feels like the first real innovation Intel has had for a long time, honestly. Glad to see competition giving some motivation finally.
You forgot to put up the slide when talking about it. Skymont has 2% higher IPC than the previous gen P-core (Raptor Cove).
This channel is something special.
man i wish more modern cases had 5.25" bays again :( at least 1 or 2 would be nice
Same here, though I have accepted the death of 5.25" with grace. The only people buying BD drives are rippers and the prices on those have gone from $40 to $80.
Agree!
for the 4 am experience, thanks steve 💙
Snake oil my favorite condiment
Please be more specific. Intel is certainly fudging things on the CPU side, but I'll absolutely believe their Gen 2 GPU is going to be radically better than Gen 1. Both because Gen 1 was just get something out the door, and because they quickly learned exactly what they were missing on the hardware side when working on the drivers.
I use it as lubricant, you have to when you're getting fkcd constantly.
cry
@@arthurmoore9488 not OP but my guess is that it's a jab at intel's messed up "snake oil" slide.
Bet you didn't notice how Zen 5 IPC is pushed up by the 35% uplift in AVX512
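Purely as an illustration of how one big outlier moves an average (the per-test numbers below are made up, not AMD's actual data), here's the arithmetic: a single AVX-512-heavy result drags a geomean IPC uplift up noticeably.

```python
# Hypothetical per-workload IPC uplifts (fractions), NOT real Zen 5 data:
# nine "typical" tests gain 8%, one AVX-512-heavy test gains 35%.
from math import prod

uplifts = [0.08] * 9 + [0.35]

def geomean_uplift(xs):
    # Geometric mean of the (1 + uplift) ratios, expressed back as an uplift.
    return prod(1 + x for x in xs) ** (1 / len(xs)) - 1

print(f"geomean with the outlier:    {geomean_uplift(uplifts):.1%}")      # ~10.4%
print(f"geomean without the outlier: {geomean_uplift(uplifts[:-1]):.1%}") # 8.0%
```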
Very happy a770 16gb LE owner here :-)
Happy happy :)