If you like this, watch our video discussion animation error and flaws of frametime testing as we know it today! ua-cam.com/video/C_RO8bJop8o/v-deo.html To further support our interviews and deep dives, consider grabbing our metal emblem pint glass! store.gamersnexus.net/products/gn-3d-emblem-glasses or our ultra comfortable hoodies! store.gamersnexus.net/products/warm-ultra-soft-fleece-zip-hoodie-tear-down NVIDIA latency technical discussion: ua-cam.com/video/Fj-wZ_KGcsg/v-deo.html Or our Intel Arc 2024 revisit: ua-cam.com/video/w3WSqLEciEw/v-deo.html
re drivers you should really have on one of the people working on the MESA linux graphics drivers, it's like one of the things making the steam deck possible
Very interesting! I wonder if it would be possible for Tom to get a video of some engineers actually using these tools to show us what they do. Like maybe a behind the scenes video step by step of an actual problem they encounter in a game and the process of what they do to fixing it using software and testing. Maybe that's a bit intrusive but it could be an interesting video. Thanks for the video Steve and GN team!
With Tom having worked at NVIDIA and now at Intel in this very unique position: how do different choices at the hardware level impact the software and vice versa? And on a personal note: how hard is it to work around previous knowledge that you aren't allowed to use due to patents and NDA's + could your previous employer find out if you did so?
. Hey Steve, (bro) when are you planning to break this system? We're still waiting for an in-depth investigation into the actual prices for these fast, fancy calculators of GPUs in mass production. Correct me if I'm wrong, but isn't it mind-blowing that the iPhone 14 only costs $10 each in mass production out the door?
I wish all GPU vendors were more willing to talk about this kind of stuff. This is very interesting and informative stuff and I applaud Intel for willing to talk about this. Makes me want to pick up an Intel card just to tinker with it.
Hopefully they will talk about this stuff more now that they see there is interest. I think they just assumed people aren't interested in the low-level details as much (specifically on the software/drivers side). Nvidia and AMD have had engineers on this channel before (and others) to discuss other technical info, though. That recent video on latency Steve did with Guillermo Siman of Nvidia was also very informative, as were the videos recorded in the AMD labs in Austin, Texas.
This is amazing. I work as a low level engineer in the reverse engineering field, and these technical deep dives are AMAZING. Its not an exaggeration to say that the engineering content you are putting out is legitimately one of the things people will be watching for years for knowledge of certain software topics. Thanks Steve & team, and you'll always have my support!
the software part is always a game of "speaking the right language" but the low level is always so interesting. like having this API have X registers or reading this text and inferring this information to perform "that" instruction. I work in retail, but low level circuits and programing has been my hobby, I remember building my first dual core circuit 10 years ago and it took me 2 years to figure out how to coordinate just to figure out how to share the information from RAM and match it with the program counter register. and that's is when I was using my own "coding language" it was very hardware specific language. watching those deep dive from giants like Intel is so much fun!
I'm a simple man. I see Tom Peterson in a GN video, and I watch the entire thing sitting forward in my chair with both index fingers pressed against my lips, brow furrowed, learning intently. Love this series. Never stop. There's no such thing as too much Tom on GN's video catalogue.
I honestly just love this miniseries. Im currently studying to become a software engineer and watching your videos makes the learning process very interesting and fun. Thanks to you and Tom for bringing us these videos and I hope we'll see more of them !
Former Mod developer here, working with source engine as 2d/3d artist. id like to share some basic views onto game optimizations. Petersen did a great job explaining deeper driver/engine Level optimizations, but this is only one part of it. Its not just the engine or driver, its also the assets, and this is also where some studios just drop the ball. you want to load a gpu evenly, but you can absolutely choke a gpu when overloading it with a heck of a lot of one single workload. that might be one specific shader, or absolutely insane geometry load, or stuffing the vram. 1. 3D models. models are basically a wireframe that form a body. the possible level detail of a model is determined by the number intersecting lines of the grid. patches formed by the grid are refered to as polygons (there are different types like ngons, quads, not gonna touch on that). a high polycount gives you a higher resolution mesh with potentially more detail, HOWEVER you can have a model with millions of polygons with no details at all. the polycount is direct geometry load on the gpu. the higher the polycount in the scene, the higher the load on the gpu. once you overload the geometry pipeline with absolutely insane levels of geometry, the gpu performance drops off a cliff. this is basically what tessellation does - it takes a simple wireframe and bloats the polycount, increasing mesh fidelity, but blasting the gpu with geometry load. this was nvidias big trick to choke radeon gpus in certain games using gameworks or hairworks. nvidia massively increased the geometry capability of their gpus starting with fermi, and tried to get studios to use their tessellation libraries found in gameworks/hairworks that would absolutely obliterate radeon gpus with stupid high amounts of geometry load. notable games are the witcher 3 with hairworks and crysis 2 with environment tessellation that does absolutely nothing except cutting radeon framerates in half - this is why you can find the tessellation factor option in the radeon and intel arc driver, it limits the mesh complexity scaling of tessellation. geometry load is the whole visible scene with all models and map geometry on the screen. so you want to keep the polycount of the rendered scene as low as possible, and you absolutely want to avoid wasting polygons on geometry that doesnt even need it. a cube can be done with a "single digit" number of polygons, or 2.000.000.000.000 without any visible difference. you can have thousands of the minimal polygon cubes on screen without breaking a sweat, while just a few of the bloated cube will make your gpu scream. modern gpus can handle a ton of geometry load, but this is not an excuse to just waste it. having a ton of unnecessary polycount puts unnecessary load on the gpu, and is a result of lazy mesh optimization. sure, you want higher fidelity models for better visuals, but you can assume that studios rather cut on time per model in exchange for worse client side performance. one technique, usually used in older games, is "backface culling", which basically deletes all parts of the model that the player is never seeing, cutting geometry load. there are possible artifacts when you can actually see the deleted backface, and the model looks hollow. today this is not done alot because gpus are pretty powerful, but there are situations where this should still be done but isnt. but dont worry, polycount isnt the only way to have detailed models, theres a way to simulate polycount with textures, thats why we will switch over to... 2. textures: textures in games have 3 main usecases. 1. give color and texture to a 3d model and to the map/environment, 2. decals and sprites, and 3. control shaders. like you already heard textures are relatively easy on the gpu, HOWEVER, this is not entirely true across the board. the taxing factor of textures is resolution, file size and function - hitting the Vram and the shader pipeline. resolution gives better clarity, but bloats filesize and you can combat filesize with compression. you wanna set a texture resolution that makes sense. high resolution textures that the player is viewing up close, low resolution where it doesnt matter so much, also use compression where possible. this applies for the albedo map, or diffuse map, which is basically the texture that gives you the color information. there are other textures that control shaders, like bump and normal maps, specular maps, phong shader maps, self illumination maps, parallax maps. these textures tell the engine what to do with which part of the texture. these can be greyscale or include different informations on each of the R G B channels, like a normal map (this is a very fancy bumpmap with accurate "3d" information baked into 2d space. normal maps are either hand made (legacy) or more commonly "baked" in a 3d application from a high resolution version of a 3d model to reduce geometry load by using a low poly model + normal map instead of a model thats 100-1000 times the polycount). so you can use a 2d texture to reintroduce "3d" details back onto a low poly model. the normal map acts somewhat like a textured rubber glove you pull over your hand. normal maps should not be lossy compressed, because it will introduce awful blocky artifacts when light hits the object, and normal maps usually also have an alpha channel (greyscale channel next to the RGB channels) with a different function, usually controlling a different shader, like specularity, which makes the normal map a chonker in file size. you dont wanna overdo normal map resolution, as it will eat vram for breakfast and very high resolution normal maps also put more load on the shader pipeline. a gpu can withstand a considerable amount of normal maps, but you can obviously overdo it. lets briefly talk about how you map texture space to models. in the 3d modeling software you "unwrap" or "uv map" the model into 2d space. imagine taking a roll of toilet paper and cutting it across the long side, now you can put it flat on the table and when youre done painting, you can put it back into a roll shape. its basically the same with 3d models, but theres a twist. with the toilet roll you have a 1:1 representation of the model and the texture, but with 3d models you set a texture size (for example 2048x1024) and then you place and scale parts of the model onto it. the bigger the parts on the uvmap, the more pixel space they get, hence higher resolution. now comes the kicker: to preserve texture size you map important parts big and unimportant parts, that are barely visible, but can still be seen, small. take a gun, for example - you want the regions close to the camera to be as high resolution as possible, but you also have tons of parts that just need color, but arent usually seen, or just are plain black that can be smaller. you can also use mirroring to cut uv map area in half for some parts. by not properly scaling the parts of the uv map you can end up with a very unoptimized uv map that gives you worse resolution to important parts while being twice the size. the impact of one bad texture set for one model isnt big, but consider a scene sometimes containing hundreds or thousands of assets. it adds up.
3. bad practice. now that we covered the general function and cost of models and textures, and the general need to optimize certain aspects to retain as much performance as possible, we enter the realm of wasted performance due to bad practice. there are shader effects that are very easy on the gpu, or very hard. and ususally there are multiple ways to do stuff. choosing the right way to do stuff can make or break a game. lets talk about reflections for a moment. there are two common easy ways to add reflectiveness via shaders, and this is via 1. cubemaps and 2. screenspace reflections. screenspace reflections take the actual content rendered in the scene and reapplies it - which makes it expensive to run, but quite close to the actual scene, but ONLY what is shown on the screen. out of screen stuff is blank - a game notorious for bad screenspace reflections is resident evil 2 remake in the police station, where objects would constantly blank out the reflection, because to the shader some parts of the scene plainly dont exist. the 2nd way to do it are cubemaps and they are so basic in function, that you can do it all day and it doesnt matter, because its basically a plain old texture. what is a cubemap? a cubemap is a texture or a set of textures that represent what something may reflect in a certain area of a map. its basically something like a skybox, but for reflections. these are non-dynamic and prebaked, which makes them insanely easy to run. now, if you want to give some kind of reflectivity to an object you can either do cubemaps, screenspace reflections or raytraced reflections, and each method being progressively harder to run. choose wisely. while a rifle scope might look absolutely stunning with raytraced reflections, 600 empty coke cans in an alley dont and will absolutely murder your gpu. raytracing is its own can of worms, and while raytracing the scene would have different implications, i just wanted to demonstrate that you can totally use the wrong tool for the job. lets touch on some different technical bad practice that you actually dont see - as theoretical example: hitboxes. you know why its called a hitbox - in early games a hitbox was exactly that - a simple box around a character that would register the hits instead of the actual "visual" model because its too complex, later hitboxes were a cylinder, now more sophisticated hitboxes are closely modeled after the visual model and look like a stick figure, made out of very basic geometry so its still easy to run. but what if you have very complicated models that need hitboxes and you run out of time... w..wou... would you just take the mesh of the visual model to use as a hitbox so you dont have to make a seperate hitbox mesh? nooooo, certainly no one would be insane enough to run a 80.000 polygon visual model mesh as a hitbox. right? RIGHT? not saying this is a thing, its just an example of under the hood insanity no one can see, but can make your 3D sidescroller the next crysis on steroids. and we have alot of other departments that all can do some really shoddy stuff, like animation and rig setup, mappers, coders, general shader setup etc. making games is very complicated. theres tons of stuff you can do one way or another. doing it properly takes more time than taking some shortcuts that trade developer time vs client side performance. blaming bad performance on the driver or the engine is only one part of the equation. you can do alot of bad stuff on the asset and coding side and, albeit having near perfect driver and engine optimization, the whole thing can just run like a brick. looking at games that barely hit 60 fps @wqhd on a 4090... lets just say i have some serious doubts regarding the technical quality of the games. the last game i had an in depth "game dev" look at was warcraft 3 reforged, and my god is this thing botched. some units dont even have proper textures, like the frostwyrm - its a placeholder texture in a finished game no one cared to fix or finish. having such assets in the game probably also tells you the shader optimization and coding isnt properly done aswell. thats the state of the industry, at least partly. dont be fooled by the hardware hunger of the games, chances are you buy expensive hardware to cross finance a sloppy development.
Excellent comment. Adds an to what was spoken about in the video. One issue I had was while Tom wouldn't mind any question asked, he still did dance around the Starfield optimization question (may not actually be the game specified, but the video made it so). Care to your take on it if possible?
@@ezg8448we didn't specify a game. There was no specific game. We put the reference in the video as an example but it's not like he saw the edit live.
There's a chance that the lessons learned from alchemist - especially driver development - came too late in battlemage's development to be included in the hw design. No idea how relevant drivers are in designing the hw, but if they are battlemage has probably been defined long before all the current development of alchemist drivers (and the lessons learned there) happened. Of course there will be a bunch of improvements regardless (otherwise why design new hw), but celestial is where I'd expect intel could be competitive enough to sell gpu's with reasonable margins. I'd suggest comparing alchemist die sizes and nodes to geforce/radeon cards with similar performance. Iirc, alchemist uses around 50% larger dies on a similar node. That's where they're probably losing a lot of money and making alchemist likely not very profitable if at all.
@@102728 The hardware team doesn't worry too much about driver-level stuff. They are mostly focused on making changes to the structure of the GPU logic to keep things moving through easier. Battlemage will make improvements to things like the dispatch logic Tom mentioned, as well as things like caching subsystems, predictive logic, or specific blocks like TMUs or RT accelerators. Battlemage will have several lessons learned from alchemist under the hood, but you are correct to assume they won't have everything right yet. The ARC team is fighting uphill into decades of industry progression, so I full expect them to need a few goes at getting everything down.
Impressively good info. The register spills are spills to memory which is why it’s bad and slow. The UMD is technically one, but yes, it is made up of DX12, OCL, Vulkan, etc components where the component isn’t necessarily loaded every call.
Great video and explainer! As someone who bought an Arc A770 last year and has seen huge improvements in its drivers; it gives me great confidence seeing Toms passion and drive for Intel GPUS and that the future of intel graphics can only be getting better with the likes of professionals like him in Intel. 👍
As Somebody who’s had two A770’s, (A770 Titan, A770 LE/Founders) this is true. I’d confidently say this card is faster than the 3060Ti and comparable to the RTX 4060Ti
im looking forward to the next gen ARC .. my current ARC 770 16gb is a great little card .. my main7800x3d and 7900xtx is a great combo but ive been using my 14600kf and ARC 770 16gb build and it has been chugging along beautifully !!
This was a great video! Thanks for having Tom on GN. I learned quite a bit in this one. On a separate note, I got an A770 from someone who gave up on Intel early. I let it sit in the box for 6-9 months and it aged like fine wine. It's really cool to see the performance getting unlocked by the hardworking people at Intel.
What I love about GN is that they respect their audience intelligence and have the guts to dive into more complex subjects such as this one. Thank you, Steve! This stuff is trully fascinating.
I was talking to a friend about drivers and how they work not too long ago, but couldn't really convey the finer details (not that I knew them to this extent anyway). This video explains it so well in "normal people speak". I really enjoy the way these video's and difficult topics are framed and laid out for everyone to understand. Awesome reporing! Edit: I really appreciate that Intel is fixed on letting people know what they are doing with Arc and really making a solid attempt at the GPU market. More competition = better and with AMD and NVIDIA so focussed on AI now, barely noticing their "ol' reliable" gamer market, we need someone to keep their attention invested there, before they get their marketshare snatsched away long term.
Only Nvidia has explicitly signaled that gamers are on the back burner, and their projected priorities sell it. Don't see that with AMD pursuing AI, at least not yet.
''Number one, RTX was invented for gamers and for RTX, the technology, the most important technology is AI. Without AI, we could not do ray tracing in real time. It was not even possible. And the first AI project in our company-the number one AI focus was Deep Learning Super Sampling (DLSS). Deep learning. That is the pillar of RTX.'' Jensen Huang 2023@@SelecaoOfMidas
Magic is definitely the best explanation for it. It is absolutely unbelievable when you think about how much happens to generate a single frame of a video game.
@@GamersNexus I mean, it is magic. We're making silicon runes, engraving them with magical sigils, then running energy through it to produce effects that can't be done in nature.
@@GamersNexus hi they need to do much more software works and maybe it will be good. maybe VERY strange but do something as fast as you can! in all people's animal man types are, evil also are. MY NOT FRIENDLY HALF BROTHER IS IN USA MORE THAN 20 YEARS! here the building water contains some spicy thing, bad for heart!!!! leg, and blood pressure raiser! and who knows what other! MANY TIMES! becasue of evil man types plumbers, many times comed to the building "water manhole" and done evil things. and nobody against evils not done nothing! many people not even want! many not can. because evil man types are in many places! mostly in police, plumbers, etc! more than 6 years MANY evil things are!!!!! I WRITED TO MANY PEOPLE, to Israeli jews, to others also, to vikings and to many others! MANY PEOPLE NOT HELPED NOTHING! MOSTLY HERE MANY EVIL PEOPLE ARE AND IN MANY NEAR COUNTRIES! EVILS NOT LIKE ME!!!! MY HALF BROTHER BATMAN OR EVEN BADDER MAN TYPE!!!! WAITING FOR MY MOM AND I TO DIE! MY MOM IS SO OLD SHE CAN DIE ANYTIME! i almost died few times batman not even writed back. LIAR, CORRUPT ROMANIAN not jew not viking, not hungarian. more than 20 years in Usa i never knowed his mobile number! but TOTALLY NOT FRIENDLY! poker player, office worker more than 50 he has blue eyes, different name, face, domain name owner of "HTTP://SONKOLY.EU" and he has others. IF YOU WANT TO HELP PLS DON'T WAST TO MUCH TIME ON EVIL PEOPLES! THEY NOT DESERVE EVEN THE TIME OF BETTER PEOPLE'S! I NEED TO GET AWAY FROM EVIL PEOPLES! BUT EVIL PEOPLES ARE IN MANY COUNTRIES IN MANY PLACES, MOSTLY WHERE JOB OFFERS ARE. HERE HARD TO SELL EVEN CHEAP THINGS LIKE CHEAP DIESEL AUTO, NEW BOOKS, OTHER THINGS. BECAUSE EVILS NOT LIKE OTHER PEOPLES! batman talking with others with BAD PEOPLES! they not like me!!!! and they doing evil things!!!!! I HAVE MONEY FOR FEW THINGS. i'm hungry but many thing contains other thing. i tried to find good peoples but hard because not many are. i can't communicate with many people on facebook, phone, in person, not friendlies, they are talking with others. many ignored me. many people evil even in simple things. i wasted many time, emails, chats, sending cv, phone talks, etc. evils gived job for evils without high school or with high school fast easily. if different people trying something time wasting. nefarious pigmans and other evils runined many food, water products, and many thing! evils against other evils not done nothing. many people wasting time many years! with one batman or even badder man type. PEOPLES HEALT, LIFE NOT GAME. HELP NEEDED MANY YEARS
This made me realize the genius work from Valve on DXVK, the amount of work it needed to be seamless to the games is outstanding. Thank you Steve for the amazing interview.
That’s what their new drivers legacy mode is. It would have been nice for them to just ship their dx9~11 dxvk library as a signed windows module and focusing on making dx12 Vulkan and OpenGL better. Less surfaces to maintain and find improvements for.
Most of that work was done by Microsoft. MS created DirectX HLSL to Vulkan SPIR-V compiler and released is as open source in 2018. HLSL is DirectX shader language. Spir-V is shader language created for Vulkan. MS created this compiler for Linux Subsystem for Windows (WSL). Compiler can translate Vulkan to DirectX and DirectX to Vulkan. DirectXShaderCompiler currently have more than 600 github forks created by: Valve, Sony, Apple and more
This video helps me a lot as a hardware person. I don’t do much on the software side. I think I know what it’s doing (educated guessing) but it’s nice to have an explainer of what it’s actually doing.
This was one of the most fascinating deep dies I've watched in quite a while. Tom is an excellent presenter and does a great job of taking really complex stuff and making an understandable.
I love these sorts of dives, it might not ever make us experts in the field, but it gives you a good sense of just how complex all this stuff is, this in turns allows you to get a better perspective on things.
I was *VERY* close to buying an ARC card for my new system build, and Intel letting their engineers participate in videos of this quality and detail increases the likelihood that my next GPU ends up being one of theirs (given reasonable performance, power consumption and driver quality). This is really, really good PR, and it's very interesting to watch. Thanks for covering both technical and business/process aspects, and for not dumbing things down. Kudos to both Steve for facilitating a conversation where he's sometimes slightly out of his depth (but still asks relevant questions!), and to Tom for very good explanations of pretty technical topics.
I can only repeat: Incredible respect for Intel in working on this stuff even after the initial failure and issues. I hope the current financial losses dont stop you guys from keeping it up and coming with nee Hardware at some point. I will buy it, no matter what. yYou guys worked hard to convince you dont just give up, and you will have earned that money!
Unless intel pay their fines and own their mistakes, I will never buy anything intel related.. “The European Commission has re-imposed a fine of around €376.36 million on Intel for a previously established abuse of dominant position in the market for computer chips called x86 central processing units ('CPUs')” from September 2023
As a game developer myself, I really love to see these conversations, it's not every day I have the opportunity to hear from hardware engineers every day! I do feel it's important to specify one thing I think wasn't super clear: just as every car uses an engine of some sort, same goes for games. The engine is the library that provides programmers like me with the tools we need to run the game in the first place. What Tom meant by "a lot of games use an engine" is "a lot of games use *publicly available* engines" (that's your Unreal and Unity and such). It's worth noting that a lot of game studios build their own if they have the resources to do so, there are quite a few advantages to this approach.
Love seeing Tom on camera, he always has something new and interesting to say. I do have a follow up question for when something is the driver teams responsibility to improve or the application teams responsibilty. Would it not be a more efficient use of your time to identify the reasons people are getting poor performance i.e. Tom's example of register spilling, and writing tools and documentation that tells app developers "Hey, you're doing this wrong". So that over time more people write better performing code, rather than relying on the driver team to fix that issue for you
Since the first time I saw Tap in a video I thought he should be the face of Intel. No marketing veil, just honest and open descriptions of the inner workings of a GPU. He genuinely wants you to understand what a GPU is and what it does like you would expect from a teacher. I mean, of course his appearance on certain channels is intended as some level of marketing, but it's not full of the normal bullshit where they ridiculously exaggerate everything about what they're selling. When he appeared on PCWorld I asked in the Q&A how Arc performed in VR and he straight up said "Do not buy Arc for VR." You would _NEVER_ see that from an AMD or Nvidia rep, and you gotta respect when he shows you respect with a blunt honest answer like that.
I was basically the "they're the same image" meme when that before and after slide came up, I was puzzled until it was zoomed in haha. Honestly, I got lost pretty early (I understand nothing about these things and goodness knows a scientific/analytic mind is not one of my strong points) but even with my limited understanding, it was really interesting. Another example of "yeah just optimise it" when the work behind it is truly daunting. Thanks for the interview Steve. Speaking of optimisation, do you think you could have a game developer on and ask them why a game collection that was 3gb was turned into one that's dozens of gbs heavier and doesn't even work properly? :p it might not be as technical as this but goodness knows I'd like to understand why game devs have decided compression and optimisation aren't necessary, maybe there's a graph for that too. Cheers!
An increase in install size like that is usually because the art assets are higher resolution. For example you might have had 256x256 pixel (width x height) images for textures (the stuff that gives color & depth to 3D models and surfaces), but they might have shipped higher resolution textures like 2048x2048. You might think that's 8x bigger because 256 -> 2048 but actually it's about 64x bigger because it's two dimensions (same idea as 1080p to 4K isn't 2x more pixels, it's 4x more). So think for every texture in the game, you might have thousands or tens of thousands of them, if all of them are made bigger then the game size balloons really fast. Higher resolution textures doesn't necessarily improve things on its own though, cause the new texture needs to actually have more detail to make use of the extra size. If you just resize it and that's it, you still have the same amount of detail as you had before which won't look any better at all. So you need to have an artist go in and redo the texture to make it more detailed and less blocky. And now there's AI tech that can do some of these tasks for upscaling textures (same idea as DLSS upscaling but applied by game devs to the actual game).
Props to the Intel guys who went out of their way to make these slides, and Tom for going over them in detail. As an engineer, it can be annoying to put technical work aside to create diagrams and presentations, but they really went out of their way to make sure this looks great and serves as efficient documentation as well. Thanks, Intel & Tom, and thanks, Steve!
I have been a gamer almost from the very beginning of gaming back on DOS, have very little understanding of how things actually work these days but i still love listening to people like this.
By far the best look at driver/hardware for drivers. I used to work in gaming and loved going over to the graphics guys desks to see what they were doing to optimize the game or games for specific engines. It’s crazy the amount of work they do to just get 1 more fps and a few extra polys on the screen. Love this series and love the content. Tom probably doesn’t remember me, but have had the chance to chat with him while he was at Nvidia long ago.
This is incredible! I’ve been using an A750 since launch and look forward to the future hardware. Really appreciate the hard work employees have put into improving the performance. It’s super cool to see behind the scenes of what makes them tick.
it really goes to show how well AMD and nvidia have done given just how complex all this is, and intel is trying to get into it and having to do it from pretty much scratch.
Or if anything it shows how bad they are considering intels basically redone their entire driver multiple times over the past 2 or so years while amd and nvidia still have issues that have been prevalent for literal years. Hopefully intel kicks them into shape
@@AvocadoBondage Saying intel doesn't have problems or that somehow some way they will get their drivers to be bug free is.... funny. to say the least anyway
Well, just because they JUST got into consumer GPU hardware. They're VERY new to this, so considering they already have a DLSS and FSR competitor in their first card generation is outstanding already. Ontop of that, they have to catch up with drivers, but it seems like their optimizations have been very massive considering how poor they originally performed when they launched.
@@PixelatedWolf2077 They are not in any way "new" to GPUs as they have been putting out APUs for longer than AMD has. It is however the first time intel has been serious about making their GPUs work well.
@Azureskies01 Well originally, the iGPU for Intel was alot like a GT1030. It had its sole purpose as a display driver. Intel, however, realized they could benefit quite a bit by trying to make a proper iGPU. That first attempt was Iris which was a step in the right direction. It was better than the normal UHD graphics of the time. Then their silly little productivity only GPU came out and it turns out it wasn't bad, so that's what got the ball rolling towards making ARC. Arc didn't come until 2022 however. So in reality, they haven't had much experience in both driver tech and making a proper GPU.
As someone who knows just enough about this stuff to understand what TAP is saying at a surface level, this is possibly one of GN's best videos. The average gamer probably won't understand much of this and that's OK, but as a software engineering student and GPU nerd, this is so cool.
I noticed something Steve said "I didn't feel confident in my understanding to....". THIS is what separates GN from most other channels and I'm not talking about only PC hardware, just technology in general. A lot of others will just do some half a$$ed job of compiling information and present it without a clue about what they're talking about. Steve doesn't treat us like that. Massive respect.
I may or may not currently work for the big blue silicon corp, but having someone like Tom on the GPU team is essential. When Raja Koduri left, ARC's future felt way less certain, but I think these types of improvements and attention in the gaming space translate to improvements in the AI compute space, so upper management sees the financial appeal of getting their hands dirty in the gaming space too. I'm actually stoked for the continued improvement to Alchemist and the release of Battlemage.
This was amazing, Tom was able to explain really complex topics for the average user to understand. Makes me appropriate all the work the Arc team is doing. Thanks Tom!
Companies need to have more people like Tom out and doing content with creators on UA-cam and other such media like these discussions. It's quite refreshing seeing a passionate engineer talk about what they know.
I personally have actually said "this game is optimised!" about the initial release of Overwatch (2016). For how good it looked it really was pretty damn well optimised and still looked pretty great even on lower settings on very old hardware while still delivering good or even great FPS. I guess the only area where it lost some points from me was it used to have some very odd random CTDs for no apparent reason.
Steve...I'm a very basic computer user (on my $100 Chromebook ight now). It does what I need. But I'm always interested in the computer space and having such interviews is a great service to your viewers. I appreciate you and the team.
This is singular and amazing, seeing the big corporations talk with the customers and seeing two very smart and nice people talk it's a breath of fresh air to the industry. This is great. Thanks for talking to us.
The Tom videos are an awesome peak under the hood. Thanks for continuing to do these, Steve, it's great content for the community! And also want to acknowledge that Tom has continued to improve on camera from the early episodes when he was using a more Arc-centric framing for the discussion, versus the more recent episodes where he discusses the basic concepts in neutral/general terms, and then uses Intel's ongoing efforts as a case study. More Tom!
Thank you for this....the compiling shaders thing has always puzzled me as while it had existed before, nowdays it is everywhere...i just wish it was done behind the scenes rather than a bar when starting up the game lol
Yeah I wondered about this as well. I don't think I've even heard it as a phrase until the past few years. That was more a behind the scenes thing in the past.
To be fair I'd rather have the shader compile happen before I start playing rather than have a stutter fiesta when playing the game... Jedi Survivor for example. I think many games are way more shader heavy than they used to be so that can be part of the reason why this happens often now. AFAIK it has been a problem with UE4 and now with UE5 engine games mostly.
@zivzulander first I remember seeing it was the ps3 emulator rspc3 though have said it has been around for a while with some older battlefield games....but now almost every game has it
Tom Petersen is amazing. I follow him since he was at Nvidia and he did a review about one of my first gpus, the gtx 650ti. His great commentary about products and software mixed with silly jokes here and there makes him the perfect man for the job lol
So, if I get this right the drivers are "driving" the cards. Those DLLs and Kernels seems to be the instructions on how to handle the "road". It's almost as if you had to learn to drive a new car in a new city for every new game. But every time you come into a new city; I need to learn to drive differently. Not only because of the car but also because of the roads, signs, rules, temperature and terrain... Sometimes it's not too different from one city to another, but other times it's completely wild. What could be the safest maneuver in one city could lead to your death in another one.
this is absolutely fantastic, I seriously love chats like this, it's what has been sorely missing in the relationship between product and consumer. Talking to the people who actually *make* the damn thing makes me feel indescribably more informed and appreciative of everything.
@@GamersNexus hi they need to do much more software works and maybe it will be good. maybe VERY strange but do something as fast as you can! in all people's animal man types are, evil also are. MY NOT FRIENDLY HALF BROTHER IS IN USA MORE THAN 20 YEARS! here the building water contains some spicy thing, bad for heart!!!! leg, and blood pressure raiser! and who knows what other! MANY TIMES! becasue of evil man types plumbers, many times comed to the building "water manhole" and done evil things. and nobody against evils not done nothing! many people not even want! many not can. because evil man types are in many places! mostly in police, plumbers, etc! more than 6 years MANY evil things are!!!!! I WRITED TO MANY PEOPLE, to Israeli jews, to others also, to vikings and to many others! MANY PEOPLE NOT HELPED NOTHING! MOSTLY HERE MANY EVIL PEOPLE ARE AND IN MANY NEAR COUNTRIES! EVILS NOT LIKE ME!!!! MY HALF BROTHER BATMAN OR EVEN BADDER MAN TYPE!!!! WAITING FOR MY MOM AND I TO DIE! MY MOM IS SO OLD SHE CAN DIE ANYTIME! i almost died few times batman not even writed back. LIAR, CORRUPT ROMANIAN not jew not viking, not hungarian. more than 20 years in Usa i never knowed his mobile number! but TOTALLY NOT FRIENDLY! poker player, office worker more than 50 he has blue eyes, different name, face, domain name owner of "HTTP://SONKOLY.EU" and he has others. IF YOU WANT TO HELP PLS DON'T WAST TO MUCH TIME ON EVIL PEOPLES! THEY NOT DESERVE EVEN THE TIME OF BETTER PEOPLE'S! I NEED TO GET AWAY FROM EVIL PEOPLES! BUT EVIL PEOPLES ARE IN MANY COUNTRIES IN MANY PLACES, MOSTLY WHERE JOB OFFERS ARE. HERE HARD TO SELL EVEN CHEAP THINGS LIKE CHEAP DIESEL AUTO, NEW BOOKS, OTHER THINGS. BECAUSE EVILS NOT LIKE OTHER PEOPLES! batman talking with others with BAD PEOPLES! they not like me!!!! and they doing evil things!!!!! I HAVE MONEY FOR FEW THINGS. i'm hungry but many thing contains other thing. i tried to find good peoples but hard because not many are. i can't communicate with many people on facebook, phone, in person, not friendlies, they are talking with others. many ignored me. many people evil even in simple things. i wasted many time, emails, chats, sending cv, phone talks, etc. evils gived job for evils without high school or with high school fast easily. if different people trying something time wasting. nefarious pigmans and other evils runined many food, water products, and many thing! evils against other evils not done nothing. many people wasting time many years! with one batman or even badder man type. PEOPLES HEALT, LIFE NOT GAME. HELP NEEDED MANY YEARS
@@GamersNexus They need to do much more software works and maybe it will be good. maybe VERY strange but do something as fast as you can! in all people's animal man types are, evil also are. MY NOT FRIENDLY HALF BROTHER IS IN USA MORE THAN 20 YEARS! here the building water contains some spicy thing, bad for heart!!!! leg, and blood pressure raiser! and who knows what other! MANY TIMES! becasue of evil man types plumbers, many times comed to the building "water manhole" and done evil things. and nobody against evils not done nothing! many people not even want! many not can. because evil man types are in many places! mostly in police, plumbers, etc! more than 6 years MANY evil things are!!!!! I WRITED TO MANY PEOPLE, to Israeli jews, to others also, to vikings and to many others! MANY PEOPLE NOT HELPED NOTHING! MOSTLY HERE MANY EVIL PEOPLE ARE AND IN MANY NEAR COUNTRIES! EVILS NOT LIKE ME!!!! MY HALF BROTHER BATMAN OR EVEN BADDER MAN TYPE!!!! WAITING FOR MY MOM AND I TO DIE! MY MOM IS SO OLD SHE CAN DIE ANYTIME! i almost died few times batman not even writed back. LIAR, CORRUPT ROMANIAN not jew not viking, not hungarian. more than 20 years in Usa i never knowed his mobile number! but TOTALLY NOT FRIENDLY! poker player, office worker more than 50 he has blue eyes, different name, face, domain name owner of "HTTP://SONKOLY.EU" and he has others. IF YOU WANT TO HELP PLS DON'T WAST TO MUCH TIME ON EVIL PEOPLES! THEY NOT DESERVE EVEN THE TIME OF BETTER PEOPLE'S! I NEED TO GET AWAY FROM EVIL PEOPLES! BUT EVIL PEOPLES ARE IN MANY COUNTRIES IN MANY PLACES, MOSTLY WHERE JOB OFFERS ARE. HERE HARD TO SELL EVEN CHEAP THINGS LIKE CHEAP DIESEL AUTO, NEW BOOKS, OTHER THINGS. BECAUSE EVILS NOT LIKE OTHER PEOPLES! batman talking with others with BAD PEOPLES! they not like me!!!! and they doing evil things!!!!! I HAVE MONEY FOR FEW THINGS. i'm hungry but many thing contains other thing. i tried to find good peoples but hard because not many are. i can't communicate with many people on facebook, phone, in person, not friendlies, they are talking with others. many ignored me. many people evil even in simple things. i wasted many time, emails, chats, sending cv, phone talks, etc. evils gived job for evils without high school or with high school fast easily. if different people trying something time wasting. nefarious pigmans and other evils runined many food, water products, and many thing! evils against other evils not done nothing. many people wasting time many years! with one batman or even badder man type. PEOPLES HEALT, LIFE NOT GAME. HELP NEEDED MANY YEARS
This was some super enlightening stuff. I could just and just follow the topic at hand, but that's what made it so interesting in the first place. These companies really should let you guys interview their lead engineers like this more often. Good stuff.
Kudos to GN for refraining from judging if game is "optimized", because it's a difficult things to analyze without knowing internal details of the game and it's a term often misunderstood by gamers and even experts misinterpreted it. I remember Digital Foundry guys, who are usually very knowledgeable, criticizing a completely empty open world UE5 demo for not utilizing all CPU cores and comparing it to Cyberpunk - with tons of NPCs and various simulations and systems running in the background. That UE5 demo literally had nothing to run on other cores, because there was nothing except graphics and one player entity, so even if their conclusion about the engine was correct, the example used was not and could not be used for this kind of comparison. It was bizarre to see this kind of cluelessness from some of the best "game optimization" nerds in the media.
There is always something to be run on other cores in the vast majority of cases. The case is not IF you can run things on other cores its more like _is it worth the complexity_? -because trying to juggle data flow and data race problems can make a multithreaded program slower than singlethreaded if you get it wrong. But it's still possible to make it go faster. This is especially true with game engines like Unreal because the very nature of a game engine shatters up the program into many thousands of individualized componments like transform components which are relatively straightforward to parallelize. So much so that Unreal does it by default...
@@MallchadAgreed. That is what really hinders the amibtious UE4 games like Jedi Survivor, Callisto protocol, etc is the engine's poor CPU core utilization. Its indicative of the engine's origins in the early-to-mid 2010s where CPU clock speed was more important than utilizaton of the cores available.
@@MSquared135 It's not really the engines fault. the engine is just a shell you bolt onto what you eventually call your game. it's up to the game programmer to make it to faster.
@@Mallchad In some cases it is the engine's fault. An engine is more along the lines of the _framework or scaffolding_ that you build your game within. If that framework does not scale properly across multiple threads then that will manifest as a CPU bottleneck when you eventually stress the framework in exactly the right way that it can't scale properly, and the only remedy is to either stop stressing the framework (dumb down your AI behaviours, reduce the number of AI agents, reuse meshes more to take advantage of instanced draw calls more, etc) or replace parts of the framework with your own in-house code (write your own AI system that can scale the number of agents well, write your own GPU-driven rendering pipeline that can handle many instances of different meshes well).
@@jcm2606 Yes, the "replace parts of the framework with your own in-house code" is why I don't see it as the engines fault. Unless you're using a completely priorietary engine where you have no ability to acces or even read the code, you usually have options. and even highly proprietary engines usually leave you with options for engine-tinkering- the game and the engine are one and the same, and must be treated as such.
Great video. My two only remaining questions are when Battlemage is coming and if the drivers will make it shine 😂 (jk jk) But seriously I so want to buy Intel GPU since prices are great but I’m still discouraged by the prevalent issues and I don’t want to risk the next big field of stars game won’t work on launch.
Thanks for your continued coverage of Arc! On your most recent are video you talked about MSAA being broken for Arc in GTA and how the Intel team responded and said it was a GTA exclusive issue. Please tell the team this isn’t exclusive to GTA, in some older games like Watch Dogs 1 and Assassin’s Creed Black Flag MSAA is completely broken on Arc!
This is THE WAY to do tech journalism. Fkin amazing work - not only this video, but Gamers Nexus' whole approach. Also: Big respect to Intel for doing "risky" and honest interviews/deepdives like this. It really sets them apart from the rest.
The fact that Tom is still doing this stuff (and Intel is letting him) makes me think they still have a long-term plan to _eventually_ get there with their graphics division. I truly believe if Intel holds out a bit and doesn't kill Arc too fast, they can carve out a chunk of the market by focusing on price/perf gamers. From that starting position, it can grow further. Nvidia needs a kick in the pants and to lose sales to snap back to reality.
5:50, just to add; not all shaders are text when they land in driver land. For DirectX12 and Vulkan, they have already compiled the text to an intermediary binary format (DXIL or SPIRV) which is easier for the driver to handle since there's less steps required to convert it to their instructions. But still it needs that compilation step. Probably text is easier to explain though (and relevant to opengl)
In contrast, there's WebGL / WebGPU, where the browser parses the shader source (GLSL / WGSL) on demand, then internally translates into the browser's target API shader language (HLSL, GLSL, whatever Metal uses) then complies that again 😢 It even needs to translate when it's GLSL to GLSL for safety and small changes in the flavor of the language. Chrome even uses something called ANGLE, which is implementing OpenGL in terms of DirectX for... reasons. So you can in theory end up translating GLSL to GLSL to HLSL to DXIL 😢 Hopefully browsers get good at caching all this nonsense!
@SimonBuchanNz such a giant step back. DXIL and SPIRV were greater specifically to remove the compiler from the process. Now you only have to deal with DXC and glslangvalidator directly (you can just file an issue rather than contact IHVs), rather than 1 compiler per vendor. Web never seems to learn from mistakes people found out in the real world :/
@@nielsbishere well, there *are* reasons (security, a lot of the time) ... but I wouldn't be surprised if *eventually* there's a binary form of WGSL, and in theory the browsers could write their own optimizing emitters for DXIL and SPIRV (dunno about Metal) to skip another compile - they do way harder stuff all the time. A lot of the time, though, this stuff is just politics: I think there's a hint of a suggestion that SPIRV might have been vetoed because it might make Apple look worse due to the extra translation they would need due to not having Vulkan (the obvious response being ... why not?)
@SimonBuchanNz I think in the end it's because Apple wants to force the way the api can go. With khronos they have way less influence which they don't like. Apple seems very determined to cut all ties with khronos, and I don't really know why. Security might be true in some cases, but I've ran into enough driver crashes to know that probably this also increases likelihood of bigger issues because of the additional layers in between (seeing as binary -> ISA already goes wrong). Now you'll have to deal with every different browser having different compilers and every backend and every vendor having different behavior... with spirv that'd have been reduced to 1, maybe 2 compilers (dxc for hlsl and glslangvalidator for glsl)
Tom, your technical insight is inspirational. I love the passion and excitement you exude when speaking on this stuff. Intel Arc feels like it's in great hands with someone like you, and your team, working in the trenches. I look forward to next gen Arc and will definitely be jumping in.
Great review! I learned a lot of this stuff in my undergrad computer science courses. This guy can take difficult concepts and explain them in a simplified fashion.
I do wonder how much the game engine matters, for example is UT4 more optimised than UT5 purely because it has been around longer and more mature. I hear so much about games being ported from 4 to 5 and i wonder does the performance take a hit just by doing that
Great question. Without being in a position to directly answer, I'd wager that UE5 is probably more optimized even in spite of UE4's maturity just because UE5 should be rolling all those optimizations into UE4. I do think there are probably situations where your thought is right though: If a dev has an option of a mature engine versus a totally ground-up build, there are probably things better optimized on the older one.
Another problem many gamers don't understand is that a more flexible modern feature can be more expensive even if visually it looks the same or even worse. A photorealistic room with baked light in UE4 will run at 200 FPS and could even look better than the same room in UE5 with lumen running at 60 FPS, where every light can be dynamically changed and the wall be demolished. And it's not because UE5 is unoptimized, it's because it does things in real-time that were offline baked before and this allows for more dynamic worlds and gameplay. But when a dev still makes very static game while using fully dynamic lighting, this new cost doesn't feel justified.
It does matter a lot since the engine dictates the exact calls, shader code, resource usages and such that are fed down into the driver. OpenGL and pre-DX12 drivers can do a lot to wring more performance out of what they've been given by the engine, but at the end of the day it _is_ the engine that dictates what exactly is going on in the frame. DX12 and Vulkan turn this up several notches, too, as they push a lot of stuff out of the driver and back into the engine. Management of API-level command buffers, memory management and frame presentation/buffering under OpenGL and DX11-and-below happened largely in the API runtime itself or the driver based on what the engine fed the driver, but DX12 and Vulkan both push these back up into the engine, making the engine responsible for recording API commands into an API command buffer, performing memory management at the API level, dictating the overall flow of frame presentation and frame buffering, etc. As time goes on this is happening more and more, too, as seen by the Approaching Zero Driver Overhead movement in OpenGL or "bindless" style descriptor management in Vulkan (descriptor indexing, buffer device addresses, etc).
@kazioo2 that is it exactly when I see a game and the graphics on the face of it do not look that great...and yet the performance is nowhere near what I would expect from what I see
As a QA engineer, I have gotten very close to all these various layers of the pipeline. I am glad for the deep dive into it. Make all this stuff common sense.
These are great videos! Even though I may not understand everything that is being said, it at least shows us that not only are the companies doing as much as they can, but they're showing it with someone who at least understands them to ask questions and confirm what they're being told, to a degree of course. It's also great when you have someone like Tom who is enthusiastic about the work he and his teams do!
Hi, im tech lead for one of those games which was optimized in the intel driver lately. I would have question for Intel guy, is there any way to find out what did you optimize for our game? Maybe we could do more stuff on our side to make the game run faster.
Wow, this was very well explained and I have real knowledge about gpus. Really in awe with the complexity and can't believe gpus work at all with everything it needs to do.
Wish you had asked him this critical question: "what was the point of 2 alpha releases with DG1 and DG2, as well as over a decade of integrated graphics drivers? Why were the drivers not ready when you had 2 years to get them ready before arc launched if we only count DG1 and DG2, and why was all the integrated driver work useless for ARC?"
This video has been live for literally 5 minutes. You could not have even known what was asked, first of all, and secondly, the topic of this video is the driver stack.
@@GamersNexus On one hand, sure, the commentor did not watch the whole video before commenting, neither did I, so far. On the other... Aren't they right? Sure, you say it's not the topic of THIS video, but how many videos have there been with Arc engineers so far? It's great to celebrate that Intel is making progress, but shouldn't they be held accountable for the original mess?
@@janbenes3165 do you mean when we held them accountable for 2 straight years when it was all happening actively? Because yes, and we did. We covered DG1 before anyone else, covered DG2's messy launch, and in fact we broke the story on what a trainwreck the drivers were for Arc originally -- so much so that other reporters reported on our findings and on Intel's direct response to them. We covered that story. Receipts: Literally called "Worst We've Tested" - ua-cam.com/video/MjYSeT-T5uk/v-deo.html Intel responds: ua-cam.com/video/znJzozRfJYY/v-deo.html DG1 launch: ua-cam.com/video/HSseaknEv9Q/v-deo.html Talking about what a nightmare the A380 was: ua-cam.com/video/La-dcK4h4ZU/v-deo.html Unrelated but while we're at it, holding them accountable for a terrible card design: ua-cam.com/video/N371iMe_nfA/v-deo.html
@@GamersNexus Yeah, you covered the story. I know you did. I was there too. But there you have a person who was close to the top when it was all happening, or at least should have relevant information about it and did not ask simply "why?" it happened. I'm not saying you should be dragging Arc through mud or anything like that, but this question of "why the DG1 never amounted to functional drivers?" never comes up.
The answers on the surface seem obvious to anyone who has been gaming long enough. Why are games releasing in a broken state so often? Sometimes you have to get a product out. You could spend two years with a product working in a lab, and not learn as much as you could learn three months post release. Arc is Intel's first attempt at a competitive dedicated GPU, which is a different product from the integrated GPUs. Gaming was likely not a priority, and usually nobody cared about gaming performance on Intel integrated. This is essentially Intel working at "getting in shape".
@GamersNexus You asked for questions so here we go. What is the point of cache and why is it so incredibly small? I'm under the impression it holds most commonly accessed data and formulations. it seems like it serves an important purpose (CPU's with essentially identical stats with larger cache sizes seem to do better as do HDD's) so why don't we increase it significantly? For example why not double or quadruple the cache size?
If you like this, watch our video discussion animation error and flaws of frametime testing as we know it today! ua-cam.com/video/C_RO8bJop8o/v-deo.html
To further support our interviews and deep dives, consider grabbing our metal emblem pint glass! store.gamersnexus.net/products/gn-3d-emblem-glasses or our ultra comfortable hoodies! store.gamersnexus.net/products/warm-ultra-soft-fleece-zip-hoodie-tear-down
NVIDIA latency technical discussion: ua-cam.com/video/Fj-wZ_KGcsg/v-deo.html
Or our Intel Arc 2024 revisit: ua-cam.com/video/w3WSqLEciEw/v-deo.html
re drivers you should really have on one of the people working on the MESA linux graphics drivers, it's like one of the things making the steam deck possible
Very interesting! I wonder if it would be possible for Tom to get a video of some engineers actually using these tools to show us what they do. Like maybe a behind the scenes video step by step of an actual problem they encounter in a game and the process of what they do to fixing it using software and testing. Maybe that's a bit intrusive but it could be an interesting video.
Thanks for the video Steve and GN team!
Would love to pick this guys brain about the advantages and disadvantages the vulkan api has over directX on the gpu side
Also
Thanks Steve
With Tom having worked at NVIDIA and now at Intel in this very unique position: how do different choices at the hardware level impact the software and vice versa? And on a personal note: how hard is it to work around previous knowledge that you aren't allowed to use due to patents and NDA's + could your previous employer find out if you did so?
.
Hey Steve, (bro) when are you planning to break this system? We're still waiting for an in-depth investigation into the actual prices for these fast, fancy calculators of GPUs in mass production.
Correct me if I'm wrong, but isn't it mind-blowing that the iPhone 14 only costs $10 each in mass production out the door?
Thanks Steve.
The bot check has been passed.
😱
Thanks @beachslap7359 😐
@@GamersNexus lmao
Never gets old.
I wish all GPU vendors were more willing to talk about this kind of stuff. This is very interesting and informative stuff and I applaud Intel for willing to talk about this. Makes me want to pick up an Intel card just to tinker with it.
You stole my words
Hopefully they will talk about this stuff more now that they see there is interest. I think they just assumed people aren't interested in the low-level details as much (specifically on the software/drivers side).
Nvidia and AMD have had engineers on this channel before (and others) to discuss other technical info, though. That recent video on latency Steve did with Guillermo Siman of Nvidia was also very informative, as were the videos recorded in the AMD labs in Austin, Texas.
just AMD is not visiting GN, Nvidia was there quite often and Intel too.
@@PrefoX GN went to AMD's own offices and spoke to the engineers there. Guess you missed the video
intel bends over to nazisrael don't buy into genos aiders
24:56 Thanks for watching Tom!
I considered cutting that but figured so few people watch that long that it'd be a great easter egg!
@@GamersNexus Really? People switch off from a video with Tom in it?
In my country, we'd say they "have kangaroos loose in the top paddock".
@@virtuserable that's what's stopping me from buying Arc 😪
HAHAHAHA that cracked me up
@@grievesy83Don't see many Albanians around here!
This is amazing. I work as a low level engineer in the reverse engineering field, and these technical deep dives are AMAZING. Its not an exaggeration to say that the engineering content you are putting out is legitimately one of the things people will be watching for years for knowledge of certain software topics. Thanks Steve & team, and you'll always have my support!
That's a cool job! And thank you for the kind words!
the software part is always a game of "speaking the right language" but the low level is always so interesting.
like having this API have X registers or reading this text and inferring this information to perform "that" instruction.
I work in retail, but low level circuits and programing has been my hobby, I remember building my first dual core circuit 10 years ago and it took me 2 years to figure out how to coordinate just to figure out how to share the information from RAM and match it with the program counter register.
and that's is when I was using my own "coding language" it was very hardware specific language.
watching those deep dive from giants like Intel is so much fun!
I'm a simple man. I see Tom Peterson in a GN video, and I watch the entire thing sitting forward in my chair with both index fingers pressed against my lips, brow furrowed, learning intently.
Love this series. Never stop. There's no such thing as too much Tom on GN's video catalogue.
i'm an aging grease monkey, most of this is way above me in the clouds.but i still enjoy it.
Oh yeah these are great!
I honestly just love this miniseries. Im currently studying to become a software engineer and watching your videos makes the learning process very interesting and fun. Thanks to you and Tom for bringing us these videos and I hope we'll see more of them !
That's so awesome to hear! That our content can be helpful at all in early stages of education is a big compliment. Keep studying!
Former Mod developer here, working with source engine as 2d/3d artist. id like to share some basic views onto game optimizations. Petersen did a great job explaining deeper driver/engine Level optimizations, but this is only one part of it.
Its not just the engine or driver, its also the assets, and this is also where some studios just drop the ball.
you want to load a gpu evenly, but you can absolutely choke a gpu when overloading it with a heck of a lot of one single workload. that might be one specific shader, or absolutely insane geometry load, or stuffing the vram.
1. 3D models. models are basically a wireframe that form a body. the possible level detail of a model is determined by the number intersecting lines of the grid. patches formed by the grid are refered to as polygons (there are different types like ngons, quads, not gonna touch on that). a high polycount gives you a higher resolution mesh with potentially more detail, HOWEVER you can have a model with millions of polygons with no details at all. the polycount is direct geometry load on the gpu. the higher the polycount in the scene, the higher the load on the gpu. once you overload the geometry pipeline with absolutely insane levels of geometry, the gpu performance drops off a cliff. this is basically what tessellation does - it takes a simple wireframe and bloats the polycount, increasing mesh fidelity, but blasting the gpu with geometry load. this was nvidias big trick to choke radeon gpus in certain games using gameworks or hairworks. nvidia massively increased the geometry capability of their gpus starting with fermi, and tried to get studios to use their tessellation libraries found in gameworks/hairworks that would absolutely obliterate radeon gpus with stupid high amounts of geometry load. notable games are the witcher 3 with hairworks and crysis 2 with environment tessellation that does absolutely nothing except cutting radeon framerates in half - this is why you can find the tessellation factor option in the radeon and intel arc driver, it limits the mesh complexity scaling of tessellation. geometry load is the whole visible scene with all models and map geometry on the screen. so you want to keep the polycount of the rendered scene as low as possible, and you absolutely want to avoid wasting polygons on geometry that doesnt even need it. a cube can be done with a "single digit" number of polygons, or 2.000.000.000.000 without any visible difference. you can have thousands of the minimal polygon cubes on screen without breaking a sweat, while just a few of the bloated cube will make your gpu scream. modern gpus can handle a ton of geometry load, but this is not an excuse to just waste it.
having a ton of unnecessary polycount puts unnecessary load on the gpu, and is a result of lazy mesh optimization. sure, you want higher fidelity models for better visuals, but you can assume that studios rather cut on time per model in exchange for worse client side performance.
one technique, usually used in older games, is "backface culling", which basically deletes all parts of the model that the player is never seeing, cutting geometry load. there are possible artifacts when you can actually see the deleted backface, and the model looks hollow. today this is not done alot because gpus are pretty powerful, but there are situations where this should still be done but isnt.
but dont worry, polycount isnt the only way to have detailed models, theres a way to simulate polycount with textures, thats why we will switch over to...
2. textures: textures in games have 3 main usecases. 1. give color and texture to a 3d model and to the map/environment, 2. decals and sprites, and 3. control shaders. like you already heard textures are relatively easy on the gpu, HOWEVER, this is not entirely true across the board. the taxing factor of textures is resolution, file size and function - hitting the Vram and the shader pipeline. resolution gives better clarity, but bloats filesize and you can combat filesize with compression. you wanna set a texture resolution that makes sense. high resolution textures that the player is viewing up close, low resolution where it doesnt matter so much, also use compression where possible. this applies for the albedo map, or diffuse map, which is basically the texture that gives you the color information. there are other textures that control shaders, like bump and normal maps, specular maps, phong shader maps, self illumination maps, parallax maps. these textures tell the engine what to do with which part of the texture. these can be greyscale or include different informations on each of the R G B channels, like a normal map (this is a very fancy bumpmap with accurate "3d" information baked into 2d space. normal maps are either hand made (legacy) or more commonly "baked" in a 3d application from a high resolution version of a 3d model to reduce geometry load by using a low poly model + normal map instead of a model thats 100-1000 times the polycount). so you can use a 2d texture to reintroduce "3d" details back onto a low poly model. the normal map acts somewhat like a textured rubber glove you pull over your hand. normal maps should not be lossy compressed, because it will introduce awful blocky artifacts when light hits the object, and normal maps usually also have an alpha channel (greyscale channel next to the RGB channels) with a different function, usually controlling a different shader, like specularity, which makes the normal map a chonker in file size. you dont wanna overdo normal map resolution, as it will eat vram for breakfast and very high resolution normal maps also put more load on the shader pipeline. a gpu can withstand a considerable amount of normal maps, but you can obviously overdo it. lets briefly talk about how you map texture space to models.
in the 3d modeling software you "unwrap" or "uv map" the model into 2d space. imagine taking a roll of toilet paper and cutting it across the long side, now you can put it flat on the table and when youre done painting, you can put it back into a roll shape. its basically the same with 3d models, but theres a twist. with the toilet roll you have a 1:1 representation of the model and the texture, but with 3d models you set a texture size (for example 2048x1024) and then you place and scale parts of the model onto it. the bigger the parts on the uvmap, the more pixel space they get, hence higher resolution. now comes the kicker: to preserve texture size you map important parts big and unimportant parts, that are barely visible, but can still be seen, small. take a gun, for example - you want the regions close to the camera to be as high resolution as possible, but you also have tons of parts that just need color, but arent usually seen, or just are plain black that can be smaller. you can also use mirroring to cut uv map area in half for some parts. by not properly scaling the parts of the uv map you can end up with a very unoptimized uv map that gives you worse resolution to important parts while being twice the size. the impact of one bad texture set for one model isnt big, but consider a scene sometimes containing hundreds or thousands of assets. it adds up.
3. bad practice. now that we covered the general function and cost of models and textures, and the general need to optimize certain aspects to retain as much performance as possible, we enter the realm of wasted performance due to bad practice. there are shader effects that are very easy on the gpu, or very hard. and ususally there are multiple ways to do stuff. choosing the right way to do stuff can make or break a game. lets talk about reflections for a moment. there are two common easy ways to add reflectiveness via shaders, and this is via 1. cubemaps and 2. screenspace reflections. screenspace reflections take the actual content rendered in the scene and reapplies it - which makes it expensive to run, but quite close to the actual scene, but ONLY what is shown on the screen. out of screen stuff is blank - a game notorious for bad screenspace reflections is resident evil 2 remake in the police station, where objects would constantly blank out the reflection, because to the shader some parts of the scene plainly dont exist. the 2nd way to do it are cubemaps and they are so basic in function, that you can do it all day and it doesnt matter, because its basically a plain old texture. what is a cubemap? a cubemap is a texture or a set of textures that represent what something may reflect in a certain area of a map. its basically something like a skybox, but for reflections. these are non-dynamic and prebaked, which makes them insanely easy to run. now, if you want to give some kind of reflectivity to an object you can either do cubemaps, screenspace reflections or raytraced reflections, and each method being progressively harder to run. choose wisely. while a rifle scope might look absolutely stunning with raytraced reflections, 600 empty coke cans in an alley dont and will absolutely murder your gpu. raytracing is its own can of worms, and while raytracing the scene would have different implications, i just wanted to demonstrate that you can totally use the wrong tool for the job.
lets touch on some different technical bad practice that you actually dont see - as theoretical example: hitboxes. you know why its called a hitbox - in early games a hitbox was exactly that - a simple box around a character that would register the hits instead of the actual "visual" model because its too complex, later hitboxes were a cylinder, now more sophisticated hitboxes are closely modeled after the visual model and look like a stick figure, made out of very basic geometry so its still easy to run. but what if you have very complicated models that need hitboxes and you run out of time... w..wou... would you just take the mesh of the visual model to use as a hitbox so you dont have to make a seperate hitbox mesh? nooooo, certainly no one would be insane enough to run a 80.000 polygon visual model mesh as a hitbox. right? RIGHT? not saying this is a thing, its just an example of under the hood insanity no one can see, but can make your 3D sidescroller the next crysis on steroids.
and we have alot of other departments that all can do some really shoddy stuff, like animation and rig setup, mappers, coders, general shader setup etc. making games is very complicated.
theres tons of stuff you can do one way or another. doing it properly takes more time than taking some shortcuts that trade developer time vs client side performance.
blaming bad performance on the driver or the engine is only one part of the equation. you can do alot of bad stuff on the asset and coding side and, albeit having near perfect driver and engine optimization, the whole thing can just run like a brick.
looking at games that barely hit 60 fps @wqhd on a 4090... lets just say i have some serious doubts regarding the technical quality of the games. the last game i had an in depth "game dev" look at was warcraft 3 reforged, and my god is this thing botched. some units dont even have proper textures, like the frostwyrm - its a placeholder texture in a finished game no one cared to fix or finish. having such assets in the game probably also tells you the shader optimization and coding isnt properly done aswell. thats the state of the industry, at least partly. dont be fooled by the hardware hunger of the games, chances are you buy expensive hardware to cross finance a sloppy development.
Excellent comment. Adds an to what was spoken about in the video. One issue I had was while Tom wouldn't mind any question asked, he still did dance around the Starfield optimization question (may not actually be the game specified, but the video made it so).
Care to your take on it if possible?
Thank you for the information. This comment should be higher up.
Thanks for the info, I remember hearing about a river of tesselation running on Crysis 2 under the map just to slow down Radeons lol. Crazy world...
@@ezg8448we didn't specify a game. There was no specific game. We put the reference in the video as an example but it's not like he saw the edit live.
Can't wait for ARC Battlemage!
Will definitely buy one or multiple
There's a chance that the lessons learned from alchemist - especially driver development - came too late in battlemage's development to be included in the hw design. No idea how relevant drivers are in designing the hw, but if they are battlemage has probably been defined long before all the current development of alchemist drivers (and the lessons learned there) happened. Of course there will be a bunch of improvements regardless (otherwise why design new hw), but celestial is where I'd expect intel could be competitive enough to sell gpu's with reasonable margins. I'd suggest comparing alchemist die sizes and nodes to geforce/radeon cards with similar performance. Iirc, alchemist uses around 50% larger dies on a similar node. That's where they're probably losing a lot of money and making alchemist likely not very profitable if at all.
Get a couple of them in SLI and you'll be ready for ARCs Fatalis.
I am really hoping for Battlemage as well
@@102728 The hardware team doesn't worry too much about driver-level stuff. They are mostly focused on making changes to the structure of the GPU logic to keep things moving through easier.
Battlemage will make improvements to things like the dispatch logic Tom mentioned, as well as things like caching subsystems, predictive logic, or specific blocks like TMUs or RT accelerators.
Battlemage will have several lessons learned from alchemist under the hood, but you are correct to assume they won't have everything right yet. The ARC team is fighting uphill into decades of industry progression, so I full expect them to need a few goes at getting everything down.
This graphics miniseries with TAP is great. Excellent explanations and presentation, and Steve is following up with all the right questions.
Thank you! Great educational opportunity for us as well!
Impressively good info. The register spills are spills to memory which is why it’s bad and slow. The UMD is technically one, but yes, it is made up of DX12, OCL, Vulkan, etc components where the component isn’t necessarily loaded every call.
Great video and explainer!
As someone who bought an Arc A770 last year and has seen huge improvements in its drivers; it gives me great confidence seeing Toms passion and drive for Intel GPUS and that the future of intel graphics can only be getting better with the likes of professionals like him in Intel. 👍
I'm hoping so cause I would love to try them in future!
Please tell me you didn't buy it as your main GPU...
As Somebody who’s had two A770’s, (A770 Titan, A770 LE/Founders) this is true. I’d confidently say this card is faster than the 3060Ti and comparable to the RTX 4060Ti
@@KingFeraligator No,I can't tell you. You'll need to say pretty please. Then I might tell you...
im looking forward to the next gen ARC .. my current ARC 770 16gb is a great little card .. my main7800x3d and 7900xtx is a great combo but ive been using my 14600kf and ARC 770 16gb build and it has been chugging along beautifully !!
This was a great video! Thanks for having Tom on GN. I learned quite a bit in this one. On a separate note, I got an A770 from someone who gave up on Intel early. I let it sit in the box for 6-9 months and it aged like fine wine. It's really cool to see the performance getting unlocked by the hardworking people at Intel.
What I love about GN is that they respect their audience intelligence and have the guts to dive into more complex subjects such as this one. Thank you, Steve! This stuff is trully fascinating.
I was talking to a friend about drivers and how they work not too long ago, but couldn't really convey the finer details (not that I knew them to this extent anyway). This video explains it so well in "normal people speak". I really enjoy the way these video's and difficult topics are framed and laid out for everyone to understand. Awesome reporing!
Edit: I really appreciate that Intel is fixed on letting people know what they are doing with Arc and really making a solid attempt at the GPU market. More competition = better and with AMD and NVIDIA so focussed on AI now, barely noticing their "ol' reliable" gamer market, we need someone to keep their attention invested there, before they get their marketshare snatsched away long term.
Nvidia at least have greatly increased their workforce over the past few years. They can easily have people 100% focused on gaming GPUs and software.
Only Nvidia has explicitly signaled that gamers are on the back burner, and their projected priorities sell it. Don't see that with AMD pursuing AI, at least not yet.
''Number one, RTX was invented for gamers and for RTX, the technology, the most important technology is AI. Without AI, we could not do ray tracing in real time. It was not even possible. And the first AI project in our company-the number one AI focus was Deep Learning Super Sampling (DLSS). Deep learning. That is the pillar of RTX.'' Jensen Huang 2023@@SelecaoOfMidas
12:38 All that's happening per frame. PER FRAME, silicon + electrons = magic.
Magic is definitely the best explanation for it. It is absolutely unbelievable when you think about how much happens to generate a single frame of a video game.
Certainly more electrons than silicon!
Magic is a lot of electric tickles
@@GamersNexus I mean, it is magic. We're making silicon runes, engraving them with magical sigils, then running energy through it to produce effects that can't be done in nature.
@@GamersNexus
hi they need to do much more software works and maybe it will be good. maybe VERY strange but do something as fast as you can! in all people's animal man types are, evil also are. MY NOT FRIENDLY HALF BROTHER IS IN USA MORE THAN 20 YEARS! here the building water contains some spicy thing, bad for heart!!!! leg, and blood pressure raiser! and who knows what other! MANY TIMES! becasue of evil man types plumbers, many times comed to the building "water manhole" and done evil things. and nobody against evils not done nothing! many people not even want! many not can. because evil man types are in many places! mostly in police, plumbers, etc! more than 6 years MANY evil things are!!!!! I WRITED TO MANY PEOPLE, to Israeli jews, to others also, to vikings and to many others! MANY PEOPLE NOT HELPED NOTHING! MOSTLY HERE MANY EVIL PEOPLE ARE AND IN MANY NEAR COUNTRIES! EVILS NOT LIKE ME!!!! MY HALF BROTHER BATMAN OR EVEN BADDER MAN TYPE!!!! WAITING FOR MY MOM AND I TO DIE! MY MOM IS SO OLD SHE CAN DIE ANYTIME! i almost died few times batman not even writed back. LIAR, CORRUPT ROMANIAN not jew not viking, not hungarian. more than 20 years in Usa i never knowed his mobile number! but TOTALLY NOT FRIENDLY! poker player, office worker more than 50 he has blue eyes, different name, face, domain name owner of "HTTP://SONKOLY.EU" and he has others. IF YOU WANT TO HELP PLS DON'T WAST TO MUCH TIME ON EVIL PEOPLES! THEY NOT DESERVE EVEN THE TIME OF BETTER PEOPLE'S! I NEED TO GET AWAY FROM EVIL PEOPLES! BUT EVIL PEOPLES ARE IN MANY COUNTRIES IN MANY PLACES, MOSTLY WHERE JOB OFFERS ARE. HERE HARD TO SELL EVEN CHEAP THINGS LIKE CHEAP DIESEL AUTO, NEW BOOKS, OTHER THINGS. BECAUSE EVILS NOT LIKE OTHER PEOPLES! batman talking with others with BAD PEOPLES! they not like me!!!! and they doing evil things!!!!! I HAVE MONEY FOR FEW THINGS. i'm hungry but many thing contains other thing. i tried to find good peoples but hard because not many are. i can't communicate with many people on facebook, phone, in person, not friendlies, they are talking with others. many ignored me. many people evil even in simple things. i wasted many time, emails, chats, sending cv, phone talks, etc. evils gived job for evils without high school or with high school fast easily. if different people trying something time wasting. nefarious pigmans and other evils runined many food, water products, and many thing! evils against other evils not done nothing. many people wasting time many years! with one batman or even badder man type. PEOPLES HEALT, LIFE NOT GAME. HELP NEEDED MANY YEARS
This made me realize the genius work from Valve on DXVK, the amount of work it needed to be seamless to the games is outstanding. Thank you Steve for the amazing interview.
That’s what their new drivers legacy mode is. It would have been nice for them to just ship their dx9~11 dxvk library as a signed windows module and focusing on making dx12 Vulkan and OpenGL better. Less surfaces to maintain and find improvements for.
Most of that work was done by Microsoft. MS created DirectX HLSL to Vulkan SPIR-V compiler and released is as open source in 2018. HLSL is DirectX shader language. Spir-V is shader language created for Vulkan. MS created this compiler for Linux Subsystem for Windows (WSL). Compiler can translate Vulkan to DirectX and DirectX to Vulkan. DirectXShaderCompiler currently have more than 600 github forks created by: Valve, Sony, Apple and more
This video helps me a lot as a hardware person.
I don’t do much on the software side. I think I know what it’s doing (educated guessing) but it’s nice to have an explainer of what it’s actually doing.
This was fantastic, and seeing how invested the Arc team is has me very excited for Battlemage
Just finished watching the video from earlier in the week. Perfect time to jump into this one. *THANKS STEVE!*
This was one of the most fascinating deep dies I've watched in quite a while. Tom is an excellent presenter and does a great job of taking really complex stuff and making an understandable.
I love these sorts of dives, it might not ever make us experts in the field, but it gives you a good sense of just how complex all this stuff is, this in turns allows you to get a better perspective on things.
I love these engineering insights. Keep it up Steve and thanks!
My brain hurts from this, but I have a better understanding and appreciate how much goes into driver optimizations. Hats off to Tom.
Always love seeing some Arc content. Thanks Steve and Tom for these types of videos
I was *VERY* close to buying an ARC card for my new system build, and Intel letting their engineers participate in videos of this quality and detail increases the likelihood that my next GPU ends up being one of theirs (given reasonable performance, power consumption and driver quality).
This is really, really good PR, and it's very interesting to watch. Thanks for covering both technical and business/process aspects, and for not dumbing things down. Kudos to both Steve for facilitating a conversation where he's sometimes slightly out of his depth (but still asks relevant questions!), and to Tom for very good explanations of pretty technical topics.
I can only repeat: Incredible respect for Intel in working on this stuff even after the initial failure and issues. I hope the current financial losses dont stop you guys from keeping it up and coming with nee Hardware at some point.
I will buy it, no matter what. yYou guys worked hard to convince you dont just give up, and you will have earned that money!
Unless intel pay their fines and own their mistakes, I will never buy anything intel related..
“The European Commission has re-imposed a fine of around €376.36 million on Intel for a previously established abuse of dominant position in the market for computer chips called x86 central processing units ('CPUs')” from September 2023
This series is good. Very informative and Mr Intel is a natural teacher. Thanks Steve
Man, this is the kind of content youtube was made for. Outstanding and interesting work here, from both Steve and Tom !
As a game developer myself, I really love to see these conversations, it's not every day I have the opportunity to hear from hardware engineers every day!
I do feel it's important to specify one thing I think wasn't super clear: just as every car uses an engine of some sort, same goes for games. The engine is the library that provides programmers like me with the tools we need to run the game in the first place. What Tom meant by "a lot of games use an engine" is "a lot of games use *publicly available* engines" (that's your Unreal and Unity and such). It's worth noting that a lot of game studios build their own if they have the resources to do so, there are quite a few advantages to this approach.
Love seeing Tom on camera, he always has something new and interesting to say. I do have a follow up question for when something is the driver teams responsibility to improve or the application teams responsibilty. Would it not be a more efficient use of your time to identify the reasons people are getting poor performance i.e. Tom's example of register spilling, and writing tools and documentation that tells app developers "Hey, you're doing this wrong". So that over time more people write better performing code, rather than relying on the driver team to fix that issue for you
Great! More intel arc news on GN always excited when I see the notification.
You and me both fellow arc user.
So am I. Tom Peterson is just a brilliant communicator!
21:40 That's some subtle shade thrown right there. Nice.
What is that sound, for those of us who don't know?
@@Fluke1x Starfield's theme song
@@Fluke1xthey also flickered starfield on the screen right after
Since the first time I saw Tap in a video I thought he should be the face of Intel. No marketing veil, just honest and open descriptions of the inner workings of a GPU. He genuinely wants you to understand what a GPU is and what it does like you would expect from a teacher.
I mean, of course his appearance on certain channels is intended as some level of marketing, but it's not full of the normal bullshit where they ridiculously exaggerate everything about what they're selling. When he appeared on PCWorld I asked in the Q&A how Arc performed in VR and he straight up said "Do not buy Arc for VR." You would _NEVER_ see that from an AMD or Nvidia rep, and you gotta respect when he shows you respect with a blunt honest answer like that.
I love these in depth tech talkes. Tom does such a fantastic job explaining and even is kind enough to bring slides. Cant wait to see more!
It's refreshing to see someone honestly talk about their products, as long as Intel offer their GPU I won't be going back to the other two.
I was basically the "they're the same image" meme when that before and after slide came up, I was puzzled until it was zoomed in haha.
Honestly, I got lost pretty early (I understand nothing about these things and goodness knows a scientific/analytic mind is not one of my strong points) but even with my limited understanding, it was really interesting. Another example of "yeah just optimise it" when the work behind it is truly daunting.
Thanks for the interview Steve. Speaking of optimisation, do you think you could have a game developer on and ask them why a game collection that was 3gb was turned into one that's dozens of gbs heavier and doesn't even work properly? :p it might not be as technical as this but goodness knows I'd like to understand why game devs have decided compression and optimisation aren't necessary, maybe there's a graph for that too. Cheers!
An increase in install size like that is usually because the art assets are higher resolution. For example you might have had 256x256 pixel (width x height) images for textures (the stuff that gives color & depth to 3D models and surfaces), but they might have shipped higher resolution textures like 2048x2048. You might think that's 8x bigger because 256 -> 2048 but actually it's about 64x bigger because it's two dimensions (same idea as 1080p to 4K isn't 2x more pixels, it's 4x more).
So think for every texture in the game, you might have thousands or tens of thousands of them, if all of them are made bigger then the game size balloons really fast. Higher resolution textures doesn't necessarily improve things on its own though, cause the new texture needs to actually have more detail to make use of the extra size.
If you just resize it and that's it, you still have the same amount of detail as you had before which won't look any better at all. So you need to have an artist go in and redo the texture to make it more detailed and less blocky. And now there's AI tech that can do some of these tasks for upscaling textures (same idea as DLSS upscaling but applied by game devs to the actual game).
Props to the Intel guys who went out of their way to make these slides, and Tom for going over them in detail. As an engineer, it can be annoying to put technical work aside to create diagrams and presentations, but they really went out of their way to make sure this looks great and serves as efficient documentation as well. Thanks, Intel & Tom, and thanks, Steve!
Are we going to get a discussion about MESA support?
Especially the sparse implementation there (with TR-TT and VM_BIND) 🐸
I don't feel qualified enough yet, but can study it more. Maybe that'd be a good one for Wendell to help me with!
@@GamersNexus as a linux user, much appreciated and would be content to see a video on that one too :).
@@GamersNexus An interview with one or more of the people working on MESA (not specifically intel ones) would be awesome
+1 to all the Mesa comments, would really appreciate some educational content there!
I have been a gamer almost from the very beginning of gaming back on DOS, have very little understanding of how things actually work these days but i still love listening to people like this.
What a amazingly informative video. Tom does a great job of breaking down very complicated topics and Steve is asking all the right questions.
By far the best look at driver/hardware for drivers. I used to work in gaming and loved going over to the graphics guys desks to see what they were doing to optimize the game or games for specific engines. It’s crazy the amount of work they do to just get 1 more fps and a few extra polys on the screen.
Love this series and love the content. Tom probably doesn’t remember me, but have had the chance to chat with him while he was at Nvidia long ago.
Saw the notification. Clicked it. Liked it.
This is incredible! I’ve been using an A750 since launch and look forward to the future hardware. Really appreciate the hard work employees have put into improving the performance. It’s super cool to see behind the scenes of what makes them tick.
it really goes to show how well AMD and nvidia have done given just how complex all this is, and intel is trying to get into it and having to do it from pretty much scratch.
Or if anything it shows how bad they are considering intels basically redone their entire driver multiple times over the past 2 or so years while amd and nvidia still have issues that have been prevalent for literal years.
Hopefully intel kicks them into shape
@@AvocadoBondage Saying intel doesn't have problems or that somehow some way they will get their drivers to be bug free is.... funny. to say the least anyway
Well, just because they JUST got into consumer GPU hardware. They're VERY new to this, so considering they already have a DLSS and FSR competitor in their first card generation is outstanding already. Ontop of that, they have to catch up with drivers, but it seems like their optimizations have been very massive considering how poor they originally performed when they launched.
@@PixelatedWolf2077 They are not in any way "new" to GPUs as they have been putting out APUs for longer than AMD has. It is however the first time intel has been serious about making their GPUs work well.
@Azureskies01 Well originally, the iGPU for Intel was alot like a GT1030. It had its sole purpose as a display driver. Intel, however, realized they could benefit quite a bit by trying to make a proper iGPU.
That first attempt was Iris which was a step in the right direction. It was better than the normal UHD graphics of the time. Then their silly little productivity only GPU came out and it turns out it wasn't bad, so that's what got the ball rolling towards making ARC. Arc didn't come until 2022 however. So in reality, they haven't had much experience in both driver tech and making a proper GPU.
Please discuss XeSS in another episode I think it would be very interesting to actually understand what's happening with the upscaling tech
As someone who knows just enough about this stuff to understand what TAP is saying at a surface level, this is possibly one of GN's best videos. The average gamer probably won't understand much of this and that's OK, but as a software engineering student and GPU nerd, this is so cool.
These Arc series videos, walkthroughs are great, whenever i see Arc on gamer nexus i immediately watch the full video and enjoy it.
I noticed something Steve said "I didn't feel confident in my understanding to....". THIS is what separates GN from most other channels and I'm not talking about only PC hardware, just technology in general. A lot of others will just do some half a$$ed job of compiling information and present it without a clue about what they're talking about. Steve doesn't treat us like that. Massive respect.
More of this. We need people from the industry we are deeply invested being on this channel talking about the stuff we care, especially in deep dive.
I may or may not currently work for the big blue silicon corp, but having someone like Tom on the GPU team is essential. When Raja Koduri left, ARC's future felt way less certain, but I think these types of improvements and attention in the gaming space translate to improvements in the AI compute space, so upper management sees the financial appeal of getting their hands dirty in the gaming space too. I'm actually stoked for the continued improvement to Alchemist and the release of Battlemage.
This was amazing, Tom was able to explain really complex topics for the average user to understand. Makes me appropriate all the work the Arc team is doing. Thanks Tom!
Please more videos like these, they are genuinely enjoyable and informative!
I saw someone wishing "good luck" at 23:22 but none was needed. You two nailed it.
Videos like this are precisely why I'm a subscriber. Can't wait to see more!
I LOVE this kind of in-depth and well informed break down of complex concepts like this. Ty to everyone involved.
Please grill him on his companys DX3\5\7 OGL1.x\2.x emulation\wrapping. They never want to talk about it.
Companies need to have more people like Tom out and doing content with creators on UA-cam and other such media like these discussions. It's quite refreshing seeing a passionate engineer talk about what they know.
I personally have actually said "this game is optimised!" about the initial release of Overwatch (2016). For how good it looked it really was pretty damn well optimised and still looked pretty great even on lower settings on very old hardware while still delivering good or even great FPS. I guess the only area where it lost some points from me was it used to have some very odd random CTDs for no apparent reason.
Steve...I'm a very basic computer user (on my $100 Chromebook ight now). It does what I need. But I'm always interested in the computer space and having such interviews is a great service to your viewers. I appreciate you and the team.
I switched to a INTEL A770 and I can say Im impressed, I play alot of FPS and my A770 seems to keep up. it's not super fast but stable.
This is singular and amazing, seeing the big corporations talk with the customers and seeing two very smart and nice people talk it's a breath of fresh air to the industry. This is great. Thanks for talking to us.
i love that intel is working so hard at this.... its so great to see. my next budget build i think id be in a place to go intel now.
Same thoughts.
Very optimistic for their future
The Tom videos are an awesome peak under the hood. Thanks for continuing to do these, Steve, it's great content for the community! And also want to acknowledge that Tom has continued to improve on camera from the early episodes when he was using a more Arc-centric framing for the discussion, versus the more recent episodes where he discusses the basic concepts in neutral/general terms, and then uses Intel's ongoing efforts as a case study. More Tom!
Thank you for this....the compiling shaders thing has always puzzled me as while it had existed before, nowdays it is everywhere...i just wish it was done behind the scenes rather than a bar when starting up the game lol
I do like knowing though that the game isn't quite ready yet. Better than waiting and not being sure if it's working or stuck!
I did always wonder are hardrives making a difference to performance and compilations as noticed past few years games on hdd more issues than ssds
Yeah I wondered about this as well. I don't think I've even heard it as a phrase until the past few years. That was more a behind the scenes thing in the past.
To be fair I'd rather have the shader compile happen before I start playing rather than have a stutter fiesta when playing the game... Jedi Survivor for example. I think many games are way more shader heavy than they used to be so that can be part of the reason why this happens often now. AFAIK it has been a problem with UE4 and now with UE5 engine games mostly.
@zivzulander first I remember seeing it was the ps3 emulator rspc3 though have said it has been around for a while with some older battlefield games....but now almost every game has it
Tom is so much fun to watch. Industry needs more people like him.
Tom Petersen is amazing. I follow him since he was at Nvidia and he did a review about one of my first gpus, the gtx 650ti. His great commentary about products and software mixed with silly jokes here and there makes him the perfect man for the job lol
It's so nice when Tom Petersen say, "it's saved on disk". and i'm like awyeah! Thats how it used to be! "DISK" You hardly ever hear that term anymore.
So, if I get this right the drivers are "driving" the cards. Those DLLs and Kernels seems to be the instructions on how to handle the "road".
It's almost as if you had to learn to drive a new car in a new city for every new game. But every time you come into a new city; I need to learn to drive differently. Not only because of the car but also because of the roads, signs, rules, temperature and terrain...
Sometimes it's not too different from one city to another, but other times it's completely wild. What could be the safest maneuver in one city could lead to your death in another one.
this is absolutely fantastic, I seriously love chats like this, it's what has been sorely missing in the relationship between product and consumer. Talking to the people who actually *make* the damn thing makes me feel indescribably more informed and appreciative of everything.
Thanks for the Selfie in Asia Steve.
Absolutely!
@@GamersNexus hi
they need to do much more software works and maybe it will be good. maybe VERY strange but do something as fast as you can! in all people's animal man types are, evil also are. MY NOT FRIENDLY HALF BROTHER IS IN USA MORE THAN 20 YEARS! here the building water contains some spicy thing, bad for heart!!!! leg, and blood pressure raiser! and who knows what other! MANY TIMES! becasue of evil man types plumbers, many times comed to the building "water manhole" and done evil things. and nobody against evils not done nothing! many people not even want! many not can. because evil man types are in many places! mostly in police, plumbers, etc! more than 6 years MANY evil things are!!!!! I WRITED TO MANY PEOPLE, to Israeli jews, to others also, to vikings and to many others! MANY PEOPLE NOT HELPED NOTHING! MOSTLY HERE MANY EVIL PEOPLE ARE AND IN MANY NEAR COUNTRIES! EVILS NOT LIKE ME!!!! MY HALF BROTHER BATMAN OR EVEN BADDER MAN TYPE!!!! WAITING FOR MY MOM AND I TO DIE! MY MOM IS SO OLD SHE CAN DIE ANYTIME! i almost died few times batman not even writed back. LIAR, CORRUPT ROMANIAN not jew not viking, not hungarian. more than 20 years in Usa i never knowed his mobile number! but TOTALLY NOT FRIENDLY! poker player, office worker more than 50 he has blue eyes, different name, face, domain name owner of "HTTP://SONKOLY.EU" and he has others. IF YOU WANT TO HELP PLS DON'T WAST TO MUCH TIME ON EVIL PEOPLES! THEY NOT DESERVE EVEN THE TIME OF BETTER PEOPLE'S! I NEED TO GET AWAY FROM EVIL PEOPLES! BUT EVIL PEOPLES ARE IN MANY COUNTRIES IN MANY PLACES, MOSTLY WHERE JOB OFFERS ARE. HERE HARD TO SELL EVEN CHEAP THINGS LIKE CHEAP DIESEL AUTO, NEW BOOKS, OTHER THINGS. BECAUSE EVILS NOT LIKE OTHER PEOPLES! batman talking with others with BAD PEOPLES! they not like me!!!! and they doing evil things!!!!! I HAVE MONEY FOR FEW THINGS. i'm hungry but many thing contains other thing. i tried to find good peoples but hard because not many are. i can't communicate with many people on facebook, phone, in person, not friendlies, they are talking with others. many ignored me. many people evil even in simple things. i wasted many time, emails, chats, sending cv, phone talks, etc. evils gived job for evils without high school or with high school fast easily. if different people trying something time wasting. nefarious pigmans and other evils runined many food, water products, and many thing! evils against other evils not done nothing. many people wasting time many years! with one batman or even badder man type. PEOPLES HEALT, LIFE NOT GAME. HELP NEEDED MANY YEARS
@@GamersNexus They need to do much more software works and maybe it will be good. maybe VERY strange but do something as fast as you can! in all people's animal man types are, evil also are. MY NOT FRIENDLY HALF BROTHER IS IN USA MORE THAN 20 YEARS! here the building water contains some spicy thing, bad for heart!!!! leg, and blood pressure raiser! and who knows what other! MANY TIMES! becasue of evil man types plumbers, many times comed to the building "water manhole" and done evil things. and nobody against evils not done nothing! many people not even want! many not can. because evil man types are in many places! mostly in police, plumbers, etc! more than 6 years MANY evil things are!!!!! I WRITED TO MANY PEOPLE, to Israeli jews, to others also, to vikings and to many others! MANY PEOPLE NOT HELPED NOTHING! MOSTLY HERE MANY EVIL PEOPLE ARE AND IN MANY NEAR COUNTRIES! EVILS NOT LIKE ME!!!! MY HALF BROTHER BATMAN OR EVEN BADDER MAN TYPE!!!! WAITING FOR MY MOM AND I TO DIE! MY MOM IS SO OLD SHE CAN DIE ANYTIME! i almost died few times batman not even writed back. LIAR, CORRUPT ROMANIAN not jew not viking, not hungarian. more than 20 years in Usa i never knowed his mobile number! but TOTALLY NOT FRIENDLY! poker player, office worker more than 50 he has blue eyes, different name, face, domain name owner of "HTTP://SONKOLY.EU" and he has others. IF YOU WANT TO HELP PLS DON'T WAST TO MUCH TIME ON EVIL PEOPLES! THEY NOT DESERVE EVEN THE TIME OF BETTER PEOPLE'S! I NEED TO GET AWAY FROM EVIL PEOPLES! BUT EVIL PEOPLES ARE IN MANY COUNTRIES IN MANY PLACES, MOSTLY WHERE JOB OFFERS ARE. HERE HARD TO SELL EVEN CHEAP THINGS LIKE CHEAP DIESEL AUTO, NEW BOOKS, OTHER THINGS. BECAUSE EVILS NOT LIKE OTHER PEOPLES! batman talking with others with BAD PEOPLES! they not like me!!!! and they doing evil things!!!!! I HAVE MONEY FOR FEW THINGS. i'm hungry but many thing contains other thing. i tried to find good peoples but hard because not many are. i can't communicate with many people on facebook, phone, in person, not friendlies, they are talking with others. many ignored me. many people evil even in simple things. i wasted many time, emails, chats, sending cv, phone talks, etc. evils gived job for evils without high school or with high school fast easily. if different people trying something time wasting. nefarious pigmans and other evils runined many food, water products, and many thing! evils against other evils not done nothing. many people wasting time many years! with one batman or even badder man type. PEOPLES HEALT, LIFE NOT GAME. HELP NEEDED MANY YEARS
This was some super enlightening stuff. I could just and just follow the topic at hand, but that's what made it so interesting in the first place. These companies really should let you guys interview their lead engineers like this more often. Good stuff.
Kudos to GN for refraining from judging if game is "optimized", because it's a difficult things to analyze without knowing internal details of the game and it's a term often misunderstood by gamers and even experts misinterpreted it. I remember Digital Foundry guys, who are usually very knowledgeable, criticizing a completely empty open world UE5 demo for not utilizing all CPU cores and comparing it to Cyberpunk - with tons of NPCs and various simulations and systems running in the background. That UE5 demo literally had nothing to run on other cores, because there was nothing except graphics and one player entity, so even if their conclusion about the engine was correct, the example used was not and could not be used for this kind of comparison. It was bizarre to see this kind of cluelessness from some of the best "game optimization" nerds in the media.
There is always something to be run on other cores in the vast majority of cases. The case is not IF you can run things on other cores its more like _is it worth the complexity_? -because trying to juggle data flow and data race problems can make a multithreaded program slower than singlethreaded if you get it wrong. But it's still possible to make it go faster. This is especially true with game engines like Unreal because the very nature of a game engine shatters up the program into many thousands of individualized componments like transform components which are relatively straightforward to parallelize. So much so that Unreal does it by default...
@@MallchadAgreed. That is what really hinders the amibtious UE4 games like Jedi Survivor, Callisto protocol, etc is the engine's poor CPU core utilization. Its indicative of the engine's origins in the early-to-mid 2010s where CPU clock speed was more important than utilizaton of the cores available.
@@MSquared135 It's not really the engines fault. the engine is just a shell you bolt onto what you eventually call your game. it's up to the game programmer to make it to faster.
@@Mallchad In some cases it is the engine's fault. An engine is more along the lines of the _framework or scaffolding_ that you build your game within. If that framework does not scale properly across multiple threads then that will manifest as a CPU bottleneck when you eventually stress the framework in exactly the right way that it can't scale properly, and the only remedy is to either stop stressing the framework (dumb down your AI behaviours, reduce the number of AI agents, reuse meshes more to take advantage of instanced draw calls more, etc) or replace parts of the framework with your own in-house code (write your own AI system that can scale the number of agents well, write your own GPU-driven rendering pipeline that can handle many instances of different meshes well).
@@jcm2606 Yes, the "replace parts of the framework with your own in-house code" is why I don't see it as the engines fault.
Unless you're using a completely priorietary engine where you have no ability to acces or even read the code, you usually have options. and even highly proprietary engines usually leave you with options for engine-tinkering- the game and the engine are one and the same, and must be treated as such.
Amazes me how knowledgeable these guys really are. I’m no idiot but can’t even comprehend most of this on a technical level like him.
Great video. My two only remaining questions are when Battlemage is coming and if the drivers will make it shine 😂 (jk jk)
But seriously I so want to buy Intel GPU since prices are great but I’m still discouraged by the prevalent issues and I don’t want to risk the next big field of stars game won’t work on launch.
It don't matter how expert you are. When guys like Tom Petersen and Jon Gerow talk, you always learn something new. Kudos to GN.
Another GN x TAP video so quickly?! Thanks Steve! PS. It would be wonderful if you took us behind the scenes at the Intel graphics dev lab office
A tech channel, making *actually* technical content. Simply fucking brilliant!!!
Thanks for your continued coverage of Arc! On your most recent are video you talked about MSAA being broken for Arc in GTA and how the Intel team responded and said it was a GTA exclusive issue. Please tell the team this isn’t exclusive to GTA, in some older games like Watch Dogs 1 and Assassin’s Creed Black Flag MSAA is completely broken on Arc!
The Black flag AA issue is old and occured on AMD drivers many years ago . It's bad programming from Ubisoft. Not bad drivers.
This is THE WAY to do tech journalism. Fkin amazing work - not only this video, but Gamers Nexus' whole approach. Also: Big respect to Intel for doing "risky" and honest interviews/deepdives like this. It really sets them apart from the rest.
The fact that Tom is still doing this stuff (and Intel is letting him) makes me think they still have a long-term plan to _eventually_ get there with their graphics division. I truly believe if Intel holds out a bit and doesn't kill Arc too fast, they can carve out a chunk of the market by focusing on price/perf gamers. From that starting position, it can grow further. Nvidia needs a kick in the pants and to lose sales to snap back to reality.
I bought ARC .. and I like ARC. Thank you Tom for continually improving this product.
5:50, just to add; not all shaders are text when they land in driver land. For DirectX12 and Vulkan, they have already compiled the text to an intermediary binary format (DXIL or SPIRV) which is easier for the driver to handle since there's less steps required to convert it to their instructions. But still it needs that compilation step. Probably text is easier to explain though (and relevant to opengl)
In contrast, there's WebGL / WebGPU, where the browser parses the shader source (GLSL / WGSL) on demand, then internally translates into the browser's target API shader language (HLSL, GLSL, whatever Metal uses) then complies that again 😢
It even needs to translate when it's GLSL to GLSL for safety and small changes in the flavor of the language.
Chrome even uses something called ANGLE, which is implementing OpenGL in terms of DirectX for... reasons. So you can in theory end up translating GLSL to GLSL to HLSL to DXIL 😢
Hopefully browsers get good at caching all this nonsense!
@SimonBuchanNz such a giant step back. DXIL and SPIRV were greater specifically to remove the compiler from the process. Now you only have to deal with DXC and glslangvalidator directly (you can just file an issue rather than contact IHVs), rather than 1 compiler per vendor. Web never seems to learn from mistakes people found out in the real world :/
@@nielsbishere well, there *are* reasons (security, a lot of the time) ... but I wouldn't be surprised if *eventually* there's a binary form of WGSL, and in theory the browsers could write their own optimizing emitters for DXIL and SPIRV (dunno about Metal) to skip another compile - they do way harder stuff all the time.
A lot of the time, though, this stuff is just politics: I think there's a hint of a suggestion that SPIRV might have been vetoed because it might make Apple look worse due to the extra translation they would need due to not having Vulkan (the obvious response being ... why not?)
@SimonBuchanNz I think in the end it's because Apple wants to force the way the api can go. With khronos they have way less influence which they don't like. Apple seems very determined to cut all ties with khronos, and I don't really know why. Security might be true in some cases, but I've ran into enough driver crashes to know that probably this also increases likelihood of bigger issues because of the additional layers in between (seeing as binary -> ISA already goes wrong). Now you'll have to deal with every different browser having different compilers and every backend and every vendor having different behavior... with spirv that'd have been reduced to 1, maybe 2 compilers (dxc for hlsl and glslangvalidator for glsl)
I love these series, I didn't know there were more coming up after last one. Definitely enjoying Toms in-depth knowledge here. Thanks for sharing!
Tom Peterson seems like such a nice guy :)
Tom is great. Love that you're exposing this part of the stack. I was a firmware developer in another life and totally appreciate this stuff still.
these videos with tom are great!
Tom, your technical insight is inspirational. I love the passion and excitement you exude when speaking on this stuff. Intel Arc feels like it's in great hands with someone like you, and your team, working in the trenches. I look forward to next gen Arc and will definitely be jumping in.
just got a770 recently .. this is interesting
Great choice, hope you will have a great time with it
Absolutely fascinating!! I really hope Intel sticks it out in this market. ARC is getting better and gamers NEED Intel to be successful.
Really thinking of ARC for next card. This community out reach makes it feel less like a corporation
The ARC Team seems to be genuinely committed to their product.
I always love these videos, Tom giving us info about arc I am very excited for Arc and hope to see Paladin arctype.
Doom Eternal was OPTIMIZED. (🙏 Thank you ID).
Great review! I learned a lot of this stuff in my undergrad computer science courses. This guy can take difficult concepts and explain them in a simplified fashion.
I do wonder how much the game engine matters, for example is UT4 more optimised than UT5 purely because it has been around longer and more mature. I hear so much about games being ported from 4 to 5 and i wonder does the performance take a hit just by doing that
UE5 games are heavier because the modern features Devs choose to use are heavier.
Great question. Without being in a position to directly answer, I'd wager that UE5 is probably more optimized even in spite of UE4's maturity just because UE5 should be rolling all those optimizations into UE4. I do think there are probably situations where your thought is right though: If a dev has an option of a mature engine versus a totally ground-up build, there are probably things better optimized on the older one.
Another problem many gamers don't understand is that a more flexible modern feature can be more expensive even if visually it looks the same or even worse. A photorealistic room with baked light in UE4 will run at 200 FPS and could even look better than the same room in UE5 with lumen running at 60 FPS, where every light can be dynamically changed and the wall be demolished. And it's not because UE5 is unoptimized, it's because it does things in real-time that were offline baked before and this allows for more dynamic worlds and gameplay. But when a dev still makes very static game while using fully dynamic lighting, this new cost doesn't feel justified.
It does matter a lot since the engine dictates the exact calls, shader code, resource usages and such that are fed down into the driver. OpenGL and pre-DX12 drivers can do a lot to wring more performance out of what they've been given by the engine, but at the end of the day it _is_ the engine that dictates what exactly is going on in the frame. DX12 and Vulkan turn this up several notches, too, as they push a lot of stuff out of the driver and back into the engine. Management of API-level command buffers, memory management and frame presentation/buffering under OpenGL and DX11-and-below happened largely in the API runtime itself or the driver based on what the engine fed the driver, but DX12 and Vulkan both push these back up into the engine, making the engine responsible for recording API commands into an API command buffer, performing memory management at the API level, dictating the overall flow of frame presentation and frame buffering, etc. As time goes on this is happening more and more, too, as seen by the Approaching Zero Driver Overhead movement in OpenGL or "bindless" style descriptor management in Vulkan (descriptor indexing, buffer device addresses, etc).
@kazioo2 that is it exactly when I see a game and the graphics on the face of it do not look that great...and yet the performance is nowhere near what I would expect from what I see
As a QA engineer, I have gotten very close to all these various layers of the pipeline. I am glad for the deep dive into it. Make all this stuff common sense.
These are great videos! Even though I may not understand everything that is being said, it at least shows us that not only are the companies doing as much as they can, but they're showing it with someone who at least understands them to ask questions and confirm what they're being told, to a degree of course. It's also great when you have someone like Tom who is enthusiastic about the work he and his teams do!
Hi, im tech lead for one of those games which was optimized in the intel driver lately. I would have question for Intel guy, is there any way to find out what did you optimize for our game? Maybe we could do more stuff on our side to make the game run faster.
Wow, this was very well explained and I have real knowledge about gpus. Really in awe with the complexity and can't believe gpus work at all with everything it needs to do.
Wish you had asked him this critical question:
"what was the point of 2 alpha releases with DG1 and DG2, as well as over a decade of integrated graphics drivers? Why were the drivers not ready when you had 2 years to get them ready before arc launched if we only count DG1 and DG2, and why was all the integrated driver work useless for ARC?"
This video has been live for literally 5 minutes. You could not have even known what was asked, first of all, and secondly, the topic of this video is the driver stack.
@@GamersNexus On one hand, sure, the commentor did not watch the whole video before commenting, neither did I, so far. On the other... Aren't they right? Sure, you say it's not the topic of THIS video, but how many videos have there been with Arc engineers so far? It's great to celebrate that Intel is making progress, but shouldn't they be held accountable for the original mess?
@@janbenes3165 do you mean when we held them accountable for 2 straight years when it was all happening actively? Because yes, and we did. We covered DG1 before anyone else, covered DG2's messy launch, and in fact we broke the story on what a trainwreck the drivers were for Arc originally -- so much so that other reporters reported on our findings and on Intel's direct response to them. We covered that story. Receipts:
Literally called "Worst We've Tested" - ua-cam.com/video/MjYSeT-T5uk/v-deo.html
Intel responds: ua-cam.com/video/znJzozRfJYY/v-deo.html
DG1 launch: ua-cam.com/video/HSseaknEv9Q/v-deo.html
Talking about what a nightmare the A380 was: ua-cam.com/video/La-dcK4h4ZU/v-deo.html
Unrelated but while we're at it, holding them accountable for a terrible card design: ua-cam.com/video/N371iMe_nfA/v-deo.html
@@GamersNexus Yeah, you covered the story. I know you did. I was there too. But there you have a person who was close to the top when it was all happening, or at least should have relevant information about it and did not ask simply "why?" it happened. I'm not saying you should be dragging Arc through mud or anything like that, but this question of "why the DG1 never amounted to functional drivers?" never comes up.
The answers on the surface seem obvious to anyone who has been gaming long enough.
Why are games releasing in a broken state so often? Sometimes you have to get a product out. You could spend two years with a product working in a lab, and not learn as much as you could learn three months post release.
Arc is Intel's first attempt at a competitive dedicated GPU, which is a different product from the integrated GPUs. Gaming was likely not a priority, and usually nobody cared about gaming performance on Intel integrated. This is essentially Intel working at "getting in shape".
@GamersNexus You asked for questions so here we go. What is the point of cache and why is it so incredibly small? I'm under the impression it holds most commonly accessed data and formulations. it seems like it serves an important purpose (CPU's with essentially identical stats with larger cache sizes seem to do better as do HDD's) so why don't we increase it significantly? For example why not double or quadruple the cache size?