PCs in the 486 days (when DOOM released) most definitely had 'dedicated' video cards (CGA, EGA, VGA) with different color depths (amount of available colors) and 2D acceleration... PCs required a video card for ALL output to a monitor... Without one, you could not use the computer... But they had no 3D capabilities. That all changed when the 3dfx Voodoo add-on cards became popular, and the GPU race began.
That's a good distinction, thanks! I was definitely referring to hardware accelerated graphics cards
@@nathanbaggsOf course! Just wanted to comment in case someone wasn't aware of that specific detail.
OOOOh, thanks for that 3DFX Voodoo nostalgia hit.
Yup, I almost had a heart attack hearing this :).
I remember the Half-Life game making the Voodoo card famous.
I feel like not covering Quake was a missed opportunity, as I feel it cleanly covers the transitional step from Doom to Doom 3, with it having BSP trees implemented in full 3D with a software renderer and a later option to use OpenGL.
Thanks for the feedback! I wanted to focus on just the one franchise as I knew the video was going to be quite long. I think looking at Quake and/or Rage would make another interesting video
Agreed. The quake and doom engines are intertwined very closely.
Wolfenstein and Rage were also very important to telling the development story.
I agree; while Doom still had 2D sprites, Quake was fully 3D. The transition from Doom 1 to Doom 3 is huge! I would have shown it from the id Tech engine perspective: Doom 1 used the first id engine, Quake was based on id Tech 2, Quake 3 on id Tech 3, and Doom 3, much later, was based on the id Tech 4 engine. Still, always interesting to hear the background behind this tech!
Learning more about Rage would be amazing. I frequently forget the Rage franchise even exists!@@nathanbaggs
That 3.6 kTri lightning bolt mesh really got me. It differs a lot from the poly budget my boss is trying to make me obey 😂
The funny thing is that Quake source ports (fully 3D) require LESS computational power to run these days than modern Doom clone engines (half 2D, pseudo 3D). There was one guy who figured out how to achieve multi-threading in the Doom software renderer, dramatically reducing the CPU requirements for bigger, more intensive maps, but he probably realized what a massive cesspool of negativity the ZDoom/GZDoom teams are to deal with and he never released it (why should he). So yeah, thanks again GZDoom team, thanks a lot.
aw, that sucks. i hate when egos get in the way of progress
That sucks.
Most modern-style source ports, like GZDoom, have support for hardware rendering these days, where it just renders the scene in full 3D with Vulkan or OpenGL, meaning this isn't necessarily the case.
GZDoom's software renderer is already multithreaded; you can test it yourself by forcing it to run on a single core in Windows Task Manager. Also, the GZDoom devs are very strict about not breaking existing maps, which blocks some potential optimizations.
He's just as bad as the gzdoom team then.
I'm blown away by the amount of information you managed to squeeze in! Awesome work!
Would just like to recommend the channel @Decino to anyone coming down to the comments with more interest in the mechanics of Doom. This game has never stopped being interesting to me; the work of id Software has even been part of my academic career. Really happy to see it covered in this light by Nathan Baggs here, there's just so much to talk about in terms of programming.
This was awesome, especially considering you're able to provide such technical information in such a digestible format. You got a new sub for sure.
Thanks! I've been trying to focus on explaining low-level details at a high-level, glad that comes across!
John Carmack
My Hero.
The original Call of Duty games also have that BSP system, sort of, and you had to manually lay out the level portals for culling, which meant you had to design everything around it. I remember when I was younger it was a nightmare understanding that stuff properly. Back then there were no high-quality YT videos or well-written tutorials. Even for basic stuff like setting up materials for textures you had to write scripts and so on. Today all that stuff is automated or done visually via specialized tools (press a button and it does [thing] for you). So many engine limitations as well, like not being able to have more than four dynamic lights on any surface (as it was using RGB and alpha channels), annoying memory limitations because of 32-bit, and so on. Today it's just "here's your ray tracing, enjoy thousands of dynamic lights at once!".
That's because Call of Duty 1 literally uses the Quake 3 engine and Call of Duty 2 is made on a heavily modified Quake 3 engine, which Infinity Ward just called the "Infinity Ward engine", or IW engine for short.
Btw, even Call of Duty 4 uses the IW engine (an even more updated version), and because of that the editor for making levels for that game is still based on Radiant and still uses the "brush" system from the Quake 3 engine editor to block out the levels, and of course the BSP/portal system.
About the later COD games, I really don't know, because COD 4 was where I stopped caring and playing them...
Please don’t use Adobe Enhance in your future videos. It gives voices a bizarre uncanny quality that is rather uncomfortable to listen to. I can assure you that normal noise removal within your video editor of choice would be more than sufficient, if that’s your concern.
Indeed, dynamic range expansion with a mild gate is plenty.
Uncanny valley... wait, is this guy an AI? Hey Data, your non-humanity is not fooling us! Lore, rather. Go back to Star Trek!
Enhance is only useful if the audio is scuffed. It can fix some issues. But even Audacity has a pretty good sample-based denoiser.
@AROAH thank you for identifying the product responsible for this; I’ve been hearing it in various places and it’s been driving me crazy for years.
Thanks for the feedback! The room I record in is tiny with hard floors and despite trying three different mics the audio is always very echoey. I'm not an audio engineer so despite my best attempts with Audacity and Davinci I can never get good sounding audio.
Adobe Enhance is the only thing I've found that removes the echo which I guess is at the expense of being slightly uncanny.
Still figuring out YouTube and content creation, so always looking to improve!
You are definitely my favourite development deep-dive / Mark Williams lookalike on YouTube.
The innovation id Software has continually showcased is why I've been a lifelong fan since '92.
If you're discussing id Tech, why did you skip Quake 1 and their iteration up to Doom 3? There were huge leaps forward in technology that led to Doom 3.
Because he's not discussing Commander Keen, Wolfenstein, Rage, Hovertank 3D, Tiles of the Dragon, Catacomb 3D, Rescue Rover, Orcs & Elves, Shadow Knights or Dangerous Dave either. He's focusing on Doom. Glad to help.
Respect for id software creating innovations in gaming technology!
1:12 35 times a second*
2:10 You're talking about the reject table. BSP trees deal in 2D walls (in Doom's case), i.e. walls/line segments (linedefs in Doom parlance).
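For anyone curious what the reject table actually is: it's a packed bit matrix, one bit per pair of sectors, consulted to skip expensive line-of-sight checks. Below is a minimal C++ sketch of how such a lump could be queried; the exact bit ordering is an assumption here, so check the Doom source for the authoritative layout.

```cpp
#include <cstdint>
#include <vector>

// Hedged sketch: the REJECT lump as a packed numSectors x numSectors bit
// matrix. A set bit means "sight between these two sectors is rejected",
// so the engine can skip the costly BSP line-of-sight traversal entirely.
bool rejectBlocksSight(const std::vector<std::uint8_t>& rejectLump,
                       int numSectors, int fromSector, int toSector)
{
    const int bitIndex  = fromSector * numSectors + toSector; // assumed row/column order
    const int byteIndex = bitIndex >> 3;   // 8 sector pairs per byte
    const int bitInByte = bitIndex & 7;
    return (rejectLump[byteIndex] & (1 << bitInByte)) != 0;
}
```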
Doom 64 also uses a hardware renderer with true color instead of a 256-color palette. Doom on PSX might be hardware accelerated, but it's more likely a software renderer; the Saturn has software rendering too.
There are always a few errata in videos like this, so thanks for the corrections! The joy of YouTube is that I can't amend them, so this video will now stand as a monument to my mistakes (:
1:00 24 Mb 8 Mb 12Mhz -> 24 MB 8 MB 12 MHz
Edit: Same in the Doom 3 requirements. Lowercase b is for bits, not bytes.
If you wanna go even more 🤓, it should probably be MiB for all the size values
@@kellymountain The PS1 version is hardware accelerated, but it renders somewhat like the PC version, with single-pixel-wide triangles for walls (they might use the quads, not sure), which means they avoid texture warping. Interestingly, it was discovered recently that the Saturn version is also hardware accelerated; IIRC it overuses whatever kind of quad clipping commands the Saturn VDP has and ends up being pretty janky and inefficient.
Doom is one of the iconic science fiction video games of all time!
It is interesting how rendering engines need to completely change over time with changes to the hardware. The depth pre-pass at 7:07 was present in a lot of rendering engines for a long time. But now it actually hurts performance on mobile graphics hardware, because it uses a tile-based architecture (it doesn't render the scene in one go but in multiple square segments). It's very slow to read the result of a previous render pass, so it is preferable to do "some" unnecessary computation. If you have an engine that runs on both desktop and mobile, you would need to enable/disable this pass depending on the hardware your game is running on.
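As a concrete illustration of that last point, here is a minimal, engine-agnostic C++ sketch (not taken from any real engine; all names are made up) of gating the depth pre-pass on the detected GPU architecture:

```cpp
// Assumed classification made by the engine at startup.
enum class GpuArchitecture { ImmediateMode, TileBased };

struct RendererConfig {
    bool useDepthPrePass = true;
};

RendererConfig chooseRendererConfig(GpuArchitecture arch)
{
    RendererConfig cfg;
    // Desktop-style immediate-mode GPUs: a depth-only pass first lets the
    // main pass reject occluded pixels via early-Z, saving shading work.
    // Tile-based mobile GPUs: hidden-surface work largely happens per tile
    // anyway, so the extra geometry pass (and reading its result back) can
    // cost more than the shading it saves.
    cfg.useDepthPrePass = (arch == GpuArchitecture::ImmediateMode);
    return cfg;
}
```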
They're called fragment shaders in Vulkan (and OpenGL). "Pixel shaders" is a distinctly DirectX terminology.
Are engines using one shader API or another? Is there any difficulty writing shaders compatible with both languages?
@@ethanwasme4307 Game engines (Unreal, Unity) typically use an intermediate DSL (domain-specific language) for writing their shaders, which could take the form of a shader language or a blueprint/graph. That is then compiled automatically into GLSL or HLSL or what have you, depending on your export settings, target platform and target graphics API.
Thanks! hope life treats you well :) @@Luna-qf2zc
If I remember correctly, that's because some anti-aliasing techniques involve more than one fragment per displayed pixel and "pixel shader" makes it too likely someone's intuition will assume the shader only gets run once per pixel.
Same thing & used interchangeably
Very solid explanation and I like your "normal" pacing. No informational bombardment or edgy cuts. You English people are like natural born educators, thank you!
A wonderful introduction to the functioning of modern engines. Thank you so much.
Doubt you'll read this comment, but your Worms 2 adventure has given me the inspiration and motivation to look at Rugby 08 again. I never knew about the programs you used in that video and was so thankful you shared them that I had to make this comment.
Good luck!
You came right out of the gate with a mistake :) Computers DID have graphics cards back in the days of Doom. They had to, as the CPUs didn't have onboard graphics. But all the graphics cards for consumers were 2D only. Quake 2 was the first of id's games to support 3D graphics accelerators, released before Doom 3. I had a dedicated 3D card in my PC at the time that worked in a pass-through mode from the 2D card.
Figured I'd start as I mean to go on (:
Thanks for the clarification
@@nathanbaggs I still enjoyed the video, thanks!
Quake 1 was the first of their games to support 3D accelerators (GLQuake)
@@jimb12312 That's true, yeah; they back-ported code from Quake 2 to create GLQuake as a test of their new engine. So technically it appeared in Quake first but was really written for Quake 2…
I really love this way of explaining. I had to go for 0.75 speed because I am not super familiar with graphics engine internals and still have to think a bit in parallel to listening, but I actually understood everything you explained!
Awesome video showcase, would love to see more in-depth information on Vulkan, as I truly believe this is the future graphics API considering it is a lot smoother/more stable and cross-platform!
0:09 That derpy face you make always makes me smile.
Carmack has not received enough credit for his genius and work ethic. He took 3D graphics in games to another level several times. I remember how impressed I was when I saw Doom the first time. Even though I came from a demo scene background. We could not stop playing it. Doom was probably the most impressed I ever was with another coders work.
The jump from Doom 1 to Doom 3 was a bit jarring, surely Quake and Rage are the missing pieces of the story here? Otherwise, insightful analysis!
Indeed, Quake 2 was the first iteration to use 3D GPUs. However, the 3D cards at the time had fixed effects and no shaders, so they couldn't produce the kind of scenes in Doom 3. It would be more accurate to say Doom 3 was the first game to use OpenGL with shaders...
Thanks for the feedback, I wanted to focus on the one franchise as I knew the video would be quite long.
@@Stabby666glquake was released in early 1997 to add 3d card support for quake. Quake 2 came out about 10 months later.
@@amcadam26 Yeah, as I said, GLQuake was an unofficial release (id never officially supported it) which ported the upcoming Quake 2 engine with OpenGL to the original Quake. Quake 2 was the official release.
@@riley-arr-g Ultima Underworld had large 3D environments with slopes and storeys in March 1992, almost 2 years before Doom! Not to mention innovative ImSim mechanics - the developers would go on to make System Shock, Thief & Deus Ex
I clearly remember the day I saw DOOM running and was amazed by the graphics
The depth masks look like those you can produce in consumer level graphics programs such as Bryce or Vue, which give you great control over post processing of the color render in your photo editor.
I bought a 486 66 MHz PC with 8 MB (I think) and a 20 MB HDD back in 1995, and got Doom 1 a little later and played that and Doom 2 for around 2 years straight. Good times.
This is so nerdy I love it as an amateur game developer. Please do the Halo series next, if possible.
Very interesting and structured approach to explaining this, good job!
The massive textures was a Id Tech special that got introduced in Rage with id tech 5!
Awesome video. Surprised you didn't delve into the Quake series since it bridges the gap between DOOM and DOOM 3.
The Doom 3 section reminded me of that VERY COOL error that one guy had that made all textures fail, but all other steps (like lighting and whatnot) work properly, so it was a no-texture game. I've wanted to play that errored Doom (2016) but no dice.
Fantastic breakdown of how the engines worked.
Any chance of being able to do something similar with Ultima Underworld?
What beautiful research you have done here. Thank you.
Hey Nathan, this is the second vid of yours I've watched and I'm not even halfway through. Subscribed! You offer a very detailed and easy-to-follow explanation of things; you need to start a Patreon. I paid good money to go on a course to learn similar stuff in 2006, with multimedia and Maya etc.
Here you are giving it away for free, and your stuff is highly advanced. It is deep back-end stuff, really helpful and useful.
I have an interest in this stuff but lately I'm not very motivated. On your own there is no match for competing with teams who are established. I am sure you could offer some established developers a lot of great advice. I would certainly hire you to refine some processes if I had an established company in the field.
The “look up” thing was very entertaining 😊
I like how Nathan's voice goes into Darth Vader mode every now and then
18:04 the one thing that has remained consistent in all the different Doom engines, is the pursuit of efficiency and optimization
17:34 Bro why would they do that 😭😂
All hail benevolent hyper-intelligent architect of the post-singularity simulation we all live in, John Carmack.
Mb is Megabit, MB is Megabyte. Annoyed me more than it should. Great vid though.
Using a normal color ramp, white (closer) has more detail than black. Linear may be more accurate, but having more detail up close may help improve it.
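If the inversion being discussed is the common reversed-Z trick (an assumption about what the video shows), here is a minimal Vulkan-flavoured C++ sketch of how it is typically set up with a floating-point depth buffer: near maps to 1.0 and far to 0.0, which lines up the float format's abundant precision near zero with the distant geometry that needs it most.

```cpp
#include <vulkan/vulkan.h>

// Depth state for reversed Z: "closer" now means a LARGER depth value,
// so the compare op flips from LESS to GREATER.
VkPipelineDepthStencilStateCreateInfo reversedZDepthState()
{
    VkPipelineDepthStencilStateCreateInfo ds{};
    ds.sType            = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
    ds.depthTestEnable  = VK_TRUE;
    ds.depthWriteEnable = VK_TRUE;
    ds.depthCompareOp   = VK_COMPARE_OP_GREATER;
    return ds;
}

// The depth attachment is cleared to 0.0 (the new "far") instead of 1.0;
// the projection matrix swaps the near/far mapping to match.
VkClearValue reversedZDepthClear()
{
    VkClearValue clear{};
    clear.depthStencil = {0.0f, 0};
    return clear;
}
```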
If I remember correctly, 4 MB RAM and 12 MHz is about sufficient to start the game, and that's it. It would run less than smoothly, to say the least.
When I started playing it I had a 486/66 with 8 MB, and it ran as shown in the video.
There's not really such a thing as a minimum MHz required to start a game. It did require a 386, though.
Yes, not much has changed over the years on this front; min requirements are still generally way under playable or just plain wrong. You needed a DX80 to play Doom properly; it was OK on the 66 but you had to make the screen a bit smaller to get decent frames, and it chugged a lot on maps like Barrels o' Fun.
@@bradallen8909 He means it's enough to get into the menu and hit start game; after that you can forget it on a 12 MHz unless you fancy playing it as if you're playing through a keyhole... 🤣
Your title mentions a developer, but you fail to mention much about him. Kinda misleading.
love my lightning bolts being >3k triangles, makes all the difference.
Thank you for your research and the making of this video
Very interesting stuff. Thanks for doing the video.
There's something funky going on with the recordings of your voice. It sounds glitchy and sped up at times.
What must it be like to be so brilliant and create something so creative and so iconic? It must be wonderful.
It's probably 1000x less wonderful than you imagine, because it requires constantly solving hard problems and never resting on your laurels, not just doing something for a little bit before kicking back and enjoying your "brilliance" and "mystique" with a cup of tea. I would be willing to bet John Carmack never does this, so the "wonderful" reality you imagine is probably actually just a lot of hard work, very few breaks, and none of the self-fart-smelling a lot of people assume everyone does.
Seriously though, if you aspire to do stuff like this, I would advise you to just admire the resourcefulness and determination of these people/teams, and not think "Oh how wonderful it must be! They are so lucky!".
If someone built the Great Wall of China by hand, would you say "Oh you're so lucky! How wonderful your life must be to make that grand creation!" to a man with bloody, blistered hands?
Also, I'm not trying to be a c**t or flex on you or anything; I just see this sentiment a lot and think it's a huge misunderstanding of how impressive things are made.
Also nice to buy a Ferrari with the proceeds
@@jacksonlevine9236 John Carmack worked 12 hours a day, 7 days a week. There's no magic to busting your ass to make something happen. People think because games are fun that everything is puppy dogs & ice cream. Results come from total focus & hard work. In short, fully concurred.
0:58 You say "24 megabytes [MB]", yet on the screen it's "24 megabits [Mb]", so 8 times less. Also, MHz, not Mhz :)
Amazing to think that these calculations are even possible in such a short space of time. Now imagine that some of the calculations are doubled in VR, as well as having all of the spatial awareness calculations for 6DoF, and at 120 Hz, meaning each frame has half the time to process and render compared to 60 fps. It's truly mind-blowing, and this isn't even a heavy game.
I've played the doom port in vr and it's done exceptionally well
It's crazy how inefficient that multi-pass forward shading is in Doom 3, unless they actually optimize it to combine multiple lights into one draw call. You could draw an object once and send some hardcoded max number of lights per draw call instead, which is usually 4. So if an object is hit by multiple lights, you do one draw call instead of multiple draw calls, one per light.
Heh, another random tidbit: the z-buffer isn't compressed down to only red. A single-channel image is typically displayed as red because it's interpreted as RGB but you're missing the GB portion, so you only have red information. Shades of gray are technically the presence of all channels, RGB with the same value. It might have compressed a 32-bit or 16-bit z-buffer down to 8 bits so it could fit into one channel of a g-buffer, but depth is actually one of the things you want to avoid compressing and keep at as high a precision as possible, or you get issues like z-fighting and other ugly artifacts. You might also use the z-buffer to figure out world positions per pixel when doing deferred shading or other things, and if it's too inaccurate, the positions aren't derived as accurately.
The depth buffer is downscaled to a VK_FORMAT_R16_SFLOAT target, so as far as I can tell that is a single-channel image.
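To make the "batch the lights" idea concrete, here is a generic C++ sketch (not Doom 3's actual renderer; the helper functions are stand-in stubs) of shading up to a fixed number of lights per draw instead of re-drawing the object once per light:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Light { float position[3]; float colour[3]; float radius; };
struct Mesh  { /* vertex/index buffers, material, ... */ };

constexpr std::size_t kMaxLightsPerDraw = 4;  // assumed per-draw budget

// Placeholder stubs standing in for the real graphics API calls.
void uploadLightUniforms(const Light* /*lights*/, std::size_t /*count*/) {}
void issueDrawCall(const Mesh& /*mesh*/) {}

void drawMeshForwardLit(const Mesh& mesh, const std::vector<Light>& affectingLights)
{
    // One draw per batch of lights instead of one draw per light.
    for (std::size_t i = 0; i < affectingLights.size(); i += kMaxLightsPerDraw) {
        const std::size_t batch =
            std::min(kMaxLightsPerDraw, affectingLights.size() - i);
        uploadLightUniforms(&affectingLights[i], batch);
        issueDrawCall(mesh);  // later batches would be blended additively
    }
}
```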
00:35 In 1993 computers had dedicated graphics cards. That was the only kind of graphics card back then. What computers didn't have was a 3D accelerator or GPU.
13:50 into the video, and I have a maybe stupid question. Is the computer (CPU, GPU...) doing this every frame of the game?
When are these calculations being made? Fewer FPS, fewer calculations; more frames per second, more calculations?
yup
apitrace can trim to a set of frames (though there's now a new command called gltrim that makes it easier)
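For anyone wondering the same thing, a minimal C++ sketch of a typical real-time loop (the function names are placeholders): all of the rendering work shown in the video happens again for every rendered frame, so fewer frames per second really does mean fewer of these passes per second.

```cpp
#include <chrono>

// Placeholder stubs standing in for the real game systems.
void updateGame(double /*dtSeconds*/) { /* game logic step */ }
void renderFrame()                    { /* culling, depth pre-pass, lighting, post... */ }

int main()
{
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();

    // Real games loop until quit; a fixed count keeps the sketch finite.
    for (int frame = 0; frame < 1000; ++frame) {
        const auto now = clock::now();
        const double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;

        updateGame(dt);   // classic Doom fixes its logic rate at 35 tics/s
        renderFrame();    // the whole rendering pipeline runs again every frame
    }
}
```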
What one can point out is that Doom Eternal doesn't do all that work just 60 times per second.
It does it waaay past 100, sometimes even 200 times per second!
This engine is crazy efficient and manages to push insane framerates for the graphical fidelity.
The fact that it does all this work and still manages to pump out high FPS on older GPUs is really impressive.
Umm... computers in the early 90's absolutely DID have dedicated graphics cards! They just didn't have 3D accelerators in them. That was the time of VGA and SVGA. Doom is a software renderer but it still needs a graphics card.
Came here to say the same thing; that claim made me jump. Integrated graphics didn't even exist at the time, so a graphics card was a basic necessity. Most did not accelerate anything, although the most expensive ones could do hardware blitting or color filling, but that was not used in games due to the lack of a unified API - VGA was the lowest common denominator with a reasonable resolution and color depth.
Accelerated 3D graphics was unknown in the consumer space; by the time Doom was released it was the privilege of the professional market, for tens of thousands of dollars. And those powerful workstations could not run Doom.
I mentioned this in another comment, I definitely meant 3D accelerated. Thanks for the clarification!
@@gnurou It wasn't a necessity, onboard graphics existed.
@@rockapartie can you share any example of a motherboard from 1992 with integrated graphics? I haven't met any.
Amazing video! Thank you for sharing your knowledge 🙌
Great video! What monitor/s are you using?
I remember when it was released; one day, at my first computer job, one of the programmers brought in the shareware version - what a seismic shock it was.
this is very clear even though it's full of information, very good video...
Did you mix up dedicated graphics cards and GPUs? Even my 286 had a dedicated Oak graphics card. We didn't have APUs back then. Maybe you mean something specific that I'm not understanding; can you clarify?
Thanks for the video!
Yes I meant GPUs! A few people have picked up on that in the comments (:
Amazing Video with great detail
Why would inverting the colour of the depth buffer allow greater accuracy?
Overall, this was pretty interesting. The Doom engines have been pretty ground breaking on a technology level.
Do you know if the changes, features and discoveries made during the writing of one of those engines are carried over to the following iteration of the engine?
Let's pick Rage, for instance: does its engine contain the code for Doom 3's per-pixel lightmapping system, or make use of it?
Probably, I imagine that the engine developers use elements from the previous engine as a starting base even if they're "rewriting" the engine. My headcanon is that there's some Quake code still present in modern IdTech, Source 2 and other engines originally based on the Quake engine.
Amazing video, I enjoyed it thoroughly, thank you.
In a sense, the blueprint for DOOM could be the blueprint for GenAI game rendering.
Brilliant work, subscribed
lol, of course there were dedicated graphics cards. What wasn't a thing yet was consumer 3D accelerators, which nowadays are integrated into graphics cards.
Yeah I addressed this in another comment, my mistake (:
wow...awesome research & video. thanks.
Amazing video, thank you! 👍
I remember when Doom first came out. Many a game session with friends ended up with playing either Doom or Dune 2.
And the same conclusion applied to me: I was terrible at all FPS games.
id Software has always been so smooth and slick and highly optimized from the get-go. I would say only Doom 3 seemed to go a bit slower than the rest, but the technical genius of the people there is so impressive. I would say only Rockstar or Bethesda come close to creating such optimized games.
AO let’s go, now the song makes sense!
4:28 they are all OpenGL backends just with different vendor extensions.
Doom walked so Quake could run, it used tons of crazy optimizations and rendering tricks
Instant sub, this analysis was brilliant
I wonder how the destructible demons tech works. If it is done with tessellation shaders like in Nvidia’s triangles vs aliens demo.
There are some details from the developers here: advances.realtimerendering.com/s2020/RenderingDoomEternal.pdf (starting at slide 54)
PCs have always had graphics cards; integrated graphics is more of a modern thing than a retro thing. It was graphics cards with 3D acceleration that weren't a thing when Doom came out. It was probably Doom and other games like it that created the need and demand for such a product.
Why no quake exploration? 😭😭
Super interesting - and fascinating.
great video!
Could you demonstrate debugging a Windows program on Linux with Wine? Following Wine logs only gets you so far and requires true debuggers, but that can be hard at times with anti-cheat/detached processes, so I would love it if you could give it a shot.
Just watched your Worms vid.
True nostalgia! :)
renderdoc is the greatest graphics api debugging tool
Audio is rough, out of sync or something super weird
I think it's some sort of noise cancellation, very distracting
Base requirements are wrong - Doom required a 386 at 33 MHz. It could run on slower hardware (but it had to be 32-bit) with strongly degraded graphics.
Quality videos 👍👍 Would like to see more reverse engineering and game hacking.
Quake was before Doom 3. You could look up in both Quake 1 and 2 already.
Can you make a video showing the differences between OpenGL, Vulkan and DirectX? Especially DirectX 12 Ultimate - would that be the greatest, most powerful of all?
Doom 3 was revolutionary; the graphics were mind-blowing.
Half-Life 2 was actually more impressive. The main disappointment in Doom 3's rendering techniques is the flashlight. It's essentially a 2D sprite blended onto a 3D scene in screen space. That is common for 2D lighting but embarrassing for a 3D game that has a focus on lighting.
Sir... computers (back then) had dedicated graphics cards. Most of us were using Trident, S3 or Cirrus chipsets with VESA drivers. 😮
Great video, but it definitely feels like talking about Doom engine technology while skipping the entire Quake franchise that happened in the middle skips a lot of development tied to the id Tech engines between 1990s Doom and Doom 3 (Quake 1 to 3 were released during that time frame and are massive visual upgrades over the original Doom; too much time passed between the original Doom games and Doom 3). Perhaps it would make the video too long for its purpose, but there are so many massive changes that occurred during that time.
The dayz of softram and mitigations, miss those dayz
Fantastic job
I love when you showed each separate draw call for a single frame in render doc, it’s so sick
This stuff is great, but I'm a bit further back in the dev process, where you "hook the DLL" you intend to use, be it OpenGL or Vulkan. Too bad nobody in the UT99 scene is "sharing that method".