Patrons can now vote for the next video! Thank you for your support.
Patreon: www.patreon.com/simondevyt
Courses: simondev.io
Didn't Quake implement span-based drawing also?
Running vis on your maps, while faster than light, still took FOREVER on a 75 MHz machine. A couple of years ago I made a Quake map for old times' sake: ran a modern node builder with updated vis and a GL-enabled light tool, threw together just a simple map, some big open spaces, some lights of varying strength, some water, y'know, a basic mess-around map, and I was disgusted with how absurdly fast it compiled. Back then I remember working on a map and clicking the test button in QuArK without hitting the "skip light" compile option (which would have let me get into the map quickly for testing), so I'd just go turn on the TV, pop a bag of popcorn, and watch a show. I'd come back and vis had just ended and light processing had just started between shows, so you'd decide whether to watch another show and take a break, or stay at the computer and watch it slowly compile the lighting.
I had the same experience recently with CS 1.6 maps. I remember one of my maps taking 2 hours to compile; my modern machine did it in less than TEN SECONDS. Even though it doesn't feel like it, modern computers are insanely fast compared to what we had at the turn of the millennium.
I bought Abrash's "Zen of Graphics Programming" and spent an intensive couple of weeks of a summer in the 90's learning graphics programming on the PC from it. Starting from the bare bottom with selecting video modes, going through lines, polys, Phong shading, and texture mapping until I got a small world I could move around in. I only ate noodles or pasta with ketchup, barely showered, and I was in too much overdrive to be able to switch off, so my sleep schedule was just half an hour here, half an hour there. I still remember it as one of the best episodes of my life.
Awesome story!
Is there such a book you'd recommend for a more casual audience? I'd like to grind Vulkan one day
38yo and still having those episodes in my life (although with a bit more sleep)
I bought The Black Book with my first pay from my first job out of high school. Re-read it so many times trying to figure out these concepts. The one that really stuck with me was the span-based rasteriser that is the next step in the renderer. Thanks for (another) brilliant video.
I'm mad I'm just finding out about this book. Back in the 90s I couldn't find anything related to graphics programming except a Videogame Programming for Dummies book in C and, later, an OpenGL book.
@@s81n Flights of Fantasy and Gardens of Imagination by Christopher Lampton were two that I had in high school that I loved.
The most important aspect of this solution is that it moves a huge chunk of the visibility calculations from runtime to compile time. By working out _in advance_ which parts of the space _could theoretically_ be visible from which other parts, they could then, at runtime, cull everything else at very little cost (a lookup, instead of a calculation), and only had to "manually" check a much smaller space. And by checking the known portals first, they could potentially shrink this set even further, before finally falling back to a more time-consuming, detailed process for rendering the remainder.
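To make the runtime cost concrete, here is a minimal sketch of such a lookup, assuming a decompressed per-leaf visibility bit vector in the style of Quake's PVS (the function name and layout here are illustrative, not the engine's actual code):

```c
#include <stdint.h>

/* Sketch of a PVS query: the expensive work happened offline, so the
 * runtime test is a single bit lookup. 'pvs' is the decompressed
 * visibility bit vector for the leaf the camera is currently in. */
static inline int leaf_potentially_visible(const uint8_t *pvs, int leafnum)
{
    return pvs[leafnum >> 3] & (1 << (leafnum & 7));
}
```

Everything whose leaf fails this test is skipped outright; only the survivors go on to frustum tests and actual drawing.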
Source engines still do BSP+VIS+RAD, mostly for better backwards compatibility.
GoldSrc-BSP (1998) is derived from quake1-bsp.
source-BSP (2004) is closer to quake2-bsp.
Source 2 is a mesh based engine
@@jess648 sorse 2 is a bit too new :P
@@jess648 They aren't talking about "Source 2", they are talking about "Source" (Half-Life 2, Counter-Strike: Source, Left 4 Dead, Team Fortress 2, Garry's Mod, Portal, Vampire: The Masquerade - Bloodlines, etc...) and "GoldSrc" (Half-Life 1, Blue Shift, Opposing Force, Team Fortress Classic, Counter-Strike: Condition Zero, etc...)
I regret not developing that 3D graphics engine I made in 1996. It ran texture mapped, shaded polygons, and allowed fullscreen graphics at 640x480 at 16fps minimum (with a scene / screen filled with texture mapped triangles). The best part was that this was on a 486 DX4-100, and it did allow rotation in all 3 axes! Those were the days!
For that 3-axis rotation, did you suffer gimbal lock or figure out quaternions? :P
It took me a long, long, long time to learn that one. I just dealt with it until I heard quaternions were better for encoding into network data.
Like Quake? Or just an empty square room?
Very neat! I was originally building out 3d stuff in software, but honestly abandoned ship for opengl once it was available to me.
@matthewtymaja3760 do you have the source code of the engine?
"I regret not developing that 3D graphics engine I made in 1996" developing a 3D engine in those years man ... crazy, with basically no information, no internet etc. That is tough. I developed some kind of "RPG Maker" back at 97/98 when I was 13. Nothing special actually, but seeing how RPG Maker is now a product and Game Maker is fueling a lot of indie games I even regret that. Funny sometimes how he do great things and simply stop where they could have changed your future. We do underestimate the small things too much :D
That might be the best explanation of BSPs I've ever seen. And the final culling technique was a great example of "it doesn't need to be great, it only has to be 'good enough'". When you're talking about doing 10x extra work, even a scrappy "incomplete" solution gives you huge wins as long as it's not giving false positives, so if it's simple and fast it may remove the need for anything better.
And the principle that "anything you can cache you can probably precache" is always a good one to try on for size. Doesn't always apply, but when it does the results are like you're doing magic.
I got a commission to make a few Counter-Strike 1.6 maps a couple of years ago. CS is based on GoldSrc, which is like a modification of the Quake engine; you even use the same toolchain to make maps for either one.
Compiling a map involves four steps: consolidating the geometry (and some other stuff), generating the BSP, calculating visibility, and then radiosity-based lighting. They all run on the CPU, and for large / messy maps they still take quite a few minutes to run. It really is a lot of computation being done "offline" when you think about it.
I don't understand why, with early 3D graphics hardware, you couldn't really render front to back, placing a bounding-volume check before a node or an expensive model. For one, a hierarchical z-buffer seems to be difficult, and async patterns where you could put meshes and triggers behind occlusion tests weren't supported. I mean, you'd use the CPU to upload the next, finer geometry while the GPU processes the coarse nodes in front.
@@ArneChristianRosenfeldt Quake predates 3D graphics cards; there was no GPU to offload anything to.
@@Ylyrra Quake came out in 1996. The Nvidia NV1 came out in 1994, as did the PSX. Some games targeted the 3DO in 1993. The hardware was obviously geared towards 3D, but more in a stage-like way. Somehow dungeons only existed on home computers and PCs; the console industry focused on 2D games with polygon characters or so, or racing games with a camera very similar to pseudo-3D on the Genesis. Before the internet there were trade shows. Leaders of software companies actually went there and saw the 3DO, PSX and Saturn even before their Japan release. Or you could just look at SGI to see the future: Donkey Kong Country was modelled on SGI.
@@ArneChristianRosenfeldt Yes, 3D cards "existed". The vast majority of consumers didn't have them. Every game still had to ship with a software renderer EVEN if it also had a hardware one. I knew ONE person with a 3dfx card, ever; meanwhile the other 20-30 PC gamers I knew relied on software rendering. The technology effectively didn't exist when it came to developing a game you wanted to sell to actual customers.
And another factor is that development time leads release time.
Carmack originally saw Ultima Underworld: The Stygian Abyss at a trade show, which inspired him to create Wolfenstein. Stygian Abyss uses an octree to render on a 386.
just realised this is the only channel I have notifications turned on for - that says something
Your animations help a lot understanding all these concepts, they are so clear. Perfect!
I worked on Beam trees in university, but did not know that they were called that.
I needed to calculate Constructive Solid Geometry defined by surface-equations that were combined in logical composition (AND/OR/NOT). The equations would vary. I would have loved to have polygons instead, but the project manager was stuck on the equations.
I started with the camera beam, then sub-dividing it into 4 each time. The beams turned the logical composition into 1 dimension, with some overlapping. So with each beam, I could perform calculations that removed a part of the equations. Until a few were left, and I could perform ray-trace calculations. The same was done for calculating shadows.
In a later job I made one that generated polygons from slightly similar structures. And that worked much faster.
Now these things are already integrated in Blender and such.
I thought that a beam tree would be great for the Atari Jaguar, but it feels so weird. Instead of normalized device coordinates to clip against, there is the beam tree; it feels more like raytracing. Only in the final step would I fill in the texture using linear interpolation. The tree splits so many polygons, I don't even dare to start. All the code fills the cache, and access to the tree in DRAM is slow.
Matt's Ramblings has done an excellent deep dive on everything Quake rendering.
Another great video, Simon! This subject is very close to my heart, as I have been specializing in visibility determination pretty much since I started my gamedev career in 2004, and indeed by learning from games like Quake in my early days of tinkering with stuff.
I've worked on most of the Tomb Raider games from Crystal Dynamics since that time (mostly the Tomb Raider series; Marvel's Avengers was my last game before we got acquired by PlayStation Studios), including a couple of Eidos Montreal ones (Deus Ex: HR). Of course, this was all GPU rendering, so we were mostly concerned with being able to quickly cull large chunks of geometry. They wanted a cell-and-portal system where cells and portals would be hand-authored into the world geometry, and I singlehandedly implemented most of it (which is pretty wild for a junior, if I think back on it).
The first iteration was pretty basic, where cells could be concave polytopes and portals convex polygons. You generate a BSP tree mainly for point location, so you can assign in-game objects to cells, and then use the portals to cut up the view frustum as you traverse them, testing all objects and lights against the cut-up frustum. Then we did a pass for each visible shadowcasting light to figure out which objects to render to the light's shadowmap. This first shipped in the PC port of Tomb Raider: Legend, as it featured the new per-pixel lighting engine that wasn't in any of the console versions (we're talking original Xbox and PS2 here).
During development of Deus Ex: Human Revolution we played around with the ability to have occluders. To implement this, I used the beam tree approach as well, calculating the entire visible volume using the portals and occluders, for the camera as well as for the dynamic shadowcasting lights. The beam tree is essentially a BSP tree for visible space, and to test for visibility you simply push the bounding volume of whatever object you want to test through the tree and see if it hits any visible leaves. This system was used all the way up to Marvel's Avengers (2020). Of course, lots of optimizations and minor features were added throughout the years, but the fundamental idea stayed the same.
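To make that "push the bounding volume through the tree" step concrete, here is a minimal sketch under assumed types (BeamNode and Plane are invented for illustration, not the shipped code): classify a bounding sphere against each splitting plane and descend until a visible leaf is reached.

```c
#include <stddef.h>

typedef struct { float nx, ny, nz, d; } Plane;   /* plane: nx*x + ny*y + nz*z = d */
typedef struct BeamNode {
    Plane split;
    struct BeamNode *front, *back;   /* NULL = nothing visible on that side */
    int is_leaf;                     /* leaves represent visible volume */
} BeamNode;

/* Returns nonzero if the sphere touches any visible leaf of the beam tree. */
int sphere_visible(const BeamNode *n, float cx, float cy, float cz, float r)
{
    if (n == NULL) return 0;          /* empty side: not visible */
    if (n->is_leaf) return 1;         /* reached visible space */
    float d = n->split.nx*cx + n->split.ny*cy + n->split.nz*cz - n->split.d;
    if (d >  r) return sphere_visible(n->front, cx, cy, cz, r);
    if (d < -r) return sphere_visible(n->back,  cx, cy, cz, r);
    /* Spanning the plane: may touch visible space on either side. */
    return sphere_visible(n->front, cx, cy, cz, r)
        || sphere_visible(n->back,  cx, cy, cz, r);
}
```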
I really appreciate this video because most of the explanations I've seen only focus on the solution that worked. Showing all the failures is a lot more honest about how actual software development gets done. It also humanizes the devs; Carmack wasn't so much a genius as he was tenacious. That tenacity, and a willingness to abandon an approach to try something else, is what makes a good dev. The "flash of genius" that House has every episode is basically a myth.
I've been studying navmeshes, polygonal decomp, kd-trees, and bsp-trees lately. I hit the deep end of graph theory and just backed up recently. This video was relaxing.
That was probably the most intuitive, well-written explanation of what BSP trees are and how they function. The visuals were an awesome addition 💯.
Thanks Simon! Appreciate ya.
How does collision work? Apparently collision already needed polygons to slide along back in Sonic the Hedgehog. So do we intersect the swept volume with the BSP and then find contact points?
I've always had just a basic grasp of BSP trees but the deep intuition just wasn't there....until now! I think this proves (to me at least) that I'm a visual learner and I'll be watching this one at least a few more times :)
All of your vids and your courses are very well done; the visuals, with explanations and practical examples in layman's / simpler terms, are awesome, a great help.
I'm mainly a CAD dev and I've already used the info from the spatial hash grid video in one project to great effect, I think BSP's will also help now I understand them better.
Thank you!
The tricks they used in this era are truly fascinating.
At some point I realized that with a basic renderer setup you can't just render "only what you see on camera": if you tell the engine to draw a triangle that was behind the camera, it would still go through the entire process of calculating its position in world space, even though the final result would never be drawn on the screen. When I realized that, I gained a profound respect for the effort taken to cull non-visible geometry.
I went back and looked at games like Mario 64 and tried to visualize the game trying to draw every polygon in the room in every frame, even if they were not all visible. If your engine worked that way, things like doors were powerful just in selecting what gets drawn on the screen because you could block off entire rooms.
A lot of dynamic culling is done per object: if all 8 corners of an object's bounding box fall outside the camera frustum (all behind one of its planes), then there is no need to test any of its polygons. You also don't need to test every object; if a building's bounding box is not visible, then any object (fully) inside that building is also not visible.
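For illustration, a sketch of that conservative box-vs-frustum test in C (types invented here): a box is culled only when all eight of its corners are behind a single frustum plane, which never rejects anything visible but may keep a few invisible boxes.

```c
typedef struct { float nx, ny, nz, d; } Plane;   /* inside when nx*x+ny*y+nz*z+d >= 0 */
typedef struct { float mn[3], mx[3]; } AABB;

/* Conservative test: returns 1 only when the box is definitely outside. */
int aabb_outside_frustum(const AABB *b, const Plane fr[6])
{
    for (int p = 0; p < 6; ++p) {
        int outside = 0;
        for (int c = 0; c < 8; ++c) {            /* the 8 box corners */
            float x = (c & 1) ? b->mx[0] : b->mn[0];
            float y = (c & 2) ? b->mx[1] : b->mn[1];
            float z = (c & 4) ? b->mx[2] : b->mn[2];
            if (fr[p].nx*x + fr[p].ny*y + fr[p].nz*z + fr[p].d < 0.0f)
                ++outside;
        }
        if (outside == 8) return 1;  /* every corner behind one plane: cull */
    }
    return 0;  /* possibly visible; false positives are acceptable */
}
```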
Nobody has explained BSPs to me in a way I understand until now
You and Acerola should do a collaboration
That subdividing ray-casting approach was used in a number of demoscene productions that did real-time ray-tracing (Heaven7 by Exceed, from 2000 being a good one). I once found a little interactive program that could let you limit it to just the horizontal and vertical edges of the subdivisions, so you could get an idea of what it was actually doing, too.
Wow thanks for that info- Gotta check out those demos!
Great video!! But as far as I recall, the engine used in the game Descent, released 1 or 2 years prior, also employed a portal system to determine visibility across different areas. I'm not sure if it was implemented in the same way, but the concept of portals wasn't a 100% innovation of Quake's.
Quake was also instrumental for me and becoming a programmer. It was also instrumental in my disdain for Assembly Language of any sort LOL
@@Simrealism An assembler is the software that assembles your program, which is written in assembly language; SensSword is correct. Most people nowadays misname the language as 'assembler' for some weird reason. It is like saying you are programming in compiler.
I saw Binary Space Partitioning get glossed over in a Digressing and Sidequesting video (anybody remember that?) about Doom, and now I understand it. Thank you so much.
Most videos that try to explain Doom's BSP trees don't really understand them themselves, so they usually gloss over it.
It’s incredible how these early technical challenges laid the groundwork for the complex graphical systems we enjoy in modern games. 🎮
Oh, this brings me back to the time when we had a lot of math books and programming language manuals and had to go from there ... to be young again and maybe try something different :-) I had a terrain generator via Bezier surfaces and a height map as a small school project back then ... took me a few months to figure out how to get it working on a 386SX so you could actually rotate the map on the screen almost fluidly :-) Anyway, instant subscribe ... should have discovered this channel long ago ...
Brings back some memories from the 90s 🙂 So many tricks you could do with graphics... back then, graphics were drawn directly to memory. No DirectX, no OpenGL, just your own graphics functions written in assembler. But around 2000 those days were long gone.
The raycast approach is actually the solution to a problem I've been trying to work out
This was a great way to pay tribute to the games that started a whole genre. Your explanations of concepts difficult to me were super insightful. But, your graphics were phenomenal! Thank you so much for the research and passion that went into the making of this video!
The PVS data is somewhat similar in concept to Doom's "REJECT table", in that it pre-calculates which parts of a map are possibly visible from each other part. Of course, the REJECT table isn't used for rendering and is based on sectors rather than BSP nodes, but it's similar enough that I can imagine it being an inspiration...
I believe Carmack commented that only later in hindsight he noticed that PVS was similar to REJECT.
Great explanation of BSPs! I used to make Source maps so I had a rough idea of how some of this worked but your visualisations helped fill in the blanks.
Oh man, I'm 6 minutes in now, and for me (born in late '99) this processing comparison between your phone and that Pentium 75 was already worth watching the video.
Wait 'til you learn that we landed on the moon with less computational power than your calculator. 😅
@@redslate😂
@@redslate What's a "calculator"? 🤔
A phone app? 😁
@@redslate A calculator can do a lot if running continually and using a stored program, though. Giving you the answer to a square root or a power function in a fraction of a second is no mean feat, even if it would seem achingly slow code-wise versus even an 80s computer. Most modern processors can be traced back to single-chip implementations of the architecture of a desk calculator, data terminal, or aircraft flight-control system, after all; in some cases almost exactly that (the 4004 series and a couple of other early Intels in particular). And whilst that first processor was weak as hell, it was still considered useful for *something*.
I really enjoyed this video. I had to watch it a few times to get my head around "walking the BSPs", but your animations and explanations were on point. It is really amazing to understand what they had to do back then to render 3D environments with such limited hardware power.
I always felt that the BSP and PVS were overkill. Much simpler to just define zones and linking portals, and clip the portals to see if the linked zone was visible, shrinking the clipping frustum for each portal as you recursively processed zones.
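A rough sketch of that recursion, with invented names; real engines clip portal polygons rather than screen-space rectangles, but the shape of the algorithm is the same:

```c
typedef struct { float l, r, b, t; } Rect;   /* screen-space bounds */
typedef struct Zone Zone;
typedef struct Portal { Rect bounds; Zone *target; struct Portal *next; } Portal;
struct Zone { Portal *portals; /* ...zone geometry... */ };

static int rect_intersect(Rect *out, Rect a, Rect b)
{
    out->l = a.l > b.l ? a.l : b.l;   out->r = a.r < b.r ? a.r : b.r;
    out->b = a.b > b.b ? a.b : b.b;   out->t = a.t < b.t ? a.t : b.t;
    return out->l < out->r && out->b < out->t;
}

void render_zone(const Zone *z, Rect clip, int depth)
{
    if (depth > 16) return;        /* real engines also mark visited zones */
    /* ...draw z's geometry clipped to 'clip'... */
    for (const Portal *p = z->portals; p; p = p->next) {
        Rect shrunk;
        /* Recurse only if the portal shows through the current clip,
         * and shrink the clip to the overlap as we go deeper. */
        if (rect_intersect(&shrunk, clip, p->bounds))
            render_zone(p->target, shrunk, depth + 1);
    }
}
```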
We share a lot, and probably our age is one of them.
This BSP video brings back a lot of memories of a time I spent editing maps from others to make the BSP work better. Many (hobby) mappers did not have an understanding of how the PVS could (/should) be used efficiently. @19:31 is a moment in the video that they needed to see before starting to build their worlds... and their magenta space 4 was huge :)
What's crazy is, Half-Life 2 still used BSP for its maps, going back to the Quake 1 engine as well as the Quake 2 map format. (So BSPs were used through at least 2007, with HL2: Episode Two.) And Unreal Engine 4 supports BSPs for prototyping maps.
All of Source 1 uses BSP, so Portal 2 and CSGO used it as late as 2011 and 2012.
@@SirYodaJedi Explains why Portal's maps seem so Quakelike in feel, maybe. (And somewhat P2 though it hides it better... not sure about CSGo)
@@tahrey Portal 2 and CSGO make heavy use of props to fill in the details between the simple brushwork. The CSGO version of de_ancient is almost completely props, but it still has a brushwork shell because visibility testing and bounced lighting require it.
The approach of Descent (the first fully-3D FPS, a year and a half before Quake) was clever and faster because it was based on cubes. The Quake engine was so complex in comparison. And Carmack also did the network part, where they could have outsourced to experienced teams...
Descent was a really fun and ground-breaking game, but the graphics didn't seem as crisp or solid as Quake's. Also as I recall, the bigger areas connected between the tunnels were mostly just open rooms without complex geometry. Still it preceded Quake as did Bethesda's Terminator: Future Shock (using Bethesda's Xngine) which was fully-3D back in 1995.
@@curcumin417 Terminator was great but the geometry was very light (and Descent was out 6 months before!) / Descent's geometry is mostly corridors (and most of all, based on cubes; that's why it was so smooth on my P90) => also, as far as I remember, Terminator was sloooooooow as hell to load!
@@edouardberge Right- I remember that! Also (I'm not a programmer or engineer) but I remember some rendering artifacts in the early version of Xngine as well. Still very impressive for its time
I'm always eager for new Simon videos. The amount of information combined with the quality is just outstanding.
Very misleading title and thumbnail, but clear explanations!
John Carmack is a total genius. The comparison with House is probably apt 😅
Workstyle
"Working more gets more done."
- John Carmack[30]
Carmack has maintained a sixty-hour work week, working a 10-hour day, six days a week, throughout his career.[30] He has spoken publicly about the importance of long hours of uninterrupted focus in his work. Not only does high intensity allow him to make progress more quickly, but long hours are also critical to maintaining a focused mindset over time. Despite working such a demanding schedule, he has never experienced burnout.
source: en.wikipedia.org/wiki/John_Carmack
What still blows my mind is that they used this for collision detection and response too
That is a stellar explanation of the BSP, thank you! Back in the early 2000s I used the Genesis3D engine for very rudimentary interior archviz.
Thanks to your explanations, now I understand why BSP-based engines were meant to be indoor engines and why they performed terribly when used to model outdoor environments. The BSP tree was simply unable to efficiently rule out big chunks of the scene while crawling down the tree.
When you're not inside a building, there are no walls blocking vision to decimate the space, and thus no geometry subject to culling. Kind of late with my discovery, but better late than never! :)
I once watched John Carmack give a talk about Quake’s BSP tree - it was many years ago. I think he must have been at GDC or something. I have never found it since. I understood very little of it but it was mesmerising.
I implemented the poly fill routine from that Black Book, which was 10x faster than Windows' polyfill. It allowed me to do amazing things on the CPU for rendering terrain for aviation navigation software back in the 90s. I still use parts of that fill routine for calculating vector interaction with gridded data. Such a great book.
I remember purchasing my first graphics card - a PCI one, as my motherboard did not have an AGP slot. The difference in graphics even at the same resolution was amazing. 320x240 OpenGL looked way better than software graphics. 640x480 was just insanely sharp!
My first card was a Voodoo Banshee, those were the days!
Game slow ? Just buy more ram 🙂
That's the modern solution.
Dedotated wam
Or download more, ideally!
1.2GB OF RAM???
Yeah, in 2024 RAM is available in every corner drugstore.
Buy? Just download more ram from the internet
I found a paper on beam search trees called "Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges".
It's from October 2023.
Now imagine how much performance we could have if programmers came up with cool solutions like this nowadays.
love the video, great explanations!
tiny remark, I think at 15:05 you say 8x8 grid, but I don't think they mean 8 by 8 cells filling the screen (like in your sample image), but rather cells that are 8x8 pixels large
Yeah, otherwise you'd lose a LOT of objects. If it's 8x8 pixels, that sets an upper limit on how large a poly can be whilst still being losable, and even then it'd flicker more as you moved around because of the changing positions of a great many borders, so not truly lost. But it still saves you a hell of a lot of time, as, best-case, you may be casting only one ray instead of 64.
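As a sketch of the idea (my own reconstruction; trace and fill_block are hypothetical helpers): trace the corners of each block and only subdivide where the corners disagree.

```c
/* Assumed helpers (hypothetical): trace(x, y) casts one ray through pixel
 * (x, y) and returns the ID of the surface it hits; fill_block() paints a
 * block by interpolation, without casting any more rays. */
extern int  trace(int x, int y);
extern void fill_block(int x, int y, int size, int surface_id);

/* Trace the four corners of a block; if they agree, fill the interior
 * cheaply, otherwise split into four and recurse down to single pixels.
 * (A real version would cache corner rays shared between blocks.) */
void shade_block(int x, int y, int size)
{
    int c0 = trace(x, y),        c1 = trace(x + size, y);
    int c2 = trace(x, y + size), c3 = trace(x + size, y + size);
    if (size == 1 || (c0 == c1 && c0 == c2 && c0 == c3)) {
        fill_block(x, y, size, c0);  /* best case: ~4 rays cover 64 pixels */
    } else {
        int h = size / 2;
        shade_block(x,     y,     h);  shade_block(x + h, y,     h);
        shade_block(x,     y + h, h);  shade_block(x + h, y + h, h);
    }
}
```

The dropouts mentioned elsewhere in the thread come from exactly this: a small polygon can fall entirely between corner rays and never be detected.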
That Lobotomy Software got a version of this game running on the Saturn is wild.
Hey Simon, just writing to say I love your content. Truly great.
(from a person known to be hard to impress)
Always a good day when Simon posts! Looking forward to watching this.
High Quality as always. :D
@SimonDev, my goodness... @20:27 I remember watching a YouTube video about this fast inverse square root function as a 'faster' way of doing square roots, and at the time I dove deep down the rabbit hole, so to speak, benchmarking it against other obscure square root methods such as Chris Lomont's version, Jim Blinn's version, other IEEE floating-point-hack tutorials, and good ol' std::sqrt, and found that the intrinsic method was faster but less accurate.
I too couldn't find where on Earth its origins came from, but I recall reading that someone from id Software came up with it based on a paper from the 80s, I think! And even though I managed to draw more polygon shapes in 2D using SFML and SDL2, I eventually jumped ship to 3D graphics using OpenGL and now simulate all the particles I want using compute shaders. I love it! 👍
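For reference, this is the routine as it appears in the released Quake III Arena source (its infamous original comments omitted, descriptive ones added); later investigations traced the magic constant back before id, often crediting Greg Walsh at Ardent Computer:

```c
/* Fast inverse square root, as published in the Quake III Arena source. */
float Q_rsqrt(float number)
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = * (long *) &y;             /* reinterpret the float's bits as an integer */
    i  = 0x5f3759df - (i >> 1);     /* magic constant minus shift: initial guess */
    y  = * (float *) &i;            /* back to float */
    y  = y * (threehalfs - (x2 * y * y));   /* one Newton-Raphson refinement */
    return y;
}
```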
Studying failures can also be important because sometimes (sorry, I can't remember anything off the top of my head, but it has happened), things that were a bad idea in the past, now can be done efficiently due to evolution of hardware, or because improvements in algorithms that were used as elements of the previously failed techniques.
Just amazing! Whenever there's a new video from you i'm like "oh boy, it's xmas time". I'd love seeing a collab between you and madseasonshow, damn!
I have no idea what that video would even be about heh
Currently getting into classic Halo modding; that engine inherited the BSP concept, which means I need to keep my level meshes very proper, but it's interesting tech to look back on.
I miss my old potato computer. My first rig was a Quantex that somehow came with a FireGL 1000 Pro, a dev card that cost a thousand dollars when the whole computer wasn't much more. I had a Voodoo 2 for Quake's release. The PC Gamer issue that had Quake on the ad had me hyped.
I'm no game dev, but the visuals really helped me understand the concepts. What do you use to create these neat animations?
Love the vid, and love that you sound like Bob Belcher!
Love your work! Would love to see a 2D game engine from scratch course 👾
Archer teaching me code history is dope
The 8x8 rays idea is basically just rendering at 1/8th resolution and then using edge-detection-based anti-aliasing.
Excellent video, thank you! Subscribed!
This also gave me even more respect for John, what he's done for computer graphics is, I guess, comparable to what Einstein did for physics.
Hello, Simon! I was wondering, since you showed the Black Book, which books (new and old) you'd recommend that are still useful for computer graphics and game development.
another simon banger
Amazing video, please keep it up!
I learned that you are better off hollowing out cubes than building a map out of pieces and ending up with an error because of a hole in the map from a tiny point that didn't line up or overlap.
Another banger just dropped
I think a nice video to make next would be how shadows are done today vs. how they were done back in the day; it's a very extensive and interesting topic to cover, IMO. Great videos in general! My suggestion would be a slightly more enthusiastic voice; at the moment it sounds like you are super tired when making these...
My old Packard Bell P75 had 1 MB of on-board video memory and 8 MB of base memory. Over its life I upgraded it to 128 MB of memory (£100 at the time, second-hand), a free 4 MB graphics card, and a Creative AWE32.
The game ran OK before the upgrades, but even after, it was only a minor boost.
in the example with walls A to G, how do we know to use B as wall A's left node instead of E, which comes first if doing a rectangular cast from A?
You can do specialized room by room rendering without requiring an algorithm at all.
Please do Lateralus and Invincible - you'll love them!
The beam tree wasn't discarded. It worked effectively in a specific case: the Rendition Verite version of Quake, where they could also get microcode support from Rendition - the Verite 'GPU' had a MIPS processor at its core. Abrash worked on this version.
10:03 huh, so I guess that's why the uncompiled map files are stored as sets of planes without any vertices.
Considering the limitations id had back when they started making 3D games, I find it a bit strange that nobody has ever talked about their software renderer code.
POV: You're sitting in your moderately lit room, working on your dark mode system like a reasonable person, with a video playing on the side of your ultrawide monitor. The video has a dark palette, so you've not shrunk it down to just a few hundred pixels. Suddenly, half way through the author shows a screencapture of Google's blank search page in white screen mode, blinding you with the power of a thousand suns. Why would they do this to you? Is there no justice in the world?
Well put together production (using JS?) and nice explanation as always. Thanks!
Thank you so much!
Bunny hopping, wobble-strafing rocket carnage.
Playing QW on a 28.8 was the start.
PVS is interesting, but given how Quake levels are structured, I think those visibility trees could just be done manually, without any specific algorithm to do it automatically.
Incredible video as always
excellent visualization
Can you please do a video about Embark Studios and The Finals? They use really stunning rendering methods I've rarely seen in any other game.
What I want to know is: what happens if a surface is both in front of and behind another one, i.e. it passes through it? Is it just approximated (like using the middle point of the surface to decide front or behind)? I would like to know because I am making my own 3D graphics engine that has very limited computing resources.
You split the surfaces. It's not terribly complex, but I avoided it in the demos mostly to keep them really simple.
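For anyone curious, a minimal sketch of such a split for a convex polygon (types invented here; real BSP compilers also use epsilon tolerances for vertices lying on the plane, and the output arrays need room for n+1 vertices):

```c
typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;   /* dot(n, p) - d = 0 on the plane */

static float side(Plane pl, Vec3 p)
{
    return pl.n.x*p.x + pl.n.y*p.y + pl.n.z*p.z - pl.d;
}

/* Split a convex polygon (in[], n verts) by a plane, writing the front and
 * back pieces. A classic Sutherland-Hodgman-style walk around the edges. */
void split_polygon(Plane pl, const Vec3 *in, int n,
                   Vec3 *front, int *nf, Vec3 *back, int *nb)
{
    *nf = *nb = 0;
    for (int i = 0; i < n; ++i) {
        Vec3 a = in[i], b = in[(i + 1) % n];
        float da = side(pl, a), db = side(pl, b);
        if (da >= 0) front[(*nf)++] = a;     /* on-plane verts go to both */
        if (da <= 0) back[(*nb)++]  = a;
        if ((da > 0 && db < 0) || (da < 0 && db > 0)) {
            float t = da / (da - db);        /* where this edge crosses */
            Vec3 m = { a.x + t*(b.x - a.x),
                       a.y + t*(b.y - a.y),
                       a.z + t*(b.z - a.z) };
            front[(*nf)++] = m;              /* the new vertex belongs to */
            back[(*nb)++]  = m;              /* both resulting pieces     */
        }
    }
}
```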
Amazing job, thanks for the video.
Great stuff. Thank you!
BSP seems to be an "intersection" of front facing spaces for a set of walls.
17:30 I think Abrash is dramatizing this for his book, and I doubt Carmack got the idea from watching a bus pass by. Most likely Carmack got the idea from Ken Silverman's Build engine, which came out a year before Quake.
I have seen a lot of videos and books trying to explain how BSPs work; this video and the "Game Engine Black Book: DOOM" are the best ones I have seen yet.
That visual explanation was really awesome.
Can I ask what you make these animations with? Are you using Manim or Motion Canvas?
I write shaders and define everything there using code. I have a shader course that touches on some of it, but the videos are like 99% SDF's and shaping functions.
@@simondev758 thank you.
This is not dissimilar to how dynamic zoning and transforming coordinates works in Star Citizen.
Very nice, thank you!
Carmack said that making Quake right after Doom was basically a mistake and they should have instead made a new game on an improved version of the DOOM engine with the networking capabilities found in Quake as a stepping stone.
Those BSP trees: can't it ever happen that something is on both sides of another thing, or is this always solvable (i.e. by inverting the order of these elements in the tree)?
You split everything onto both sides as you fill the tree. That is the ugly secret.
Yep, as Arne mentions, you do splits. It doesn't fundamentally change the understanding of the trees presented, but it adds some complexity to the implementation, which is why I skipped it. This was a crash-course primer on BSPs, and I just wanted you to understand the concept.
Hi! Did you ever (or could you in the future) explain how games render 3D scenes with depth of field (sharp at a certain distance from the camera, blurry as you get closer or further)? Thanks!
With a path tracer, you randomly vary the starting position of the ray over the lens (the ~3 mm disk of your eye's pupil).
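A minimal thin-lens sketch along those lines, assuming a normalized primary ray direction and a lens lying roughly in the xy-plane (all names here are illustrative, and 'aperture' is the lens radius):

```c
#include <stdlib.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Jitter the ray origin over the aperture disk and aim every sample at the
 * same point on the focal plane. Points at 'focus_dist' stay sharp;
 * everything else blurs into depth of field. */
void lens_ray(Vec3 eye, Vec3 pixel_dir, float aperture, float focus_dist,
              Vec3 *origin, Vec3 *dir)
{
    /* Point this primary ray would hit at the distance of perfect focus
     * (a simplification: a true focal plane would rescale by the forward
     * component of pixel_dir). */
    Vec3 focal = { eye.x + pixel_dir.x * focus_dist,
                   eye.y + pixel_dir.y * focus_dist,
                   eye.z + pixel_dir.z * focus_dist };

    /* Uniform sample on the unit disk via rejection sampling. */
    float dx, dy;
    do {
        dx = 2.0f * rand() / (float)RAND_MAX - 1.0f;
        dy = 2.0f * rand() / (float)RAND_MAX - 1.0f;
    } while (dx*dx + dy*dy > 1.0f);

    origin->x = eye.x + dx * aperture;
    origin->y = eye.y + dy * aperture;
    origin->z = eye.z;

    /* Re-aim at the focal point and normalize. */
    dir->x = focal.x - origin->x;
    dir->y = focal.y - origin->y;
    dir->z = focal.z - origin->z;
    float len = sqrtf(dir->x*dir->x + dir->y*dir->y + dir->z*dir->z);
    dir->x /= len; dir->y /= len; dir->z /= len;
}
```

(Real-time games usually fake the same effect as a post-process blur driven by depth, rather than tracing lens rays.)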
Another great video!
woah mr doob is a patreon supporter! cool.
Hint: it would be cool to see a video explaining how DREAD (aka GRIND) managed to create a DOOM-class engine running smoothly on the Amiga 500, a machine from 1987 👌
I'm surprised they didn't find a solution for the dropouts in the raycasting.
I would assume that you could fill those using neighbouring pixels and the resulting visual artifacts would be really negligible... especially with Quake's drab colour palette.
Perhaps you'd have a problem when several dropouts occur next to each other?