Glad I found this again, because I wanted to let everyone know it's better than a bedtime story. I fell asleep watching and listening to this and woke up to a dead phone with a flat battery. Best sleep in ages.
I literally had the thoughts "oh, so you can compare foreground and background objects in screen space" and "you don't have a depth buffer yet, but you have the one from last frame" before those ideas came up in the demonstrations. The examples were really well explained and very intuitive thanks to the visuals!
I've always wanted to find and thank a dev for Prototype. That game and its sequel were amazing, and the ability to bring destruction to the city was so much fun. Thank you all
@@courtneyricherson2728 I was a bit disappointed with the story direction of the 2nd game. It really felt forced to make Mercer the villain instead of just apathetic. So they should have (in my opinion) gone with something or someone else as the villain.
That's because you're thinking of it as physical real world objects. When you think of it as data in a notepad file, all the computer is doing is reading very quickly.
@@edzymods Nah. It's still hard to put into human terms just how blazingly fast computers are at handling raw numbers - and the fact that they continue to get faster doesn't help in terms of wrapping your head around it. It's like trying to understand the scale of the solar system. You can put it into various models or scale it down to a convenient size all you want; it'll never truly convey just how big it is.
I guess the reason why Cities Skylines 2 does not bother with occlusion culling is that in a top-down perspective there are simply not many objects behind each other (in contrast to e.g. a 3rd person game).
I feel like you'd still benefit heavily, depending on the shot. When the camera is overhead, the amount of "stuff" is naturally constrained to an extremely small area, thus your occlusion needs aren't high anyway, vs when the camera is lower. But this is mostly conjecture, so take it for what it's worth.
@@simondev758 It's an interesting thing to think about, as a lot of simulation games have had this problem where performance tanked once you zoomed in to look at closer detail. It's especially problematic in Skylines, though, given it's a game that encourages you to zoom in to street level; why else make it so detailed? Seems like a big goof not to realise that was going to be problematic.
I rarely do any game development, but love your content! It's good stuff. You and Acerola have become one of my favourites to watch and learn about how these digital worlds come about.
Same. I think graphics programming has a lot to teach about programming in general, especially math and the performance of algorithms. It also intrinsically visualises the thing our program is manipulating, which naturally lends itself to clear educational content and a tight feedback loop for problem-solving and evaluating our methods.
I hope you'll read this, because this video has really inspired me. Not only do you explain things in a really easy-to-understand way, you also carve out a value system for things: "this is easy", "not that complicated", "not a mystery really". These really help you get a feel for the underlying relationships between all the different approaches (and systems), the kind of thing you'd normally only get in a 1-on-1 conversation. Thank you for showing it's possible! Great video
I'm currently making a 2D game and it kind of blows my mind that the tradeoff of not drawing objects is worth the time it takes to check what should be culled every single frame. Surely simply checking which objects should be culled is a massive processing task
To be fair, if those objects have simple geometry and only a texture, culling individual objects can cost way more than just drawing them. For example, in Minecraft you wouldn't want to run culling calculations for every single block. But as objects got more and more complex and shaders entered the picture, that tradeoff shifted.
In a 2D game you don't have to check any objects. Just have your world in a grid (2D array); then the culling is just your for-loop, which only iterates through the part of the grid that is visible on the screen.
Well, actually it would be worth it if drawing an object is a much bigger task than sorting _and_ drawing the remaining stuff, so the workload actually has two parts to it @@RandomGeometryDashStuff
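The grid-culling idea described above can be sketched in a few lines of Python. This is a toy illustration, not code from any real engine; the tile size, grid shape, and function names are all made up:

```python
def visible_tiles(cam_x, cam_y, cam_w, cam_h, tile_size, grid_w, grid_h):
    """Return the (col, row) range of grid cells overlapping the camera rect.

    There are no per-object checks: the loop bounds *are* the culling.
    """
    x0 = max(0, cam_x // tile_size)
    y0 = max(0, cam_y // tile_size)
    x1 = min(grid_w, (cam_x + cam_w) // tile_size + 1)
    y1 = min(grid_h, (cam_y + cam_h) // tile_size + 1)
    return x0, y0, x1, y1

def draw_visible(grid, cam_x, cam_y, cam_w, cam_h, tile_size):
    """Visit only the on-screen part of the world grid."""
    grid_h, grid_w = len(grid), len(grid[0])
    x0, y0, x1, y1 = visible_tiles(cam_x, cam_y, cam_w, cam_h,
                                   tile_size, grid_w, grid_h)
    drawn = []
    for row in range(y0, y1):
        for col in range(x0, x1):
            drawn.append(grid[row][col])  # would be draw_tile(...) in a real game
    return drawn
```

The cost is constant in the world size: a 10x10-tile screen over a million-tile world still only touches ~100 cells.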
As a mechanical engineer I like how you kept this simple yet technical in terms of explanation. That is a skill in itself. You've got yourself a new subscriber! Keep it up, my man
I'm a dev myself and I gotta say, 90% of this was new info and the last 10% of the new info kinda flew over my head a little. This is amazing, thanks a lot. It's not often that we get to see the nitty gritty inside stuff that you don't directly work with.
Prototype was hands down my favorite game when it came out, and years after. So many days coming home from a crappy shift to take out my frustration on the zombies/soldiers/citizens/mutants of New York. Thanks for the memories.
Ohhhh, a dev who worked on Prototype teaching me game dev, I feel so blessed. I absolutely loved that game, and still love it; thank you for putting in so much effort behind it. And thank you for this amazing teaching, keep it up man, much love and respect.
There needs to be a VR game engine that only renders nearby objects twice (for the left & right eyes) and far away objects once, because far away objects don't need stereoscopic vision. This would save resources and improve performance.
Since distant objects are likely to change fewer pixels per frame, you could render far away objects at a lower frame rate as billboards, and then, if necessary, smooth the movement by shifting the billboards in between their frames. Then, with the saved performance, maybe you could even render important objects at a higher resolution than they would normally be at such a distance and downscale, instead of doing anti-aliasing or something.
@@DeusExtra ah yes, that’s another good idea. Render far away objects at half the frame rate or use alternate eye rendering like the Luke Ross VR mod. But in VR, far away objects need to remain at high resolution because low res is very visible in VR. That’s actually the biggest immersion breaker in VR, when you can’t see far away objects, like in real life.
@@voxelfusion9894 how does nearsightedness translate into VR vision? The actual display panels are close to your eyes. And do corrective VR lenses help with that?
That's a pretty darn great summary for beginners. I did this for 3 years myself and it's definitely one of the more challenging yet fun programming fields there are! P.S. I trawled through hundreds of pages of documentation and papers with no friendly youtube video to guide me. You still can't avoid that if you want to become actually good at this, but do watch the video to get an overview.
Hi Simon, I've been following you on YouTube for a couple of years and I'm very inspired by computer graphics and game development, currently making a game on Phaser 3. Thank you for explaining interesting techniques and sharing your incredible experience, I learned a lot thanks to you
The Prototype games are among my favorites, both of them. I hope the source code gets leaked so we can get better mods, since it doesn't feel like we're getting a new one
Somehow you manage to "destress" me while teaching what could seem like a complex topic; you break it down so it seems so simple. I like your JavaScript projects and I have converted some of them to TypeScript.
I actually loved Prototype! I bought that game right when it released. I really wish they would have done more with the story and made a good sequel. You guys really did do good work on that game. The free-roam playability, in the GTA vein, has been unmatched in that particular flavor of genre. Now I'm going to have to dust it off and see if I can get it to boot on something
Thank you for a great explanation! If you are suddenly out of ideas on how to proceed, I am personally very interested in how this process works with shadows, lights, reflections etc. from the offscreen objects. Playing Hogwarts Legacy, which is a UE4-based game, I've noticed that some elements, like reflections in lakes, often suddenly pop in when I slightly move the camera, which causes an unpleasant experience.
Yeah, one of the options in the last poll on Patreon was exactly that, the various reflection techniques used over the years, and the tradeoffs between them. I haven't played Hogwarts Legacy, but it sounds like SSR, or screenspace reflections, which is what I showed part way through this video (the shot of the cheap reflections on water when I talked about the depth buffer). It's a limitation of the approach, but it's super cheap.
Am surprised to hear you worked on Prototype... I'm a car guy, so the non-car games I've played could be counted on one hand, and Prototype was one of them that I loved so much. It was like a modded GTA to the younger me, just a weird, amazing experience really
Fantastic education, especially for a lone developer trying to learn more, who is unsatisfied with simplistic answers. These twenty minutes were more valuable to me than many hours of the Unity "tutorials" I have watched. Thanks for being so helpful.
This is definitely my new favorite of your videos. It's endlessly fascinating to me to hear about the crazy things that are done in rendering pipelines. I love the GDC presentations where they dig into the minutiae of their rendering customizations.
Just a small tip for visualization: I have trouble seeing a difference between red and green, which was worse with the green and yellow examples. For those with colorblindness, contrast is always the eye's first priority over color. Better to use colors that are complementary, or better yet, just white and black for visualization. (And yes, it's hard for us in graphics programming) :p Really love your video! Super simple and a great start to understanding graphics and optimization. Subscribed :3
@@simondev758 Thanks for taking it seriously, I appreciate how you handle feedback on your videos. You can look for ready-made "accessible color palettes" to drop in, or keep it simple with contrast and patterns. It really does help; I kept pausing this video just to tell the effects apart, and the problem affects everybody when every monitor has different calibration.
Occlusion culling was always fascinating. In some games (like FFXIV) the pop-in is really in your face if you move the camera too fast, but even when it's slow to transition it's still just...hard to believe the tech exists. Cool explanation!
What taught me a lot about optimization was actually the Source engine, when it came to mapping in Hammer. Now I'm using your video to learn about other types of optimizations that might be possible in Source, or at least in Hammer. Thanks for sharing this type of content and information with anyone that wants it! I am also going to code my own game that is heavily inspired by a game that is poorly optimized, so watching these will hopefully ensure I do not make the same mistakes those developers did.
That last "state-of-the-art" demonstration is so cool! I honestly never even realised it was common to do visibility culling outside of precomputed visibility structures. But not only is it done, there's some very interesting algorithms to lean on. I especially love algorithms that don't rely on temporal reprojection, so that last one (use objects visible in the last frame as occluders) is quite fascinating to me.
The amount of tutorials on game development that just ignore optimization is crazy, so it's nice to see that there are at least some people that are willing to talk about optimization
Hey Simon, one thing I'd like to see you put up for your patrons to vote on: rendering in high-motion/scene-change situations, e.g. racing sims. Yes, in flight sims planes move faster, but a majority of the time you're higher up in the sky, so objects tend to "draw" at around the same area of the scene. Racing sims are interesting (especially in cockpit, and especially in VR) because unlike most games, where the scene doesn't change TOO much (the area stays reasonably within the same occlusion region, and objects are seen from fairly similar angles), VR plus racing sims means fast forward movement with a lot of head jiggle/turning/tilting. Add in suspension modeling, hairpin corners, etc., and though I've been thinking about all the optimization methods, I just can't think of any good ones for racing sims that wouldn't ruin the experience. Particularly when you have something like 4 mirrors in a car (say 3 real and 1 "virtual" in the form of a rear-view camera). It's honestly kind of crazy to think about when considering the processes most games use, because you want really high up-close detail (buttons, texture art for dashes, 3D pins for dashes, especially in VR where flat physical dashes look horrible), and then transparency rendering like windows, fences, buildings, etc. The reason I ask is that I play a lot of iRacing, and we end up with a lot of folks expecting 120fps under really heavy loads, which... well, I'd love to be able to explain that to someone. It just sounds like racing sims as a whole are the WORST possible rendering scenario of any game, due to their complexity in so many different areas. Not to mention that iRacing does a lot of off-track art for broadcast cameras, objects you'd really only see in scenic games or TV broadcast panning shots.
Obviously I don't expect any response or coverage on this one, but I figured use cases around specific types of games, and the rendering pipelines for those games, might be an interesting topic, since how things can even be processed varies between an FPS, an RTS, or a racing sim. (Like the Horizon Zero Dawn/Death Stranding/Decima engine stuff looks GREAT, but I don't see it working for something like a racing sim.) Anyway, sorry for the spam, just wanted to send something while I was thinking of it
Optimizing CPU performance is something I enjoy doing a lot, very interesting to see how optimizing GPU operations is done. Loved this! Also makes me grateful for game engines which mostly do this for you already haha, not sure I'd want to do this from scratch unless I really needed to get extra frames
This was really awesome! I used to develop for the Nintendo DS, so learning to develop under very strict constraints was really part of the job. This format with in-engine examples really sets the video apart; excellent job man!
Gran Turismo has used this technique for at least 2 decades. WipEout 2048 also used this technique on PS Vita, but if you do some funky things with the photomode camera, you can see loads of massive black boxes through large chunks of level geometry labeled in bold red font "Blocker."
I've used the depth buffer many times to create stylistic shader effects and other bells-and-whistles stuff. It's very handy, but today I finally understand why translucent materials get F'ed up: they don't write to the depth buffer.
What an elegant way to solve the depth buffer dependency issue. Render the simpler version of the view to extract depth data and then render the high resolution view.
as a web dev, not familiar with game dev stuff, I had a clue on how the rendering could work, but this goes way beyond what I understood in the past. good explanation, in a very simple way, at least for people with some dev knowledge like me, can't tell if someone with no dev experience could understand, but this sort of content isn't for the average guy
I was thinking for a long time about how to deal with not just culling, but ray tracing, collisions and gravity simulation too, for a space game. And yeah, cache optimization is important, but so are tree structures, esp. for stuff where everything interacts with everything. I want to do a hybrid approach, where the tree nodes serve as containers (indexes) for arrays of properties. I'm super excited for it!!!! But for now I gotta work on simpler games so I can make a name for myself, and make it as a game developer \o/
God... just the sheer amount of knowledge and the sublime ability to explain usually not-quite-so-straightforward concepts (when read black-on-white from a uni slide or a blog post written in a VERY dry fashion) *THAT FREAKIN WELL* just amazes me to the point that you, Sir, have officially become my role model (no simping intended). And I mean... duh, no wonder you were (or are, dunno) a Google software engineer, cuz that is the level I aspire to reach one day. Thank you A LOT and I hope this world blesses you and your fam for everything! Super thankful that you make such amazing vids! Cheers!
Thanks for illustrating this for the average users. For a newcomer's deeper dive, a dissection of the insanely efficient tricks used in Doom is usually a good starting point.
@@simondev758 Don't remember the channel anymore, but about half a year ago, someone did a great dissection of the Doom code. Loved the simplicity, like how the height stuff for the level was just done by assigning every element two numbers, one for the height offset from the bottom and one from the top, the reason why they couldn't have overlapping elements without additional tricks in the sequel.
Loved this, as an enterprise software engineer with no game development experience, I found this highly interesting and really easy to understand. You did an amazing job, thanks and you have another subscriber. 😁
A quick subscribe from me! I look forward to you going into transparency shenanigans. It surprises me that to this day it is not unlikely for a player to run across transparency issues. I remember even in the recent beautiful Armored Core 6 I found a situation where smoke consistently interacted badly with something. And in playing around with my own projects, I've gone overboard with "transparency is beautiful" too many times, and keep having to be mindful of performance impact.
Woah! 😮 Crazy you worked on Prototype. I loved that game, and I have always remembered it from time to time. Maybe you could make a video about your work on the game.
Would you be interested in that? Brief summary: I did some of the early r&d for the graphics, did a big presentation at GDC, and during production mostly did backend and optimization work.
Great video. I'm a 3D artist, not a programmer of any sort, and there might have been a simple explanation that I missed, but how does culling account for things like long shadows or global illumination that leak from off-screen into the visible scene? ...Maybe worth a part 2? :)
Loved the video, I did quite a bit of reprojection shenanigans about ten years ago with the DK1 and DK2 to improve perceived frame times for stuff outside of our foveal vision!
Nice video. I once read a paper on culling strategies. It's just amazing how smart the people in the gaming industry are. Some of the most amazing algorithms came from the gaming industry.
You worked on Prototype? That's one of my favorite games! The video is very informative, and relatively easy to understand even for someone who knows nothing about game development, though there's a minor issue with captions. At a few points the captions are mismatched, like at 12:10, and it takes a few seconds for them to catch up.
This video and your explanations have increased my level of respect towards game engine developers that implement these sort of things for others to use. Thank you.
Great video, I wish it was made a few years ago :-) In my occlusion culling journey, I originally took the HiZ approach on the GPU which worked out great at first. It became a problem though when I wanted to do more with the visible entities. I tried in several ways to send data back to the CPU but there's just too much latency and not enough types of indirect commands to keep it on the GPU, so I went the CPU route instead. Intel has a great paper for their method of CPU-side HiZ implementation, "Masked Software Occlusion Culling". They also provide an open source implementation which has performed well for my application.
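The HiZ idea mentioned above can be illustrated with a toy sketch. This is not Intel's implementation (their Masked Software Occlusion Culling is far more sophisticated, using SIMD and coverage masks); it's just the core conservative-depth test, with plain Python lists standing in for buffers and all names made up:

```python
def build_coarse_depth(depth, tile):
    """Downsample a depth buffer: each coarse cell keeps the *farthest*
    (max) depth in its tile, with the convention smaller = closer.
    Keeping the max makes tests conservative: we may draw a hidden
    object, but we never cull a visible one."""
    h, w = len(depth), len(depth[0])
    coarse = []
    for ty in range(0, h, tile):
        coarse.append([
            max(depth[y][x]
                for y in range(ty, min(ty + tile, h))
                for x in range(tx, min(tx + tile, w)))
            for tx in range(0, w, tile)
        ])
    return coarse

def is_occluded(coarse, tile, x0, y0, x1, y1, nearest_z):
    """An object (screen rect x0..x1, y0..y1, nearest depth nearest_z)
    is hidden only if it lies behind the farthest stored depth in
    every coarse tile it touches."""
    for ty in range(y0 // tile, min(len(coarse), y1 // tile + 1)):
        for tx in range(x0 // tile, min(len(coarse[0]), x1 // tile + 1)):
            if nearest_z <= coarse[ty][tx]:
                return False  # might be in front of something here: must draw
    return True
```

Testing whole bounding rectangles against a coarse pyramid level is what keeps the per-object cost tiny compared to testing every pixel.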
This probably isn't related but, Titan Quest is an old locked perspective 3d isometric game. There was a mod that unlocked the view and let you rotate the camera around. I thought it was odd that the back of everything was there and had textures since they were never supposed to be seen.
I love the video so far but I have one piece of feedback - the yellow and green can be hard to distinguish, especially for someone colorblind. Maybe blue instead of yellow would be a better choice there ^^
Meshlet culling using mesh shaders looks like an interesting development, and I'm guessing UE5 uses something like that. I wonder what the new Unity culling system uses.
You forgot a few things. The one that still amazes me to this day is rotating billboards for distant objects. Shocking how you just can't tell, even as one turns into a real object right in front of your eyes, seamlessly. AND let's not forget fogging; even used lightly, it takes a major load off the system. Also blur and a few more tricks of the trade.
For me, the most interesting thing is, how the "MOV AX,CS" and similar lines became an open world game with theoretically unlimited gaming time. 40-50 years ago one or two people worked on a project (code, music, gfx was the same person sometimes) and then, 20-25 years ago the games became so complex that some pc games had longer credits list than a Hollywood movie.
You sound EXACTLY like MadSeasonShow. Thank you for your wisdom, from a new graphics programmer currently learning how to make game engines. I have managed to acquire some great books such as GPU Gems and can't wait until I get to the point where I can handle DX12 and Vulkan. I am not surprised someone as talented as yourself was involved in Prototype; it shows. 😊
One of the interesting approaches I have seen explained is a low detail wider world (like a crude model), and detailed render around the viewer (limited view distance).
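That crude-far/detailed-near split is essentially distance-based LOD selection. A minimal sketch of the selection step (the threshold values and names are invented for illustration):

```python
def pick_lod(distance, thresholds):
    """Return a LOD index: 0 = full detail near the viewer, rising with
    distance. thresholds are the distances where detail drops."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    # Beyond the last threshold: the crudest model (or an impostor / cull)
    return len(thresholds)
```

In a real engine you'd typically add hysteresis (different thresholds for switching up vs. down) so objects sitting right at a boundary don't flicker between LODs.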
Hey Simon, amazing video, I really love how you go in depth into the nitty gritty of optimization and the history of it. One such topic I'd love to hear about is collision engines, with broad, middle and narrow phases, AABB collisions, spatial partitioning, the challenges of long-range ray and shape casting and so on. I feel like there are so many interesting things to talk about in collision engines
Glad to hear you enjoyed it! I'd love to dive into more optimization topics, but I think I'll leave collision engines out of it. I strongly dislike when people pass themselves off as experts in things that they're not, and I hold myself to that same standard. I haven't done anything beyond superficial physics work, so I don't feel especially qualified to talk about the subject. I'd encourage you to find resources or experts that specialize in that area. Would love to send you in a direction, but I can't think of any offhand unfortunately.
I never worked with game development, but I love this channel. I love to hear about clever solutions to optimization problems. This video was particularly interesting.
I remember enabling occlusion culling for both audio & video in 'Doom 3' back in the 2000s, it helped so much with our low-end hardware. I honestly dunno why they weren't the default settings cuz I never found any problems with them, guess the devs played it very safe or knew of hardware that didn't work with it.
While it might seem like a huge optimisation step at first glance, backface culling already does most of this work. It's a rendering step that looks at whether a polygon is facing the camera before it gets rendered. On most concave objects, that would already account for a large part of the occluded polygons
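The backface test described above is usually just a sign check on the triangle's signed area in screen space. A toy sketch, assuming front faces wind counter-clockwise in a y-up coordinate system (with y-down screen coordinates, the convention flips):

```python
def is_backfacing(p0, p1, p2):
    """Screen-space backface test: with counter-clockwise winding for
    front faces (y-up), a non-positive signed area means the triangle
    faces away from the camera and can be skipped before rasterization."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # Twice the signed area via the 2D cross product of two edges
    signed_area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return signed_area <= 0  # degenerate (zero-area) triangles culled too
```

For a closed convex mesh this alone discards roughly half the triangles, which is why GPUs do it in fixed-function hardware.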
I absolutely love this, because I've been working on my bachelor thesis for some time now, which is on interactive realtime rendering of huge point cloud datasets, and went through all the exact same resources for occlusion culling. In my case I was able to use GPU instanced rendering methods, since I was only displaying points/spheres, so frustum culling, LODs for the primitive meshes and screenspace rendering for the lowest LODs was enough to push below 3ms (~350fps) for ~500,000,000 points in an average case. For that I used a GPU-driven filter and render pipeline, and the only restriction is the accessibility of the raw point cloud dataset as persistent buffers in GPU memory. That means overhead when loading the program, which on the other hand is kind of negligible considering the loading times of such datasets from storage into system memory. BUT memory capacity on the GPU can be a critical aspect! To solve that, datasets can go through preprocessors splitting them up into data channels (position, size, color...), or only load a small subset of timesteps if the point cloud has temporal data. So depending on the use case, primitive optimizations already get you very far. And thank you for your great video ^^ Definitely will share it with my uni peers
Patrons can now vote for the next video! Thank you for your support.
Patreon: www.patreon.com/simondevyt
Courses: simondev.io
When DARPA's brain modem allows us to enter the matrix, fly like Superman, and experience snatching, all this hard work on simulation rendering will really pay off.
Thank you for the video. I want to point out that after 17:03 the captions went nuts.
You're a very good teacher: straight facts, easy to understand, and a calming voice. 🎉
U r sick
I was going to watch this, but within the first 20 seconds you said "games used to have vast scale"... and showed AC Mirage! The tiniest, crappiest version of AC ever released! AC Valhalla would have been completely acceptable, especially because the twitchy-boy response was always that it was "too big". But no, it was no good... because the main character is white.
I can tell you're a developer because you sound like you haven't slept in six years. That was an amazing explanation, so kudos on your tireless efforts.
The not-sleeping thing is more from my kids
Sounds like Jason from “Home Movies” lol
I can mostly tell from the name
Yee, I was gonna say H. Jon Benjamin @@CrowdContr0l
That's some BS stereotype for devs. You make shitty code if you don't sleep well
8:48 What's interesting here is that for the Portal64 project (a Portal port to the N64), the dev decided to skip having a depth buffer and instead sort things from furthest away to closest using the CPU. The reason is that the N64 has extremely limited memory bandwidth and also more CPU cycles than it can use, so he'd rather spend CPU power sorting things than clear/write 32 bits of depth data to every pixel every frame. It also helps that the nature of Portal's maps makes things really easy to cull: anything that's not in the same room, or in a room that's visible through a portal, open door or window, can be culled without even thinking about it.
I don't think he mentioned this in his video, but given the game is running at 320x240, this presumably also saved him 150 KB of memory. A not insignificant amount when he has 4 MB total to work with.
Yep, it's called the Painter's Algorithm, and it was commonly used in the era before depth buffers. The whole era before modern hardware is really cool, and needs its own set of videos! heh
An important clarification here: the Portal64 example sorts per display list, not per triangle.
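For anyone curious, the core of the painter's algorithm is just a far-to-near sort before drawing; a minimal Python sketch (the object/camera representation here is made up for illustration, and in Portal64's case each "object" would be a whole display list rather than a triangle):

```python
def painters_order(objects, camera):
    """Sort drawables far-to-near so nearer ones overwrite farther ones
    when drawn in order: no depth buffer needed. It breaks on cyclically
    overlapping geometry, but works fine for room-sized chunks."""
    cx, cy, cz = camera

    def dist_sq(obj):
        # Squared distance is enough for ordering; skip the sqrt
        x, y, z = obj["pos"]
        return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2

    return sorted(objects, key=dist_sq, reverse=True)
```

Sorting n chunks is O(n log n) on the CPU, versus a depth compare and possible write for every covered pixel every frame, which is exactly the bandwidth the N64 couldn't spare.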
You should see what the community of 3D creators on Scratch have come up with to overcome the computational limitations of Scratch. It's wild stuff! The painter's algorithm and BSP-enhanced painter's are commonplace!
Working on the N64 is weird in general. the rambus is so ludicrously slow compared to the rest of the system that you end up having to do a lot of stuff downright backwards compared even to contemporary systems.
It also makes me really appreciate modern development tools, because when you look at the internals of games like Mario 64, it's pretty clear they were optimizing blind, without being able to actually measure what was taking the most time, or even whether certain changes were helpful or harmful.
Crazy, I've heard stories about the older architectures being super strange. I landed on the tail end of the PS2, which was apparently a total nightmare to work with as well. Never knew any N64 people, but watching some of @KazeN64's videos makes it look super interesting.
This video is insane. All the stuff I was looking for for years on the internet, just made available in a simple, condensed, purpose-made effective educational video with no fluff. Thank you so much. If only every teacher was this good (and every research paper was readable).
@@leeroyjenkins0 indeed, it builds really well onto foundations you gain from uni. Now the problem is just time. Even the doctor's degree genius programmers and the studios spent decades iteratively developing and adding these algorithms to the game engines with each new game. Building a new game engine just seems like such a daunting, massive task. And that's just the rendering side of things! You still gotta create tools on top of that to be able to work with your engine. Modifying existing game engines like UE might be the way to go (Deep Rock Galactic devs chose this route), but even then, you gotta know the engine pretty well, which is stupidly specialized knowledge, as well as know the algorithms involved.
Not to mention he got H. Jon Benjamin to narrate the whole video
@@1InVader1 Building a new game engine isnt a daunting massive task, unless you want to compete with UE or Unity.
@@rykehuss3435 it only makes sense to make one if you want to do something that those others don't do as well (destruction, voxels...). If you want to do the same, or anything less, what's the point? UE already exists, made by people smarter than you, might as well use it.
@@1InVader1 I agree
Glad I found this again, because I wanted to let everyone know it's better than a bedtime story. I fell asleep watching and listening to this, and woke up to a dead phone with a flat battery. Best sleep in ages.
I've watched it all this time and found it very interesting.
I literally had the thoughts of "oh, so you can compare foreground and background objects in screen-space" and "you don't have a depth buffer yet, but you have the one from last frame" before those subjects came to be implemented in the demonstrations. The examples were really well explained and very intuitive, thanks to the visuals!
I LOVED Prototype growing up. Super cool that I just stumbled across a dev on youtube. Your channel is great btw
The power creep was so fun, really made you feel like a trillion-dollar bio weapon
I've always wanted to find and thank a dev for Prototype. That game and the next were amazing and the ability to bring destruction to the town was amazing. Thank you all
that game made so many memories for me. thank you, Simon, and the entire team on Prototype.
@@courtneyricherson2728 I was a bit disappointed with the story direction of the 2nd game.
It really felt forced to make Mercer the villain instead of just apathetic. So they should have (in my opinion) gone with something or someone else as the villain.
The speed at which it does all the calculations of what should be drawn and what shouldn't always blows my mind.
A lot of the HZB ones can be done in less than a couple ms.
That's because you're thinking of it as physical real world objects. When you think of it as data in a notepad file, all the computer is doing is reading very quickly.
@@edzymods Nah. It's still hard to put into human terms just how blazing fast computers are at handling raw numbers - and the fact that they continue to get faster doesn't help in terms of wrapping your head around it.
It's like trying to understand the scale of the solar system. You can put it into various models or scale it down to a convenient size all you want; it'll never truly convey just how big it is.
I guess the reason why Cities Skylines 2 does not bother with occlusion culling is that in a top-down perspective there are simply not many objects behind each other (in contrast to e.g. a 3rd person game).
I feel like you'd still benefit heavily, depending on the shot. When the camera is overhead, the amount of "stuff" is naturally constrained to an extremely small area, thus your occlusion needs aren't high anyway, vs when the camera is lower.
But this is mostly conjecture, so take it for what it's worth.
@@simondev758 It's an interesting thing to think about, as a lot of simulation games have had this problem where once you zoom in to look at closer detail, performance tanked.
However, it's especially problematic in Skylines, given it's a game that encourages you to zoom in to street level; why else make it so detailed? Seems like a big goof not to realise that's going to be problematic.
Yeah, until you tilt the camera down, and then you have tons of buildings in front of and behind each other; then occlusion culling is a must lol
I rarely do any game development, but love your content! It's good stuff. You and Acerola have become one of my favourites to watch and learn about how these digital worlds come about.
I love Acerola's content too!
same. I think graphics programming has a lot to teach about programming in general, especially math and the performance of algorithms, and it intrinsically visualises the thing our program is manipulating, which naturally lends itself to clear educational content and a tight feedback loop for problem-solving and evaluating our methods
-but when I do… it’s Dos Equis.
@@arcalypse1101 let's not forget Sebastian Lague
Sebastian Lague too, these three make the ultimate trio for me
I hope you'll read this, because this video has really inspired me. Not only do you explain things in a really easy-to-understand way, you also carve out a value system for things: "this is easy. not that complicated. not a mystery really". These really help to get a feel for the underlying relationships between all the different approaches (and systems), which you would otherwise only get in a 1-on-1 conversation. Thank you for showing it's possible! Great video
Honestly a lot of gamedev isn't super complex, but presented in weirdly convoluted ways.
I'm currently making a 2D game and it kind of blows my mind that the tradeoff of not drawing objects is worth the time it takes to check what should be culled every single frame. Surely simply checking which objects should be culled is a massive processing task
> Surely simply checking which objects should be culled is a massive processing task
Worth it if drawing the object is a much more massive task
To be fair, if those objects have simple geometry and only a texture, per-object culling checks can cost way more than just drawing them. For example, in Minecraft you wouldn't want to run culling calculations for every single block. But as objects got more and more complex and shaders entered the picture, that shifted.
In a 2D game you don't have to check any objects; just keep your world in a grid (2D array), and then the culling is just a for-loop which only iterates through the part of the grid which is visible on the screen.
Well, actually it would be worth it if drawing the object is a much more massive task than sorting _and_ drawing the remaining stuff, so the workload actually has two parts to it @@RandomGeometryDashStuff
@@rosen8757 Thanks for this, I'll be implementing it in my engine.
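The grid approach described above can be sketched in a few lines; the tile size, grid layout, and camera rectangle here are made up for illustration:

```python
# Grid-based 2D culling sketch: only iterate the tiles overlapping
# the camera's view rectangle instead of testing every object.
# TILE, the grid layout, and the camera rect are illustrative.

TILE = 32  # world units per tile

def visible_tiles(grid, cam_x, cam_y, cam_w, cam_h):
    # Clamp the camera rectangle to the grid bounds, in tile units.
    x0 = max(0, cam_x // TILE)
    y0 = max(0, cam_y // TILE)
    x1 = min(len(grid[0]) - 1, (cam_x + cam_w) // TILE)
    y1 = min(len(grid) - 1, (cam_y + cam_h) // TILE)
    # Yield only tiles inside the view; everything else is skipped
    # without ever being looked at.
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            yield grid[y][x]
```

The cost is proportional to what's on screen, not to the size of the world, which is why it scales so well for tile-based 2D games.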
As a mechanical engineer i like how you kept this simple yet technical in terms of explanation. This is a skill in itself. You got yourself a new subscriber! Keep it up my man
I'm a dev myself and I gotta say, 90% of this was new info and the last 10% of the new info kinda flew over my head a little. This is amazing, thanks a lot. It's not often that we get to see the nitty gritty inside stuff that you don't directly work with.
Prototype was hands down my favorite game when it came out, and years after. So many days coming home from a crappy shift to take out my frustration on the zombies/soldiers/citizens/mutants of New York. Thanks for the memories.
Ohhhh, a dev who worked on Prototype teaching me game dev, I feel so blessed. I absolutely loved that game, and still love it. Thank you for putting so much effort into it. And thank you for this amazing teaching, keep it up man, much love and respect.
There needs to be a VR game engine that only renders nearby objects twice (for the left & right eyes) and far away objects once, because far away objects don't need stereoscopic vision. This would save resources and improve performance.
That's pretty clever. I'm curious about how it would perform
Since distant objects are likely to change fewer pixels per frame, you could render far away objects at a lower frame rate as billboards. And then, if necessary, smooth the movement by shifting the billboards in between their frames. Then, with the saved performance, maybe you could even render important objects at a higher resolution than they would normally be at such a distance, and then downscale, instead of doing anti-aliasing or something.
@@DeusExtra ah yes, that’s another good idea. Render far away objects at half the frame rate or use alternate eye rendering like the Luke Ross VR mod. But in VR, far away objects need to remain at high resolution because low res is very visible in VR. That’s actually the biggest immersion breaker in VR, when you can’t see far away objects, like in real life.
@@djp1234 meh, not seeing far away objects is perfectly immersive for anyone nearsighted. Lol
@@voxelfusion9894 how does nearsightedness translate into VR vision? The actual display panels are close to your eyes. And do corrective VR lenses help with that?
That's a pretty darn great summary for beginners, did this for 3 years myself and it's definitely one of the more challenging yet fun programming fields there are!
P.S. I trawled through hundreds of pages of documentation and papers with no friendly YouTube video to guide me. You still can't avoid that if you want to become actually good at this, but do watch the video to get an overview.
Hah, yeah if you could become an expert off a 20 minute youtube video, that'd be great. No shortcuts unfortunately.
Hi Simon, I've been following you on YouTube for a couple of years, and I'm very inspired by computer graphics and game development; I'm currently making a game in Phaser 3. Thank you for explaining interesting techniques and sharing your incredible experience. I learned a lot thanks to you
Prototype is one of my favorite games, both of them. I hope the source code gets leaked so we can get better mods, since it doesn't feel like we're getting a new one
Somehow you manage to "destress" me while teaching what could seem like a complex topic but you manage to break it down so it seems so simple. I like your javascript projects and I have converted some of them to typescript.
I read that as _distress_
I actually loved Prototype! I bought that game right when it released. I really wish they would have done more with the story and made a good sequel. You guys really did do good work on that game. The free-roam playability, GTA-style, has been unmatched in that particular flavor of genre. Now I'm going to have to dust it off and see if I can get it to boot on something
I can imagine using the terrain or buildings in a city as occlusion objects has big benefits real quick.
Thank you for a great explanation! If you are ever out of ideas for how to proceed, I am personally very interested in how this process works with shadows, lights, reflections etc. from offscreen objects. Playing Hogwarts Legacy, which is a UE4-based game, I've noticed that some elements, like reflections in lakes, often suddenly pop in when I slightly move the camera, which causes an unpleasant experience.
Yeah, one of the options in the last poll on Patreon was exactly that: the various reflection techniques used over the years, and the tradeoffs between them. I haven't played Hogwarts Legacy, but it sounds like SSR, or screen-space reflections, which is what I showed partway through this video (the shot of the cheap reflections on water when I talked about the depth buffer). It's a limitation of the approach, but it's super cheap.
You should do headspace recordings. Your voice is immensely soothing 😊
Am surprised to hear you worked on Prototype... I'm a car guy, so the non-car games I've played could be counted on one hand, and Prototype was one of them that I loved so much. It was like a modded GTA to the younger me, just a weird, amazing experience really
Prototype was a fun, unique title. I miss working on that team.
Omg I remember the black book! Such a great read even if you didn’t do graphics dev!
Your channel is an absolute gem, so many high quality videos about topics that are really hard to find online. Thanks.
Fantastic education, especially for a lone developer trying to learn more, who is unsatisfied with simplistic answers. These twenty minutes were more valuable to me than many hours of the Unity "tutorials" I have watched. Thanks for being so helpful.
This kind of quality of content is amazing as a graphics programmer to have access to. I'm amazed by your channel
Thanks! What do you work on?
This is definitely my new favorite of your videos. It's endlessly fascinating to me to hear about the crazy things that are done in rendering pipelines. I love the GDC presentations where they dig into the minutiae of their rendering customizations.
As someone who never made it that far past the Painter's Algorithm section in any graphics book, this was great 🙂
Explaining major breakthroughs in game industry for a given problem is so interesting! Thanks a lot Simon and keep up the good work!
You worked on prototype??? Bro that is like my favourite game! Keep it up
Just a small tip for visualization: I have trouble seeing a difference between red and green, which was worse with the green and yellow examples. For those with colorblindness, contrast is always the eye's first priority over color. Better to use colors that are complementary, or better yet, just white and black for visualization.
(And yes, it's hard for us in Graphics Programming) :p
Really love your video! super simple and a great start to understanding graphics and optimization. Subscribed :3
I think someone else brought that up, and it never occurred to me. But I will 100% strive to be better in the future.
Here to second this, yellow is a devious colour that is seldom seen
@@simondev758 thanks for taking it seriously, I appreciate how you handle feedback on your videos
You can look for ready-made "accessible color palettes" to drop in, or keep it simple with contrast and patterns. It really does help, I kept pausing this video just to tell the effects apart, and the problem affects everybody when you have different calibration for every monitor.
Next they'll be hiring deaf people as lifeguards. What's the world coming to. 😂
Occlusion culling was always fascinating. In some games (like FFXIV) the pop-in is really in your face if you move the camera too fast, but even when it's slow to transition it's still just...hard to believe the tech exists.
Cool explanation!
What taught me a lot about optimization was actually the Source engine, when it came to mapping in Hammer. Now I'm using your video to learn more about other types of optimizations that might be possible for Source, or at least in Hammer. Thanks for sharing this type of content and information with anyone that wants it!
I'm also going to code my own game that is heavily inspired by a game that is poorly optimized. So watching these will hopefully ensure I don't make the same mistakes those developers did.
That last "state-of-the-art" demonstration is so cool! I honestly never even realised it was common to do visibility culling outside of precomputed visibility structures. But not only is it done, there's some very interesting algorithms to lean on. I especially love algorithms that don't rely on temporal reprojection, so that last one (use objects visible in the last frame as occluders) is quite fascinating to me.
Really excellent video! I don't know much about graphics/rendering so I found this fascinating!
It's the cool but also exhausting thing about graphics, you basically have to retrain constantly hah!
The amount of tutorials on game development that just ignore optimization is crazy, so it's nice to see that there are at least some people that are willing to talk about optimization
You do a great job of explaining abstract concepts in a clear and concrete way, thank you.
Hey simon, one thing I'd like to see if you could put it up for your patrons to vote for: Rendering in high motion/scene change situations.
Eg: Racing sims.
While yes, in flight sims planes move faster, a majority of the time you're higher up in the sky, so objects tend to "draw" at around the same area of the scene. Racing sims are interesting (especially in cockpit, and especially in VR) because, unlike most games where the scene doesn't change TOO much (either the area is reasonably within the same occlusion area, or the objects are viewed from fairly "similar" angles), VR + racing sims means fast forward movement with often a lot of head jiggle/turning/tilting. Add in suspension modeling, hairpin corners, etc. I've been thinking about all the optimization methods, and I just can't think of any good ones for racing sims that wouldn't ruin the experience.
Particularly when you have something like 4 mirrors in a car (say 3 real and 1 "virtual" in the form of a "rear view camera").
It's honestly kind of crazy to think about when considering the processes that most games use, because you want really high up-close detail (buttons, texture art for dashes, 3D pins for dashes, especially in VR where flat dashes look horrible), and then transparency rendering like windows, fences, buildings, etc.
The reason is I play a lot of iRacing, and we end up with a lot of folks expecting 120fps under really heavy loads in it, which... well, I'd love to be able to explain that to someone. It just sounds like racing sims as a whole are the WORST possible rendering scenario of any game, due to their complexity in so many different areas.
Not to mention that iRacing does a lot of external-track art for broadcasting cameras, which includes off-track objects you'd really only see in scenic games or TV broadcast panning cameras.
Obviously I don't expect any response or coverage on this one, but I figured use cases around specific types of games, and the rendering pipelines for those games, might be an interesting topic, as FPSs, RTSs, and racing sims can vary a lot in how things are even able to be processed. (Like the Horizon Zero Dawn/Death Stranding/Decima engine stuff looks GREAT, but I don't see it working with something like a racing sim.)
Anyways, sorry for the spam, just wanted to send something while I was thinking of it
I am not a graphics engineer, but the content you make is extremely interesting to watch. Thank you for your work, sir
Optimizing CPU performance is something I enjoy doing a lot, very interesting to see how optimizing GPU operations is done. Loved this!
Also makes me grateful for game engines which mostly do this for you already haha, not sure I'd want to do this from scratch unless I really needed to get extra frames
The fun part of being a graphics engineer is that you end up doing a tonne of both CPU and GPU optimization.
This was really awesome! I used to develop for the nintendo DS, so learning to develop with very strict constraints was really part of the job.
This format with in-engine examples really set the video apart, excellent job man!
Thanks! I worked with a guy who was fresh off of DS years ago, very smart guy. That platform sounded like a pain to develop for.
Gran Turismo has used this technique for at least 2 decades. WipEout 2048 also used this technique on PS Vita, but if you do some funky things with the photomode camera, you can see loads of massive black boxes through large chunks of level geometry labeled in bold red font "Blocker."
I've used this depth buffer many times to create stylistic shader effects and other bells-and-whistles stuff. It's very handy, but today I finally understand why translucent materials get F'ed up: they don't write to the depth buffer.
What an elegant way to solve the depth buffer dependency issue. Render the simpler version of the view to extract depth data and then render the high resolution view.
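To make the translucency point above concrete, here's a toy single-scanline sketch of the usual rule: opaque draws test and write depth, while translucent draws only test it, which is why their blending order matters. All names here are illustrative, not from any real engine:

```python
# Toy software "rasterizer" for one scanline, showing why translucent
# surfaces are typically drawn back-to-front: opaque pixels write
# depth, translucent pixels test depth but don't write it.
import math

W = 4
depth = [math.inf] * W   # one scanline of depth values
color = [None] * W       # one scanline of "framebuffer" contents

def draw_opaque(x, z, c):
    if z < depth[x]:              # depth test...
        depth[x] = z              # ...and depth write
        color[x] = c

def draw_translucent(x, z, c):
    if z < depth[x]:              # depth test only, no write,
        color[x] = (color[x], c)  # "blend" over whatever is there
```

If a translucent surface wrote depth, anything drawn behind it afterwards would be rejected instead of showing through, so engines defer translucents to a sorted pass after all opaques.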
So if a tree falls in the woods and no one is around to see it...
actually in this case no tree actually falls
the tree has been culled and (mostly) only consumes CPU time, because the simulation still tracks it in case a view frustum comes along
As a web dev not familiar with game dev stuff, I had a clue about how the rendering could work, but this goes way beyond what I understood in the past. Good explanation, in a very simple way, at least for people with some dev knowledge like me. Can't tell if someone with no dev experience could understand, but this sort of content isn't for the average guy
I was thinking for a long time about how to deal with not just culling, but ray tracing, collisions and gravity simulation too, for a space game.
And yeah, cache optimization is important, but so are tree structures, especially for stuff where everything interacts with everything.
I want to do a hybrid approach, where the tree nodes serve as containers (indexes) for arrays of properties.
I'm super excited for it!!!!
But for now I gotta work on simpler games so I can make a name for myself, and make it as a game developer \o/
God... just the sheer amount of knowledge, and the sublime ability to explain usually not-so-straightforward concepts (when read black-on-white from a uni slide or a blog post written in a VERY dry fashion) *THAT FREAKIN WELL*, just amazes me to the point that you, Sir, have officially become my role model (no simping intended). And I mean... duh, no wonder you were (or are, dunno) a Google software engineer, cuz that is the level I aspire to reach one day.
Thank you A LOT and I hope this world blesses you and your fam for everything!
Super thankful that you make such amazing vids!
Cheers!
Man, I've done Game dev for over a decade and this still sounds amazing :) Love your channel!
I don't know why I watched this whole thing and took notes. I don't work at all in this field, but it's so interesting. Thanks!
Wow. Who knew that Bob Belcher was an expert in graphics programming?
Thanks for illustrating this for the average users.
For a newcomer's deeper dive, a dissection of the insanely efficient tricks used in Doom is usually a good starting point.
I'd love to get into some old school tricks, the era before I started often seems like magic, the hoops they had to jump through.
@@simondev758 Don't remember the channel anymore, but about half a year ago, someone did a great dissection of the Doom code.
Loved the simplicity, like how the height stuff for the level was just done by assigning every element two numbers, one for the height offset from the bottom and one from the top, the reason why they couldn't have overlapping elements without additional tricks in the sequel.
23:19: Of course this isn't a complete picture. That's the entire point of the culling process!
Touché
Loved this, as an enterprise software engineer with no game development experience, I found this highly interesting and really easy to understand. You did an amazing job, thanks and you have another subscriber. 😁
Prototype was one of my favorite games back in the day. Very cool to hear about it here.
A quick subscribe from me! I look forward to you going into transparency shenanigans. It surprises me that to this day it is not unlikely for a player to run across transparency issues. I remember even in the recent beautiful Armored Core 6 I found a situation where smoke consistently interacted badly with something. And in playing around with my own projects, I've gone overboard with "transparency is beautiful" too many times, and keep having to be mindful of performance impact.
Woah! 😮 Crazy you worked on Prototype. I loved that game, and I have always remembered it from time to time. Maybe you could make a video about your work on the game.
Would you be interested in that? Brief summary: I did some of the early r&d for the graphics, did a big presentation at GDC, and during production mostly did backend and optimization work.
@@simondev758 Definitely should do a video about it. It would be a great video to watch.
Subscribed. Some of the best and clearest content about graphics programming that I've ever seen on YouTube
Great video. I'm a 3D artist, not a programmer of any sort, and there might be a simple explanation that I've missed, but how does culling account for things like long shadows or global illumination that leak from off-screen into the visible scene? ...Maybe worth a part 2? :)
Loved the video, I did quite a bit of reprojection shenanigans about ten years ago with the DK1 and DK2 to improve perceived frame times for stuff outside of our foveal vision!
You're a great teacher. There are only a handful of good youtube channels where you can actually digest the content. This video is gold.
Nice video. I had once read a paper on culling strategies. Its just amazing how smart people are in the gaming industry. Some of the most amazing algorithms came from the gaming industry.
You worked on Prototype? That's one of my favorite games!
The video is very informative and relatively easy to understand, even for someone who knows nothing about game development, though there's a minor issue with the captions. At a few points the captions are mismatched, like at 12:10, and it takes a few seconds for them to catch up.
Ah ok I'll double check the captions.
This video and your explanations have increased my level of respect towards game engine developers that implement these sort of things for others to use. Thank you.
[Prototype] is one of my favorite games of all time. It was the first game I ever got a platinum trophy on. Thanks for working on it.
Mr.doob is one of your patrons! Actually I’m not surprised. GG
Also just noticed. Too cool
That is really cool you worked on Prototype. I really enjoyed that game.
Great video, I wish it was made a few years ago :-) In my occlusion culling journey, I originally took the HiZ approach on the GPU which worked out great at first. It became a problem though when I wanted to do more with the visible entities. I tried in several ways to send data back to the CPU but there's just too much latency and not enough types of indirect commands to keep it on the GPU, so I went the CPU route instead. Intel has a great paper for their method of CPU-side HiZ implementation, "Masked Software Occlusion Culling". They also provide an open source implementation which has performed well for my application.
Yeah I wanted to call out to Intel's library at some point, but didn't have a good reason to.
This probably isn't related, but Titan Quest is an old locked-perspective 3D isometric game. There was a mod that unlocked the view and let you rotate the camera around. I thought it was odd that the back of everything was there and had textures, since they were never supposed to be seen.
14:36 "Now if you've never done any PS3 development" I feel very called out right now... I actually haven't developed a game for the PlayStation 3 😥
The depth buffer view was super cool. It looked like a 3D Limbo.
I love the video so far but I have one piece of feedback - the yellow and green can be hard to distinguish, especially for someone colorblind. Maybe blue instead of yellow would be a better choice there ^^
That is a great point, thank you for bringing that to my attention! I'll strive to be better about that in future videos.
@@simondev758 I can't see it either... otherwise great stuff, love you
You worked on Prototype?
Bro, it was my favorite game. It was unique, with amazing graphics and an amazing gameplay feel. Wow.
Thank you so much for sharing your knowledge with us, love it when you bring out new videos!
You just threw in the fact that you were one of the devs of my all time favourite game Prototype like it was nothing 😭
Meshlet culling using mesh shaders looks like an interesting development, and I'm guessing UE5 uses something like that. I wonder what the new Unity culling system uses.
Truly informative. This is a great perspective; I usually hear armchair devs talk about game development. I learned a lot here. Subbed
It's like if Bob's Burgers explained Computer Science
You forgot a few things. The one that still amazes me to this day is rotating billboards for distant objects. Shocking how you just can't tell, even as it turns into a real object right in front of your eyes, seamlessly. AND let's not forget fog. Even used lightly, it takes a major load off the system. Also blur, and a few more tricks of the trade.
PROTOTYPE MENTIONED!!!!!!!!!! 🗣🗣🗣🗣🗣🗣🗣🗣🗣🗣
For me, the most interesting thing is how lines like "MOV AX, CS" became open world games with theoretically unlimited playtime.
40-50 years ago, one or two people worked on a project (code, music and gfx were sometimes the same person), and then, 20-25 years ago, games became so complex that some PC games had longer credits lists than a Hollywood movie.
What games don't show you: what you see is only in front of you; everything behind you is just black darkness.
sounds like in real life
unless you walk in front of a mirror. but oh well, a mirror is just another camera
You sound EXACTLY like MadSeasonShow.
Thank you for your wisdom, from a new graphics programmer currently learning how to make game engines.
I've managed to acquire some great books such as GPU Gems, and can't wait until I get to the point where I can handle DX12 and Vulkan.
I am not surprised someone as talented as yourself was involved in Prototype, it shows. 😊
Interesting and educational. Thanks Simon!
One of the interesting approaches I have seen explained is a low detail wider world (like a crude model), and detailed render around the viewer (limited view distance).
You are literally describing how Prototype did it.
Amazing video!
Hey Simon, amazing video, I really love how you go in depth into the nitty gritty of optimization and the history of it. One such argument I'd love to hear about is collision engines, with broad, middle and narrow phases, aabb collisions, spacial partitioning, the challenges of long range ray and shape casting and so on, I feel like there are so many different interesting things to talk about in collision engines
Glad to hear you enjoyed it!
I'd love to dive into more optimization topics, but I think I'll leave collision engines out of it. I strongly dislike when people pass themselves off as experts in things that they're not, and I hold myself to that same standard. I haven't done anything beyond superficial physics work, so I don't feel especially qualified to talk about the subject.
I'd encourage you to find resources or experts that specialize in that area. Would love to send you in a direction, but I can't think of any offhand unfortunately.
Thanks for sharing this! Very interesting topic!
I didn't know you worked on Prototype. I love that game!
I love your content so much, Simon. All of it, it's amazing
I cant believe I found someone who made Prototype. I love that game.
I never worked with game development, but I love this channel. I love to hear about clever solutions to optimization problems. This video was particularly interesting.
oh shit, I loved Prototype
I remember enabling occlusion culling for both audio & video in Doom 3 back in the 2000s; it helped so much with our low-end hardware. I honestly dunno why they weren't the default settings, cuz I never found any problems with them. Guess the devs played it very safe, or knew of hardware that didn't work with it.
I wonder about culling only the occluded polygons of a large highly detailed object in the scene now.
While it might seem like a huge optimisation step at first glance, backface culling already does most of this work. It's a rendering step that checks whether a polygon is facing the camera before it gets rendered. On most convex objects, that would already account for a large part of the occluded polygons
It's what I'd love to talk about in a future video leeroy! hehe
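As a rough illustration of the backface test described above: after projection to screen space, the sign of a triangle's signed area tells you which way it faces. The counter-clockwise-is-front winding convention here is an assumption; APIs let you configure it:

```python
# Backface test sketch for a screen-space triangle: the z component
# of the cross product of two edges gives twice the signed area.
# Convention assumed here: counter-clockwise winding = front-facing.

def is_front_facing(a, b, c):
    # a, b, c are (x, y) screen-space vertices.
    cross_z = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return cross_z > 0
```

On a closed mesh this rejects roughly half of all triangles before rasterization, which is why the GPU does it for free as a fixed-function step.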
I absolutely love how I've been working on my bachelor thesis for some time now, which is on interactive realtime rendering of huge point cloud datasets, and went through all the exact same resources for occlusion culling.
In my case I was able to use GPU instanced rendering methods, since I was only displaying points/spheres, so frustum culling, LODs for the primitive meshes, and screenspace rendering for the lowest LODs were enough to push below 3 ms (~350 fps) for ~500,000,000 points in an average case.
For that I used a GPU-driven filter and render pipeline, and the only restriction is the accessibility of the raw point cloud dataset as persistent buffers in GPU memory. That means overhead when loading the program, which on the other hand is kind of negligible considering the loading times of such datasets from storage into system memory. BUT memory capacity on the GPU can be a critical aspect! To solve that, datasets can go through preprocessors splitting them up into data channels (position, size, color...), or only load a small subset of timesteps if the point cloud has temporal data.
So depending on the use case, primitive optimizations already get you very far.
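For anyone curious what the frustum culling step mentioned above boils down to per object, here's a minimal sphere-vs-frustum sketch. The plane representation is an assumption; real pipelines typically extract the six planes from the view-projection matrix:

```python
# Sphere-vs-frustum culling sketch: a bounding sphere is potentially
# visible unless it lies entirely behind any one frustum plane.
# Each plane is (nx, ny, nz, d) with the normal pointing INTO the
# frustum; these values are illustrative.

def sphere_in_frustum(center, radius, planes):
    for nx, ny, nz, d in planes:
        # Signed distance from sphere center to the plane.
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:   # fully outside this plane: cull
            return False
    return True              # intersects or inside: keep
```

Note this is conservative: a sphere outside the frustum but not fully behind any single plane is still kept, which is fine, since false positives just cost a little extra rendering.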
And thank you for your great video ^^ Definitely will share it to my uni peers
That's really cool, I've never tried to load that amount of data before, you must be running into all sorts of interesting issues