MARKitekta
Joined 22 Oct 2022
The Markitekta channel provides insight into digital tools and their application in architectural design and practice through tutorials and projects.
What Do We Need to Render BILLIONS of Polygons in Real-Time - The ULTIMATE Guide to Nanite
Uncover the secrets of real-time rendering with this ultimate guide to Nanite! From the basics of clustering and LODs to advanced techniques like small triangle culling and hierarchical Z-buffers, we break down the complex world of virtualized geometry. Whether you’re an artist, developer, or just fascinated by Unreal Engine’s rendering technology, this video is packed with insights to level up your knowledge.
Timestamps
0:00 Intro
1:14 Visual Fidelity
2:11 Problems
4:41 Current Techniques
8:23 What Do We Need
10:42 Clustering
11:10 Bounding Volumes
13:17 Directed Acyclic Graph
14:02 Automating LOD in Nanite
17:42 One Draw Call
19:01 Frustum Culling
19:42 Backface Culling
20:29 Occlusion Culling
21:43 Hierarchical Z Buffer
23:35 Small Triangle and Detail Culling
24:33 Software Rasterizer
25:22 When (Not) to Use Nanite
26:22 Outro
Learn how Nanite helps:
- Render billions of polygons in real-time.
- Optimize performance with dynamic LOD systems.
- Use innovative culling techniques to reduce unnecessary computation.
But is Nanite always the right tool? Discover when to use it and when to avoid it.
Don’t forget to like, comment, and subscribe if you find value in this video. Your feedback shapes the future of this content and brings clarity to complex systems like Nanite.
References & Further Reading
SimonDev's Video Tutorials
How Games Have Worked for 30 Years to Do Less Work: ua-cam.com/video/CHYxjpYep_M/v-deo.htmlsi=_H70toBLvwUhviyC
When Optimisations Work, But for the Wrong Reasons: ua-cam.com/video/hf27qsQPRLQ/v-deo.htmlsi=lrFLs9mHvegb5bGc
Epic Games Documentation and Videos
Unreal Engine 5 Nanite Documentation: dev.epicgames.com/documentation/en-us/unreal-engine/nanite-virtualized-geometry-in-unreal-engine
Unreal Engine Videos
Nanite for Artists | GDC 2024 - ua-cam.com/video/eoxYceDfKEM/v-deo.htmlsi=9fHorEnEPl2kPqhD
Nanite | Inside Unreal - ua-cam.com/video/TMorJX3Nj6U/v-deo.htmlsi=EmyoMn8k1xCSWMYx
Unreal Engine 5 Revealed! | Next-Gen Real-Time Demo Running on PlayStation 5 - ua-cam.com/video/qC5KtatMcUw/v-deo.htmlsi=nkz1JLlpmKe-Xr0G
William Faucher
Nanite: Everything You Should Know [Unreal Engine 5] - ua-cam.com/video/P65cADzsP8Q/v-deo.htmlsi=VnOk3M5AfvytY8KJ
SIGGRAPH Advances in Real-time Rendering
A Deep Dive into Nanite Virtualized Geometry - ua-cam.com/video/eviSykqSUUw/v-deo.htmlsi=AMm4AbpQXxvmFs0N
with the PDF available here:
advances.realtimerendering.com/s2021/Karis_Nanite_SIGGRAPH_Advances_2021_final.pdf
Two-Pass Occlusion Culling: medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501
Humus
Triangulation - www.humus.name/index.php?page=Comments&ID=228
Books
Real-Time Rendering, Fourth Edition by Tomas Akenine-Möller, Eric Haines, and Naty Hoffman
Chapter 19 on Acceleration Algorithms
Ludwig-Maximilians-Universität München
Geometry Processing - The Nanite System in Unreal Engine 5 - www.medien.ifi.lmu.de/lehre/ws2122/gp/slides/gp-ws2122-extra-nanite.pdf
Views: 18,072
Videos
How To Use Less Memory for Better Looking 8K Materials
Views: 587 • 21 days ago
Ever wondered how real-time rendering juggles memory, textures, and optimization to create stunning visuals? 🎮 In this lecture, we unravel the mysteries of rendering, starting from the basics of memory hierarchy and virtual memory to the intricacies of textures, mipmaps, and virtual textures. Explore how physical memory units like SSDs, DRAM, VRAM, and even virtual memory work together to deliv...
Easy Way To Diagnose and Optimize Rendering Issues
Views: 663 • 1 month ago
Ever wondered what happens under the hood of real-time rendering? 🚗 In this lecture, we take a deep dive into the graphics rendering pipeline, breaking down its core stages-application, geometry processing, rasterization, and pixel processing-while exploring bottlenecks, optimizations, and how profiling can enhance performance. Discover how each stage contributes to generating immersive visuals...
COMBINE Urban Planning With Game Level Design
Views: 697 • 1 month ago
In this video, we dive into the intricate connection between urban planning principles and game level design, exploring how key concepts from renowned urban theorists like Kevin Lynch and Gordon Cullen can be applied to build immersive, engaging game worlds. Join us as we cover essential elements of creating memorable levels, from understanding affordances to optimizing environment and asset de...
What KIND of an architect do you WANT to be???
Views: 1.1K • 9 months ago
What KIND of an architect do you WANT to be???
6.1 - Is THIS the Closest Thing To a Real Life HOLODECK
Views: 630 • 1 year ago
6.1 - Is THIS the Closest Thing To a Real Life HOLODECK
5.1 - The ULTIMATE Guide to Understanding AUGMENTED Reality
Views: 206 • 1 year ago
5.1 - The ULTIMATE Guide to Understanding AUGMENTED Reality
4.2 - Proper Scene Illumination CHECKLIST
Views: 239 • 1 year ago
4.2 - Proper Scene Illumination CHECKLIST
3.2 - How Wreck-It Ralph FIXED Rendering
Views: 367 • 1 year ago
3.2 - How Wreck-It Ralph FIXED Rendering
3.1 - THIS Behavior is Unexpected for Opaque Materials
Views: 218 • 1 year ago
3.1 - THIS Behavior is Unexpected for Opaque Materials
2.2 - WHERE to Start with Real-time Rendering
Views: 285 • 1 year ago
2.2 - WHERE to Start with Real-time Rendering
1.2 - These FOUR Things Make the Perceptual Process
Views: 274 • 1 year ago
1.2 - These FOUR Things Make the Perceptual Process
Elbphilharmonie MEETS Immersive Experiences
Views: 293 • 1 year ago
Elbphilharmonie MEETS Immersive Experiences
3.2 - The Best Way to Create CONTEXT in Scenes
Views: 132 • 1 year ago
3.2 - The Best Way to Create CONTEXT in Scenes
3.1 - EVERYTHING You Need to Know About Materials
Views: 156 • 1 year ago
3.1 - EVERYTHING You Need to Know About Materials
2.3 - Animating Geometry With ONE Click
Views: 106 • 1 year ago
2.3 - Animating Geometry With ONE Click
2.2 - How to Make SOLIDS From Polygons
Views: 115 • 1 year ago
2.2 - How to Make SOLIDS From Polygons
2.1 - Applying 2D Profiles for 3D Geometry
Views: 199 • 1 year ago
2.1 - Applying 2D Profiles for 3D Geometry
1.3 - Edge Types and Extruding Geometry
Views: 146 • 1 year ago
1.3 - Edge Types and Extruding Geometry
1.2 - Creating Polygons With ONE Click Using THIS Tool
Views: 179 • 1 year ago
1.2 - Creating Polygons With ONE Click Using THIS Tool
1.1 - Navigation and Styles in SketchUp
Views: 269 • 1 year ago
1.1 - Navigation and Styles in SketchUp
Complex Forms - Arcam - How to AUTOMATE Modeling
Views: 113 • 1 year ago
Complex Forms - Arcam - How to AUTOMATE Modeling
00:50 ~ T H E D A V I D ~
Would you be able to credit SimonDev for your use of their graphics in the culling section? It appears you took some visualizations from this video of theirs: ua-cam.com/video/CHYxjpYep_M/v-deo.html
Thank you for pointing it out. This was brought to my attention in another comment as well. I agree that giving credit is important, which is why I gave a reference under each of the visualizations and animations that I used in this overview. If you think I should do this in a better way, since I appreciate SimonDev's work and what I have learned from his videos, I'm open to suggestions.
@@markitekta2766 I would put a link to their video in the description too, or as a pinned comment
Are you planning on crediting SimonDev for literally all of the animations?
Thank you for pointing this out. I agree that giving credit is an important part of creating an overview presentation, which is why I gave credit in the video under each of the animations and visualizations as best I could. However, if you believe I should do it in another way, I am open to suggestions. As for the remark about all the animations, I think I used his great examples in the portion of the video regarding culling and optimization, from which I learned a lot, which runs from around the 19-minute mark to about the 23-minute mark. This is not written as an excuse, just a way to get the facts straight. 😃
@@markitekta2766 I suppose the part I take issue with is that basically every graphic you show is just from someone else, and everything you say seems to be paraphrased from the source material. That isn't necessarily a bad thing when done in moderation, but it seems like you do this to excess. Even ignoring that, I feel like you could at least use the papers themselves as sources. You can also put citations in the description or compile them in a Google Doc and link to that in the description. That way people don't have to search up the video titles or go digging through the Blender Stack Exchange to find the original discussion that the GIFs were made for. Or if you made your own, you wouldn't have to deal with a mishmash of vastly different UI versions and bad resolutions. For example, at 5:01 you have a citation for the high-poly/low-poly GIF, but the article is not trivial to find. At the very least you should put the website it's from (Treehouse), and if you actually link to all your sources you eschew the whole issue entirely.
Moses was not that swole lol was mana actually whey
😂 I guess that's why art is subjective, his own interpretation was like this
The key is how to instance objects.
I think so too; the more instances you have, the fewer items in the tree, but the connections can become more complex?
good stuff!
Thnx, appreciate it 😉
Just buy a NASA supercomputer to run it. Such a disconnect with devs owning $10k workstations, expecting everyone who wants to run this to have the same PC specs... Good luck selling this to the very small market of high-end PC owners
You might be on to something there... People do need a capable enough graphics card to process all of this, RTX can handle the job but it can get expensive. But maybe we don't need a supercomputer, if we have the RTX? 😆
@markitekta2766 I guess if you own at least a 3080 and up then yes; a 3060 or 4060, on the other hand, not so much
@@roklaca3138 I like that you provided first-hand experience, that helps out a lot. Yeah, expensive equipment can be limiting, but even the "supercomputers" of half a century ago perform worse than a smartphone, if I am not mistaken. The point is, in time, perhaps all of this will become affordable ;-)
Enjoyed the vid, watched the whole thing even though I don't render complex stuff
I'm glad you found it interesting, I also find myself listening and learning about things I don't use generally in life. But everything connects in the end, I guess 😉
0:35 Wener
0:35 balls
Thank you for the very nice video. One minor point to note. You mentioned software rasterizers being used in Nanite for culling microtriangles running on the CPU. Yes, although these software rasterizers could run on the CPU, in practice they are almost always run as regular compute shaders on the GPU bypassing the fixed function raster pipeline.
Thank you for pointing it out. Yeah, I noticed I got my facts wrong, so I pinned a comment with the correction. I appreciate you bringing this to the attention of the community here 😃
Yes, now I see, it is magic!
As Arthur C. Clarke said - Magic's just science that we don't understand yet. And we have more science to implement if we want to understand this magic 😂
Excellent explanation of such a complex topic. Kudos to you, got a new subscriber!
Very much appreciated. I tried to find simple examples but, most of all, to put them into a coherent narrative that people can track and engage with. Thnx 😉
You played the video recording of yourself at the wrong place 😩 What's the point of presentations when overlapping important information on it?
You noticed that right, and it frustrated the hell out of me too, since the presentation and the recording of me were captured together, so I couldn't edit it out later. You can see that there is a place at the bottom where I meant to put my camera feed, but for some reason the program moved it up there 😒 Sorry for the mistake
@markitekta2766 yes I noticed the big gray rectangle in the bottom right 😩 thought there's a perfect spot for the camera feed, why not put it there? 😂
When I started the recording, I placed my camera feed there and the screen was recorded like that. However, when I finished recording, I noticed that the bottom right was blocked like that and the camera feed was glued to the upper right corner, so I had a double fail :/ I guess I'll have better luck next time, but I agree, it is annoying. In the previous video, I placed the camera feed in the bottom right and it overlapped with text from the image, so I had to manually place text over it. But you don't know what you don't know, and hence you learn by doing, I guess 😂
@markitekta2766 I see. Well I don't know which software you use, but with obs for example you get separate files for the recordings 😊
@@nowonmetube Great suggestion, I'll be sure to check it out and improve upon this, much appreciated for drawing my attention to this 😃😃
Amazing explanation, I can understand it, thank you!
I'm glad you found value in it. In preparing the lecture I sometimes spent many hours on a single slide, trying to understand all the nuances behind it and verify it from several sources. But hearing that other people can follow my train of thought really makes it worthwhile 😃 Thank you
Very interesting. I use Nanite for foliage as well. It removes pop-in, and with very large numbers of trees it performs better than non-Nanite for some reason. The only problem is that when trees become about a pixel in size, it looks like watching an old TV with no signal, just a bunch of noise. Somehow distant adjacent objects need to be combined for smooth distance rendering.
Thank you for sharing the info, this is really useful. I saw a feature, Preserve Area, that should help with that. Also, Nanite has evolved to allow tessellation now, even though that was not possible back in 2021. So, yeah, I guess they will handle these things as well. Can you share some snippets of the performance when using Nanite on trees?
Imagine what Michelangelo would do with zbrush.
Someone asked the same thing and I said he would probably ask people to do photogrammetry and digitize what he knows how to do best 😀
Thanks for the work you put in this !
You are welcome, I'm glad you found it useful :D
Or just optimize the game so that users can play on a potato?
Now there's a challenge 😂 maybe serially connected potatoes
I originally watched this video to see how our reality could potentially be polygons.. but ended up learning about things I'll never use 😅
Yeah, sometimes the things that we think would be useless, seem to help us gain an interesting understanding of our surroundings later on ;-)
Thanks to all the amazing viewers who have been engaging with this video! One question that popped up is whether Nanite’s software rasterizer runs on the CPU or GPU. The answer: Nanite’s software rasterizer is GPU-based. 🎮 It uses compute shaders to handle small triangles (clusters with edges smaller than 32 pixels) and dynamically bypasses the traditional hardware rasterization pipeline. This GPU-based approach ensures performance scalability and keeps the rendering pipeline efficient, even with highly detailed scenes. This design minimizes CPU-GPU overhead and leverages the massive parallelism of modern GPUs, which is a core aspect of Unreal Engine 5’s real-time rendering magic. Thanks again for the support, and keep the feedback coming! If you’ve got more questions, feel free to ask-I’ll dig up the answers. 😊
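The routing described in the pinned comment above can be sketched in a few lines. This is an illustrative toy, not Unreal Engine code: the function name and the per-cluster screen-edge input are hypothetical, and only the 32-pixel threshold comes from the comment itself.

```python
# Toy sketch of Nanite-style rasterizer selection per cluster.
# Assumption: we already know each cluster's projected edge length in pixels.
SMALL_TRIANGLE_THRESHOLD_PX = 32  # clusters below this go to the software path

def select_rasterizer(cluster_screen_edge_px: float) -> str:
    """Route a cluster to the compute-shader (software) rasterizer when its
    projected size falls below the threshold; otherwise use the fixed-function
    hardware rasterizer. Both paths run on the GPU."""
    if cluster_screen_edge_px < SMALL_TRIANGLE_THRESHOLD_PX:
        return "software"  # compute shader, bypasses the raster pipeline
    return "hardware"      # fixed-function raster pipeline

print([select_rasterizer(px) for px in (4.0, 31.9, 32.0, 240.0)])
# ['software', 'software', 'hardware', 'hardware']
```

In the real engine this decision happens per cluster on the GPU every frame, which is why tiny distant geometry never has to touch the hardware raster units.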
Did you ever breath while doing this video?? i haven't seen you pause once holy😂😂😂
😂 I sighed a lot, actually, because there is so much to say at appropriate times, but I edited it out, thnx for bringing my attention to it :D
Nice overview, man! From time to time I rewatch that SIGGRAPH talk trying to understand it a bit better... It's a really complex talk that requires a lot of background knowledge to grasp... Just one observation: the software rasterizer happens on the GPU, not the CPU. It differs from the hardware rasterizer by utilizing compute shaders instead of the dedicated raster part of the graphics pipeline.
Thank you for providing feedback. Yeah, I came back to the lecture every once in a while, hoping that 1x speed and really committed listening would help me understand. But I was lacking the terminology and the experience I gained later on. Even so, I can still watch the lecture and not get everything; so many nuances. As for the software rasterizer running on the GPU instead of the CPU, it seems I got my facts wrong, or I assumed that since one path was hardware, the software path must run on the CPU, based on the prior notion that software is tied to the CPU. Thank you very much for pointing it out, much appreciated.
Didn't understand a damn thing, but very interesting 👍
I cant believe you have 700 followers. This needs to change 🙏🏼
Thank you very much, I appreciate the support ;-)
@markitekta2766 no problem. I am learning a lot from your videos
Great video, thank you! A lot of science involved behind a game engine.
Sure is, kinda like not knowing what happens under the hood of a car, yet it still goes if you know how to use it :D
Good content. Perhaps just one wish: show more of the optimization, before/after.
If I understood correctly, you are looking for specific numbers before and after using it. That is a good suggestion, I'll try to post something soon enough ;-)
@@markitekta2766 Well, maybe you could try it on a small project. I'm making my own game, and it's a tough choice how to proceed: use Nanite or LODs. If the geometry is simple and flat, there seems to be no point converting it to Nanite. But foliage, as practice has shown, is worth doing with Nanite; it gives a good performance boost. In general, what I want to say is that it's unclear how to do it right. There is so much to take into account that, without examples, sometimes you just go by intuition :)
I'm not deep into Unreal, but it's cool to see comments about Threat Interactive. Let's all work together to find the best solutions
I agree, Threat Interactive seems to have stirred the pot with this Nanite application issue, and bringing everyone together to discuss it is definitely a win, regardless of the outcome
Ai
Do you mean AI used for upscaling or for something else?
@markitekta2766 nope AI for the whole thing through and through.
If you mean for the entire video I made, I can tell you more about it, but I guess you already have an opinion that requires no rebuttal 😉
Your explanation is fantastic and I am eagerly awaiting your future explain videos
Thank you for the support. The next logical one would be about lighting, but it will definitely take some time to explore and prepare. Until then, maybe you can check out the video about virtual texturing, which was basically the concept behind virtual geometry like Nanite, or the one about optimizing the pipeline?
I chose to go with voxels + vertex colors + auto LODs for GODOT; I use a quad per pixel with no UVs and no textures. From what I've seen in benchmarks, shaders and textures are actually much more expensive than poly counts. So I will see in practice how well that works.
Looking forward to seeing it in action. I've never used voxelized assets, apart from parametric modeling for 3D printing, and I haven't researched the use of materials when it comes to voxelized geometry. Is it really that much of an issue?
Amazing presentation! It really lays out the information on standard techniques along with the details of Nanite, very lovely. I don't use Unreal, but I am curious about the technology, so I have a question. Does Nanite allow tweaking the number of polygons per cluster, or how "deep" the cluster tree goes? Or is the technology static in the editor? Just curious how it would impact performance. Glad you share use cases; again, great work.
Thank you very much, I'm glad you found it insightful, even as someone who, as you said, is not a UE user. I think you can set the error thresholds, and the clusters will form based on that, but I believe they always strive for 128 triangles, following the logic of the 128-pixel pages of virtual textures; in this case Nanite wants triangles to be about the size of pixels, I guess
The only good thing about Nanite is that it saves time. But it shouldn't have to be like that. Also, for stuff like terrain you could just develop a retopology tool that remeshes it once and creates a full quad mesh that is later easy to adjust, either automatically or by hand.
Hard to tell. I am a 3d artist not a game dev. What you try to do automatically, I just do by hand.
@@wydua Thanks for sharing your insights. As they say - good things take time, so if manual labor and adjustment with tedious tweaking is what gets great performance down the line then it is worth it, right 🙂
@@markitekta2766 Yeah. It's honestly really bugging me that these days games are just made as fast as possible because it's cheaper. It seems they forgot that you can't rush art.
@@wydua Yeah, we live in times where everything is needed as soon as possible, but when you take your time, you can produce something wonderful :D
@@markitekta2766 :D
What's going on in blender at ua-cam.com/video/RRKCqmctxLs/v-deo.html ? I've spent weeks trying to create a similar dynamic LOD system in blender and would love to see how somebody else has implemented it.
I just found the video to prove a point for what I was saying, but I'd like to see if anyone can offer more information about this
You could use merge by distance to generate the LODs and a switch node to change LODs at certain distances, checking against your camera (Self Object) in geo nodes. Someone could probably do it with less manual work 😅
Yeah, I saw that CLOD (continuous level of detail) uses edge collapse to generate fewer vertices as you move away, but it seems to cause issues as well, i.e. it does not solve all the problems
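The distance-switched LOD idea discussed in this thread can be sketched generically. This is a minimal illustration, not Blender or Unreal code; the switch distances are made up for the example.

```python
# Minimal discrete-LOD picker: choose a mesh LOD index from camera distance.
# Thresholds are illustrative; a real system would derive them from
# screen-space error, not hard-coded distances.
import bisect

LOD_SWITCH_DISTANCES = [10.0, 30.0, 80.0]  # scene units; sorted ascending

def pick_lod(camera_distance: float) -> int:
    """Return LOD index 0 (full detail) up to len(LOD_SWITCH_DISTANCES)
    (coarsest). At or beyond a switch point, the coarser LOD is used."""
    return bisect.bisect_right(LOD_SWITCH_DISTANCES, camera_distance)

print([pick_lod(d) for d in (5.0, 10.0, 50.0, 200.0)])
# [0, 1, 2, 3]
```

The hard part, as the reply notes, is not the switch itself but generating the coarser meshes (merge by distance, edge collapse) without visible popping at the transitions, which is exactly what Nanite's cluster hierarchy automates.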
With proper projection and baking you can get all those minute details into a TANGENT SPACE normal map. You don't need Nanite and drawing subpixel triangles. Doing it that way takes more time, drawing every triangle by itself lmao. """"optimization""""
I think there was a talk, perhaps about Lumen, when they said that since one pixel can only display one triangle, the exploration of subpixel triangles is not the way to go. But I agree that if you take the time to optimize your scene with manual work and not automated approaches, you can gain more than letting the system handle it for you ;-)
@@markitekta2766 Also because PCs today are a lot faster and you can get away with overdraw most of the time. People should work mid-poly nowadays, with global GI and PBR, and you would see 100% resolution rendering with some multisampled anti-aliasing in a crystal clear picture on your screen. Instead, let's put boxes with billions of triangles just because... let's lower the resolution because somehow my game runs at 2 fps, upscale the frame with AI, and "fix" all the artifacts with some vaseline, I mean TAA (which is also a cancer on performance... > 1 ms is still a lot; you know how many things you can do with that amount of time?)
(rhetorical question, you seem to understand this pretty well brother)
@@AvalancheGameArt Valid points, thank you for putting it out there ;-)
I read the title as "why"
:DD I guess that would have gotten even more comments here. Some would say why not, others would go for "because of realistic depictions", while some would say we don't, we can use less performance-heavy approaches. Diversity in opinions based on reason and fact is always welcome and helps development prosper :D
Got it. Will sculpt each tree leaf with geo instead of using alpha cards. Thanks! 𓁹‿𓁹
Yeah, the scene really lights up due to overdraw when you use Nanite on aggregate geometry such as leaves. Impostors are also an interesting approach, as seen in this video at 2:10 ua-cam.com/video/hf27qsQPRLQ/v-deo.html Also, they talk about leaves in this video at 13:04 ua-cam.com/video/eoxYceDfKEM/v-deo.html maybe you can find some suggestions there?
@@markitekta2766 Oh! I was doing a bit of trolling! Definitely would just go the regular industry way to go about it hahah Thanks for the extra videos, super interesting stuff! Keep up your awesome work :D
@@Cless_Aurion Oh, sorry, it went over my head. But yeah, as people have said here, Nanite is just a tool, so if something ain't broke, don't fix it, especially with this tool :D
i guarantee that this will be used in classes. if you're one of those students, hey! you better be reading this _after_ the video is over.
I made it to be seen and used for learning in classes, so the editing is not all it could be, with the bullet points taking up a huge portion of the screen. I read the comment, and hopefully other people will as well :D
real good explanation.
Thank you, that means a lot, I was looking for a way to make something that was complex to me, simple and relatable, glad it came through 🙂
Except both Nanite and Lumen look horrid in production, with numerous artifacts, jittering, and smeared images in motion. I'm not saying it's not great tech and great progress, but without their myriad of issues fixed they are pretty much pre-alpha and should not be used in production. Source: every single production use case of Lumen and Nanite out there.
Thank you for sharing, I was not aware of that. When I saw the Nanite video 3-4 years ago, I had no idea how it worked or why. I think I have a better understanding now, but not about its use in practice. If anyone has any examples, it would bring a new note to the discussion section, perhaps?
Sadly, Nanite isn't faster than using basic LODs, but it was promoted in many of Epic's videos as if it were. It produces overdraw that wastes GPU computational resources. @markitekta2766
Some comments on this video are really making me mad; people really don't know what they are talking about, they don't know how the tech works or where it is useful. Guys, if you're a game developer who cares about making the most performant real-time game ever (as needed for esports titles), you of course wouldn't have billions of polygons in your scene, and hence wouldn't benefit from Nanite. Nanite really isn't made for that; it is made for higher detail while still keeping the game real-time (around 16 ms per frame). Nanite really is a big thing in graphics programming; don't discredit it because some devs don't use it properly. (In fact, I would say UE5 is really not well optimized in the first place; they compile 1000+ pipelines for no reason. If you really, really care about performance, you would be writing your own custom renderer.)
Thanks for sharing, I really appreciate your opinion. As a friendly suggestion, perhaps you shouldn't take other people's opinions as something that must be corrected or heavily debated. They have a different perspective on the topic, which helps all of us broaden our horizons. We can only show which paths we think are correct and, amongst many, choose the ones to take. Hopefully they all reach the same place in the end. Having such a great comment section really brings a smile to my face, as I see so many perspectives I never thought of, like the one you are making ;-)
I don't understand why people obsess over David when the Egyptians were sculpting figures 5 times bigger thousands of years before. There were statues called Colossi, some were made of bronze. It seems to be some strange bias towards modern European culture. Do you really think ancient people were incapable of sculpting or observing veins and muscles? It's so silly.
Thank you for pointing it out and sharing your opinion :D I just needed an example for the intro, and I believe any example would have sufficed. I found this one more relatable to me and the audience I thought I had, but if you have some examples of ancient sculptures, please share them, I'd like to learn more about it :D
They're big, yea. But that's about all we can see. They're so damaged from time that any detail they could've had is gone. Ancient greek statues have stayed in much better condition and so are more appealing for most people
I still don't quite understand the occlusion culling bit. Specifically: the Z buffer of the previous frame is used to do a first culling pass, makes sense... but then you end up rendering the Z buffer of the current frame to do a second pass of culling. Aren't you just rendering the scene at that point? To generate the Z buffer of the current frame, wouldn't you basically have to push every single triangle to the screen, only to then cull a bunch of them and AGAIN render all those triangles?
@@liquos I can actually explain that. So on the GPU you aren't required to use a full pixel shader and the GPU has a fast path for depth assuming you don't do anything funky with the pixels. (Like discarding them or manually setting depth). It's usually referred to as Early Z. Because of this, much less state gets changed, if any at all. The actual drawing to a framebuffer is extremely fast.
It's shading, blending, post-processing, texturing, and GPU state changes that eat up performance. A depth pre-pass does none of those. It only applies to opaque geometry, or alternatively to alpha-tested geometry such as grass billboards. (But that often permanently disables the Early-Z optimization until the buffers get swapped, so you would handle it in a separate pass afterwards, as it's less efficient.)
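To make the depth pre-pass idea concrete, here is a tiny software sketch (plain Python, not real GPU or UE5 code; the scene, sizes, and names are made up). Pass 1 writes only depth; pass 2 runs the "shader" only where a fragment's depth matches the pre-pass result, so each pixel is shaded exactly once:

```python
# Toy software model of a depth pre-pass (illustrative only).
# Triangles are simplified to axis-aligned rectangles with constant depth
# so the two-pass idea stays visible without a full rasterizer.

W, H = 8, 4
INF = float("inf")

# Hypothetical scene: (x0, x1, y0, y1, depth, color) rectangles.
scene = [
    (0, 8, 0, 4, 5.0, "far"),    # large far quad
    (2, 6, 1, 3, 2.0, "near"),   # smaller quad in front of it
]

# Pass 1: depth only. No shading, blending, or texturing happens here,
# which is why this pass is cheap compared with a full render.
depth = [[INF] * W for _ in range(H)]
for x0, x1, y0, y1, z, _ in scene:
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z < depth[y][x]:
                depth[y][x] = z

# Pass 2: shade only fragments whose depth matches the pre-pass,
# mimicking an Early-Z equality test before the pixel shader runs.
color = [[None] * W for _ in range(H)]
shaded = 0
for x0, x1, y0, y1, z, c in scene:
    for y in range(y0, y1):
        for x in range(x0, x1):
            if z == depth[y][x]:
                color[y][x] = c
                shaded += 1

print(shaded)  # 32: every visible pixel shaded exactly once, no overdraw
```

Without the pre-pass, the far quad would shade all 32 of its pixels and the near quad another 8 on top of them; with it, the overlapped region is shaded once.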
If I understood correctly, you need a certain amount of time to create the Z buffer and a certain amount of cache memory for it. If you wait for the GPU to create the Z buffer you create overhead, meaning there is idle time between the CPU and the GPU. To solve this, we take the previous frame's Z buffer and use it to test all the bounding volumes of the meshes that were determined to be occluders. Since an occlusion query over everything can take a lot of time, this cuts down on that portion. When you take the previous frame's Z buffer at a lower resolution and compare the current bounding volumes against it, you get a rough estimate of the current frame's Z buffer, and then you only need to check the meshes that are new or became visible in the current frame, testing their bounding boxes for occlusion. There are short texts here, medium.com/@mil_kru/two-pass-occlusion-culling-4100edcad501 and here www.nickdarnell.com/hierarchical-z-buffer-occlusion-culling/ I think I got a good understanding of it, but I'd be glad if someone could clarify or correct me if I am wrong. :D Sorry for the longer post
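The hierarchical Z-buffer test described above can be sketched in a few lines of Python (a toy model with hypothetical names and a hand-written 4x4 depth buffer, not engine code). The key detail is downsampling with max, so the coarse buffer stores the farthest depth per tile and the test stays conservative:

```python
# Rough sketch of a hierarchical Z-buffer (HiZ) occlusion test.

def build_hiz(depth, w, h):
    """Downsample the depth buffer 2x2, keeping the FARTHEST depth.
    Using max keeps the test conservative: we only cull an object when
    even the farthest stored depth in its footprint is closer than it."""
    coarse = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            row.append(max(depth[y][x], depth[y][x + 1],
                           depth[y + 1][x], depth[y + 1][x + 1]))
        coarse.append(row)
    return coarse

def is_occluded(hiz, box):
    """box = (x0, x1, y0, y1, z_near) in coarse-pixel coordinates.
    Occluded if the box's nearest point is behind every stored depth."""
    x0, x1, y0, y1, z_near = box
    farthest = max(hiz[y][x] for y in range(y0, y1) for x in range(x0, x1))
    return z_near > farthest

# Previous frame's 4x4 depth buffer: a near wall (depth 1.0) on the left,
# open space (depth 9.0) on the right.
depth = [
    [1.0, 1.0, 9.0, 9.0],
    [1.0, 1.0, 9.0, 9.0],
    [1.0, 1.0, 9.0, 9.0],
    [1.0, 1.0, 9.0, 9.0],
]
hiz = build_hiz(depth, 4, 4)  # 2x2 coarse buffer

print(is_occluded(hiz, (0, 1, 0, 2, 5.0)))  # behind the wall -> True
print(is_occluded(hiz, (1, 2, 0, 2, 5.0)))  # in open space  -> False
```

A real implementation builds a full mip chain and picks the mip level where the box covers only a few texels, so each test reads a handful of values regardless of the object's screen size.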
@@donovan6320 thank you very much for the explanation guys, very insightful
I wonder if four-dimensional beings consider our 3D geometry to be flat textures
That is a great observation. There is a book called Flatland: A Romance of Many Dimensions by the English schoolmaster Edwin Abbott Abbott, which explores this very notion of lower-dimensional creatures being visited by a higher-dimensional creature. It is understandable and relatable, because it is a square getting to know a sphere. You can read it or watch a short video on UA-cam regarding this topic, kinda puts everything into a different scope of thinking.
Such a great explanation.
Thank you, I really tried to explain it to myself, and I'm very picky about all the nuances... At least up to a certain point :D
We need full raytracing. GPUs with hundreds of thousands of RT cores
Number of RT cores is hardly the problem.
That is an interesting observation. Currently we have, if I'm not mistaken, over 10 000 cores in a GPU chip, each capable of running 3 billion operations every second. But still, if the pipeline is not optimized, even a simple scene can cause latency or display issues. I always return to Jeff Goldblum's quote from Jurassic Park: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
@@markitekta2766 I mean, for a large scene (10+ billion triangles), raytracing with 100 million rays should be much faster than rasterizing the entire scene.
@@DefleMask I can see the logic in this, but I guess in the end we still have to rasterize it for display purposes, so perhaps they are trying to kill two birds with one stone, even though it can be slow at times?
1999? That's Nvidia's marketing lie, GPUs already existed well before then, in 1996
Thanks for sharing, I did not know about that. I found this post, which says the first so-called GPU appeared in 1999, though such hardware existed as far back as 1994, but did it really do what GPUs do today? www.reddit.com/r/pcgaming/comments/esqdkw/really_in_1999_nvidia_invents_the_gpu/ Whatever the case may be, I'm sure we can agree that the GPU has been around for some 25-30 years, which may be the main takeaway from this.
@markitekta2766 Yeah, indeed. It's just that this isn't the first time I've seen UA-camrs repeat what Nvidia propagates; it's not accurate and only serves to paint them as "the GPU company" when they aren't. Before Nvidia, in the 90s there were the 3DFX-branded cards; Nvidia bought 3DFX when their GPUs surpassed 3DFX's, and that is what started their fortune. As far as I know, Nvidia's true innovation was programmable shaders in the form of the Cg shaders, which became the basis for the DirectX shader language HLSL, hence why they look so similar.
@@Biel7318 Hey, no worries, sir, thank you for sharing. I have no horse in this race, so whatever piece of information you have, it is nice to alert people and learn about it. For example, I had heard about the existence of SPUs but didn't know they were a thing, so you learn something every day. Here is a link if anyone is interested: ua-cam.com/video/CHYxjpYep_M/v-deo.html
4TB hard drives are usual? what?
Do you mean "what" as in "I did not know this", or "what" as in "what are you talking about, this is false"? :D I have a 4TB external hard drive for storing things; since they are the cheapest, albeit the slowest at transferring data, they can pack a real punch when it comes to capacity :D
Interesting. I'm just not sure why all this occlusion culling still can't properly avoid overdraw.
@@cube2fox because occlusion culling is hard and it's not perfect. The hidden surface determination problem is a bit of a rough problem to solve.
If my understanding is correct, when you get significantly far from the overlapping geometry, the distance between the triangles or clusters becomes so small that you can get artifacts like the ones that occur when two triangles overlap in any general modeling setting. Nonetheless, I'd like to hear other opinions regarding this question :D
@@markitekta2766 this seems like the best way of explaining it; you cram enough triangles in the scene you can either select relaxed culling and have significant overdraw or strict culling and have artefacting as triangles are removed to save time but there are so *many* triangles you eventually delete important ones through the culling Nanite makes sense on white paper to sell GPUs but ultimately is just a nerf overall.
@@ThePlayerOfGames Got it, thank you for making it clearer :D
Awesome work man, thanks for sharing the knowledge.
Thank you for watching, I'm glad you found value in it :D