Epic's Unreal Optimization Disaster | Why Nanite Tanks Performance!

  • Published Oct 27, 2024

COMMENTS • 1.6K

  • @ThreatInteractive
    @ThreatInteractive 2 months ago +449

    PLEASE READ FOR UPDATES & RESPONSES:
    Thank you so much for your support!
    1. As always, watch our videos in 4K (set it in your streaming quality settings) to see our comparison details through YouTube's compression.
    2. Please remember to *subscribe* so we can *socially compete* with leading tech influencers who push poor technology onto everyday consumers. Help us spread REAL data, empowering consumers to push back against studios that blame your perfectly sufficient hardware!
    * RESPONSE TO COMMUNITY QUESTIONS:
    1. We've repeatedly seen comments attempting to explain how Nanite works, arguing that quad overdraw isn't relevant. When we compare the overdraw view mode to Nanite, that's YOUR cue to understand we're measuring traditional non-Nanite rendering against Nanite.
    Additionally, many have claimed that Nanite has a "large but flat and consistent cost". This is utterly false. Nanite can and does suffer from its own form of overdraw (though not quad related). A major issue people are missing involves Virtual Shadow Maps, which are tied to Nanite.
    Nanite's shadow method not only re-renders your scenes at massive resolutions; these maps are also re-drawn under basic scenarios typical in games, such as moving the CAMERA's position, shifting the SUN/MOON, or having moving objects or characters spread across your scene. Does that SOUND like good performance to you? News flash… it's not.
    Even Epic Games admitted VSMs performed terribly in Fortnite, but instead of accepting that they weren't fundamentally a good fit, they "bit the bullet" and used them anyway. But they didn't really bite anything... consumers did.
    2. To those defending Nanite because it saves development time: we are fully aware of that. We have stated this constantly in previous videos and comments. We have also said this is a great thing to work toward.
    What these ignorant people fail to grasp is that Nanite is a FORCED alternative, born of a workflow deficiency in legitimate mesh optimization.
    3. As we stated in our 'Fake Optimization' video, pro-Nanite users fail to recognize the CONTRADICTION Nanite causes in "visual fidelity".
    If a technology has such a massive domino effect on performance that you end up needing a blurry, detail-crushing temporal upscaler to fix it, then you smear away all that detail anyway for a distorted presentation. And if you were to explore CHEAP deferred MSAA options instead: all the subpixel detail Nanite makes possible, plus VSM's gross use of soft shadow sampling, promotes temporal aliasing and reliance on flawed TAA/SS.
    4. The test shown at 3:36 shows a workflow deficiency rather than an implementation issue. Unreal *does* support per-instance LOD selection, but the engine defaults to ISMs (Instanced Static Meshes), which don't support LODs. UE5's HISMs (Hierarchical Instanced Static Meshes) do, but the developers have not made them as accessible, and have not produced a system that combines all these meshes with precomputed asset-separation culling. Before some people complain about "duplicated" assets and increased file size, we encourage viewers to research how Spider-Man PS4's open worlds were managed.
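Since several threads below debate what "quad overdraw" actually measures, here is a minimal, hypothetical sketch (not Unreal code; all names invented) of why thin or sub-pixel triangles waste pixel-shader work:

```python
def quad_efficiency(covered_pixels):
    """covered_pixels: set of (x, y) pixels a triangle actually covers.
    GPUs shade pixels in 2x2 quads (needed for derivative calculations),
    so every partially covered quad still costs 4 shader invocations."""
    quads = {(x // 2, y // 2) for x, y in covered_pixels}
    shaded = 4 * len(quads)              # invocations actually launched
    return len(covered_pixels) / shaded  # useful fraction of that work

# A thin 1-pixel-wide sliver touches a new quad on almost every pixel,
# so most of the shading work is thrown away:
sliver = {(i, i + 1) for i in range(8)}
print(quad_efficiency(sliver))  # 8 covered / 32 shaded = 0.25
```

With efficiency at 0.25, the GPU shades four pixels for every one that ends up on screen, which is the waste the overdraw view mode visualizes.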

    • @HANAKINskywoker
      @HANAKINskywoker 2 months ago +17

      6:16 wdym non existing problem?
      The problem is that DLSS doesn't hurt performance as much as playing the game at a higher resolution/using SSAA, yet the quality is still good
      Please think more before saying stuff like that lmao

    • @QuakeProBro
      @QuakeProBro 2 months ago +99

      @@HANAKINskywoker "...to a problem that never NEEDED TO EXIST." is what he said. We would not need DLSS that badly, if fundamental optimization methods would be pushed by the industry, instead of upscalers.

    • @QuakeProBro
      @QuakeProBro 2 months ago

      ​@@HANAKINskywoker Yes it is true, as the games get bigger and bigger, optimization gets harder. But this is why things like the proposed AI tools would really shine.
      As long as Nanite's pitch is "You can now render a rock with 10 million tris and the performance is (in the context of our other new features) better than before" — except it really isn't, because the base cost is exponentially higher, and the magic fix is upscalers — rather than "You can now render 10 million rocks with near-infinite draw distance at the same or lower base cost than before", it creates a problem that shouldn't exist.
      There is much more work that needs to be done and Epic should provide support for those who want a smooth transition, but instead they only really push the features that sound great in marketing.
      My own project went from 120fps in UE4 to about 40fps in UE5 on a map that only has a fucking cube and floor.
      Upscalers are definitely not evil and can be super useful, but they hurt the visuals too much to be the only answer you get when performance is bad.

    • @V1vil
      @V1vil 2 months ago +6

      Comparison details were still noticeable even watching on an Xbox 360 in 720p. :)

    • @Navhkrin
      @Navhkrin 2 months ago

      @@RandoYouTubeAccount Google -> unreal.ShadowCacheInvalidationBehavior

  • @BunkerSquirrel
    @BunkerSquirrel 1 month ago +1581

    2d artists: worried about ai taking their jobs
    3d artists: worried ai will never be able to perform topology optimization

    • @とふこ
      @とふこ 1 month ago +47

      Me: want a robot to take my job 😂

    • @GeneralKenobi69420
      @GeneralKenobi69420 1 month ago +36

      Who let the furry out of the basement 💀

    • @dimmArtist
      @dimmArtist 1 month ago +16

      3D artists: worried that hungry 2D artists will become better 3D artists and take their jobs

    • @mainaccount888
      @mainaccount888 1 month ago +73

      @@dimmArtist said no one ever

    • @roilo8560
      @roilo8560 1 month ago +42

      @@GeneralKenobi69420 bro has 69420 in his name in 2024 💀

  • @pyrus2814
    @pyrus2814 1 month ago +996

    DLSS was originally conceived as a means to make real-time raytracing possible. It's sad to see how many games today rely on it for stable frames with mere rasterization.

    • @gudeandi
      @gudeandi 1 month ago +45

      tbh 99% of all players (especially in single player) won't see a difference.. and that's the point.
      Get 90% of the result by investing 10%. The last 10% costs you sooo much more and often isn't worth it.
      imo

    • @beetheimmortal
      @beetheimmortal 1 month ago +210

      I absolutely hate the way modern rendering went. They switched from Forward to Deferred, but then broke anti-aliasing completely, introduced TAA which sucks, then introduced DLSS, which is now MANDATORY in any game to run at a sort-of acceptable framerate. Nothing is optimized, and everything is blurry and low-res. Truly pathetic.

    • @griffin5734
      @griffin5734 1 month ago +26

      No no no, completely wrong. DLSS brought the BEST AA in decades: DLAA. Best tech in the world.

    • @dawienel1142
      @dawienel1142 1 month ago +70

      @@beetheimmortal agreed, can't believe that high-end RTX 40-series GPUs really struggle to run the latest games at acceptable settings and performance, especially compared to 2010-2015 games, which all generally looked good and ran well on the hardware of their time.
      I feel like we are at the point of gaining very slight graphical fidelity for way too much cost these days.

    • @techjunky9863
      @techjunky9863 1 month ago +11

      @@beetheimmortal the only way to play games at a proper resolution now is to buy a 4K monitor and render at 4K. Then you get image quality somewhat similar to what we had with forward rendering

  • @SaltHuman
    @SaltHuman 2 months ago +1963

    Threat Interactive did not kill himself

    • @ook_3D
      @ook_3D 2 months ago +161

      For real, dude presents incredible information without bias; gotta piss a lot of AAA companies off

    • @bublybublybubly
      @bublybublybubly 2 months ago +71

      @@ook_3D Epic and some Twitter nerds may not be happy. I don't think the game companies care 🤷‍♀
      He might just bring them a different solution for saving money on dev time & optimization. Why would they be mad about someone who is this motivated to give them another option in their corporate oppression toolbox for free?

    • @168original7
      @168original7 2 months ago +2

      Timmy isn’t that bad lol

    • @mercai
      @mercai 2 months ago

      @@bublybublybubly Because this someone acts like a raging asshat, makes lots of factually wrong claims, offers no solutions, and then tries to crowdfund to "fix" something, all from the position of being a literal nobody with zero actual experience or influence?
      Yeah, it's not maddening, but it is quite annoying.

    • @gdog8170
      @gdog8170 2 months ago +11

      I can't lie, this is not funny. Just because he is revealing info like this doesn't mean he will be targeted; hopefully that never happens

  • @sebbbi2
    @sebbbi2 2 months ago +570

    Nanite’s software raster solves quad overdraw. The problem is that software raster doesn’t have HiZ culling. Nanite must lean purely on cluster culling, and their clusters are over 100 triangles each. This results in significant overdraw to the V-buffer with kitbashed content (such as their own demos). But V-buffer is just a 64 bit triangle+instance ID. Overdraw doesn’t mean shading the pixel many times.
    While V-buffer is fast to write, it's slow to resolve. Each pixel shader invocation needs to load the triangle and run the equivalent of a full vertex shader 3 times. The material resolve pass also needs to calculate analytic derivatives, and material binning has complexities (which manifest as potential performance cliffs).
    It's definitely possible to beat Nanite with a traditional pipeline if your content doesn't suffer much from overdraw or quad-efficiency issues, and you have good batching techniques for everything you render.
    However, it's worth noting that GPU-driven rendering doesn't mandate a V-buffer, SW rasterizer, or deferred material system the way Nanite does. Those techniques have advantages, but they have big performance implications too. When I was working at Ubisoft (almost 10 years ago) we shipped several games with GPU-driven rendering (and virtual shadow mapping): Assassin's Creed Unity with massive crowds in big city streets, Rainbow Six Siege with fully destructible environments, etc. These techniques were already usable on last-gen consoles (1.8 TFLOP/s GPU). Nanite is quite heavy in comparison. But they are targeting single-pixel triangles. We weren't.
    I am glad that we are having this conversation. Also, mesh shaders are a perfect fit for a GPU-driven render pipeline. AFAIK Nanite is using mesh shaders (primitive shaders) on consoles at least, unless they use SW raster for big triangles today too. It's been a long time since I last analyzed Nanite (UE5 preview). Back then their PC version was using non-indexed geometry for big triangles, which is slow.
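A rough sketch of the V-buffer idea described above, where "overdraw" only overwrites a cheap 64-bit ID per pixel rather than re-shading it (hypothetical layout; the 32/32 field split is an assumption, not Nanite's actual encoding):

```python
def pack_vbuffer(instance_id: int, triangle_id: int) -> int:
    """Raster pass: store only which (instance, triangle) won this pixel."""
    assert 0 <= instance_id < 2**32 and 0 <= triangle_id < 2**32
    return (instance_id << 32) | triangle_id

def unpack_vbuffer(packed: int) -> tuple[int, int]:
    """Material resolve pass: recover the IDs, then re-fetch the triangle's
    vertices and re-run vertex-shader-equivalent work. This deferred resolve
    is where the cost mentioned in the comment shows up."""
    return packed >> 32, packed & 0xFFFFFFFF

pixel = pack_vbuffer(instance_id=7, triangle_id=1234)
assert unpack_vbuffer(pixel) == (7, 1234)
```

Writing 8 bytes per pixel is why V-buffer overdraw is cheap at raster time; the expensive part is deferred to the single resolve pass that runs once per final pixel.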

    • @user-bl1lh1xv1s
      @user-bl1lh1xv1s 2 months ago +27

      thanks for the insight

    • @tubaeseries5705
      @tubaeseries5705 2 months ago +40

      the issue is that quad overdraw is not a big deal; modern GPUs are never limited by the number of triangles they output, it's always shaders, and Nanite adds a lot of additional work to the shader pipeline, which is already occupied as hell.
      for standard graphics with reasonable triangle counts Nanite just doesn't make any sense: it offers better fidelity than standard methods, but performance is not what it can offer

    • @minotaursgamezone
      @minotaursgamezone 2 months ago +3

      I am confused 💀💀💀

    • @torginus
      @torginus 2 months ago

      Not an unreal expert, but from what I know of graphics, quad rasterization is unavoidable, since you need the derivatives for the pixel shader varyings that are needed for things like texture sampling. Honestly it might make sense to move beyond triangles to things like implicit surface rendering (think drawing that NURBS stuff directly) for the stuff nanite tries to accomplish.

    • @tubaeseries5705
      @tubaeseries5705 2 months ago +12

      @@torginus rendering NURBS and other non-primitive types always comes down to rendering primitives anyway; CAD software has always processed NURBS into triangle meshes using various methods that produce a lot of overhead. GPUs are not capable of efficiently rendering anything other than primitives; we would need a new hardware standard for that, and that's not really reasonable

  • @sgredsch
    @sgredsch 1 month ago +310

    I'm a mod dev who worked with the Source engine. Looking back at how we manually optimized every mesh, texture, and shader, and seeing how modern studios deliver the worst garbage that runs like a brick, makes me angry too. Now we throw upscaling at games that should run 2x as fast at native resolution to begin with.
    We are subsidizing sloppy or nonexistent game optimization by overspending on overbuilt hardware, which in return gets choked to death by terrible software born of cost-cutting / laziness / manipulation. Nvidia is really talented at finding non-issues, blowing them up, and then selling a proprietary solution to a problem that shouldn't exist in the first place. They did it with PhysX, tessellation, GameWorks, ray tracing, and upscaling. The best partner in crime is, of course, the engine vendor with the biggest market share: Epic, with their Unreal Engine.
    Fun fact: the overdraw bloat issue goes back to when Nvidia forced tessellation in everyone's face. Nvidia made their GPUs explicitly tolerant of sub-pixel geometry spam (starting with Thermi, I mean Fermi), while GCN couldn't handle that abuse. The Witcher 3's tessellation x64 HairWorks sends its regards, absolutely choking the R9 290X.
    It's a shame what we have come to.

    • @e.s.r5809
      @e.s.r5809 1 month ago +83

      I've gotten tired of seeing "if it's slow, your hardware isn't good enough" for graphics on par with decade-old releases, struggling on the same tech that ran those games like butter. You shouldn't *need* to spend half a year's rent on a gaming PC to compensate for memory leaks and poor optimisation. The only winners are studio execs and tech shareholders. It's convenient to call your customers poor instead of giving your developers the time and resources to ship viable products.

    • @googIesux
      @googIesux 1 month ago +22

      Underrated comment. This has been the elephant in the room for so long

    • @T61APL89
      @T61APL89 1 month ago

      and yet people still buy the games in record numbers, this is what capitalism rewards. throwing shit at the wall and accepting the shit smeared remnants.

    • @Online-j8e
      @Online-j8e 1 month ago +6

      "starting with thermi" lol thats funny

    • @cptairwolf
      @cptairwolf 1 month ago +9

      You're not wrong that too many studios are taking the easy way out and skipping any sort of optimization, but let's not blame new technology for that. I'd take well-optimized Nanite structures or micropolygon tech over LODs any day. LODs are time-consuming to create, massively increase storage requirements, and just plain look ugly. I'm not sad to see them get phased out.

  • @sideswipebl
    @sideswipebl 2 months ago +1076

    No wonder fidelity development has seemed so slow since around 2016. It's getting harder and harder to tell how old games are just by looking, because we had already figured out idealized hyper-realism around 2010 and have just been floundering since.

    • @MrGamelover23
      @MrGamelover23 2 months ago +259

      Yeah, imagine the optimization of back then with ray tracing. It might actually be playable then.

    • @metacob
      @metacob 2 months ago +190

      GPU performance is still rising exponentially, but I literally can't tell a game made today from one made 5 years ago. As a kid my first console was a SNES, my second one an N64. That was THE generational leap. The closest to that experience we got in the last decade was VR, but that still wasn't quite the same.
      To be honest though, it's fine. I've heard the word "realism" a few too many times in my life. Now it's time for gameplay and style.

    • @Fearzzy
      @Fearzzy 2 months ago +58

      @@metacob if you want to see the potential of today's games, just look at Bodycam; you couldn't have made that 5 years ago. But I see your point. RDR2 is still the most beautiful game I've played (faces aside, etc.), but that's down to its style and attention to detail rather than "raw graphics"

    • @arkgaharandan5881
      @arkgaharandan5881 2 months ago +32

      @@metacob well, I was playing Just Cause 3 recently; the lighting has some ambient occlusion and reflections, so it looks bad by modern standards, you can see the foliage LODs popping in in real time, and unless you are running at 4K, its anti-aliasing (SMAA 2x) is not good enough to hide the countless jaggies. I'd say a better comparison is with 2018 games. Also, the 4060 is barely better than the 3060 with more RAM; low-to-mid range needs to improve a lot.

    • @slyseal2091
      @slyseal2091 2 months ago

      @@arkgaharandan5881 Just Cause 3 is 9 years old. "5 years ago" is 2019, buddy

  • @MondoMurderface
    @MondoMurderface 1 month ago +235

    Nanite isn't and shouldn't be a game developer tool. It is for movie and TV production. Unreal should be honest about this.

    • @Rev0verDrive
      @Rev0verDrive 1 month ago +10

      Been saying this since the day-one release, with testing.

    • @marcinnawrocki1437
      @marcinnawrocki1437 1 month ago +33

      Most of the new "marketing-buzzword-friendly" stuff in Unreal is not for games. If you, as a game dev, want the average Steam PC to run your game, you will not use those new flashy systems.

    • @Ruleta_23
      @Ruleta_23 22 days ago +9

      Games are using Nanite; don't talk if you don't know the topic. What you say is absurd anyway.

    • @0osk
      @0osk 22 days ago +24

      @@Ruleta_23 They didn't say it wasn't being used in games.

    • @Rev0verDrive
      @Rev0verDrive 21 days ago

      @@Ruleta_23 Every game I've seen released with it runs like shit on high-end systems. You have to use DLSS/FSR to get 60-70 FPS

  • @gandev5285
    @gandev5285 2 months ago +323

    This may or may not be true, but I actually believe Unreal Engine's quad-overdraw viewmode has been broken for a long time (atleast beyond 4.21). In a Robo Recall talk they talk about the impact of AA on quad overdraw and show that if you enable MSAA your quad overdraw gets significantly worse. Now if you run the exact same test quad overdraw IMPROVES significantly. So unless Epic magically optimized AA to produce less overdraw, the overdraw viewmode is busted. I tested the same scenes in 5.2 and 4.21 and the overdraw view was much worse in 4.21 with it actually showing some red from opaque overdraw (all settings the same). I'm not even sure if opaque overdraw can get beyond green now. I would suspect the overdraw you show from extremely high poly meshes should actually be significantly worse and mostly red or white.

    • @tubaeseries5705
      @tubaeseries5705 2 months ago +32

      msaa by nature causes more overdraw because it takes multiple samples for each pixel, with the amount varying by settings; so in some cases where only 2 pixels are occupied, meaning 50% overdraw, with msaa it could be for example 6 pixels out of 16, meaning 60% overdraw, etc.

    • @futuremapper_
      @futuremapper_ 2 months ago +8

      I assume that Epic cheats the values when MSAA is enabled, as that will cause overdraw by its nature

    • @JorgetePanete
      @JorgetePanete 2 months ago +1

      at least*

    • @neattricks7678
      @neattricks7678 2 months ago +3

      It is true. Unreal sucks and everything about it is fucked

    • @FourtyFifth
      @FourtyFifth 2 months ago +8

      ​@@neattricks7678 Sure it does buddy

  • @unrealcreation07
    @unrealcreation07 1 month ago +47

    Unreal developer of 10 years here. I've never really been into deep low-level rendering stuff, but I just discovered your channel, and I'm glad your videos answered some long-standing questions I had, like "why is this always ugly and/or blurry no matter the options I select?? (or why do I drop to 20fps)".
    Over time, I have developed a kind of sixth sense for which checkbox will visually destroy my game, or which option I should uncheck to "fix" some horrible glitch, but I still don't know why most of the time. In many cases, I just resign myself and have to choose which scenario I prefer being ugly, as I can never get a nice result in all situations. And it's kind of frustrating to constantly have to choose between very imperfect solutions or workarounds. I really hope you'll make standards change!

    • @fleaspoon
      @fleaspoon 18 days ago +4

      you can also learn what those checkboxes actually do and solve the issues by yourself for your specific needs

  • @GraveUypo
    @GraveUypo 2 months ago +383

    this is the type of content I wish people would watch, but they don't. I'm tired of hearing that Nanite is magic, that DLSS is "better than native", and whatnot. When I made a scene in Unreal 5, it ran like trash on my PC with a 6700 XT at the time, and there was almost nothing in the scene, even at 50% render scale. It was absurdly heavy for what it was.

    • @SuperXzm
      @SuperXzm 2 months ago +60

      Oh no! Think about shareholders! Please support corporations play pretend!

    • @chengong388
      @chengong388 2 months ago +51

      DLSS is better than native because it includes anti aliasing and native does not…

    • @FlippingLuna
      @FlippingLuna 2 months ago +12

      Just return to forward shading, or even the mobile forward renderer. It's kind of good, and it's a pity that Epic did good optimization work on the mobile forward renderer but still hasn't ported it to desktop forward

    • @Jakiyyyyy
      @Jakiyyyyy 2 months ago +52

      DLSS is INDEED sometimes better than native because it uses its own anti-aliasing instead of the default, poorly implemented TAA that blurs games. If DLSS makes games sharper and better, I'll take it over forced TAA. 🤷🏻‍♀️

    • @dingickso4098
      @dingickso4098 2 months ago +74

      DLSS is better than native, they think, because they tend to compare it to the blurfest TAA that is sometimes force enabled.

  • @doltBmB
    @doltBmB 2 months ago +93

    Key insight every optimizer should know: draw calls are not expensive, context-switching is. The more resources that a draw call shares with adjacent draw calls the cheaper it is to switch to it. If all draw calls share the same resources it's free(!). Don't worry about draw calls, worry about the order of the draw calls.
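A toy illustration of that insight (an assumed cost model, not engine code): count "context switches" in a draw list, then sort draws by shared state to shrink them.

```python
def state_changes(draws):
    """Each draw is a (shader, material, mesh) tuple; a 'context switch'
    happens whenever any bound resource differs from the previous draw."""
    changes, prev = 0, None
    for d in draws:
        if d != prev:
            changes += 1
        prev = d
    return changes

# Same four draws, two orderings:
draws = [("s1", "m1", "chair"), ("s2", "m2", "rock"),
         ("s1", "m1", "chair"), ("s2", "m2", "rock")]
unsorted_cost = state_changes(draws)         # alternating state: 4 switches
sorted_cost = state_changes(sorted(draws))   # grouped state: 2 switches
print(unsorted_cost, sorted_cost)
```

The draw count is identical in both cases; only the ordering changes, which is the point of the comment above.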

    • @h0bby23
      @h0bby23 1 month ago

      There is still the issue of being CPU-bound when submitting many draw calls with few polygons each, but otherwise on modern hardware, yes

    • @BlueBeam10
      @BlueBeam10 24 days ago +2

      So what you're saying is: if I spawn 10,000 of the same chair, I don't even need them instanced or merged into one mesh, because the 10k draw calls will equate to one? I don't know man, I tend to disbelieve that...

    • @doltBmB
      @doltBmB 23 days ago

      @@BlueBeam10 theoretically, if each chair can be rendered in a single draw call, I guess, which would be a very simple chair. Instancing is good for more complex renders.

    • @BlueBeam10
      @BlueBeam10 23 days ago +2

      @@doltBmB But I thought draw calls happen for each non-instanced mesh regardless of complexity, right? Since that chair isn't instanced, why would the engine assume the same draw call can apply to those other meshes?

    • @doltBmB
      @doltBmB 23 days ago +1

      @@BlueBeam10 the engine doesn't assume anything; most engines are designed with zero regard for these facts. It is up to the graphics programmer to batch their draw calls appropriately. Most engines implement some static batching at best, which is an absolutely ancient way to do batching: it requires some complicated preprocessing and eats bandwidth and memory

  • @IBMboy
    @IBMboy 2 months ago +336

    I think Nanite is innovative technology, but it shouldn't replace the old style of rendering for video games, as it's still experimental in my opinion

    • @matiasinostroza1843
      @matiasinostroza1843 2 months ago +52

      yeah, and it's not made for optimization but for the sake of looks. There is almost no transition between LODs: in normal games, when you get far enough away you can see the mesh switch to lower-poly models, but with Nanite this is almost imperceptible. In that sense it is "better", which is why it's being used for "realistic graphics".

    • @Cloroqx
      @Cloroqx 2 months ago +36

      Baseless opinions by non-developers. "Old style of rendering videogames".
      What is your take on the economy's sharp downturn due to a lack of rate cuts by the FED, opinionated one?

    • @vendetta1429
      @vendetta1429 2 months ago +76

      @@Cloroqx You're coming off as the opinionated one, if you didn't know.

    • @pizzaman11
      @pizzaman11 2 months ago +7

      It also has a lower memory footprint, and you don't need to worry about creating LODs, which is perfect for large games with a large number of unique models.

    • @inkoalawetrust
      @inkoalawetrust 2 months ago +42

      @@Cloroqx What are you even talking about.

  • @florianschmoldt8659
    @florianschmoldt8659 2 months ago +115

    The current state of "next-gen visuals" vs. fidelity is indeed questionable. Many effects are lower-res, stochastically rendered with low sample counts, dithered, and upscaled. Compromises on top of compromises, with frame generation in between. Good enough in 4K at 60fps, but if your hardware can't handle it, you'll have to live with a blurry, pixelated, smeary mess. My guess is that Nvidia is fine with it. As much as I like Lumen & Nanite in theory, I'm not willing to pay the price.
    To be fair, it isn't the worst thing to have next-gen effects available, but there is a huge disconnect between what gamers expect a game to look like, based on trailers, and how it feels to play at 1080p and 20fps.
    JFZfilms defines himself as a filmmaker and isn't great at communicating that he and his 4090 don't care much about real-time visuals or optimization. Tons of path tracing vs. Lumen videos, and a confused game-dev audience when they accidentally learn that both went through the render queue with maxed settings.

    • @snark567
      @snark567 2 months ago +16

      Gamers confuse realism and high-fidelity visuals for good graphics. Meanwhile, a lot of devs use realism and graphical advancements as a crutch because they lack the imagination to make a game that looks good without these features.

    • @florianschmoldt8659
      @florianschmoldt8659 2 months ago +19

      @@snark567 I'm a graphic artist myself, and as much as I love good art direction, realism definitely has its place. But I get the point; it for sure shouldn't be the only definition of next-gen visuals.
      It has become much easier to gather free photoscanned models than to create your own in an interesting style. Devs rely on Nanite over optimized assets, and even games with mostly static light sources use Lumen over "good old" lightmaps, even when lightmaps could look just as good and make the difference between 120 and 30fps.
      And Nvidia is like "Here are some new problems... how about a 4090 to solve them?"

  • @ThiagoVieira91
    @ThiagoVieira91 2 months ago +361

    Acquiring the same level of rage this guy has for bad optimization, but for putting effort into my SE career, I can single handedly lead us into the the singularity. MORE! MORE! MORE!

    • @JorgetePanete
      @JorgetePanete 2 months ago +7

      the the

    • @DarkSession6208
      @DarkSession6208 2 months ago +19

      I have 100 other Unreal Engine topics whose misinformation makes me rage just like he rages about Nanite. I have posted about the Nanite issue MULTIPLE times on forums and Reddit to warn people how not to use it, and to show how it should be done. I have been researching this topic since version 5.0.2. Nobody cared. Like I said, there are 100 other similar topics (Blueprints, general functions, movement, prediction, etc.) which are simply explained wrong by Epic themselves and then repeated by users.
      If you google "Does culling really not work with nanite foliage?",
      the first comment is mine. I have been posting there for almost 3 YEARS, repeating myself to people who don't listen.

  • @chriszuko
    @chriszuko 2 months ago +60

    As much as I think this is a good topic and the results seem genuinely well done, the solution proposed here (spending the time and money on an AI solution for LOD creation and a much faster, seamless optimization workflow) is something companies have been trying to build for years, so I personally don't think it's a good way to move forward. To me, it seems possible that Nanite + Lumen can evolve to become much friendlier to meshes authored in an already-optimized way and rely far less on lower resolutions. I DO think they are probably pushing it a bit too early, and I also agree that DLSS, TSR, and other upscaling/reconstruction technologies are just not good to lean on. But to your point, companies continue to do so because they can say "look, everything runs faster!", so it is hard to envision a future without needing it.
    A side note: I don't think your particular attitude is warranted, and, in my opinion, it makes it much harder to have a constructive conversation about how to move forward. I'm not perfect in this regard either, but as this gains traction, it's probably good to dial that back. In the first post of the megathread, for example, you show your results and then feel the need to say "Worse without LOD FPS LMAO!". This type of stuff is all over these threads, which to me just looks bad and makes it harder to take you and this topic seriously.

    • @wumi2419
      @wumi2419 2 months ago +7

      It ends up being a question of who spends the money: developers (on optimizing) or customers (on hardware that can run unoptimized software), and it's obvious which choice saves the company money.
      Granted, it might result in lower sales due to higher hardware requirements, but that is a lot harder to prove than lowered dev costs.

    • @Viscte
      @Viscte 1 month ago +1

      This is a pretty down-to-earth take.

    • @exoqqen
      @exoqqen 4 days ago +2

      I agree about the attitude. I'm amazed by the depth of knowledge in these videos, but the aggressiveness makes me keep a distance and be wary. It's not pleasant or professional

    • @Viscte
      @Viscte 4 days ago

      @@exoqqen the guy comes across as a bitter asshole for some reason. It's important that people discuss topics like this, but the negativity is definitely off-putting

  • @TenaciousDilos
    @TenaciousDilos 2 months ago +198

    I'm not disputing what is shown here, but I've had cases where Nanite in UE 5.1.1 increased framerates 4x (low 20s fps to mid 90s fps) in my projects, and the only difference was turning Nanite on. "Well, there's more to it than that!" of course there is. But Nanite took hours of optimization work and turned it into enabling Nanite on a few meshes.

    • @jcdentonunatco
      @jcdentonunatco 2 місяці тому +71

      none of the devs ever claimed Nanite would boost performance; that was idiots in the forum who said that. The devs were very transparent about the costs and limitations of Nanite. The only case where it would be an optimization is if you were never using LODs to begin with

    • @forasago
      @forasago 2 місяці тому +104

      @@jcdentonunatco This is not true at all. When they showed the first demo with the pseudo Tomb Raider scene, they explicitly said (paraphrasing) that having that many high-poly meshes in a scene would be impossible without Nanite. And they definitely weren't talking about "what if you just stopped using LODs" as the thing Nanite is competing with. Nanite was supposed to raise the limit on polycount in scenes, full stop. That amounts to a claim of improved performance.

    • @MrSofazocker
      @MrSofazocker 2 місяці тому +113

      ​@@forasago "You can now render way denser meshes" does not equal "so less dense meshes now render faster"

    • @DeltaNovum
      @DeltaNovum 2 місяці тому +25

      Turn off virtual shadow maps and redo your test.

    • @lokosstratos7192
      @lokosstratos7192 2 місяці тому +1

      @@DeltaNovum YEAH!

  • @hungryhedgehog4201
    @hungryhedgehog4201 2 місяці тому +66

    So if I understand correctly: Nanite gives a performance gain if you just drop in high-poly 3D sculpts without any touch-ups, but a performance loss if you put in models built with the industry-standard workflow and its usual performance optimization?

    • @AndyE7
      @AndyE7 Місяць тому +34

      I swear the point of Nanite was to let artists and level designers just focus on high-quality assets and scenes, because Nanite would do the optimisation for them. It was to remove the need to think about optimisation, LODs, how much the console can render in a scene, etc., because it would handle all of that for you, letting you focus on the task at hand.
      In theory you could even develop with Nanite and then do proper optimisations afterwards.

    • @hungryhedgehog4201
      @hungryhedgehog4201 Місяць тому +17

      @@AndyE7 tbf it does seem to do that, but that means you move from the industry standard to a new approach that works with no other engine's pipeline. That binds you to the Unreal Engine 5 ecosystem, which obviously benefits Epic. That's why they push it so hard.

    • @Noubers
      @Noubers Місяць тому +28

      The industry standard is significantly more labor intensive and restrictive. It's not a conspiracy to lock in people to Epic, it's just a better work flow overall that other engines should adopt because it is better.

    • @Armameteus
      @Armameteus Місяць тому +39

      @@Noubers Except it's not because that then off-loads the rendering overhead to the end-user. To render anything in nanite more quickly than through a traditional rendering pipeline would require the end-user to have the hardware necessary to accommodate it. This is simply unacceptable and shouldn't be the case. The end-user should, within reason, be able to expect their product (like a game) to be able to function on their hardware so long as it's within decent generational tolerances because the developers should have put in the effort to accommodate the end-user. That's part of the selling-point of video games. It's _their job_ to make a product worth our purchase.
      It would be like if Epic created an entirely new type of internal combustion engine that was incompatible with every single vehicle on earth, but was marketed as "easier" to build. Epic then forces factories they have contracts with to start manufacturing this new engine type because it's "easier" for _them_ to produce it - but it's still incompatible with all vehicles, everywhere. This then means the car manufacturers need to completely upend and rebuild their entire manufacturing process to accommodate this new engine design, which costs them a ton of money. The cost of this transition then gets off-loaded onto the regular customer - you and me - that's trying to buy a new car, because the cost of manufacturing these new cars is far higher now due to the alien design of their engine, which was the fault of Epic forcing this new engine on all of their manufacturers. And, even if you end up buying it, it will still run _worse_ than a comparable model car with a traditional engine design; it cost you more and you got a worse product out of it.
      Epic is forcing nanite as the default rendering pipeline going forward, meaning *you can't opt-out of it!* As a developer, this means the overhead of rendering your game falls to the end-user. As a result, your game can only be played by users with the absolute latest, bleeding-edge machines, because nanite only _works_ on those machines, due to its insane overhead. This instantly cuts your potential playerbase to a fraction of its original potential because the cost of purchasing a machine capable of rendering your game is going to be astronomical, not just immediately, but _exponentially_ into the future (as games attempt to render more and more complexity through nanite, chasing the dragon of "photo realism").
      The only parties benefiting from this are Epic (for obvious reasons) and large development companies that have _contracts_ with Epic (for the same reasons). But small studios or indie devs? You're screwed; optimising in nanite is currently impossible and, as an indie dev, you probably don't have the hardware necessary to even render your _own game._ And the end-user? You're _double-screwed;_ you have to front the cost of the hardware to run the game _and_ the exponentially-increased cost of developing that game that anyone outside of Epic's sphere of partners had to eat in developing their game! As a result, both games _and_ the hardware needed to render them will cost more for the end-user to purchase (unless you're best buddies with the executives at Epic).
      Nanite is a complete sham and a waste of effort and resources, built upon self-imposed problems that didn't need to exist, with shoddy "solutions" to those problems, which then create _new_ problems without even fixing the _old problems_ to begin with! It's just Epic's way of pushing out anyone that doesn't have contracts with them, practically requiring corporate nepotism in order to operate within their market. It exclusively benefits them and their friends and they know it.
      InB4: "StOp BeInG PoOr!!1" because that's the _only_ argument against this nonsense.

    • @TheReferrer72
      @TheReferrer72 25 днів тому +9

      @@Armameteus History is going to prove you dead wrong, and it's a pity so many of you devs just don't get that hardware always catches up.
      Artists and designers should not be making LODs (it's a kludge), they should not be combining meshes (also a kludge), or baking in lights and whatever other tricks have to be done when algorithms can do the work.
      Artists and designers should be focusing on making their games look and play good.

  • @GugureSux
    @GugureSux 2 місяці тому +52

    This explains both why I generally despise the look of modern UE games (so much noise and blur) and why UE5 games especially run like ass, even on decent hardware.
    And since so many devs use lazy upscalers as their "optimization" trick, things only get worse visually.

    • @kanta32100
      @kanta32100 Місяць тому +3

      No visible LOD pop-in is nice though.

    • @bennyboiii1196
      @bennyboiii1196 Місяць тому +6

      The blurriness is due to TAA tbf, which is a scourge. This is why I use Godot lmao

    • @ThreatInteractive
      @ThreatInteractive  Місяць тому +8

      @@kanta32100 Not true. You will still get pop, and LODs have had various ways of reducing pop visibility (just not in UE5) that don't depend on flawed TAA.

    • @PabloB888
      @PabloB888 Місяць тому

      ​@@bennyboiii1196 The TAA image looks blurry, but a good sharpening pass can help a lot. I use ReShade CAS and Luma filters; I used this method for the last couple of years on my old GTX 1080. Now that I've bought an RTX 4080S, I can get much better image quality by using DLSS Balanced and DLDSR 2.25x (80% smoothness) at the same time. I get no performance penalty, and the image quality destroys native TAA. What's more, if the DLSS implementation is very good, I can use DLSS Ultra Performance and still get a sharper image than native TAA, with 2x the framerate.

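A contrast-adaptive sharpen like the CAS filter mentioned above is, at its core, an unsharp mask whose strength adapts to local contrast. A minimal sketch of the non-adaptive unsharp-mask core (pure Python, illustrative only; this is not AMD's actual CAS kernel):

```python
def box_blur3(img):
    """3x3 box blur with edge clamping on a 2D list of floats in [0, 1]."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out

def unsharp_mask(img, amount=0.8):
    """sharpened = img + amount * (img - blurred), clamped to [0, 1]."""
    blurred = box_blur3(img)
    return [[min(max(p + amount * (p - b), 0.0), 1.0)
             for p, b in zip(prow, brow)]
            for prow, brow in zip(img, blurred)]

# A soft edge: sharpening increases the local contrast across it.
soft = [[0.2, 0.2, 0.5, 0.8, 0.8]] * 5
sharp = unsharp_mask(soft)
```

Across a soft edge the dark side gets darker and the bright side brighter, which is why sharpening can counteract TAA blur, and also why overdoing the amount produces halos.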
    • @cdmpants
      @cdmpants Місяць тому +9

      @@bennyboiii1196 You use godot because you don't want to use TAA? You realize that TAA is optional?

  • @hoyteternal
    @hoyteternal 2 місяці тому +16

    Nanite is specifically made to render unoptimized scenes with ridiculous polycounts and microgeometry, where each pixel might render a separate triangle, and models that don't have LODs. It has a huge initial overhead but much higher scalability in that scenario. Nanite is actually not a unique technology; it is an implementation of a technique called the visibility buffer.

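For readers unfamiliar with the term: a visibility buffer defers shading by storing only a triangle ID per pixel during rasterization, then resolving materials in a separate full-screen pass. A toy CPU sketch of that two-pass split (illustrative only; Nanite's actual GPU implementation works very differently):

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area test: >= 0 means p is on the inner side of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_visibility(tris, w, h):
    """Pass 1: store only a triangle ID per pixel (no material work)."""
    vis = [[-1] * w for _ in range(h)]
    for tid, (a, b, c) in enumerate(tris):
        for y in range(h):
            for x in range(w):
                px, py = x + 0.5, y + 0.5  # sample at the pixel center
                if (edge(*a, *b, px, py) >= 0 and
                        edge(*b, *c, px, py) >= 0 and
                        edge(*c, *a, px, py) >= 0):
                    vis[y][x] = tid  # last write wins; a real impl tests depth
    return vis

def shade(vis, materials):
    """Pass 2: a full-screen pass turns IDs into shaded pixels."""
    return [[materials[t] if t >= 0 else "bg" for t in row] for row in vis]

# One triangle covering the upper-left half of an 8x8 grid.
tris = [((0.0, 0.0), (8.0, 0.0), (0.0, 8.0))]
vis = rasterize_visibility(tris, 8, 8)
img = shade(vis, ["red"])
```

The point of the split is that expensive material shading runs once per visible pixel, so overdraw in the geometry pass only rewrites a small ID instead of re-running the full pixel shader.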
  • @yeah7267
    @yeah7267 2 місяці тому +295

    Finally someone talking about this... I was sick of people claiming Nanite boosts performance when in reality I was losing frames even in the most basic scenes

    • @cheater00
      @cheater00 2 місяці тому +28

      just another case of tencent timmy being confidently wrong. we've all known unreal engine was way worse than quake 3 back in the day as well, it was ancient by comparison and the performance was abysmal. the fact people kept spinning a yarn that unreal engine was somehow competitive with id tech was always such a funny thing to see.

    • @HankBaxter
      @HankBaxter 2 місяці тому +4

      And it seems nothing's changed.

    • @cheater00
      @cheater00 2 місяці тому +6

      @@randoguy7488 precisely. and in those 25 years, NOTHING has changed. this should make you think.

    • @ClowdyHowdy
      @ClowdyHowdy 2 місяці тому +48

      To be fair, I don't think Nanite is designed to boost performance in simple scenes at all, so anybody saying that is either oversimplifying or just wrong. Instead, it was designed to give artists and designers an easier pipeline for building complex levels without an equivalent increase in performance loss or in time spent developing LODs.
      It's designed to be a development performance boost.
      If you don't need Nanite for your game then there's no reason to use it, but I think it's weird to pretend the issue is that it doesn't do something it wasn't designed to do.

    • @_Romulodovale
      @_Romulodovale 2 місяці тому +7

      All the latest engine features came to make developers' lives easier while affecting performance negatively. It's not a bad thing: in the future those features will be well polished and will help us developers without hurting performance. All good tech takes time to be polished.

  • @YoutubePizzer
    @YoutubePizzer 2 місяці тому +150

    Here’s the question though: is this going to save enough resources that a development team can achieve more with less? If it’s not “optimal”, that’s fine, as long as the dev time it saves is worth it. Ultimately, in an ideal world, we want the technology to improve so development can become easier.

    • @wumi2419
      @wumi2419 2 місяці тому +87

      Cost of development doesn't disappear however, it's just transferred to customers. So they will have to either pay for a better GPU, run the game at lower quality, or will just "bounce off" and not purchase the game at all.

    • @blarghblargh
      @blarghblargh 2 місяці тому +2

      ​@wumi2419 they could just make the game look worse instead. The development style being done now is already burning something, and that something is developer talent. And there isn't an infinite pool of that.
      You're also ignoring that GPU performance increases over time. So it may be a rough tradeoff now, but the hardware this tech runs on will keep getting better.

    • @insentia8424
      @insentia8424 2 місяці тому +8

      I don't understand why you ask about whether this allows development teams to be more efficient (achieve more with less), then you believe that in an ideal world tech would make things easier. Something becoming more efficient, and something becoming easier are not the same thing.
      In fact, an increase in efficiency often times causes things to become harder or more complex to do.

    • @snark567
      @snark567 2 місяці тому +10

      You can always just go for stylized visuals instead of hyper complex geometry and realism. This performance min maxing only becomes an issue of concern when your game is too detailed and complex but you still want it to run on a toaster.

    • @seeibe
      @seeibe 2 місяці тому +21

      The problem is when newer games look and perform worse than older games while still costing the same. If that saved developer time will be passed on as price reduction to the user, sure. But that's not what happens for most games.

  • @XCanG
    @XCanG 2 місяці тому +75

    I'm not a game developer, but there are some questions I have to ask, and some personal opinion as a gamer.
    1. The main selling point I remember when Nanite was introduced was that you can make models faster by not spending time creating LODs. So for your example comparisons: how much time do you need to create the same model with Nanite vs. with LODs? Considering that all new games with a bias toward realism require a lot of detail, I think at scale it will make a difference.
    2. Maybe it's because I'm not in this field, but I really haven't heard of anyone rendering models with billions of triangles. The closest examples were Minecraft clones rewritten in C/Rust etc. that tried to achieve large render distances. Other rendering scenes where one frame may take hours of render time are not realtime, and I really haven't seen anyone else show examples like that. I can imagine you're a senior game developer with at least 6 years of experience, but how many game devs know about these optimizations? I can't imagine it's even 25%.
    3. Let's assume you can optimize better by hand: does UE5 let you handle optimization yourself? I imagine UE4 does, but what about UE5? If the answer is yes, then this is really an argument about better defaults: manual optimization at the cost of your time vs. Nanite's auto-optimization with faster creation time.
    4. As a player I want to point out that many recent titles ship with bad optimization, so much so that gamers are starting to hate the studios. My personal struggle was with Cities: Skylines 2. They use Unity, where they definitely have all these optimization abilities, but somehow they released a lagging p* of s* where an air conditioner on a building seen from flying height (far away) had 14k triangles and some pedestrians had 30k triangles. Considering they couldn't optimize properly, I believe that if incompetent devs like that just used Nanite, the game wouldn't be this laggy. For me it's realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only a few individuals can optimize properly.

    • @XCanG
      @XCanG 2 місяці тому +1

      @@miskliy1 I see.

    • @MiniGui98
      @MiniGui98 2 місяці тому +11

      "For me it's realistic to assume that a system that handles optimization automatically by default is far better than a manual one that only a few individuals can optimize properly"
      Fair point, although the "manual" technique has been used since basically the beginning of fully 3D games, and it was mastered not by "a few individuals" but by the whole industry at some point. Throwing everything into the Nanite pipeline just because it's simpler and faster is a false excuse once you know that the manual, traditional techniques will get you better performance with little to no difference in visual fidelity. Even better, the extra performance you free up with manual LODs minimizes the need for FSR or DLSS, both of which indisputably degrade image fidelity.
      Performance tests with FSR/DLSS enabled are basically a lie about the raw performance of the game. Native resolution with more traditional anti-aliasing techniques should still be the norm, as it always has been.
      Big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that.

    • @SioxerNikita
      @SioxerNikita 2 місяці тому +21

      ​@@MiniGui98 No, it's never been "the whole industry" at some point.
      Frankly, good optimizers have been few and far between since the first days of optimization in general (not just 3D). Beyond that, optimizations are not something you just apply every time; if they were, everything would already use them. An optimization that works in one product might not work in another, and might sometimes make it slower. Optimizations are not catch-all solutions, and each project needs different ones, so there isn't a single "manual" technique that has been "mastered".
      Beyond that, you have a different problem. Optimizations can only really be applied when the product is mostly done and the rendering pipeline is finished; if the pipeline keeps being developed until the end, optimizations might be wasted work, or worse, create a buggy mess, because pipeline changes interact badly with them.
      And then you have one of the most important things: time. Development time. If an automated pipeline is, for example, 30% better than doing no manual optimization at all, then the tons of time you didn't spend optimizing can be spent on optimizations and improvements elsewhere. LOD models, for example, take time to make, sometimes a SIGNIFICANT amount. You create a model, then a lower-poly model, then an even lower-poly one, and if you ever need to change that model you have to redo the lower-poly versions. That is a LOT of time. Manual LODs may perform better, but if you can offload that work to, say, Nanite and get acceptable performance, you can focus on optimizing the main model instead.
      Performance tests with FSR/DLSS are a lie? If the game is intended to be run with FSR/DLSS, then that is the performance test you should do.
      Frankly, considering that you just focus on manual optimization being better, and NONE! of the other factors in it, kind of shows me that you don't really know much about optimizing or game development... and saying "Games relying on Super-Sampling are signs of a badly optimized game"...
      Even more interesting, Super Sampling is not just "DLSS" or similar... Do you even know what Super Sampling is? Relying on super sampling is the opposite of optimizing, because it is one of the more computationally expensive features. It is essentially rendering a LARGER image than you need... That is something you use if you either have optimized heavily, so you can afford it, or you have an option for it for people who have killer setups... Super Sampling is not an optimization technique at all.
      DLSS is kind of the opposite of Super Sampling. It uses Deep Learning (Or as we would call it today "AI") to infer a lot of information, using various different techniques, essentially getting some of the effects of Super Sampling, but without actually Super Sampling. So you saying "Big games relying on super-sampling are just a sign of badly optimized games, it's as simple as that." essentially means you have basically no clue what you are talking about...
      You start out saying "fair point" and then proceed to show you don't even understand the point.
      And relying completely on a single automated optimization technique is always bad... it's not just a "big games" thing... but almost no games do that. Optimization is hard to do, very hard... if it weren't, you wouldn't see so many badly optimized games.

    • @Megalomaniakaal
      @Megalomaniakaal 2 місяці тому +4

      @@MiniGui98 Majority of games out there were rather unoptimized messes that Nvidia and AMD optimized on 'game ready' drivers release level in the DX11 and older days.

    • @Jiffy360
      @Jiffy360 2 місяці тому +3

      About your final point on CS:2: that was exactly their plan. They were going to use Unity’s competitor to Nanite, but Unity delayed or cancelled it, so they were stuck creating a brand-new LOD system from scratch at the last moment.

  • @stephaneduhamel7706
    @stephaneduhamel7706 2 місяці тому +133

    The point of Nanite was never to increase performance compared to a perfectly optimized mesh with LODs. It is made to let devs discard LODs for a reasonably low performance hit.

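For context, a "perfectly optimized mesh with LODs" typically means discrete detail levels swapped based on projected screen size. A minimal sketch of that selection logic (the thresholds here are hypothetical; real engines tune them per asset):

```python
import math

def screen_height_fraction(bounding_radius, distance, fov_y_deg):
    """Approximate fraction of screen height covered by an object's bounds."""
    if distance <= bounding_radius:
        return 1.0  # camera is inside the bounds
    # Angle subtended by the bounding sphere vs. the vertical field of view.
    subtended = 2.0 * math.asin(bounding_radius / distance)
    return min(subtended / math.radians(fov_y_deg), 1.0)

def pick_lod(bounding_radius, distance, fov_y_deg=60.0,
             thresholds=(0.5, 0.25, 0.1)):
    """Return 0 (full detail) .. len(thresholds) (coarsest level)."""
    frac = screen_height_fraction(bounding_radius, distance, fov_y_deg)
    for lod, t in enumerate(thresholds):
        if frac >= t:
            return lod
    return len(thresholds)

near = pick_lod(1.0, 3.0)    # fills much of the screen -> full detail
far = pick_lod(1.0, 200.0)   # tiny on screen -> coarsest LOD
```

The selection itself is trivial; the per-asset cost of authoring the LOD meshes it switches between is exactly what Nanite tries to eliminate.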
    • @doltBmB
      @doltBmB 2 місяці тому +84

      if losing 60% fps is "reasonably low" to you

    • @stephaneduhamel7706
      @stephaneduhamel7706 2 місяці тому +49

      @@doltBmB it's a lot less than that in most real life use cases.

    • @doltBmB
      @doltBmB 2 місяці тому +34

      @@stephaneduhamel7706 Yeah it might be as low as 40%, real great

    • @Harut.V
      @Harut.V 2 місяці тому +53

      In other words, they are shifting the cost from devs to hardware (consumers, console producers)

    • @ben.pueschel
      @ben.pueschel 2 місяці тому +14

      @@Harut.V that's how progress works, genius.

  • @piroman665
    @piroman665 2 місяці тому +47

    It's normal that a new generation of rendering techniques introduces large overheads. It's a good tradeoff, as it streamlines development and enables more dynamic games. The major issue is that developers ignore good practices and then blame the software; part of that is also Epic's fault, as they try to sell it as a magic solution for unlimited triangles, which it is not. Nanite might be slower, but it enables scenarios that would be impossible or very hard to achieve with static lighting. Sure, you can render dense meshes with traditional methods, but imagine lightmapping them, or making large levels with realistic lighting and dynamic scenarios.

    • @SydGhosh
      @SydGhosh 2 місяці тому +19

      Yeah... with all this dude's videos I find myself technically agreeing, but I don't think he sees the big picture.

    • @thegreendude2086
      @thegreendude2086 2 місяці тому +8

      @@piroman665 I believe unreal was made to be somewhat artist friendly, systems you can work with even if you do not have a deep technical understanding. Hitting the "enable nanite" checkbox so you have to worry less about polycount seems to fit that idea.

    • @lau6438
      @lau6438 2 місяці тому +3

      ​@@SydGhosh The bigger picture being deprecating traditional LODs that perform better, to implement a half-baked solution? Wow, what a nice picture.

    • @DagobertX2
      @DagobertX2 Місяць тому +4

      @@lau6438 They will make it better with time, just like back in the game-dev stone age they made LODs perform better. Imagine: there was a time when you had to optimize triangle strips for best performance 💀

    • @Bdhdh-p7h
      @Bdhdh-p7h 26 днів тому +1

      @@lau6438 LODs are costly for studios to develop.

  • @GonziHere
    @GonziHere 2 місяці тому +95

    Interesting video, but isn't Nanite using its own software rasterizer for small triangles exactly because of that issue? I'm pretty sure they said so when they started talking about it; it's in their tech talks, etc.
    This test feels somewhat fishy to me. Why not compare the normal scene instead of a captured frame, for example? That skips part of the work in one case while using the full pipeline in the other...

    • @AdamKiraly_3d
      @AdamKiraly_3d 2 місяці тому +53

      I would love to get my hands on that test scene to give it a spin. I've been working with Nanite in a AA and AAA setting for almost 3 years now, and while it was an absolute pain to figure out the quirks and the correct workflows it has been overall a positive thing for production.
      In my experience Nanite REALLY struggles with anything using masked materials or any sort of Pixel Depth edits.
      I've also seen issues with perf when most meshes were "low poly" in the sense that the triangles are very large on screen, I vaguely remember the siggraph talk where they mention that larger triangles can rasterise slower because it's using a different raster path
      Nanite handling its own instancing and draws moves a lot of the cost off the CPU onto the GPU, so the base cost being higher should surprise no one. It is also very, VERY resolution dependent: the higher you go in res, the more steeply the cost of Nanite (and VSM, Lumen) grows, not to mention the general cost increase of a larger res. I've grown to accept that the only way forward with these new bits of tech is upscaling. I'm not happy as a dev, but as a gamer I couldn't care less and use it everywhere.
      VSM has similar issues, since for shadows you effectively re-render the nanite scene, but there are ways to optimise that with CVars, and generally it's been better performing than traditional shadow maps would when used with nanite - there's a caveat there, since if you bring foliage into the mix then nanite can get silly expensive both in view and in shadows.
      Collision memory has also been a major concern since for complex collision UE uses the Nanite fallback mesh by default, so in a completely Nanite scene you can end up with more collision mem than static mesh memory
      I also feel like having a go at Epic for not maintaining "legacy" feature compatibility indefinitely is a bit unfair. Both VSM and Lumen rely on Nanite to render efficiently and were created as a package. Epic decided that is the direction they want to take the engine, an engine that is as complex as it's large, it is almost expected to lose some things along the way. That being said I have run into many things that I wish they cared enough about to fix (static lighting for example has a ton of issues that will never be fixed cause "just use dynamic lighting"), but at the same time I won't start having a go at them for not supporting my very specific usecase that is not following their guidance.
      No part of this tech is perfect and I get the frustration, but it did unlock a level of quality for everyone to use that was only really available in proprietary engines before.
      Also, do we really think CDPR would drop their own insane engine for shits and giggles if they didn't think switching to UE5 was a better investment than upgrading their own tech?
      Same with many other AAA companies, and you can bet your ass they took their sweet time to evaluate whether it makes sense (not to mention they will inevitably contribute a TON to the engine that will eventually make it back to main).

    • @ThreatInteractive
      @ThreatInteractive  2 місяці тому +12

      re: Why not compare the normal scene, instead of captured frame, for example?
      You can compare yourself because we already analyzed the "normal" scene, same hardware, same resolution: ua-cam.com/video/Te9xUNuR-U0/v-deo.html
      Watch the rest of our videos as everything we speak about is interconnected.

    • @SioxerNikita
      @SioxerNikita 2 місяці тому +17

      @@AdamKiraly_3d A thing a lot of non-devs forget, is time...
      Time is the ultimate decider of quality: the more time spent on optimizing and polishing, the less time is spent on the game itself. CDPR is dropping their own engine because at this point, upgrades that make it more performant or higher fidelity take exponentially longer with each feature, and they don't have one "specific" kind of game the engine needs to serve. If they only developed FPS games, continuously upgrading a single engine might make sense, but they want to make more than FPS games.
      Every second spent upgrading the engine or doing tedious graphical optimization on models is time that could have been spent fixing physics bugs, weird interactions, broken map geometry and other things that in the end make the game feel a lot better. They can also focus on larger-scale optimizations that might increase performance more overall: instead of doing 2-5 LOD models per model, you can focus on getting the polycount of the base model down and increasing the perceived fidelity.
      The days of hyper-optimization are over. Hardware is becoming ever more complex, and it is becoming infeasible to talk to hardware directly when there is SO much different hardware to account for. Games keep getting prettier, and even the gamers saying "graphics don't matter" still won't buy a game that looks like a PS1 game, because it feels like a shoddy game. Rising polycounts, rising player expectations and an ever more competitive market... yeah, devs need tools to automate a lot of the process.

    • @satibel
      @satibel 2 місяці тому +1

      ​@@SioxerNikita imo graphics don't matter is more of a "yeah realistic graphics are neat, but consistent graphics are better"
      I'll take cartoony graphics like Kirby's Epic Yarn over Battlefield graphics. Realism looks good, but so does stylized, and a well-stylized game can be far more efficient and age far better while still looking as good as or better than wannabe-photorealistic graphics.
      If we're talking PS1, a game like Vib-Ribbon still looks good today, and for a more well-known example, Crash Bandicoot.
      They look miles better than asset-flip-looking franchise games that have realistic graphics but aren't cohesive and have a meh game underneath.

    • @lycanthoss
      @lycanthoss 2 місяці тому +3

      ​@@satibelRealistic graphics clearly sell. Just look at Black Myth Wukong.

  • @bits360wastaken
    @bits360wastaken 2 місяці тому +51

    10:49, AI isn't some magic silver bullet; an actually good LOD algorithm is needed.

    • @michaelbuckers
      @michaelbuckers 2 місяці тому +28

      He's talking about a hypothetical AI-based LOD generator, presumably with better output than algorithmic simplifiers. Which is a fair guess, considering that autoencoding is the bread and butter of generative AI. I'd hazard a guess that you could adapt a conventional image-generation AI to inpaint an original mesh with a lower-poly, color-coded mesh from various angles, and use those pictures to reconstruct a LOD mesh.

    • @yeahbOOOIIÌIIII
      @yeahbOOOIIÌIIII Місяць тому +5

      What he is suggesting is amazing. I have to jump through hoops to get programs like RealityCapture (which is amazing) to simplify photogrammetry meshes intelligently; they often destroy detail. This is an area where machine learning could shine, making micro-optimizations to the LOD that lead to better quality at higher performance. It's a great idea.

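The classical simplifiers such tools build on range from quadric edge collapse down to crude vertex clustering: snap vertices to a grid, merge duplicates, and drop triangles that collapse. A toy sketch of the clustering variant (pure Python, illustrative only; production simplifiers preserve appearance far better):

```python
def decimate_by_clustering(vertices, triangles, cell=1.0):
    """Merge vertices sharing a grid cell; drop triangles that collapse."""
    cluster_of = {}   # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_verts = []
    for (x, y, z) in vertices:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_verts)
            new_verts.append((x, y, z))  # real impls average cell members
        remap.append(cluster_of[key])
    new_tris = []
    for (a, b, c) in triangles:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if a2 != b2 and b2 != c2 and a2 != c2:  # skip degenerate triangles
            new_tris.append((a2, b2, c2))
    return new_verts, new_tris

# Two nearby vertices merge under a coarse grid, collapsing one sliver
# triangle while the large triangle survives.
verts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (2.5, 0.0, 0.0), (0.0, 2.5, 0.0)]
tris = [(0, 1, 2), (0, 2, 3)]
lod_verts, lod_tris = decimate_by_clustering(verts, tris, cell=1.0)
```

Clustering is fast but blind to curvature and silhouettes, which is exactly where a learned or error-driven simplifier could do better.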
    • @MarcABrown-tt1fp
      @MarcABrown-tt1fp 19 днів тому

      @@yeahbOOOIIÌIIII Of course the meshes would still need to be handmade. Generative AI doesn't seem to work well with constraints like quad topology imposed.

  • @Bitshift1125
    @Bitshift1125 19 днів тому +5

    9:20 This dithering is so, so common nowadays, and it looks TERRIBLE! Hair especially just looks like a noisy mess in most new games, and there is no way to turn your settings up high enough to make it look good because of the dithering. Then everyone tells you that TAA and upscaling fix it. I don't want either of those technologies active, because they ruin the final image no matter what you do. It drives me nuts that people say, "Oh, there's a visual issue? Turn on a bigger one to fix it."

  • @Gurem
    @Gurem 2 місяці тому +9

    I remember Epic saying this would not replace traditional methods but should be used in tandem with them, as it's a way to increase productivity. Tbh this video taught me more about optimization than any optimization video and didn't waste my time. As an indie it reinforced my desire to use Nanite while also teaching me more hands-on techniques that, while requiring more work, may result in better performance I can aim for when I have the free time. Thank you for demystifying the BS; I really couldn't understand the tech from those other YT videos, as they were purely surface-level, quickly churned content.

  • @rimuruslimeball
    @rimuruslimeball 2 місяці тому +52

    These videos are amazing, but to be honest a lot of it flies over my head. What should we as developers (especially indie) do to ensure we're following good optimization practices? A lot of what your videos discuss seems to require an enormous amount of deep technical understanding of GPUs that I don't think many of us can realistically obtain. I'm very interested but not sure where to start, nor where to go from there. I'm sure I'm not the only one.

    • @cadmanfox
      @cadmanfox 2 місяці тому +4

      I think it is worth learning how it all works, there are lots of free resources you can use

    • @marcelenderle4904
      @marcelenderle4904 2 місяці тому +5

      As an indie developer I feel it's very important to know the problems and limitations of those techs and the concepts behind good practices. That doesn't necessarily mean you have to apply them. Nanite, Lumen, DLSS etc. can be very efficient as cheap solutions. If it speeds up your game by a lot and gets you the result you want, then for me at least it's what you should aim for. Those critiques of Unreal are great for studios and the industry itself.

    • @Vysair
      @Vysair 2 місяці тому +3

      I have a diploma in IT, which is really just CS, and I don't have a clue what this guy is talking about either.

    • @JorgetePanete
      @JorgetePanete 2 місяці тому

      A lot*

    • @Vadymaus
      @Vadymaus 2 місяці тому +1

      @@anonymousalexander6005 Bullshit. This is just basic graphics-rendering terminology.

  • @nazar7368
    @nazar7368 2 місяці тому +118

    Epic Games' marketers killed the fifth version of the engine. Back in 2018 there were no video cards that supported mesh shaders. They took advantage of this and added an illusion of support for old video cards: Lumen and Nanite run on old cards, but only as software implementations, which can never reach the level of hardware mesh shaders and ray tracing. This led to real problems in the engine core, namely that DX12 and Vulkan don't work correctly and have low efficiency due to old code written for DX11. And that's not counting the problems with the engine's layered rendering and compositing algorithms, because everyone already sees the perpetual blurring of the image and annoying sharpening.
    This won't be fixed until they add hardware support for mesh shaders and the new ray-tracing tier (literally added by the DXR 1.2 library). For example, Nvidia holds many conferences showing its improved ReSTIR algorithms in Unreal Engine 5 and gives access to them, but the developers stubbornly go their own way and keep feeding the gaming industry another Anthem.

    • @JoseDiaz-he1nr
      @JoseDiaz-he1nr 2 місяці тому +5

      idk man Black Myth Wukong seems like a huge success and it was made in UE5

    • @manmansgotmans
      @manmansgotmans 2 місяці тому +12

      The push to mesh shaders could have been nvidia's work, as their 2018 gpus supported mesh shaders while amd's gpus did not. And nvidia is known for putting money where it hurts their competitor

    • @Ronaldo-se3ff
      @Ronaldo-se3ff 2 місяці тому +35

      @@JoseDiaz-he1nr Its performance is horrendous, and they reimplemented a lot of tech in-house just so it could run at all.

    • @topraktunca1829
      @topraktunca1829 2 місяці тому +22

      @@JoseDiaz-he1nr Yeah, they made a lot of hard internal optimizations and in the end it runs at an "eh, well enough I guess" level. Not to mention the companies that didn't bother with such optimizations, like Immortals of Aveum and Remnant 2. Those games are unplayable on anything other than a 4090 or 4080, and even then they still have problems.

    • @nazar7368
      @nazar7368 2 місяці тому +13

      @@JoseDiaz-he1nr A really great success: a 2010-level picture at 30 fps on a 4090, with a blurry image.

  • @badimagerybyjohnromine
    @badimagerybyjohnromine 10 днів тому +4

    Whenever someone posts a comparison showing Unity has better ray tracing and better performance, people lose their minds and mass-dislike the video, and it's been this way for years. I'm glad it's starting to change. I don't really like using Unity's engine; sure, it's better looking and better performing, but my hope is that Unreal improves most of all.

  • @FrancisBurns
    @FrancisBurns 6 днів тому +2

    Yeah, Digital Foundry seems to like blaming a "slow" CPU instead of rightly blaming poor optimization in UE5.

  • @salatwurzel-4388
    @salatwurzel-4388 2 місяці тому +23

    Remember the times when {new technology} was very resource-intensive and dropped your fps, but it was okay after a relatively short time because the hardware became 3x faster?
    Good times :D

  • @RiasatSalminSami
    @RiasatSalminSami 2 місяці тому +200

    Can't ever take Epic seriously when they can't even be arsed to prioritize shader compilation stutter problems.

    • @USP45Master
      @USP45Master 2 місяці тому +30

      Write the shader compiler step yourself! Only about 30 minutes... put it in a blank level that async-compiles while showing a loading screen :)

    • @user-ic5nv8lj9d
      @user-ic5nv8lj9d 2 місяці тому +2

      maybe provide a solution?

    • @bricaaron3978
      @bricaaron3978 2 місяці тому +6

      Unreal Engine has been a console engine for a long time.

    • @randomcommenter10_
      @randomcommenter10_ 2 місяці тому +27

      What's interesting is that UE4 & 5 actually have a built-in setting in the ini files to precompile shaders, called "r.CreateShadersOnLoad", but weirdly it's set to False by default when it should be True. Weirder still, UE3 has a different ini setting called "bInitializeShadersOnDemand", which is False by default, which actually means all shaders are precompiled before the game starts. I have no idea why Epic didn't enable shader precompilation by default in UE4 & 5, but at least the setting is there, and more devs should know to turn it on for their games to help reduce shader-compilation stutter.
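      For anyone wanting to try it, the cvar named above can be set project-wide in Config/DefaultEngine.ini. This is a sketch: the [SystemSettings] section is the usual home for r.* cvars, but verify the behavior on your engine version.

      ```ini
      ; Config/DefaultEngine.ini -- sketch, verify against your engine version.
      [SystemSettings]
      ; Create shaders on load instead of on demand (False by default, as noted above).
      r.CreateShadersOnLoad=1
      ```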

    • @RiasatSalminSami
      @RiasatSalminSami 2 місяці тому +8

      @@user-ic5nv8lj9d why do I have to provide a solution? If devs need to have their own solution for such a game breaking problem, they might as well make their own engine.

  • @almighty151986
    @almighty151986 Місяць тому +13

    Nanite is designed for geometry way beyond what we currently have in games.
    So until games' geometry is high enough to require Nanite, Nanite will be slower than traditional methods.
    Maybe next generation will get there.
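    That crossover intuition can be sketched with a toy cost model. Every constant below is a made-up assumption for illustration, not a profiled number from any real GPU: the idea is only that Nanite pays a large fixed per-frame cost with a small marginal cost per triangle, while the traditional pipeline starts cheap but scales linearly.

    ```python
    # Toy frame-cost model (milliseconds). All constants are illustrative
    # assumptions, not measurements.

    def traditional_cost_ms(tris: int) -> float:
        # Small fixed overhead plus a linear per-triangle cost.
        return 0.2 + tris * 2e-7

    def nanite_cost_ms(tris: int) -> float:
        # Large fixed overhead (culling, visibility buffer, VSM bookkeeping)
        # plus a much smaller marginal cost per triangle.
        return 2.0 + tris * 1e-8

    def crossover_tris() -> int:
        # Solve 0.2 + 2e-7 * t == 2.0 + 1e-8 * t for t.
        return int((2.0 - 0.2) / (2e-7 - 1e-8))

    if __name__ == "__main__":
        t = crossover_tris()
        print(f"crossover at ~{t:,} triangles")
        for tris in (1_000_000, t, 100_000_000):
            print(tris, round(traditional_cost_ms(tris), 2),
                  round(nanite_cost_ms(tris), 2))
    ```

    Under these assumed constants the traditional path wins below roughly ten million visible triangles, which matches the commenter's point: below the crossover, the fixed cost dominates and Nanite loses.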

    • @youtubehandlesux
      @youtubehandlesux Місяць тому +1

      With the current performance increase of hardware it'll take like 20 years to reach that point.

    • @korcommander
      @korcommander 17 днів тому

      We are at the point of diminishing returns on poly count. Like, how many more polygons do we need to render 2B's buttcheeks?

  • @PanzerschrekCN
    @PanzerschrekCN 2 місяці тому +87

    The whole point of Nanite existence is not to make games faster, but to reduce content creation costs. With Nanite it's not needed anymore to spend time creating LODs.

    • @alex15095
      @alex15095 2 місяці тому +21

      Exactly this, it makes things a lot easier if you can just mash an unoptimized 3d scan and a bunch of 150k poly models to make a scene and just make it work. As we know with electron apps, sometimes it's not about what's most efficient but rather what's easier for developers. An AI solution is unlikely as we've not found a good architecture/representation that effectively combines mesh shape, topology, normals, UVs, and textures, it's a much more complicated problem than just image generation

    • @chiboreache
      @chiboreache 2 місяці тому

      @@alex15095 You can make a synthetic dataset by using Blender's Sverchok addon and modeling everything procedurally.

    • @dbp_pc3500
      @dbp_pc3500 2 місяці тому +10

      Dude, LODs are generated automatically by any number of tools. It's not time-consuming at all.

    • @antiRuka
      @antiRuka 2 місяці тому +6

      Generate and save LODs for a couple of 100k-poly meshes, please.

    • @mercai
      @mercai 2 місяці тому +30

      @@dbp_pc3500 Tell us you haven't made actual quality LODs without telling us.

  • @AlexKolakowski
    @AlexKolakowski 2 місяці тому +56

    You didn't really touch on Nanite in combination with Lumen. Part of the benefit of this workflow is that runtime GI doesn't require any light baking. The baseline frame cost is higher, but the dev hours saved by not having to optimize lights are worth leaving older cards behind for some devs.

    • @manoftherainshorts9075
      @manoftherainshorts9075 Місяць тому +14

      "Why would we work if we can make players work more to afford better hardware for our game?" - Unreal developers circa UE5 release

    • @JensonTM
      @JensonTM Місяць тому +7

      horrible take

    • @gelisob
      @gelisob Місяць тому +6

      Agreed, very horrible take. Haven't you seen game development costs? Do you want more devs left jobless and projects canceled because "there's a way to make it slightly better" at 300% of the development time? Yeah, I think you either get the card or accept a few less frames, and the project happens.

    • @gamechannel1271
      @gamechannel1271 Місяць тому +7

      There's plenty of single-person developer teams who make very optimized and good looking games in a timely manner with traditional techniques. Your excuse basically boils down to "skill issue".

    • @realmrpger4432
      @realmrpger4432 20 днів тому

      @@manoftherainshorts9075 For PC gamers, yeah. But consoles are more-or-less a fixed cost for gamers.

  • @MrSofazocker
    @MrSofazocker 2 місяці тому +19

    What this fails to capture is that Nanite uses a software rasterizer, which doesn't suffer from quad overdraw at all.
    Small enough clusters of vertices are offloaded to a software rasterizer and geometry assembler.
    The performance degradation most likely comes from not using GPU Resizable BAR, or from other fudging of the measurements.
    Epic could do a better job of providing information on incompatible project settings etc.
    But what it does is allow you to put way more polygons on your screen; that's still true.
    Bringing in an 8th-gen game and enabling Nanite won't give you more performance, it could even give you worse performance.
    Also, even today, if you want to start a game project, use 4.26 like they tell you.

  • @B.M.Skyforest
    @B.M.Skyforest Місяць тому +12

    What many seem to forget is that DLSS and other upscaling things are meant for games to run on slow hardware. And now it's required to be ON if you want to have nice framerate on your top notch PC at ultra settings. It always makes me laugh and also sad at the same time seeing 2010s level of graphics with barely 30 fps on modern machines. We had better looking and faster games back in the day.

  • @imphonic
    @imphonic 2 місяці тому +41

    This kind of thing has been bugging me for such a long time and it's good to know that I'm not completely insane. I'm currently using Unreal Engine 5 to prototype my game, using it as a temporary game engine (like how Capcom used Unity to build RE7 before the RE Engine was finished). My game will be finished on a custom game engine, which I will open-source when I'm finished. I don't want my debut to be ruined by awful performance and questionable graphics quality. I currently target 120 FPS on PS5/XSX, not sure what resolution yet, but all I know is that we're in 2024 and a smeared & jittery 30 FPS is simply unacceptable.
    I'm not trying to compete with Unreal/Unity/Godot, but I am interested in implementing a lot of old-school techniques which were very effective without destroying performance, while also exploring automated optimization tools rather than pushing the load onto the customers that make my career possible. The neural network LOD system is intriguing, and it might not be perfect, but it might still be a net improvement, so I'll keep that one in mind.
    Edit: I might not be able to finish the game on the custom engine, and might just have to bite the bullet and ship on UE5. Game engine development is simply super expensive and I don't have graphics programming experience. That doesn't mean it won't happen - it's still possible if I can, say, find a graphics programmer - but I'm no longer comfortable guaranteeing that statement. However, I will be shipping future games on my custom engine to avoid the UE5 problem from plaguing future ones.

    • @4.0.4
      @4.0.4 2 місяці тому +11

      You're either a genius or insane. Much luck and caffeine to you regardless (we need more rule breakers).

    • @MrSofazocker
      @MrSofazocker 2 місяці тому +4

      Starting a game project in UE5... good luck with that.
      They even tell you to stay on UE 4.26.
      Seeing you playing more Minecraft in your recent video uploads, I call cap.
      If you just want an editor, you can still use Unreal Engine without the engine part and write your own render pipeline.
      No point in rewriting your own graphical editor, asset system, sound system, tools, and packaging for PS and XSX... especially getting it all to work.
      I imagine you don't even have a dev console for any of those... so good luck getting one as an indie dev.

  • @cenkercanbulut3069
    @cenkercanbulut3069 Місяць тому +4

    Thanks for the video! I appreciate the effort you put into comparing Nanite with traditional optimization methods. However, the full potential of Nanite might not be fully apparent in a test with just a few meshes. Nanite shines when dealing with large-scale environments that have millions of polygons, where it can dynamically optimize the scene in real time. The true strength of Nanite is its ability to manage massive amounts of detail efficiently, which might be less visible in smaller, controlled setups. It would be interesting to see how both approaches perform in a more complex scene with more assets, where Nanite’s real-time optimization could show its advantages. Looking forward to more in-depth comparisons in the future!

  • @ProjectFight
    @ProjectFight 2 місяці тому +3

    Okay... a few things. This video was way faster than I could follow, but it was sooo interesting. I love seeing the technical side of how games are properly optimized, what counts and what doesn't. And I will ALWAYS support those willing to go the extra mile to properly research these things. Sooo, new sub :)

    • @NeverIsALongTime
      @NeverIsALongTime Місяць тому +1

      XD I know Kevin in real life; in real life he speaks way faster! He's chill in his videos. He is brilliant, probably a bit on the spectrum (in a good way). He is even more passionate and intense in real life. I have read some of his screenplay for his upcoming game; it is terrifying and edge-of-your-seat exciting!

  • @hoyteternal
    @hoyteternal 2 місяці тому +9

    Nanite is an implementation of a rendering technique called the visibility buffer, which was specifically created to overcome quad-utilization issues. Once triangle density approaches a single pixel per triangle, the better quad utilization of visibility (Nanite) rendering greatly outweighs the additional cost of interpolating vertex attributes and analytically calculating partial derivatives. Search for the article "Visibility Buffer Rendering with Material Graphs" on the Filmic Worlds website; it's a good read, with lots of testing and illustrations.
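    For readers unfamiliar with the term: a visibility buffer stores only a depth plus (instance, triangle) IDs per pixel in the geometry pass, and a later pass shades each surviving pixel exactly once. A toy CPU sketch of the concept (not Nanite's actual implementation; the fragment list stands in for a rasterizer):

    ```python
    # Toy visibility-buffer model. A fragment is (pixel, depth, instance_id, tri_id).
    # Pass 1 resolves visibility per pixel; pass 2 runs the (expensive) material
    # shading exactly once per visible pixel, so overdraw never reaches shading.

    def visibility_pass(fragments):
        vis = {}  # pixel -> (depth, instance_id, tri_id)
        for pixel, depth, inst, tri in fragments:
            if pixel not in vis or depth < vis[pixel][0]:
                vis[pixel] = (depth, inst, tri)
        return vis

    def material_pass(vis, shade):
        # Shading cost is proportional to visible pixels, not submitted fragments.
        return {px: shade(inst, tri) for px, (_, inst, tri) in vis.items()}

    if __name__ == "__main__":
        # Two fragments overlap at pixel (0, 0); only the nearer one gets shaded.
        frags = [((0, 0), 0.9, 1, 7), ((0, 0), 0.3, 2, 4), ((1, 0), 0.5, 1, 8)]
        shaded = material_pass(visibility_pass(frags), lambda i, t: f"mat{i}")
        print(shaded)
    ```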

    • @ThreatInteractive
      @ThreatInteractive  2 місяці тому +7

      In the original paper on visibility buffers, the main focus was on bandwidth-related performance. Visibility buffers might not be completely out of consideration; we've spoken with some graphics programmers who state their implementation can speed up opaque objects, but we are still in the process of exploring the options here.
      While Nanite is a solution to real issues, it's a poor solution regardless, because the cons outweigh the pros.
      We've seen the paper you mentioned, and we also showed other papers by Filmic Worlds in our first video (which discussed more issues with Nanite).

  • @QuakeProBro
    @QuakeProBro 2 місяці тому +62

    Great video, you've talked about many things that really bother me when working with UE5. Extreme ghosting, noise, flickering, a very blurry image, and, compared to Unreal Engine 4, much much worse performance with practically empty scenes (sometimes up to an 80 fps difference on my 2080 Ti). All this fancy tech, while great for cinematics and film, introduces so many unnecessary problems for games, and Epic seems to simply not care.
    If they really want us to focus on art instead of optimizing, give us next-gen-worthy, automated optimization tools instead of upscalers and denoisers that destroy the image for a "better" experience. That is only battling the symptoms.
    And don't get me wrong, I find Lumen and Nanite fascinating, but they just don't keep what was promised (yet).
    Thanks for talking about this!

    • @keatonwastaken
      @keatonwastaken 2 місяці тому +1

      UE4 is still used and is capable; UE5 is more for people who want fancier looks early on.

  • @Madlion
    @Madlion 2 місяці тому +12

    Nanite is different because it saves development cost by saving the time to create LODs; it's a simple plug-and-play system that just works.

  • @pchris
    @pchris 2 місяці тому +16

    I think easy, automatic optimizations that are less effective than manual ones still offer some value. The fewer resources a studio has to dedicate to technical things like these, the faster it can make games, even if they look slightly worse than if it took absolutely full advantage of the hardware.
    Every other app on your phone being over 100 MB for what is basically a glorified web page shows how dirty-but-easy optimization plus faster hardware mostly just enable faster and cheaper development by letting developers be a little sloppy.

    • @theultimateevil3430
      @theultimateevil3430 2 місяці тому +2

      It's great in theory, but in practice development is still expensive as hell and the quality of the products is absolute trash. It's the reason the volume control in Windows lags for a whole second before opening. The same stuff that worked fine on Windows 95 lags now. Dumbasses with cheap technology still make bad products for the same price.

    • @pchris
      @pchris 2 місяці тому +3

      @@theultimateevil3430 When you're looking at large products made by massive publicly traded corporations, you should never expect cost savings to get passed on to the consumer.
      I'm mostly talking about indies. The cheaper and easier it is to make something, the lower the bar of entry is, and the more you'll see small groups stepping in and competing with the massive, selfish corps.

  • @Lil.Yahmeaner
    @Lil.Yahmeaner 2 місяці тому +63

    This is exactly how I’ve felt about graphics for years now. Especially at 1080p, all the ghosting, dithering, and shimmering of UE5 gets unbearable at times and everyone is using this engine. It’s like you have to play at 4k to mitigate inherent flaws of the engine but that’s so demanding you have to scale it back down which makes no sense. Especially bad when you’re trying to play at 165hz and most developers are still aiming at barely 30-60fps, now exacerbated by dlss/framegen.
    Just like all AI, garbage data in, garbage data out, games are too unique and unpredictable to be creating 30+ frames out of thin air.
    Love the videos, very informative and well spoken. Keep up the good fight!

    • @nikch1
      @nikch1 2 місяці тому +6

      > "all the ghosting, dithering, and shimmering"
      Instant uninstall from me. Bad experience.

    • @Khazar321
      @Khazar321 2 місяці тому +6

      For years now? Yeah I don't think that's UE5 mate. Maybe stop hopping on the misinformation train here...

    • @s1ndrome117
      @s1ndrome117 2 місяці тому +4

      @@Khazar321 you'd understand if you ever use unreal for yourself

    • @MrSofazocker
      @MrSofazocker 2 місяці тому +4

      The notion that these defaults cannot be changed, and that it's somehow a systemic issue of Unreal, is insane to me.
      If you just use something you don't even understand, leave everything on default, and press a "make game" button, what kind of optimization do you expect?

    • @Khazar321
      @Khazar321 2 місяці тому +2

      @@s1ndrome117 I did and I have also seen the train wrecks that lazy devs cause with UE4 and engines around the same time.
      Horrible stutters, bad AA options, blurry and grey(unless you fix it yourself with HDR/ReShade), shader issues, etc.
      So yeah tell me how horrible UE5 is and what lazy devs can do wrong with it. I have seen it all in 30 years of gaming.

  • @gameworkerty
    @gameworkerty 2 місяці тому +4

    I would kill for an overdraw view in Unity like Unreal has, especially because there is a ton of robust mesh-instancing support via Unity plugins that Unreal doesn't have.

  • @sandybeach95
    @sandybeach95 20 днів тому +1

    I absolutely love these videos. God willing, the awareness you've raised on this matter will engender positive change in the industry.

  • @mike64_t
    @mike64_t Місяць тому +4

    Good video, but I disagree that you can currently train an AI model to reduce overdraw.
    There is no architecture currently that can really take an AAA model as an input.

    • @ThreatInteractive
      @ThreatInteractive  Місяць тому +1

      We are more detached from utilizing AI for implementing max-surface-area topology than most people are giving us credit for. We just need faster systems for LODs. One of the biggest problems with the LOD workflow in UE vs Nanite is that LOD calculation is extremely slow, while Nanite's is near instant even with millions of triangles. We also need a polished system that bakes micro-detail into normal/depth maps faster.
      The way we see it, it's always going to be an algorithm. Most AIs that get trained enough revert to one anyway.

    • @mike64_t
      @mike64_t Місяць тому +4

      ​@@ThreatInteractive "Most AIs that get trained enough revert to one anyway"... mhhh, I wouldn't say so. Yes, in a sense it's an algorithm, but the compactness, discreteness, and optimality you picture when you hear the word "algorithm" are not present in a bunch of matrix multiplications that softly guide the input to its output. Just because it is meaningful computation doesn't make it deserving of the word algorithm. The LTT video isn't really accurate and makes some dangerous oversimplifications.
      I agree that tooling needs to become better. I too would love a magic architecture that you could just PPO-minimize render time with, and which invents all of topology theory from that, but that is a long way off... Transformers are constrained by sequence length and have a bias toward discrete tokens; not ideal for continuous vertex data.
      For now it seems like you need to bite the bullet and write actual mesh-rebuilding algorithms.

  • @ArtofWEZ
    @ArtofWEZ Місяць тому +6

    I see Nanite like Blueprints: Blueprints run slower than pure C++, and Nanite runs slower than traditional meshes, but both are a lot more fun to work with than the traditional ways.

  • @therealvbw
    @therealvbw Місяць тому +3

    Glad people are talking about these things. You hear lots of chatter about fancy UE features and optimisations, while games get slower and look no better.

  • @Anewbis1
    @Anewbis1 Місяць тому +1

    Great content, thank you for this! One question that comes to mind: the industry has decades of experience with traditional rendering methods, while Nanite is only a few years old. Do you think that's a factor to take into consideration when comparing? Do you see Nanite being far more efficient in 5-10 years?

    • @ThreatInteractive
      @ThreatInteractive  Місяць тому

      Great question.
      The only way for Nanite to improve would be to change it so drastically that it would be a bit of a cheat to still call it "Nanite". Our video is talking about the same algorithm that has received 5 iterations.
      Nanite is an implementation of a couple of different concepts, such as visibility buffers, mesh compression, and cluster culling, but the industry could meet (and already has met) those same goals with better performance results. For instance, deferred texturing in the Decima Engine.
      How well Nanite runs on later hardware doesn't matter. The industry has been given its target hardware (9th gen), and the results we're getting from UE5 Nanite-enabled games are a joke in terms of visuals and reasonable potential. If we waste potential now, next gen (10th) and onward will lose value as well.

  • @riaayo5321
    @riaayo5321 Місяць тому +4

    "It would have to be free" and thus the unsustainable, artificially low cost of entry for AI products that then do not actually generate profit marches on.
    I do appreciate the in depth look at nanite's problems, I don't mean to sound sour on that. But AI is in a huge bubble. Companies are going in on them at an already artificially low cost of entry and are *still* losing money. It's just not sustainable, and I'm not sure "make these tools exist so our less powerful gpus have more value because games are more optimized" is a good enough return on investment for AMD or Intel.
    I'm not saying I know the answer other than Epic just needs to not be sunsetting older, working methods of optimization.

    • @sylphianna
      @sylphianna Місяць тому +1

      dog heard the word AI and got scared
      this is the kind of thing it is actually supposed to be used for, making an approximation of something we already have, which is the literal main driving force behind LODs, especially because this isn't meant to be a replacement or better version of what we have, but a lower quality one, which is the folly of what companies are trying to do with generative AI; making a shittier version of things when they really really shouldn't be thinking it will be better or more convenient, when they mostly just create more work trying to make whatever the generative AI spit out into something workable, when it would be easier in the end for a human to just make the original asset instead and have a better outcome from the start.
      meanwhile what the AI suggested here will do is the opposite of all that nonsense in essence

    • @sylphianna
      @sylphianna Місяць тому +1

      tbh the fact we haven't already developed AI that could make LODs before all this gen AI stuff made people scared of the word is a tragedy

  • @astreakaito5625
    @astreakaito5625 2 місяці тому +1

    Incredibly in-depth video once again, really inspiring. I'm only dabbling in game dev, but this explains so much about what we've started to see in recent games!

  • @robbyknobby007
    @robbyknobby007 2 місяці тому +30

    I usually find people excusing Unreal's performance in titles by mentioning that developers have the choice not to use techniques like Lumen and Nanite, but Unreal 5.0 deprecated TESSELLATION, so where was the developer choice there? You should really do some tests comparing tessellation, parallax occlusion, and Nanite. Because if tessellation doesn't perform better than Nanite, then Epic did the community a service by removing it in favor of a replacement. But if tessellation is faster, then Epic really is to blame for a lot of sh*tty performance.

    • @vitordelima
      @vitordelima 2 місяці тому +6

      Tesselation can be faster and better looking than Nanite, but nobody cared to code methods to export it from 3D modelling software or convert existing meshes into it.

    • @vitordelima
      @vitordelima 2 місяці тому +1

      @@RADkate There is no easy way to create content that uses it.

  • @Dom-zy1qy
    @Dom-zy1qy 2 місяці тому +2

    I am so glad I clicked on this video. I'm a noob when it comes to graphics programming, but I've learned quite a lot just hearing you talk about things. I didn't know shaders are run on quads; I thought they were per-pixel. Maybe the reason relates to the architecture of GPUs? Some kind of SIMD action going on?
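    To the question above: yes, it's essentially a SIMD thing. GPUs shade fragments in 2x2 quads so neighboring lanes can be differenced to get texture-coordinate derivatives for mip selection; quad pixels that fall outside the triangle still execute as helper lanes. A toy sketch (illustrative only, not any real GPU's scheduler) of why thin triangles waste shading work:

    ```python
    # Sketch: GPUs shade in 2x2 pixel quads so neighboring lanes can be
    # differenced for texture-mip derivatives. Pixels in a touched quad that
    # fall outside the triangle still run as "helper" invocations.

    def quad_shading_cost(covered_pixels):
        """covered_pixels: set of (x, y) pixels a triangle actually covers.
        Returns (useful_invocations, total_invocations_including_helpers)."""
        quads = {(x // 2, y // 2) for x, y in covered_pixels}
        return len(covered_pixels), 4 * len(quads)

    if __name__ == "__main__":
        # A sliver triangle covering 4 pixels spread across 4 separate quads:
        # 4 useful invocations, 16 launched -> 25% utilization.
        sliver = {(0, 0), (2, 1), (4, 2), (6, 3)}
        print(quad_shading_cost(sliver))
        # A fat triangle filling one quad exactly: 100% utilization.
        print(quad_shading_cost({(0, 0), (1, 0), (0, 1), (1, 1)}))
    ```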

  • @rockoman100
    @rockoman100 Місяць тому +18

    Am I the only one who thinks LOD dithering transitions are way more noticeable and distracting than just having the models "pop" instantly between LODs?

    • @Lylcaruis
      @Lylcaruis Місяць тому

      When I watched that part I thought that too. Pretty sure having them instantly switch runs faster as well.

    • @bertilorickardspelar
      @bertilorickardspelar Місяць тому +1

      Dithering works pretty OK if you do the fade while the camera pans or tilts. If you dither while just moving forward, it's quite noticeable. It also depends on the asset: some assets can just flip without you noticing, while others are very noticeable when they flip drastically. A car may flip without you noticing, while a tree may benefit from some dither.
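      For reference, the cross-fade being discussed is typically ordered (Bayer) dithering: each pixel compares the mesh's fade alpha against a fixed threshold matrix and is kept or discarded, approximating transparency without blending or sorting. A minimal sketch of the screen-door test:

      ```python
      # Minimal ordered-dither cross-fade sketch: a screen-door transparency
      # test using a 4x4 Bayer matrix, as commonly used for LOD transitions.

      BAYER_4X4 = [
          [ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5],
      ]

      def keep_pixel(x: int, y: int, alpha: float) -> bool:
          """True if the fading-in mesh writes this pixel at the given alpha."""
          threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
          return alpha > threshold

      def coverage(alpha: float) -> float:
          # Fraction of a 4x4 tile the mesh covers at this fade alpha.
          kept = sum(keep_pixel(x, y, alpha) for y in range(4) for x in range(4))
          return kept / 16.0

      if __name__ == "__main__":
          for a in (0.0, 0.25, 0.5, 1.0):
              print(a, coverage(a))
      ```

      The outgoing LOD uses the inverted test, so every pixel is written by exactly one of the two LODs during the fade; TAA then blurs the checkerboard pattern, which is exactly the noise these comments are complaining about.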

    • @Shooha_Babe
      @Shooha_Babe Місяць тому +1

      Yeah, I've played a lot of games and seen both methods used.
      The classic "pop" method seems to work best in terms of immersion.
      You don't focus your eyes on meshes when they transition between LODs, but with dithering your eye catches them at the side.

  • @AdamTechTips27
    @AdamTechTips27 2 місяці тому +8

    I've been feeling and saying the same thing for years, just without the concrete testing. This video proves it all. Thank you!

  • @Miki19910723
    @Miki19910723 2 місяці тому +9

    Technically everything you say is right from a certain perspective, but your conclusions are very weird, especially the AI one, given the quality problems AI usually has. I think we should not confuse what some YouTubers say with how the feature works and why it was developed; if anyone didn't catch that, it's exactly for the workflow. Nanite doesn't render more triangles, it renders them more where they're needed. And yes, it will lose to a very well hand-optimized scene, but the point is you don't have to do that. Also, the examples you showed are actually rather bad for Nanite. The point is not that it renders a single triangle faster; it's about the complex scene and the work required.

  • @devonjuvinall5409
    @devonjuvinall5409 Місяць тому

    Great watch!
    I would also recommend Embark's example-based texture synthesis video. They get into photogrammetry and their testing of the software for 3D props. It's just rocks using displacement maps, but I think the whole video could be relevant to this situation. I don't know enough to be confident though, haha, still learning.

  • @2strokedesign
    @2strokedesign 2 місяці тому +4

    Very interesting tests, thank you for sharing! I was wondering about memory consumption though? I would think a LOD0 Nanite mesh is lighter than let's say LOD0-LOD7.

  • @TechArtAid
    @TechArtAid 2 months ago +2

    But wait - isn't Nanite adding a fixed cost to the frame too? So in that sense, a simple mesh doesn't cost 1.28 ms, because adding another mesh wouldn't give you a 100% cost increase.
    Also, I think its advantage is not making things faster (though it's sadly advertised as such: infinite geometry). Isn't it, as with all virtualized techniques, about making the cost more predictable and uniform?
    About mesh topology vs. AI: it's one of the hardest problems in the graphics field. It's an obvious goal for many talented people, but one that's still eluding a stable solution.
    But I'm all with you against the fake expectations and YouTubers' misleading "tests". Thanks for giving it a proper profiling treatment & explaining the basics.

    • @ThreatInteractive
      @ThreatInteractive  2 months ago

      "But wait - isn't Nanite adding a fixed cost to the frame too?"
      No, read our pinned comment.

  • @SpookySkeleton738
    @SpookySkeleton738 2 months ago +10

    You can also reduce draw calls using bindless techniques, like what they did in idTech 7; they are able to draw the entire scene with just a handful of draw calls.

    • @mitsuhh
      @mitsuhh 2 months ago +3

      What's a bindless technique?

    • @SpookySkeleton738
      @SpookySkeleton738 1 month ago

      @@mitsuhh With Vulkan (and I believe also OpenGL 4.6), you can bind textures to a sparse descriptor array in your shaders that can be modified after pipeline creation, effectively allowing you to swap textures in and out on the fly. You can then put all your material data in a shader storage buffer, and use a vertex attribute to index into that shader storage buffer, which can then index into your texture array. It basically means that you don't have to bind new textures when rendering meshes that use different textures, as long as they are on the same shader, which can eliminate a TON of command buffer recording and submission in your GPU pipeline.
      "Bindless" is technically a misnomer, since obviously there are still samplers being bound to a descriptor, but it's right insofar as you don't have to "rebind" them unless you are loading new textures in.

    • @mitsuhh
      @mitsuhh 1 month ago

      @@SpookySkeleton738 Cool

  • @iansmith3301
    @iansmith3301 5 days ago +1

    The fact that Epic is telling developers to go ahead and use 200GB 3D models without optimization because Nanite 'can' render them in the engine should raise huge red flags, because artists will think that it's fine not to optimize. Everyone's HDD is f'd as well.

  • @furiousfurfur
    @furiousfurfur 1 month ago +3

    I said it on your other video and I will say it again: you need to really look through your script writing. Your structure and presentation of your thesis is better here, but it could still use some work.
    But the biggest problem is the attitude you present; your tone and script are not doing you any favors in convincing people. For example, "a measly 11%": not only is "measly" unnecessary there, you also said it with a tone that makes you come across as an edgy teen, or as arguing from an emotional point of view.
    You present good arguments. But you also come across as either a conspiracy theorist or as egotistical, as if you have secret knowledge the rest of the industry doesn't have or is lying about.
    You could win your arguments better by changing your tone and format, and by putting your money where your mouth is with actual examples. I think getting into the trenches will change your point of view on some of these aspects.
    You have good points; I just don't think they are a comprehensive, complete, or holistic view of the industry.

  • @kitsune0689
    @kitsune0689 11 days ago +1

    The main problem is that foliage with traditional LODs looks really bad unless it's stylized, like BotW or Genshin or Pokémon, etc. It's probably not a performance upgrade, but it solves the biggest immersion-killing problem in games with open worlds/big zones: pop-in. If you're traveling at high speed, foliage looks awful, and looking into the far distance looks awful because of billboards.
    It's probably not the be-all-end-all solution, but for anything that has leaves, I'd say it's worth the cost.

  • @plamen5358
    @plamen5358 2 months ago +11

    Ordered dithering looks and feels terrible, can we please not have that 😓 Awesome video otherwise 🙏

    • @vitordelima
      @vitordelima 2 months ago

      Subdivision surfaces and MIP mapped displacement maps.

  • @noname7271
    @noname7271 2 days ago +1

    You'd need a lot of training data for this kind of AI model, from the photogrammetry scans to the final product you want it to replicate. It could be open source, with community-submitted data.
    The biggest hurdle would be curation of the data, as that would significantly influence the final output. Training is becoming cheaper and algorithms faster, but the training data would need to be manually created by a pool of volunteers.

  • @martijnp
    @martijnp 1 month ago +6

    Another thing that really doesn't sit right with this whole Nanite thing is that games have gotten increasingly bigger. It seems that mainly PlayStation games like CoD have gotten to a point where installing one game takes up a solid chunk of your storage (assuming you have a reasonable, average amount). There is seemingly no interest in optimizing resources, as this can impact load and rendering time. Seeing all this "yeah, we'll just render millions of triangles for you by toggling a button" makes me sweat bullets for my own drives.
    Imagine every single model and texture suddenly having 10x the detail, because past a certain point of inefficiency Nanite can cover up the stuff underneath. From their perspective: why even pay a modeler when you can just photoscan a real object with millions of triangles and put it straight into the game? It feels like all this tech is just made to support this lazy, uninspired form of creation.

    • @letsexplodetogether5667
      @letsexplodetogether5667 1 month ago +2

      LODs actually take up more file space than high-poly + Nanite models.
      With LODs, we have to store both the model and its lighter copies.
      With Nanite we just have to store the model.

    • @letsexplodetogether5667
      @letsexplodetogether5667 1 month ago

      But also, yeah, this just allows easier (lazier) access to development at the cost of real performance.

    • @martijnp
      @martijnp 1 month ago

      @@letsexplodetogether5667 It depends on how well the models are designed and how many LODs there are; if you, for instance, decrease quality by 50% per LOD, the whole chain nets about 2x the resources, while those photoscans often have millions of vertices each.
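
The "2x the resources" figure in this thread is just a geometric series: if every LOD level keeps a fixed fraction of the previous level's data, the whole chain converges to 1/(1 - ratio) times the base mesh. A quick back-of-envelope check:

```python
# Back-of-envelope check of the storage claim above: if each LOD halves
# the triangle count, the whole chain sums to less than 2x the base mesh.
def lod_chain_cost(levels: int, ratio: float = 0.5) -> float:
    """Total size of LOD0..LOD(levels-1) relative to LOD0 alone."""
    return sum(ratio ** i for i in range(levels))

print(lod_chain_cost(8))         # ~1.992: 8 LODs cost under 2x the base mesh
print(lod_chain_cost(4, 0.25))   # ~1.33: more aggressive reduction costs even less
```

So the overhead of a full LOD chain is bounded, whereas a raw photoscan's base mesh can be orders of magnitude heavier than an authored LOD0 to begin with.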

    • @realmrpger4432
      @realmrpger4432 20 days ago

      Nanite now supports using deformation maps to reduce the need to store huge models. It's pretty new and likely still has a ways to go to be truly efficient, but it's a step in the direction of reducing total file size.

  • @legice
    @legice 2 months ago +2

    Finally a video that talks about Nanite!
    Honestly, Nanite saved a project, because we had 100 meshes with 20 mil polys each, and Nanite made it work on machines that had no right to be able to run it, but…
    It is in no way a silver bullet, and for day-to-day use as a quick LOD it is not suitable.
    As a modeler, there are rules to modeling, and if you do it right, you need a day max to optimise a big-ass mesh, which you know how to do, because you made it!
    "Quick" and dirty modeling exists, with optimisation down the road, but when you are making the prop, you KNOW, or at least understand, what to do and how, for it to be the least destructive.
    Non-destructive modeling exists, but it brings in different problems, such as time, approach and workflow, and unless the job requires it, you don't use it, as it's a different beast altogether.
    You can model a gun any way you want, but a trash can, a house, something non-hero, non-changing, with measurements set in stone, you do the old-fashioned way.
    Texture and prop batching is simple, but being good at it is not.
    I love Lumen, but it is clearly still in the early stages and needs additional work to be optimized for non-Nanite, optimized workflows.
    I'm just so happy I wasn't the only one going insane about this.

    • @XCanG
      @XCanG 2 months ago +1

      I have a comment above with my opinion on this, but I'm not working in game development, so my knowledge is limited. Considering that you are a modeler, I have a few questions for you:
      1. How long does it take to prepare a model for Nanite vs. for LODs?
      2. How many years have you worked/had experience as a modeler? I ask because pros make stuff way quicker, so the difference between one and the other may vary with experience.
      3. How much were you aware of the optimizations mentioned in the video? My guess is that he has at least 6 years of experience and is probably already a senior game dev, but it's hard to imagine that new game devs would have that knowledge.
      4. Do you think Nanite is useful right now? Do you think it will be useful in the future? (maybe with some polishing and fixes)

    • @legice
      @legice 2 months ago

      @@XCanG Sure, I can answer those.
      - Nanite is obviously just a click, so nothing to do there really. As I said, when modeling you ALWAYS plan ahead, so when you are doing retopo, it's just a matter of how well you preplanned all your steps beforehand. There are rules to modeling, and good practices have been in place for a long time; if you follow them, you are going to take a bit longer to make a prop, but doing retopo/LODs will take only a fraction of the time needed. I can't really put a time on this, because it's how you should be modeling regardless, unless you are modeling for visualisation or movies only, where assets don't need to be optimized.
      - Professionally, none really, as everything I have done is in personal works and game jams, due to the game dev industry being a bitch to really get into. There are steps or approaches, but in the end it doesn't matter, as long as you deem it the best approach time-wise, modeling-wise and within budget. Most studios share the same workflow, because if you hand your work off to somebody or leave the company, others need to be able to take it and adapt/finish it.
      - Very little/barely understood anything, because I learned by doing and adapted my workflow based on the information I found while searching for solutions. Honestly, you can basically skip anything he said, because that is all theory, in the same way as how light works in games. As a modeler, you don't really need to know exactly how it works, but you get a feeling and a slight understanding of how and why. He goes way more into technical stuff that tech artists and programmers deal with. As a senior modeler you touch this, but in the end your job is to do other things, such as modeling well, texture packing, instances, draw calls, modular design... in some areas and studios this gets mixed together, and I 100% guarantee you that most don't know it, even seniors, but they compensate in other areas.
      - Nanite is already useful, and it will become more useful, but it is limited. The fact is, Nanite is a constantly active process running every frame, whereas LODs are a one-and-done. LODs will never go away, but dependency on them will be reduced, as less and less optimization will be needed to make games work well, because computers are getting better.
      As I said, Nanite straight up saved a project of ours when it was still in alpha/beta, so for trying out how something looks in game, stress testing a rig pre-optimization or whatever, it has its place, but it should not be overused and it has limitations. You can't use it on a skeletal/rigged model, for example, as it relies on a consistent poly mesh.
      Take everything I said with a grain of salt. The video explained everything BEAUTIFULLY, and I now understand things I have unknowingly been doing for years but never really grasped why, though I knew they worked.
      I learned things my way, studios teach their way, opinions clash, and in the end nobody really knows what they are doing, only how they feel things should be done, and the final result dictates that.

  • @Owl90
    @Owl90 21 days ago +7

    Nanite was never supposed to be faster than traditional LODs. It was never even marketed that way. Its only job is to help save time by allowing developers to skip the LOD-making process.

    • @ThreatInteractive
      @ThreatInteractive  21 days ago +11

      RE: It was never even marketed that way.
      If that were true, we would never have made this video. We have plenty of instances where Epic representatives spread this narrative, and we will soon release footage of them quickly trying to walk back these statements after we released our video.

  • @average_ms-dos_enjoyer
    @average_ms-dos_enjoyer 2 months ago +2

    It would be interesting to see similar breakdowns/criticisms of the other big 3D engines' approaches to visual optimization (Unity, CryEngine, maybe even Godot at this point).

  • @tourmaline07
    @tourmaline07 2 months ago +26

    Nice to see my £1200 RTX 4080 Super's GPU power being necessary to draw so much badly culled, overdrawn geometry at native 4K while struggling to retain decent fps 🫤
    I did wonder why fps massively increases with DLSS in UE5 games (more so than in other engines), and this does make sense.
    That is an excellent point you made about Intel and AMD funding optimization tools to make DLSS redundant; not one I've seen elsewhere.

    • @MrSofazocker
      @MrSofazocker 2 months ago +4

      Yes, with DLSS you decrease the render resolution, and with Nanite this means also rendering fewer polygons... in addition to rendering fewer pixels.
      In a non-Nanite game, when you lower the resolution, that's all you do: there are fewer pixels to shade.
      With Nanite, a lower resolution proportionally affects how many polygons are generated and shaded.
      That game devs don't expose a "geometry detail" slider, like the fractional CVAR built into Unreal to adjust how many polygons are created per pixel, is stunning!
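
A rough illustration of the point in this reply: Nanite refines clusters until triangle edges span roughly a fixed number of pixels (Unreal exposes this as the `r.Nanite.MaxPixelsPerEdge` CVAR), so to a first approximation the on-screen triangle budget scales with the number of rendered pixels. The linear proportionality below is a deliberate simplification for illustration, not Epic's exact heuristic:

```python
# Simplified model (an assumption, not Epic's exact math): if Nanite
# targets roughly one triangle per N pixels, its on-screen triangle
# budget scales linearly with render resolution.
def nanite_tri_budget(width: int, height: int, pixels_per_tri: float = 1.0) -> int:
    return int(width * height / pixels_per_tri)

native = nanite_tri_budget(3840, 2160)    # 4K native render
upscaled = nanite_tri_budget(2560, 1440)  # typical DLSS Quality input resolution
print(upscaled / native)  # ~0.44: geometry work drops alongside pixel work
```

That is why DLSS helps Nanite titles disproportionately: lowering the internal resolution cuts both shading and geometry cost, not just shading.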

    • @MiniGui98
      @MiniGui98 2 months ago +3

      Yeah, and relying on DLSS by default means you have games with worse visual fidelity out of the box, since DLSS outputs an upscaled image from a lower resolution. You cannot create details by upscaling that didn't show up in the original image. This thing is killing the "crispness" of game visuals.

    • @tourmaline07
      @tourmaline07 2 months ago +1

      @@MrSofazocker Thanks for that detail; that does make sense if fewer polys are rendered as well. Agreed, this would be a useful setting to expose to the end user for lots of titles.

    • @tourmaline07
      @tourmaline07 2 months ago

      @@MiniGui98 I think that if DLSS upscaling can give similar quality to native, then there is a serious problem with native TAA; you'd have to consider it broken, no?

  • @serijas737
    @serijas737 6 days ago +2

    Nanite is what you use when you want to replace 3D artists who understand topology.

  • @cheesybrik
    @cheesybrik 2 months ago +4

    I really like this video except for your AI solution. Coming from that space, I can tell you don't have a lot of experience with neural nets. You need concrete data to achieve massive training that's actually usable, and to do that you have to take the LOD models from other games. You really think that many game studios are just gonna give over their model data for free? Especially when they know that you need their data? The problem is really how you plan to get the patterns for the net to learn from; you'd need millions of cases for it to start to become somewhat on par with industry modelers, not to mention the inherent massive jump in complexity when working in a 3D space. It just seems like a naive proposal that I would like to see you flesh out more, with concrete plans forward.

    • @ThreatInteractive
      @ThreatInteractive  2 months ago +2

      Re: You really think that many game studios are just gonna give over their model data for free?
      We never stated studios should give away free reference LOD data. We were extremely specific about who should invest in this, since the 6-billion-dollar Epic Games won't: 11:11

  • @TheShitpostExperience
    @TheShitpostExperience 1 month ago +1

    I may be missing something, but the point of Nanite was not that it would be faster than regular LODs, but rather that you could have all your assets as high-poly objects and Nanite would take care of doing all the LOD in the engine, right? Saving time for the artists/developers.
    Performance is not as good as in a properly built game (off the top of my head I can't remember if they said it would be faster; they may have), but the importance of UE5 in general has been that you can have everything done inside the UE5 editor, which streamlines the development pipeline for small/medium-sized studios.
    Arguing about regular LOD vs. Nanite performance in the context of using Nanite in UE5 is somewhat pointless, because Nanite is a convenience feature, not a performance one.

  • @CaptainVideoBlaster
    @CaptainVideoBlaster 2 months ago +22

    This is what I imagine the Unabomber reading his own manifesto would be like if it was about video game development. In other words, I like it.

  • @unrealengine5-storm713
    @unrealengine5-storm713 1 month ago +2

    As someone who's worked in UE for 5 years now: the engine out of the box is BLOATED. If you want to make any sizeable release, you often need to rework source code to get it where you want it and bypass a lot of useless code you don't need.

  • @fluffy_tail4365
    @fluffy_tail4365 2 months ago +4

    Aren't mesh shaders still a bit unoptimized and slower compared to a compute shader + draw indirect? I remember reading something about that. The idea of compiling together LODs still holds anyway.

    • @internetexplorer781
      @internetexplorer781 2 months ago

      I guess it depends on hardware and API, but on my hardware with Vulkan, mesh shaders/meshlets are like 4-5x faster than the traditional pipeline, and with task shaders you can do some pretty interesting culling techniques, etc. I think I read in Epic's roadmap that eventually UE will move to this pipeline and ditch vertex shaders completely. Nanite is, IIRC, already partly using that new pipeline.

  • @ThatTechGuy123
    @ThatTechGuy123 1 month ago +1

    Thanks for the information. I'll keep these things in mind as I develop.

  • @frederikgoogel5611
    @frederikgoogel5611 2 months ago +8

    Even if you are right, for how long?
    I understand that UE5 with all of Nanite, Lumen and VSM is a solution for ultra-high-quality visuals without the need for manual labor for the coming years.
    The initial cost may seem high now, but anything past that is fine. And with next-gen GPUs it really doesn't matter at some point anymore.
    Yes, you could achieve more with traditional methods, but you need to know your stuff and put in tremendous amounts of extra work. Nanite just needs loads of memory.
    I see it like path tracing or ray tracing in 2018. Everyone said its performance cost was way too high, which it was at first. But within a few years it becomes the ultimate solution. You don't need to put hours into manual work, and the quality speaks for itself.

    •  2 months ago +2

      I agree with this. Nanite isn't really a performance optimisation tool; it's a dev-time optimisation tool.

    • @MrSkollll
      @MrSkollll 2 months ago +2

      Last paragraph: for mainstream hardware, ray tracing still has too high a computational cost, unfortunately. And with the slowing down of Moore's law, it will for quite a while.
      Same goes for not worrying about overdraw.

    • @frederikgoogel5611
      @frederikgoogel5611 2 months ago

      @@MrSkollll What on earth is mainstream for you? The Series X and PS5 have been on the market since 2020, and they can use it. Spider-Man has a great integration. A 4070 (which is mainstream in late 2024) has no issues even with heavy RT use, not including PT, but that's a different story.

  • @invertexyz
    @invertexyz 1 month ago

    Another part of the issue may be that Nanite wasn't designed exclusively around the mesh shader system on the newer generations of GPUs. It uses general compute to be able to support older hardware, and the system is designed with having to do it that way in mind. It does have a fast path using mesh shaders for polygons larger than a pixel, but it sounds like that's kind of just thrown in there for a little extra performance boost instead of being an entirely separate path for the whole system to run through.

  • @PositionOffsetGaming
    @PositionOffsetGaming 2 months ago +14

    Not really fair to go after JSFilms so hard. He says over and over again in his videos that he is NOT a game developer. He uses UE primarily to make prerendered movies and just didn't know any better.

  • @dwarow2508
    @dwarow2508 1 month ago +1

    I am a complete noob here, but wasn't the point of Nanite not to improve performance, but simply to replace very ugly LODs and allow for dynamic rendering regardless of distance?

  • @thediscreteboys3315
    @thediscreteboys3315 2 months ago +13

    You are fighting the good fight man. I just wish more people would sit down and learn about this stuff.

  • @597das
    @597das 16 days ago +1

    Great video! If anyone is incentivized and capable of making free topology optimization tooling, it's AMD. I had no idea why AAA graphics optimization had become so stagnant until today!

  • @Schnozinski
    @Schnozinski 2 months ago +3

    Threat Interactive is NOT suicidal and did NOT commit suicide by 29 gunshot wounds to the back of the head tomorrow.

  • @DeeOdzta
    @DeeOdzta 2 months ago +1

    Amazing work and research thank you for sharing this, many devs I know have had similar opinions of Nanite.

  • @eslorex
    @eslorex 2 months ago +40

    You should definitely make a general optimization guide for UE5 as soon as possible. I was barely able to find valid information about optimization in UE5, or about how well Nanite performs in different scenarios. I'm so glad I found someone who finally explains it with reasons.

  • @Neosin1
    @Neosin1 17 days ago +1

    This is exactly what I was telling people, but no one believed me!
    I taught 3D modelling at an Australian university 15 years ago, and back then we modelled everything by hand, which meant our models were very low-poly and optimised!
    Nowadays, devs just scan in models with hundreds of millions of polygons in one click, call it a day, and expect the software to do all the optimisation!
    This is why UE5 games run like garbage!

  • @MaXuPro
    @MaXuPro 2 months ago +4

    Quality video. Thanks for clarifying this.

  • @zsigmondforianszabo4698
    @zsigmondforianszabo4698 2 months ago +2

    I'd rather think of Nanite as a magic wand for those who don't want to deal with mesh optimization and just want consistent performance across the board without manual work. This currently hits us heavily, but as soon as technology evolves and everyone has access to modern hardware that can utilize this system, its ease of use and decent performance will overcome these growing pains.
    About the development: a 4050 compared to a 1060 has a 70% performance uplift over 7 years. That's roughly 10% every year, including hardware and software development, so in 5 years Nanite will work out really well for fast game development and consistent performance.
    PS: we need to mandate that game devs, when releasing a trailer, give performance statistics about the in-game scene and the upscaling used :DD

  • @djayjp
    @djayjp 2 months ago +3

    The most damning point against Nanite is the new Riven remake. Everyone thought it was using Nanite because there are no polygonal edges visible in its assets, yet it's just a very well-made game.

    • @KingKrouch
      @KingKrouch 2 months ago

      A bit of a shame that the reflections are still screen-space and kind of suck. If the game has ray tracing then this point is moot, but I remember hearing Jonathan Blow mention how crap the screen-space reflections are in that game.

    • @forasago
      @forasago 2 months ago +2

      @@KingKrouch Riven's SSR is no worse than any other game's. SSR is simply a stopgap technology that should have been left in the 8th gen, and it should NEVER have replaced planar reflections on water to begin with. SSR was perfect for creating accurate reflections on random objects in a scene that are too small to warrant a planar reflection or cubemap. Water will take up between half and the entire screen for a lot of the playtime in some games, yet those games will cheap out and not use planar reflections. The most recent example is CS2, which ironically has extreme performance problems with its water anyway. Just another anecdote for how optimization is dead in the games industry.

    • @Mart-E12
      @Mart-E12 24 days ago

      @@KingKrouch Such a game would benefit from full-on path tracing.

  • @Eianex
    @Eianex 1 month ago +2

    Add some graphs showing a performance comparison between the two across different scenarios.

  • @ThePlayerOfGames
    @ThePlayerOfGames 2 months ago +21

    I have been confused as to why Epic has Nanite turned up so high. In all the demos it turns models into white noise made of triangles, when in reality players will put up with a LOT of triangles before they notice them.
    Is there no way to tell Nanite to chill out and stop restoring or adding triangles past a certain point?
    This hand-waving of "AI will save us" isn't going to work; you could have an easy algorithm that gets rid of triangles by focusing on flatter areas to simplify while preserving areas that curve a little. This has to exist already.

    • @doltBmB
      @doltBmB 2 months ago +5

      It exists already in every 3D package, but it is not even the optimal way; there are plenty of more sophisticated algorithms in dedicated LOD software that prioritize perceptual quality.
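
The "flatten the flat parts, keep the curves" idea both of these comments describe is exactly what error-metric simplification does. As a minimal 2D stand-in, the classic Ramer-Douglas-Peucker algorithm drops points that deviate little from a straight line; mesh decimators such as quadric error metrics apply the same intuition in 3D:

```python
import math

# 2D stand-in for curvature-aware simplification: Ramer-Douglas-Peucker
# keeps points on bends and removes points on nearly straight runs,
# the same intuition behind mesh-decimation error metrics.
def rdp(points, epsilon):
    if len(points) < 3:
        return points
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        """Perpendicular distance of p to the chord from first to last point."""
        px, py = p
        num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        return num / math.hypot(x2 - x1, y2 - y1)

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) > epsilon:
        # The farthest point matters: keep it and recurse on both halves.
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    # Everything between the endpoints is nearly flat: collapse it.
    return [points[0], points[-1]]

# A flat run with one bump: the flat points vanish, the bump survives.
line = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 5), (5, 0), (6, 0), (7, 0)]
print(rdp(line, 0.1))  # → [(0, 0), (3, 0), (4, 5), (5, 0), (7, 0)]
```

Eight points collapse to five: the straight runs lose their interior points while the spike at (4, 5) is preserved, which is the perceptual behavior good LOD tooling generalizes to surfaces.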

  • @TSPxEclipse
    @TSPxEclipse 20 days ago +2

    Stop giving people tools with the promise that it's going to magically fix your game! Nanite and upscalers like DLSS are not supposed to be an excuse for designing a game that runs like shit. Make the game run well, THEN apply those tools. If the tool makes it worse, DON'T USE IT.

  • @MrGamelover23
    @MrGamelover23 2 months ago +11

    All your technical knowledge is impressive, but I just hope the game you end up making with it is actually good.

  • @gurujoe75
    @gurujoe75 1 month ago +1

    I'm not a programmer, but I see that for ten years there has been a wait for a big graphics paradigm shift. Goodbye classic render pipeline as they teach it in school, goodbye rasterization, goodbye classic extremely inflexible polygons, goodbye endless problems with shadows, LOD, UV mapping, etc.
    UE is the only big multiplatform engine today. And R&D is extremely expensive. You understand what I'm implying here between the lines.