Gaussian Splatting Has Never Been Easier!

  • Published 26 Nov 2024

COMMENTS • 37

  • @danieldimarchi7479
    @danieldimarchi7479 3 months ago +2

    Struggling with this for a while, you helped majorly, dude. Thanks!

  • @carlossuarez9272
    @carlossuarez9272 5 months ago +3

    Today I have seen a lot of content around this topic. It is a technology with great potential. I have tried KIRI Engine, Luma AI and Postshot, and by far the latter gives me better results in Unreal Engine. The model is rendered better; I suppose it is because locally I have more control. I did notice that the model lost quality when using it in Unreal Engine, but I didn't know why until I heard your explanation of the limitations of Niagara. At the moment I'm training a model of a castle based on a 360° aerial video that I found on YouTube for my game. Once I have the final result, I'll share my results here. Thanks for all the breakdown.

  • @ArchitRege
    @ArchitRege 3 months ago +1

    Thanks a lot for the in-depth walkthrough

  • @TheWingEmpire
    @TheWingEmpire 5 months ago +1

    this is amazing man!! good job

  • @zerosaturn416
    @zerosaturn416 4 months ago +2

    Thank you so much for this tutorial. For months I have been trying to find a simple program to train Gaussian splats locally, but none of them ever seemed to work because they were too advanced or I would get errors.

    • @levelupvfx
      @levelupvfx  4 months ago +1

      Of course! Happy to help, that’s exactly why I wanted to make this tutorial!

  • @jmr2008jan
    @jmr2008jan 2 months ago

    It would be pretty neat to have a reference library of these available online through a web 3d app.

  • @nbms950
    @nbms950 2 months ago

    Hey, thanks for the tutorial, really concise. Do you happen to know if you can then export the PLY out of Unreal as an FBX or other 3D mesh file?

    • @Densmode3dp
      @Densmode3dp 1 month ago

      If you listen, he says he exported in .ply format

  • @TheBadBone23
    @TheBadBone23 2 months ago +1

    Can you somehow use this as a 3D mesh? Something like replacing 3D scanning with this method...scan an object and 3D model something around it

  • @RogueBeatsARG
    @RogueBeatsARG 4 months ago

    Damn 944 is so good looking

  • @Dartheomus
    @Dartheomus 2 months ago

    This software is absolutely amazing, and I think it will only get better as AI progresses. I've found this software really doesn't like it when you miss an angle. You assume it's going to know how to render something like this car if you walk around it and then point down on top; however, if you then try to look at the car from a low angle, the entire model breaks up. Also, and more frustrating, is the fact that there is a huge resolution hit. You can feed it really high quality video, and what you get back looks like 1/10th the resolution, if that. I'm hoping that can be addressed soon. Finally, I really wish there were a streamlined way to rebuild these splats into 3D models. It would be really useful to couple this technology with 3D printing, but it's not very easy at the moment.

  • @korujaa
    @korujaa 2 months ago +3

    there is NO application, just showing off

  • @Strawberry_ZA
    @Strawberry_ZA 3 months ago

    awesome porsche!

  • @sdsfa8337
    @sdsfa8337 4 months ago +2

    Been using this program for a while and I love using it with UE5. Btw, do you know how to import splats into Blender with the color attribute? I don't see a color attribute export setting in Postshot :(

    • @levelupvfx
      @levelupvfx  4 months ago

      Sadly I pretty quickly gave up when it came to Gaussian splats in Blender, so I only tested it with Blender before I started using Postshot. I think the color data should be in the PLY file, but if not, I'm sure there's a way to get it out separately

    • @cedimogotes8662
      @cedimogotes8662 3 months ago

      @@levelupvfx how do you get the color data into Blender?
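
Editor's note on this thread: the color data usually is in the PLY, just not as plain RGB. A minimal sketch of digging it out with plain Python follows; it assumes the ASCII PLY layout used by the reference 3D Gaussian Splatting implementation, where base color is stored as spherical-harmonic DC coefficients named `f_dc_0..2` (whether Postshot's export matches this exactly is an assumption).

```python
SH_C0 = 0.28209479177387814  # zeroth-order spherical-harmonic basis constant


def sh_dc_to_rgb(dc):
    """Convert an (f_dc_0, f_dc_1, f_dc_2) triple to clamped 0..1 RGB."""
    return tuple(min(max(0.5 + SH_C0 * c, 0.0), 1.0) for c in dc)


def read_splat_colors(path):
    """Parse an ASCII Gaussian-splat PLY and return per-vertex RGB tuples."""
    with open(path) as f:
        lines = [ln.strip() for ln in f]
    end = lines.index("end_header")
    header = lines[:end]
    # Property names, in declaration order, give us each value's column index.
    props = [ln.split()[2] for ln in header if ln.startswith("property")]
    n = next(int(ln.split()[2]) for ln in header
             if ln.startswith("element vertex"))
    i0, i1, i2 = (props.index(f"f_dc_{k}") for k in range(3))
    colors = []
    for ln in lines[end + 1 : end + 1 + n]:
        vals = [float(v) for v in ln.split()]
        colors.append(sh_dc_to_rgb((vals[i0], vals[i1], vals[i2])))
    return colors
```

The resulting 0..1 tuples could then be written into a Blender color attribute via its Python API; all-zero DC coefficients come out as mid-gray (0.5, 0.5, 0.5), which is a quick sanity check.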

  • @Utsab_Giri
    @Utsab_Giri 4 months ago +1

    When you say that it runs locally, does that mean it doesn't need to be connected to the internet?
    Thanks!

    • @levelupvfx
      @levelupvfx  4 months ago +2

      Yes! Nothing you make is processed online; everything happens on your machine. I think you may need to be connected when you first start up, because they need you to log in with your account, but after that you're good

  • @anoopak4928
    @anoopak4928 5 months ago

    that Mamukkoya Meme lol 😄

  • @gaussiansplatsss
    @gaussiansplatsss 5 months ago +5

    is there a limit on uploading photos in Postshot?

    • @levelupvfx
      @levelupvfx  5 months ago +5

      There is a suggested limit in their documentation of 100 to 300, but since everything is local, you're not actually uploading anything, so there's no limit to how many images you can use.
      For example, I've run splats using 1500 images, and I've run ones using a few hundred. In general, more images will help, but there definitely is a sharp falloff beyond which adding more images doesn't add any more detail; it just slows the training down

  • @yvann.mp4
    @yvann.mp4 4 months ago

    thanks a lot

  • @deniaq1843
    @deniaq1843 5 months ago

    Thumbs up! :)

  • @ElliottK
    @ElliottK 5 months ago +1

    Still no spherical harmonics in LUMA AI :(

    • @levelupvfx
      @levelupvfx  5 months ago

      I know! I’m hoping they are able to find a way to get them working with Niagara, but it might be an engine limitation

  • @PGANANDHAKRISHNAN
    @PGANANDHAKRISHNAN 2 months ago

    Look, it's our Mamukkoya

  • @AlexTuduran
    @AlexTuduran 4 months ago +1

    Of course they can cast shadows. It's just not coded yet.

    • @levelupvfx
      @levelupvfx  4 months ago +2

      Definitely let me know if you have a way to get shadows working! Currently the Luma AI plugin documentation claims "Shadows are not supported in Gaussian Splatting scenes." I figured it was a limitation of them using sprites in their Niagara system, which would make it rather difficult to produce an accurate shadow. But if there's a simple coding fix or something that makes them able to, that would be awesome

    • @AlexTuduran
    • @AlexTuduran
      @AlexTuduran 4 months ago

      @@levelupvfx It's not a simple coding fix. You'd have to capture the depth buffer from the light's perspective; then, in the shader that renders the splats, you'd have to compute the fragment's position in light space, compare it against the depth buffer's distance to the light, and decide whether the fragment is lit or not. And that's just the basic approach: since the splats are puffy, additional shadow-filtering techniques would have to be employed to produce a smooth shadow. Or implement volumetric light scattering, where the splats could be interpreted as cloud density and also have self-shadowing. There are multiple ways, but it's definitely possible. I was kind of expecting that since Unreal supports lit particles, that would kind of work automatically.
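
Editor's note: the basic shadow-map test described in the reply above can be sketched on the CPU in a few lines. This is only an illustration of the math, not Unreal shader code; the 4x4 light matrix and the shadow-map layout (a 2D grid of closest-depth values over light clip space) are illustrative assumptions.

```python
def transform(m, p):
    """Apply a 4x4 row-major matrix to a 3D point, with perspective divide."""
    x, y, z = p
    v = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    w = v[3] if v[3] != 0 else 1.0
    return (v[0] / w, v[1] / w, v[2] / w)


def is_lit(point, light_matrix, shadow_map, bias=1e-3):
    """Shadow-map test: project `point` into light space, look up the
    closest depth the light sees in that direction, and report whether
    the point is lit (nothing nearer to the light occludes it)."""
    lx, ly, depth = transform(light_matrix, point)
    rows, cols = len(shadow_map), len(shadow_map[0])
    # Map light-space x/y from [-1, 1] into texel indices, clamped to bounds.
    col = min(cols - 1, max(0, int((lx * 0.5 + 0.5) * cols)))
    row = min(rows - 1, max(0, int((ly * 0.5 + 0.5) * rows)))
    # Depth bias avoids self-shadowing ("shadow acne") from precision error.
    return depth <= shadow_map[row][col] + bias
```

With an identity light matrix, depth is just the point's z, so a point in front of the stored occluder depth tests lit and one behind it tests shadowed; the "puffy splat" problem the reply mentions is that this hard binary test gives hard-edged shadows, hence the need for filtering.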

  • @redsnow936
    @redsnow936 1 month ago

    this doesn't seem photoreal though

    • @levelupvfx
      @levelupvfx  1 month ago +2

      Agreed. While Gaussian splatting looks great in a proper renderer like Postshot, there are still tons of limitations to rendering splats in Unreal. Given this is a super new technology, though, I have high hopes for constant improvements, to the point where Gaussian splats can look just as good in Unreal as they do elsewhere! (In all honesty I should have used my Supra Gaussian splat as the example here, as that one turned out much cleaner, but I wanted to try out a new one for this video. In general, as long as you have enough capture points, I still feel Gaussian splatting is second to none at the moment in terms of creating a photoreal result)

  • @Patheticbutharmless
    @Patheticbutharmless 3 months ago +1

    To be honest, I don't see the benefit, for me, over photogrammetry. The wireframe is likely still a big mess; there is nothing much you can do with it. Professionally. Yet.
    Since the method cannot understand what kind of surfaces it is capturing, everything has this very bland, very uniform, self-illuminated look.
    How do you give areas different types of roughness or, for example, metallic values, etc.? It isn't possible.
    Separating parts of the mesh will look awful, with lots and lots of jagged edges, and smoothing these out will take about forever.
    Trying to force any kind of remeshing or whatever will distort everything beyond recognition, I imagine, unless the face count is 50 million upwards.
    At least for simulated environments, you can't really mix photogrammetry (or this) well with modeled 3D objects, because they will not "mesh" (pun by accident). It's either fully modeled or fully captured. (Ok, I have to correct this: in a brightly lit outside environment they can look ok, but only because you don't have to de-light them. Personally I have always needed to retexture objects, with the captured diffuse as a starting-off point.)
    Without corrections it just doesn't hold up. It just always looks way out of place.
    There is so much more to an object than its mere shape and basic color value. We get a lot of information about something from the types of reflections and refractions off an object, which HAVE to be simulated via the information a model's surface provides for the renderer.
    In a few years, when some AI can recognize what an object is after the capture process and understand which color area corresponds to what type of surface (basic: a painted car hood with rusted patches on it, or a worn-out leather jacket, etc.), I will look at this again.