3D in the Dark! Making Gaussian Splatting in Low Light

  • Published 10 Jan 2025

COMMENTS • 54

  • @djayjp • 1 year ago +3

    We'll be able to save our memories in full 3D explorable environments!

  • @gblargg • 1 year ago +1

    No way this won't be used in video games in the future. I can't wait until this works on common PCs for photos and videos. This is really amazing.

  • @matslarsson5988 • 1 year ago

    There's lots of crap on YouTube, but your channel is actually interesting. Thanks for doing this!

  • @coltvideolibrary • 1 year ago +7

    Great work, very helpful! I am also trying to get models from old videos. I was able to improve the crispness of my result by exporting extra frames from the video (increasing the probability of exporting an in-focus frame). Then I removed the blurry ones by thresholding the mean of the Canny edges of the grayscale version of each image: img = cv2.imread(imgSrc, cv2.IMREAD_GRAYSCALE) followed by threshold = np.mean(cv2.Canny(img, 50, 250)) (a runnable sketch of this filter follows at the end of this thread).

    • @gblargg • 1 year ago

      Exactly my thought: run some lower-tech tools to filter the poor stills out, so you feed it the best possible data.
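
      A minimal, runnable version of the blur filter described above, assuming OpenCV and NumPy are installed; the frames folder and the cutoff value are placeholders to tune per video:

          import glob
          import os

          import cv2
          import numpy as np

          FRAMES_DIR = "frames"  # hypothetical folder of exported video frames
          CUTOFF = 2.0           # tune per video: sharper frames score higher

          for imgSrc in sorted(glob.glob(os.path.join(FRAMES_DIR, "*.png"))):
              # Blurry frames produce fewer Canny edge pixels, so the mean
              # of the edge map is a cheap sharpness score.
              img = cv2.imread(imgSrc, cv2.IMREAD_GRAYSCALE)
              score = np.mean(cv2.Canny(img, 50, 250))
              if score < CUTOFF:
                  os.remove(imgSrc)  # drop the blurry frame before training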

  • @CDANODC • 1 year ago +6

    Love watching your experiments! Have you tried to see what the minimum number of images is that you can use to get a good model? Then you could use a DSLR camera with a really slow shutter speed to get good night images and build a model from them. I'd love to try it out, I just have to finish other projects first.

  • @ZOMBIEHEADSHOTKILLER • 1 year ago +9

    It would be interesting to see you make a tutorial on how to do this. I'm sure many viewers, including myself, have little to no idea how to get started.

    • @OlliHuttunen78 • 1 year ago +4

      Well, I have to think about it. Although the NeRF Guru Jonathan Stephens has already made a good tutorial about it, and this is where I learned to make Gaussian Splatting: ua-cam.com/video/UXtuigy_wYc/v-deo.htmlsi=YjCvsrRHfaxmQ2yX

    • @Deathbynature89 • 1 year ago

      This is great

  • @RealityCheckVR • 1 year ago +1

    Great video! I recently found out that I can use all of my old 360 videos, whether I'm in the footage or not, and it will work! I even tested an old 360 drone shot, and it did a pretty good job of recreating the area. Keep it up! 🙌

  • @raddles • 1 year ago

    Great captures! The 360 camera footage turns out so well.

  • @gamebuster800 • 1 year ago +2

    I would love to try this with a light-sensitive camera + lens

  • @julienblanchon6082 • 1 year ago

    Wow, great! We want more Gaussian Splatting videos 😂

  • @AlexKongMX • 1 year ago

    MAN, this is mind-blowing!!!

  • @o0oo888oo0o • 1 year ago

    Very nice test!

  • @icegiant1000 • 1 year ago +1

    I don't understand how the back sides of objects are being rendered correctly. Take for example those pallets that were in the corner: I know you didn't take your camera behind those, yet the 3D rendering seems to show their depth accurately. Let's take something silly: what if there was a leprechaun standing behind a trash can and you walked by with your camera, how would the 3D rendering know it was there? Skip the leprechaun: how does it know enough to make the trash can round all the way around, unless you recorded all angles? Is it just guessing? Absolutely fascinating.

  • @MACHIN3 • 1 year ago

    Very impressive. How come it even does the reflections in the puddles?

  • @alteredcarbon3853 • 1 year ago +2

    I have a question about these Gaussian Splatting scenes. Can you do modular environments with them? By that I mean: can you isolate a part of the scene, like a door or a car, and reuse or recombine it in a different way in another scene, just like we do with photogrammetry scenes? Or is the result totally static, with no creative freedom?

    • @SungazerDNB • 1 year ago

      Of course you can mix and remix, just like you can composite video and imagery. You could use Photoshop on frames, use 3D renderings of environments and splice them in, etc.

  • @martymcflyer8487 • 1 year ago

    I've had the best results using an iterative shoot-step-pause-shoot-step approach with 360 still frames, masked out around my body. Pausing reduces blur, and the wide field of view gives the algorithm more features to lock in the camera poses. This allows fast, reliable, high-resolution capture with low data storage.

  • @mmmmigs • 1 year ago

    Fascinating. Great work.

  • @charlieBurgerful • 1 year ago

    So happy to come across your video. I am a film director working on a low-budget music video. I had in mind to incorporate some scanned scenes into it, using the glitches and errors in scanning as a supernatural effect with a strong mood. I did some tests with Luma AI. However, I am wondering what software you use to render the final file and to move around inside it. I wanted to use Luma and Blender, or a real-time engine.
    What software do you use?
    Cheers

  • @jupit3r131 • 1 year ago +1

    How feasible is it to do dynamic environments with Gaussian Splatting? I'm aware of stuff like composite NeRFs, but is it possible to perform real-time shader-like operations while rendering?

  • @360socialms • 1 year ago

    Excellent work, Olli!! When you mention the editing in Insta360 Studio, are you referring to a 360 video "Reframe" job, after which you do the Gaussian Splatting with a 2D video?

    • @OlliHuttunen78 • 1 year ago +1

      No, I don't use the full 360 footage. I render a cropped 16:9 video out of the 360 footage, and I use the "Natural view" option to get a straight perspective. The Gaussian Splatting point cloud calculation doesn't work that well with distorted, fisheye-looking footage.

  • @Scannerian1 • 1 year ago

    Hi Olli! I use the same kit. Do you slow your reframed/overcaptured 360 footage down to 1/4 speed and take off the motion blur? I have found this is very good for my workflow.
    I use an M2 Pro, but are you also using a PC?
    Best wishes,
    Ian

  • @cedermannen • 1 year ago

    Great content as always! 🙌 Can you use 3DGS without an Nvidia card, for example if you only have an Apple silicon computer?

    • @OlliHuttunen78 • 1 year ago +1

      Well, I'm not sure. I have seen some AR experiments where they already use these Gaussian Splatting models on an iPad or another iOS device. There is a lot of stuff on Twitter right now on this topic. For the training we still need Nvidia cards for now, because INRIA's source code is based on CUDA, but it seems that a ready-made GS model can be viewed on almost any device. I'm sure there will be a decent viewer for Apple soon as well, or someone will release a proper web-based viewer very soon. A lot is happening around this topic right now!

  • @darmelli954 • 8 months ago

    Hey Olli, loving your videos and process sharing.
    Have you found much quality difference between the Gaussian Splatting programs?
    I'm loving Postshot for its easy interface and how it can take in footage and select the images for you,
    but most importantly I wondered if there is other Gaussian Splatting software that gives better results.

    • @OlliHuttunen78 • 8 months ago

      Yes. One thing that greatly affects the quality of Gaussian Splatting models is a feature called Spherical Harmonics. It has three levels. Some of the services are able to show SH level three, where the model is at its best. Others don't, and the model looks very flat. I have noticed that this happens in some plugins when the PLY file is exported to, for example, Unreal Engine or the Spline service online (see the sketch below for what each SH level costs in data).
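
      A quick back-of-the-envelope sketch of what those SH levels mean in the stored PLY data; this is generic spherical-harmonics math (a degree-L expansion has (L + 1)^2 basis functions per channel), not code from any particular splat tool:

          def sh_color_floats(degree: int, channels: int = 3) -> int:
              # A degree-L spherical-harmonics expansion uses (L + 1)^2
              # coefficients per color channel.
              return channels * (degree + 1) ** 2

          for d in range(4):
              print(f"SH degree {d}: {sh_color_floats(d)} color floats per splat")
          # degree 0 ->  3 floats (flat, view-independent color)
          # degree 3 -> 48 floats (full view-dependent color, as in the INRIA reference code)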

    • @darmelli954 • 8 months ago

      @@OlliHuttunen78
      Thanks Olli. Yes, I noticed that when I compared the two Unreal plugins, Luma and the Japanese research one:
      Luma didn't have the harmonics, while the Japanese one did; however, the Japanese one ran much slower.
      But I meant when calculating the Gaussian splat: do some programs calculate better than others when given the same image set?

    • @OlliHuttunen78 • 8 months ago

      It's quite interesting how similar the end results are that each of these available splat generators produces from the same image material. Luma AI and Polycam are pretty good at the moment, but I don't know at how many iteration steps they eventually stop training the GS model (maybe 30k at most). With the Postshot program you get the best results, because you can determine how precisely the GS model is trained. I haven't been very successful in my tests with Nerfstudio, but there you have the option to use the Splatfacto and Splatfacto-Big methods, which should somehow improve the accuracy of Gaussian Splatting; I haven't found out whether they really make much of a difference. But I think Postshot is best when you let it really train the model, as far as, say, 300k iteration steps or more. Then the model will already have a lot of accuracy.

    • @darmelli954 • 8 months ago

      @@OlliHuttunen78 Thanks for the detailed reply!
      I feel this is really going to get big for Unreal, now that Gaussian splats can cast shadows in 5.4. But I'm waiting for when they're visible in reflections;
      that will be a big game changer for environment capture and tweaking.

  • @Jgolbstein • 17 days ago

    Guys, has anyone tried to train 3DGS on full 360 images (equirectangular)? I'm training on the six-cube-face format, but I think it would be better to have a rasterizer that can be fed full 360 images.

  • @NicolasGaillard • 1 year ago +1

    Super nice. Can I ask if you are using this technique (ua-cam.com/video/LQNBTvgljAw/v-deo.html) to create 3DGS from a 360 cam?
    Do you know if there is a way to use Agisoft Metashape, for example, which is much faster than COLMAP and natively takes 360 images as input?
    Thanks

    • @OlliHuttunen78 • 1 year ago

      Yes, I tried the technique that Jonathan showed in that tutorial, but it is quite complicated: so many steps, and the conversion to a cube map. And it is not the full 360 picture, because it leaves the top and bottom parts out of the computation. But as Jonathan said, there is some better way to do it as well; I'm curious to see what it is. For the point cloud conversion, I have heard that basically any photogrammetry program that creates a point cloud based on the source images is valid for generating the input for Gaussian Splatting training. The source code that INRIA has developed uses .ply files as point clouds. It is worth trying whether Agisoft can produce that same format.
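
      For anyone wanting to script the 360-to-perspective step instead of reframing by hand, here is a minimal sketch of sampling an undistorted pinhole view out of an equirectangular frame. It assumes only OpenCV and NumPy; the field of view, yaw/pitch values, and file names are placeholders to choose per shot:

          import cv2
          import numpy as np

          def equirect_to_perspective(equi, fov_deg=90.0, yaw_deg=0.0,
                                      pitch_deg=0.0, out_w=1280, out_h=720):
              """Sample a straight-perspective crop from an equirectangular image."""
              H, W = equi.shape[:2]
              # Pinhole focal length in pixels for the requested horizontal FOV.
              f = 0.5 * out_w / np.tan(0.5 * np.radians(fov_deg))
              # One ray per output pixel; camera looks down +z, x right, y down.
              x = np.arange(out_w) - 0.5 * out_w
              y = np.arange(out_h) - 0.5 * out_h
              xv, yv = np.meshgrid(x, y)
              zv = np.full_like(xv, f)
              n = np.sqrt(xv**2 + yv**2 + zv**2)
              xs, ys, zs = xv / n, yv / n, zv / n
              # Tilt (pitch) around the x axis, then pan (yaw) around the y axis.
              p, q = np.radians(pitch_deg), np.radians(yaw_deg)
              ys2 = ys * np.cos(p) - zs * np.sin(p)
              zs2 = ys * np.sin(p) + zs * np.cos(p)
              xs3 = xs * np.cos(q) + zs2 * np.sin(q)
              zs3 = -xs * np.sin(q) + zs2 * np.cos(q)
              # Ray direction -> longitude/latitude -> equirectangular pixel.
              lon = np.arctan2(xs3, zs3)
              lat = np.arcsin(ys2)
              map_x = ((lon / np.pi + 1.0) * 0.5 * W).astype(np.float32)
              map_y = ((lat / (0.5 * np.pi) + 1.0) * 0.5 * H).astype(np.float32)
              return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR,
                               borderMode=cv2.BORDER_WRAP)

          # Example: four 90-degree views around the horizon from one frame.
          frame = cv2.imread("equirect_frame.png")  # hypothetical input frame
          for i, yaw in enumerate((0, 90, 180, 270)):
              cv2.imwrite(f"view_{i}.png", equirect_to_perspective(frame, yaw_deg=yaw))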

  • @Mobay18 • 1 year ago

    Very cool. Tell me, can you capture a person using this technology?

  • @Nobody-Nowhere • 1 year ago

    How would depth of field affect these, since larger-sensor cameras always have a shallower depth of field?

  • @J3R3MI6 • 1 year ago +1

    Soon you'll be able to stitch splats together with AI finding patterned structures in the data. That would be sick, because then you could do an entire stadium 🏟️ live and we could all fly around it in VR... The crazy part is that's less than 5 years away 🫠

  • @hasoevo • 1 year ago

    Interesting 🤔 I always assumed the more light, the better the result, based on my experience with 3D scanning. I am going to have to retrain my brain, because the two are similar yet very different when capturing for certain intended results.
    Great work and info, love your channel, you rock
    🖖😎🍻

  • @CHESSZILLA • 1 year ago

    Hey, this is amazing! Can this be done on a Mac?

  • @sirleto • 1 year ago

    I understand from 2:45 onwards that you export only every ~10th frame to use with the algorithms? Why not export every frame and then run a tool that decides, per picture, whether it is too blurry or artifact-affected? That would improve the quality by using only the best frames 🙂

    • @OranJuno • 4 months ago

      What kind of tool would automatically identify less usable frames? Any example?
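
      One common low-tech option is the variance of the Laplacian: sharp frames have strong second-derivative responses, blurry ones don't. A minimal sketch, assuming OpenCV; the threshold of 100 is only a rough starting point to tune per video:

          import cv2

          def is_sharp(path: str, threshold: float = 100.0) -> bool:
              # Low variance of the Laplacian indicates a blurry frame.
              img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
              return cv2.Laplacian(img, cv2.CV_64F).var() > threshold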

  • @taureanwooley • 1 year ago

    Why is most of the footage shot in the same areas as its inception/reconditioning? Ask raven hole where ravenol disappeared situation

  • @jaakkotahtela123 • 1 year ago

    It certainly looks great! Is this Gaussian Splatting technique of any use if you want to create a 3D model of a building for, say, Unreal Engine? Or is the end result the same quality as with Luma AI?

    • @OlliHuttunen78 • 1 year ago

      I haven't tested that yet. The Gaussian Splatting plugin for Unreal was released on the marketplace only a couple of days ago. It uses Niagara particles, since here we are playing with point clouds. You probably won't get a surface model as precise as one built from polygons, but visually you do get a model that closely resembles the original footage.

  • @aryaman9254 • 7 months ago

    Is 8 GB of VRAM good, or should I go with the RTX 3060 12 GB? I can't decide between the RTX 3060 and the RTX 3070.

  • @elvisnotpresley • 1 year ago

    🔥

  • @pocongVsMe • 1 year ago

    Can I import it into Blender?

  • @timo1949 • 1 year ago

    This could probably be used as a convenient way to automatically remove artifacts from video, if implemented in NLEs.

  • @Alliks82 • 1 year ago

    nice

  • @SungazerDNB • 1 year ago

    The past tense of shoot is "shot", not "shooted". :)

  • @dialectricStudios • 1 year ago

    king