Photogrammetry / NeRF / Gaussian Splatting comparison

  • Published 24 Nov 2024

COMMENTS • 268

  • @ggavilan
    @ggavilan 1 year ago +419

    I love how you can hear the fan going with the gaussian splatter

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +75

      😂 my poor comp was breathing heavy

    • @pbjandahighfive
      @pbjandahighfive 1 year ago +27

      You can hear it from the jump when he switches to NeRF too.

    • @Panzer_the_Merganser
      @Panzer_the_Merganser 1 year ago +6

      @@MatthewBrennan Thought it was raining there for a moment, then realized I've heard that same sound in my office. Definitely taxing the GPU, but great video.

  • @DangitDigital
    @DangitDigital 11 months ago +63

    For anyone interested, here's my TLDR of the video. Biggest advantage of the photogrammetry method (polygonal): the model is metrically valid and can be used for measurements; it's fairly lightweight because it's a polygonal model, and less computationally taxing to post-process for the same reason; and you can use it in many software packages and share it easily. Advantage of NeRFs (radiance field): they include more scene information (distance, sky), but this info is estimated, generated from color information (a radiance field) at render time, so perhaps it's not the most scientifically accurate. But because a full scene is being computed, creating new "footage" from novel viewpoints is possible. Advantage of Gaussian Splats (static point cloud): they include scene information like NeRFs, but it is not being computed in real time; it's a static point cloud (or splat cloud). Because the visualization is static, Gaussian Splats can be used in many visualization tools (game engines like Unity and Unreal, as well as 3D software like Blender and Cinema 4D). It's "the best of both worlds". Also, of course, it's the most fun to say :) (A rough per-element data-layout sketch follows at the end of this thread.)

    • @MatthewBrennan
      @MatthewBrennan  11 months ago +2

      well put!

    • @Tasarran
      @Tasarran 5 months ago +5

      Another advantage of Gaussian Splat over NeRF is that the splats are real objects in 3D; you can edit, move, delete parts you don't like.
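    A rough per-element sketch of what each method actually stores may make the TLDR above more concrete. This is an illustrative Python/NumPy layout only; the splat fields follow the conventions of the common 3DGS reference-implementation PLY export, and exact names and sizes vary by tool.

        import numpy as np

        # Per-vertex record of a photogrammetry mesh: metric geometry plus a lookup
        # into a baked texture image.
        mesh_vertex = np.dtype([
            ("position", np.float32, 3),   # metric XYZ, directly measurable
            ("normal",   np.float32, 3),
            ("uv",       np.float32, 2),   # texture coordinates
        ])

        # Per-splat record of a Gaussian Splat cloud: an explicit, static primitive.
        gaussian_splat = np.dtype([
            ("position", np.float32, 3),   # splat center
            ("scale",    np.float32, 3),   # anisotropic extent
            ("rotation", np.float32, 4),   # orientation quaternion
            ("opacity",  np.float32),
            ("sh_dc",    np.float32, 3),   # base color (spherical-harmonic DC term)
            ("sh_rest",  np.float32, 45),  # higher-order SH terms: view-dependent color
        ])

        # A NeRF stores neither: the scene is the weights of an MLP that maps
        # (x, y, z, view direction) -> (color, density) and is queried at render time.
        print(mesh_vertex.itemsize, gaussian_splat.itemsize)  # bytes per vertex vs. per splat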

  • @error-4518
    @error-4518 1 year ago +239

    NeRF and gaussian splatting are so realistic that they even captured the wind.

  • @BrightAfternoonProductionsPlus
    @BrightAfternoonProductionsPlus 1 year ago +125

    Not exaggerating: when Gaussian Splatting rose to sudden prominence, I was waiting for a video exactly like this.

  • @buroachenbach703
    @buroachenbach703 1 year ago +82

    Hi, great video - I think it clarifies the difference between the three technologies for a lot of people.
    Just one thing that would have been important to mention: one of the biggest advantages of NeRF and GS is the ability to capture reflections, and even transparency, which is just about impossible with Photogrammetrie. Granted, your example is of course not the right one to demonstrate these features, but maybe you have a different set of images where you can demonstrate that difference in more detail.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +29

      You're absolutely right - in fact I just went out and took some video of my car and some reflective surfaces in the rain - which would be very hard for photogrammetry to reconstruct well - I'll do a side-by-side!

    • @JorgetePanete
      @JorgetePanete 1 year ago +1

      photogrammetry*

    • @zyang056
      @zyang056 11 months ago +1

      GS can also rasterize thin structures much better than mesh reconstruction or NeRF. Give it a few months; my bet is GS will surpass meshing in rendering quality and file size.

    • @DangitDigital
      @DangitDigital 11 months ago

      Good point

  • @linecraftman3907
    @linecraftman3907 1 year ago +5

    I have never used any of these techniques, but ever since I saw these technologies become popular, I had a really poor understanding of how they compared, yet still remained curious. This video filled the gap in my understanding perfectly.

  • @BunkerSquirrel
    @BunkerSquirrel 9 months ago +3

    The splatter is really cool looking. Looks like a hallucination or a dream

  • @IndieLambda
    @IndieLambda 5 months ago +2

    In the future, the two others might become an option too, but my favorite aspect of photogrammetry is that, being a "standard" 3D model, it can be edited in something like Blender, fixed, simplified, reworked, given PBR textures, and then used as an optimized asset in a game engine like Unity, and, for example, uploaded to VRChat.
    So after having done all the drone videos, you could just be there with a friend, in a properly scaled world, then have your friend walk towards it, showcasing just how big it actually is.

  • @NOLNV1
    @NOLNV1 8 months ago +2

    That's an amazingly beautiful rock formation, not that it's the topic of the video but just felt like mentioning it

    • @MatthewBrennan
      @MatthewBrennan  8 months ago

      It has an interesting backstory too! The legend goes that a utopian community wanted to hollow out the rock to use as a church - and even went so far as to begin chiseling a doorway (you can see the opening in the video/model).

  • @LorandNagy_89
    @LorandNagy_89 1 year ago +7

    This was exactly what I was looking for! I tried Gaussian splatting in Unity in VR. I'll have to try it again with your conclusions in mind. Thanks!!! Subscribed! :)

  • @DarthEditous
    @DarthEditous 29 days ago

    At last, a serious and in-depth video without hyperactive editing and gimmicks 👍 Amazing how NeRFs and splats give you that change in lighting.

  • @98SE
    @98SE 1 year ago +44

    This is absolutely amazing! I think Gaussian Splatting might replace rasterised/polygonal rendering in the near future!

    • @dmitriytuchashvili8594
      @dmitriytuchashvili8594 1 year ago +12

      GS is great, but it will be a real challenge to invent an optimized way of applying real-time lighting.

    • @ruy_mascarua
      @ruy_mascarua 1 year ago +4

      @@dmitriytuchashvili8594 Agree, interactive lighting is the main issue.

    • @constantinosschinas4503
      @constantinosschinas4503 1 year ago +1

      Curious to see how GS handles reflections, normals, displacements, translucency and so on.

    • @miroaja1951
      @miroaja1951 1 year ago +5

      There's also a problem with physics, interactivity and the rendering of anything besides real world data (procedural generation would be hell), which altogether makes it likely to only have niche use cases, though I do admit it's cool

    • @MatthewBrennan
      @MatthewBrennan  1 year ago

      @@constantinosschinas4503 see this video: ua-cam.com/video/gheD8vrOJNI/v-deo.html

  • @plyczkowski
    @plyczkowski 1 year ago +33

    Could be cool to use an example with more variance in material properties, to showcase how different techniques deal with things like reflectivity and transparency.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +7

      See my video here :) Reflective Object: Gaussian Splatting radiance field vs. Photogrammetry mesh
      ua-cam.com/video/gheD8vrOJNI/v-deo.html

  • @Redranddd
    @Redranddd 1 year ago +7

    I think photogrammetry and some kind of Gaussian splatting will be fused one day.

  • @JWPanimation
    @JWPanimation 5 months ago

    Thanks for posting this. Adding Houdini to your tool set will give you the ability to leverage volumes and point clouds to their fullest potential.

  • @peterbell6557
    @peterbell6557 4 months ago

    Thank you very much for the detailed explanation of the three systems. I am teaching myself to use Agisoft Metashape for developing 3D models of cave interiors. Your video has really helped: I have just rendered your zip file of Church Rock with excellent results, and the output quality confirms my techniques for image capture underground. Excellent video.

  • @360Pros
    @360Pros 1 year ago +2

    Thanks for making this video! I've been playing with photogrammetry for about 6-7 years, and have only been a curious bystander with respect to NeRF and GS. It's interesting, but I envisioned something like NeRF and GS in conjunction with 3D meshes several years ago, before learning that they exist. Your video does a wonderful job of explaining the distinction between the three, especially between NeRF and GS. I watched till the end! Thank you for creating it. I'll connect with you on your social accounts and hopefully we'll run into each other in the unfolding "metaverse".

  • @Melvin420x12
    @Melvin420x12 1 year ago +4

    I have absolutely no affiliation with anything 3D-related, yet somehow I find this Gaussian Splatting thing so intriguing, even though I had no clear understanding of what it was haha. Just that you could make high-quality-looking 3D renders of things with just a video. Cool to actually see a more technical comparison video about it. Thank you for making this video!

  • @MikkoRantalainen
    @MikkoRantalainen 1 year ago +1

    Great comparison of different methods! Looking at the drone video vs the output, it seems clear that all these technologies will get better when we get more processing power. The current output is nowhere close to the detail level of the input video, but there's no reason to think it couldn't be, given enough computing resources.

  • @ishibaro
    @ishibaro 8 months ago

    Thank you very much for this video :D Superb for future developments in archaeology. I am already checking out NeRF with Kiri Engine, but I loved seeing how to do it with the tools you mentioned. Coool!

  • @KevinMerinoCreations
    @KevinMerinoCreations 11 months ago +1

    Thanks for the good comparison video! You did a great job highlighting many of the topics of interest! 👏👏👏

  • @zsigmondforianszabo4698
    @zsigmondforianszabo4698 6 months ago +4

    11:26 bro uploaded the video in 4k but recorded the screen with an amazon doorbell camera 💀

    • @MatthewBrennan
      @MatthewBrennan  6 months ago +1

      😂 yep. Lesson learned. The renderings were 4K but my screen recording was 1080 🥲

    • @parthhappy
      @parthhappy 2 months ago

      Lol

  • @mikegentile13
    @mikegentile13 2 months ago

    Really cool! If you map the panorama photograph to a hemisphere and a ground plane, you can get the model and the panorama to line up perfectly. The spherical mapping gizmo just needs to be at the location the panorama was captured from, relative to the ground plane.

    • @MatthewBrennan
      @MatthewBrennan  2 months ago +1

      Neat trick - I've always just mapped the pano to a sphere and set the gizmo to the origin/center of the sphere, but what you suggest could help with extending the ground plane imagery past the model geometry and giving the appearance of continuity. Thanks!
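    For anyone reproducing the alignment trick discussed in this thread, the core of it is the equirectangular (panorama) mapping itself. A minimal sketch, assuming one common UV convention (u wraps 360° of yaw, v spans 180° of pitch); axis order and handedness differ between renderers.

        import math

        def equirect_uv_to_direction(u, v):
            """Map panorama UV in [0, 1]^2 to a unit direction on the mapping sphere."""
            yaw = (u - 0.5) * 2.0 * math.pi    # u = 0.5 looks straight ahead
            pitch = (0.5 - v) * math.pi        # v = 0 is straight up, v = 1 straight down
            x = math.cos(pitch) * math.sin(yaw)
            y = math.sin(pitch)
            z = math.cos(pitch) * math.cos(yaw)
            return (x, y, z)

        # Placing the mapping gizmo/origin at the real capture position relative to the
        # ground plane means these directions intersect the ground where the photo saw
        # it, so the model and the panorama line up.
        print(equirect_uv_to_direction(0.5, 0.5))  # image center -> (0.0, 0.0, 1.0)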

  • @AerialWaviator
    @AerialWaviator 1 year ago +6

    Very intriguing comparison. I had not heard of Gaussian Splatting previously. With photogrammetry it would be possible to model different sun angles and lighting effects. It could be interesting to explore how the various methods could take advantage of video captures taken at different times of day.
    For example, it could allow for animating time and motion. Just a thought that might be interesting to explore.

    • @DangitDigital
      @DangitDigital 11 months ago +4

      While it's true with photogrammetry you could relight the scene, the shadows are still "baked" into the texture. So in this case, Church Rock would still be casting that shadow even if you put virtual lights into the scene to reimagine it.

  • @CRivlaldo
    @CRivlaldo 10 months ago

    Amazing comparison! And very cool flying scenes for NeRF and Gaussian Splats.

  • @fraizie6815
    @fraizie6815 1 year ago +4

    Nobody gonna talk about how the rock looks like a space ship that turned into stone?

  • @SimiVideoCreator
    @SimiVideoCreator 1 year ago +7

    honestly I love the gaussian splatting look. Especially when you move "too" close :D

  • @chasechampagne867
    @chasechampagne867 1 year ago +11

    I love the examples and the discussion of the technical differences, though it could definitely use better-quality screen capture.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +8

      You're right - I captured at 1080 (my screen's max res) and upscaled it, but didn't realize it because I had my Premiere clip previews set to 1/8 res, so I didn't notice the blurriness! Sorry!

    • @chasechampagne867
      @chasechampagne867 1 year ago +3

      @@MatthewBrennan Some great points in there. Thanks for the vid.

    • @lemovision
      @lemovision 1 year ago +2

      Looks more like 360p, mate; maybe you set Premiere to render using the preview cache @@MatthewBrennan

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +1

      @@lemovision could be- I think I fixed it for subsequent exports, at least 🙃

    • @AerialWaviator
      @AerialWaviator 1 year ago

      YouTube compression likely not helping either.

  • @ramonteleco
    @ramonteleco 1 year ago

    Thanks for the video! The best way to learn the difference between NeRF and Gaussian splatting 🙏🏻

  • @macScsgo
    @macScsgo 5 months ago

    The fan kicking on when the Gaussian Splat started rendering got to me 🤣🤣

  • @andrasliptak
    @andrasliptak 1 year ago +1

    An interesting fact is that NeRF training is essentially an ML optimization: it stops when the render of the estimated volume matches the photo itself (within a threshold), and some pipelines use the same optimization to refine the camera positions.
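    For reference, the optimization described above is (roughly) the photometric loss from the original NeRF paper: the network's volume-rendered ray colors are compared against the pixels of the input photos. In generic notation,

        \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
        \qquad T_i = \exp\!\Big(-\sum_{j<i} \sigma_j \delta_j\Big),
        \qquad \mathcal{L} = \sum_{\mathbf{r} \in \mathcal{R}} \big\| \hat{C}(\mathbf{r}) - C(\mathbf{r}) \big\|_2^2

    where sigma_i and c_i are the density and color the MLP predicts at sample i along ray r, delta_i is the spacing between samples, and C(r) is the ground-truth pixel color. The camera poses themselves usually come from a structure-from-motion step (e.g. COLMAP) rather than from the NeRF itself, though some pipelines refine them inside the same optimization.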

  • @leanderren4548
    @leanderren4548 1 year ago +4

    Great video, I think if you reuploaded this with better quality it could gather even more attention.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago

      Unfortunately I don’t think you can replace previous uploads on YouTube

  • @mikailmaqsood818
    @mikailmaqsood818 1 year ago

    Thank you!! I’ve been looking for a video like this since I learnt about Gaussian splatting. Also the music you played in the showcases was chilling :))

  • @FireballVFX
    @FireballVFX 1 year ago +2

    Thank you, it was a very educational and enjoyable video!

  • @rubenbernardino6658
    @rubenbernardino6658 1 year ago

    Very valuable information for us 3D creators. Thank you very much!

  • @jackjansen7265
    @jackjansen7265 1 year ago

    Thanks a lot! That was exactly the quick introduction to the subject that I needed!

  • @antimatters6283
    @antimatters6283 1 year ago

    Good comparison and review. Good notes and links in the video description area.

  • @AaronBegley
    @AaronBegley 5 months ago

    Enjoyable and educational! Thanks Matthew!

  • @bytesandbikes
    @bytesandbikes 10 months ago

    Interesting how the NeRF has captured the changing cloud shadow over time as a positional aspect

    • @MatthewBrennan
      @MatthewBrennan  10 months ago +1

      3DGS does something similar, as everything is based on the interpolated “viewing angle”/position of the scene, which of course is tied to the conditions/time (shadow or sun) that each photo was captured under.
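    Concretely, in 3DGS this view dependence lives in each splat's spherical-harmonic color coefficients, so a splat's color is re-evaluated for every viewing direction, roughly

        \mathbf{c}(\mathbf{d}) \approx \sum_{\ell=0}^{L} \sum_{m=-\ell}^{\ell} \mathbf{k}_{\ell m}\, Y_{\ell m}(\mathbf{d})

    where d is the viewing direction, Y_lm are the spherical-harmonic basis functions, and k_lm are per-splat RGB coefficients fitted during training (degree L = 3 in the reference implementation). Lighting and shadow states therefore end up entangled with viewpoint rather than stored separately.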

  • @foreversaint__
    @foreversaint__ 1 month ago

    I think Gaussian splatting could be used more going forward, since it gives you a point cloud (which can be turned into a mesh) and also because it captures reflections. Btw, it'd be a great tool for virtual production! Also, thanks for this video.

  • @keterbinah3091
    @keterbinah3091 1 year ago

    Interesting, good informative vid, thank you. As a side note, let's consider that Bob Ross once painted 50 sheds and 1 tree.

  • @simonhartley9158
    @simonhartley9158 1 year ago +3

    It seems that the next step is a neural/AI-enhanced version of Gaussian splatting to improve quality/render performance/data size.

  • @DangitDigital
    @DangitDigital 11 months ago

    Ooh thanks for this overview. Very helpful.

  • @donaldnewlands1737
    @donaldnewlands1737 11 months ago

    Thanks - I'd like to see a comparison of how you would actually use these in production - especially NeRF and splatting.

    • @MatthewBrennan
      @MatthewBrennan  10 months ago

      Right now I get the feeling it's very much a "solution" in search of a problem. But here's another video with some thoughts on the potential practicality: ua-cam.com/video/Ksi_RfY77SI/v-deo.html

  • @nitisharora41
    @nitisharora41 5 months ago +1

    Very nice!

  • @josiahjack455
    @josiahjack455 1 year ago

    Oh hey I've been there. Recognized it before you even said "Church Rock". Nice stretch of road just outside Canyonlands National jawdropping park.

    • @gardenofadam79
      @gardenofadam79 11 months ago

      That's why I'm here, I have driven past that rock hundreds if not thousands of times in my life. I have no knowledge about the actual subject matter of this video but I'll watch the whole thing just out of gratitude for the nostalgia.

  • @sennabullet
    @sennabullet 1 year ago

    superb explanation...thank you for sharing your knowledge.

  • @pauldorman
    @pauldorman 1 year ago +2

    Pity about the low resolution. I assume it's low resolution as that's how it appears on my computer, even at 4K. Very interesting though!

    • @MatthewBrennan
      @MatthewBrennan  11 months ago

      Yea, I accidentally screen captured at 1080, but rendered everything at 4K!

  • @Sir_Racha
    @Sir_Racha 4 months ago

    Great comparison. Thanks!

  • @Daniel-xz6cm
    @Daniel-xz6cm 1 year ago +4

    You could also compare NVIDIA's Neuralangelo. That would be great.

  • @coalbanksYQL
    @coalbanksYQL 1 year ago +11

    Great comparison! Could you have cropped the splats that were interrupting the sky in your Unity example for a cleaner look - or is that limited?

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +7

      In theory it should be possible, because the splats are directly related to the sparse cloud from COLMAP - I'm planning to investigate this.

  • @robertbogu4794
    @robertbogu4794 1 year ago

    Thank you for explaining brother! 💪

  • @qbert4325
    @qbert4325 1 year ago

    The last shot is really good!

  •  11 months ago +1

    It should be no problem to import the photogrammetry model into Blender, add a world texture (photosphere), and render a similar camera path.
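    A minimal Blender Python sketch of that workflow, with placeholder file paths and camera waypoints; the OBJ importer operator is bpy.ops.wm.obj_import in Blender 3.3+/4.x (older builds use bpy.ops.import_scene.obj), so adjust to your version.

        import bpy

        # 1. Import the photogrammetry mesh (path is a placeholder).
        bpy.ops.wm.obj_import(filepath="/path/to/church_rock.obj")

        # 2. Use the panorama as the world environment texture.
        world = bpy.context.scene.world
        world.use_nodes = True
        nodes, links = world.node_tree.nodes, world.node_tree.links
        env = nodes.new("ShaderNodeTexEnvironment")
        env.image = bpy.data.images.load("/path/to/panorama.jpg")
        links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])

        # 3. Keyframe a simple camera path and render it (assumes the scene has a camera).
        cam = bpy.context.scene.camera
        for frame, location in enumerate([(50, -80, 30), (20, -60, 25), (0, -40, 20)], start=1):
            cam.location = location
            cam.keyframe_insert(data_path="location", frame=frame)

        scene = bpy.context.scene
        scene.frame_start, scene.frame_end = 1, 3
        scene.render.filepath = "/tmp/flyover_"
        bpy.ops.render.render(animation=True)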

  • @NithinJune
    @NithinJune 1 year ago +1

    very very interesting

  • @64jcl
    @64jcl 2 months ago

    Played a bit with Gaussian splatting, but found it really hard to get rid of the garbage points/splats that are clearly visible in this video too. You at least need to try to use some kind of selection box/sphere to limit them to the object in focus, but even then there are stray points that you need to manually select and zap from the data.

    • @MatthewBrennan
      @MatthewBrennan  2 months ago

      There is some functionality for this in the current Unity project, and the updated scripts have helped eliminate many of those "floaters". I've found that it really depends on high-quality, sharp input data - so it's best to use still images rather than video frames.
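    As a concrete example of that kind of cleanup outside Unity: the splat cloud is just a PLY file, so stray "floaters" can also be cropped offline. A minimal sketch using the third-party plyfile package, assuming the usual x/y/z field names of a 3DGS export and an arbitrary axis-aligned crop box.

        import numpy as np
        from plyfile import PlyData, PlyElement  # pip install plyfile

        # Placeholder path and crop bounds.
        ply = PlyData.read("point_cloud.ply")
        splats = ply["vertex"].data                       # structured array, one row per splat

        xyz = np.stack([splats["x"], splats["y"], splats["z"]], axis=1)
        lo, hi = np.array([-30, -30, -5]), np.array([30, 30, 40])
        keep = np.all((xyz >= lo) & (xyz <= hi), axis=1)  # True for splats inside the box

        print(f"keeping {keep.sum()} of {len(splats)} splats")
        PlyData([PlyElement.describe(splats[keep], "vertex")]).write("point_cloud_cropped.ply")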

  • @euralsky
    @euralsky 2 months ago

    Thanks for the video and links.

  • @darviniusb
    @darviniusb 11 months ago

    I've been working with photogrammetry for 15 years or more too, and I tested NeRFs as soon as they came out. It's all nice and cool until you give them uniform reflective surfaces: photogrammetry and NeRFs do not like uniform glossy surfaces. GS has no problem; it even gets transparency. It's an insane technology, far from perfect, with very limited use cases, but I can already see GS being the perfect solution for the next generation of realistic 3D Google Maps. The files are very small and a lot easier to produce than NeRFs, and it can be used with night shots too.

  • @vexnity460
    @vexnity460 9 months ago

    I'd say, if you're capturing reflective or translucent surfaces, I 100% recommend NeRFs instead of photogrammetry, because they handle them so much better.

    • @MatthewBrennan
      @MatthewBrennan  9 months ago

      I made another video explicitly comparing the two- the photogrammetry model actually turned out pretty well.

  • @tommy_s
    @tommy_s 1 year ago

    Wonderful work, really appreciate it

  • @Tonatar
    @Tonatar 1 year ago +2

    Imagine Google Earth with Gaussian splatting.

  • @alblez
    @alblez 1 year ago +2

    Seeing Matthew's analysis of these three technologies was quite insightful. It brought to mind a question an architect friend once posed: Could one feasibly craft an architectural blueprint of a home or apartment using video footage? Considering your experience with these three tech contenders, would you say we're on the brink of making this potential a reality?

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +3

      You could definitely build a rough model from video footage (provided that the video entered every room). No digitization technology (yet) will output a plan useful to an architect without substantial work by hand - however, the power of these techs is that you can achieve results based on very little information (i.e. a series of photos or video) that can then be interpreted by an architect or draftsman and turned into a polished representation, like a plan or section.

    • @alblez
      @alblez 1 year ago +1

      @@MatthewBrennan Thank you very much for your response; now I have more leads so my friend can make things more efficient.
      It's only a matter of time before new papers are published. 🔜

  • @notso_usualyoutbuer
    @notso_usualyoutbuer 1 year ago

    Great work! Keep doing the stuff! Liked and subscribed!

  • @MrDmonahan96
    @MrDmonahan96 1 year ago +1

    thanks this was super helpful

  • @raspas99
    @raspas99 1 year ago

    I don't know what the third method is any more than I did before starting your video.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago

      This wasn't meant to be a technical video, but if you want to know more about the technicals behind 3DGS, this is a good one: ua-cam.com/video/HVv_IQKlafQ/v-deo.html

  • @thenozon
    @thenozon 1 year ago

    Thx for the vids - deep respect for your knowledge and sharing it. (needed to watch it in 1.5x tho - otherwise it would have been kind of as if told in slow motion xD)

  • @chumleyk
    @chumleyk 7 months ago

    Ok. Why did the psychedelic sky of the gaussian splat give me a panic attack? I'm going to call it a Splat Attack.

  • @DataJuggler
    @DataJuggler 11 months ago

    It would be nice if, in a few years, Gaussian splatting had a way to erase things you don't want, or correct blurry parts of a scene. Gaussian-to-mesh would be the holy grail.

    • @MatthewBrennan
      @MatthewBrennan  10 months ago

      It's possible (albeit somewhat crudely) to edit the Gaussian cloud now, in Unity.

  • @maxmeier532
    @maxmeier532 1 year ago

    What do you think is currently the best technology to scan faces to create highly accurate 3D files, which is also future-proof in terms of working with the results? The priority is accuracy and feasibility for a non-professional, if we limit the cost to maybe 5 to 10 grand. Photogrammetry, a 3D scanner like those from Einscan, NeRF, or anything else? Do you know of any software that could benefit from having more than one camera at a time for photogrammetry? When I look at professional studios, they have like a hundred cameras surrounding the person that gets scanned.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago

      Photogrammetry would fit the bill, but you'd need a multi-camera rig. I've scanned a live subject (just the bust - shoulders + head) with a single camera, but it required quite a bit of cleanup in 3D sculpting software. A calibrated high-resolution, multi-camera solution would be the way to go.

  • @16pxdesign
    @16pxdesign 9 months ago

    Well described ❤ Appreciate ❤

  • @m.sierra5258
    @m.sierra5258 1 year ago +3

    I wish the source videos were higher quality... Especially the NeRF part is painful to watch.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago

      Ah yeah, I see that now. Unfortunately I screen-captured at 1080 (my monitor's max) and then upscaled to match the NeRF/Gaussian videos (4K), and had my Premiere preview set to 1/8 res. Whoops. I'll fix it for future ones - thanks for pointing it out!

  • @JustinDeRosa
    @JustinDeRosa 1 year ago

    This is nuts for set design, remodels... Be interesting to see what could be done with scopes for plumbers, both doctors and the ones with the butt crackin.

  • @ruperterskin2117
    @ruperterskin2117 1 year ago

    Cool. Thanks for sharing.

  • @robinhouston788
    @robinhouston788 1 year ago +2

    Nice comparison! And finally, someone pronounces "Gaussian" correctly. Tired of these other YouTube "gawzhin splatting" experts.

    • @jurandfantom
      @jurandfantom 1 year ago

      Drop those sources/channels, as I've never heard anybody say Gaussian any other way than in this video - you just found yourself in the wrong place :)

  • @nicolassuarez2933
    @nicolassuarez2933 2 months ago

    Outstanding! How about size comparison? Thanks

    • @MatthewBrennan
      @MatthewBrennan  2 months ago

      like a comparison of the data/size on disk? 3DGS and photogrammetry are more lightweight than NeRF - particularly if you want to run it in real-time, where NeRF is very taxing on your GPU.

  • @SpikeySlayer
    @SpikeySlayer 4 months ago

    The subject is actually a star destroyer but shhht, it's a rock

  • @serk_la_patata_espacial
    @serk_la_patata_espacial 1 year ago +1

    Is it just me, or does the video quality look like 480p from about 8:30?
    I can't judge the quality properly because it looks like an upscaled 480p video.
    Aside from that, the video is very interesting.

  • @broslyons8045
    @broslyons8045 5 months ago +1

    That was good man - thanks

  • @sun-door
    @sun-door 1 month ago

    If you have an RTX-series GPU, may I suggest "NVIDIA Broadcast" for your mic. It has amazing background noise suppression that works in real time and doesn't reduce the quality by much at all, especially compared to hearing air vents or water leaks.

    • @MatthewBrennan
      @MatthewBrennan  1 month ago +1

      Thanks for the suggestion - I'll check Broadcast out. I've learned a lot about recording videos at home since making this video.

  • @nosyb
    @nosyb 1 year ago

    Very cool work thanks!

  • @aubydauby
    @aubydauby 1 year ago +2

    Is this purpose-driven for a particular field? I've always been fond of the intersection between geospatial tech and the broader CS/gaming world.

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +1

      Right now, I think the primary application is in virtual production. This is a relatively new method, so I'm sure as it evolves, new applications will develop. At the moment it is not a straight replacement for any existing digitization or visualization technology.

  • @ryukisai99
    @ryukisai99 7 months ago

    Thanks for the good video and for providing the nice dataset. What is the focal length of your drone's camera (full frame equivalent)?

    • @MatthewBrennan
      @MatthewBrennan  7 months ago +1

      35mm equivalent is ~28mm. It's a 1" CMOS 20mpx sensor (for still images). However this dataset uses 4k video.

    • @ryukisai99
      @ryukisai99 7 months ago +1

      @@MatthewBrennan Thanks for your answer. I'm trying to run your dataset through MicMac photogrammetry. I'll let you know if I get good results!
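    For anyone reproducing this in photogrammetry software, the ~28 mm figure above is the full-frame equivalent; the physical focal length follows from the 1-inch sensor's crop factor of roughly 2.7:

        f_{\text{physical}} \approx \frac{f_{\text{35mm eq}}}{\text{crop factor}} = \frac{28\ \text{mm}}{2.7} \approx 10.3\ \text{mm}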

  • @summer.vfX-trem
    @summer.vfX-trem 11 months ago

    Great video, except that the definition is not good even at 2160p; it appeared blurry?

  • @khairummaksudahoqueadeeba9911
    @khairummaksudahoqueadeeba9911 9 months ago

    Hi Matthew! Thanks for this video. I'm new and a total noob to this field. I'm a marketer, and in my line of work I'm having to learn a lot of these things, including reality capture, photogrammetry, NeRFs, 3DGS, and digital twins. Do you have any educational videos about these topics that would help a beginner like me understand the basics?

  • @AyushBakshi
    @AyushBakshi 1 year ago

    Why is the video blurred at 1440p?

  • @AlisonBLowndes
    @AlisonBLowndes 8 months ago

    Hi Matt, are you testing with NV Omniverse? Great video!

    • @MatthewBrennan
      @MatthewBrennan  8 months ago

      No. I tried it about a year ago but didn't find it very compelling.

  • @lukassarralde5439
    @lukassarralde5439 8 months ago

    Hi Matthew. Great video explanation. Which drone did you use for this test? Do you by any chance have any more drone footage? Have you used the DJI Mavic 3 Pro Cine for photogrammetry? Thanks.

    • @MatthewBrennan
      @MatthewBrennan  8 months ago

      I used a Mavic 2 for this model. I have used a number of different drones for photogrammetry in the past, but haven't tried the Mavic 3 yet, although I don't think the Cine model adds anything particularly useful for traditional photogrammetry.

  • @AlexanderBukh
    @AlexanderBukh 1 year ago

    Great job, subbed. 🎉

  • @RolandHa23
    @RolandHa23 9 months ago

    Where is the panorama sphere texture that you used at the end coming from?

    • @MatthewBrennan
      @MatthewBrennan  9 months ago

      It’s a panorama I took using a UAV directly above Church Rock.

  • @rowanw5912
    @rowanw5912 1 year ago

    Good video, but I have some tips. 1: Get a capture card so your system usage doesn't affect quality; it's hard to tell which method is better when the resource-intensive ones are in 240p. And 2: either write a script or, if you want to maintain your natural manner of speaking, do a dry run first. Go through all your talking points once, then immediately start recording and do it again. It should help you move along a little faster and keep your pauses and "ums" to a minimum.

  • @Apollotwente
    @Apollotwente 9 months ago

    Hello Matthew, thank you very much for your response. I assume I need to create an mp4 file if I want to capture a Gaussian splat. What are the settings? I am really a novice at this. Previously I was shooting at 30 fps. Is it worth setting it to 60 fps? Thanks in advance.

    • @MatthewBrennan
      @MatthewBrennan  9 months ago

      In my experience, still images (photographs) work much better than video! Follow good photogrammetric practice for capture, and then process as a 3DGS.

  • @kozyboiiii1341
    @kozyboiiii1341 1 year ago +1

    Matthew, may I know what your computer specifications were when creating this?

    • @MatthewBrennan
      @MatthewBrennan  1 year ago +2

      The photogrammetry and NeRF were processed on a desktop computer with a Ryzen 9 3900X CPU + 4070ti GPU. The Gaussian Splatting was processed with an nVidia A100 GPU.

  • @simonelorenzoni
    @simonelorenzoni 10 months ago

    Big up my friend!

  • @Buraak_87
    @Buraak_87 10 months ago

    The screen capture of the software is really blurry 😞

  • @matheussalabert392
    @matheussalabert392 1 year ago

    Man, is that a rocket fan?

  • @Apollotwente
    @Apollotwente 9 months ago

    Great video, thank you. Could you advise me on how to scan an object very sharply for Gaussian splatting? As it happens, I can't get it sharp; the letters are not clear. I have an Android phone (Samsung S22), a Nikon Z50, and an Insta360 X3. Which one would be the most accurate? I am using the texture for training purposes. Thanks in advance. Greetings, Sebas

    • @MatthewBrennan
      @MatthewBrennan  9 months ago

      The Nikon Z50 would likely be the best (physical shutter + megapixels), although of course it depends on what lens you are using. In my experience, I get the best results using a high-resolution mirrorless camera (compared to an iPhone or action camera). Of course, more data = longer processing times, so there is always a trade-off or compromise.

  • @thepoppunx
    @thepoppunx 1 year ago +1

    The main problem with GS is that I can't work with the model... I have no geometry to work with. At the end of the day I want a geometric model with a texture that I can modify and use in a 3D environment I create...

    • @MatthewBrennan
      @MatthewBrennan  1 year ago

      Yep - see this video for a discussion of Neural Surface Reconstruction: ua-cam.com/video/qFkCGvscsMQ/v-deo.html
      I don't think NeRFs or GS will replace meshes anytime soon, but I do like Gaussian Splat point clouds for video rendering: ua-cam.com/video/Mi27jpUC5nU/v-deo.html

  • @aeth_y
    @aeth_y 11 months ago

    What is the name of the music used at 14:30?

  • @VerdonTrigance
    @VerdonTrigance 11 months ago

    Hi, you said we can export camera positions from the first method and import them into NeRF to reduce the amount of work for the neural network and speed up the process. Do you know how to do this with Meshroom and COLMAP (or Instant NGP)? Recently I was trying to make a NeRF with those tools, but found that even after 2 days of work it was still processing the data (around 1200 frames from a video).

    • @MatthewBrennan
      @MatthewBrennan  11 months ago

      I use Metashape instead of COLMAP, because COLMAP is very slow and gives subpar results (it’s open source though, which is nice).

    • @VerdonTrigance
      @VerdonTrigance 11 months ago

      @@MatthewBrennan I'll check this as well, thanks.
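    To make the handoff discussed in this thread concrete: instant-ngp (and several other NeRF trainers) reads camera poses from a transforms.json file, so a photogrammetry package that exports intrinsics and per-image camera-to-world matrices can feed it directly, skipping the slow COLMAP step. A minimal sketch with hypothetical pose values; the key names follow instant-ngp's format, but check your trainer's documentation for coordinate conventions and extras such as per-camera intrinsics.

        import json
        import math

        # Hypothetical export from a photogrammetry tool: horizontal FOV in degrees plus
        # one 4x4 camera-to-world matrix per registered image.
        hfov_deg = 62.0
        cameras = [
            {"image": "images/0001.jpg",
             "c2w": [[1, 0, 0, 0], [0, 1, 0, -5], [0, 0, 1, 2], [0, 0, 0, 1]]},
            # ... one entry per frame
        ]

        transforms = {
            "camera_angle_x": math.radians(hfov_deg),  # horizontal FOV in radians
            "aabb_scale": 16,                          # instant-ngp scene-extent hint
            "frames": [
                {"file_path": cam["image"], "transform_matrix": cam["c2w"]}
                for cam in cameras
            ],
        }

        with open("transforms.json", "w") as f:
            json.dump(transforms, f, indent=2)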

  • @blender_wiki
    @blender_wiki 11 months ago

    Tip: you can transfer synthetic data generated by one method to another method with very nice results. We do this really often, and it's incredible how you can solve problems and improve the final result by injecting synthetic data into the original data set. You must try it.