Is NeRF the End of Photogrammetry?

  • Published 18 Dec 2024

COMMENTS •

  • @InspirationTuts
    @InspirationTuts  1 year ago +4

    The first 1,000 people to use the link will get a 1-month free trial of Skillshare skl.sh/inspirationtuts07231

  • @Dilligff
    @Dilligff 1 year ago +31

    I can't see why the two can't work off of each other. NeRF still requires images to be taken, like photogrammetry, just far fewer, and generates a full 3D view in high fidelity. Why not, then, extrapolate the additional reference points from the generated view to aid in creating meshes without artifacts? I don't see one replacing the other so much as them working in tandem for maximum results.

  • @Shura86
    @Shura86 1 year ago +31

    bro's bouta start a war with that thumbnail

  • @naninano8813
    @naninano8813 1 year ago +8

    Come to think of it, Gaussian splats can be used directly for other things like rigid body physics simulation (it's easy to calculate object intersections or estimate the center of mass - you don't need meshes for that). Once animation is solved, I can totally see meshless game engines popping up in the next hundred years.
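
A minimal sketch of the mesh-free computation described in the comment above, assuming hypothetical `means`, `opacities`, and `scales` arrays loaded from an already-trained splat scene: the center of mass is just an opacity- and volume-weighted average of the splat means, with no mesh ever built.

```python
import numpy as np

def splat_center_of_mass(means, opacities, scales):
    """Estimate a center of mass directly from Gaussian splat parameters.

    `means` (N, 3), `opacities` (N,) and `scales` (N, 3) are assumed to come
    from an already-trained splat scene; no mesh is constructed at any point.
    """
    volumes = np.prod(scales, axis=1)          # rough per-splat extent
    weights = opacities * volumes              # how much "mass" each splat carries
    return (weights[:, None] * means).sum(axis=0) / weights.sum()

# Toy usage with random splats clustered around the origin.
rng = np.random.default_rng(0)
center = splat_center_of_mass(
    means=rng.normal(size=(1000, 3)),
    opacities=rng.uniform(0.1, 1.0, size=1000),
    scales=rng.uniform(0.01, 0.05, size=(1000, 3)),
)
print(center)
```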

  • @iamgnud1092
    @iamgnud1092 1 year ago +12

    But isn't NeRF (or similar AI that does the same) also a form of "photogrammetry"?

  • @DirkTeucher
    @DirkTeucher 1 year ago +22

    There is no way NeRF can replace photogrammetry... But Neuralangelo or Neuralangelo 2.0 most likely will :D ... I've been making NeRFs for about a year now, since it first came out, and the quality is just not that useful in most typical 3D pipelines if you want high-quality 3D models and texture accuracy. But Neuralangelo looks like it can pump out some nice geometry + textures. I am very excited about getting my hands on it.

    • @merseyviking
      @merseyviking 1 year ago +3

      Could you use NeRF as an input to photogrammetry? Never again would you miss that important shot of the underside of a model.

    • @DirkTeucher
      @DirkTeucher 1 year ago +5

      @@merseyviking No, the image quality that you would be sending to photogrammetry would be much worse than just using photos. And NeRF does not create objects out of thin air; it must be able to see the object to reconstruct it. The cool thing that NeRF does that photogrammetry cannot is reflections. You can record the ocean, windows, cars, even reflections in mirrors if you are smart about it (a sketch of the view-dependent query that makes this possible follows this thread).
      However, perhaps you mean combining the tech to make something better. If so, the NeRF math and photogrammetry math can potentially be combined, and that is something that is already being researched.

    • @MrGTAmodsgerman
      @MrGTAmodsgerman 1 year ago +2

      @@DirkTeucher I see a lot of people saying that reflection is something NeRF can handle. But it only seems to be the case for the NeRF view, not if you convert it to a mesh. I used Luma AI a lot recently and the reflections were all nearly as bad as with a photogrammetry approach.

    • @DirkTeucher
      @DirkTeucher 1 year ago

      @@MrGTAmodsgerman Yeah that is true.
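
To ground the reflections point in this thread: a NeRF is queried with both a 3D position and a viewing direction, which is what lets it reproduce view-dependent effects such as reflections that a single textured mesh cannot. Below is a minimal numpy sketch of the volume-rendering step along one camera ray, assuming a hypothetical `query_field(points, view_dir)` returned by an already-trained model.

```python
import numpy as np

def render_ray(origin, direction, query_field, near=0.1, far=4.0, n_samples=64):
    """Composite a color along one camera ray with the standard NeRF quadrature.

    `query_field(points, view_dir)` is a hypothetical trained model returning
    (densities, colors); feeding it the view direction is what makes
    reflections and other view-dependent shading representable.
    """
    t = np.linspace(near, far, n_samples)               # sample depths along the ray
    points = origin + t[:, None] * direction            # (n_samples, 3) positions
    sigma, rgb = query_field(points, direction)         # densities and colors
    delta = np.full(n_samples, (far - near) / n_samples) # uniform step size
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)          # composited RGB

# Toy field: a fuzzy ball whose color depends on the viewing direction.
def toy_field(points, view_dir):
    sigma = np.where(np.linalg.norm(points, axis=1) < 1.0, 5.0, 0.0)
    rgb = np.tile(np.abs(view_dir), (len(points), 1))
    return sigma, rgb

print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]), toy_field))
```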

  • @rjv2395
    @rjv2395 1 year ago +6

    Lots of blah blah about NeRFs, but no practical explanation of which engine to use, how to upload, formats, etc.

  • @matthewendler5232
    @matthewendler5232 1 year ago +1

    What video is being referenced at 4:40?

  • @looneytuned777
    @looneytuned777 1 month ago +1

    It's amazing that you managed to make an 11-minute video that said so much and so little at the same time.

  • @ge2719
    @ge2719 1 year ago +8

    Would be interesting to see how this may be able to take 2D video content and make it viewable in 3D in VR. Like, imagine being able to watch a sitcom and feel like you're there in the room with them.

    • @GamingShiiep
      @GamingShiiep 8 months ago

      That would be awesome. I believe the biggest issue there is that *dynamic* scene reconstruction itself is already incredibly difficult with conventional methods (such as photogrammetry). It's new to me that NeRFs could do that now.

  • @hchattaway
    @hchattaway 10 months ago

    Many of the issues with photogrammetry mentioned here can be pretty easily overcome. RealityCapture, which Epic Games recently bought to let game developers create assets for UE, has some great features for modeling shiny objects easily. And the latest version is MUCH faster now. What takes Meshroom 8 hours to do, RC can do in under 2 hours.

  • @mujahidfaruk2152
    @mujahidfaruk2152 1 year ago

    Can anybody tell me what device the girl used with her phone attached? What is the smartphone's name? 3:01 to 3:15

    • @romantenger2328
      @romantenger2328 1 year ago

      Revopoint POP; it's not photogrammetry, it is an infrared 3D scanner.

  • @gaba023
    @gaba023 1 year ago +4

    Now I need a deeper dive into the subject. Mostly what I got out of this is that NeRF is AI-enhanced photogrammetry that uses a different file format. It has to be more complicated than that. There must be a reason the people that developed NeRF didn't use an object mesh as output. Isn't the situation similar to a RAW image format like DNG vs. some other proprietary format like CR2 (Canon) or RAF (Fuji)?

    • @matthewharrison3813
      @matthewharrison3813 1 year ago

      A NeRF is a fundamentally different way to represent a scene; it's not just a different way to import a scan. To convert a NeRF to a traditional mesh you need to go through a process of similar difficulty to processing a photoscan in the first place (a sketch of one such extraction step follows this thread).

    • @schultzeworks
      @schultzeworks 1 year ago

      Agreed. It was exceptionally superficial and just said 'game-changer' a few times. How do you create and edit this data? Not covered.
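
One common way to turn a NeRF into a traditional mesh, which is roughly the nontrivial conversion step this thread refers to, is to sample the learned density on a regular grid and run marching cubes over it. A minimal sketch, assuming a hypothetical `query_density(points)` from an already-trained model and using scikit-image's marching cubes:

```python
import numpy as np
from skimage import measure

def nerf_to_mesh(query_density, resolution=64, bound=1.0, threshold=25.0):
    """Sample a trained NeRF's density on a grid and extract a triangle mesh.

    `query_density(points)` is a hypothetical function that returns one density
    value per (x, y, z) sample from an already-trained model.
    """
    axis = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    densities = query_density(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Marching cubes turns the density volume into vertices and triangle faces.
    verts, faces, normals, _ = measure.marching_cubes(densities, level=threshold)
    # Map voxel indices back into world coordinates.
    verts = verts / (resolution - 1) * 2 * bound - bound
    return verts, faces

# Toy density: a solid sphere of radius 0.5, just to exercise the pipeline.
verts, faces = nerf_to_mesh(lambda p: np.where(np.linalg.norm(p, axis=1) < 0.5, 50.0, 0.0))
print(verts.shape, faces.shape)
```

The geometry this produces is only as good as the learned density field, which is why extracted NeRF meshes often need as much cleanup as a raw photogrammetry scan.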

  • @ArtwithAmarBrisco
    @ArtwithAmarBrisco 1 year ago +4

    I would say this would be the evolution of photogrammetry, because it saves time and will get better over time a lot faster, whereas the other is at its peak and cannot improve as fast. The only thing it will be able to do is create more detailed models, but that requires more work and better tech.
    NeRF would also be great for indie companies and creators, so I would say, if this is accepted by individuals and industry, it would be a better option, especially since Nvidia and others are already working on this tech.

  • @aresaurelian
    @aresaurelian 1 year ago

    As long as the data exists in the training databases, or even in real-time-updated data sets for scientific purposes, the generative transformers can recreate, using standard models, the attributes and effects of the missing parts, or do so by direct request of the user. For the gaming industry, it means that if any part of our cosmos has been measured and scanned with various methods of source collection (images, lidar, telescopes, microscopes), these generative transformers can manifest your requested objects or subjects immediately at run time. A combination of many sets used in the training data is good, but it requires strong AI models to drive it until science can optimize the systems further.

  • @laigamer9719
    @laigamer9719 1 year ago +2

    Bro, never stop making Blender videos... we love it ❤️😊

  • @jarmida6371
    @jarmida6371 1 year ago

    03:28 That "desired object" is not an object of desire.

  • @SilverEye91
    @SilverEye91 1 year ago +6

    It has the potential, sure. But just like with any emerging tech like this, you always just see the best results from the absolute best material. In most real-life cases the results are not good enough to be used, or it's a real headache to get there. It's interesting technology for sure, but I am really, really hesitant to call it anything but potential for the future. It may be future tech, but the future is not now.

  • @insaniaac
    @insaniaac 1 year ago +1

    Can someone explain photogrammetry? What are other terms for it?

    • @timothy209
      @timothy209 1 year ago +2

      It's basically the process of scanning real-life objects and converting them into 3D models

    • @merseyviking
      @merseyviking 1 year ago +4

      It's several processes chained together. But essentially it figures out where you took each photo from, then takes pairs of photos to generate a stereo image just like your eyes do. From there it can calculate the distance of each pixel in each photo. Next it creates a point cloud from that depth information. Finally it turns that into a mesh and textures it from the photos. That is an oversimplification, but that's the general workflow (a sketch of the depth step follows this thread).

    • @Dilligff
      @Dilligff 1 year ago +1

      Other terms: sorcery, voodoo, magic, and witchery. What it does is compare multiple images of an object from different angles and extrapolate geometric data from the results based on tracked reference points. It basically already uses a less advanced kind of AI in order to work. NeRF just adds another layer of AI to it that allows it to 'fill in the blanks' based on a trained data set of similar objects and stores that information in shorthand.
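
The depth step in the workflow @merseyviking describes above boils down to triangulation: once two camera poses are known, the shift (disparity) of a matched point between a rectified pair of photos gives its distance. A minimal numpy sketch of just that step, assuming a rectified stereo pair with known focal length and baseline:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert matched-point disparities from a rectified stereo pair into depth.

    depth = focal_length * baseline / disparity, so a larger shift between the
    two photos means the point is closer to the cameras.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # zero disparity = infinitely far
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Toy example: a 720 px focal length, cameras 10 cm apart, three matched points.
print(disparity_to_depth([36.0, 18.0, 9.0], focal_px=720.0, baseline_m=0.10))
# -> [2. 4. 8.]  (metres): halving the disparity doubles the estimated depth
```

Full photogrammetry packages do this densely for every pixel across many image pairs and then fuse the results into the point cloud mentioned above.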

  • @BacatauMania
    @BacatauMania 1 year ago +2

    Can someone explain to me why LiDAR isn't used over photogrammetry already? Like, today.

    • @AntonisDimopoulos
      @AntonisDimopoulos 1 year ago +1

      Because of reflections.

    • @coyohti
      @coyohti 1 year ago +3

      I would take a guess that the reason might be that easily accessible LiDAR tools (for instance, the LiDAR option in Polycam) do not yield particularly useful results. The resulting mesh is lacking in detail and overall "blobby" looking. Compare that to photogrammetry, where one can use anything from a phone to a DSLR to capture source images, and the result is generally as good as your gear, and patience, can handle.

    • @stealth2951
      @stealth2951 1 year ago +1

      LiDAR is used, but good units are very expensive.
      For example, the LiDAR on an iPhone is not a great option for making something very detailed. It will be close, but nothing that could be used close up in a scene.
      The good LiDAR guns can handle close-up work; they are made for that (I think they start around $4,000).
      Whereas with photos, basically everyone has access to a decent camera. You can take a bunch of pictures to get the details you want.
      So that's why it's popular. It is very cheap and basically gets the same results as the expensive LiDAR units.
      It basically just comes down to cost. If a really good LiDAR gun were like $50, LiDAR would be more popular and used way more than photos.

    • @merseyviking
      @merseyviking 1 year ago +1

      Price mostly. You can hack something together with a Roomba LiDAR and some electronics skills, but the high-end ones used in engineering are much more accurate.
      As someone else pointed out, reflections can be an issue with LiDAR, but they are also a problem with photogrammetry.

  • @imnotryuu6391
    @imnotryuu6391 1 year ago

    How different is this from Polycam?

  • @grproteus
    @grproteus 1 year ago

    At 3:15, you show the use of depth sensors with real-time capture and label that as photogrammetry. This is not photogrammetry, because it does not calculate depth based on parallax pairs.

  • @CharlesVanNoland
    @CharlesVanNoland 1 year ago +1

    Pretty sure Gaussian splats are the end of NeRFs. I think you're a year or so behind the times! Maybe there will be a transformer-like advancement in NeRFs, but given how to-the-point and efficient Gaussian splatting is, I am not particularly inclined to believe that it's possible.

  • @DamageNando
    @DamageNando 1 year ago +6

    I felt like you said a lot and also said very little at the same time.

  • @googleyoutubechannel8554
    @googleyoutubechannel8554 1 year ago +1

    Imagine if researchers put any time into AI models that could take garbage photogrammetry point clouds, which are basically useless as 3D objects, and turn them into usable, reasonable geometry. NeRF doesn't look to be helpful in this respect; 'radiance fields' are even more esoteric, fragile, inflexible types of data structures.

  • @elliotjackson1
    @elliotjackson1 1 year ago

    They might need to rethink that brand name though. It sounds familiar.

  • @Madlion
    @Madlion 1 year ago

    No, because you can't change materials or manipulate surface shaders, can't make something burn, etc. But it's OK for visualization, a good replacement for a traditional point cloud.

  • @filmalchemy9949
    @filmalchemy9949 1 year ago +3

    3:24 did you just objectify a woman? Jk XD

    • @zaionDoe
      @zaionDoe 1 year ago

      The comment of the month!

  • @khalidmounir3475
    @khalidmounir3475 1 year ago

    Which free software can we use for NeRF?

  • @KrazyKaiser
    @KrazyKaiser 1 year ago +1

    Glad you are addressing the issue of ethical data gathering for AI training; it's something A LOT of people just gloss over when it comes to talking about AI technologies.

  • @humphrey_bear_MDC
    @humphrey_bear_MDC 7 months ago

    Google Maps needs NeRF. The Google trees look wonky in 3D.

  • @Overall4k
    @Overall4k 1 year ago

    Brother, please do a review of the Launch Control car addon. Bro, tell us how it looks ❤❤❤❤❤❤

  • @shanester1832
    @shanester1832 1 year ago

    Is any work being done in this general field that would allow breaking core photogrammetry rules? I'm referring to changing shadows, or the object moving amid the light sources instead of being fixed while the camera moves around it.
    Example: a museum has 20 pics of an artifact, but they weren't taken with photogrammetry in mind; all the info is there, but it can't be combined in a conventional way.
    I recently became aware of One-2-3-45 and the one-pic-to-3D-object concept. The dream program is one step beyond that, where instead of AI filling in the gaps with guesses, it references angles from other photos.
    Alt approaches? Pipe dream?

    • @bricaaron3978
      @bricaaron3978 1 year ago

      If all of the info is there, why couldn't a 3D representation be generated?

    • @shanester1832
      @shanester1832 1 year ago

      @bricaaron3978 If you walk around an object and take 50 pics, photogrammetry will work.
      If you move the object around or change the light direction, it's unusable. I'm no tech expert on this, but it works on how light reacts with the object: how deep the shadows go, where they fade, showing depth. When everything is constant, it can match and overlap the photos using shadows and reference points.
      If something got bumped even a little bit, it throws everything off and the photogrammetry will fail completely or give very distorted results.

    • @bricaaron3978
      @bricaaron3978 1 year ago

      @@shanester1832 Thanks. I had assumed that algorithms were advanced to the point that static lighting wasn't necessary.

    • @shanester1832
      @shanester1832 1 year ago

      @bricaaron3978 Unfortunately not. NeRF was new to me and I thought it could be what I was looking for, but it relies on the same photogrammetry base rules.

  • @scottgarriott3884
    @scottgarriott3884 1 month ago

    Very interesting, but I found nothing in here to convince me that it is "more accurate" than photogrammetry. I suppose it can create models that appear more detailed, but this is not the same as accuracy. In industries where accurately measuring real things is important, I doubt NeRF is the tool of the future.

  • @ThoughtsFew
    @ThoughtsFew 1 year ago +1

    is peanut butter the end of jelly?

    • @fingfufar9878
      @fingfufar9878 1 year ago +1

      To be fair, I would take peanut butter, but I prefer jelly in some moods still

  • @bricaaron3978
    @bricaaron3978 1 year ago

    "Scary and exciting". Yes, that is what it is. It's not good or useful for anything other than the creation of fiction. It's scary that anyone might think this is useful for recording reality, or attempt to use it in such a way.

  • @somethingsomethingsomethingdar

    Damn, couldn't they have found a way to call it Narf?

  • @binyaminbass
    @binyaminbass 1 year ago

    This video has only been out for a month and now there's Gaussian splatting, which people seem to say is even better than NeRF. What's going on?!

  • @Ludak021
    @Ludak021 1 year ago

    This is already old tech. Gaussian splatting is the new thing.

  • @DaveBjornRapp
    @DaveBjornRapp 1 year ago

    No, NeRF will not be the end of photogrammetry; Gaussian splats will do it. Oh, and they capture motion too, if you shoot the subject correctly... you can even bring them into Unreal Engine.

  • @Filtersloth
    @Filtersloth 1 year ago

    You can tell this script was written by ChatGPT

  • @ThreeFourEqualsSeven
    @ThreeFourEqualsSeven 1 year ago +1

    It's Nerf or nothing

  • @RomanBuehler
    @RomanBuehler 1 year ago

    eh, photogrammetry is here to stay... with photogrammetry you can do precise scientific measurements... you can't do measurements at all with NeRF... and that's especially the strength of photogrammetry, which is why it will never go away...
    but, NeRF is fun, so... it's also here to stay, but in completely different application fields...

  • @nicholaspostlethwaite9554
    @nicholaspostlethwaite9554 1 year ago

    All depends. It all seems a bit like better visual trickery on worse models. Bear in mind the fuss about AI looking at images made by others. Stealing real-world objects to convert to a digital form is the same.

  • @davideghirelli4453
    @davideghirelli4453 1 year ago

    The answer is no, at least as long as NeRF can't provide 3D geometry.

  • @RogerGarrett
    @RogerGarrett 1 year ago +1

    It seems like you just keep repeating yourself on the comparative points over and over again.

  • @4per8
    @4per8 1 year ago +3

    It's this or nothing
    (the joke is that the slogan of the toy weaponry company Nerf is "It's Nerf or Nothing" and this is called NeRF)

  • @cholasimmons
    @cholasimmons 1 year ago

    HFS 😱

  • @punkpin
    @punkpin 1 year ago

    It's Nerf or nothing.

  • @ciankiwi7753
    @ciankiwi7753 1 year ago

    nerf or nothin?

  • @robot7338
    @robot7338 1 year ago

    3D gaussian splatting go BRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

  • @dpmjmun
    @dpmjmun 6 months ago

    The baity title makes me want to say that yes, photogrammetry will be forever useless and should be criminalized, for long shall live NeRF, the one final solution for every field of science. 3D? NeRF. Animations? NeRF. Cooking? NeRF. Depression? NeRF.

  • @laurent-minimalisme
    @laurent-minimalisme 2 months ago

    Bro, can you explain to me how to do NeRF without photogrammetry? I think you really don't know what you are talking about :)

  • @stalkershano
    @stalkershano 1 year ago

    😊

  • @astrea555
    @astrea555 1 year ago

    god, I hate AI.

  • @exoqqen
    @exoqqen 1 year ago +2

    This was so annoying to watch. I'm new to the topic of NeRF and was looking for some insights into how the technology works, but all you did was give a surface-level overview without any explanations, and whenever you attempted to give an explanation it was just very general and not insightful at all.

    • @j_shelby_damnwird
      @j_shelby_damnwird 1 year ago

      This comment was so annoying to read.
      You can type searches into a search engine, right? 🤦🏻

  • @3dkiwi920
    @3dkiwi920 1 year ago

    NeRF IS photogrammetry, my guy, and 3D scanning is not.

  • @hamburgercheeseburgerwhopper

    omg im so early

  • @oil9151
    @oil9151 1 year ago +2

    First🎉