The first 1,000 people to use the link will get a 1 month free trial of Skillshare skl.sh/inspirationtuts07231
I can't see why the two can't work off of each other. NeRF still requires images to be captured like photogrammetry, just far fewer of them, and generates a full 3D view in high fidelity. Why not, then, extrapolate additional reference points from the generated view to aid in creating meshes without artifacts? I don't see one replacing the other so much as the two working in tandem for maximum results.
bro's bouta start a war with that thumbnail
Naa... Superman and Batman are both shitty characters; they're in the same league.
🤣🤣🤣
@@SuperMyckie the most L take ever
Nah, no one gives a shit about DC.
Come to think of it, Gaussian splats can be used directly for other things like rigid-body physics simulation (it's easy to calculate object intersections or estimate a center of mass; you don't need meshes for that). Once animation is solved, I can totally see meshless game engines popping up in the next hundred years.
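For example, a rough center-of-mass estimate straight from the splat data could look something like this, a minimal sketch in Python where all the splat fields are made-up placeholders (assuming each splat stores a mean, per-axis scale, and opacity):

```python
import numpy as np

# Hypothetical splat attributes (placeholders, not a real splat file format):
means = np.random.rand(10_000, 3)          # splat centers in world space
scales = np.random.rand(10_000, 3) * 0.01  # per-axis extents of each Gaussian
opacities = np.random.rand(10_000)         # alpha in [0, 1]

# Treat each splat as a small ellipsoid of uniform density:
# weight = opacity * ellipsoid volume, then take the weighted mean of the centers.
weights = opacities * (4.0 / 3.0) * np.pi * scales.prod(axis=1)
center_of_mass = (means * weights[:, None]).sum(axis=0) / weights.sum()
print(center_of_mass)
```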
But isn't NeRF (or similar AI that does the same thing) also a kind of "photogrammetry"?
There is no way NeRF can replace photogrammetry.... But Neuralangelo or Neuralangelo 2.0 most likely will :D ... I've been making NeRFs for about a year now, since it first came out, and the quality is just not that useful in most typical 3D pipelines if you want high-quality 3D models and texture accuracy. But Neuralangelo looks like it can pump out some nice geometry + textures. I am very excited about getting my hands on it.
Could you use NeRF as an input to photogrammetry? Never again would you miss that important shot of the underside of a model.
@@merseyviking No, the image quality that you would be sending to photogrammetry would be much worse than just using photos. And NeRF does not create objects out of thin air; it must be able to see the object to construct it. The cool thing that NeRF does that photogrammetry cannot is reflections. You can record the ocean, windows, cars, even reflections in mirrors if you are smart about it.
However, perhaps you mean combining the tech to make something better. If so, the NeRF math and photogrammetry math could potentially be combined, and that is something that is already being researched.
@@DirkTeucher I see a lot of people saying that reflections are something NeRF can handle. But that only seems to be the case for the NeRF view, not if you convert it to a mesh. I used LumaAI a lot recently and the reflections were all nearly as bad as with a photogrammetry approach.
@@MrGTAmodsgerman Yeah that is true.
lots of blah blah about NeRFs, but no practical explanation of which engine to use, how to upload, formats, etc
What video is being referenced at 4:40?
It's amazing that you managed to make an 11-minute video that said so much and so little at the same time.
Would be interesting to see how this might be able to take 2D video content and make it viewable in 3D in VR. Like, imagine being able to watch a sitcom and feel like you're there in the room with them.
That would be awesome. I believe the biggest issue there is that *dynamic* scene reconstruction itself is already incredibly difficult with conventional methods (such as photogrammetry). It's new to me that NeRFs could do that now.
Many of the issues with photogrammetry mentioned here can be pretty easily overcome. Reality Capture, which Epic Games recently bought to allow game developers to create assets for UE, has some great features for modeling shiny objects easily. And the latest version is MUCH faster now: what takes Meshroom 8 hours to do, RC can do in under 2 hours.
Can anybody tell me what device the girl used with her phone attached? And what smartphone is it? 3:01 to 3:15
Revopoint Pop; it's not photogrammetry, it's an infrared 3D scanner.
Now I need a deeper dive into the subject. Mostly what I got out of this is that NeRF is AI-enhanced photogrammetry that uses a different file format. It has to be more complicated than that. There must be a reason the people who developed NeRF didn't use an object mesh as output. Isn't the situation similar to a RAW image format like DNG vs. some other proprietary format like CR2 (Canon) or RAF (Fuji)?
A NeRF is a fundamentally different way to represent a scene; it's not just a different way to import a scan. To convert a NeRF to a traditional mesh you need to go through a process of similar difficulty to processing a photoscan in the first place.
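To give a feel for what that conversion involves, here is a minimal sketch, assuming you already have some trained density function you can query (the `density_fn` below is just a stand-in): sample it on a grid and run marching cubes, then clean up the result much like you would a photoscan.

```python
import numpy as np
from skimage import measure

def density_fn(points):
    # Placeholder for a trained NeRF's density query; here it is just a sphere.
    return np.maximum(0.0, 1.0 - np.linalg.norm(points, axis=-1))

# Sample the density field on a regular grid inside the scene bounds.
n = 128
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
density = density_fn(grid.reshape(-1, 3)).reshape(n, n, n)

# Extract an isosurface; the threshold is a tuning knob, just like photoscan cleanup.
verts, faces, normals, _ = measure.marching_cubes(density, level=0.5)
print(verts.shape, faces.shape)
```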
Agreed. It was exceptionally superficial and just said ‘game-changer’ a few times. How do you create and edit this data? Not covered.
I would say this would be the evolution of photogrammetry, because it saves time and will get better over time a lot faster, whereas the other is at its peak and cannot improve as quickly. The only thing photogrammetry will still be able to do is create more detailed models, but that requires more work and better tech.
NeRF would also be great for indie companies and creators, so if this is accepted by individuals and the industry, it would be the better option, especially since Nvidia and others are already working on this tech.
As long as the data exists in the training databases, or even in real-time-updated data sets for scientific purposes, generative transformers can recreate the attributes and effects of the missing parts using standard models, or do so by direct request of the user. For the gaming industry, it means that if any part of our cosmos has been measured and scanned with various collection methods (images, lidar, telescopes, microscopes), these generative transformers can manifest your requested object or subject immediately at run time. A combination of many sets in the training data is good, but it requires strong AI models to drive it until science can optimize the systems further.
Bro, never stop making Blender videos.. we love it ❤️😊
More to come!
03:28 That "desired object" is not an object of desire.
It has the potential, sure. But just like with any emerging tech like this, you always see only the best results from the absolute best material. In most real-life cases the results are not good enough to be used, or it's a real headache to get there. It's interesting technology for sure, but I am really, really hesitant to call it anything but potential for the future. It may be future tech, but the future is not now.
Can someone explain photogrammetry? What are other terms for it?
It's basically the process of scanning real-life objects and converting them into 3D models
It's several processes chained together. But essentially it figures out where you took each photo from, then takes pairs of photos to generate a stereo image just like your eyes do. From there it can calculate the distance of each pixel from each photo. Next it creates a point cloud from that depth information. Finally it turns that into a mesh and textures it from the photos. That is an oversimplification, but that's the general workflow.
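If you want to see just the "stereo pair to depth" step in isolation, a minimal sketch with OpenCV might look like this (the focal length, baseline, and file names are made-up example values, and it assumes the pair is already rectified):

```python
import cv2
import numpy as np

# Two overlapping photos of the same object, already rectified (hypothetical files).
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: how far does each pixel shift between the two views?
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM outputs fixed-point

# Depth from parallax: Z = focal_length * baseline / disparity (example numbers only).
focal_px, baseline_m = 1200.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```

Repeat that across every overlapping pair and you get the depth maps that feed the point cloud and, eventually, the mesh.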
Other terms: sorcery, voodoo, magic, and witchery. What it does is compare multiple images of an object from different angles and extrapolate geometric data from the results based on tracked reference points. It basically already uses a less advanced kind of AI in order to work. NeRF just adds another layer of AI that allows it to 'fill in the blanks' based on a trained data set of similar objects, and stores that information in shorthand.
Can someone explain to me why lidar isn't already used instead of photogrammetry? Like, today.
Because of reflections.
I would take a guess the reason might be that easily accessible lidar tools (for instance, the lidar option in Polycam) do not yield particularly useful results. The resulting mesh is lacking in detail and overall "blobby" looking. Compared to photogrammetry where one can use anything from a phone to a dSLR to capture source images and the result is generally as good as your gear, and patience, can handle.
Lidar is used, but good units are very expensive.
For example, the lidar on an iPhone is not a great option for making something very detailed. It will be close, but nothing that could be used close up in a scene.
The good lidar guns can handle close-up work; they are made for that (I think they start around $4,000).
Whereas with photos, basically everyone has access to a decent camera and can take a bunch of pictures to get the details they want.
So that's why it's popular: it is very cheap and basically gets the same results as the expensive lidar units.
It basically just comes down to cost. If a really good lidar gun cost like $50, lidar would be more popular and used way more than photos.
Price mostly. You can hack something together with a Roomba LiDAR and some electronics skills, but the high-end ones used in engineering are much more accurate.
As someone else pointed out, reflections can be an issue with LiDAR, but they are also a problem with photogrammetry.
How different is this from Polycam?
At 3:15 you show the use of depth sensors with real-time capture and label that as photogrammetry. That is not photogrammetry, because it does not calculate depth from parallax pairs.
Pretty sure Gaussian splats are the end of NeRFs. I think you're a year or so behind the times! Maybe there will be a transformer-like advancement in NeRFs, but given how to-the-point and efficient Gaussian splatting is, I am not particularly inclined to believe that's possible.
I felt like you said a lot and also said very little at the same time.
Imagine if researchers put any time into AI models that could turn garbage photogrammetry point clouds, which are basically useless as 3D objects, into usable, reasonable geometry. NeRF doesn't look to be helpful in this respect; 'radiance fields' are an even more esoteric, fragile, and inflexible kind of data structure.
They might need to rethink that brand name though. It sounds familiar.
No, because you can't change materials or manipulate surface shaders, can't make something burn, etc. But it's OK for visualization, a good replacement for a traditional point cloud.
3:24 did you just objectify a woman? Jk XD
The comment of the month!
Which free software can we use for NeRF?
Glad you are addressing the issue of ethical data gathering for AI training; it's something A LOT of people just gloss over when it comes to talking about AI technologies.
Google Maps needs NeRF. The Google trees look wonky in 3D.
Brother, please do a review of the Launch Control car addon and tell us how it looks ❤❤❤❤❤❤
Is any work being done in this general field that would allow breaking the core photogrammetry rules? I'm referring to changing shadows, or the object moving among the light sources instead of being fixed while the camera moves around it.
Example: a museum has 20 pics of an artifact, but they weren't taken with photogrammetry in mind; all the info is there, but it can't be combined in a conventional way.
I recently became aware of one-2-3-45 and the one-pic-to-3D-object concept. The dream program is one step beyond that, where instead of AI filling in the gaps with guesses, it references angles from the other photos.
Alt approaches? Pipe dream?
If all of the info is there, why couldn't a 3D representation be generated?
@bricaaron3978 If you walked around an object and took 50 pics, photogrammetry will work.
If you move the object around or change the light direction, it's unusable. I'm no tech on this, but it works on how light reacts with the object: how deep the shadows go, where they fade, showing depth. When everything's constant it can match and overlap the photos using shadows and reference points.
If anything gets bumped even a little, it throws everything off, and the photogrammetry will fail completely or give very distorted results.
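One quick way to see that dependence on matched reference points, as a rough sketch (the file names are placeholders): detect and match features between two photos, then compare the match count for a consistent pair versus a pair where the object or lighting moved between shots. The count usually collapses in the second case, which is roughly why the reconstruction falls apart.

```python
import cv2

# Two photos of the object (placeholders); try a consistent pair vs. one where
# the object or the lighting changed between shots.
img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and descriptors, then match them between the two images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(f"{len(matches)} matched reference points between the two photos")
```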
@@shanester1832 Thanks. I had assumed that algorithms were advanced to the point that static lighting wasn't necessary.
@bricaaron3978 Unfortunately not. NeRF was new to me and I thought it could be what I was looking for, but it relies on the same base photogrammetry rules.
Very interesting, but I found nothing in here to convince me it is "more accurate" than photogrammetry. I suppose it can create models that appear more detailed, but that is not the same as accuracy. In industries where accurately measuring real things is important, I doubt NeRF is the tool of the future.
is peanut butter the end of jelly?
To be fair, I would take peanut butter, but I prefer jelly in some moods still
"Scary and exciting". Yes, that is what it is. It's not good or useful for anything other than the creation of fiction. It's scary that anyone might think this is useful for recording reality, or attempt to use it in such a way.
Damn it, couldn't they have found a way to call it Narf?
This video has only been out for a month and now there's Gaussian splatting, which people seem to say is even better than NeRF. What's going on?!
This is already old tech. Gaussian splatting is the new thing.
No, NERF will not be the end of photogrammetry - Gaussian Splats will do it. Oh, and they capture motion too, if you shoot the subject correctly... you can even bring them into Unreal Engine.
You can tell this script was written by ChatGPT
It's Nerf or nothing
eh, photogrammetry is here to stay... with photogrammetry you can do precise scientific measurements... you can't do measurements at all with NeRF... and that's especially the strength of photogrammetry, which is why it will never go away...
but, NeRF is fun, so... it's also here to stay, but in completely different application fields...
It all depends. It seems a bit like better visual trickery on worse models. Bear in mind the fuss about AI looking at images made by others; stealing real-world objects to convert into a digital form is the same.
The answer is no, at least as long as NeRF can't provide 3D geometry.
It seems like you just keep repeating yourself on the comparative points over and over again.
it's this or nothing
(the joke is that the slogan of the toy weaponry company Nerf is "It's Nerf or Nothing" and this is called NeRF)
Hahahaha, I love it 😂
HFS 😱
It's Nerf or nothing.
nerf or nothin?
3D gaussian splatting go BRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
The baity title makes me want to say that yes, photogrammetry will be forever useless and should be criminalized, for long shall live the NERF, the one final solution for every field of science. 3D? Nerf. Animations? Nerf. Cooking? Nerf. Depression? Nerf
Bro, can you explain to me how to do NeRF without photogrammetry? I think you really don't know what you are talking about :)
😊
god, I hate AI.
This was so annoying to watch. I'm new to the topic of NeRF and was looking for some insight into how the technology works, but all you did was give a surface-level overview without any explanations, and whenever you attempted an explanation it was very general and not insightful at all.
This comment was so annoying to read.
You can type searches on an engine, right? 🤦🏻
NeRF IS photogrammetry, my guy, and 3D scanning is not.
omg im so early
First🎉