Hi Wallace, this is an absolutely useful and helpful tutorial when it comes to handheld laser scanning. Great work!
I've just been getting into Reality Capture and am getting good with it. Would you say using a laser scanner is a better way to go than the traditional route of taking a lot of photos?
I’ve asked myself the same. Probably the combo as he did…
There is no single answer. In this case the handheld laser scanner did a better job because the metallic surface was shiny, which is hard to solve with photogrammetry. But photogrammetry gives the best textures, and no laser scan comes even close.
@@Vassay I see. Can you say which laser scanner you prefer, for someone who might be looking to try one out?
@@JonathanWinbush Sadly, I don't own any (photos only), but maybe someone else would like to chime in.
Scanning spray and cross-polarization are two other great and more affordable options.
Great tutorial! How do you handle the situation when the model from photogrammetry is not correctly scaled?
Hi, it's basically the same situation as in the tutorial, as the photogrammetry model is not scaled. You will need to scale it beforehand.
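One common way to do that pre-scaling is to measure a known distance on the real object (between two markers, for instance), measure the same distance in the unscaled photogrammetry model, and multiply all vertices by the ratio. A minimal numpy sketch of that idea, with made-up file names and marker coordinates:

```python
import numpy as np

# Hypothetical values: real-world distance between two markers (metres)
# and the same two marker positions picked in the unscaled photogrammetry model.
real_distance = 0.315
marker_a = np.array([1.42, 0.07, 2.88])   # model units
marker_b = np.array([1.09, 0.11, 2.61])

scale = real_distance / np.linalg.norm(marker_a - marker_b)

# Apply the uniform scale to an N x 3 vertex/point array (hypothetical export).
vertices = np.loadtxt("photogrammetry_points.xyz")
np.savetxt("photogrammetry_points_scaled.xyz", vertices * scale)
```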
It would be great to see a dedicated feature in RealityCapture in future releases that does not require processing in CloudCompare. 3D scanners are getting more and more available, and a lot of users might want to get the excellent texture quality from photogrammetry onto a scanned model. :-)
@@android4cg Thank you for your ideas.
I know it's just a friendly way to say it will never come. ;-)
@@android4cg Just as a note, it is a requested feature and it is already in our feature request database.
If this isn't the most true thing I've ever read
Excellent! What laser scanner was used?
Great tutorial, as everything you make. Thanks for the awesome work. But, correct me if I am wrong, it's not even necessary to go through CloudCompare. I have started doing a lot of this kind of work in RealityCapture recently, using a DotProduct DP-10 mobile LiDAR scanner intended for scanning buildings, scenes and large objects, then adding photos taken with a DSLR for texture and sometimes to reconstruct parts not covered by the point cloud. My workflow is:
1. Import the scans (all of my scans are georeferenced on a local Euclidean grid) and align them to create a component.
2. Import the .jpg photos and align again; they build another component, as they never line up with the scans.
3. Create control points to link both components together.
There is a little issue I have encountered several times, though: when a tie point spans those two different kinds of input, RealityCapture will crash. So I first create tie points, as you do when registering photo components together, which works absolutely fine with photos; but if you try to use the same tie points across a laser-scan component and a photo component, it crashes the program. My workaround: after setting up the tie points that should tie the photos to the scans, select the component containing only the scans, select a tie point and change it to ground control, assign the coordinates shown as its actual position (referring to its position in the selected component, here the laser scans), then delete all the .lsp files related to the ground control and merge the components; the photos and scans will then end up in one component. Depending on the scans I already use 36h11 tags, which are recognised by DotProduct and let me extract their local positions directly via DP, so I only have to assign those to the easily detectable targets in the photos, which makes both sets line up.
So essentially I create tie points in both components, set one component to georeferenced, change the tie points to ground control, extract their positions in the laser scan and assign them to the photo component (the sketch after this comment illustrates that bookkeeping step). I would really love to discuss this approach with you; is there a way to get in contact with you?
Best regards, Ascan
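To make the bookkeeping step of the workaround above easier to follow, here is a rough sketch of turning tie-point positions read out of the laser-scan component into a ground-control file. The tie-point names and coordinates are invented, and the exact column order depends on the import settings chosen in RealityCapture, so treat it as an illustration only:

```python
import csv

# Hypothetical tie points with their positions as read from the
# georeferenced laser-scan component (local Euclidean grid, metres).
tie_points = {
    "point 1": (12.482, 3.915, 1.207),
    "point 2": (11.903, 4.277, 0.644),
    "point 3": (13.050, 3.660, 0.981),
}

# Write a simple "name,x,y,z" file; map the columns to whatever
# ground-control import settings you use in RealityCapture.
with open("ground_control_from_scans.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name, (x, y, z) in tie_points.items():
        writer.writerow([name, x, y, z])
```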
Hi Ascan, thank you for your kind words. This workflow was created for handheld laser scans, which cannot be imported into RealityCapture. When you import the laser scan you mentioned, do you get multiple scanner positions? This is quite an unusual workaround, but it could work in some cases. Basically, after using control points it shouldn't crash; it should just merge those components. Are you also using georeferenced images? If so, this could cause some issues, as the coordinates will be quite different.
Amazing video !! Thank you!
Can I assume you performed the laser scan at the same time as the photogrammetry, that is, photo on side1 then laser scan on side1, and then side2 and repeat?
As this is just a tutorial, the hammer was scanned just as you see it: standing.
Great work!
Hi, does it support the DJI L1 LiDAR gimbal?
Hi, generally a similar workflow could be used.
Hello, I have a question I would like to ask: how do you ensure that the generated model is the same size as the scanned model during photo processing?
The models were aligned in CloudCompare, so even if the sizes were different before, they are the same after this step. Also, you can use a scale bar or markers with known coordinates to obtain the photogrammetry model at the correct size.
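For those curious what that alignment step does under the hood, aligning with scale adjustment is essentially a least-squares similarity fit between corresponding points. The sketch below is only a rough illustration of that idea (not CloudCompare's or RealityCapture's actual implementation), using hypothetical marker coordinates picked in both models:

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares scale s, rotation R, translation t so that dst ~ s * R @ src + t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Hypothetical corresponding marker positions: photogrammetry model vs. laser scan.
photo = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.1], [0.2, 0.9, 0.0], [1.0, 1.0, 0.4]])
scan  = np.array([[0.00, 0.00, 0.00], [0.33, 0.00, 0.03], [0.06, 0.27, 0.00], [0.30, 0.30, 0.12]])

s, R, t = similarity_fit(photo, scan)
aligned = (s * (R @ photo.T)).T + t   # photogrammetry markers expressed in the scan's frame
```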
TIL cloud compare has a use
Nice, it's actually working, I'm surprised.
I get "Error reading file: hammerwithspray_Model_8.mtl". What's this? I can't find it.
It is a file exported with the model and refers to the material. It should be next to your exported model file.
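If the .mtl appears to be missing, one quick check is to read the mtllib line inside the OBJ and see whether the referenced file actually sits next to it. A small sketch of that check; the OBJ file name here is only an assumption based on the error message above:

```python
from pathlib import Path

obj_path = Path("hammerwithspray_Model_8.obj")   # hypothetical path to the exported model

# An OBJ references its material library with an "mtllib" line;
# the .mtl in turn points at the texture image(s).
with open(obj_path) as f:
    mtl_names = [line.split(maxsplit=1)[1].strip()
                 for line in f if line.startswith("mtllib")]

for name in mtl_names:
    mtl_path = obj_path.parent / name
    print(name, "found" if mtl_path.exists() else "MISSING next to the OBJ")
```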
@@CapturingReality Yes, but I couldn't find it. Thanks for your reply!
@@voqo8730 Can you try to export your model again and check if the file is there?
@ I used the official data. When I opened it in the CloudCompare app it showed this error, and I could not see the .mtl file.
@ My own model has an .mtl file, but for hammerlaserscan.obj I could not find one.
I have downloaded the dataset, but it doesn't include the laser scan?
Hi, which dataset have you downloaded?
@@CapturingReality Hi, the one with the hammer from the link. It has the photos of both sides and the masks, but not the laser scan.
@@Canihelpyouz We have just added the laser scan to the sample dataset page.
@@wallacewainhouse8714 Thanks Wallace!
When I try to texture my model, it says ... INCREASE MAXIMAL TEXTURE COUNT OR MAXIMAL TEXTURE RESOLUTION. How do I do this?
You can do this in the texturing settings under the Mesh Model tab/Mesh Color & Texture/Settings/Default unwrap parameters, where you have options for changing the Maximal texture resolution and Maximal texture count. You can try to change the maximal texture count to a higher number.
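For context, those two settings multiply into a total texel budget, and the warning means the unwrap needs more texels than that budget allows. A back-of-the-envelope sketch of the arithmetic with made-up numbers; it ignores UV padding and island-packing overhead, so the real requirement inside RealityCapture will be somewhat higher:

```python
import math

# Hypothetical numbers: mesh surface area and the texel size you are asking for.
surface_area_m2 = 0.85     # total mesh surface area
texel_size_m = 0.0001      # desired texture detail (0.1 mm per texel)

texels_needed = surface_area_m2 / texel_size_m ** 2

# Texture budget = maximal texture count x (maximal texture resolution)^2
max_resolution = 8192
max_count = 1
budget = max_count * max_resolution ** 2

print(f"needed ~{texels_needed:,.0f} texels, budget {budget:,}")
print("textures of this resolution required:",
      math.ceil(texels_needed / max_resolution ** 2))
```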
@@CapturingReality I got it to texture after I put the model through Simplification... Did I lower the quality by doing this?
@@gladiatormechs5574 Yes, you did. The general workflow is to texture the bigger model, simplify it to a smaller triangle count, and then reproject the texture from the bigger model to the smaller one.
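As a very rough illustration of what reprojecting from the bigger model to the smaller one means (RealityCapture does this properly with textures and visibility checks, not per-vertex as below), here is a nearest-neighbour colour transfer between a dense vertex set and a simplified one, using random stand-in data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical data: dense mesh vertices with per-vertex colours,
# and the vertices of the simplified mesh (no colours yet).
dense_vertices = np.random.rand(200_000, 3)
dense_colors = np.random.randint(0, 256, size=(200_000, 3), dtype=np.uint8)
simple_vertices = np.random.rand(5_000, 3)

# For each simplified vertex, take the colour of the nearest dense vertex.
_, nearest = cKDTree(dense_vertices).query(simple_vertices)
simple_colors = dense_colors[nearest]
```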
that’s deadass rn
all worked
Going through this right now
!