2:17 - Import and align from photos
3:18 - Export registration
3:30 - Laser scans import
5:55 - Import/merge components
7:56 - Merging using Control Points
10:15 - Quick control point alignment
11:25 - Reconstruction
12:31 - Clean up mesh
13:43 - Texturing
Hello, I'm using BLK360 lidar scans and I have them in two formats, .ptx and .e57. RealityCapture is not accepting either of these point cloud formats; the import fails. Can you help me with this?
@@chkiranganesh9920 The .e57 files have to be exported as 'ordered' e57 files, not unordered. It's likely an option in your export software.
GREAT TUTORIAL!
I noticed that the Phantom dataset showed, in some CPs, an error between 1 and 4 px, which caused the red alert icon. I'm wondering if this could cause a distortion or any other kind of problem in the final model. Unfortunately, the video doesn't show the settings used to align the dataset (EXIF, GPS priors, Brown distortion type, and so on). Could you provide more info so we can follow your amazing tutorial in detail?
I'm asking this because we are thinking about buying a Leica BLK360 to support our drone datasets, but we would like to deliver high-precision output for architectural purposes.
thank you!
Hello Sandro, you want to have the lowest error possible. Such a small error should still be OK, but if you have large errors due to misalignments, they can cause double walls and other visual errors in the final model. In this case, though, the errors were lowered by the next alignment, which optimized the previous one. The example in the tutorial was an extreme case because, as I said, the datasets were captured over a long time span. Usually you capture the data during one day, or several days if it is a large project. The default settings are fine; only in this case the detector sensitivity was increased from Medium to High. The GPS priors were disabled because they created a non-georeferenced component. I later georeferenced the whole scene with ground control points.
some quality content here!
About marking the control points: could there be a situation where it would be necessary to mark the same CPs on the laser scans as well?
Hi, sorry for the late response. I did not check the comments for a while, but my answer can still help other users as well. Yes, this can happen; that is why you need to shoot your photos from the same perspectives the laser scanner was positioned at. If you shoot your photos like this, the probability of merging them without control points is very high.
Great tutorial - being able to merge with terrestrial (structured) scans is great. However my company is increasingly moving to mobile (unstructured) LIDAR scanning systems which don't have built in cameras. Is there any way to import and merge unstructured scans in a similar method? I would be very curious to know. Thank you!
Hello Scott, unfortunately, RealityCapture does not support the import of unstructured lidar data right now.
@@CapturingReality Thanks for the quick response - do you think this is something that would be possible in future updates? Releasing this as an option for unstructured lidar would be an incredible feature that no other photogrammetry software offers easily yet.
Hi, great tutorial, it's so rare to find something like this on YouTube. Could you please link all the data you used?
Hello Giulio, unfortunately, we cannot share this data with the public. But you can download free sample datasets from our website www.capturingreality.com/SampleDatasets
Any chance you could do a guide on the workflow with a black and white scan?
Hi, do you mean a laser scan without colors? There are manuals like this one: ua-cam.com/video/KPn1mi-LNww/v-deo.html. It is also about merging laser scans and images, but the principle is the same for uncolored laser scans.
@@CapturingReality thanks for getting back. Yes I have a laserscan without colour. At the time I did not know how the merging of scan and photo works in RC and due to the bad photo quality decided against scanning in colour at all. I managed to get a result but had to use natural control points. I was hoping RC would manage if I'd use greyscale photos for the alignment and coloured for texturing but that didn't work either. But as I said, I managed. Thanks again :)
@@christophherrmann9681 Hi, if you want to use some of your inputs for only one step, you have to set it in the 1Ds view. If you still have problems with this, please contact our support.
Merge components only is no longer a setting in the alignment settings
This video shows the older GUI; there is now a newer interface. But in the Alignment settings there is a tool called Merge components, which only merges components.
I'm still confused about where you are importing the scans to align them. I don't see the point cloud's rig component anywhere at the end.
Hi Nicko, the laser scan import process is shown in the video from 3:30. What kind of laser scan do you have? RC supports only terrestrial laser scans.
Hello, I have a question: I noticed that the first fusion, between the laser scanner cloud and the drone photogrammetry cloud, was performed without control points; the alignment happened automatically. Is that correct, or is it better to perform the alignment using control points, as was done in the last merge with the drone photos taken from above?
NB: As I understood from the video, if the laser scanner cloud and the photogrammetry cloud have good overlap, RealityCapture works well and you can do an automatic alignment; when they do not have good overlap, you proceed as in the last alignment in the video, using control points. If possible, I would like confirmation of this reasoning. Thanks.
Hi, as you wrote, when there is a good overlap between datasets then they are aligned automatically.
@@CapturingReality Thank you for your reply. Taking advantage of your kindness, I'd like to ask something about the merge components command, which now has a dedicated button. I also have a project with a laser scanner cloud and a drone cloud. When I press Merge components, I don't get a single component; two components are created again, one from the drone and one from the point cloud. Is this normal? In the video I notice that Merge components creates only one component. Thanks.
@@198000669 Is it the same when you use Align tool? If so, there is no sufficient overlap between datasets to create one component.
Also, are your data georeferenced? If so, you can use Merge georeferenced components option in the Alignment settings.
@@CapturingReality The laser scanner point cloud is not georeferenced; the drone point cloud is georeferenced.
@@198000669 Then only control points can be used for merging.
Good tutorial, but I have problems with my alignment. I'm using a DSLR and a Leica BLK360 e57 point cloud. I'm following your steps one by one, but at the "merged" component, when I run the align, RC actually makes another 2 components, one with the pictures and one with the laser scan: the same two components I want to align. I checked the option to merge them. Is there another option to look at? Thanks.
Hi, sorry for the late response; I did not check the comments for a while. In the Alignment settings you can change Max features per image from the default 40,000 to maybe 80,000, and Preselector features to half of Max features per image, so in this case 40,000. These settings usually solve any problems for me. Also try setting the detector sensitivity to High. If this doesn't work, try adding more images: shoot them from the same perspectives the laser scanner was positioned at. This way the chance of aligning both into one component will be very high. If you do not have the opportunity to shoot new photos, then you have to use control points or ground control points. If you have only control points, use at least 6; if you have ground control points, use at least 3.
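The rule of thumb in the reply above (Preselector features set to half of Max features per image) can be sketched as a tiny helper; the function and key names here are purely illustrative, not part of RealityCapture:

```python
def suggest_feature_settings(max_features_per_image=80_000):
    """Illustrative sketch of the rule of thumb above: set
    Preselector features to half of Max features per image."""
    return {
        "max_features_per_image": max_features_per_image,
        "preselector_features": max_features_per_image // 2,
    }

# With Max features raised to 80,000, Preselector features becomes 40,000.
print(suggest_feature_settings(80_000))
```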
Hi, just wanted to know if you achieved a good final result merging the BLK360 images and the DSLR camera with Jakub's help, or if you still have problems. I'm thinking about buying a BLK360. Thanks!
Very helpful video. Many many thanks!
Hi, just to know: if I wanted to keep the scan in one position, do I use the lock component functionality?
my scans are not georeferenced and my drone images are georeferenced.
Hi, if you want to keep the position from the data and you are sure about that position, you can use Locked. If you want to keep the positions from the scans, you can set Unknown position for the drone's images.
Great video and tutorial. Thank you.
Hi Jakub, can you please tell me what the accuracy of the dataset is, in terms of using this software in the survey field?
Hi Mehmet, I no longer have this RC project, but I found the overall cloud-to-cloud registration report for the laser scans, and the error was 5 mm. By setting the registration to exact, I locked the relative poses of the laser scans, and the photos aligned to the laser scans. After that I georeferenced the whole project using the 3 ground control points. They were measured with a GNSS receiver, and the accuracy of the measurement was 2 cm in the horizontal plane and 4 cm in the vertical axis. I hope this helps you.
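One rough way to reason about the numbers in the reply above (5 mm registration error, 2 cm horizontal and 4 cm vertical GCP accuracy) is to combine the independent error sources in quadrature (root sum of squares). This is a back-of-the-envelope sketch, not RealityCapture's internal error model:

```python
import math

def combined_error_mm(*errors_mm):
    """Combine independent error sources in quadrature (RSS)."""
    return math.sqrt(sum(e * e for e in errors_mm))

# Registration error 5 mm; GCP accuracy 20 mm horizontal, 40 mm vertical
horizontal = combined_error_mm(5, 20)  # about 20.6 mm
vertical = combined_error_mm(5, 40)    # about 40.3 mm
print(f"horizontal: {horizontal:.1f} mm, vertical: {vertical:.1f} mm")
```

Note how the larger source dominates: the 5 mm registration error barely moves the totals beyond the GCP accuracy itself.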
@@jvanko89 Looks like it needs to be improved a little bit to be used in the survey field.
@@Mehmet_KISSACIOGLU What accuracy would you hope to achieve for your needs? RC is regularly used in surveying. The more accurate the inputs you provide to RC, the more accurate the results you will get. You can provide even more accurate GCPs to RC, and also check points to determine the residuals. When we are speaking about photogrammetry only, the accuracy also depends on the resolution of the model: the resolution is different when you are taking photos from 10 meters away from the object than from only 1 m away.
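The distance-to-resolution relationship in the reply above is what the standard ground sampling distance (GSD) formula captures: GSD = distance x pixel pitch / focal length, in consistent units. The camera parameters below are made-up example values, not from the tutorial:

```python
def gsd_mm(distance_m, pixel_pitch_um, focal_length_mm):
    """Ground sampling distance in mm per pixel:
    GSD = distance * pixel_pitch / focal_length (consistent units)."""
    pixel_pitch_mm = pixel_pitch_um / 1000.0
    distance_mm = distance_m * 1000.0
    return distance_mm * pixel_pitch_mm / focal_length_mm

# Hypothetical camera: 4 um pixel pitch, 24 mm lens
print(gsd_mm(10, 4, 24))  # from 10 m: about 1.67 mm per pixel
print(gsd_mm(1, 4, 24))   # from 1 m: ten times finer detail
```

This shows why closer photos help when chasing a 15 mm target: halving the distance halves the GSD, all else being equal.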
@@jvanko89 Accuracy should be around 15 mm, both horizontally and vertically. Looks like we need to take closer photos, and maybe more control points, to assure that accuracy, right?
My RealityCapture doesn't show the option "merge components only".
Which version do you have? There were some changes in the UX, and you can find the Merge components tool under Alignment tab/Registration.
@@CapturingReality When I click on settings like you did in the video, my version doesn't have the option "merge components only".
@@markinmkn Yes, it is the older UX, but there is the option Merge components next to the Align button, which is basically doing the same thing.
I have 40 GB of RAM. During reconstruction and coloring I get "Not enough video RAM to render". Can't the video processor use the main RAM?
Hi Abrimaal, this message means that the created model is too big to be displayed in the solid/sweet mode in a 3D view. The number of triangles that can be displayed in the solid/sweet mode depends on the video memory of your GPU. The limitations are as follows:
8 million triangles for 1GB VRAM,
16 million triangles for 2GB VRAM,
31 million triangles for 4GB VRAM,
40 million triangles for 6+GB VRAM.
The upper limit is 40M triangles regardless of the GPU you have. However, this is just a display issue. If you exceed the limits above, only a point cloud will be displayed, while the render mode will be preserved.
In this case you can also use a clipping box to display your model by parts. It does not decimate or cut the model, it only displays the triangles inside. You can find more information on how to use the clipping box in the application Help - tile Clipping Box.
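For reference, the display limits listed in the reply above can be sketched as a simple lookup; the function name is illustrative, and VRAM sizes between the listed tiers are assumed to fall to the next lower tier:

```python
def solid_mode_triangle_limit(vram_gb):
    """Triangle display limit in solid/sweet mode, per the
    limits stated above (display only; the model itself is intact)."""
    if vram_gb >= 6:
        return 40_000_000  # hard upper limit regardless of GPU
    if vram_gb >= 4:
        return 31_000_000
    if vram_gb >= 2:
        return 16_000_000
    if vram_gb >= 1:
        return 8_000_000
    return 0  # below 1 GB is not covered by the stated table

print(solid_mode_triangle_limit(8))  # prints 40000000
```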
@@CapturingReality I know I can simplify the model. But can I split the model into 2 or 4 equal parts and see how it looks in full detail?
The app crashes in normal mode, I can reconstruct it only in preview mode.
@@Abrimaal Do you mean in RealityCapture? To see the model in full detail you can use the mentioned tool, Clipping box. You select the part you want to see and it will be shown in full resolution (if it is smaller than 40 M tris). Regarding the crash, can you please contact our support?
Thanks for creating the tutorial, but can you explain the reason for this workflow i.e. why do you export multiple components and then align together with control points, instead of just importing all photos and scans into the same project and aligning them together without control points? I'd like to know for future projects
It is mostly for the speed of computation, and also in this case to show different ways of importing data into RealityCapture.
Why does the workflow have to be so complicated? Why not automate this? And you just removed half of the cross.
Hi Gregory, the laser scan and image data can also be connected without these steps, when there is good overlap between them. But with this process, you can achieve greater merging precision.
@@CapturingReality okay