Very nice tutorial, thanks a lot.
Looks like very powerful software, but you really, really need to get someone in who understands UX. So many hidden interaction modes, so many toolbars and weirdly named options.
Yup, agreed. I use RealityCapture and know my way around, but it's not particularly intuitive.
There is barely any standardization anywhere in 3D; just recently it seems we at least got somewhere with the USD file format, and you want UIs to become intuitive and easy to use 😂
Not gonna happen; more likely it will become a prompt for an AI rather than clunky options and menus.
Great video. This was just the right length and got to where I needed to be. You advised a photo every 10°, but what is the rule of thumb for capturing buildings? A photo every meter?
For buildings you need to watch the overlap between images. As a basic guideline you can follow the 1:4 rule (at a 4 m distance from the building, move 1 m to the side), which should keep about 75% overlap between the images.
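A minimal sketch of the overlap arithmetic behind that 1:4 rule, assuming a pinhole camera pointed straight at a flat facade; the ~53° horizontal field of view and the `overlap` helper are illustrative assumptions, not from the reply:

```python
import math

def overlap(distance_m, step_m, hfov_deg):
    """Fraction of the frame shared by two photos taken step_m apart,
    assuming a pinhole camera aimed straight at a flat facade."""
    # Width of facade covered by one frame at the given distance.
    footprint = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return max(0.0, 1 - step_m / footprint)

# The 1:4 rule: 4 m from the facade, 1 m sideways per shot. With a
# ~53 deg horizontal field of view the footprint is ~4 m wide, so
# neighbouring frames share about 75% of their width.
print(f"{overlap(4.0, 1.0, 53.0):.0%}")  # -> 75%
```

With a narrower lens the footprint shrinks, so you would need smaller side-steps (or more distance) to keep the same 75%.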
Thanks a lot.
Is it possible to release a new tutorial on drone 3D capture based on the latest RC version?
What do you mean by the latest RC version?
@@chemjblab The latest RealityCapture (RC), I suppose :)
Is it possible?
Hello... a very nice training video.
I am doing everything as you said here, but when I combine all the pictures and masks and do the calculation, unfortunately it models them separately.
How can I fix this problem?
Hi, do you mean that the sides of your object are not aligned into one component?
How is your overlap between images? How many rows have you used to capture your object from both sides? Are the masks applied in your process? If not, do you have the proper naming convention?
@@CapturingReality The number of pictures is at least 100 photos for each side.
I do everything as in the video. When I load the images with the masks, I press the Tab key and check that everything is normal. When I ask it to detect the photos, it calculates the edges separately. It does not combine them and display them as a model.
@@faridmuradov9623 Not just the number of images is important, but also the way they are captured and the overlap between them.
What kind of object have you captured? It would be good to see your project somehow... What do you mean by "When I ask it to detect the photos"?
Is there something similar on both sides of the object you are capturing?
You can also try to merge the components using control points.
Hello, do you know if at some point we will be able to use RealityCapture with AMD graphics? Thanks
Is it normal to need to run the alignment thing multiple times before it spits out a correct alignment? I took all the photos twice with different lighting and backgrounds, thinking that was the issue, but after running the alignment about 5 times it finally worked.
It is not normal, but it can happen with improperly captured datasets.
I have a question about how I can rotate an entire point cloud. Every photogrammetry model I create is skewed
Scene 3D > Tools > Scene Alignment > Set Ground Plane
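If you ever need to do the same levelling outside RealityCapture, here is a minimal scripted sketch of the idea (my own illustration, not an RC feature; the `level_point_cloud` helper is hypothetical): fit the dominant plane with an SVD and rotate its normal onto +Z.

```python
import numpy as np

def level_point_cloud(points):
    """points: (N, 3) array. Rotates the cloud so its dominant plane
    (e.g. the ground) becomes horizontal."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the best-fit plane through the points.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:  # make the normal point upward
        normal = -normal
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = normal @ z  # >= 0 after the flip above, so 1 + c never vanishes
    # Rodrigues' formula for the rotation taking `normal` onto +Z.
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    r = np.eye(3) + vx + vx @ vx / (1 + c)
    return centered @ r.T

# Demo: a flat slab tilted by 20 deg comes back level (tiny Z spread).
cloud = np.random.rand(1000, 3) * [10.0, 10.0, 0.1]
tilt = np.radians(20)
rot_x = np.array([[1, 0, 0],
                  [0, np.cos(tilt), -np.sin(tilt)],
                  [0, np.sin(tilt), np.cos(tilt)]])
print(level_point_cloud(cloud @ rot_x.T).std(axis=0))
```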
I've tried to do everything step by step after you; unfortunately the program didn't recognise that there are two other sides of the object in two different folders. When I did the alignment of both sides, the program showed cameras only above the object. In the end I got a texture with the top and bottom both on the top and nothing at the bottom. Is there any way to make it work right?
Hi, what kind of object have you scanned? Have you used masks? Have you used control points? Were the parts also processed in two different projects?
It all looks nice but when I go to align the Top and Bottom it splits it into 2 components.
In particular, the bottom is not rotated 180°.
The images taken overlap perfectly (there are 1134 shots).
Is it possible to see the alignment settings you used in this tutorial?
Hi, if it is not mentioned, then the pre-defined settings were used.
What kind of object are you capturing?
In cases like that it could help to use control points to merge the components together. Is there a good overlap between top and bottom parts?
@@CapturingReality Hi, thanks for the reply.
There is a great overlap between the two parts (even more than necessary).
The object is a small semi-rigid leather door.
I'm doing some tests and with the control points it works but it leaves some artifacts (nothing that can't be corrected with the brush later but I always try to achieve the best possible result).
I wonder how influential a change in lighting (even if slight) is in the tracing of the parts?
Thanks
Sorry, it's not a door but a bag :)
@@daskydesign846 Have you turned it over to capture it? As it is semi-rigid, its shape could change a little bit, and it could therefore be considered a different object.
Also, the changed light conditions didn't help there.
@@CapturingReality The error I get when joining the two components via control points is that it creates a sharp cut between the two parts.
I wonder if there is a way to join the two parts in a "gradual" way.
A sort of union gradient.
I was thinking of fading the masks to black in the part where I cut.
So that it blends uniformly with the other part.
The problem, I think, is that when I turned the bag it probably deformed a bit, and this is evident when I join the two parts.
Do you think a gradient on all the masks could help?
Or do you have any other suggestions for joining the two parts without showing a sharp cut?
Please tell me why the normal retail button cannot be pressed in RealityCapture.
Do you mean the normal detail button? Have you run the alignment? If so, are the images aligned in one component?
This is really cool. Where can I read the system requirements, alongside the gear requirements for capturing photos that the software can easily detect? Wondering if my Android phone could do it as well as a thousand-dollar camera.
You can find some basic info here: www.capturingreality.com/realitycapture#:~:text=RealityCapture%20DataSheet
Other useful information can be found here: dev.epicgames.com/community/capturing-reality/learning
The results won't be the same for such cameras.
Hello, I have a big problem with this application. I was developing a video game with Unreal, but when setting up the video scan of a room in one chapter, I get this error: "No suitable transform to encode or decode the content was found", and I can't find a way to solve it. I don't know what to do; I've been trying for several days. Also, I can't manage to set the application to Spanish. THANK YOU for offering us these tools all the same, so we can fulfil our dreams.
forums.unrealengine.com/t/8k-video-import-issue/712495
@@CapturingReality I have an AMD 7900 XT graphics card. I see that it is not compatible with the application. 😪
Also, it would be good to have a similar video showing how this model can be exported into Revit.
I suppose Revit works better with point clouds, so all you need to do is create a model and export a point cloud from it.
hi, in "mesh model" i can see "close holes" button, but can't find "open holes". Where i can find it? or may be in "reconstruction" settings somewhere hiding "close holes" dialog ? cant find it(
Hi, there is no such tool as "open holes".
@@CapturingReality Thank you for the fast answer!!! Sad news :( And there is no chance of finding something like this in the settings ("Reconstruction" or other places), right? PS: the old version did this by default, and users then had to close/fill the holes themselves. I think a feature like open/close holes could add a bit more flexibility to this great software. Again, thank you for your answers and communication!!!
@@toyhunter9746 What exactly do you mean? There weren't any changes regarding this tool, so it should work in the same way as before. What would "open holes" be for? To achieve that you can just filter a selected part of the model.
Or am I missing something?
@@CapturingReality I need to reconstruct a 3D model from photos for HD remodelling (an HQ production pipeline)... a huge number of polys (100,000,000), a building with a lot of mirror surfaces, for example. RC creates the model and automatically closes all holes (exactly where those mirror surfaces are), and I have to clean up all of this "closed holes" geometry, because I need to remodel it into a good-quality model (in 3ds Max), and I do this cleanup every time I get a task like this... spending a lot of work hours... If I could tell RC to reconstruct and not close holes, it would be a huge joy/holiday for me and would save a lot of time (the geometry in the holes interferes with modeling in viewports). Sorry about my English (
@@toyhunter9746 Thank you for the explanation.
I suppose those surfaces will be created from bigger triangles. To remove them you can play with the Advanced selection tool, where you can set a threshold for the selection and then filter those triangles.
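If you export the mesh, the same clean-up can also be scripted outside RC. A minimal sketch, assuming the `trimesh` Python package and an illustrative file name (`building.obj` is hypothetical): hole-filling triangles tend to be much larger than reconstruction triangles, so an area threshold catches them.

```python
import numpy as np
import trimesh

# force="mesh" so a multi-part OBJ still loads as a single Trimesh.
mesh = trimesh.load("building.obj", force="mesh")

# Area of every triangle; real reconstruction triangles are small, the
# automatic hole-filling ones are usually much larger.
areas = mesh.area_faces
threshold = np.percentile(areas, 99)  # tune this for your dataset

mesh.update_faces(areas < threshold)  # keep only the small triangles
mesh.remove_unreferenced_vertices()
mesh.export("building_open_holes.obj")
```

This is the scripted equivalent of the Advanced selection workflow above: pick a threshold, select everything above it, filter it out.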
Hello... Can I use RealityCapture if I have an AMD video card?
Hi, you need a GPU with CUDA to run RealityCapture.
@@CapturingReality Will this change soon, or will it never be compatible with AMD?
@@JESSJAMES1886 I can't give you more information about that.
@@CapturingReality OK. Thanks anyway
Can anyone explain side 1 and side 2? Did he take the pictures in a different location, or did he flip the radio to get the bottom view? Or to get more detail?
Hi, the radio was just flipped.
I created a large model in reality capture. How do I determine the scale or resolution of my model?
My model has an actual texel size of (0.004204 units/texel). Point count 4,123,726 with a triangle count of 998 thousand.
You need to scale your model using GCPs or some known distance: ua-cam.com/video/qb4EPyLBRHM/v-deo.html
@@CapturingReality my model was scaled using a known 10 cm distance. I guess I want to know the resolution of the model that was created.
@@ignacio_the_coralbiologist It seems it wasn't scaled properly, as you are getting units/texel and not meters/texel.
I suppose it is not possible to state the exact resolution of the model. If you know its size, you can compute it, but this won't be an exact number.
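For anyone wanting the arithmetic: if the scaling did work and one unit is one meter, the reported texel size converts directly to a surface resolution. A minimal sketch of that conversion (the number is the one from the comment above; the meters-per-unit assumption is the part to verify):

```python
texel_size = 0.004204  # units/texel, as reported by RealityCapture

# If 1 unit == 1 m, each texel covers ~4.2 mm of surface...
print(f"~{texel_size * 1000:.1f} mm per texel")

# ...which is roughly 238 texels per meter of surface.
print(f"~{1 / texel_size:.0f} texels per meter")
```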
Does this work with any phone camera? Or does the phone have to have a sensor or whatever?
Any phone.
It only needs photos.
No special sensors needed, but you get better models with better camera quality.
And it doesn't even require real photos :) You may throw 3D renders at it and it will do the trick :) Photogrammetry works from parallax/color pattern-matching math with no additional information. In Nuke the same technology recreates the real camera movement in a shot scene to mimic a virtual 3D camera movement and match it with your 3D objects composited onto real shots with depth clipping/masking :)
@@SugarTouch OK, cool. I'm just asking because most of the time I see people doing this with DSLR cams & drones. Some do it with a phone but advertise some dumb free photo-scanning app 😂
Hi, can you put the object on a lazy Susan?
Hi, you can. Then it depends on the background whether you will need to use masks or not.
@@CapturingReality OK, so what does the background need to be for a lazy Susan to work?
@@darrinholroyd8203 Basically featureless (like black).
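For a featureless dark background like that, the masks themselves can be generated by simple thresholding rather than painted by hand. A minimal sketch, assuming OpenCV, an illustrative file name, and that your mask-import workflow accepts the naming used here (all three are assumptions, not RC specifics):

```python
import cv2

img = cv2.imread("photo_001.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Anything brighter than the near-black background becomes foreground (white).
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Open small speckles away so the mask edges stay clean.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Save under whatever naming convention your import workflow expects.
cv2.imwrite("photo_001.mask.png", mask)
```

The threshold value (30 here) depends on your exposure; check a few masks by eye before batch-running it.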
I have a problem: when I add JPG photos, the error "incomplete camera sensor info" appears. I ended up having to add PNG photos, and that worked. But it's strange; you don't get any error even though you added JPG photos.
After import, does the camera have the same name in RealityCapture? That could be a reason for the triangle.
You can find some other information here: rchelp.capturingreality.com/en-US/appbasics/cameradb.htm
Please share the images for learning purposes.
Hi, all available datasets can be found here: www.capturingreality.com/sample-datasets
The principles of the processing are then the same.
@@CapturingReality Thank you
I beg of you, hire a UI/UX designer.
Let them hire you then, guy 😅😅😅😅😅.
The military-grade look of high-end CG/VFX software makes CG/VFX artists feel like real tough guys working with really serious tools while making funny animations :)) Part of the "high-end" vibe :) But I agree: this particular app is even worse than usual :)))))
A user interface is like a joke: if you have to explain it, it's bad. Love the power of this program, and I've done some cool stuff with it... but man, this UI is painful.
It used to be good, but now it's complete garbage. It crashes quite often, or the end result is a complete mess for something that 2-3 years ago would have been a good model. Every time I use the wizard it freezes. Absolutely every time. If you put a few more pictures in, it completely breaks down.