KIRI is by far my favorite to use. Nice to see others noticing the same! Fantastic overview!
@@deygus Appreciate the kind words! I'm glad you like the video :D
"Meshroom... I learned the basics, found a cool looking Rock and well you know how it goes from there" so true, lol. 😀
I think that's the collective experience of how we all got into the rabbit hole of 3D scanning 😆
You are a GEEK! Please make more tutorials! Thanks
@@MMMMeeeeMMMM This is ironically the sweetest comment ever! Thank you so much 🥹❤️
Very well-made video, from introducing the app to actually making use of it. Loved it.
@@reyuk6113 Appreciate the kind words! Glad you liked the video :D
Thanks for sharing, and in return: if you want to scan with a static camera, you need a turntable of some sort; simply rotating by hand is too inconsistent.
Yes! That's a great tip. I wonder if I can create a makeshift turn table using household items?
@@RenderRides Possibly; if not, the items needed should be obtainable at a reasonable cost.
There is probably a DIY tutorial on YT
@@RenderRides I took it out of the microwave :)
Thank you for the info.
Good to know it was helpful! :)
Thank you a lot for this. Helped me a bunch!
@@DEUS.studio Glad to hear that! ⭐
How did you get the inside of the conch shell when you were scanning the outside of it? Also, awesome video!
Thank you! If you use the Auto Object Masking feature, you can flip the object midway through the scan to capture the underside. That's how I captured the bottom and inside of the conch, and if you have enough overlap (let's say 60%) between all the images, it can reconstruct the mesh easily.
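If you want a rough way to plan how many photos a full orbit needs at a given overlap, here's a minimal back-of-the-envelope sketch in Python. The field-of-view value and the 60% overlap are illustrative assumptions, not numbers from Kiri Engine's documentation:

```python
import math

def shots_per_orbit(fov_deg: float, overlap: float) -> int:
    """Rough heuristic: each new photo advances by only the
    non-overlapping fraction of the camera's horizontal FOV."""
    step_deg = fov_deg * (1.0 - overlap)  # angular advance per shot
    return math.ceil(360.0 / step_deg)

# Assumed values: a ~65-degree phone camera FOV with 60% overlap
print(shots_per_orbit(65.0, 0.60))  # -> 14 photos for one full ring
```

In practice you'd shoot two or three such rings at different heights, plus the flipped pass for the underside.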
Lots of apps like this. I remember when we used to do set surveys and measurements for VFX. Now it's literally a guy with a cell phone walking around, and then we get everything from it: OBJ models, point clouds, Gaussian splats, etc. No planning. Just 2 minutes walking around and covering every corner. Then AI figures it all out, and you can work on the scene the same day in CG. It used to cost $10k to hire a LIDAR scan operator, who carried a gadget that cost just as much. And these days I can do telemetry on a 10-year-old Samsung Galaxy. No problem, since all the computation is done remotely.
Thanks for sharing your experience. It's crazy how far we've come. What was the transition like, in professional settings? They still hire professionals for AAA titles and movies, right?
@@RenderRides The film industry is years behind when it comes to tech adoption. Also, with old tools, the person had a degree of control. If you implement some workflow into the production of a show you expect to keep running for years, you need a guarantee that you can replicate everything about it down the line, and that the tools you're using won't just vanish overnight. And with AI, every vendor wants to keep the software online-only, with a subscription. What happens if they get sued out of existence the next day? What if their render farm is overloaded and you have a deadline? What if their servers are down for maintenance, or they roll out an update that breaks your part of the pipeline? Concerns like that keep studios running according to older, but more reliable and predictable, methods. For experimental and low-budget film, anything goes, as long as it saves money...
Interesting to watch!
@@yockimontuno5439 happy to hear that :)
Hi, nice video, but I wonder: if I just video record objects and then lower the framerate of the video to get many images (or frames, in this context) instead of taking images one by one, is that a better method or not?
(Lowering the framerate because there may be too many frames in the video.)
It is definitely possible, but it's not recommended since videos are prone to a lot of motion blur, which will result in blurry textures or even failures in computing the mesh. I've tried using high shutter speeds to reduce the motion blur, and even then you need to be very careful and move the camera slowly to reduce artifacts from rolling shutter, etc. Overall, it's best to line up a shot perfectly and take a photo instead, so that all of these variables are taken care of.
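That said, if you do go the video route, a common trick is to extract frames at an interval and throw away the blurriest ones before reconstruction. Here's a minimal sketch using OpenCV; the sampling interval and the Laplacian-variance threshold are assumptions you'd tune per video, and the file names are hypothetical:

```python
import os
import cv2

def extract_sharp_frames(video_path, out_dir, every_n=15, blur_threshold=100.0):
    """Grab every n-th frame and keep only those whose Laplacian variance
    (a simple sharpness measure) clears the threshold."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness >= blur_threshold:  # reject motion-blurred frames
                cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.jpg"), frame)
                saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage: keep sharp frames from a walkaround clip
print(extract_sharp_frames("scan.mp4", "frames"))
```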
Does this app need an online connection to upload for computation? If so, I think there should be a completely offline solution.
@@SledgeHammer-ht2rq There are desktop applications for offline computing like Meshroom and RealityCapture. The whole point of this app is that it offloads the computing to their own hardware so that you can simultaneously run multiple scans.
Postshot (Gaussian splatting) for desktop is fast and free, and it also works with just a video or multiple videos.
@@RenderRides OK, got it. Thx. I know there was one offline mobile app; forgot the name. That's why I asked.
But is it better than Polycam?
You can watch this video to get an idea: ua-cam.com/video/9dyAj9gXIms/v-deo.html&pp=ygUWa2lyaSBlbmdpbmUgdnMgcG9seWNhbQ%3D%3D
P.S. It is made by the creator of Kiri Engine himself, but I'd say it's still a fair comparison of the features. You can look up more videos on UA-cam comparing them if you want to find out :)
@@RenderRides You, sir, are a hero 🫡
At 1:59, when you said "like button", the like button had an amazing glow, and it glows each time I replay that part of the timeline, until I finally hit the like button. How did you do this???
Neat, isn't it? :D
Whenever a video contains a call-to-action (e.g. I asked you to like the video), it highlights it in a cool way. I don't know exactly when UA-cam added this feature but I, too, accidentally found out about it just like you haha!
"stitching together multiple overlapping images of an object " -> procede to insert footage of stefanie joosten in front of cameras
_Shhh dude, tryna get me cancelled or what?_
Like button 🎉
So the export menu is only in the paid plan? Then how can it be a free game changer for 3D artists?
The free plan allows up to three exports per week. They're still a business and they have to incentivize people to pay them somehow. I feel like it's still a decent offer, and the upgrade is also quite affordable for when you need those unlimited exports for an important project.
🎉🎉🎉🎉
❤️⭐
Adjusting the whole scene to your single scanned asset is desperation, not a workaround.
@jendabekCZ I understand your perspective. My goal was to show a practical use case for scanning a hero asset and building a scene around it. It’s an example of how even quick scans with minimal input photos can lead to impressive results. I appreciate your feedback and would love to hear your approach to similar challenges.
God I hate these sponsored 15-minute UA-cam ads!
@@beeceelad My guy, nothing I said in this video would be different even if it wasn't sponsored. Watch the full video; you'll find a lot of value.
And he said it was free. Lol. Misleading video
Subscription? That's an instant NO!
I get that! Luckily, Kiri Engine has a free version too, which works great for many use cases. Definitely worth checking out before deciding.
No
Kiri is really investing in publicity LOL
As they should! It's a great app :)
Boriya😁
@@pritha716 🙏🙏
Kiri isn't free. You have to pay to export models. Very misleading video. Downvoting.
Three free exports per week.
NOT FREE, liar.
@@jeanrenaudviers Kiri Engine is a free app with a lot of powerful features that anyone can use without paying. There are also paid options for those who want to access more advanced tools, like featureless object scanning or 3D Gaussian Splatting, but the core functions are completely free and accessible. My goal in the video was to showcase both the free and paid features to give a full picture. Hope this clears things up!