Unleash the power of 360 cameras with AI-assisted 3D scanning. (Luma AI)

  • Published 23 Nov 2024

COMMENTS • 75

  • @johnw65uk
    @johnw65uk 7 months ago +4

    Tip: Merge the vertices on the model and you can sculpt inside a 3D package without the mesh breaking apart.
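
    A minimal Blender Python sketch of that tip (not from the video), assuming the scanned mesh has already been imported and is the active object; the merge threshold value is only illustrative:

        import bpy

        # Assumes the imported scan mesh is the selected, active object in the scene.
        assert bpy.context.active_object is not None and bpy.context.active_object.type == 'MESH'

        # Enter Edit Mode, select everything and merge vertices that share
        # (almost) the same position ("Merge by Distance"), so sculpting no
        # longer tears the mesh apart along duplicated seams.
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.remove_doubles(threshold=0.0001)  # illustrative threshold
        bpy.ops.object.mode_set(mode='OBJECT')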

  • @thatvideoguy4k
    @thatvideoguy4k 10 months ago +2

    I got to one of your videos while looking for turntable alternatives, and here I am watching the third one in a row that has nothing to do with what I was looking for in the beginning. Well done, mate, you make very engaging and informative videos 👍

  • @Decoii
    @Decoii 1 year ago +1

    Thank you for this. Even the harsh models will be great references in terms of scaling.

  • @AClarke2007
    @AClarke2007 1 year ago

    Keeping us all up to date and realising that 360 isn't just a gimmick any more!

  • @infinitymovies4u
    @infinitymovies4u 25 days ago

    And this is the real content ♥️

  • @Sigurgeir
    @Sigurgeir 1 year ago +4

    This is just brilliant, thank you for the great explanation. I wonder if this method would be useful to scan a bigger environment like a whole street from a moving car to use as a backdrop in a studio recording.

  • @ArcticSeaCamel
    @ArcticSeaCamel 1 year ago +1

    Oh wow! Great stuff coming. Once we can turn that into a building's IFC components, we're all set!

  • @camshand
    @camshand 1 year ago +2

    Love the car example for typically "impossible" camera moves through windows. I do wonder if putting the windows up and down as the camera moves through might trick it into keeping the windows up for the NeRF scan, allowing you to move through the passenger windows in the final animation.

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Good idea. You should try that. Although I think the car would then have to be scanned twice: once with the windows open and once with them closed, and then the parts of the model combined somehow, for example in Unreal, since NeRF creates a lumpy mesh if something moves during the scanning.

  • @ney.j_
    @ney.j_ 1 year ago +3

    Excellent video, appreciate the work you put into it!

  • @fallogingl
    @fallogingl 1 year ago +2

    Unironically the lump looks like the orb from Donnie Darko 😂

  • @tribaltheadventurer
    @tribaltheadventurer 1 year ago

    This is fantastic work Olli, keep up the good work

  • @f1pitpass
    @f1pitpass 1 year ago

    Thank you Olli!

  • @JazmineGilliam-s6z
    @JazmineGilliam-s6z 1 year ago +1

    Great work, thank you for the info :). Very interesting!

  • @easyweb3056
    @easyweb3056 7 months ago

    Excellent content, keep going!

  • @luckybarbieri8533
    @luckybarbieri8533 9 months ago

    Great info. Thx. Do you think this setup would be good for creating a 3D model of a large place, like a church for example? Or do you recommend another type of setup? Thank you!

  • @notanotherbrick6114
    @notanotherbrick6114 1 year ago +2

    Fascinating! Can you look at the generated models in a VR headset, such as the Quest 2? In that case, can you walk around inside the model? This would be a perfect application for that!

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +2

      Sure, it can be done in Unreal. There is a video from Bad Decision Studio where the guys test how NeRF models run in VR in Unreal Engine. Check it out: ua-cam.com/video/bKt2oVTw900/v-deo.html

  • @saemranian
    @saemranian 1 year ago +1

    Thanks for sharing

  • @mariorodriguez8627
    @mariorodriguez8627 1 year ago +1

    Great work, thank you for the info :)

  • @madedigital
    @madedigital 1 year ago +1

    very good info

  • @IdahoMthman
    @IdahoMthman 1 year ago

    I will have to try this with my X3

  • @TheBFHmontage
    @TheBFHmontage 1 year ago

    great informative video, just what I needed thanks!

  • @smiledurb
    @smiledurb 1 year ago +2

    very interesting!

  • @dewanthornberry7938
    @dewanthornberry7938 3 months ago

    Please, can this be used to do room interiors? And could these then be used as minute data points for comparison?

  • @lennycecile3775
    @lennycecile3775 1 year ago +1

    Hi Olli, great content. I'm curious whether this will work with the Insta360 Sphere, and what kind of results you would get?

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Sure, it works. I have tried that on the Sphere with my drone. But it is not that convincing when rendered as an equirectangular image out of Luma AI. But when they get this new Gaussian Splatting method to work for 360 images it will be perfect. We just need to wait a little bit because it's a very new technique.

    • @lennycecile3775
      @lennycecile3775 1 year ago

      @@OlliHuttunen78 Thank you. It's mind-boggling technology 🔥

  • @lobodonka
    @lobodonka 1 year ago +1

    Nicely described video! Your interests match mine, so I just subscribed! Bring us some more goodies. 👍

  • @masanoriito
    @masanoriito 1 year ago +2

    Please let me know how I can get high-quality scans like yours.
    You mentioned in the middle of the video that you export the HD video instead of the 360 video and upload it to Luma AI. However, in the subsequent scene where the two containers are painted, you used equirectangular video.
    Which video format would you recommend based on your experience so far?
    Also, did uploading the .insv file directly work for you? I'm using the ONE X2, but it doesn't work because it doesn't have a leveling function.

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +5

      Yes. I recommend that you always edit your material in Insta360 Studio. Right now I get much more accurate and better NeRF models when I edit the video in such a way that the target I shot stays in the middle of the picture during the whole video. Then I render it out as a normal MP4 in HD resolution and upload that to the Luma AI service as a normal video. The second option is to upload the full equirectangular video (also in MP4 format). But I have noticed that a NeRF trained from equirectangular video does not convert to as accurate a model as one where the target is centered. Perhaps I could make another video where I go more deeply into these methods.
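
      For illustration only (not Olli's exact Insta360 Studio workflow): a rough sketch of the same reframing idea using ffmpeg's v360 filter, assuming a stitched equirectangular MP4 and ffmpeg installed on the system; the yaw/pitch/FOV values are placeholders, whereas in Insta360 Studio you would keyframe the view so the target stays centered:

          import subprocess

          # Reproject a stitched equirectangular 360 MP4 into a flat (pinhole) HD view
          # aimed at the subject, then upload the result to Luma AI as a normal video.
          # yaw/pitch pick the viewing direction; h_fov/v_fov control how wide the crop is.
          subprocess.run([
              "ffmpeg", "-i", "scan_equirect.mp4",
              "-vf", "v360=e:flat:yaw=0:pitch=-10:h_fov=100:v_fov=70:w=1920:h=1080",
              "-c:v", "libx264", "-crf", "18",
              "reframed_hd.mp4",
          ], check=True)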

    • @masanoriito
      @masanoriito 1 year ago

      Thank you for your detailed response. Looking forward to another explainer video.
      When scanning a place, do you scan the same place over and over again at different heights? Or is it a one time thing?

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Yes. When I'm scanning I record everything at once into one video file. Usually with a 360 camera you don't need to make so many walk-arounds of your object at different heights, because those wide lenses see most of the surroundings at once. With the selfie stick it is very easy to reach and capture all corners of your object.

    • @masanoriito
      @masanoriito 1 year ago

      Gotcha! Thanks a lot!

  • @Mauriliocaracci
    @Mauriliocaracci 1 year ago

    Great! Thanks

  • @jasoncow2307
    @jasoncow2307 1 year ago +1

    Hi! I'm wondering, is the video you uploaded the original 360 footage, or re-cut footage from one side camera?

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago

      Yes. I tested both. The original full equirectangular footage does not give as good a result as a video cropped from the full 360 video. Luma works better if you can go around your target.

  • @gaussiansplatsss
    @gaussiansplatsss 7 months ago +1

    Which is better for you, Postshot or Luma AI?

    • @OlliHuttunen78
      @OlliHuttunen78  7 months ago +1

      I'd say Postshot, because you can train your model more accurately than in Luma AI and you can live-preview the process.

  • @o0oo888oo0o
    @o0oo888oo0o 1 year ago

    Thank you

  • @sujitchachad
    @sujitchachad 1 year ago +1

    Thanks for the video. I followed your tips, but when I import the model into Blender it just imports a small chunk of the cropped scene. In Luma AI I have adjusted the crop to cover the whole geometry, but when I export to .gltf it exports the cropped geometry. Is this a limitation of the free service? I hope I have explained it properly.

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Yes. I noticed that Luma only exports cropped models right now if you export GLB or OBJ. If you export to Unreal you get both versions: the full model with the background and the cropped one. I guess this needs to be asked directly from LumaLabs, whether they could include the full model for mesh exports as well.

  • @2imtuan
    @2imtuan 8 months ago +1

    What is the accessory that you used with the Insta360 camera? I saw a connector attached to a rig.

    • @OlliHuttunen78
      @OlliHuttunen78  8 months ago

      It is a power selfie stick. There is a battery in the selfie stick which can give extra power to the 360 camera via USB, and you can also press the record button and control the camera from the stick.

    • @2imtuan
      @2imtuan 8 months ago

      @@OlliHuttunen78 Oh right!! Thank you so much, mate.

  • @michael_knight3457
    @michael_knight3457 11 months ago

    Hello! Can the Luma AI phone scanning software scan a given item at a 1:1 ratio? Will it know the dimensions of the scanned item, e.g. height and width? I want to model a separate part based on the scanned item that would match the first one. Is that possible?

  • @rockbench
    @rockbench 11 months ago

    Hi, is the final result downloadable?

  • @pietervandervyver516
    @pietervandervyver516 1 year ago

    If I take 30 seconds with a 360, does it take up a lot of resolution or memory?
    B..
    I just want to video 4 people next to each other, similar to your car lady.
    Thank you

  • @360socialms
    @360socialms 1 year ago

    Thank you very much for the tutorial!! I have uploaded to the Luma web service a 360 video as equirectangular, filmed with the camera always held vertically (the video is not walking around an object, it is a free walk through an outdoor space). Luma processes it and creates the NeRF model, but with significant noise, cuts and cloud-like artifacts. Likewise, when I create a Reshoot in free form and render, the results are still of poor quality. Do you have any suggestions to improve this? Does the source 360 video have to meet any requirements? Thank you so much!!

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Yes. I have also noticed that Luma does not make such great models from 360 equirectangular videos where you just walk in a straight line. It will create something, but Luma is mostly built around circular movement where you move around something. You also should not rely on what you see in the web browser when you rotate the model in 3D mode; it is only an approximate preview. A much better result appears if you render some videos out from the Luma service. That is when the actual NeRF model can be seen, and it often looks much better than the model you see in the web browser. Another tip is to download the model into the Unreal game engine and see how the volume model looks in there. All the other options, where you download the model in GLTF, USD or OBJ format, convert the NeRF volume to polygons and it loses its quality; as polygons the model is not that good. As for the 360 camera settings I do not have any special tip. Just don't try to upload clips that are too long, where you walk, say, a route over 100 meters long in the video. Luma works best when the video is shot over a small area.

    • @360socialms
      @360socialms 1 year ago

      @@OlliHuttunen78 Thank you very much, Olli, for the answer. Yes indeed, it seems that Luma responds very well to scanning objects when moving around them, and not on more linear routes. In my case the video source is very short, only 17 seconds, and taken with a Ricoh Theta V camera. The final video with the route animation in the Reshoot and the 3D model (glTF) generated by Luma are both very bad. I'll keep trying different alternatives to see if I can get better results. Your channel is the only one that deals with this important topic. Thank you very much for your help!!

  • @JAYTHEGREAT355
    @JAYTHEGREAT355 1 year ago +1

    Hello brother, did you shoot a 360 video, or were you shooting consecutive pictures to then upload to Luma AI?

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago

      I shot 360 video.

    • @JAYTHEGREAT355
      @JAYTHEGREAT355 1 year ago

      @@OlliHuttunen78 Thank you brother, I will try to replicate it by following your video. I 3D print, so maybe I can scan some figurines and convert them to 3D-printable STLs. Thank you.

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago

      @@JAYTHEGREAT355 I also recommend checking out the 3Dpresso web service 3dpresso.ai/. It can also make 3D models from video. They turn out to be much more solid and suitable models for 3D printing than Luma AI models. When a NeRF model is turned into a polygon model it can be very broken, and it takes a lot of work to make it a solid STL for 3D printing.

  • @SwissAdventureRider
    @SwissAdventureRider 1 year ago

    Great video, thanks for sharing, and thanks and congratulations to your partner who puts up with your tests 😂

  • @gaussiansplatsss
    @gaussiansplatsss 7 months ago

    What are your PC specs, sir?

  • @TrasThienTien
    @TrasThienTien 7 months ago

    🤗🤗🤗

  • @Niberspace
    @Niberspace 7 months ago

    If this app weren't cloud-based I would have loved to try it, but...

  • @robmulally
    @robmulally 1 year ago

    Thanks for this video. Time to dust off my 3D camera.

  • @resanpho
    @resanpho 1 year ago

    Hi Olli, and thank you for this interesting video. Do I get it right that the objects being recorded should be static, and the whole thing will not work when you have moving objects? For instance, would it be possible to capture a 360 video of a scene in which people dance? I guess not.
    Thanks

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Yes. This scanning method works only with static objects and surroundings. If something moves or passes by (like a bike or a car in the background) while you are scanning, the AI tries to ignore it and remove it from the radiance field. It's kind of the same effect as taking a photo with a very long exposure time. So you cannot make a very good 3D model with this method of a scene where people are dancing.

    • @resanpho
      @resanpho 1 year ago +1

      @@OlliHuttunen78 Thank you for the response.
      I was thinking about the ability to 3D-model important events such as a wedding. If every guest plays along, one could create a memorable 3D model of the event. :)
      Another question:
      Is there a special media player / tool to view the exported 3D model? Can a normal user easily view the model, or do they need to install specific and complex tools?

    • @OlliHuttunen78
      @OlliHuttunen78  1 year ago +1

      Yeah! It could work to model that kind of group picture at a wedding if everybody can remain in place for a couple of minutes while you scan the moment with the 360 camera. You can easily share a link from Luma AI and people can watch the rendered NeRF video and rotate the 3D model in the web browser. It works on mobile and on the computer. You don't have to log in or download any kind of special app or plugin for that. And the model can also be embedded in any webpage. Those are the normal features of this kind of cloud service. Luma AI is a great service.

    • @resanpho
      @resanpho 1 year ago

      @@OlliHuttunen78 Thanks a lot, mate. Need to test it.

  • @anthonycampbell7843
    @anthonycampbell7843 6 months ago

    ua-cam.com/video/PclwALPiqiQ/v-deo.html
    Was your video done before the update to remove the floaters? Or were they still present during your tests at the 6:30 mark?

    • @OlliHuttunen78
      @OlliHuttunen78  6 months ago

      My video was made after that Luma AI floaters announcement. But it should be noted that I presented the model in preview mode on Luma's web pages. It doesn't tell the whole truth. The final result of the NeRF model will only appear when the camera animation is rendered. There are often significantly fewer floaters to be seen. But this is quite secondary now that Gaussian Splatting technology has replaced everything and the older 3D models produced with NeRF technology are not talked about very much anymore. In that sense, many things in this video are already outdated information.

  • @kriptomavi
    @kriptomavi 1 year ago

    Only iPhone?

    • @Hopp5ann
      @Hopp5ann 7 months ago +1

      It has an Android app now.

  • @Mateee.01
    @Mateee.01 7 months ago

    If you use a Pro iPhone that has a LiDAR sensor, the result will be much more detailed than Luma AI...

  • @iarde3422
    @iarde3422 10 months ago

    I hate it when people put their feet in dirty shoes on top of seats where other people are going to sit afterwards, making their pants dirty because of inconsiderate, filthy people who have climbed on the seat with their dirty shoes. If such people don't understand that, they should be punished by having to clean the seat every day for a week.