Hands on With Nvidia Instant NeRFs

  • Published 22 Dec 2024

COMMENTS • 163

  • @alanmelling3153
    @alanmelling3153 2 years ago +11

    I really appreciate you sharing your insights and experience with this tool. Thanks, and I look forward to more.

    • @EveryPoint
      @EveryPoint  2 years ago

      Thanks, Alan!

    • @cousinmerl
      @cousinmerl 2 years ago

      I wonder if this tool could be used for police work. If the police manage to capture multiple surveillance photos, they could composite them with scenes captured afterwards; deep learning could then evaluate how things have changed, highlighting differences and showing clues to detectives.

  • @AnvABmai
    @AnvABmai 2 years ago +45

    Dear author, please upload in 1080p! This is a great and informative video that explained a lot about NeRF settings, but it makes me sad to watch it in 720p.

    • @EveryPoint
      @EveryPoint  2 years ago +8

      Unfortunately the livestream was recorded in 720p. That was our mistake! We will have additional content soon at 1080p resolution.

    • @voidchannel1492
      @voidchannel1492 2 years ago +1

      @@EveryPoint Try running it through an AI super-resolution model and see if that works

  • @ZAZOZH43
    @ZAZOZH43 2 years ago +1

    This video is amazing. I just found out about you last night and already watched all your videos. You're hilarious.

  • @carlosedubarreto
    @carlosedubarreto 2 years ago +2

    This is simply amazing. Thank you A LOT.

  • @ValloGaming
    @ValloGaming 2 years ago +1

    hey, i need help. i was getting
    python: can't open file 'F:\Tutorial\ngp\instant-ngp\scripts\render.py': [Errno 2] No such file or directory
    and i checked under scripts and render.py is not there. is that why?

    • @EveryPoint
      @EveryPoint  2 years ago

      You have 2 options: use bycloudai’s render.py script or use run.py
      ByCloud’s GitHub fork: github.com/bycloudai/instant-ngp-Windows
      Or, you can use run.py which is in our advanced tips video at the 1 hour mark: ua-cam.com/video/_xUlxTeEgoM/v-deo.html
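
      For reference, a rough sketch of rendering from a saved snapshot with scripts/run.py (all paths here are hypothetical, and flag names vary between instant-ngp versions, so verify with python scripts/run.py --help):

        python scripts/run.py --scene data/mydata --load_snapshot data/mydata/base.msgpack --video_camera_path data/mydata/base_cam.json --video_n_seconds 5 --video_fps 30 --video_output data/mydata/out.mp4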

  • @faizanuddin.
    @faizanuddin. 2 years ago +1

    First I created the fox NeRF, but after that, when I used my own images and gave it the COLMAP command, it doesn't give me a transforms.json file. What should I do? It says:
    D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'

    • @EveryPoint
      @EveryPoint  2 years ago

      That is a new one for us. Are you using RAW files or HDR video?

    • @faizanuddin.
      @faizanuddin. 2 years ago

      @@EveryPoint I figured it out. The problem was that I used a video converted to an image sequence, but the video was captured by a phone in 1080p, and because my smartphone was forcing image stabilization, quite a few images had some motion blur. I was also using COLMAP exhaustive matching, which sometimes crashes and isn't good with image sequences. Another creator suggested I use COLMAP sequential matching, which works well, and the final NeRF was really good and clean with very little noise.

  • @stephantual
    @stephantual 10 months ago

    Subscribed - wish you made more videos, they are valuable and educational!

  • @antoniopepe
    @antoniopepe 2 years ago +2

    Great... I have a question: is it possible to export a sort of point cloud? That would be great.

  • @eliorkalfon191
    @eliorkalfon191 2 years ago +10

    Some thoughts about making it a real-time, good-quality system:
    1. For estimating camera positions you could use the LoFTR transformer from the Kornia library (instead of COLMAP) for keypoint detection and matching; I think it's much faster (see the sketch after this list)
    2. For a smooth mesh, maybe neural TSDF can do the trick if you aren't using it yet ;)
    3. It would be great if you added normals estimation for the reconstructed 3D coordinates
    Good job!
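
    As a minimal sketch of the LoFTR idea from point 1 (assuming the torch and kornia packages are installed; the tensors below are random stand-ins for real grayscale photos):

      import torch
      import kornia.feature as KF

      # LoFTR expects grayscale tensors of shape (B, 1, H, W) with values in [0, 1]
      img0 = torch.rand(1, 1, 480, 640)
      img1 = torch.rand(1, 1, 480, 640)

      matcher = KF.LoFTR(pretrained="outdoor")  # pretrained weights for outdoor scenes
      with torch.no_grad():
          out = matcher({"image0": img0, "image1": img1})

      kpts0 = out["keypoints0"]  # matched keypoint coordinates in image 0
      kpts1 = out["keypoints1"]  # corresponding coordinates in image 1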

    • @EveryPoint
      @EveryPoint  2 years ago +2

      Perhaps the NVIDIA AI team is reading these comments!

    • @fraenkfurt
      @fraenkfurt 2 years ago

      @Elior ... With your knowledge on the topic, would it be theoretically possible to render this in real time in VR, or is that out of scope in terms of hardware requirements and/or how the rendering engine works?

    • @eliorkalfon191
      @eliorkalfon191 2 years ago +1

      @@fraenkfurt With today's methods, near real time could be achievable, maybe 0.1 fps (each scene is a "frame" in this context), and faster in an end-to-end product. Hardware limitations are crucial for sure. Recently I read a paper called "TensoRF - Tensorial Radiance Fields"; they said that a mixture of this and NGP could lead to some interesting results. I don't know exactly what you meant by rendering engines, since I have only worked with 3D structures in a non-real-time environment.

    • @EveryPoint
      @EveryPoint  2 years ago

      @@eliorkalfon191 The fact that you would need to render the scene twice with slight offsets and at a high resolution means your hardware would have to be very, very high-end. Cost-prohibitive at this point. The real-time rendering on our RTX 3080 runs at a very low resolution. At 1920x1080 we render 1 frame every 3 seconds.

    • @pabloapiolazza4353
      @pabloapiolazza4353 2 years ago

      Does using more images improve the final quality? Or at some point it doesn't matter anymore?

  • @1985Step
    @1985Step 2 years ago +2

    An extremely well done video, congratulations!
    Could you please share the photos used for the bridge reconstruction? If they are already available, where can I find them?
    Thank you.

  • @aznkidbobby
    @aznkidbobby 2 years ago +4

    Can you export the file and take measurements on the 3D model?

    • @EveryPoint
      @EveryPoint  2 years ago +1

      You can export a mesh, however, it is lower quality than you would produce with traditional photogrammetry.

    • @Xodroc
      @Xodroc 2 years ago

      @@EveryPoint There goes the Unreal Engine Nanite dreams with this tech!

  • @_casg
    @_casg 1 year ago

    So like I can’t really use the mesh obj model?

  • @techieinside1277
    @techieinside1277 2 years ago +5

    Great video mate! I was wondering how you go about exporting the model + texture so as to use it with Blender?

    • @EveryPoint
      @EveryPoint  2 years ago +1

      NVIDIA Instant NeRF does not produce a high-quality textured mesh yet. Its primary use is alternative view synthesis. We suggest keeping an eye on advancements, as the technology is evolving quickly.

    • @techieinside1277
      @techieinside1277 2 years ago +1

      @@EveryPoint I see. Can we export the output we currently get? Some of my scans look great, and I wish I could just export them for use in Blender.

    • @techieinside1277
      @techieinside1277 2 years ago +1

      I have NGP set up and it's working great so far.

    • @jsr7599
      @jsr7599 2 years ago +1

      @@EveryPoint Does it provide something to work off of? Is it possible at all to create a gltf / glb file with this technique?
      I'm new to all of this, by the way. Thanks for sharing.

    • @EveryPoint
      @EveryPoint  2 years ago +1

      @@techieinside1277 As you have probably noticed by now, the mesh output is not optimal. Currently, traditional photogrammetry will produce a better usable textured mesh model.

  • @GhostMcGrady
    @GhostMcGrady 2 years ago +1

    Is there a way to take your first dataset and json and compile it with a second one? I.E. string multiple rooms of a house together from separate datasets?

    • @EveryPoint
      @EveryPoint  2 years ago

      Technically, you could do something like this. The limitation would be the total VRAM this project would take to run.

    • @GhostMcGrady
      @GhostMcGrady 2 years ago +1

      @@EveryPoint Right, after posting the question I came to find how limited in scale you can get. Thanks for the amazing tutorial & response.

    • @EveryPoint
      @EveryPoint  2 years ago

      We expect that the scale issue will improve over time. Also, a service could be built on cloud infrastructure where hardware limitations could be overcome.
      Remember, it's a technology that first came out only 2 years ago!

  • @JamesJohnson-ht4gi
    @JamesJohnson-ht4gi 2 years ago +4

    Seeing this stuff from start to finish caters to my learning style. Soooo flipping helpful! Thanks for the tutorial! Have you seen Nvidia's 'nvdiffrec' yet? Apparently it's like photogrammetry, but it spits out a model AND a complete PBR material set!

    • @EveryPoint
      @EveryPoint  2 years ago

      Yes, it uses neural networks to compute SDF and materials as separate flows into a solid model.

    • @0GRANATE0
      @0GRANATE0 2 years ago

      is it "easy" to install/run it? what are the input data? also a video or just a single image?

  • @sethdonut
    @sethdonut 2 years ago +1

    the automatic bezier curves on the camera paths... THANK you

    • @EveryPoint
      @EveryPoint  2 years ago +1

      One reason we keep using Instant NeRF! The camera path tools are handy!

  • @TheBoringLifeCompany
    @TheBoringLifeCompany 2 years ago +1

    I wonder when some sort of documentation will appear?

    • @EveryPoint
      @EveryPoint  2 years ago

      There is quite a bit of documentation on the GitHub page.

  • @jeffreyeiyike122
    @jeffreyeiyike122 1 year ago

    Please help, I am having issues with the custom dataset. The rendering is poor.

  • @gozearbus1584
    @gozearbus1584 2 years ago +2

    Has there been an update to iNERFs?

    • @EveryPoint
      @EveryPoint  2 years ago +1

      There are updates just about weekly.

  • @HKCmoris
    @HKCmoris 2 years ago

    :/ I'm getting
    'colmap' is not recognized as an internal or external command,
    operable program or batch file.
    FATAL: command failed
    and I can't figure out why. It makes me wanna tear my hair out

    • @EveryPoint
      @EveryPoint  2 years ago

      Our apologies for the late reply. COLMAP needs to be added to your PATH, assuming it has been installed.
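
      For example, on Windows (assuming COLMAP was unzipped to C:\colmap, a hypothetical location), you can add it to the PATH of the current prompt session:

        set PATH=%PATH%;C:\colmap

      Or persist it for future sessions with setx and then open a new prompt:

        setx PATH "%PATH%;C:\colmap"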

  • @astral_md
    @astral_md 6 months ago

    Awesome!
    Is there a way to save RGB, depth, and other textures from a view?

  • @martndemmer6405
    @martndemmer6405 2 years ago +2

    I'm working through all of this myself as a non-coder, and I have the same issues with Python 3.9 and Python 3.10 (which I use for another somewhat important task). Is there any way to solve it without removing it?

    • @EveryPoint
      @EveryPoint  2 years ago

      If you have build issues, we suggest editing the CMakeCache where 3.10 was used and rebuilding the codebase.
      You can also try adding the build folder to your Python path in the environment variables editor. This may solve the issues you are having.
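
      As a quick sanity check that the bindings are reachable, a minimal sketch (the build path is hypothetical; point it at your own clone):

        import sys
        sys.path.append(r"C:\instant-ngp\build")  # folder containing the compiled pyngp module
        import pyngp as ngp  # raises ModuleNotFoundError if the path above is wrong
        print(ngp.__file__)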

    • @martndemmer6405
      @martndemmer6405 2 years ago

      @@EveryPoint I managed in the end after uninstalling everything and installing fresh, and I started to create some NeRFs (instant NGPs), but the results are terrible :( ... I used the same datasets I had used before for photogrammetry. For example, I used 700 pictures of a forest with a bridge; in photogrammetry it all worked, but in NeRF it looks like a mess??? Then I tried other, smaller sets, but with absolutely disappointing results as well. Am I doing something wrong? It looks to me like COLMAP does everything fine, and then when I start instant NGP it is not doing the job properly???

  • @alvaroduran7444
    @alvaroduran7444 2 years ago +3

    Great video, thanks for the information! I was wondering if you have had any experience with reflective surfaces. As you know, they are usually the Achilles heel in photogrammetry.

    • @EveryPoint
      @EveryPoint  2 years ago +4

      They are also an Achilles heel for NeRFs. It creates a parallel world inside of the mirror.

    • @BenEncounters
      @BenEncounters 2 years ago

      @@EveryPoint That is actually interesting to know

  • @wafaWaff
    @wafaWaff 2 years ago +1

    Can you please help me decide between using NVIDIA Instant NeRF and Meshroom from AliceVision?

    • @EveryPoint
      @EveryPoint  2 years ago

      It depends on what you need as a result. If you need meshes and good surface data, then Meshroom is ideal. Instant NGP produces images.

  • @mmmuck
    @mmmuck 2 years ago +1

    Wonder if you can convert this to usable poly mesh

    • @EveryPoint
      @EveryPoint  2 years ago +1

      Look at nvdiffrec if you want to do that.

  • @rubenbernardino6658
    @rubenbernardino6658 1 year ago

    Thank you Jonathan for a phenomenal and very effective tutorial. It could still be improved if it was made available in HD or higher resolution. Some of the fonts on the video content appear too small when I watch the video out of full screen.

  • @jeffreyeiyike122
    @jeffreyeiyike122 1 year ago

    Good day, I am having issues putting the object inside the unit box. What parameters am I supposed to change?

  • @xrsgeomatics
    @xrsgeomatics 2 years ago

    Could you help me to fix this? thank you
    ERROR: Not enough GPU memory to match 12924 features. Reduce the maximum number of matches.
    ERROR: SiftGPU not fully supported

    • @EveryPoint
      @EveryPoint  2 years ago

      This is an issue with COLMAP. Did you install and/or compile the version with GPU support?

  • @sheidekamp2485
    @sheidekamp2485 2 years ago +1

    Hi! thank you for the great video. Is there a way to render a cropped scene? Because the entire background jumps back when I render or reopen the scene. I want to render without too many clouds

    • @EveryPoint
      @EveryPoint  2 years ago

      You have two options: edit the aabb scale in the transforms file. Or, you can hack the run.py script to render video cropped in the GUI. Perhaps this will be a future video.
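
      A minimal sketch of the transforms edit (file path hypothetical; instant-ngp expects aabb_scale to be a power of 2, up to 128):

        import json

        path = "data/mydata/transforms.json"
        with open(path) as f:
            transforms = json.load(f)
        transforms["aabb_scale"] = 4  # smaller values shrink the training volume around the subject
        with open(path, "w") as f:
            json.dump(transforms, f, indent=2)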

    • @scy-heidekamp
      @scy-heidekamp 2 years ago

      @@EveryPoint That would be cool, because I changed the scale in transform.json, but the crop resets to 16 when opening the scene or rendering.

  • @TomaszSzulinski
    @TomaszSzulinski 2 years ago

    I have a problem: "'colmap' is not recognized as an internal or external command."
    Does somebody know what is going on?

    • @EveryPoint
      @EveryPoint  2 years ago

      You may need to install it and add it to your PATH.

  • @rikopara
    @rikopara 2 years ago +1

    This stream was really helpful, but for some reason my render.py script doesn't exist. Also, I've downloaded ffmpeg but can't find its destination to add to the path.

    • @rikopara
      @rikopara 2 years ago +1

      Oh, looks like i've solved it. Render.py was only in bycloudai's fork.

    • @EveryPoint
      @EveryPoint  2 years ago

      @@rikopara Yes! You can create your own render script too. However, bycloudai's version works great. As for ffmpeg, most likely it is here: C:\ffmpeg\bin

    • @svenbenard5000
      @svenbenard5000 2 years ago

      Hi! Did you find out how to add the script? I tried copying the one from bycloudai's fork, but it still does not seem to work. I get the error "ModuleNotFoundError: No module named 'pyngp'". I tried installing his version, but only the newly updated version works on my PC.

    • @rikopara
      @rikopara 2 years ago

      @@svenbenard5000 Did you copy the whole fork or just the render.py file? Using the newest build with bycloudai's render.py file works for me.

    • @rikopara
      @rikopara 2 years ago

      @@svenbenard5000 Also check for "pyngp" files in the /instant-ngp/build dir. If there aren't any, you probably skipped some installation steps.

  • @emotivedonkey
    @emotivedonkey 2 years ago +1

    Thanks for the breakdown, Jonathan! But how does one go about starting the GUI without initiating training for a new data set? I just want to be able to Load the .msgpack from a previously trained project.

    • @EveryPoint
      @EveryPoint  2 years ago

      Use ./build/testbed --no-gui or python scripts/run.py
      You can load the saved snapshot with Python bindings load_snapshot / save_snapshot (see scripts/run.py for example usage)
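
      As a minimal sketch of loading a saved snapshot through the Python bindings (paths hypothetical; this mirrors what scripts/run.py does, but the bindings change between versions, so treat your build's run.py as the reference):

        import pyngp as ngp  # the instant-ngp build folder must be on your Python path

        testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
        testbed.load_snapshot("data/mydata/base.msgpack")  # resume the trained model
        testbed.shall_train = False  # view and render without continuing training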

    • @jeffreyeiyike122
      @jeffreyeiyike122 1 year ago

      @@EveryPoint Please, I am having issues using custom datasets. The rendering is always poor with a custom dataset, but okay when I use a synthetic dataset from the vanilla NeRF.

  • @ricksarq22
    @ricksarq22 2 years ago +1

    It worked.....Thank you soo much

  • @Shaban_Interactive
    @Shaban_Interactive 2 years ago

    I am getting a memory clear error. I have an RTX 3080 and used 170 photos (Nikon). I will try with lower-resolution images tonight; I hope it works.

    • @EveryPoint
      @EveryPoint  2 years ago

      Most likely you used too much high-resolution imagery. NeRF is quite VRAM-heavy. Try reducing the pixel count by half.
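
      A minimal sketch of downscaling an image folder before training (folder path hypothetical; this overwrites the originals, so work on a copy, and note that halving each dimension cuts the pixel count to a quarter):

        from pathlib import Path
        from PIL import Image

        for p in Path("data/mydata/images").glob("*.jpg"):
            im = Image.open(p)
            im.resize((im.width // 2, im.height // 2), Image.LANCZOS).save(p)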

    • @Shaban_Interactive
      @Shaban_Interactive 2 years ago +1

      @@EveryPoint Thanks for the advice. I dropped the picture count to 80 and it worked like a charm. Thank you again 🙏

    • @EveryPoint
      @EveryPoint  1 year ago

      Good to hear!

  • @tuwshinjargalamgalan1966
    @tuwshinjargalamgalan1966 2 years ago +1

    How do you save the 3D model or point clouds?

    • @EveryPoint
      @EveryPoint  2 years ago

      You can save a mesh using marching cubes. However, the quality of the mesh is lower than traditional photogrammetry.
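
      For reference, recent instant-ngp builds expose this through run.py (flag names may differ in your build, so verify with python scripts/run.py --help; paths hypothetical):

        python scripts/run.py --scene data/mydata --load_snapshot data/mydata/base.msgpack --save_mesh data/mydata/mesh.obj --marching_cubes_res 256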

  • @PatrickCustoBlanch
    @PatrickCustoBlanch 2 years ago +2

    Do you know if it's possible to run a multi-GPU setup?
    Great video btw!

    • @EveryPoint
      @EveryPoint  2 years ago

      Currently, no, it does not.

  • @eatonasher3398
    @eatonasher3398 1 year ago

    Curious: could you mix in a couple higher definition images to increase the quality? If so, would you have to place different weights to get that better result?

  • @vladimiralifanov
    @vladimiralifanov 2 years ago +1

    Thanks for the video. Is there any way to rotate the scene? When I try to do it with my mouse, it just spins in the wrong direction. I tried to align the center but couldn't make it work.

    • @EveryPoint
      @EveryPoint  2 years ago +1

      No, you would have to modify your transforms. If the whole scene is sideways, sometimes deleting the image metadata and rerunning COLMAP will fix the issue.

    • @vladimiralifanov
      @vladimiralifanov 2 years ago

      @@EveryPoint thanks 🙏

    • @tasteyfoood
      @tasteyfoood 2 years ago

      @@EveryPoint what's the functional reasoning behind the lack of a rotate? It's a 3D object right? I feel like I'm missing something..

    • @EveryPoint
      @EveryPoint  2 years ago +1

      @@tasteyfoood Rotating the scene in the GUI? Also, what you are seeing in a NeRF is not a discrete object; it's a radiance field where every coordinate in the field has an "object," but it may be transparent.

    • @tasteyfoood
      @tasteyfoood 2 years ago

      @@EveryPoint thanks it’s helpful to realize it’s not producing an “object”. I think my issue may have stemmed from trying to rotate a sliced section of the radiance field and being confused that it wasn’t rotating with the sliced section as the center point

  • @toncortiella1670
    @toncortiella1670 2 years ago +1

    Sorry if you said it in the video, but can you download that 3D model? Like OBJ or MTL?

    • @EveryPoint
      @EveryPoint  2 years ago

      A very poor quality one. This is not the NeRF you’re looking for.

    • @fatima.zboujrad7049
      @fatima.zboujrad7049 5 months ago

      @@EveryPoint Please, is there another NeRF implementation that produces good-quality 3D in real time (or close to it)?

  • @SeriouslyBadFight
    @SeriouslyBadFight 5 months ago +1

    They need to implement the ability to render your Instant NeRF in 3D rendering software. Something that's not so GPU-intensive. Something that could be ported to a mobile device.

    • @thenerfguru
      @thenerfguru 5 місяців тому

      I suggest you look into Gaussian Splatting.

  • @beytullahyayla7401
    @beytullahyayla7401 1 year ago

    Hi, is there any chance to export the data we obtained in .obj format?

  • @barisatiker
    @barisatiker 2 years ago +4

    I wish NeRF were built in by default in After Effects, Houdini, Unity, and Unreal... definitely a revolution for XR!

    • @EveryPoint
      @EveryPoint  2 years ago +1

      We imagine it becoming part of the NVIDIA Omniverse

  • @mandelacakson8034
    @mandelacakson8034 2 years ago +1

    I use an NVIDIA RTX 2060 Super, 32 GB of RAM, and an AMD Ryzen 7 3800X 8-core processor. Will it be able to handle it?

    • @EveryPoint
      @EveryPoint  2 years ago +1

      Yes! Your limit will be the VRAM on the 2060. Keep your input image resolution to 1920x1080

  • @Nibot2023
    @Nibot2023 1 year ago

    Are these instructions still relevant? Just curious if you still need all this. I downloaded the instant NGP.

  • @MrCmmg
    @MrCmmg 2 years ago

    one question, is the 1080ti gpu still compatible with the nerf ai technology? or do i need to have a RTX series gpu?

    • @EveryPoint
      @EveryPoint  2 years ago

      A 1080 Ti works; however, training and rendering times will be lengthy. NVIDIA suggests an RTX 20-series or greater.

  • @LifeLightLabs
    @LifeLightLabs 2 years ago +1

    Is this possible on a Mac M1?

    • @EveryPoint
      @EveryPoint  2 years ago +1

      No, this is only supported on Windows and Linux machines with an NVIDIA GPU.

    • @hanikhatib4091
      @hanikhatib4091 2 years ago

      ​@@EveryPoint what about on Colab? p.s. I am unable to run NERF on my Mac M1. I have around 125 pictures of a nice art piece (4k resolution, 360 degree shots, around 400 MB of total data). I would love to complete this project but I am afraid compatibility might be the bottleneck.

  • @jweks8439
    @jweks8439 1 year ago

    Hi there, I was trying to implement a project using this and was wondering if there is a way to crop (min x,y,z and max x,y,z) without using the GUI (preferably from the command line).
    I am using an RTX 3050 Ti.
    It would be a great help if you could guide me on how to do it or where to look, since as far as I can tell you're the only one who actually helps me understand what's going on.
    Thanks a lot.

    • @jeffreyeiyike122
      @jeffreyeiyike122 1 year ago

      Hi, how are you doing? I am having problems rendering custom datasets. The result is always poor. Is there a way to get the image in the box and get a good rendering?

    • @jweks8439
      @jweks8439 1 year ago

      @@jeffreyeiyike122 try adjusting the aabb, the optimal value differs from scene to scene

    • @jeffreyeiyike2358
      @jeffreyeiyike2358 1 year ago

      @@jweks8439 I have tried adjusting the aabb between 1 and 128, but the rendering and PSNR aren't improving.

    • @jweks8439
      @jweks8439 1 year ago

      @@jeffreyeiyike2358 If you're getting bad rendering only with your custom data, the problem might be the custom data itself. First, try rendering the sample data provided with instant-ngp in the data folder, such as the fox and armadillo. If those render fine, consider reading their transforms files to try to replicate the parameters preferred for such a scene. Also check your input images, whether they are frames of a video or plain images, and remove any blurry or shaky ones to improve the quality of the render. It is worth noting as well that if you are using images rather than a video with COLMAP, the images might be shot with an insufficient number of overlapping views, which can lead to a loss in detail. From my testing, you should also avoid direct light, as reflections tend to show on the rendered mesh; diffused light works best for retaining detail and accurate color and texture of the scene.
      Hope I was of some help 😊

    • @jeffreyeiyike2358
      @jeffreyeiyike2358 1 year ago

      @@jweks8439 I would be happy to set up a Zoom meeting with you. The fox and armadillo work fine; I noticed the bounding box is not on the object. I used videos rather than images, because if there is not good overlap COLMAP fails and does not produce the images and the transforms.json, so I always use videos.

  • @anthonysamaniego4388
    @anthonysamaniego4388 2 years ago +1

    Thanks for the straightforward directions. I got the app installed and it worked well, but now it says "This app can't run on your PC." Any ideas? Thanks

    • @EveryPoint
      @EveryPoint  2 years ago

      How are you launching the app? You should be launching it via anaconda. Perhaps try running in admin mode.

    • @anthonysamaniego4388
      @anthonysamaniego4388 2 years ago

      @@EveryPoint Tried Anaconda and Visual Studio. Also tried running as admin, and I get the same error. I read it could be related to Windows security/antivirus protection, but no luck when I disable those.

    • @anthonysamaniego4388
      @anthonysamaniego4388 2 years ago +1

      Got it to work after a reinstall. Now I'm running into an issue when running the render.py script. I'm getting "RuntimeError: Network config "data\into_building\base.msgpack" does not exist." Any ideas?

    • @EveryPoint
      @EveryPoint  2 years ago +1

      @@anthonysamaniego4388 Did you save a snapshot after training? This is necessary to do prior to rendering. Saving the snapshot will generate that missing file.
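
      As a sketch, training headlessly and saving the snapshot in one go (paths hypothetical; verify the flags with python scripts/run.py --help for your build):

        python scripts/run.py --scene data/into_building --n_steps 35000 --save_snapshot data/into_building/base.msgpack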

    • @anthonysamaniego4388
      @anthonysamaniego4388 2 years ago +1

      @@EveryPoint That was it! Thank you!!!

  • @ranam
    @ranam 2 years ago +2

    can i do the same on colab

    • @EveryPoint
      @EveryPoint  2 years ago +1

      We are not sure if there is a Colab version of Instant NGP yet. There is a Colab version of Nerfstudio, though, that can run instant-ngp.

    • @ranam
      @ranam 2 years ago

      @@EveryPoint thank you sir I will try it

  • @jeffg4686
    @jeffg4686 2 years ago +1

    Thanks for sharing.

  • @CarsMeetsBikes
    @CarsMeetsBikes 2 years ago

    Can I run this in Google Colab??

  • @NoelwarangalYT
    @NoelwarangalYT 1 year ago

    Do we need to learn coding?

  • @nightmisterio
    @nightmisterio 2 years ago +1

    They should have easy online demos for a lot of these kinds of things.

    • @EveryPoint
      @EveryPoint  2 years ago

      Instant-NGP is not productized yet, which is why there is a lack of an installer and full tutorials.

    • @TheBoringLifeCompany
      @TheBoringLifeCompany 2 years ago

      Even with entry-level skills, it's enough to make your first render in 6-8 hours.
      Setting up all the stuff consumes time but is rather rewarding.

  • @software-sage
    @software-sage 2 years ago

    If someone made an iOS app that allows you to upload a bunch of pictures and send it off to a remote server with a GPU, that would be a very popular app.

    • @wafaWaff
      @wafaWaff 2 years ago

      kiri engine app

  • @user-oj4hr5rh6i
    @user-oj4hr5rh6i 2 years ago +1

    Nice work! Although very expensive

    • @EveryPoint
      @EveryPoint  2 years ago

      Expensive hardware is needed for sure. However, that is true for photogrammetry and 3D modeling as well.

    • @user-oj4hr5rh6i
      @user-oj4hr5rh6i 2 years ago

      @@EveryPoint Thanks for your comments. Looking forward to seeing more amazing stuff from your channel.

  • @ihatelink6658
    @ihatelink6658 2 years ago

    Really works

  • @tszkichin4538
    @tszkichin4538 2 years ago +1

    thx for the software, mate

  • @baconee7047
    @baconee7047 2 years ago +1

    lmao i was thinking the same thing then i saw ur comment

  • @DSJOfficial94
    @DSJOfficial94 2 years ago +1

    damn

  • @maxpower_891
    @maxpower_891 2 years ago +1

    Please just make a normal, human-friendly interface so that anyone can use this program.

    • @EveryPoint
      @EveryPoint  2 years ago

      That would be nice. However, this is still in the research phase. Eventually we expect NVIDIA to productize it. In the meantime, check out Luma Lab's beta.

  • @thelightherder
    @thelightherder 2 years ago +3

    I've had success with this build and have been messing around with using it (here's a test: ua-cam.com/video/JbiCMN2lPAQ/v-deo.html). I had some issues, mostly confounding and inconsistent, but I'll mention them all here in case it helps (I'm pretty new to this stuff, so it might seem obvious to some).
    I'm using Windows 10, NVIDIA GeForce RTX 2070.
    I followed bycloudai's GitHub fork (github.com/bycloudai/instant-ngp-Windows) and video (ua-cam.com/video/kq9xlvz73Rg/v-deo.html). The build went smoothly the first time, but I did have some trouble finding the exact versions of some things.
    I used Visual Studio 16.11.22 (not 16.11.9) and CUDA 11.7 (not 11.6). I used OpenEXR-1.3.2-cp37-cp37m-win_amd64 (not OpenEXR-1.3.2-cp39-cp39-win_amd64 - this one gave me "Does not work on this platform." I chose different versions until one worked).
    I'm using Python 3.9.12 (this is what is returned when python --version is used on Anaconda Prompt, but, on Command Prompt, it says 3.9.6 (at one point it said 3.7 - confounding)).
    Everything went smoothly, and I first tried my own image set of photos shot outwards around a room. When testbed.exe was launched, everything was extremely pixelated. This resolution can be changed by unchecking the Dynamic resolution box and sliding the Fixed resolution slider (the higher the number, the lower the resolution. Things might go really slow, and it will be hard to check the Dynamic resolution box again. It's easier to slide the slider to a higher number, then check that box).
    My image set, though, did not produce anything recognizable as a room. Apparently this works better when looking inward at a subject. I had success with the mounted fox head example.
    Using the command "python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/" creates the transforms.json file. There's some inconsistency from what bycloudai says about the aabb_scale number. He states that a lower number, like 1, would be for people with a better GPU, and 16 with a moderate GPU. But, the NVIDIA folks say "For natural scenes where there is a background visible outside the unit cube, it is necessary to set the parameter aabb_scale in the transforms.json file to a power of 2 integer up to 128, at the outermost scope." For my above youtube example, I used 128 - this looked much better than using 2. This number, though, needs to be changed in the transforms.json text file, because only a number from 1-16 is accepted in the above command.
    The Camera path tab window is hidden behind the main tab window. Reposition your 3D scene using the mouse and scroll button on mouse, then hit "Add from cam" to create a camera keyframe (after creating a snapshot in the main tab). To play the keyframes, slide the auto play speed to choose the speed of playback, and click the above camera path time slider (so intuitive!). You'll see the playback in the little window. If you click READ, it will playback in the big window, but it seems to mess up the axis of rotation or something (not sure what this READ is, but I don't suggest clicking it!).
    All was going well, but when I hit esc and tried to render out the video, I had a few problems. First, I hadn't copied the render.py script from bycloudai into my script folder. Once that was copied, I got an error about the pyngp module not being present (this seems to be a common problem). But, that folder was there. I removed the .dir from that folder, and I didn't get that pyngp error anymore. I got another error (this is where things are inconsistent and confounding again). Completely by mistake I realized I could run the render command in the Developer Command Prompt, but not the Anaconda Command Prompt. Worked perfectly. But...at one point while I had another image set training, everything froze, and I had to do a hard reboot. When I tried to run testbed.exe again, I got a "This PC cannot run this program" Windows popup. After trying several things to get this to run again, I realized the file was 0KB. No other exe files had this problem, and I ran a virus check and everything was clean.
    I started a new folder, and re-did bycloudai's compile steps. After that, everything worked perfectly, including the rendering out of the video file in the Anaconda Prompt, and keeping the .dir on the pyngp folder (go figure). Hope that helps some folks.
    Oh, and check out some other AI stuff I've messed with here: ua-cam.com/video/MoOtNMgFOxk/v-deo.html

    • @thenerfguru
      @thenerfguru 2 years ago

      100% of this makes sense. I believe a lot of the issues you ran into were because instant-NGP has been updated a lot since bycloudai’s fork and this video. Also, you were most likely not always working in the conda environment. I have quite a few updates going live on this channel tomorrow.

    • @thelightherder
      @thelightherder 2 years ago +1

      @@thenerfguru Cool. Are there tricks to getting a cleaner 3D scene? I’d love to use this to do moves around vehicles like in my test, but the image is a bit fuzzy still. In examples I’ve seen in other videos things are much crisper.

    • @EveryPoint
      @EveryPoint  2 years ago +1

      Start with sharp photos and the deepest depth of field possible. Also, keep the scene as evenly lit as possible. Take loops far enough away from the vehicle that you see it all in one shot. Remember that the view in the GUI does not look nearly as sharp.

    • @thelightherder
      @thelightherder 2 years ago

      Another confounding issue - after closing Anaconda Prompt and reopening, when using the command "python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/" I'm now getting, out of nowhere:
      File "scripts/colmap2nerf.py", line 19, in <module>
      import cv2
      ModuleNotFoundError: No module named 'cv2'
      And weirdly, the command only works in Developer Command Prompt.
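
      The cv2 module comes from the opencv-python package, so installing it inside the same conda environment you launch the prompt with usually resolves this (environment name hypothetical):

        conda activate ngp
        pip install opencv-python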

    • @thelightherder
      @thelightherder 2 роки тому

      @@EveryPoint Do you know what the various training options are, and how they effect the final outcome? For instance, what is "Random Levels"? I notice when clicked, the loss graph changes drastically (the line gets much higher when clicked). Also, do you know how to read this loss graph? I know there's a point of diminishing returns - is this what this graph indicates, and is it when the line is high, or low (much of the time I'm seeing the line spiking up and down, completely filling the vertical space). Is there a number of Steps that, on average, should be achieved? I've let it run all night and gotten around a million steps, I'm not sure if the result was any better than a much lower number (and, I have a 2070 - I'm not sure if the 3090 gets to this number in a ridiculously shorter time period).