Matthew Tancik
Learned Initializations for Optimizing Coordinate-Based Neural Representations
Matthew Tancik*, Ben Mildenhall*, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng
Project Page: tancik.com/learnit
arXiv: arxiv.org/abs/2012.02189
Views: 6,292

Videos

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
20K views · 3 years ago
NeurIPS 2020 Spotlight. This is the 3-minute talk video accompanying the paper at the virtual NeurIPS conference. Project Page: bmild.github.io/fourfeat Paper: arxiv.org/abs/2006.10739 Code: github.com/tancik/fourier-feature-networks Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains Matthew Tancik*, Pratul P. Srinivasan*, Ben Mildenhall*, Sara Fridovich-Kei...
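
The core idea of the paper above is a Fourier feature mapping, γ(v) = [cos(2πBv), sin(2πBv)], applied to input coordinates before a standard MLP. A minimal NumPy sketch (the frequency-matrix shape and the Gaussian scale sigma=10 here are illustrative choices, not the paper's tuned values):

```python
import numpy as np

def fourier_features(v, B):
    """Map low-dimensional coordinates v to [cos(2*pi*B v), sin(2*pi*B v)].

    v: (N, d) array of input coordinates
    B: (m, d) random Gaussian frequency matrix
    Returns an (N, 2m) feature array that is fed to a standard MLP.
    """
    proj = 2.0 * np.pi * v @ B.T                     # (N, m) projections
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(0.0, 10.0, size=(256, 2))             # sigma=10 is a tunable scale
coords = rng.uniform(0.0, 1.0, size=(4, 2))          # 4 points in [0, 1]^2
feats = fourier_features(coords, B)
print(feats.shape)                                   # (4, 512)
```

Feeding `feats` instead of raw coordinates to an ordinary MLP is what lets the network fit high-frequency detail in low-dimensional domains.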
NeRF: Neural Radiance Fields
278K views · 4 years ago
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Ben Mildenhall*, Pratul P. Srinivasan*, Matthew Tancik*, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng *denotes equal contribution Project Page: www.matthewtancik.com/nerf Paper: arxiv.org/abs/2003.08934 Code: github.com/bmild/nerf
Flash Photography for Data-Driven Hidden Scene Recovery
724 views · 4 years ago
Flash Photography for Data-Driven Hidden Scene Recovery Matthew Tancik, Guy Satat, Ramesh Raskar
StegaStamp: Invisible Hyperlinks in Physical Photographs
4K views · 5 years ago
StegaStamp: Invisible Hyperlinks in Physical Photographs Learn more at www.tancik.com/stegastamp
Burton Conner i3 2014
3.7K views · 10 years ago
Burton Conner i3 2014
Burton Conner i3 2013
6K views · 11 years ago
Burton Conner i3 2013
A Place in Time
97 views · 11 years ago
Work in Progress
Albuquerque: quondam
820 views · 11 years ago
Albuquerque: quondam
Lamborghini Blender test render
2K views · 13 years ago
Lamborghini Gallardo model made in blender. Work in Progress

COMMENTS

  • @erknee6071 · 1 month ago

    I found a new idea based on this paper for anomaly detection.

    • @huytruonguic · 1 month ago

      Why would the datapoints you consider anomalous actually be anomalies? What if the trained network just isn't good enough to discover a subspace that would make those datapoints normal?

    • @erknee6071 · 1 month ago

      @huytruonguic You can try designing a conditional signal into the MLP, perhaps regularizing it in a variational or sparse-coding manner. You'll see something amazing happen. I built this idea into medical anomaly detection and it works well. There are many properties of Fourier features not explored in this paper.

  • @Nickfies · 3 months ago

    What exactly are theta and phi, respectively? Is theta the rotation around the vertical axis and phi the tilt?

  • @barleyscomputer · 5 months ago

    amazing

  • @JihyeSofiaSeoDr · 5 months ago

    Thanks a lot! I needed to learn this for a job interview ❤

  • @Den-zf4eg · 6 months ago

    What program can be used to do this?

  • @Metalwrath2 · 8 months ago

    This looks amazing, why is it not being used in GANs?

    • @arminh7946 · 6 months ago

      Could you explain the reason, if you have gained any insights by chance?

  • @linshuang · 9 months ago

    Pretty fucking cool

  • @THEMATT222 · 10 months ago

    Noice 👍

  • @daddu1989l · 1 year ago

    Great work. Could you please share how you created such a nice presentation?

  • @piotr780 · 1 year ago

    3:00 How is the animation on the right produced?

  • @romannavratilid · 1 year ago

    Hm... so it's basically something like photogrammetry? This could also help photogrammetry, right? Say I capture only 30 photos, but the resulting mesh and texture might look like it was made from, I don't know, 100+ photos? Do I understand this correctly?

  • @josipkova5402 · 1 year ago

    Hi, this is really interesting. Can you tell me roughly how much one rendering of about 1000 photos costs? Which program is used for that? Thanks :)

  • @antonbernad952 · 1 year ago

    While the hot dogs were spinning at 1:58, I got really hungry and had an unconditional craving for hot dogs. Still nice video, thanks for your upload!!!11OneOneEleven

  • @user-tt2qn1cj1x · 1 year ago

    Thanks for sharing and also mentioning the other contributors to NeRF creation and development.

  • @YTFPV · 1 year ago

    Amazing stuff. I need to wrap my head around how the depth is generated at 3:22 with the Christmas tree. I'm working on a movie where we had to generate depth from the plate, and we used every tool in the book, but it always flickers pretty badly, never this nice. How would I use this, if that's possible?

  • @spider853 · 1 year ago

    How was it trained?

  • @hherpdderp · 1 year ago

    Am I understanding correctly that what you are doing here is rendering the nodes of a neural network in 3D? If so, I wonder if it could have non-CG uses.

  • @directorscut4707 · 1 year ago

    Mind-blowing! Can't wait to have this implemented in Google Maps or VR and explore the world!

  • @xiaoyanqian6898 · 2 years ago

    Hi, thank you for the great work. I just wonder what software you used to make this video, which so vividly shows the iterations, the Fourier features and their std, the frequencies, and the reconstruction.

  • @HonorNecris · 2 years ago

    So with NeRF, how does the novel view actually get synthesized? I think there is a lot of confusion lately with these showcases, as everyone associates them with photogrammetry, where a 3D mesh is created as a result of the photo processing. Is each novel view in NeRF created per-pixel by an algorithm, with the resulting frames of slight changes in perspective animated to show three-dimensionality (the orbital motion you see), or is a mesh created that you move a virtual camera around to create these renders?

    • 2 years ago

      It's the first. No 3D model is created at any point. You do have a function of density with respect to X, Y, Z, though, so even though everything is implicit, you can recreate the 3D model from it. Think of density as "somethingness" from which you could plausibly construct voxels. Getting a mesh is highly non-trivial, though. This is roughly what they are doing when showing a depth map: they probably integrate distance weighted by density along the viewing ray.
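
The density-weighted integration described in this reply can be sketched numerically. A toy example with hypothetical sample values; the weights follow the standard NeRF-style quadrature w_i = T_i * (1 - exp(-sigma_i * delta_i)):

```python
import numpy as np

def expected_depth(t, sigma):
    """Expected ray-termination depth from per-sample densities.

    t:     (N,) sample distances along the ray
    sigma: (N,) volume densities at those samples
    """
    delta = np.diff(t, append=t[-1] + 1e10)       # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    weights = trans * alpha                       # probability mass of terminating at sample i
    return float(np.sum(weights * t))             # density-weighted distance = depth

t = np.linspace(2.0, 6.0, 64)
sigma = np.where(np.abs(t - 4.0) < 0.2, 50.0, 0.0)  # a dense slab around depth 4
print(round(expected_depth(t, sigma), 2))            # → 3.84, near the slab's front face
```

Averaging these same weights against per-sample colors instead of distances gives the rendered pixel color; using distance gives the depth maps shown in the video.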

  • @jwyliecullick8976 · 2 years ago

    Wow. The utility is constrained by the images used to feed the neural network, which may not reflect the varied environmental factors of the modeled scene. If you have images of a flower on a sunny day rendered in a cloudy-day scene, they will look realistic -- for a sunny day. Anything short of raytracing is cartoons on a Cartesian canvas. This is an amazing technique -- a super creative application of neural nets to imagery data.

  • @5MadMovieMakers · 2 years ago

    Looks neat!

  • @huanqingliu9634 · 2 years ago

    A seminal work!

  • @TheMazyProduction · 2 years ago

    This is extremely impressive

  • @ArnoldVeeman · 3 years ago

    That's photogrammetry... 😐 (Edit) Except, it isn't... It's a thing I dreamt of for years

  • @davidmarquez1915 · 3 years ago

    Plugin for SketchUp?

  • @ScienceAppliedForGood · 3 years ago

    This looks very impressive; the progress here seems on the same level as when GANs were introduced.

  • @TiagoTiagoT · 3 years ago

    What's the difference between this and transfer learning?

  • @ak_fx · 3 years ago

    Can we export a 3D model?

  • @ONDANOTA · 3 years ago

    Are radiance fields compatible with 3D editors like Blender?

  • @dewinmoonl · 3 years ago

    cool research

  • @DanFrederiksen · 3 years ago

    Nice. Are the two input angles screen-space x/y coords, and is the x, y, z the camera position during training? How do you extract the depth data from such a simple topology, then?

  • @sheevys · 3 years ago

    How different is this from SIRENs: vsitzmann.github.io/siren/?

    • @sheevys · 3 years ago

      NVM, I got it. This work applies the sine layer once and then uses normal MLP layers with a sigmoid activation function, while SIRENs use sine layers throughout the depth of the whole network.
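
The distinction described in this reply can be sketched in a few lines (toy layer sizes with random, untrained weights; purely illustrative): a Fourier-feature network applies a fixed sinusoidal encoding once at the input and then runs a plain MLP, while a SIREN uses sin as the activation of every hidden layer.

```python
import numpy as np

def fourier_feature_mlp(x, Ws):
    # Sinusoidal encoding applied once at the input, then an ordinary ReLU MLP.
    h = np.concatenate([np.sin(x), np.cos(x)], axis=-1)
    for W in Ws[:-1]:
        h = np.maximum(h @ W, 0.0)   # ReLU hidden layers
    return h @ Ws[-1]

def siren(x, Ws, w0=30.0):
    # Sine is the activation of every hidden layer (w0 scales the frequencies).
    h = x
    for W in Ws[:-1]:
        h = np.sin(w0 * (h @ W))
    return h @ Ws[-1]

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(8, 2))                        # 8 points in 2-D
Ws_ff = [rng.normal(size=(4, 16)), rng.normal(size=(16, 1))]   # input dim doubles after encoding
Ws_si = [rng.normal(size=(2, 16)), rng.normal(size=(16, 1))]
print(fourier_feature_mlp(x, Ws_ff).shape, siren(x, Ws_si).shape)  # (8, 1) (8, 1)
```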

  • @letianyu981 · 3 years ago

    Dear Fellow Scholars! Is this Two Minute Papers with Dr. Károly Zsolnai-Fehér?! What a time to be alive!

  • @russbg1827 · 3 years ago

    Wow! This means you can get parallax in a VR headset with a 360 video from a real environment. I was sad that wouldn't be possible.

  • @nhandexitflame8747 · 3 years ago

    How can I use this? I couldn't find anything so far. Please help!

  • @erichawkinson · 3 years ago

    Can this method be applied to stereoscopic equirectangular images for use in VR headsets?

  • @ArturoJReal · 3 years ago

    Consider my mind blown.

  • @raycaputo9564 · 4 years ago

    Amazing!

  • @adeliasilva409 · 4 years ago

    PS5 graphics

  • @IRONREBELLION · 4 years ago

    Hello, this is NOT Dr. Károly Zsolnai-Fehér.

  • @driscollentertainment9410 · 4 years ago

    I would love to speak with you about this!

  • @Jptoutant · 4 years ago

    Been trying for a month to run the example scenes; has anyone gotten through?

  • @kwea123 · 4 years ago

    I almost reproduced everything they have! Check out my implementation at github.com/kwea123/nerf_pl and ua-cam.com/play/PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV.html

    • @kwea123 · 4 years ago

      According to my research, I want to clarify some things:
      1. The training time can be largely reduced if we optimize the code, to about 5-8 hours per scene on 1 GPU.
      2. We can use this to do photogrammetry with 360-degree captured photos; the result is neater than that of many existing software packages.
      3. They say the inference is very slow, 5-30 sec per image. That is true, because it renders all rays passing through all pixels, and there seems to be no way to accelerate that in software. However, if instead of rendering entire pictures we use a volume rendering technique, real-time 3D rendering is possible! I tested in Unity with a 256^3 texture, and it renders at 100 FPS!

  • @igg5589 · 4 years ago

    I refuse to believe this! :)

  • @unavidamas4864 · 4 years ago

    UPDT

  • @vinesthemonkey · 4 years ago

    It's NeRF or Nothing

  • @thesral96 · 4 years ago

    Is there a way to try this with my own inputs?

    • @Jptoutant · 4 years ago

      archive.org/details/github.com-bmild-nerf_-_2020-04-10_18-49-32