Light Fields - Videos From The Future! 📸

  • Published 23 Oct 2024

COMMENTS • 532

  • @brickdesign6438 3 years ago +226

    This is absolutely incredible. Imagine a movie where you can actually look around the scene! What a time to be alive!

    • @marcozolo3536 3 years ago +31

      Upskirts finally take on a whole new definition

    • @Manks08 3 years ago +20

      It would make for an incredibly immersive crime movie where the viewer could look around a scene for evidence at the same time as the actors.
      Eg looking behind a picture frame or ornament.

    • @billbauer9795 3 years ago +2

      >What a time to be alive!
      You are right. 10-20 years from now most of us won't have indoor plumbing any longer.

    • @sgbench 3 years ago +6

      It has its limits. As soon as you leave the volume that was originally occupied by the camera sphere, there are gaps in the data that become increasingly obvious.

    • @ScorgeRudess 3 years ago +1

      Not just look around, but walk inside it!

  • @ManMadeOfGold 3 years ago +320

    This is one of the few papers that really made me think "this is the future". Feels like wizardry to rotate and pan around a video, rather than a still image. Not a super practical thing, but it's dang cool!

    • @LimitedWard 3 years ago +16

      Two immediate applications I can think of:
      1. making better video instructions for the assembly of complex 3D parts in manufacturing.
      2. VR porn. Because, why not?

    • @kamathln 3 years ago +7

      Not a super practical thing? Ask any movie director.

    • @javierflores09 3 years ago +2

      ​@@kamathln I mean, not for normal videos. Sure, this has its own utility, but you normally want to center on whatever the theme of the video is, not something off-scene.

    • @Kram1032 3 years ago +6

      @@javierflores09 I think regular movie directors (i.e. not ones who want to focus specifically on providing lightfield movies) will actually want this as well:
      It means they can much more seamlessly adjust the camera position in post, allowing for insane levels of fine control. So they'll end up with a static view (or perhaps a binocular view ala current 3D movies) but it's extremely finely adjusted to their every whim.
      It's also gonna improve the workflow of what I *think* is called Z-compositing? Wherein all your data is in RGBD and so you can composite some CG elements into the scene at exactly chosen depths and because your footage has depth info too, clipping and occlusion will automatically work correctly instead of requiring expensive, error-prone semi-manual frame-by-frame masking.
      Essentially this gives even more information, including to stuff that's completely occluded by other stuff. Like, you could digitally add in a reflective surface and it would potentially reveal what's behind a thing *in the digital reflection*
      etc.
      So yeah I'm sure film makers will love this.
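The Z-compositing idea in the comment above can be sketched in a few lines: with per-pixel depth on both the footage and the CG element, occlusion is just a per-pixel depth test, no manual masking needed. A toy illustration with invented data, not any real compositing package's API:

```python
# Minimal sketch of depth (Z) compositing: each "image" is a grid of
# (color, depth) samples; the CG element wins only where it is nearer.
def composite(footage, cg):
    """footage, cg: 2D lists of (color, depth) tuples of equal shape."""
    out = []
    for f_row, c_row in zip(footage, cg):
        row = []
        for (f_col, f_z), (c_col, c_z) in zip(f_row, c_row):
            # Keep whichever sample is closer to the camera.
            row.append((c_col, c_z) if c_z < f_z else (f_col, f_z))
        out.append(row)
    return out

# Footage: a wall at depth 5.0 everywhere. CG element: one "sphere" pixel
# at depth 2.0 in a corner, empty (infinitely far) elsewhere.
footage = [[("wall", 5.0)] * 2 for _ in range(2)]
cg = [[("sphere", 2.0), ("none", float("inf"))],
      [("none", float("inf")), ("none", float("inf"))]]

result = composite(footage, cg)
```

The sphere pixel occludes the wall only where its depth is smaller; everywhere else the footage shows through, which is exactly the clipping/occlusion behavior the comment describes falling out automatically from RGBD data.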

    • @kamathlaxminarayana301 3 years ago

      @@javierflores09 Try centering on hyper-active children/pets :D

  • @MarioManTV 3 years ago +56

    This was exactly the sort of thing I was looking for after the last video with the video selfies. This looks like an incredibly promising way to make VR video more immersive than ever before.

    • @McDonaldsCalifornia 3 years ago +4

      VR porn is gonna be a saviour for early funding of this technology, I'd wager.

  • @skylerlehmkuhl135 3 years ago +45

    I could see this being used for movie production; imagine being able to film once and then tweak the camera movement in post production.

    • @NoogahOogah 3 years ago +12

      It would quintuple the data storage requirements tho

    • @tomnewsom9124 3 years ago +6

      That's what Lytro were aiming for with their Immerge camera. I don't think anybody was interested though, cos they folded and got gobbled up by Google.

    • @Nobody-Nowhere 3 years ago +9

      Imagine shooting with a rig that has like 20 movie cameras attached to it... maybe it's just easier to know what you are doing in the first place.

    • @yaelm631 3 years ago +1

      It is possible: ua-cam.com/video/9qd276AJg-o/v-deo.html
      Look at Intel's volumetric video studio.

    • @DodaGarcia 3 years ago +1

      @@NoogahOogah which almost all technology advancement has done lol

  • @Andreadel96 3 years ago +11

    I just tried their demo in VR and it was very impressive. Things felt so real, like I was there (I know, of course it's VR, but still!). Now I am sad that I can't watch every video like this. Also, the ones at a lake or at the ocean were very relaxing. This definitely has the potential to be used for virtual tourism. AMAZING!

  • @yaelm631 3 years ago +59

    I first saw this tech used in vr with google lightfields (on steam), if this becomes affordable I would be really happy

    • @yaelm631 3 years ago +4

      We can download an 8 GB demo!
      augmentedperception.github.io/deepviewvideo/

    • @CIinbox 3 years ago +1

      That was already a few years ago. Although they were still pictures, it still looked amazing, as if you were there.

  • @dustinwaree 3 years ago +187

    Imagine People walking around with that Monster taking Selfies 🤭

    • @jearlblah5169 3 years ago +10

      *karen has entered the chat*
      What do you mean “it’s not for sale”????
      I DEMAND TO SPEAK TO YOUR MANAGER!!!!

    • @loleq2137 3 years ago +5

      @@jearlblah5169 😐 why

  • @kotokrabs 3 years ago +60

    Wow, add a Tobii tracking device and this will revolutionise the media!

    • @jambalaya201 3 years ago +8

      What is a tobii tracking device?

    • @kanishkachakraborty 3 years ago +4

      @@jambalaya201 Tracks exactly which part of the screen your eyes are looking at

    • @rtyzxc 3 years ago +6

      What about just using a VR headset? This is basically designed for VR.

    • @kanishkachakraborty 3 years ago +3

      @@rtyzxc It will forever be less convenient than looking at a screen. With tobii eye tracking you can reach a middle ground between immersion and convenience.

    • @geekswithfeet9137 3 years ago

      Just use camera eye tracking... It's almost like we know a guy with a bit of expertise in camera-based detection...

  • @TheLastCodebender 3 years ago +34

    "Hold on to your papers" that joke always makes me smile 😂

  • @TheZenytram 3 years ago +60

    With each video, we get closer and closer to magic.

    • @overloader7900 3 years ago +16

      Any technology advanced enough is indistinguishable from magic.
      Any magic arcane enough is indistinguishable from science.

    • @מיכאלניצן-ק4ח 3 years ago +1

      @@overloader7900 I really like how you said that:) I'll be quoting you

    • @martiddy 3 years ago +4

      @@מיכאלניצן-ק4ח It's one of Arthur C. Clarke's three laws.

    • @overloader7900 3 years ago +2

      @@martiddy And a variation of it; thanks for telling me.

  • @Chain83 3 years ago +5

    Download the VR version from the website!
    I realised this was the first time ever that I could experience *true* 3D video (not just stereo). It is something else! I kept wanting to pet the damn dog! :D

  • @finnaginfrost6297 3 years ago +3

    This could work to accelerate VR-as-a-service platforms, where the remote game engine renders an entire light field for the approximate location of the VR headset, which is delivered once every second or two with extremely high compression. The client's PC decompresses the light field and maps the actual head movements and rotations onto the now-old data, simulating a highly responsive remote rendering service of a semi-static environment.

  • @dru4670 3 years ago +3

    Imagine watching your memories played back in that.

  • @koz_iorus1954 3 years ago +14

    1:27 That animation was so cool!

  • @pinustaeda 3 years ago +8

    My jaw dropped when I saw the dog behind the fence! It was almost perfect, such an amazing implementation.

  • @stevenru4516 3 years ago +3

    There's so much development in this field; please cover it more.
    *waiting room for 5 papers down the line*

  • @AlexandreBizri 3 years ago +1

    This domain has always been fascinating to me. I once reimplemented the method from the paper "Soft 3D Reconstruction for View Synthesis" on the GPU, and it taught me so much about light and depth.

  • @0mnishade 3 years ago +1

    I really hope this leads to better predictive image in-painting soon. Having something of this quality on a phone with minimal hardware changes would be absolutely incredible

  • @michahermann7869 3 years ago +1

    Having both played Cyberpunk and just seen these videos on my VR headset, it is almost frightening how close this is to a supposed Braindance experience from the game. The indefinitely repeating glitches at the edge of the view field make it even more realistic. It's awesome how close it is to actually being there in the scene; they just need to up the resolution a notch. Now interpolate space from only two cameras and their movements, shrink them down, and fit them into eye implants. Voilà, Kiroshi eye implants from the game.

  • @fanxia3234 3 years ago +2

    This is such magic… I was thinking about this yesterday and didn't know what keyword to search on Google, and then you created this video today! WoW!!

  • @fr3zer677 3 years ago +1

    The Google light fields demo on my Vive left me wanting more amazing light field content. I'm happy to see the amount of progress being made in this fascinating field.

  • @Fighter_Builder 3 years ago +2

    I really can't wait for this to catch on. True VR videos? YES PLEASE!

  • @quitegonejim1125 3 years ago

    Fecking amazing! Tutorial videos (and all videos/movies) of the future will have so much more!!

  • @nelsondiaz5415 3 years ago +7

    Imagine how walking-simulator games would be now! WHAT A TIME TO BE ALIVE!

  • @EVILBUNNY28 3 years ago

    I watched this entire video with my mouth agape in awe. I cannot wait until 2, maybe 3 papers down the line, where the artefacts are less noticeable and the content may be optimised so that it could work over a limited connection such as mobile data. Hopefully another 5-10 years down the line we will be able to achieve the same camera angles, all available on your smartphone. Who knows! Truly, what a time to be alive.

  • @microwavememes 3 years ago +4

    it feels so weird for humanity to be so perfectly preserved for the first time ever

  • @TessaBury 3 years ago +1

    This feels way more enjoyable to me than 360 VR. I like the idea of a 3D video space where there's parallax if I happen to shift in my seat or tilt my head. And more importantly, the viewer has an actual sense of scale and depth: they can lean in to get a closer look at something in a maker video, for example.

  • @LKDesign 3 years ago

    This needs to get promoted big time. It needs a web platform providing content from spectacular places around the world. It needs a dedicated app for widespread use with mobile devices via browser and cardboard experiences.
    Go to rallies, fairs, TV studios, landmarks and great vistas to create a bulk of content.
    How odd that the relatively young concept of wide-angle stereoscopy already seems to have found its match.
    I'm curious to see what the future will hold for this technology.

  • @DasIllu 3 years ago

    Reminds me a bit of the Lytro Lightfield Camera.
    Tiny box capturing all incoming light and lets you change the focal plane as well as your point of view by a few degrees.
    Imagine what you could do with an array of Lytros and VR+Eyetracking (incl. pupil dilation to calculate the eyes aperture to adjust depth of field) for replay. Total immersion

  • @vincent4652 3 years ago +1

    I can already imagine how much more immersive this will make VR videos. No more of that jarring/nauseating feeling from having the world shift with head movement.

  • @powergannon 3 years ago +1

    This technique with 360 degree viewing looks like it would allow for really immersive VR movies

  • @c0dexus 3 years ago

    For VR, it's not just about rotating your head, it's 6 degrees of freedom. The image is still correct when you move your head and this is why lightfield images feel so real in VR, it's like the realism of a photo/video combined with the freedom of real-time 3d.

  • @lilhuzky28 3 years ago +3

    Your channel is amazing, man. You really make it enjoyable to learn, with interesting and needed concepts!!!

  • @odw32 3 years ago +3

    I do think we'll see bitrate for these formats decrease in the near future. All current codecs including H265 are made to compress a series of nearly similar 2D images through time, not to compress multiple parallel streams. Just like how there's similar information from one frame to the next, there will also be very similar information from one stream to the next in this case.
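The inter-stream redundancy this comment describes is easy to demonstrate: store one view in full and a neighbouring view as a per-pixel residual against it, and even a general-purpose compressor does far better on the residual than on the second full view. A toy sketch using Python's zlib; the `lcg_bytes` helper is invented stand-in "image" data, and this is of course nothing like a real multi-view codec:

```python
import zlib

def lcg_bytes(n, seed=1):
    """Deterministic pseudo-random bytes standing in for image data."""
    out, x = bytearray(), seed
    for _ in range(n):
        x = (6364136223846793005 * x + 1442695040888963407) % 2**64
        out.append((x >> 33) & 0xFF)  # use high bits for better randomness
    return bytes(out)

view_a = lcg_bytes(10_000)            # reference view, stored in full
view_b = bytearray(view_a)
for i in range(100, 200):             # small region differs (simulated parallax)
    view_b[i] = (view_b[i] + 50) % 256

# Residual: byte-wise difference mod 256 -- mostly zeros.
residual = bytes((b - a) % 256 for a, b in zip(view_a, view_b))

full_size = len(zlib.compress(bytes(view_b)))   # neighbouring view stored whole
residual_size = len(zlib.compress(residual))    # stored as a diff instead
```

Real codecs exploit exactly this kind of prediction between frames over time; a light-field-aware codec would add the same trick across neighbouring camera streams.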

    • @TwoMinutePapers 3 years ago +2

      Great piece of feedback. Thank you so much for posting this!

  • @MariusLuding 3 years ago +1

    There is a Demo for this in VR, which makes it even more impressive, as the content can be rendered in stereoscopic mode, because all information necessary for that is included in the dataset.

  • @Knecken 3 years ago

    I can see this being useful for video/movie editors to perfect that camera angle / pan after the fact!

  • @sharkbeats1397 3 years ago +22

    Wow that was amazing.

  • @kendokaaa 3 years ago +5

    For anyone wanting to view Light Field photos (not video) in VR, Google released a free app for SteamVR in 2018 that's pretty awesome:
    store.steampowered.com/app/771310/Welcome_to_Light_Fields/

  • @johnnyswatts 3 years ago +1

    Light fields are amazing - that's the field in which I'm doing my PhD. There is so much promise in this technology, and we've come a long way since the brilliant Lytro camera.

  • @Denyernator 3 years ago

    Virtual theatre plays! Live streamed in VR! Potentially a great way to keep theatres open during a pandemic, if only this were commercially available

  • @spartv1537 3 years ago +2

    watching these videos only for "What a time to be alive!" words

  • @jayxi5021 3 years ago +12

    Can't wait for the Dall-E paper from OpenAi (and its video)

    • @MrJaggy123 3 years ago +3

      While we wait for this channel to cover it, I found Yannic Kilcher's video on it to be pretty good : ua-cam.com/video/j4xgkjWlfL4/v-deo.html

    • @jayxi5021 3 years ago

      @@MrJaggy123 watched that already :3

    • @lucaslucas191202 3 years ago

      Damn you got lucky then

  • @willguggn2 3 years ago +2

    The moment you mentioned reflective surfaces I caught myself pretty much only looking at the mirror in the shop. :D

  •  3 years ago +1

    This is actually the future. Can't wait to watch these kinds of videos on my VR headset.

  • @Poney01234 3 years ago

    I've been having this idea for years:
    Take a huge concert in the daylight.
    The lead musician asks everyone to take a short video at the same time, for example of a jump he does or something cool.
    Then these 30,000 people upload their video on a given platform (RIP 4G antennas).
    => You get a messy, inconsistent, but free and unimaginably huge dataset you can perform 3D reconstruction on!
    After a lot of cleaning and reconstructing, you could walk through the crowd, turn around the stage, etc.
    I know some stadiums are equipped with 4D Replay or other technologies that allow for similar experiences, but only for a very limited portion of the venue.

  • @MrSaemichlaus 3 years ago +4

    Soooo, photogrammetry applied to each frame of a video scene captured from various angles to reconstruct the 3D scene?

  • @MBPaperPlanes 3 years ago +1

    I think lots of people on here are confused about the difference between 360 video, photogrammetry, and light fields. It's not just about stitching multiple videos together and slapping it on a 3D model. A true light field is the 5D value of every point of light within a volume. They are capturing a few 2D slices (where the sensor collects the photons) and the algorithm is filling in all the rest for the (hemi)spherical volume. Then it's compressing that down into something that can be streamed, which is probably a 1,000:1 ratio. It's like compressing an IMAX feature film into the size of a JPG file!
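The "filling in" this comment describes can be caricatured in a few lines: given two captured camera slices, the crudest possible synthesis for a virtual camera between them is linear interpolation of the views. Real light field pipelines use far smarter, geometry-aware view synthesis; this toy (with invented grayscale data) only illustrates sampling between slices:

```python
# Toy sketch: a light field assigns a color to (camera position, ray).
# With two captured camera slices, the simplest "fill in" for a virtual
# camera between them is a linear blend of the corresponding pixels.
def synthesize(view_left, view_right, t):
    """Blend two captured views for a virtual camera at fraction t in [0, 1]."""
    return [
        [(1 - t) * l + t * r for l, r in zip(row_l, row_r)]
        for row_l, row_r in zip(view_left, view_right)
    ]

view_left  = [[0.0, 0.2], [0.4, 0.6]]   # grayscale pixel intensities
view_right = [[1.0, 0.2], [0.4, 0.0]]

mid = synthesize(view_left, view_right, 0.5)  # halfway camera position
```

Pixels that agree between the captured views stay fixed, while pixels that differ (due to parallax) blend; the algorithms in the paper replace this naive blend with learned, depth-aware reconstruction over the whole (hemi)spherical volume.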

  • @Game_Sometimes 3 years ago +10

    360 degree cameras can accomplish some of this, which is ok, but we definitely can improve.

    • @EVRLYNMedia 3 years ago +1

      The thing with 360-degree cameras is that they still only capture from one position: where the camera is. With this technology, you can kind of move around as the camera and experience real depth effects, unlike a 360 video where you basically move around inside a gigantic frame whose depth may not be accurate.

  • @ilazerxxx4894 3 years ago +1

    What's weird is that the camera of the future looks a lot like the human eye. What a time to be alive!

  • @Veptis 3 years ago

    There is a studio in France that uses a light stage and shoots 3D video, so they do a photogrammetry reconstruction for every single frame.

  • @Dismythed 3 years ago

    Here is how to do it with a single market-ready phone camera: create a camera app that, when you click the button to take an image, starts filming a video, letting you make a vertical circular motion with the camera about 2 feet in diameter; then you click the button a second time to take the photo. The camera app then constructs the 3D image using the algorithm and the frames from the video. You can fight artifacts and blurring by letting the program draw the detail for a single 10x10-pixel area from the single frame that most closely matches the surrounding pixels.

  • @play005517 3 years ago

    We may need a custom entropy-encoding format that looks across all viewports for these highly coupled videos. Most of the image across different physical or logical cameras is essentially the same. Only some highlighted areas shown at 1:44 actually differ, so we could skip the rest and keep only a key frame of it.

  • @Theminecraftian772 3 years ago +1

    That's awesome stuff. I look forward to the hardware for it becoming cheap, commonplace, and integrated.
    I'm also looking for some more solutions to data compression. Have the AI scientists designed something for general compression of noisy data yet?

  • @adriaanb7371 3 years ago

    I just realized that it makes a lot of difference in watching VR. It is the small head movements that do not show in the image that cause nausea and break the illusion of reality in VR. So yeah, much more relevant than a silly 3D effect.
    Remember that the head pivots around the neck joint, not the eyes, so there is always a parallax effect that needs to be handled, even when the viewer is sitting still in the VR scene and just looking around.
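The neck-pivot point above is easy to quantify: because the eyes sit in front of the rotation axis, a pure head rotation still translates the eyes, so parallax must be handled even for a seated viewer. A small geometry sketch; the 10 cm eye offset is an illustrative guess, not anthropometric data:

```python
import math

# Top-down 2D view: the head rotates (yaw) about the neck joint at the
# origin, with the eye offset (forward, sideways) from that pivot.
# Rotating the head therefore *translates* the eye, producing parallax.
def eye_position(yaw_deg, eye_offset=(0.10, 0.0)):
    """Eye (x, y) in metres after rotating the head by yaw_deg about the neck."""
    a = math.radians(yaw_deg)
    fx, fy = eye_offset
    return (fx * math.cos(a) - fy * math.sin(a),
            fx * math.sin(a) + fy * math.cos(a))

p0 = eye_position(0)     # looking straight ahead
p90 = eye_position(90)   # head turned 90 degrees
# Distance the eye travelled despite "only rotating" the head:
shift = math.dist(p0, p90)
```

With a 10 cm offset, a 90-degree head turn moves the eye about 14 cm, which is why a video that only supports rotation (plain 360 video) still feels wrong.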

  • @yade5979 3 years ago +1

    Imagine they implement this technology in VR movie experiences where you become a character inside a movie (a horror one, for example), and then you experience the entire spectacle first hand.

  • @user-ej4md7tm3y 3 years ago +1

    This is looking incredibly promising! I still remember seeing a demo at a conference of what LYTRO was doing with lightfields (2016-ish), I was blown away at the time. Too bad they went 📉 but I'm glad to see lightfield technology still going strong

  • @raptokvortex 3 years ago +1

    I can see this becoming a new method of viewing content... like video over books.

  • @AlexSeewald 3 years ago

    Now THIS is something I am genuinely very impressed with - I was already impressed with the Tom Grennan video on the PSVR, but that was a lot of work. This does it completely automatically. I'm definitely going to keep an eye on those papers.

  • @andarted 3 years ago +1

    Combining image inpainting with the data of two or three extra cams seems like a natural next step, imho.

  • @MrGreglego 3 years ago

    I love your quiet almost-ASMR voice! It's great!

  • @markmywords5342 3 years ago +80

    AI is going to teach us something about light we didn't already know existed in real life

    • @Baleur 3 years ago +29

      100%, it's only a matter of time before AI "scientists" become a reality.

    • @moahammad1mohammad 3 years ago +1

      @@Baleur
      And then AI takes over and kills/enslaves us all

    • @sgbench 3 years ago +1

      @@moahammad1mohammad What makes you think an AI would have any reason or motivation to kill or enslave us?

    • @markmywords5342 3 years ago +3

      @@sgbench *Programs robot to walk*
      *Person stands in front of robot*
      *Robot doesn't see person*
      *Person codes robot to see person as obstacle*
      *Robot "overcomes obstacle"*
      lolol

    • @sgbench 3 years ago +2

      @@markmywords5342 In your example, a person programmed the robot to be evil. The robot didn't become evil on its own.

  • @frankiesomeone 3 years ago +4

    Check out "Welcome to Light Fields" on Steam for VR

  • @rolfathan 3 years ago

    Absolutely stunning work.

  • @tetlamed 3 years ago +2

    Still love hearing the word "doctor" at the beginning of his new videos!

  • @Demnus 3 years ago +1

    That's old stuff whose roots go back to the Lytro cameras of 2006. But still, it hasn't stopped impressing and being "the future" for 15 years straight in a row.

    • @IceMetalPunk 3 years ago

      No, it's not. They're using the term "light field" to mean something very different than the light-field cameras like Lytro's.

    • @Demnus 3 years ago +2

      @@IceMetalPunk Actually, it is. The method of taking many images from many different angles at the same time was described in 1908, was called a light field, and it's exactly what was done in this case. Lytro's innovation was an optical raster film of micro lenses that writes all viewing angles into one single frame at one single exposure moment, limiting the resolution and angle differences and, as a result, the ability to move the camera. Still, in general it's the same technology.

    • @LKRaider 3 years ago

      Means you can probably get results like these in a camera embedded on your phone

    • @Demnus 3 years ago +2

      ​@@LKRaider Technically, yes. But the quality of execution might vary, and it will take a lot of fiddling. The bigger the "virtual aperture" (the distance between the farthest points from which you got pictures), the more you'll be able to move the camera. And technically, if you get pictures from every possible angle, you'll be able to move the camera freely inside the scene. If your "aperture" were small, you'd be able to move the camera only in that vicinity, and maybe change focus and some other properties, like the optical aperture, after the fact. Of course you can also capture a still scene by taking many photos from many different points. Either way, at its core it's all just interpolation between those frames, based on the coordinates of the actual camera positions and the position of the virtual camera. The main point of this research, as I understand it, is the way they do this interpolation, which likely has better quality and may be a bit faster than previous attempts. Here they build some kind of mesh representation of the scene, which has some similarity with 3D scanning. But 3D scanning by itself is very similar to the light field concept.

  • @AmazonFinds.2k 3 years ago

    Wow, I just tried the thing online, and being able to change the camera angle makes it feel just like a video game. That's crazy.

  • @BomageMinimart 3 years ago

    This totally fucking rocks! Just awesome to see where this will go for entertainment and teaching/instruction.

  • @caedmonswanson2378 3 years ago

    Something I'd be really interested in is AI video and image compression. Imagine being able to compress videos much higher than normally possible and still retain most of the detail. Stuff like that is kind of happening with NVIDIA DLSS (which upscales gameplay resolution from like 1080 to 4K), but I think that making a system dedicated to compression could be very cool. I'm not sure if this already exists, but I think that something like this will definitely be used in the future. Imagine being able to watch videos and movies while using 25% of the data. That could save a lot of money in ISP bills.

  • @superfluidity 3 years ago

    This would be amazing on a flat screen with a head tracker. Or even better one day if they can make a light-field display, i.e. a two dimensional array of tiny projectors that project directly into the eyes of any number of viewers.

  • @davidm.johnston8994 3 years ago

    So interesting! Thank you Károly!

  • @LanceThumping 3 years ago +1

    Can't wait to see a paper that defines a new compression method specifically for light fields. H265 is good but I seriously doubt that it's designed with light fields in mind.

  • @daYps3 3 years ago +1

    I know that Cyberpunk has been slated - but this reminds me very much of the braindances from that game - you can explore previous memories & events in a similar way to find out additional clues around the scene!

  • @anjaninator 3 years ago +1

    Being able to capture (or capture some and fill in the rest w/ AI) such footage is important for when VR tech is ubiquitous

  • @brettcameratraveler 3 years ago

    The cameras and original raw footage are hugely impractical for any form of wide consumer adoption, except perhaps for the largest film crews. A different approach that would solve these two issues is to crowd-source cell phone photos and videos of popular sites to create a cloud-based 6DoF 3D model of the background scene in any direction, and then to insert any live action into that model from the cameras already in the form factor of our phones. When 5G networks become widespread, you might see something like this created to relive your memories in fully walkable 6DoF VR, etc.

    • @brettcameratraveler 3 years ago

      Alternatively, you could record a normal video of the live action with your camera, and then, if you wanted to preserve the scene in 6DoF, take 15 seconds to walk around in a 6-foot circle and film it in 360. With the right software, the resulting video could be pieced together to create not only a 360 background but a 360 6DoF background you would be free to move a few feet in. The live-action moment you first recorded would be inserted within the virtual space of that second 360 6DoF scene. Anyway, this is another solution that works within the form-factor limits of a phone, so wide adoption is potentially possible.

  • @DanielRodriguez-gm1ih 3 years ago

    Can’t wait to watch VR movies with this technique in the future.

  • @id104335409 3 years ago +1

    To those who like this idea: take a look at the Lytro camera. They managed to produce their first light field camera model available for purchase, and then they went under; not enough people were interested in the technology. With this special camera you can capture more information and store it in a file that needs to be reconstructed in their software, where you can choose the angle and focus of your picture and then export it as a normal photo. This is mainly why nobody was interested. Other than that, this technology looks like magic compared to what we are used to. If 3D TVs hadn't died out, this would have complemented them very well, but nobody wanted to invest more money in 3D. Companies that worked on light field glasses-free 3D TVs also went into oblivion. If that hadn't happened, today, instead of fighting over HDR, we would be looking at TVs with a popping picture you can view from all angles.
    In VR, this light field technology allows removing the bulky, heavy lenses that sit on your nose, making VR glasses as thin as regular glasses. It also allows your eyes to move around and focus inside the image, making the experience much more realistic and less vomit-inducing. However, Nvidia, the only people behind it, also backed out and stopped developing their 3D and VR glasses.
    How about that?

  • @numero7mojeangering 3 years ago +1

    It's like we have the real world on our computer.

  • @ThunderDraws 3 years ago

    yeah I checked this out in VR a few months ago - absolutely the future!

  • @sdjhgfkshfswdfhskljh3360 3 years ago

    Our world contains a lot more information than a regular person would think. Imagine capturing all of it :)

  • @nathaos01 3 years ago

    I think this would be a really practical thing for movie making

  • @junoexpress6 3 years ago

    This is like Photogrammetry but for video, very impressive

  • @roucoupse 3 years ago +1

    A good paper this week. Thank you.

  • @chrisofnottingham 3 years ago

    It is very impressive, and I have actually watched a few 'virtual POV' videos on YouTube, but I also kind of completely hate the actual experience.

  • @deaultusername 3 years ago

    This reminds me of a Kickstarter camera where you could change the focus of an image after taking it. No idea what happened to that tech, but it's very similar.

  • @Puffycheeks 3 years ago +1

    in VR this is mindblowing.

  • @jamescombridgeart 3 years ago

    Omg the reflections updating blew my mind the most

  • @DerSolinski 3 years ago

    Light fields aren't new.
    But I have to admit that this is some outstanding work.
    Even so, a camera sphere of half a meter is somewhat impractical.
    Usually, small high-speed light field cameras are used in industry for quality control in fully automated systems.

  • @mho... 3 years ago

    So, basically they made compound eyes plus the algorithm to use them? Nice work, very interesting for first-person drone pilots!
    But my question is, how small can the aperture be before we get spacing issues? Around golf-ball size would be nice to have, honestly!

  • @li_tsz_fung 3 years ago

    If the video encoder can treat the different perspectives as the same video, the compression rate can be much better. There must be so much similar information.

  • @chrismofer 3 years ago

    Just amazing. I loved Google's light field demo. I wonder if AI volumetrics is less computationally intensive to play back live than Google's implementation.

  • @ganjanaut 3 years ago

    Would be cool in an interactive detective movie, pause time, rewind to inspect etc...

  • @alexm7023 3 years ago

    The light field VR program is free on Steam; it has a couple of really cool videos.

  • @MarcusHast 3 years ago

    I'm convinced that I have seen a paper which combined multiple video streams into a light field video. And it was made to use standard video decoder hardware in creative ways to accelerate this process. I do believe it was presented by Disney, but I have not been able to find it again so I might have dreamed it. :-P

  • @cosmotect 3 years ago

    This channel is basically a window into the future

  • @michaelmartinez8578 3 years ago

    The next step has to be doing this with fewer cameras. Being able to do this with 5 cameras would be a HUGE step.

  • @nbohr1more917 3 years ago

    This is sort of what you would need for "glasses free" 3D TV. You cannot transmit 20+ parallax views in HD (or better) given the limitations of network transmission. You would need to transmit some images and some 3D geometry and then re-construct some of the parallax frames.

  • @MortenHaggren 3 years ago +4

    Not to be confused with the Commercial Light field camera "Lytro" 🤔

    • @kurtnelle 3 years ago

      What is the difference?

  • @Maxjoker98 3 years ago +5

    Hm, interesting. I bet that with something more domain-specific than H.265 (or other 2D video codecs) you could get way better compression.

    • @LKRaider 3 years ago

      Indeed, many commonalities between the videos could be compressed.

    • @Maxjoker98 3 years ago +1

      ​@@LKRaider Yeah, my thought as well. You could replace the binning (DCT, FFT, etc.) implementations with ones supporting 3D spaces. I'm not very familiar with modern codecs, but I imagine it couldn't be too difficult to test this. I also wonder how this would compare to just tiling a bunch of 2D streams cleverly into a giant image and using a traditional 2D codec.

  • @cybisz2883 3 years ago

    Two Minute Papers is a bit late: you've been able to download and watch these exact same light field videos in VR since SIGGRAPH 2020 in August, via the GitHub link in the description. That said, they are extremely impressive! They're the only 6DOF VR videos you can see today.

  • @Phiwipuss 3 years ago +14

    That's some bright stuff! [pun intended]