Lytro failed in many ways, the worst of which is that they did not make a product that could realistically be sold or used by the market. They made a giant beast when the entire industry was moving to more compact systems.
I think they had the right idea honestly, this type of cinema camera and pricing model is the type to be directed at very large production companies, and if even just one of the biggest film production companies rented this camera, their story would've gone much differently...
I actually bought a Lytro Illum back in 2018 out of curiosity about the light field thing. I paid around 400 euros, since the camera was already a failure and resellers were trying to sell off the last remaining stock. I have to say the camera was very disappointing, to the point where the unboxing experience was the best thing about it. The image was quite bad, especially for a camera supposed to cost €1300; the dynamic range was very, very limited, to the point where even my mid-tier Samsung phone was almost better. It used a special file format to save the images, and you needed proprietary software on your computer to change the focus and iris in post, after which you could save the final result as a JPEG or even as a 3D image. I have to say the software part worked quite well, at least for me, and there weren't any bugs I noticed. The problem was that the changes you could make to the depth of field were quite limited, and when you changed the focus the edges of objects were sometimes quite rough. All in all it wasn't (at least for me) enough of a feature to justify the price and the drop in image quality. I think that camera was a very big missed opportunity: you could see there was a lot of potential, the camera was well built, the firmware worked very well (as far as first generations go), and the technology behind it was amazing. The problem, in my opinion, was that they focused too much on the light field feature and not on the actual quality of the image itself. I think, and I believe it's the same for every photographer, that the light field tech can be a nice added feature but nothing more than that, and surely not a replacement for a good image, and this is the massive mistake Lytro made. I'm an Italian speaker btw, so sorry in advance for grammatical errors.
Exactly. In the end, image quality is what makes or breaks a camera; just look at Arri or Blackmagic. They both focused on image quality, and both are quite successful with it, despite their competitors pumping out 12K sensors (for Arri) and generally having more features (for BM).
Yup and instead of turning to VR and the cinema industry, they could have improved their colour science and other stuff. Literally, I'm quite sad that they had to shut down 😢.
I worked at Lytro for a bit, during the Illum until just before the Immerge. It's really interesting to see a historical-documentary-style thing on stuff I actually lived through, and to see people I know personally. Good stuff :-)
Appreciate it! Been cool to hear from people who have worked for these companies and get feedback 😂 We try to do the best we can with the information available to us!
Was the big-ass camera with the half-meter sensor made of smaller sensor tiles? I can't imagine they designed and fabbed a wafer-sized sensor for only the price of a luxury car.
Apple have been trying to implement these features for years, first using multiple cameras for depth information and more recently LiDAR. Removing the background from photos is already there, and within the next few years, as the A-series chips improve, this will definitely come to video on iPhones, along with better accuracy in the depth separation.
@@rcarter1690 Using a separate camera is fundamentally flawed, as the perspective is different. At least for Cinema-grade separation, you'd either need a sensor that records both brightness and ToF at the same time, or a system like lightfields. We've been working with Intel RealSense cameras for quite some time, and the separation between either the structured light cameras or the ToF means that the objects in the foreground will always obscure some part of the scene that the other cameras actually do see.
@@mnomadvfx Yeah, but each new sensor meant more complexity in the lens paths. If they used 200 full-frame sensors and extracted even more information from the beam path, it would get absolutely insane. It makes sense Google bought them; that was the only company that would even dare to see a possibility in handling that kind of complexity.
Otoy made a really cool demonstration of light fields for VR by simply spinning a DSLR camera around on a rig, which created single-frame light fields (not animations). There's still nothing like that out there. The people who were developing light field tech absolutely sucked at commercializing it. Such a shame. Neural radiance fields are sort of getting there without the need for expensive rigs and cameras.
Yeahhhh... The marketing kinda felt like it was coming from a tech college or something. Which makes sense lol. Fascinating tech, gotta wonder if it will ever be viable or even in this century lol
@@FrameVoyager I would like to know how they managed to capture the direction information of the light rays. Perhaps the 300G per second was necessary to collect images on multiple sensors so they could form a 3D space
@@yuxuanhuang3523 If you know the angle of the sensor to other sensors you can decide the angle of the "ray". It's actually not that complicated - lots of papers on this topic.
They weren't trying to make a camera though, they were demonstrating their new data compression system called ORBX that can also be used for light fields.
It's actually almost 21MP, or about 6K. There is a 6x6 grid of pixels under every microlens, so you take 755MP and divide it by 36, which equates to a resolution of about 20.97MP, or roughly 6K. And it wasn't sold to Google; that was an error in reporting. The employees went to Google.
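The arithmetic above is easy to sanity-check (a back-of-envelope sketch; the 755MP sensor and 6x6-pixels-per-microlens figures come from the comment, not from any official spec):

```python
# Effective resolution of a plenoptic sensor: raw pixel count divided by
# the number of pixels each microlens covers.
raw_megapixels = 755
pixels_per_microlens = 6 * 6  # a 6x6 patch of pixels under every microlens

effective_megapixels = raw_megapixels / pixels_per_microlens
print(f"{effective_megapixels:.2f} MP")  # ~20.97 MP, i.e. roughly a 6K image
```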
@@Neojhun I had already written why this is not true, but it keeps getting deleted due to the links. Anyway, only the early reports said that; reports that came out afterwards clarify that they were not bought. You can find one on CNET, as well as others, so long as they are dated late March 2018 or later. The early March reports were the erroneous ones. You will also not find the purchase in Alphabet Inc.'s 2018 quarterly earnings reports, or in any SEC filings. Also, for what it's worth, in 2018, the year of Lytro's demise, the Lytro website redirected to Raytrix in Germany, and it still does. Now, it is possible Google bought some items, but they certainly didn't buy the company; there would be concrete company papers.
"hard to tell exactly what Google is planning on using the technology for in future" Google used the camera tech they bought with Lytro to build Project Starline. Basically glasses-free 3D/AR conference calling, using a huge light-field display Lytro had been working on, and highly efficient networking/compression algorithms from other google projects like AV1/Duo
As a videographer and editor, I think this was the most amazing and mind-blowing revolution of all time for the camera industry. Nowadays this can be made much, much smaller; even phones can use time of flight and LiDAR to achieve refocusing and relighting. But compared to this, it's so much more than that: it's the actual hardware that's capturing all of those things. If you combined this hardware technology with today's sensors, I believe it would be flawless.
If it actually recorded at 400GB/s, there literally would not be any feasible way to record data from it. Even now the fastest commercially available drives top out below 10GB/s, and in 2016 things were a lot slower. You would need enormous arrays of dozens of drives in RAID 0, which would be very, very unreliable if they used anything commercial. And even if the solution was custom, with custom interfaces, it would still need thousands of NAND flash storage chips to record at that data rate for more than a few minutes. No way in hell they had anything approaching a working prototype that would deliver those results.
Yeahhh, I know they had some kind of "special storage" solution for it. But who knows, no one ever really got to use the camera for real so we'll never know 😭
@@FrameVoyager It would need 40 drives with a write speed of 10GB/s or better striped in RAID 0, plus a lot of multi-socket servers, just to handle 400GB/s. And an hour of footage at that rate is 1.44PB, so even with 30TB SSDs you would need around 48 of them for the capacity of a single hour. Totally reasonable, right??
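The thread's storage math can be checked in a few lines (all figures are the thread's assumptions: 400 GB/s sustained capture, 10 GB/s write per SSD, 30 TB per SSD — not confirmed Lytro specs):

```python
# Back-of-envelope storage math for a hypothetical 400 GB/s camera.
RATE_GBPS = 400          # GB/s, claimed capture rate
DRIVE_WRITE_GBPS = 10    # GB/s sustained write per SSD (optimistic)
DRIVE_CAPACITY_TB = 30   # TB per SSD

drives_for_speed = RATE_GBPS // DRIVE_WRITE_GBPS              # RAID 0 stripe width
hour_of_footage_tb = RATE_GBPS * 3600 / 1000                  # TB written in one hour
drives_for_capacity = hour_of_footage_tb / DRIVE_CAPACITY_TB  # SSDs to hold that hour

print(drives_for_speed)      # 40 drives just to sustain the bandwidth
print(hour_of_footage_tb)    # 1440.0 TB, i.e. 1.44 PB per hour
print(drives_for_capacity)   # 48.0 of those 30 TB SSDs for capacity
```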
You don't need to record video to SSD directly, just keep it in RAM, it would cost just around $800 for one second, just a couple of millions for one hour
That's not true. We have commercial data storage solutions that can top 512GB/s, and they can also be remote (look up PCIe over fiber). This is not a ramdisk or anything, though that could work too, just expensive to scale. The fastest consumer drives top out around 14GB/s right now with PCIe 5, and there are consumer RAID cards that can use a full PCIe 5 x16 link, which is much faster than 10GB/s. Keep in mind PCIe 7 is already done, and there are even more niche solutions. You're talking about a camera that cost millions. It is definitely feasible, while maybe not practical.
Couldn't agree more! I hope they really figure out how to use it one day. Imagine the cost cutting applications for filming movies or anything? You really can fix it in post lol
Oh, at some point for sure! I think some of the concepts you'll start to see a lot in VFX work anyways. And you are kind of seeing some of this with the way they are doing virtual productions.
Also, light field photography has existed for 100 years, and Lytro tried to pretend they did it first. Adobe did it back in 2004; Raytrix has done it since 2008. Outside niche scientific applications it's useless. Keep harping on though; it's good to know who the brain-dead are for when euthanization is legalized later.
I remember thinking the first Lytro camera they released was revolutionary and wish I had bought one at the time. This technology is exactly what you'd think someone like James Cameron would be interested in and I'm actually surprised he, or someone of his pedigree, didn't invest in it. However, I would not be surprised to see this tech make a comeback in the near future.
Filmmakers know what they want to capture, including what they want to focus on in any given shot. Having that 'baked in' is not a problem looking for a solution.
I remember when Lytros came out a few years ago. Some people thought it was an innovative idea, but they were forgotten almost immediately. I honestly didn't think they got beyond the first two consumer still cameras.
@@FrameVoyager What killed them was their image quality was awful. Even with their supposedly amazing bus-sized cinema camera, that short film looks muddy and strange.
I'm torn. On the one hand this is a totally incredible technological breakthrough, but on the other hand I shiver at the thought of moving even more "artistic" decisions into the editing room. Without the need for a focus puller, or even a DP to consult, it makes it too easy for the artistic vision of the DP to be deviated from or ignored. I love that Arri went for baked-in looks from the camera in the Alexa 35; it's bold, but it puts the control back in the hands of the filmmakers, at least in my opinion. Not that plenoptic cameras don't have a place in cinema, I just wouldn't like them to replace the current tech completely.
Totally get what you're saying! We are kind of at a weird crossroads at this point in the history of filmmaking. Will we keep more grounded like we always have or will everything start to become more automated? And which is better? Fascinating times
I don't foresee plenoptic cameras completely replacing conventional digital ones (heck, analog films are still around to some degree). However, there is a possibility of some aspiring directors or cinematographers experimenting with light field cameras.
Lytro would never have taken over from today's traditional digital cameras, even if it were smaller and lighter, because there is so much character in lenses and even in the dynamic range of your sensor. Lytro tech is amazing, but the results are extremely digital. I believe the DOF is simulated, and until they simulate good-looking DOF with some character, I think the results will always look really, really nasty. Think Fast Blur in Premiere or After Effects.
Ah yes, I remember that Lytro refocus camera; it was featured on Tested by Norman Chan! I can't believe it was already 11 years ago! 😅 Back then that technology was mind-blowing! 😃
I feel that when the new CEO took over, the company started to die. The original CEO was more correct in making consumer products, which they should have kept improving over time. The cinema camera specs sounded too much like a scam.
I remember when the Lytro was released to the public. It sounded fancy-schmancy. If they had only improved on the consumer product, expanding to a professional photographer level version, they could have been relatively successful. Imagine if development had progressed enough to fit it into an iPhone?
Proud owner of a Lytro Illum 40, not an easy camera, not the best, and doesn't have all the whistles and bells, no more support, and a handful of fans left but I love it.
I had the chance to see the camera and equipment at the company in Dec 2016. Not only the camera but the hard drives were as big as a large mini fridge with hundreds of petabytes of capacity! There were so many wires all around which could transfer terabytes of data to the hard drives. It looked like a sci-fi machine! My mind was blown...
I was waiting for the bandwidth and it didn't disappoint. If they had partnered up with google to include a datacenter in the subscription they'd have changed everything. You pay the fee, lytro sends you the camera and google starts construction for the datacenter. It'd work.
I know this isn't strictly 'camera' related but it kind of is! And since this camera was going to allow you to change the DOF in post; I need to talk about this. So in videogames, you are essentially always looking through a camera. But even though there is so much you can do with a manual DOF; it is hardly utilized ever. The only time you may see it is when you are playing a FPS and you look down the barrel of a gun. One game that bucked this trend blew my mind when it used it and even now; no game has tried to do what this game did. That game was Alien Isolation. Alien Isolation allowed you to shift your focus between your radar and the environment. Something you don't even think about is that when you do something as simple as look at your phone; that is DOF. The background is blurry but you don't even notice. However in games DOF is rarely used in these instances. But in Alien Isolation they used it to tremendous effect. It was so immersive. So why do I bring this up? Well if this camera ever became a reality; imagine what we could do with movies? VR movies could be the thing of the future. Imagine it. They film an entire scene with this camera and with eye tracking; the movie knows where you are looking and focuses on that object. You could watch a movie and focus on what the person was reading whilst your friend watched the same movie and looked at the characters expression instead. I don't think we will get this for another 20 years or so but goddamn it could be amazing.
I remember this thing, it was for sure an interesting idea and the PR was so bad. I still dreamed of getting to try it out and I still dream of owning the one they made someday. My own white whale
There's only one director I can think of who would be mad enough to use this tree-log camera for a full movie, and that person is none other than Quentin Tarantino. Imagine a VR movie as one of his last. Crazy!
Faceshift was an inexpensive facial motion capture software my brother and I had purchased a license for in its early beta stages. It was super easy, the results, after working out some quirks, were amazingly accurate, and we even had a nice relationship with its creator, as we were pretty active testers. Out of nowhere the company got sold to Apple and the software was locked once everybody's annual licenses ran out. Apple used this tech for their silly Animoji feature. With the remaining months of our license, we made our multi-award-winning web series "The Review - A Fatal Frame Fan Film", which is on our channel; all of the CG ghosts used it for their facial animations. It was such a shame, as the software was so good and affordable before it got bought out. Given time, I'm sure Lytro would have worked out their cinema camera, which would have been amazing to use on our first feature film, as I've been rotoscoping for the past several months, which has been slow, painful, but somehow fun!
Honestly, this would be the savior for 3d video. I always thought it was stupid because IRL, things you aren't focused on are blurry. With current 3d video tech, whatever was in focus in the shot is the only thing that can ever be in focus when viewed. It's quite distracting if one wants to look at something else in the scene. It breaks it down into a gimmick.
Ah I was following Lytro from launch but had no idea they went into broadcast/cinema. I really hoped Panasonic, Sony or other camera manufacturer would buy them and optimise the technology. Panasonic in particular, because they have always been at the leading edge of capturing frames from video in their GH series micro 4/3rds cameras. And light field tech seemed the next logical step.
I remember the Lytro Immerge videos that promised VR video where you could walk and look around real footage, peering around corners and around people. To me that's still the dream for VR, rather than rendered graphics or depthless 360 videos. But I suspect we'll find it easier to set up an array of Kinect-style depth cameras rather than bothering with light field cameras. Time will tell! Can't wait for more innovation on that front in VR, either way!
Like many things we use today, when they were first invented they were way ahead of their time. Most basic network and internet concepts were conceived in the 1960s, and it took 40 years for the other technologies to make them practical for the population. I still think the light field idea is revolutionary; we just haven't found the right application, and current electronics aren't powerful enough to make it good enough.
If they had made a 1080p consumer video camera for $5000, that would have been cool, especially if aimed at VFX (same size as the big original Blackmagic URSA, but with the ability to create easy chroma screens, it would have been interesting for smaller VFX workflows).
I don't believe for a second that they've abandoned this technology. They've maybe continued to develop it for the military or security-sensitive utility of this technology, but abandoning it for good, I don't think so.
Thanks for sharing this fascinating technology. This is the field in which I'm doing my PhD. Just a quick correction on how a light field camera works: they don't have multiple sensors inside them, just a regular camera sensor. The difference is in the array of tiny lenses in front of the sensor and the processing you can do with that additional information. Light field cameras are in regular use in industry but have yet to find their niche in the consumer market.
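The single-sensor-plus-microlens-array layout described above is easy to picture with a toy reshape: the raw frame is just one big image where each microlens owns a small patch of pixels. A minimal sketch (synthetic data, hypothetical 6x6 angular grid, not any real Lytro format):

```python
import numpy as np

# Toy plenoptic raw frame: each microlens covers a 6x6 pixel patch.
U = V = 6                 # angular samples per microlens
H, W = 120, 160           # number of microlenses (the spatial resolution)
raw = np.random.rand(H * U, W * V)  # simulated raw sensor readout

# Reinterpret the tiled layout: raw[y*U+u, x*V+v] -> lf[u, v, y, x].
lf = raw.reshape(H, U, W, V).transpose(1, 3, 0, 2)  # shape (U, V, H, W)

# Each (u, v) slice is one low-resolution sub-aperture view of the scene.
center_view = lf[U // 2, V // 2]
print(lf.shape, center_view.shape)  # (6, 6, 120, 160) (120, 160)
```

This is why the effective resolution drops by the microlens factor: one sensor, 36 viewpoints, each at 1/36 of the pixel count.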
A guy I worked with got the consumer tube (is there a word for a 3D rectangle?), and the fact you can change focus points afterwards was neat. But he was bummed out by the obvious dependence on their software, which made distributing said files challenging. I had zero idea the cinema thing even happened! I do hope to see it again someday, and not buried in some warehouse by whoever eventually owns the IP.
They tried to solve problems no one was having. It's an interesting technology but if your image was out of focus, you simply just took another photo. Maybe the VR application would be useful but it's still way too early.
What was the output resolution? Light field works by capturing multiple versions of the same image. Would that 40k (each of the light field elements being added together) be "downsampled" to a final 4k output?
Lightfields are the future. Such a shame that the company tried to run before knowing how to walk properly. They have set back the technology instead of advancing it.
Thank you for making this video. I loved Lytro; as a young kid I dreamed of working with a light field camera. I wonder how many hours of research you spent, and then all the editing and gathering of stock video. What seems weird, and even good fuel for conspiracy theories, is why all such tech suddenly vanishes. I'm aware they were too far ahead of their time from a business perspective, but now VR and so many other things could do amazing stuff with it. Imagine a 360 camera on a helicopter for a bird's-eye experience of the sky in VR, just to mention one thing.
The reason they vanished is pretty obvious. Their image quality was awful. Focusing correctly the first time isn't hard, and there is no purpose for a camera that takes a poor-quality image.
I can't see any other comments mentioning this, but the technology lives on at Google, in stuff like Project Starline, which basically uses the light field tech for video calls.
This story is much deeper, and it is really strange. Lytro invented a new camera that had no lenses, just a radar-like ball, and it could see. Then Google bought the company and created a tech which is just multiple common cameras together, and they used the same name, as if they wanted to make that new tech Lytro invented disappear and confuse the public. Now ask yourself: what could that new camera see? And why did they rush to cancel it?
It doesn't use multiple sensors, just one normal sensor with a lot of small lenses, essentially capturing several low-resolution pictures focused on different layers. But data alone beats physics through various levels of computational imaging: think stacking for denoising/super resolution, depth estimation, NeRFs. Light stages are also used, which simply shoot from different angles to get frames for 3D models.
I was totally bewildered when Lytro announced this weird camera, and I still am, that they ever considered it. I kind of developed a hate obsession with this company 8 years ago that I didn't even remember until this video. Their technology is incredible, I'm sure, but all I can see is the images they produce, and I was really not impressed by their quality. No cinematographer would _ever_ have used this thing to shoot a movie. You _cannot_ fake the beautiful, buttery-smooth bokeh produced by a well-crafted cinema prime lens. The precision of cinema camera sensors with regard to color is more than a company like Lytro could possibly hope to compete with. Also, it's totally unwieldy and impossible to fit to sliders, rigs, jibs, cranes, gimbals, Steadicams, etc. A RED camera would run circles around this thing in terms of visual quality, and will fit on all of those. And for what? So you can focus in post? That's such a niche and gimmicky reason to totally sacrifice image quality. It's the classic "solution in search of a problem", which also runs totally contrary to the needs of the industry they were trying to break into.
I bought 18 Illum cameras (when they were heavily discounted) for an array. Worked great ....but the computing power necessary was way too much for live action!
Oh man I was looking forward for these to be taking off, but I guess they were too ahead of their time. I'm sure it's the future though! Just imagine the RAW 20 bit video file size that will come out of future light field cinema cameras :D
This or it paved the way for something like it. I really have always believed filmmaking will start becoming a little more automated with the processes allowing creators more control over visuals than they do now. Will be interesting to see what comes next
I blame the new CEO for causing the company to crash. The founder started and built smaller, portable solutions with consumers in mind. The new CEO took the company to building a monstrous, highly specialized machine that even special-effects professionals balked at. It’s a shame. I hope Google will keep the spirit of innovation alive and release consumer-friendly products later on.
We're already seeing that technology in action, especially with Google and their Pixel series devices. I'm positive they bought Lytro explicitly for all the computational photography they could harness from the company, and now they shoehorn all of that into the Pixel phones.
I wrote my BA thesis on the question of whether light fields will be the future of cinema cameras. The short version: the option of refocusing and adjusting the motion blur or aperture in post is nice to have, but not worth the amount of data and the price tag. The real benefit of light fields could be the possibility of capturing immersive video, meaning the viewer sees not just a 3D but a 4D video: moving your head means seeing the content from another perspective, so you not only get the illusion of more dimensions but are also able to look behind different objects. The real problem is that the options for displaying light fields as immersive video are quite few. In recent years some startups, and also big companies like Samsung or LG, have introduced prototypes. Some of them seemed quite promising... we will see.
I spoke to different experts (ARRI, VFX engineers...); they said the consumer market (especially smartphones) could be the turning point where light fields have their breakthrough. Feel free to hit me up if you have any questions. Greetings from Germany!
That giant Lytro cinema rig reminded me of the "Blimp" -- the old Technicolor camera with three film strips for each primary color. Big, expensive, noisy and a chore to work with the material. Lytro simply used brute force to achieve fundamental post-process gimmicks already possible with the advancement of machine learning algorithms for image processing, at much lower price and keeping your existing tool-chain setup.
I actually remember the pool video back in 2013. Honestly felt like the tech was revolutionary but possibly too expensive. Guess I was right, seeing what happened to the company.
I remember this company had a very promising product and future, but that is the risk of bringing new technology to market. I think AI-generated images and animation will be interesting to investigate now.
I ended up with one of those Lytro consumer cameras after the company folded and played around with it a bit well after the company closed. The form factor was interesting, if somewhat limiting, and the ecosystem of camera/software etc. was a hassle to use. I wonder if a push to mature the tech, and then for some sort of standards in file formats or OS support on the consumer side, would have won more acceptance/buy-in.
That moment I realise it's also Lytro that made the Lytro Illum, allegedly the camera of the future: a buggy, laggy, terrible-quality camera with terrible battery life, oversized with a tiny sensor...
Just by saying at 9:02 that they are "super super super super 35", he gave the impression of someone who doesn't know the first thing about the conventional cameras they were supposed to compete with: Super 35 is smaller than 35mm.
The only thing I could properly rationalise about the scale of that camera was the "400GB/s". That's both something I can imagine and also unthinkable: something like 24TB a minute of footage. While you can build storage that will support that, it's an entire server rack on wheels in traditional storage terms, just SSDs, thousands of them.
I have a feeling Google will take advantage of the technology to suddenly compete in the camera market by making it compact. Regardless, I hope this concept gets developed and opened up to the public; VFX, CGI, and editing would go ballistic with this.
If 3D makes a big comeback like it did between 2009 and 2012, this camera would get a metric shizton of big studio $$$$ pushing it forward. Having a perfect depth map made in camera would feed so nicely into existing VFX pipelines, allowing for really amazing stereoscopic effects, but based on real depth and not an interpolation built from rotoscope masks made in india and depth maps made in a 3D conversion sweat shop in Vancouver.
I wonder where the actual cinema camera went. I kind of hope it's in an F-117-like state of retirement, where it's technically retired and not public-facing but is still being used and studied.
What's really happening is they divide up the sensor into many tiny chunks, each with a microlens in front with a different focal length. Basically they're taking one small photo for each focus distance, and that's all there is to it; there's no 5D uber math going on. All you're doing is picking out the small slice of the photo that's at the focal distance you want. Their cameras, because of this, still require focusing; it just doesn't have to be as precise, because each shot is a matrix of smaller photos at different focus distances. Sure, they could do some 5D uber math with this information, but there's no need to for the results they produce in practice. All they really have to do is compose the images together for a smooth DOF effect, kind of like the fake bokeh we have in phones today.
Exactly. All of the talk of light fields, math transformations, photon direction, etc. was nonsense when applied to this camera. Sure, those things are real when talking about actual light fields, but no Lytro camera was actually capturing light fields.
@@hbp_ Yes, there are some similarities. Canon basically splits every pixel in two, and there's a normal number of pixels, whereas Lytro splits every pixel into a million and there are like 16 pixels.
I doubt they had the technology to build a microlens array with different focal lengths; my guess is they are just spherical "dots", like embossed plastic sheets used for decoration. I'm guessing the 5D uber math is both real and not: they might just be overlaying all the images with different blur strengths to bring out the locally unblurred parts, but for lack of words, the way the geniuses explained it was "preserving incident angles of photon beams" and all that.
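Whatever the hardware truth, the trick both sides of this thread are gesturing at is classic shift-and-add refocusing: shift each sub-aperture view in proportion to its angular offset, then average. A toy numpy sketch on synthetic data (this is the textbook light field algorithm, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocus over a (U, V, H, W) stack of sub-aperture views.

    alpha sets the synthetic focal plane: each view is shifted by its angular
    offset from the center view times alpha, then all views are averaged.
    """
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: 5x5 views of a flat gray scene. Refocusing a constant
# image at any alpha must leave it unchanged, which makes a handy check.
views = np.full((5, 5, 32, 32), 0.5)
assert np.allclose(refocus(views, alpha=1.0), 0.5)
```

With real sub-aperture views, scene content whose parallax matches alpha lines up and stays sharp, while everything else smears into bokeh-like blur.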
Who here thinks they could mount this bad boy on a gimbal?
=============================
💭Join our Discord Channel💬 ► discord.gg/3aeNPU7GHu
🐦Twitter ► twitter.com/frame_voyager
📷Instagram ► instagram.com/framevoyager/
🎵TikTok ► www.tiktok.com/@framevoyager
Join our UA-cam channel 📺 ►ua-cam.com/channels/mXGDFnFh95WlZjhwmA5aeQ.htmljoin
=============================
A piece of cake for my DJI OM 4. But I have to go to the gym first...😂
Real men strap that to a steadicam mount
Not sure about a gimbal, but the smallrig tripod would handle it easily
😂😂😂 💯
Did google actually buy the company? I thought that their employees just went to Google without a buyout.
But still, their vision was great. Steve Jobs would have approved of it! And maybe even bought it before Google did.
I think they had the right idea honestly, this type of cinema camera and pricing model is the type to be directed at very large production companies, and if even just one of the biggest film production companies rented this camera, their story would've gone much differently...
I actually bought a Lytro Illum back in 2018 out of curiosity about the light field thing. I paid around 400 euros, since the camera was already a failure and resellers were trying to sell off the remaining stock. I have to say the camera was very disappointing, to the point where the unboxing experience was the best thing about it. The image was quite bad, especially for a camera supposed to cost €1300; the dynamic range was very, very limited, to the point where even my mid-tier Samsung phone was almost better. It used a special file format to save the images, and you needed proprietary software on your computer to change the focus and iris (more on that later) in post; then you could save the final result as a JPEG or even as a 3D image. I have to say the software part worked quite well, at least for me, and there weren't any bugs I noticed. The problem was that the changes you could make to the depth of field were quite limited, and when you changed the focus the edges of objects were sometimes quite rough. All in all it wasn't (at least for me) enough of a feature to justify the price and the drop in image quality. I think that camera was a very big missed opportunity: you could see there was a lot of potential, the camera was well built, the firmware worked very well (as far as first generations go), and the technology behind it was amazing. The problem, in my opinion, was that they focused too much on the light field feature and not on the actual quality of the image itself. I think, and I believe it's the same for every photographer, that light field tech can be a nice added feature but nothing more than that, and surely not a replacement for a good image, and this is the massive mistake Lytro made.
I'm an Italian speaker btw, so sorry in advance for grammatical errors.
Exactly, in the end the image quality is what makes or breaks a camera; just look at Arri or Blackmagic. They both focused on image quality, and both are quite successful with it, despite their competitors pumping out 12K sensors (for Arri) and generally having more features (for BM).
All of their experiments had this problem: expensive and poor quality, for a gimmick.
Yup and instead of turning to VR and the cinema industry, they could have improved their colour science and other stuff. Literally, I'm quite sad that they had to shut down 😢.
I think the edges are bad because the edges of objects scatter light (like how the edges of shadows are blurry)
Mid-tier.
I worked at Lytro for a bit, during the Illum until just before the Immerge. It's really interesting to see a historical-documentary-style thing about stuff I actually lived through. And seeing people I know personally. Good stuff :-)
Appreciate it! Been cool to hear from people who have worked for these companies and get feedback 😂 try to do the best we can with the information available to us!
Buzz Hays was my S3D prof/mentor in college. I loved hearing about the tech and seeing what yall were working on when he was at class in person.
Was the big-ass camera with the half meter sensor made of smaller sensor tiles? I can't expect they designed and fabbed a wafer-sized sensor for only the price of a luxury car.
@@noname7271 You paid the price of a luxury car to rent the camera; there was no purchase option. I imagine the sensor was indeed wafer-sized.
Let's hope this technology is coming back one day, in a smaller package. Just the ability to do green screen without a green screen is brilliant.
Apple have been trying to implement these features for years. First using the multiple cameras for depth information and more recently LiDAR. Removing the background for photos is already there and within the next few years as the A series chips improve this will definitely come to video on iPhones as well as the accuracy of the depth separation.
@@rcarter1690 Using a separate camera is fundamentally flawed, as the perspective is different. At least for Cinema-grade separation, you'd either need a sensor that records both brightness and ToF at the same time, or a system like lightfields.
We've been working with Intel RealSense cameras for quite some time, and the offset between either the structured-light cameras or the ToF sensor means that objects in the foreground will always obscure some part of the scene that the other cameras do actually see.
Like, their consumer products were compelling enough. This sure was a loss in the world of photography.
It's called image segmentation
@@birdpump I am aware of that. Read the subcomment.
The design of the camera is almost RoboCop-style and futuristic. A 400GB/s video stream is absolutely mind-blowing!
Half a meter wide sensor??? That's insane!
Nice video man, this series is really interesting and it never disappoints me!
Insane right? And appreciate it! Been fun creating and led to some cool stories. Lots more to come and spinoff series we've already started!
Not really.
They used an array of lenses and sensors to achieve a greater total sensor area.
@@mnomadvfx Yeah, but each new sensor meant more complexity in the lens paths. If they used 200 full frame sensors and extracted more information from the beam path, that would get absolutely insane. It makes sense Google bought them; that was the only company that would dare to even see a possibility of handling that kind of complexity.
Otoy made a really cool demonstration of lightfields for VR by simply spinning a DSLR camera on a rig around and that created 1 frame lightfields (not animations). There's still nothing like that out there. People who were developing lightfield tech absolutely sucked at commercializing it. Such a shame. Neural radiance fields are sort of getting there without the need to have expensive rigs and cameras.
Yeahhhh... The marketing kinda felt like it was coming from a tech college or something. Which makes sense lol. Fascinating tech, gotta wonder if it will ever be viable or even in this century lol
@@FrameVoyager I would like to know how they managed to capture the direction information of the light rays. Perhaps the 300G per second was necessary to collect images on multiple sensors so they could form a 3D space
@@yuxuanhuang3523 If you know the angle of the sensor to other sensors you can decide the angle of the "ray". It's actually not that complicated - lots of papers on this topic.
They weren't trying to make a camera though, they were demonstrating their new data compression system called ORBX that can also be used for light fields.
It's actually almost 21MP, or almost 6K. There is a 6x6 grid of pixels under every microlens, so you take 755MP and divide it by 36. That equates to a resolution of about 20.97MP, or roughly 6K.
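The arithmetic above can be checked in a couple of lines (755 MP and the 6x6 grid are the figures from the comment):

```python
# Effective output resolution of a plenoptic sensor:
# total photosites divided by photosites per microlens.
total_mp = 755            # claimed raw sensor count, in megapixels
per_lens = 6 * 6          # 6x6 grid of photosites behind each microlens
effective_mp = total_mp / per_lens
print(round(effective_mp, 2))   # -> 20.97
```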
It wasn't sold to Google. It was an error in reporting; the employees went to Google.
Nope, Google did acquire Lytro the company, mostly for its IP assets I guess. It happened after many staff had already moved to Google.
@@Neojhun I had already written why this is not true, but keeps getting deleted due to the links.
Anyway, only early reports said that. Reports that came out later clarify that they were not. You can find one on CNET, as well as others, so long as they are dated late March 2018 or later. The early March reports were the erroneous ones.
You will also not find the purchase in Alphabet Inc.'s 2018 quarterly earnings reports, or in any SEC filings. Also, for what it's worth, in 2018, the year of Lytro's demise, the Lytro website redirected to Raytrix in Germany, and still does. Now, it is possible Google bought some assets, but they certainly didn't buy the company; there would be concrete company papers.
Leonardo da Vinci must have been a time traveler hahaha he knew exactly what he was talking about
Talk about being waaaayyyyy ahead of your time.
"hard to tell exactly what Google is planning on using the technology for in future" Google used the camera tech they bought with Lytro to build Project Starline. Basically glasses-free 3D/AR conference calling, using a huge light-field display Lytro had been working on, and highly efficient networking/compression algorithms from other google projects like AV1/Duo
Interesting... 3d conference calling lol
Google Street?
UA-cam should take the hint by now and support 40K quality on its videos 👀
I know!!! Right? 👀
As a videographer and editor, I think this was the most amazing and mind-blowing revolution of all time for the camera industry. Nowadays this can be made much, much smaller; even phones can use time of flight and LiDAR to achieve refocusing and relighting. But compared to this, it's so much more than that: it's the actual hardware that's capturing all of those things. If you combine this hardware technology and today's sensors, I believe it would be flawless.
If it actually recorded at 400GB/s, there literally would not be any feasible way to record data from it. Even now the fastest commercially available drives top out below 10GB/s, and in 2016 things were a lot slower. You would need enormous arrays of dozens of drives in RAID 0, which would be very, very unreliable if they used anything commercial. And even if the solution were custom, with custom interfaces, it would still need thousands of NAND flash storage chips to record at that data rate for more than a few minutes. No way in hell they had anything approaching a working prototype that would deliver those results.
Yeahhh, I know they had some kind of "special storage" solution for it. But who knows, no one ever really got to use the camera for real so we'll never know 😭
@@FrameVoyager It would need 40 drives with a write speed of 10GB/s or better, striped in RAID 0, plus a lot of multi-socket servers to handle 400GB/s. Also, with 30TB SSDs, each drive would only hold about 75 seconds of footage at that rate, so for an hour of footage you would need about 48 of those drives, which is 192 lanes of PCIe, and that hour of footage would total roughly 1.44PB. Totally reasonable, right??
27 PCIe Gen5 SSDs running RAID0 🤯
You don't need to record video to SSD directly, just keep it in RAM, it would cost just around $800 for one second, just a couple of millions for one hour
That's not true. We have commercial data storage solutions that can top 512GB/s; it can also be remote, look up PCIe over fiber. This is not a ramdisk or anything, but that could work too, though it would be expensive to scale. The fastest consumer drives top out around 14GB/s right now with PCIe 5, and there are consumer RAID cards that can use a full PCIe 5 x16 link, which is much faster than 10GB/s. Keep in mind PCIe 7 is already done, and there are even more niche solutions. You're talking about a camera that cost millions. It is definitely feasible, while maybe not practical.
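Back-of-the-envelope math for the thread above, taking the claimed 400 GB/s stream, an assumed 10 GB/s per-drive write speed, and assumed 30 TB SSDs as inputs:

```python
# Rough feasibility numbers for sustaining a 400 GB/s capture stream.
stream_gb_s = 400          # claimed camera data rate, GB/s
drive_write_gb_s = 10      # assumed per-drive sustained write, GB/s
drive_cap_tb = 30          # assumed SSD capacity, TB

drives_for_bandwidth = stream_gb_s // drive_write_gb_s   # drives striped in RAID 0
seconds_per_drive = drive_cap_tb * 1000 / stream_gb_s    # footage one drive can hold
one_hour_pb = stream_gb_s * 3600 / 1_000_000             # PB recorded per hour

print(drives_for_bandwidth, seconds_per_drive, one_hour_pb)  # -> 40 75.0 1.44
```

One hour at that rate is about 1.44 PB, which is why everyone in the thread doubts a conventional recording chain could keep up.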
Light field photography is just insane, amazing and brilliant.
Couldn't agree more! I hope they really figure out how to use it one day. Imagine the cost cutting applications for filming movies or anything? You really can fix it in post lol
And also dead.
I own 2 Lytro still cameras.
They are awesome (though low-res) bits of technology.
The technology was too far ahead of its time, it will I'm sure make a comeback in the future.
Oh, at some point for sure! I think some of the concepts you'll start to see a lot in VFX work anyways. And you are kind of seeing some of this with the way they are doing virtual productions.
There's a problem when you live too much ahead of time.
Also, light field has already existed for 100 years, and Lytro tried to pretend they did it first. Adobe did it back in 2004; Raytrix has done it since 2008. Outside niche scientific applications it's useless. Keep harping on though; it's good to know who the brain-dead are for when euthanasia is legalized later.
I think the tech creates more problems than it solves.
The technology may have been ahead of its time, but their color accuracy and sharpness were about 10 years behind the times.
What a loss!! As an engineer myself, I was finishing school wanting to work for Lytro. What an amazing technology.
I remember thinking the first Lytro camera they released was revolutionary and wish I had bought one at the time. This technology is exactly what you'd think someone like James Cameron would be interested in and I'm actually surprised he, or someone of his pedigree, didn't invest in it. However, I would not be surprised to see this tech make a comeback in the near future.
Filmmakers know what they want to capture, including what they want to focus on in any given shot. Having that 'baked in' is not a problem looking for a solution.
I remember when Lytros came out a few years ago. Some people thought it was an innovative idea, but they were forgotten almost immediately. I honestly didn't think they got beyond the first two consumer still cameras.
Probably because they rebranded away from consumers and toward businesses and production. Probably what killed them.
@@FrameVoyager What killed them was their image quality was awful. Even with their supposedly amazing bus-sized cinema camera, that short film looks muddy and strange.
I bought the 1600 dollar camera and I enjoyed it, but then Lytro discontinued its cameras and all support, they left us out in the cold!
I'm torn. On the one hand, this is a totally incredible technological breakthrough, but on the other hand, I shiver at the thought of moving even more "artistic" decisions into the editing room. Without the need for a focus puller, or even a DP to consult, it makes it too easy for the artistic vision of the DP to be deviated from or ignored. I love that Arri went for baked-in looks from the camera in the Alexa 35; it's bold, but it puts the control back in the hands of the filmmakers, at least in my opinion. Not that plenoptic cameras don't have a place in cinema, I just wouldn't like them to replace the current tech completely.
Totally get what you're saying! We are kind of at a weird crossroads at this point in the history of filmmaking. Will we keep more grounded like we always have or will everything start to become more automated? And which is better? Fascinating times
I don't foresee plenoptic cameras completely replacing conventional digital ones (heck, analog films are still around to some degree). However, there is a possibility of some aspiring directors or cinematographers experimenting with light field cameras.
@@FrameVoyager I just got a super 16 krasnogorsk, lol.
Lytro would never have displaced today's traditional digital cameras, even if it were smaller and lighter, because there is so much character in lenses and even in the dynamic range of your sensor. Lytro tech is amazing, but the results are extremely digital. I believe the DOF is simulated, and until they simulate good-looking DOF with some character, I think the results will always look really, really nasty. Think Fast Blur in Premiere or After Effects.
Ah yes , i remember that LYTRO refocus camera, it was featured on Tested by Norman Chan! I can't believe it was already 11 years ago! 😅 Back then that technology was mind-blowing! 😃
😅😅😅 such a fascinating camera... Don't know if it will ever be practical but still cool tech for sure
I feel that when the new CEO took over, the company started to die, as the original CEO was more correct in making consumer products, which they should have improved over time. But the cinema camera specs sounded too much like a scam.
Yeah, it was definitely a pie in the sky kind of company.
I remember when the Lytro was released to the public. It sounded fancy-schmancy. If they had only improved on the consumer product, expanding to a professional photographer level version, they could have been relatively successful. Imagine if development had progressed enough to fit it into an iPhone?
Proud owner of a Lytro Illum 40, not an easy camera, not the best, and doesn't have all the whistles and bells, no more support, and a handful of fans left but I love it.
I had the chance to see the camera and equipment at the company in Dec 2016. Not only the camera but the hard drives were as big as a large mini fridge with hundreds of petabytes of capacity! There were so many wires all around which could transfer terabytes of data to the hard drives. It looked like a sci-fi machine! My mind was blown...
I was waiting for the bandwidth and it didn't disappoint. If they had partnered up with google to include a datacenter in the subscription they'd have changed everything. You pay the fee, lytro sends you the camera and google starts construction for the datacenter. It'd work.
I know this isn't strictly "camera" related, but it kind of is! And since this camera was going to allow you to change the DOF in post, I need to talk about this.
So in video games, you are essentially always looking through a camera. But even though there is so much you can do with manual DOF, it is hardly ever utilized. The only time you may see it is when you are playing an FPS and you look down the barrel of a gun.
One game that bucked this trend blew my mind when it used it, and even now, no game has tried to do what it did. That game was Alien: Isolation.
Alien: Isolation allowed you to shift your focus between your radar and the environment. Something you don't even think about is that when you do something as simple as look at your phone, that is DOF: the background is blurry but you don't even notice. However, in games DOF is rarely used in these instances. But Alien: Isolation used it to tremendous effect. It was so immersive.
So why do I bring this up? Well, if this camera ever became a reality, imagine what we could do with movies. VR movies could be the thing of the future.
Imagine it. They film an entire scene with this camera, and with eye tracking, the movie knows where you are looking and focuses on that object. You could watch a movie and focus on what a character was reading, while your friend watched the same movie and looked at the characters' expressions instead.
I don't think we will get this for another 20 years or so, but goddamn, it could be amazing.
I remember this thing, it was for sure an interesting idea and the PR was so bad.
I still dreamed of getting to try it out and I still dream of owning the one they made someday. My own white whale
It would be cool to get a miniaturized version of this someday for sure! Sad it didn't work out
There's only one director I can think of who would be mad enough to use this tree-log camera for a full movie, and that person is none other than Quentin Tarantino.
Imagine a VR movie as one of his last, crazy!
This is better than any CSI: Miami episode. You don't even have to nerd out about cameras to enjoy the format.
This is the kind of fringe tech you'd find in an alternate universe.
@12:08 Well, now we kinda know what's behind the recently very good cameras in the Google Pixel phones?
Faceshift was an inexpensive facial motion capture software my brother and I had purchased a license for in its early beta stages. It was super easy, and the results, after working out some quirks, were amazingly accurate; we even had a nice relationship with its creator, as we were pretty active testers. Out of nowhere the company got sold to Apple, and the software was locked once everybody's annual licenses expired. Apple used this tech for their silly Animoji feature. With the remaining months of our license, we made our multi-award-winning web series "The Review - A Fatal Frame Fan Film", which is on our channel; all of the CG ghosts used it for their facial animations. It was such a shame, as the software was so good and affordable before it got bought out. Given time, I'm sure Lytro would have worked out their cinema camera, which would have been amazing to use on our first feature film, as I've been rotoscoping for the past several months, which has been slow and painful, but somehow fun!
Honestly, this would be the savior for 3d video. I always thought it was stupid because IRL, things you aren't focused on are blurry. With current 3d video tech, whatever was in focus in the shot is the only thing that can ever be in focus when viewed. It's quite distracting if one wants to look at something else in the scene. It breaks it down into a gimmick.
Ah I was following Lytro from launch but had no idea they went into broadcast/cinema. I really hoped Panasonic, Sony or other camera manufacturer would buy them and optimise the technology. Panasonic in particular, because they have always been at the leading edge of capturing frames from video in their GH series micro 4/3rds cameras. And light field tech seemed the next logical step.
Yeahhhh, it's probably what killed Lytro as a company too. I think they tried to do too much
If they had released a true 6K camera with a proper workflow, it would have been great for Steadicam work, without worrying much about pulling focus when running.
💯 they just went too big with this
@@FrameVoyager also their single camera true 3D capture was so promising.
I remember the Lytro Immerge videos that promised VR video where you could walk and look around real video footage, peering around corners and around people. To me that's still the dream for VR, rather than rendered graphics, or depthless 360 videos. But, I suspect we'll find it easier to set up an array of Kinect-style depth cameras, rather than bothering with light field cameras. But time will tell! Can't wait for more innovation on that front in VR, either way!
Like many things we use today, when they were first invented they were way ahead of their time. Most basic network and internet concepts were conceived in the 1960s, and it took 40 years for the other technologies to make them practical for the population. I still think the light field idea is revolutionary; we just haven't found the right application, and current electronics aren't powerful enough to make it good enough.
If they had made a 1080p consumer video camera for $5000, that would have been cool, especially if aimed at VFX (same size as the big original Blackmagic URSA, but the ability to create easy chroma screens would have been interesting for smaller VFX workflows).
I do recall Google made something out of it and there were big demos all over the place. They might still be available on the web
Ang Lee would love this thing. I can't help but think this is the tech he's looking for in his 3D/HFR experiments.
WONDER IF THEY WILL BRING THIS BACK
Was it even real in the first place? haha
I don't believe for a second that they've abandoned this technology. Maybe they've continued to develop it for the military, or for some security-sensitive utility of this technology, but abandoned it for good? I don't think so.
Not the tech but definitely this camera
Love this channel! You are the best and keep me up to date on everything I care about. Thanks for all your hard work and time commitment!
Appreciate it! Lots more to come!
They tried to solve a problem no one had and the marketplace let them know this.
Thanks for sharing this fascinating technology. This is the field in which I'm doing my PhD. Just a quick correction on how a light field camera works: they don't have multiple sensors inside them, just a regular camera sensor. The difference is in the array of tiny lenses in front of the sensor and the processing you can do with that additional information. Light field cameras are in regular use in industry but have yet to find their niche in the consumer market.
I was on set in a shoot and I saw you uploaded a new video, I apologised to the ACs and came home to watch it as soon we did the last shot
😂😂😂 hope it was worth it! Hahaha
I am more excited to watch the new ABANDONED episode than any show out right now! You put a lot of work in your videos and it shows.
😅😅😅 appreciate it! Just happy to be able to bring these stories out! Such cool cameras
A guy I worked with got the consumer tube (is there a word for a 3D rectangle?), and the fact you can change focus points afterwards was neat. But he was bummed out by the obvious need for their software, which made distribution of said files challenging. I had zero idea the cinema thing even happened! I do hope to see it again someday, and not buried in some warehouse by whoever eventually owns the IP.
They tried to solve problems no one was having. It's an interesting technology but if your image was out of focus, you simply just took another photo. Maybe the VR application would be useful but it's still way too early.
What was the output resolution? Light field works by capturing multiple versions of the same image. Would that 40k (each of the light field elements being added together) be "downsampled" to a final 4k output?
More than likely. I think it had several different functional levels.
Man what a joy to see an "On The Verge" clip from when they were good, those were good times.
Light field cameras just sound like pure magic to me. Weird how I was early to watch this even though I'm not subscribed yet. Subbed.
It really is! I mean I did a deep dive into conceptually how it works but it still blows my mind. And appreciate the Sub haha!
Don't sleep on this technology; it was way ahead of its time. When semiconductors/processors catch up, this technology will come back.
Lightfields are the future. Such a shame that the company tried to run before knowing how to walk properly. They have set back the technology instead of advancing it.
Thank you for making this video. I loved Lytro; as a young kid I dreamed of working with a light field camera. I wonder how many hours of research you spent, and then all the editing and gathering of stock video.
What seems weird, and even good fuel for conspiracy theories, is why all such tech suddenly vanishes. I'm aware they were too far ahead of their time from a business perspective, but now VR and so many other things could do amazing things with it. Imagine the 360 camera on a helicopter for a bird's-eye experience of the sky in VR, just to mention one thing.
The reason they vanished is pretty obvious. Their image quality was awful. Focusing correctly the first time isn't hard, and there is no purpose for a camera that takes a poor-quality image.
I can't see any other comments mention this , but the technology lives on in Google, in stuff like project starline , which basically uses the lightfield tech for video calls
When you need to kill a mouse, you don't need orbital cannon bombardment...
This story is much deeper, and it is really strange. Lytro invented a new camera that had no lenses, just a radar-like ball, and it could see. Then Google bought the company and created a tech which is just multiple common cameras together, and they used the same name, as if they wanted to make the new tech Lytro invented disappear and confuse the public. Now ask yourself: what could that new tech camera see? And why did they rush to cancel it?
It doesn't use multiple sensors, just one normal sensor with a lot of small lenses, essentially capturing several low-resolution pictures focused on different layers.
But data alone beats physics via various levels of computational imaging. Think anything from stacking for denoising/super-resolution, to depth estimation, to NeRFs.
Light stages are used, which simply shoot from different angles to get frames for 3D models.
I was totally bewildered when Lytro announced this weird camera and I still am, that they ever considered it. I kind of developed a hate obsession around this company 8 years ago that I didn't even remember until this video. Their technology is incredible I'm sure, but all I can see is the images they produce, and I was really not impressed by their quality.
No cinematographer would _ever_ have used this thing to shoot a movie. You _cannot_ fake the beautiful, buttery-smooth bokeh produced by a well-crafted cinema prime lens. The precision of cinema camera sensors with regard to color is more than a company like Lytro could possibly hope to compete with. Also, it's totally unwieldy and impossible to fit on sliders, rigs, jibs, cranes, gimbals, Steadicams, etc. A RED camera would run circles around this thing in terms of visual quality, and will fit on all of those.
And for what? So you can focus in post? That's such a niche and gimmicky reason to totally sacrifice image quality.
It's the classic "solution in search of a problem". Which also runs totally contrary to the needs of the industry they're trying to break into.
I bought 18 Illum cameras (when they were heavily discounted) for an array. Worked great ....but the computing power necessary was way too much for live action!
Fabulous video! I always wondered what happened to them.
I remember Lytro! If I were an investor I would have lost everything. I never understood why it went nowhere. To me it was like magic
I think it's so far beyond its time; the infrastructure and the demand just aren't there for it yet. Also the storage required 😅
Oh man, I was looking forward to these taking off, but I guess they were too far ahead of their time. I'm sure it's the future though! Just imagine the RAW 20-bit video file sizes that will come out of future light field cinema cameras :D
This or it paved the way for something like it. I really have always believed filmmaking will start becoming a little more automated with the processes allowing creators more control over visuals than they do now. Will be interesting to see what comes next
It should be brought back, computer hardware now is significantly improved. VR and movies watched with VR headsets would be mind-blowing 🤯
I blame the new CEO for causing the company to crash. The founder started and built smaller, portable solutions with consumers in mind. The new CEO took the company to building a monstrous, highly specialized machine that even special-effects professionals balked at. It’s a shame. I hope Google will keep the spirit of innovation alive and release consumer-friendly products later on.
The irony of this video: The most replayed moment (1:46) has the most potato quality images captured as a 4K video.
It's rare to see working tech that's too far ahead of its time. Really was amazing.
We're already seeing that technology in action, especially with Google and their Pixel series devices. I'm literally positive they bought Lytro explicitly for all the computational photography they could harness from the company. And now they shoehorn all of that into the Pixel series phones.
I wrote my BA thesis on the question of whether light field will be the future of cinema cameras. The short version: the option of refocusing and adjusting the motion blur or aperture in post is nice to have, but not worth the amount of data and the price tag. The real benefit of light field could be the possibility of capturing immersive videos, meaning the viewer is able to see not just a 3D but a 4D video. Moving your head means seeing the video content from another perspective, so you not only get the illusion of more dimensions but are also able to look behind different objects.
The real problem is that the options for displaying light fields as immersive video are quite few. In recent years some startups, and also big companies like Samsung or LG, have introduced prototypes. Some of them seemed quite promising... we will see.
I spoke to different experts (ARRI, VFX engineers...); they said that the consumer market (especially smartphones) could be the turning point where light field has its breakthrough. Feel free to hit me up if you have any questions.
Greetings from Germany
That giant Lytro cinema rig reminded me of the "Blimp", the old Technicolor camera with three film strips, one for each primary color. Big, expensive, noisy, and a chore to work with the material. Lytro simply used brute force to achieve fundamental post-process gimmicks already possible with the advancement of machine learning algorithms for image processing, at a much lower price and while keeping your existing toolchain setup.
I actually remember the pool video back in 2013. Honestly felt like the tech was revolutionary but possibly too expensive. Guess I was right, seeing what happened to the company.
Lytro was so cool when it came out. Too bad I didn't have the money. But the proprietary software was a deal breaker
Yeahhhh, it's cool tech for sure! Kind of makes me wonder if it's a bridge technology. Something that will inspire something similar
@@FrameVoyager I actually just bought one of the base cameras for $50 after watching this. Will give it a go.
Video: "All available for $125,000...."
Me: "Woah, really that's all?"
Video "...subscription price..."
Me: "Oh..."
After all these years I still don't know what Squarespace is, and I've never seen any website related to it.
I remember this company having a very promising product/future.
But that's the risk of bringing new technology to market.
I think A.I.-generated images and animation will be interesting to investigate now.
It's always a risk. And trying to get into the film industry is not necessarily easy or profitable
Arri, Panavision and RedBox are all left feeling deeply disgruntled as they're left asking, "are we all a bloody joke to you??" 🎥 🤭
😂😂😂
that technology came too early...
This series is genius, and that trailer is something else. I doff my cap to you, sir.
😅😅😅 appreciate it!
I ended up with one of those Lytro consumer cameras after the company folded and played around with it a bit well after the company closed. The form factor was interesting, if somewhat limiting. The ecosystem of camera/software etc. was a hassle to use. I wonder if a push to mature the tech, and then for some sort of standards in file formats or OS support on the consumer side, would have gotten more acceptance/buy-in
"Hey, did you import that video like I asked yesterday?" "Yeah, it'll be done importing next week."
That moment I realise it's also Lytro that made the Lytro Illum, the alleged "camera of the future": a buggy, laggy, terrible-quality camera that had terrible battery life and was oversized with a tiny sensor..
Bro I've beeeen waiting for this!
haha gotta pace out these cool ones!
combine light field technology and eye tracking in a VR headset and you get the ultimate setup
Just by saying at 9:02 that they are "super super super super 35," he gave the impression of someone who doesn't know the first thing about the conventional cameras they were supposed to compete with: Super 35 is smaller than 35mm
The only thing I could properly rationalise for the scale of that camera was "400GB/s." That's both something I can imagine and also unthinkable. That works out to something like 24TB a minute of footage. While you can make storage that will support that, in traditional storage terms it's an entire server rack on wheels of just SSDs, thousands of them.
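For anyone who wants to sanity-check that back-of-the-envelope number, here is a tiny calculation assuming the quoted 400 GB/s sustained rate (the SSD capacity below is just an illustrative figure, not anything Lytro specified):

```python
# Back-of-the-envelope check of the Lytro Cinema data rate.
# Assumes the 400 GB/s figure quoted in the video.
rate_gb_per_s = 400
seconds_per_minute = 60

gb_per_minute = rate_gb_per_s * seconds_per_minute  # 24,000 GB per minute
tb_per_minute = gb_per_minute / 1000                # 24 TB per minute
print(tb_per_minute)  # 24.0

# At that rate, a hypothetical 4 TB SSD fills in ten seconds:
ssd_capacity_tb = 4
seconds_per_ssd = ssd_capacity_tb * 1000 / rate_gb_per_s
print(seconds_per_ssd)  # 10.0
```

So the "server rack on wheels" image holds up: even generously sized drives would be consumed every few seconds of recording.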
400GB/s?! No wonder it failed
I have a feeling Google will take advantage of the technology to suddenly compete in the camera market by making it compact and so on. Regardless, I hope this concept gets developed and opened up to the public; VFX, CGI, and editing stuff would go ballistic with this.
If 3D makes a big comeback like it did between 2009 and 2012, this camera would get a metric shizton of big studio $$$$ pushing it forward. Having a perfect depth map made in camera would feed so nicely into existing VFX pipelines, allowing for really amazing stereoscopic effects based on real depth and not an interpolation built from rotoscope masks made in India and depth maps made in a 3D-conversion sweatshop in Vancouver.
This series is the best on the internet!
😅😅😅 appreciate it!
Heard of light field displays used in some AR glasses that give you an accurate representation of depth.. man, this technology is truly wicked
40K wtf!!? That’s mind blowing.
Right? insane
I wonder where the actual cinema camera went. I kinda hope it's in an F-117-like state of retirement, where it's technically retired and not public facing but is still being used and studied
Their consumer cameras were neatly designed, but in my experience they were like a complicated iPhone portrait mode, really a shame!
Lytro seriously made the coolest shit. It makes me so sad they're gone
What's really happening is they divide the sensor into many tiny chunks, each with a microlens in front with a different focal length. Basically they're taking one small photo for each focus distance, and that's all there is to it. There's no 5D uber math going on; all you're doing is picking out the small slice of the photo that matches the focal distance you want.
Their cameras, because of this, still require focusing; it just doesn't have to be as precise, because each shot is a matrix of smaller photos at different focus distances.
Sure, they could do some 5D uber math with this information, but there's no need for the results they produce in practice. All they really have to do is compose the images together for a smooth DOF effect, kind of like the fake bokeh we have in phones today.
Sounds a bit like Canon's dual pixel RAW.
Exactly, all the talk of light fields, math transformations, photon direction, etc. was nonsense when applied to the camera. Sure, those things are real when talking about actual light fields... but no Lytro camera was actually capturing light fields.
@@hbp_ yes, there are some similarities. Canon basically splits every pixel in two, and there's a normal number of pixels. Whereas Lytro splits every pixel into a million, and there's like 16 pixels.
I doubt they had the technology to build a microlens array with different focal lengths; my guess is they're just spherical "dots," like embossed plastic sheets used for decoration. I'm guessing the 5D uber math is both real and not: they might just be overlaying all the images with different blur strengths to turn up locally unblurred images, but for lack of better words in the geniuses' heads, the way they explained it was "preserving incident angles of photon beams" and all that.
How do you think all those tiny chunks come together to a picture? You guessed it: Math.
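The "pick the matching slice from every cell" idea this thread describes can be sketched in a few lines. This is a toy model only — the array shapes, plane count, and the assumption that each cell stores one sub-image per focus distance are illustrative, not Lytro's actual optics or file format:

```python
import numpy as np

# Toy model of the thread's claim: each microlens cell on the sensor
# records several tiny sub-images, one per discrete focus distance.
# "Refocusing" in post then just means picking the matching sub-image
# from every cell and tiling them back into one frame.

CELLS_Y, CELLS_X = 8, 8   # microlens grid (illustrative)
SUB = 4                   # sub-pixels per cell, per focus plane
PLANES = 3                # number of discrete focus distances

rng = np.random.default_rng(0)
# sensor[plane, cell_y, cell_x, sub_y, sub_x]
sensor = rng.random((PLANES, CELLS_Y, CELLS_X, SUB, SUB))

def refocus(sensor: np.ndarray, plane: int) -> np.ndarray:
    """Assemble a full frame from one focus plane's sub-images."""
    planes, cy, cx, sy, sx = sensor.shape
    # Reorder to (cell_y, sub_y, cell_x, sub_x), then tile into a
    # single (cy*sy, cx*sx) image.
    return sensor[plane].transpose(0, 2, 1, 3).reshape(cy * sy, cx * sx)

frame_near = refocus(sensor, plane=0)
frame_far = refocus(sensor, plane=PLANES - 1)
print(frame_near.shape)  # (32, 32)
```

Note this selection-only model trades spatial resolution for focus options (each frame uses only one sub-image per cell), which matches the thread's point about Lytro's low effective pixel counts.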
Seems like Google is using Lytro's light field capture tech for their Project Starline immersive videoconferencing system.
The cost must have been astronomical over that development time.
Hence why they didn't sell it 😂😂😂