Andrew at 27:07 "You're manually breathing." I was working and had to stop for a break
I'm mad that I got got by a man in a podcast lol
Lmaoo that got me too 🤣
Andrew has some epic one-liners. 😂
This was perfect 🤣
Imagine showing photo evidence in court and the opposing lawyer rejects it with "Photo isn't real."
That argument was used in the Kyle Rittenhouse trial and the judge accepted it and excluded photo evidence from the trial
@@skolarii Really? Do you remember what it was specifically, bc I don't remember that happening.
"As said by technology expert Marques Brownlee in so and so episode of Waveform podcast". They will be citing the show in court haha!
@@skolarii If I'm not mistaken, this was because the images in question were "manipulated": they were substantially enlarged in order to "discern" the actions/behaviour of the defendant. The pictures were still not at all clear, and since they were zoomed in and enlarged, a certain share of the pixels were NOT captured by the camera (drone video) but were instead created by the interpolating software. In a nutshell, it was this that made the judge throw out SOME of the photographic evidence, though some was allowed.
At least that is how I understood it, but if you know differently I'd be interested to hear what you know about it.
Best regards.
@@acberkowitz It was really nothing. The judge was just dumb & the lawyers involved were just dumb. The prosecutors presented the "zoomed in & enlarged" version of the photograph, the defense objected, & the judge accepted the objection because apparently "zooming in" on an image makes it "manipulated". They basically said if you can't see something without zooming in, you can't use it. Like, seriously!
Just wanted to say that I appreciated Adam's dad joke after Ellis said "it's a meta decision" at 30:07
Me too!
I hate that NO ONE acknowledged it
it was acknowledged by me and it was hilarious 🤣
23:48 - 33:43
Most chaotic part ever and I'm all here for it!😆❤️
"ELLIS IS STRESSED" 😂 28:32
It's the funniest thing ever😂😂
- You can't take photos inside our Samsung facilities...
- What is a photo?!
so photo leaks of a product are not leaks because photos are not real, and it did not violate a non-disclosure agreement, right? RIGHT? XD
Cameras are not allowed because Samsung's facility is full of dismantled products from competitors. Every product category bearing an uncanny resemblance to the competition that preceded it doesn't come for free.
@@tonyhawk123 you don't understand irony?
@@tonyhawk123 imagine being you
@@tonyhawk123 it's more that it's full of military hardware lol.
I love it when you guys just go off the rails. David and Ellis were so stressed about what is a photo and then there's Andrew cracking jokes and teasing David hahaha never change guys
1:06:12 "are you rapping?" Got me crying 😂
"We're not taking photos, we're living life" is iconic haha
27:05 That "Do it!" part was the funniest thing I have heard in a long time. The wait, the tone, the slight hesitation, the delivery were incredibly good. I laughed out loud multiple times. Thank you Ellis..
"Samsung said so!" got me rolling further 😂😂
😭
“You’re now manually breathing”🤣🤣
The photo-not-photo debate reminded me of the "Ceci n'est pas une pipe" artwork from Magritte 🇧🇪 (1929), where the painting is just a drawing of a pipe with a caption that says "this is not a pipe", because it's actually a 2D drawn representation of a pipe....
Brain: interprets mix of colours as white and magenta
Ellis: Let's go fill Moomin valley with crime!
Why is the YouTube thumbnail color sorting a thing before being able to sort your YouTube Music playlists by artist, title, etc.? Very backwards priorities on YouTube/Google's part
Or even be able to search songs in a playlist on mobile
You went from "No such thing as a photo" to "Commit crimes" real quick
Marques is much more relaxed than in the previous trivia episode. Refreshingly relaxed 😊
"Marquise" 💀
Oops! Marques 😊
you mean trivia finale
@@josh44026 Yes. He said that he was exhausted that week because of the Vision Pro videos. And losing some trivia questions to David and Andrew didn’t help either 😊.
The difference in IQ between Marques and the others is soo huggeee! I don't know how he keeps himself sane around these jokers.
Didn't realize that YouTube color thing was being tested on random people. I got that the other day and was like "who the hell thought of this"
I just keep seeing these colour wheels on my phone feed and scroll past, wondering WTF is all this about??!! Didn't know it was some new YouTube masterplan to f*ck with our feeds even more!! 😜
I first got it months ago.
.... and I liked it. Showed me better videos than the main feed.
Digital photos are not real.
I didn't think it was thumbnail colour related, I've been ignoring it on my feed for a month now. Might check it out now 🔴🔵🟢
49:48 Shared experiences already exist in Apple Vision Pro; it's left to app developers to integrate those experiences. Anyone who has watched some of the Apple Vision Pro developer videos released in June 2023 knows what they are.
Explainer for 21:29 - DeepMind's Chinchilla was a "compute-optimal" language model. Before Chinchilla, LLM scaling was heavily inclined towards increasing the model size (i.e. the number of parameters) and relatively less inclined towards increasing the dataset size used for training the model. So basically, if you had 4 times the amount of compute, the recipe was roughly: multiply the model size by ~3.5 and the dataset size by ~1.2. With Chinchilla, they corrected this by balancing the model-size vs. dataset-size choice. Post-Chinchilla, it was shown that if you have 4 times the compute, you should 2× the model size and 2× the dataset size to achieve a more optimal result. Conclusion: LLM performance improves as we increase model size, dataset size, and the amount of compute used for training; for optimal performance, all 3 factors must be scaled up in tandem.
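To make that "4× compute → 2× model, 2× data" arithmetic concrete, here's a minimal Python sketch, assuming the commonly used approximation C ≈ 6·N·D for training FLOPs; the constant `a` is purely illustrative, not a fitted value from the paper:

```python
# A minimal sketch (not DeepMind's code) of the compute-optimal split described
# above, assuming the common approximation C ≈ 6 * N * D (training FLOPs ≈
# 6 x parameters x tokens) and Chinchilla's finding that N_opt ∝ sqrt(C).

def chinchilla_optimal(C: float, a: float = 0.1) -> tuple[float, float]:
    """Split a compute budget C (FLOPs) into params N and tokens D.

    'a' is an illustrative constant; the real coefficient comes from
    fitting scaling-law curves, not from this sketch.
    """
    N = a * C ** 0.5      # model size grows with the square root of compute
    D = C / (6 * N)       # tokens follow from C ≈ 6*N*D, so D ∝ sqrt(C) too
    return N, D

n1, d1 = chinchilla_optimal(1e21)
n4, d4 = chinchilla_optimal(4e21)   # 4x the compute budget...
print(n4 / n1, d4 / d1)             # -> 2.0 2.0: double the model AND the data
```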
I have Gemini Advanced, and you can set the new app to replace your Google Assistant, but it doesn't entirely replace it. When I trigger my assistant (through any of the many different ways you can do that), it goes through Gemini first, and Gemini determines whether it should handle what I'm asking or delegate it to the Assistant. It's awesome!
And it's horrible at delegating. I tried it and it was super glitchy. Uninstalled the app and went back to Assistant. I'm on the Pixel 8 Pro too. Kinda disappointed because I wanted to have a conversation with an LLM with the convenient integration of Assistant.
@@CapCannoli What do you mean it's horrible at delegating? If I ask it to set a timer for 25 minutes, it sets a timer for 25 minutes, but shows a little flag icon (a toast notification, if you will) indicating that the Assistant did it. If I ask it a complex math problem, or pretty much anything else that just requires an answer rather than a task to complete on my phone, the answer comes from Gemini; and if it's a task that needs to be completed on my phone, like setting a timer or a calendar entry, Google Assistant handles it. It does it beautifully.
@@CapCannoli Also, I believe it's the Pixel 8 Pro that is glitchy or buggy, not necessarily the Gemini-Assistant integration. I'm using the Google Pixel 7 with AP21.240119.009.
@@bl5843 I asked it to play a song and to launch an app. Neither command worked. Both times it told me that it couldn't control my devices. It also wouldn't listen after I said "Hey Google." It would launch at the bottom like Google Assistant but stop listening immediately. It would only work if I hit the microphone button to talk into it, then launch into the Gemini app.
Also, never had any glitches with the phone before Gemini, and now that it's gone assistant is working fine.
I'm using the Galaxy S20 Ultra and I have Gemini, and it has replaced the Assistant app. I've got to say it's delegating well on my phone and has worked just as well as, if not better than, Assistant. I am really loving it so far and so excited to see how far it goes. I live my life with Google Assistant helping out, and so far it's NOT a downgrade.
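For what it's worth, nobody outside Google knows how this hand-off is actually implemented; purely as a mental model of the "Gemini first, delegate device tasks to Assistant" behaviour described in this thread, here's a toy sketch. Every name and rule in it is made up:

```python
# Toy mental model of the "Gemini first, delegate to Assistant" behaviour
# described above. Entirely hypothetical: this is NOT Google's API or logic.

DEVICE_TASK_WORDS = ("timer", "alarm", "calendar", "play", "open", "lights")

def is_device_task(utterance: str) -> bool:
    """Crude stand-in for whatever intent classifier Google really uses."""
    return any(word in utterance.lower() for word in DEVICE_TASK_WORDS)

def handle(utterance: str) -> str:
    if is_device_task(utterance):
        # On-device actions get delegated to the classic Assistant stack,
        # which is what the little flag/toast in the UI is signalling.
        return f"[Assistant] executing: {utterance}"
    # Anything that just needs an answer stays with the LLM.
    return f"[Gemini] answering: {utterance}"

print(handle("set a timer for 25 minutes"))  # -> [Assistant] executing: ...
print(handle("what is 37 * 89?"))            # -> [Gemini] answering: ...
```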
Hey, philosopher here. I've never researched the topic, but if you're interested, you can watch me postulate on photos...
A photo is real insomuch as it is an accurate representation of a moment in time. Everyone has a different perspective of the moment, and each lens, sensor, person, or film has a different reproduction of that moment in time. But no matter what, that moment was captured, and the quality of the photo can then be assessed.
More post-processing brings that photo further away from the original. Or sometimes, depending on the photographer, they can edit the photo so that it looks even closer to the way the photographer saw the moment from their perspective. But no matter which way you process the photo, there once was a moment in time that was captured, and that is the essence of a photo.
Well said
@@iroar5982 thanks!
Took the words right out of my mouth 😁
I would separate the words "photo" and "image". What the sensor captures as raw data is the photo, and after that it is turned into an image.
If that's the case, then I'd like my phone to save both, so I have the reference as well as the result.
54:29 caught David slip one through 😂
I noticed that too, was looking for anyone else who did
I think the new metric for VR will be “ppd” or pixels per degree. So a combination of field of view and resolution. By that metric an iPhone’s Retina display is much higher (as it, in fact, appears to be).
That's been a metric for VR for quite some time now, it just hasn't caught on outside of the VR market.
@@VaalkinTheOnly Yes, I was trying to be diplomatic in striking down the ppi comparison.
❤❤Thanks for covering across the board❤❤
The RAW image before the processing is the photo. Simple as that.
Purists will agree with David that photos are old-school film. I think photos also include digital, as long as the image is only processed to give you the best version of what you are seeing. Any altering or post-processing (Photoshopping, AI manipulation, or filtering) that fundamentally changes the image from reality creates a different category of photo. Filters like black-and-white and sepia are okay, but manipulations, artwork, and other such images are not real photos because they are not what was observed. We still call them photos because they started as one, but they should be called altered images. It all comes down to technicalities, definitions, and philosophy.
"We're not taking photos, we're living life" - Ellis (2024)
@32:00 @Marques The point being made for a photo definition was from before digital photos. The light landing on paper or film brings it back to the 35mm days: when the light hits the film you have the original photo, and from there you can edit it how you wish.
I totally agree with that Samsung guy.
The moment a modern digital camera captures a photo it's heavily processed:
- Picture profiles
- Auto white balance
- Active D-Lighting (Nikon)
- Portrait impression balance (Nikon)
- The way skin colors are handled in green colored environments (leaves, grass, trees)
(was he running or was he sick?)
- Surely Canon, Fuji, Sony have similar image improving steps that alter reality.
More so concerning smartphone cameras:
- Each night shot is a composite of multiple shots (see the stacking sketch after this list)
- Heavy noise reduction may delete small objects like letters or whole words on a sign or a wall.
- Nobody knows how a smartphone or AI handles different parts of the photo individually after hitting the shutter button.
With 10 different cameras you will end up with 10 different pictures and you won't be able to say how the scenery's lighting really was if you weren't there yourself.
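On the night-shot point: stacking is, at its core, just averaging aligned frames, which cuts random noise by roughly √N. A minimal NumPy sketch of that idea, assuming the frames are already aligned (real pipelines also align, merge selectively, and tone-map):

```python
# Minimal sketch of multi-frame stacking, the core idea behind smartphone
# night modes (in vastly more sophisticated form). Averaging N aligned noisy
# frames reduces random noise by roughly sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 0.5)                     # "true" flat gray scene
frames = scene + rng.normal(0, 0.1, (16, 100, 100))  # 16 noisy exposures

stacked = frames.mean(axis=0)                        # the composite "night shot"
print(round(frames[0].std(), 3))   # ~0.1   single-frame noise
print(round(stacked.std(), 3))     # ~0.025 = 0.1 / sqrt(16)
```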
"Are you rapping?" 😂😂😂😂
Literally laughed out loud
Why doesn’t Apple allow users to replace “Hey Siri” with any phrase they would like to use? That would also function a bit like a personal password.
Light hitting the sensor when it's off still CREATES an image, but that image is not picked up or recorded by anything. Just like a pinhole camera, where the "sensor" is just the back wall of the box: there is an image there, but it's not captured permanently. The key here is the optical element in the process that creates the picture (eyes, camera lenses, or the hole in the pinhole camera); that is what renders the image. After that it's just a capturing process, be it film or a photodiode, and that's where different methods come into play. So even a film stock is not the original image, because each type uses a slightly different chemical composition, already altering the colours and exposure.
This is the notification I wanted. UK, it's the weekend, and the Waveform is my start! :)
Hey Matt!!!
Haha same, always the highlight of my Friday afternoon. Weekend starts now!
Same in Australia!
@@peekabooicancu Aussie, Aussie, Aussie…
Oh, now it's Friday
I watch a lot of comedy videos, but haven't laughed as much as I did during your conversation about 'what a photo is' in ages. Thanks for the great laugh!😂
I gotta give it to you Marques, you’d make a really good scientist and/or epidemiologist. Your analysis of YouTube’s new multi-thumbnail metric and its pitfalls is really critical and well thought through 👍
Two computers walk into a Bard... and the bartender asks: "are you Gemini?"
No, I'm Leo, give me a drink..!! 😜
54:28 is that the first F-bomb on the podcast?
Focus modes make so much sense for what Marques is talking about at 47:30
Not sure if maybe mine is different, but I installed the Gemini app, and saying 'Hey Google' activates Gemini and also allows all my Google Assistant tasks (smart devices, routines, etc.) to work. It seems to me Gemini can be a full replacement for Google Assistant.
really?
It would be cool if Gemini becomes capable enough to just let you decide what you want the trigger name to be. So you could say Hey "Google","Bard", "Gemini", etc. or whatever you'd like and it can still trigger the action.
"boy, what's the weather outside"
29:14 I like Wikipedia’s definition: “A photograph (also known as a photo, image, or picture) is an image created by light falling on a photosensitive surface, usually photographic film or an electronic image sensor”. To Marques’s “if the sensor is off, nothing is created” argument I would say that if the sensor is off, then it is no longer a photosensitive surface. Hence, the definition stands.
Andrew, Ellis, and David going on a pedantic rant with the “what even is a picture?” tickles my brain so much!
“Color isn’t real.” I see you, David! Bluejays aren’t blue 😉
4:47 I had this for like 2 weeks and then it disappeared
Not only can you not control home devices with Gemini, you also can't use Gemini if you use Google Assistant instead. I just want to be able to ask assistant to turn on my lights AND ask Gemini questions when I want.
That might be helpful when I have seen a thumbnail but don't know which video it was. Colour memory is the strongest possible way to find that video. So why not just scroll through videos sorted by colour until what I was looking for shows up?
Summary of the podcast: YouTube is messing around, Google messes up their naming again, a top Samsung exec and the biggest tech YouTuber talk about what photos are, more Apple Vision Pro stuff, and car stuff
54:28 fucking 🤣 Forgot to beep it..
The Samsung exec is maybe trying to say that the photos you take with electronic sensors never show reality as it actually was. Our eyes can perceive more brightness levels than any display can show (that's why tone mapping is a thing). Since no photo you see on a display is even close to reality, and only an approximation of it, I can understand where the thought "there are no *real* pictures" might come from.
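A quick illustration of that compression problem: scene luminance spans far more range than a display can show, so it has to be squashed somehow. Here's a minimal sketch using the classic Reinhard operator, L_out = L / (1 + L), one of many tone-mapping curves (real HDR pipelines are local, adaptive, and far more sophisticated):

```python
# Minimal global tone-mapping sketch using the classic Reinhard operator,
# L_out = L / (1 + L), which squashes unbounded scene luminance into [0, 1).
import numpy as np

def reinhard(luminance: np.ndarray) -> np.ndarray:
    return luminance / (1.0 + luminance)

scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0])  # ~4 orders of magnitude
print(reinhard(scene))   # -> [0.0099 0.0909 0.5 0.9091 0.9901], display-safe
```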
30:00 That's quite a good stance, actually.
In short: a PHOTO is the data gathered on the sensor, a negative or digital. Once it's developed and processed (analog or digital), it's called an IMAGE.
I'm shocked there's seemingly no one in that convo who has ever done analog photography. The photograph is generated by the light falling onto the photo(chemical) surface. That creates the negative (for digital cameras it's a string of 0s and 1s). Now the photographer (either a person or an algorithm) decides how to interpret that thing. A "real" photographer chooses how to treat that negative: lighten up specific parts, alter colours, dodge and burn, smooth out grain, etc. That's called "developing", and it's still called that in the digital world.
After the developing of the PHOTOGRAPH, it's not a photo anymore but an IMAGE.
For VR headsets, you want the Pixels Per Degree metric _(PPD)._ Quest 2 had 20, Quest 3 has 25, and iFixit estimates the Apple Vision Pro at 34 PPD, so a pretty big 36% improvement, but it's not orders of magnitude different.
The OLED displays + PPD improvement really separate it from a lot of other headsets from a visual quality perspective.
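For reference, PPD is just pixels along an axis divided by degrees of field of view along that axis. A quick sketch using only the PPD figures quoted above, plus one made-up display to show the definition itself:

```python
# PPD = pixels along an axis / degrees of field of view along that axis.
# The comparison below uses only the PPD figures quoted in the comment
# (20 / 25 / 34); the 3400px / 100-degree display is hypothetical.

def ppd(pixels: int, fov_degrees: float) -> float:
    return pixels / fov_degrees

def improvement_pct(new: float, old: float) -> float:
    return (new / old - 1) * 100

print(ppd(3400, 100))              # hypothetical display -> 34.0 PPD
print(improvement_pct(25, 20))     # Quest 2 -> Quest 3:    25.0 (%)
print(improvement_pct(34, 25))     # Quest 3 -> Vision Pro: 36.0 (%)
```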
Before digital cameras were invented, taking a photo was just a matter of physics. After digital cameras came about, photos gained the capacity to be digitally altered. So you have photos, and then you have a sub-category of digitally altered photos.
I'm with Andrew "Bard is better" 22:09
1:09:20 Take this with a huge grain of salt, but I think the reason the Taycan has an extra button is that for the duration of the press it is overloading the motors. You can actually do this with certain kinds of electric motors, but because it's an overload you can obviously only do it for a short time.
But I have no idea if that's actually the case. That's just the first thing I thought of.
i love the fact that samsung just triggered a communications theory class conversation between them for 15 minutes
I stumbled upon this red green blue feature the other day, not going to lie. Blue is my favorite. Red is a little aggressive
I liked green a lot!
Regarding that “new” YouTube thumbnail colour sorting feature, I had access to it for probably the second half of last year but it recently disappeared. I’m in Canada so I wonder if we were among the first to test the feature.
les goooo I’m watching this right after the trivia finaleee! 🎉
49:31 Pinning windows is great, but also, imagine making pinned windows public for other Vision Pro users. Like, at the studio, shared pinned windows floating with chats and announcements for others in specific spots. Could be a great way to offer perks.
1:19:59
"The rivian r1 is the fastest reversing car ive driven"
Rimac Nevera rn:
No better feeling than coming home and seeing that the new Waveform episode is out
Y’all missed a bleep at 54:29
the way David pronounces 'weekeepedia' is adorable
"can you explain this?"
Ellis: "No!"
lol I was laughing audibly
It would be great if you guys discussed with Ryan Trahan how he felt about the Vision Pro. He wore it for 50 hrs straight with just the single strap band 🤯
Love the pod ❤
27:05 Just had to repeat this part over and over again.
Best one yet from this podcast. 👏😂
You all are quickly becoming my favorite podcast. Just recently stumbled across this.
i just realized that when someone says button counter in a video the buttons glow!! (go to 7:05, play the video and look at the like/dislike buttons)
Microsoft HoloLens has/had that 3D model sharing thing that Marques talked about......
Fun Fact about Samsung. Samsung group also has a construction company (Samsung C&T) which built the Burj Khalifa.
Andrew: I loved your comment about the Vision Pro dropping when trying the immersive CarPlay in the Taycan. Great and funny comment
The photo conversation has forever changed my life. 😂 Although, some would argue that perhaps Samsung has changed my life...through you guys. 🤔
I love Turbo (go faster) buttons in EVs. This is the one thing I miss about the LEAF.
I feel like the categorization of videos by YouTube is a psychology thing. Personally, I'm more inclined to click a video with a red/orange thumbnail, so I think they were actually up to something, not just bored like you guys say. A viewer's choice to click on a video is heavily influenced by the thumbnail, and color is definitely a major influence.
That meta discussion about photos was hilarious.
Google is both the most amazing and most annoying company ever, IMO. The reason they are developing Gemini outside of Google Assistant is probably that there are two isolated teams working on each product. Google incentivizes innovation and product launches; they don't take the time to blend new features into existing products that might ruin the reputation of said products. So they send a team off to innovate on a new product, that team realizes some features worth including overlap with existing products, but the process is launch first and blend later. Then it seems like they often forget to blend later, just shut down whichever one isn't making a meaningful difference in profits or engagement, and lose the features they once had. Then another team wants to bring those features back and develops a new product with similar overlapping features, and round and round the carousel we go.
I think this is spot on, there are so many examples of forgotten features falling victim to their disinterest in keeping a project alive long enough to reach its full potential.
To me, a photo is something that closely resembles what my eyes would see when using the same lens. A smartphone photo with multi-frame HDR is still a photo, because image sensors are not as good as our eyes at HDR; by multi-framing within our eye's latency range, they come closer to what I am actually seeing, so that's a photo. For night mode, if it is using AI or whatever to make the photo look as I am seeing it, it's still a photo to me. A zoomed photo is still a photo: if I put that zoom lens to my eye and the image is as I would also see it, then it's a photo.
But the moment the photo looks like it was day instead of night, or the moon looks more HD than it does to my eyes, it's a composite. The moment the zoom shots look sharper than they would to my eyes if I used the same lens, it's a composite. The moment the AI creates information in the image that I wouldn't be able to see in real life using the same lens, it's a composite.
1:12:50 Electric cars: there is no magically ending a ride with more battery charge than you started with by using regenerative brakes.
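Exactly: the round trip from battery to kinetic energy and back is strictly less than 100% efficient, so accelerate-and-brake cycles can only drain the pack. A toy energy-accounting sketch with an assumed, purely illustrative efficiency figure:

```python
# Toy energy accounting for regenerative braking. The round trip
# battery -> motor -> kinetic energy -> generator -> battery is strictly
# less than 100% efficient, so repeated accelerate/brake cycles can only
# drain the pack, never fill it.
REGEN_EFFICIENCY = 0.65   # assumed round-trip figure, purely illustrative

battery_kwh = 50.0
for _ in range(10):                        # ten accelerate-then-brake cycles
    spent = 1.0                            # kWh drawn to accelerate
    recovered = spent * REGEN_EFFICIENCY   # kWh pushed back while braking
    battery_kwh += recovered - spent       # net change is always negative
print(round(battery_kwh, 2))               # -> 46.5, below the 50.0 we started with
```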
A photograph is what we consider a captured moment in time, whether the image is processed, altered or left untouched. It is a graphic, created by the absorption of photons and then made into something we can practically use.
Clearly I am the only person in the world who does not care what a thumbnail shows; the video description is what makes 99% of my decision. The thumbnail is literally just a reference point in my brain for whether I have seen this video before.
46:26 “THIS!” is exactly what I envisioned when they first debuted the Vision Pro!
"Bard" felt really good as a brand name, as it evoked something you could have a conversation with, that told you new things
"Gemini" felt good for their knowledge model, as it evoked the idea that it was different versions of the same model. And it could change and adapt to Gemini 2, or another new model name, without muddying up the branding for their conversational endpoint
I feel like Veritasium tends to undersell his thumbnails. For example, his last video's thumbnail was just "blue LEDs are hard to make", but it ended up being a very interesting video (as always).
Hey, I got that color sort of thing as well. I thought it was a new feature. I opted out/stopped using it after a few minutes coz that's not how I choose what to watch... but maybe I subconsciously do and never knew??
43:18 The best video is the one with the guy driving the Cybertruck wearing Vision Pro and tapping with his fingers 😂
4:51 I got it too but I don't understand the usability of it
I really enjoyed the "what is a photo" discussion. It's a completely unproductive but amusing and interesting discussion.
I also got the RGB YouTube sorting; I wouldn't have noticed it if you hadn't mentioned it. Honestly, I've never noticed any of the sorting trends before. 😅
Bard was a cuter name
A photo - Regardless of the mechanics used to capture it, is a collection of pigments, pixels, etc. that when viewed represents a moment in time that means something to the observer and may allow that observer to experience something about that moment.
Loved the "what is a photo" segment 😂 was a lot of fun... *existential crisis*
The YouTube color feature is really helpful when you are bored af and not sure what to watch. I have been using it for a couple of days, and it looks fun
I hope Apple enables Focus modes: when you are at home, you toggle Home mode, and all the virtual displays are suddenly shown around your home (where you set them up). The same applies to work: when you are in the office, enable Office mode, and all the apps you set up just open. Apple could use location to suggest which Focus mode to enable. They'll call this machine learning, but it works, for both marketing and user experience.
26:10 In a way, they are all right 😮
20:47 You should read up about Manfred Gotta; he created a lot of brand names for German companies. Most prominently in the US, the Porsche Panamera was named by him, as was the Smart car series from Mercedes.
Imagine working on the vision pro and when it comes to present your presentation, you go like “Oh! I forgot the presentation at the gas station!”
Now that I think about it, I got the red/blue/green filter on YouTube about 3 months ago and never used it; it disappeared after a survey from YouTube.
Nobody asked, but I felt like saying it.
So, as a creator who has, y'know, 1/10,000th the subs of the main channel, Test & Compare (A/B testing) is something I've wanted for years.
At my size, and maybe the crew can reflect on this too, my subscriber base is so small that CTR is misleading. CTR and watch time can be amazing, but, counter-intuitively, that can lead to YT not offering impressions to the video, because only my subscriber base is clicking through and watching it while non-subs aren't clicking on it at all, leading to distorted stats. It's frustrating to not have enough data to be able to optimize without skating directly into clickbait territory with titles and thumbnails, and instead having to gamble with totally different concepts and hope that catches the eyes of non-viewers.
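That small-sample problem is just sampling error: with few impressions, a measured CTR carries a wide confidence interval. A quick sketch of the 95% normal-approximation interval (a simplification; YouTube's internal stats are surely fancier):

```python
# Why CTR is misleading at small scale: the sampling error on a proportion.
# 95% normal-approximation confidence interval for p = clicks / impressions.
import math

def ctr_interval(clicks: int, impressions: int) -> tuple[float, float]:
    p = clicks / impressions
    se = math.sqrt(p * (1 - p) / impressions)   # standard error of a proportion
    return p - 1.96 * se, p + 1.96 * se

print(ctr_interval(10, 200))       # ~(0.020, 0.080): "5% CTR", huge uncertainty
print(ctr_interval(5000, 100000))  # ~(0.049, 0.051): same 5%, now meaningful
```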