Optic Flow Solutions - Computerphile
- Published 1 Aug 2024
- Optical Flow solutions - following on from Dr French's previous video explaining Optic Flow, we dive into some ways to tackle the problem.
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com
I recently made a tool which extracts motion vectors from MPEG-4 and H.264 encoded video, which can be used for tracking. The advantage is that the motion estimation happens during encoding of the video (or on the sender side in the case of a video stream). Thus, no computational effort is needed on the decoding/receiving end to compute optical flow.
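The motion vectors that commenter reads out are produced by block matching inside the encoder. A minimal pure-Python sketch of that idea (illustrative only, not the commenter's actual tool, which just reuses the vectors already stored in the bitstream):

```python
# Block-matching motion estimation: for a block in the current frame,
# search a small window in the previous frame for the displacement
# that minimises the sum of absolute differences (SAD).

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def best_motion_vector(prev_frame, cur_frame, y, x, size=2, search=2):
    """Find (dy, dx) minimising SAD between the current block at (y, x)
    and a shifted block in the previous frame."""
    target = get_block(cur_frame, y, x, size)
    best, best_cost = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + size > len(prev_frame) \
                    or px + size > len(prev_frame[0]):
                continue
            cost = sad(get_block(prev_frame, py, px, size), target)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# A bright 2x2 patch moves one pixel right and one pixel down.
prev = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for yy in range(2):
    for xx in range(2):
        prev[1 + yy][1 + xx] = 255
        cur[2 + yy][2 + xx] = 255

print(best_motion_vector(prev, cur, 2, 2))  # (-1, -1): it came from (1, 1)
```

Real encoders do this hierarchically and with much smarter search patterns, but the principle is the same.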
Oooohh I would love to hear more about your tool! Does it use ffmpeg?
Love this series, do more!
Love the description of the algorithms! 💜
*Computerphile: Flow...*
*SmarterEveryDay: Hey there!*
*Computerphile: Optic flow*
*SmarterEveryDay: Bye*
The nice thing when you're just using this information for video compression is that you don't really care about ambiguities. You just need to recreate the original image, how that's accomplished and whether that describes how it moved in reality doesn't matter
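The point about ambiguity can be shown in a few lines: for a pattern that is uniform along one direction (the aperture problem from the video), several motion vectors reproduce the frame identically, and a codec is free to pick any of them. A tiny sketch:

```python
# With a vertically uniform bar, vertical motion is unobservable:
# every vertical shift yields the same image, so any such vector
# reconstructs the frame perfectly.

def shift_rows(frame, dy):
    """Cyclically shift the rows of a frame down by dy."""
    return frame[-dy:] + frame[:-dy]

frame = [[0, 0, 255, 255, 0]] * 5   # vertical bar: every row identical

print(shift_rows(frame, 1) == frame)  # True: the motion is ambiguous
print(shift_rows(frame, 2) == frame)  # True
```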
thanks a lot! this video is very instructive!
it is used for frame blending in real-time visual effects where you can't afford having 120fps flipbooks, instead you have a very low fps flipbook and you interpolate the pixels using motion vectors textures.
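The frame-interpolation use described here can be sketched very simply. This toy version assumes a single uniform motion vector for the whole image (real motion-vector textures store one per pixel or block) and nearest-neighbour resampling:

```python
# Interpolate an in-between frame by advecting pixels along a fraction
# t of the motion vector (nearest-neighbour, zero fill at the border).

def interpolate(frame_a, motion, t):
    dy, dx = round(motion[0] * t), round(motion[1] * t)
    h, w = len(frame_a), len(frame_a[0])
    return [[frame_a[y - dy][x - dx]
             if 0 <= y - dy < h and 0 <= x - dx < w else 0
             for x in range(w)]
            for y in range(h)]

frame_a = [[0, 0, 0, 0],
           [0, 255, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]

# The dot moves 2 pixels right between stored frames; the halfway
# frame places it 1 pixel along.
mid = interpolate(frame_a, (0, 2), 0.5)
print(mid[1])  # [0, 0, 255, 0]
```

This is why a low-fps flipbook plus motion vectors can stand in for a 120 fps one: the in-between frames are synthesized rather than stored.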
Could you use optical flow for astrophotography autoguiding?
Yes, that is how the Celestron NexGuide and Orion StarShoot work. You select a star or sometimes a few stars for the guiders to follow, so they do not need to process every pixel in the image.
Every Stacking Software basically works like that. You select a few stars on the first few images and the software can recognize the direction of the stars.
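The star-tracking trick mentioned in this thread avoids dense optical flow entirely: track the intensity-weighted centroid of a chosen star between frames and use the displacement as the drift/guide correction. A hedged sketch of that idea:

```python
# Intensity-weighted centroid of a small star image; the drift between
# two frames' centroids is the correction an autoguider would apply.

def centroid(frame):
    total = sy = sx = 0.0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            total += v
            sy += y * v
            sx += x * v
    return sy / total, sx / total

frame1 = [[0, 0, 0, 0],
          [0, 100, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
frame2 = [[0, 0, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 100, 0],
          [0, 0, 0, 0]]

cy1, cx1 = centroid(frame1)
cy2, cx2 = centroid(frame2)
print((cy2 - cy1, cx2 - cx1))  # drift of (1.0, 1.0) pixels
```

Because only a few small windows around selected stars are processed, this runs easily in real time on modest guider hardware.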
How does this differ from motion vectors in Mpeg video compression?
Very detailed video!! Really love this series
Not really into video stuff, so not sure if this is what you were referring to, but shouldn't you just start with the center pixel, examine the rest of the picture for potential places it moved to, then examine the area around those places to see if it matches the original place, and use your calculations when it matches?
I've seen this sort of thing in Biology before. Instead of making instrumentation that would provide spatial information as well as video, scientists commit to the Sisyphean task of trying to make something from almost-nothing. It's kind of obvious why you'd want to do this from existing video, but soon existing video will be boring, and spatial video will be ubiquitous, and then also boring. It hurts to watch people spend their lives on this sort of research. Great video, 10/10, will subscribe.
@Radoslav Mráz While you are right, an Intel Realsense Depth Camera does cost around $200, in 10 years it will be $20, and all this research will be unused. We won't rely on heuristic software to make expensive guesses while cheap hardware will sense directly and know with certainty. This happens all the time in Academia.
No English CC :(
Video compression would be another common thing where such techniques are being used.
imho this should've been 1 video, the previous one just stops at random and this clearly picks up right after that cut
If the bar was checkered you could clearly see where it's moving, right?
Yes, because you'd have more edges and more detail.
Some nice links here: en.wikipedia.org/wiki/Optical_flow
Thanks, very detailed!
It's cool to know all the complicated equations behind almost every cheap Chinese drone!
love the videos but can you please use a whiteboard instead of paper, the sound of markers is quite annoying
I personally like that sound :P
But why?
I hate how imprecise/rambling both of these videos were.
gitlab or it didn't happen? #showMeTheCode!
I know! That was my problem with the first video; he never explains it. It's so vague!
- "What they do is, they say, actually, let's look at the neighboring pixels here and we're going to, basically, put in a constraint into our solver that says our *u* and *v* here should be quite similar to an average *u* and an average *v* in the local neighborhood..."
WHAT? What constraint? What solver? What *u* and *v*? Aren't those what we're trying to find out in the first place? Isn't the "solver" the part you should be explaining?
Then he goes on rambling about a "global approach" and a "local" one like he totally just explained something there, when actually he might as well have said "and then magic happens" for all the knowledge that conveyed.
I sure hope this dude's not supposed to be teaching this in class.
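For what it's worth, the constraint quoted above matches the classical Horn-Schunck scheme: each iteration pulls (u, v) toward the neighbourhood averages (u_bar, v_bar), corrected so that the brightness-constancy equation Ix*u + Iy*v + It = 0 is better satisfied. A scalar sketch of one such update at a single pixel (the gradients and alpha here are made-up example values):

```python
# One Horn-Schunck update at a single pixel. u_bar, v_bar are the
# local neighbourhood averages of the flow; Ix, Iy, It are image
# gradients; alpha weights the smoothness term.

def horn_schunck_step(u_bar, v_bar, Ix, Iy, It, alpha):
    correction = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
    return u_bar - Ix * correction, v_bar - Iy * correction

# With zero image gradient the update just keeps the neighbourhood
# average: the smoothness constraint fills in flow where the data
# says nothing (the aperture problem).
print(horn_schunck_step(1.0, 0.5, 0.0, 0.0, 0.0, 1.0))  # (1.0, 0.5)

# With a gradient, the flow is nudged toward satisfying
# Ix*u + Iy*v + It = 0.
print(horn_schunck_step(0.0, 0.0, 1.0, 0.0, 0.5, 1.0))  # (-0.25, 0.0)
```

Iterating this over every pixel until convergence is the "solver" the video alludes to.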
It would be great if at some point you stated the equations we are solving. This was very hand-wavy and doesn't really explain anything beyond some of the ideas of the process.
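For readers after the equations: the standard formulation hinted at in the video combines brightness constancy with a smoothness penalty on the flow field (the Horn-Schunck energy). Sketched in LaTeX:

```latex
% Brightness constancy: intensity is conserved along the motion (u, v)
I_x u + I_y v + I_t = 0

% Horn-Schunck: minimise a data term plus an alpha-weighted smoothness term
E(u, v) = \iint \Big( (I_x u + I_y v + I_t)^2
        + \alpha^2 \big( \|\nabla u\|^2 + \|\nabla v\|^2 \big) \Big)\, dx\, dy
```

The first equation alone is one constraint on two unknowns per pixel (hence the aperture problem); the smoothness term is what makes the system solvable.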
More like, optic blow! Haha
1st
Dude get with 2019...
cloud mad
Why's your mum mad m8?
Cloud who cares? wanna get attention huh? find another way then
Tatiana stay mad
Your face is distracting. Keep the camera on the equations, diagrams, and data visualizations. Please. I just get a picture of what you are talking about, and the camera jerks back to you talking and waving hands, and spends most of the time looking at you. Which has nothing to do with flow modeling and algorithms, except indirectly. Blame it on the camera person. YouTube might be all about entertainment. But you don't have to go with that flow. Stay focused on the topic. Who has time to sort faces from diagrams?