Optic Flow Solutions - Computerphile

  • Published 1 Aug 2024
  • Optical Flow solutions - following on from Dr French's previous video explaining Optic Flow, we dive into some ways to tackle the problem.
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

COMMENTS • 42

  • @LukasBommes
    @LukasBommes 4 years ago +11

    I recently made a tool that extracts motion vectors from MPEG-4 and H.264 encoded video, which can be used for tracking. The advantage is that the motion estimation happens during encoding of the video (or on the sender side in the case of a video stream). Thus, no computational effort is needed on the decoding/receiving end to compute optical flow. (See the sketch after this thread.)

    • @dannyperry5961
      @dannyperry5961 3 years ago

      Oooohh I would love to hear more about your tool! Does it use ffmpeg?
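
      A minimal sketch of the extraction idea, assuming PyAV (Python bindings for FFmpeg's libraries, so FFmpeg is indeed doing the work underneath); the file name and the exact side-data usage here are illustrative assumptions, not details of the original tool:

      ```python
      # Sketch: ask FFmpeg's decoder to export the encoder's motion vectors
      # as per-frame side data, rather than recomputing optical flow.
      import av

      container = av.open("input.mp4")  # hypothetical input file
      stream = container.streams.video[0]
      # "+export_mvs" attaches motion vectors to decoded frames as side data.
      stream.codec_context.options = {"flags2": "+export_mvs"}

      for frame in container.decode(stream):
          mvs = frame.side_data.get("MOTION_VECTORS")
          if mvs is None:
              continue  # e.g. intra-coded frames carry no motion vectors
          # Each record holds a block's source and destination coordinates,
          # from which a sparse flow field can be assembled for tracking.
          print(frame.pts, mvs.to_ndarray().shape)
      ```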

  • @timelesstrance3720
    @timelesstrance3720 4 years ago +7

    Love this series, do more!

  • @cyndicorinne
    @cyndicorinne 1 year ago

    Love the description of the algorithms! 💜

  • @DjSapsan
    @DjSapsan 4 years ago +30

    *Computerphile: Flow...*
    *SmarterEveryDay: Hey there!*
    *Computerphile: Optic flow*
    *SmarterEveryDay: Bye*

  • @DrMcCoy
    @DrMcCoy 4 years ago +6

    The nice thing when you're just using this information for video compression is that you don't really care about ambiguities. You just need to recreate the original image; how that's accomplished, and whether it describes how things actually moved in reality, doesn't matter.

  • @juanying364
    @juanying364 4 years ago +1

    Thanks a lot! This video is very instructive!

  • @GameDevGeeks
    @GameDevGeeks 3 months ago

    It's used for frame blending in real-time visual effects where you can't afford 120 fps flipbooks; instead you have a very low-fps flipbook and interpolate the pixels using motion-vector textures.
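
    For readers unfamiliar with the trick: an in-between frame is warped out of the two neighbouring flipbook frames along the motion vectors and then cross-faded. A minimal NumPy sketch of that idea; the flow convention and every name here are illustrative, not taken from any particular engine:

    ```python
    import numpy as np

    def warp(img, flow, scale):
        """Sample img at (x, y) + scale * flow, nearest-neighbour, clamped."""
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        sx = np.clip(np.rint(xs + scale * flow[..., 0]).astype(int), 0, w - 1)
        sy = np.clip(np.rint(ys + scale * flow[..., 1]).astype(int), 0, h - 1)
        return img[sy, sx]

    def blend(frame_a, frame_b, flow, t):
        """Interpolate at time t in [0, 1]; flow maps frame_a onto frame_b."""
        a = warp(frame_a, flow, -t)       # pull A a fraction t back along the motion
        b = warp(frame_b, flow, 1.0 - t)  # pull B the remaining fraction forward
        return (1.0 - t) * a + t * b      # cross-fade the two warped frames
    ```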

  • @GilesBathgate
    @GilesBathgate 4 years ago +19

    Could you use optical flow for astrophotography autoguiding?

    • @jajwarehouse1
      @jajwarehouse1 4 years ago +12

      Yes, that is how the Celestron NexGuide and Orion StarShoot work. You select a star or sometimes a few stars for the guiders to follow, so they do not need to process every pixel in the image.

    • @smort123
      @smort123 4 years ago

      Basically every stacking program works like that: you select a few stars in the first few images and the software can then track how they move (see the sketch below).
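
      Concretely, both guiding and stacking amount to sparse optical flow on a few hand-picked points. A minimal sketch assuming OpenCV; the file names and star coordinates are illustrative:

      ```python
      import cv2
      import numpy as np

      prev = cv2.imread("exposure_000.png", cv2.IMREAD_GRAYSCALE)
      curr = cv2.imread("exposure_001.png", cv2.IMREAD_GRAYSCALE)

      # Pixel coordinates of the user-selected guide stars, shape (N, 1, 2).
      stars = np.array([[[512.0, 384.0]], [[120.0, 600.0]]], dtype=np.float32)

      # Pyramidal Lucas-Kanade: match a small window around each star,
      # instead of computing flow for every pixel in the frame.
      new_stars, status, _err = cv2.calcOpticalFlowPyrLK(
          prev, curr, stars, None, winSize=(21, 21), maxLevel=3)

      # The mean displacement of the successfully tracked stars is the drift
      # the mount (or the stacking software) has to correct for.
      ok = status.ravel() == 1
      drift = (new_stars[ok] - stars[ok]).reshape(-1, 2).mean(axis=0)
      print("drift (dx, dy):", drift)
      ```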

  • @misterhat5823
    @misterhat5823 4 years ago +2

    How does this differ from motion vectors in Mpeg video compression?

  • @jackwang8105
    @jackwang8105 4 years ago

    Very detailed video!! Really love this series

  • @zxuiji
    @zxuiji 4 years ago +1

    Not really into video stuff, so not sure if this is what you were referring to, but shouldn't you just start with the centre pixel, examine the rest of the picture for potential places it moved to, then examine the area around those places to see if it matches the original place, and use your calculations when it matches?
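
    That's essentially block matching, which is how video codecs do motion estimation; the catches are cost (a naive search tries every offset for every block) and ambiguity when several regions look alike. A minimal NumPy sketch with illustrative block and search sizes:

    ```python
    import numpy as np

    def match_block(frame0, frame1, y, x, block=8, search=16):
        """Find where the block at (y, x) in frame0 best matches in frame1."""
        ref = frame0[y:y + block, x:x + block].astype(np.float64)
        best_sad, best_pos = np.inf, (y, x)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cy, cx = y + dy, x + dx
                if cy < 0 or cx < 0:
                    continue  # candidate falls off the top/left edge
                cand = frame1[cy:cy + block, cx:cx + block].astype(np.float64)
                if cand.shape != ref.shape:
                    continue  # candidate falls off the bottom/right edge
                sad = np.abs(cand - ref).sum()  # sum of absolute differences
                if sad < best_sad:
                    best_sad, best_pos = sad, (cy, cx)
        return best_pos[0] - y, best_pos[1] - x  # displacement (dy, dx)
    ```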

  • @cannaroe1213
    @cannaroe1213 4 years ago +1

    I've seen this sort of thing in biology before. Instead of making instrumentation that would provide spatial information as well as video, scientists commit to the Sisyphean task of trying to make something from almost nothing. It's kind of obvious why you'd want to do this with existing video, but soon existing video will be boring, and spatial video will be ubiquitous, and then also boring. It hurts to watch people spend their lives on this sort of research. Great video, 10/10, will subscribe.

    • @cannaroe1213
      @cannaroe1213 4 years ago

      @Radoslav Mráz You're right that an Intel RealSense depth camera costs around $200 today, but in 10 years it will be $20, and all this research will go unused. We won't rely on heuristic software to make expensive guesses when cheap hardware can sense depth directly and know with certainty. This happens all the time in academia.

  • @aniketsolanki3246
    @aniketsolanki3246 4 years ago +1

    No English CC :(

  • @unvergebeneid
    @unvergebeneid 4 years ago +3

    Video compression would be another common place where such techniques are used.

  • @suicidalbanananana
    @suicidalbanananana 4 years ago +3

    IMHO this should've been one video; the previous one just stops at random and this one clearly picks up right after that cut.

  • @danielhaupt2066
    @danielhaupt2066 4 years ago

    If the bar were checkered you could clearly see where it's moving, right?

    • @soranuareane
      @soranuareane 4 years ago +3

      Yes, because you'd have more edges and more detail.

  • @PhilBoswell
    @PhilBoswell 4 years ago +1

    Some nice links here: en.wikipedia.org/wiki/Optical_flow

  • @khalidsabeel3978
    @khalidsabeel3978 4 years ago

    Thanks, very detailed!

  • @jacekjagosz
    @jacekjagosz 4 years ago +9

    It's cool to know all the complicated equations behind almost every cheap Chinese drone!

  • @utkarshsaboo
    @utkarshsaboo 4 years ago +1

    Love the videos, but can you please use a whiteboard instead of paper? The sound of markers is quite annoying.

  • @jeffreymorris1752
    @jeffreymorris1752 4 years ago

    But why?

  • @totlyepic
    @totlyepic 4 years ago +22

    I hate how imprecise/rambling both of these videos were.

    • @recklessroges
      @recklessroges 4 years ago +2

      gitlab or it didn't happen? #showMeTheCode!

    • @gorkyrojas3446
      @gorkyrojas3446 4 years ago +5

      I know! That was my problem with the first video; he never explains it. It's so vague!
      - "What they do is, they say, actually, let's look at the neighboring pixels here and we're going to, basically, put in a constraint into our solver that says our *u* and *v* here should be quite similar to an average *u* and an average *v* in the local neighborhood..."
      WHAT? What constraint? What solver? What *u* and *v*? Aren't those what we're trying to find out in the first place? Isn't the "solver" the part you should be explaining?
      Then he goes on rambling about a "global approach" and a "local" like he totally just explained something there, when actually he might've just said "and then magic happens" for all the knowledge that conveyed.
      I sure hope this dude's not supposed to be teaching this in class.
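
      For what it's worth, the constraint being paraphrased sounds like the classic Horn-Schunck method: solve for u and v such that each stays close to its neighbourhood average while also satisfying brightness constancy. A minimal NumPy sketch of that iteration, my reconstruction rather than anything shown in the video:

      ```python
      import numpy as np
      from scipy.ndimage import convolve

      def horn_schunck(I0, I1, alpha=1.0, iters=100):
          I0, I1 = I0.astype(np.float64), I1.astype(np.float64)
          Iy, Ix = np.gradient(I0)  # spatial gradients
          It = I1 - I0              # temporal gradient
          u = np.zeros_like(I0)
          v = np.zeros_like(I0)
          # Weighted average of the 8 neighbours (the "local neighbourhood").
          avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0
          for _ in range(iters):
              u_bar = convolve(u, avg)
              v_bar = convolve(v, avg)
              # How much the neighbourhood average violates brightness
              # constancy, normalised by the gradient magnitude and alpha...
              t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
              # ...is corrected back along the gradient: u and v stay close
              # to their local averages (the smoothness constraint) while
              # also fitting the data term.
              u = u_bar - Ix * t
              v = v_bar - Iy * t
          return u, v
      ```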

  • @Jelle_NL
    @Jelle_NL 4 years ago +3

    It would be great if at some point you stated the equations being solved. This was very hand-wavy and doesn't really explain anything beyond some of the ideas of the process.
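
    For the record, here is my best reconstruction of the equations the videos dance around, in standard optical-flow notation:

    ```latex
    % Brightness constancy: a pixel keeps its intensity as it moves.
    I(x + u,\, y + v,\, t + 1) = I(x, y, t)

    % First-order Taylor expansion gives the optical-flow constraint:
    I_x u + I_y v + I_t = 0

    % One equation, two unknowns (u, v) per pixel: the aperture problem.
    % Horn-Schunck resolves it globally by minimising an energy whose
    % smoothness term (weighted by alpha^2) couples neighbouring pixels:
    E(u, v) = \iint \left[ (I_x u + I_y v + I_t)^2
            + \alpha^2 \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) \right]
            \, dx \, dy
    ```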

  • @narogen3431
    @narogen3431 4 years ago +6

    More like, optic blow! Haha

  • @cloud6525
    @cloud6525 4 years ago +2

    1st

  • @RichardKCollins
    @RichardKCollins 1 year ago

    Your face is distracting. Keep the camera on the equations, diagrams, and data visualizations, please. I just get a picture of what you are talking about, and then the camera jerks back to you talking and waving your hands, and it spends most of the time looking at you, which has nothing to do with flow modelling and algorithms, except indirectly. Blame it on the camera person. YouTube might be all about entertainment, but you don't have to go with that flow. Stay focused on the topic. Who has time to sort faces from diagrams?