Sony | Event-based Vision Sensor (EVS) to detect only changes in moving subjects -Full ver.-

  • Published Sep 21, 2024
  • An Event-based Vision Sensor (EVS) is designed to mimic the mechanism of the human eye. It achieves high-speed, low-latency data output by capturing a subject's movement as changes in luminance.
    Sony's Event-based Vision Sensor may expand the possibilities for a wide range of applications. This video provides a basic description and application examples.
    #SonySemiconductorSolutions #EventBasedVisionSensor #SonyEVS
    ◆Web - Sony's Image Sensors for Industry -
    www.sony.net/c...
    Click here to see the product lineup and technology information for Sony's EVS.
    ◆Web - Sony Semiconductor Solutions Group
    www.sony-semic...

COMMENTS • 42

  • @tonyintieri
    @tonyintieri 3 years ago +4

    Sony + Prophesee = Great technology!

  • @electroncommerce
    @electroncommerce 3 years ago +4

    Congrats Sony!

  • @Sey357
    @Sey357 3 years ago +3

    WOW 👑 SONY GOD FOREVER#1 ✌️ 👑

  • @Tony-tr3di
    @Tony-tr3di 3 years ago

    Raising so many doubts. Let's wait for a hands-on preview. Hope for the best 💕💕💕

  • @diegovelasco914
    @diegovelasco914 3 years ago +1

    I love you, Sony

  • @Александр-л8з3э
    @Александр-л8з3э 2 years ago +1

    This is the future of robotic vision

  • @dinahplacido5786
    @dinahplacido5786 1 year ago

    Way better than my old one. Perfect height

  • @stevenlk
    @stevenlk 6 months ago

    wow this is basically capturing ground truth optical flow

  • @arielatomguy
    @arielatomguy 3 years ago +2

    These sensors encapsulate so many possibilities, yet it is important to remember that the platform on which the camera/sensor is positioned must remain completely still. It would be very interesting if a next-gen version of these sensors could compensate for platform motion using either an IMU or additional sensors pointed in other directions. Having the sensor receive such motions as input could allow the ambient subtraction to be performed within the sensor.

    • @tobidelbruck
      @tobidelbruck 2 years ago

      It's true that a moving camera increases the data rate, but still our measurements with driving scenes show that in typical 20ms "frames" of accumulated brightness change events, less than 10% of the pixels are activated, making these activity-driven frames a nice fit to AI hardware that can exploit activation sparsity.
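The sparsity observation above can be illustrated with a toy simulation. The event counts, sensor resolution, and active region below are made-up assumptions, not the driving-scene measurements the comment refers to; the sketch only shows how events accumulate into a 20 ms "frame" whose active-pixel fraction can then be measured:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event stream: (x, y, t_microseconds) for an assumed 640x480 sensor
# over one 20 ms window, with events concentrated on a small moving region.
width, height, window_us = 640, 480, 20_000
n_events = 30_000
xs = rng.integers(100, 200, n_events)   # events only in the moving region
ys = rng.integers(100, 200, n_events)
ts = rng.integers(0, window_us, n_events)

# Accumulate one 20 ms "frame" of brightness-change activity.
frame = np.zeros((height, width), dtype=bool)
frame[ys, xs] = True

sparsity = frame.sum() / frame.size
print(f"active pixels: {sparsity:.1%}")
```

Sparse activity frames like this are what lets AI accelerators that exploit activation sparsity skip most of the array.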

  • @edygk20kstm64
    @edygk20kstm64 3 years ago +5

    This tech can be a great asset to paranormal research.

  • @hajime5486
    @hajime5486 3 years ago +4

    Sony, your biggest fan here. I live in Tokyo and would love to work for you. Message me!

    • @UniversalIndian-fh6st
      @UniversalIndian-fh6st 3 years ago +1

      Me too. A Lifetime Sony Fan ❤️❤️❤️
      Love from India 🇮🇳 ❤️❤️❤️❤️❤️ 🙏🏻🙏🏻

  • @NowyKurs
    @NowyKurs 3 years ago

    When will this sensor be used in cameras? Looks like decent quality.

  • @vladodamjanovski
    @vladodamjanovski 3 years ago +1

    I am really trying to decipher the concept. Maybe you can help with a bit more elaboration. As far as I know, a video compression standard like HEVC already analyses moving subjects with great accuracy, admittedly after capture by the sensor, through the encoder. Is this a similar concept but within the sensor itself?

    • @ES-qy2ju
      @ES-qy2ju 3 years ago +5

      No. HEVC is a codec; it needs frames to make a video.
      This sensor doesn't capture frames, but pure event data.
      The "temporal resolution" of a video is its framerate: a 30 fps video is limited because you can't see more than 30 images in a second, even if you play it in slow motion.
      With event data you don't have that limitation, because there are no frames, just points that move from point A to point B on a canvas; you can slow down playback as much as you like.
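A minimal sketch of the frames-vs-data point above, with illustrative timestamps and an assumed (x, y, t, polarity) tuple layout rather than any real sensor output: because each event carries its own timestamp, one captured stream can be re-sliced into time windows of any length after the fact.

```python
# Events as (x, y, t_us, polarity) tuples with microsecond timestamps
# (illustrative values, not real sensor output).
events = [(10, 5, 100, 1), (11, 5, 450, 1), (12, 5, 900, -1),
          (13, 6, 1300, 1), (14, 6, 1800, -1)]

def slice_events(events, window_us):
    """Group events into consecutive time windows of the given length."""
    slices = {}
    for x, y, t, p in events:
        slices.setdefault(t // window_us, []).append((x, y, t, p))
    return slices

# The same stream replayed at two different effective "framerates":
coarse = slice_events(events, 1000)  # 1 ms windows
fine = slice_events(events, 250)     # 0.25 ms windows: finer slow motion
```

A frame camera would have to choose its framerate at capture time; here the window size is just a replay parameter.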

    • @JustinHunnicutt
      @JustinHunnicutt 2 years ago +1

      Think of it as a grid of analog brightness sensors but instead of measured brightness at each point they output the derivative (or instantaneous change) of the brightness value. If that isn't right someone please comment because I know that's not how it actually works but I thought it might help grasp the concept.

  • @GauthamSaiVadicherla
    @GauthamSaiVadicherla 11 months ago

    Can self-driving cars be operated by these?

  • @bibeksutradhar3590
    @bibeksutradhar3590 3 years ago

    Awesome

  • @user-mf7li2eb1o
    @user-mf7li2eb1o 3 years ago

    Cool stuff

  • @sumandas9039
    @sumandas9039 3 years ago +1

    😍😍😍

  • @kimberlytierney1369
    @kimberlytierney1369 3 years ago

    Amazing technology!

  • @vk2630
    @vk2630 3 years ago +3

    Trying to understand what is the benefit of this technology

    • @ES-qy2ju
      @ES-qy2ju 3 years ago +1

      AI efficiency, surveillance, obtaining data without the need to analyze the image in post-processing

    • @butinloris5756
      @butinloris5756 3 years ago

      2:30 ...

  • @churamontgomery6063
    @churamontgomery6063 1 year ago

    Wow

  • @GreenishlyGreen
    @GreenishlyGreen 1 year ago

    Vr?

  • @JustinHunnicutt
    @JustinHunnicutt 2 years ago

    Can someone explain to me why this is inherently better than a high-frame-rate sensor with some processing to focus on changes in luminance? Is it just that it's equivalent to a super-high framerate? Or could someone give a specific case where this would work better than a high-frame-rate sensor plus the processing I mentioned, besides not having to do that processing? I'm not knocking the tech; I just don't see the benefit besides offloading the processing. I'm sure these examples exist or the product wouldn't exist.

    • @tobidelbruck
      @tobidelbruck 2 years ago +1

      It has the USP that you can beat the latency-power tradeoff of frame cameras, plus you get a really large dynamic range (DR) and minimal motion blur.

    • @kylebowles9820
      @kylebowles9820 6 months ago

      I know this is an old comment, but basically its pixels are asynchronous and very dense in time, so you get very high-resolution, continuous temporal gradients for things like fast motion tracking in robots, factories, and AR/VR. The dynamic range is also better than many so-called 'night vision' cameras I've used, and it works just as well in daylight. Great for vehicles, robots, outdoor AR/VR. It also has a few features like per-pixel hardware bandpass filters to either filter out or deliberately capture flickering/vibration for industrial applications and robotics. I have been playing with them for a few years now.

  • @shivam627
    @shivam627 3 years ago

    👍

  • @newdar-ff5bz
    @newdar-ff5bz 3 months ago

    sounds like lcd vs oled !!

  • @rcmoedas
    @rcmoedas 3 years ago

    👍🏽

  • @BlackPrism100
    @BlackPrism100 3 years ago

    Sony, metahero and WDW. The future is coming.

  • @aidedflyer173
    @aidedflyer173 2 years ago

    Let me find out this is project Skynet..

  • @jhonyhill1
    @jhonyhill1 3 years ago

    This I done when I got pass out... 🤨

  • @Blag_Cog
    @Blag_Cog 3 years ago

    How many frames per second? Or should I say states per second? How many hertz lol.

    • @ES-qy2ju
      @ES-qy2ju 3 years ago

      It probably depends on the shutter speed, because there are no frames.

    • @Blag_Cog
      @Blag_Cog 3 years ago

      @@ES-qy2ju Yeah, that's what I was thinking. How many "events" per second?

    • @ES-qy2ju
      @ES-qy2ju 3 years ago +1

      @@Blag_Cog A typical, common sensor with an electronic shutter can capture up to 12,000 "images".
      An event sensor would give the illusion of unlimited framerate.

    • @Blag_Cog
      @Blag_Cog 3 years ago +1

      @@ES-qy2ju I could imagine this would be amazing for a wide variety of uses. It is going to be really great data for neural networks and interpolation technology.

  • @lucasn82_
    @lucasn82_ 3 years ago +1

    Sony Alien Company 😁