AEV2A Sound-Visual Correspondence for the Table Dataset

  • Published 1 Mar 2019
  • A trained autoencoder constructs soundscapes out of images, then converts the audio representation back into a sequence of drawing strokes that reconstruct the original image. This video shows how different sound qualities are translated into drawings. The autoencoder may be used for sensory substitution, translating visual information into audio to aid the blind. A minimal sketch of this pipeline appears after this list.
    For more information, check the blog post: / translating-vision-int...
    Or the thesis itself: www.researchgate.net/publicat...

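The description sketches a two-stage pipeline: an encoder maps an image into a soundscape sequence, and a decoder turns that sound sequence into drawing strokes that accumulate on a canvas to reconstruct the image. Below is a minimal, hypothetical Python/PyTorch sketch of that idea. It is not the author's implementation; the module names, dimensions, GRU decoder, and additive-stroke canvas are all illustrative assumptions.

    # Hypothetical sketch of an image -> sound -> drawing autoencoder.
    # All names and sizes are assumptions, not the thesis architecture.
    import torch
    import torch.nn as nn

    class ImageToSound(nn.Module):
        """Encoder: compresses an image into a short 'soundscape' sequence."""
        def __init__(self, img_dim=64 * 64, sound_steps=16, sound_dim=32):
            super().__init__()
            self.sound_steps = sound_steps
            self.fc = nn.Linear(img_dim, sound_steps * sound_dim)

        def forward(self, img):
            # img: (batch, img_dim) flattened grayscale image
            s = self.fc(img)
            # reshape into a sequence of per-timestep sound parameters
            return s.view(img.size(0), self.sound_steps, -1)

    class SoundToDrawing(nn.Module):
        """Decoder: turns the sound sequence into drawing strokes that are
        accumulated onto a canvas, reconstructing the image."""
        def __init__(self, sound_dim=32, img_dim=64 * 64):
            super().__init__()
            self.rnn = nn.GRU(sound_dim, 128, batch_first=True)
            self.stroke = nn.Linear(128, img_dim)

        def forward(self, sounds):
            h, _ = self.rnn(sounds)  # (batch, steps, 128)
            canvas = torch.zeros(sounds.size(0), self.stroke.out_features)
            for t in range(h.size(1)):
                canvas = canvas + self.stroke(h[:, t])  # additive strokes
            return torch.sigmoid(canvas)  # reconstructed image

    # Usage: a reconstruction loss ties the two halves together, forcing the
    # audio representation to carry enough information to redraw the image.
    enc, dec = ImageToSound(), SoundToDrawing()
    img = torch.rand(8, 64 * 64)
    recon = dec(enc(img))
    loss = nn.functional.binary_cross_entropy(recon, img)
    loss.backward()

The key design point the description implies is that the sound is the bottleneck: whatever the decoder needs to redraw the image must survive the trip through the audio representation.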
COMMENTS • 1

  • @SHORTY999 • 1 year ago • +1

    Why does this seem like something I wasn't supposed to watch🤣