Autoencoders - EXPLAINED

  • Published 16 Nov 2018
  • Data around us, like images and documents, are very high-dimensional. Autoencoders can learn a simpler representation of such data. This representation can be used in many ways:
    - Fast data transfers across a network
    - Self-driving cars (semantic segmentation)
    - Neural inpainting: completing sections of an image or removing watermarks
    - Latent semantic hashing: clustering similar documents together
    And the list of applications goes on.
    Clearly, autoencoders can be useful. In this video, we are going to understand their types and functions (a short code sketch of the core idea follows this description).
    For more content, hit that SUBSCRIBE button, ring that bell.
    Subscribe now for more awesome content: ua-cam.com/users/CodeEmporium...
    patreon: / codeemporium
    REFERENCES
    [1] Autoencoders: www.deeplearningbook.org/cont...
    [2] Sparse autoencoder (last part): web.stanford.edu/class/cs294a...
    [3] Why are sparse encoders sparse?: www.quora.com/Why-are-sparse-...
    [4] KL Divergence: en.wikipedia.org/wiki/Kullbac...
    [5] Semantic Hashing: www.cs.utoronto.ca/~rsalakhu/...
    [6] Variational Autoencoders: jaan.io/what-is-variational-a...
    [7] Xander’s video on Variational AutoEncoders (Arxiv Insights): • Variational Autoencoders
    CLIPS
    [1] Karol Majek’s Self driving car with RCNN: • Mask RCNN - COCO - in...
    [2] Auto encoder images: www.jeremyjordan.me/autoencod...
    [3] Semantic Segmentation with Autoencoders: github.com/arahusky/Tensorflo...
    [4] Neural Inpainting paper: arxiv.org/pdf/1611.09969.pdf
    [5] GAN results: • Progressive Growing of...
    #machinelearning #deeplearning #neuralnetwork #ai #datascience
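
    As a rough illustration of that core idea, here is a minimal sketch assuming a PyTorch implementation (illustrative only, not the video's code):

        # Compress 784-dim inputs (e.g. flattened 28x28 images) to a 32-dim code,
        # then reconstruct them; the 32-dim code is the "simpler representation".
        import torch
        import torch.nn as nn

        class Autoencoder(nn.Module):
            def __init__(self, input_dim=784, code_dim=32):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.Sigmoid())
                self.decoder = nn.Sequential(nn.Linear(code_dim, input_dim), nn.Sigmoid())

            def forward(self, x):
                code = self.encoder(x)     # the learned low-dimensional representation
                return self.decoder(code)  # reconstruction of the input

        model = Autoencoder()
        x = torch.rand(16, 784)                      # a toy batch of flattened images
        loss = nn.functional.mse_loss(model(x), x)   # trained to reproduce its own input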

COMMENTS • 27

  • @atifadib
    @atifadib 3 years ago +9

    This is the most underrated channel on YouTube.

  • @ispeakforthebeans
    @ispeakforthebeans 5 years ago +20

    Why does this guy not have a million subscribers?

    • @CodeEmporium
      @CodeEmporium 5 years ago +12

      I ask myself the same question every day

    • @UgurkanAtes
      @UgurkanAtes 3 years ago +1

      @@CodeEmporium you don't need them, though

    • @senx8758
      @senx8758 3 years ago

      @@CodeEmporium there are not too many ML engineers :)

  • @ashutoshshinde5267
    @ashutoshshinde5267 3 years ago +1

    Great explanation!! Thank you!

  • @amortalbeing
    @amortalbeing 4 years ago +1

    6:42 Why did you say we are considering a sigmoid activation? What would be different if I used another activation function such as ReLU?
    Would the KL term change? Do we apply this to all layers or only the last layer of the encoder?
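
    As a sketch of what this question is pointing at, assuming the standard sparse-autoencoder formulation from reference [2]: the sigmoid matters because the KL term reads each hidden unit's mean activation as a probability in (0, 1); with an unbounded activation like ReLU that reading breaks down, and an L1 penalty on the activations is the usual substitute. The penalty is typically applied to the encoder's hidden layer only, not to every layer:

        # KL sparsity penalty, KL(rho || rho_hat_j), summed over the n_h hidden units.
        import torch

        def kl_sparsity_penalty(hidden, rho=0.05, eps=1e-8):
            # hidden: (batch, n_h) sigmoid activations of the encoder's hidden layer
            rho_hat = hidden.mean(dim=0)  # mean activation per hidden unit, in (0, 1)
            kl = (rho * torch.log(rho / (rho_hat + eps))
                  + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat + eps)))
            return kl.sum()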

  • @hamzanaeem4838
    @hamzanaeem4838 3 years ago +3

    You have a great understanding of this particular domain, but the way to get millions of subscribers is to explain in depth why there is only one bottleneck and not two, how the encoder compresses, and what the workings behind it are, each and every thing, at a student level so that it can be understood very easily. But I appreciate your stuff. Keep it up, man!

  • @seyha3447
    @seyha3447 5 years ago +4

    What great work! Thanks for the videos. By the way, can you make any videos about conv-deconv networks? How are they different from autoencoders?

  • @mohammedfareedh
    @mohammedfareedh 4 years ago +2

    Man, your voice is so clean and pleasant.

  • @krishj8011
    @krishj8011 4 years ago

    Great video...

  • @neuodev
    @neuodev 2 years ago +2

    This is an awesome explanation 🙌.
    One thing to consider: please don't show the subscribe button every 2 minutes; it is very confusing. Everything except this looks good 👍. Thanks!

  • @landoftheunknown116
    @landoftheunknown116 1 year ago

    Is n_h at 6:43 the number of hidden layers or the number of neurons in the hidden layer? Also, are we considering only 1 hidden layer?
    A very nice explanation though! I am preparing for an interview and this is like a gold treasure for me!

  • @NozaOz
    @NozaOz 1 year ago

    Very cool

  • @cesarfaustoperez6372
    @cesarfaustoperez6372 2 years ago +1

    Are autoencoders and encoder-decoders the same thing?

  • @est9949
    @est9949 4 years ago +1

    9:37 Can't a CNN do the same job? What's the difference between using a CNN and an autoencoder with convolutional layers? Or are they actually the same thing? I'm new to both types of networks, so I would appreciate any elaboration. Thanks.
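
    A sketch of the distinction this question is after (assumed PyTorch code, not from the video): a CNN classifier maps an image to class scores and needs labels, while an autoencoder built from convolutional layers maps an image back to itself through a bottleneck and trains on reconstruction error alone:

        import torch.nn as nn

        # CNN classifier: image -> 10 class scores (supervised).
        classifier = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),  # 28x28 -> 14x14
            nn.Flatten(),
            nn.Linear(16 * 14 * 14, 10),
        )

        # Convolutional autoencoder: image -> image (unsupervised reconstruction).
        conv_autoencoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),  # encode: 28x28 -> 14x14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),  # decode: back to 28x28
            nn.Sigmoid(),
        )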

  • @wlxxiii
    @wlxxiii 5 years ago +3

    would love videos on NLP! :)

  • @spyzvarun5478
    @spyzvarun5478 9 months ago

    Why do we need the encoder and decoder to be shallow networks?

    • @LolLol-rr6eb
      @LolLol-rr6eb 5 months ago

      We would still want the encoder and decoder to learn as much from the input as possible. A shallow network allows many different features to be learned, thus increasing the odds that the network has learned a good amount from the input image.

  • @nagendran7781
    @nagendran7781 5 months ago

    Can you make a video for beginners? This stuff with formulas and equations gets too complicated.

  • @starlord7548
    @starlord7548 3 years ago +1

    Please change the Comic Sans font.

  • @jodumagpi
    @jodumagpi 5 years ago

    That intro though!!!

  • @harshraj7014
    @harshraj7014 4 years ago +1

    01:22 .. data around us like images and donkeynets