Recurrent Neural Networks - EXPLAINED!

  • Published 23 Nov 2018
  • Understand exactly how RNNs work on the inside and why they are so versatile (NLP applications, time series analysis, etc.). We are also going to construct a recurrent neural net from scratch using only numpy! (A minimal illustrative sketch of such a forward pass appears after the references below.)
    Follow me on M E D I U M: towardsdatascience.com/likeli...
    Code: github.com/dennybritz/rnn-tut...
    (Give this dude a star for his hard work!)
    Patreon: / codeemporium
    Quora: www.quora.com/profile/Ajay-Ha...
    REFERENCES
    [1] Slides from the Deep Learning book for RNNs: www.deeplearningbook.org/slid...
    [2] Andrej Karpathy’s Blog + Code (You can probably understand more from this now!): karpathy.github.io/2015/05/21/...
    [3] The Deep Learning Book on Sequence Modeling: www.deeplearningbook.org/cont...
    [4] Colah’s blog on LSTMs: colah.github.io/posts/2015-08-...
    [5] Wildml for Coding RNNs: www.wildml.com/2015/10/recurre...
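
    A minimal numpy-only forward pass in the spirit of the video (variable
    names and dimensions here are illustrative assumptions, not the linked
    repo's actual code; notation follows the Deep Learning book [1][3]):

        import numpy as np

        # Vanilla RNN forward pass:
        #   a(t) = b + W h(t-1) + U x(t)   pre-activation
        #   h(t) = tanh(a(t))              hidden state
        #   o(t) = c + V h(t)              output logits
        rng = np.random.default_rng(0)
        n_in, n_hid, n_out, T = 4, 8, 3, 5             # layer sizes, sequence length

        U = 0.1 * rng.standard_normal((n_hid, n_in))   # input-to-hidden weights
        W = 0.1 * rng.standard_normal((n_hid, n_hid))  # hidden-to-hidden weights
        V = 0.1 * rng.standard_normal((n_out, n_hid))  # hidden-to-output weights
        b, c = np.zeros(n_hid), np.zeros(n_out)        # biases

        xs = rng.standard_normal((T, n_in))            # a toy input sequence
        h = np.zeros(n_hid)                            # initial hidden state h(0)
        for t in range(T):
            a = b + W @ h + U @ xs[t]                  # same U, W, b at every step
            h = np.tanh(a)                             # the state h(t) changes each step
            o = c + V @ h                              # output logits at step t
            print(t, o)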

COMMENTS • 29

  • @AzharKhan-to2ll 2 years ago +4

    This is by far the best explanation of RNNs. A lot of effort was put into making it. Thank you for this content. 🙏

    • @CodeEmporium 2 years ago

      Thank you for recognizing my talent and glad this helps :)

  • @robertleo3561 3 years ago +1

    I love that you use the images from Ian’s book. I’m currently reading that one, and this keeps some nice consistency between the book and your videos

  • @akramsystems 5 years ago +1

    Subscribed, this is quality content!

  • @prahladkoratamaddi6750 5 years ago +2

    Excellent tutorial, my friend! Subscribed, please keep it going!

  • @joshualiu8551 5 years ago +1

    Why is this so similar to state-space systems in control theory? Is there any chance that they are related?
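
    They are indeed related: a vanilla RNN is essentially a nonlinear
    discrete-time state-space model. Side by side, in standard notation
    (these symbols are not from the video):

        h_t = A h_{t-1} + B x_t, \qquad y_t = C h_t                  % linear state-space model
        h_t = \tanh(W h_{t-1} + U x_t + b), \qquad o_t = V h_t + c   % vanilla RNN

    The RNN replaces the linear state update with a tanh nonlinearity and
    learns the maps (W, U, V) that play the roles of (A, B, C).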

  • @krishnamishra8598 4 years ago

    HELP!!!! In an RNN we have only 3 unique weight matrices, so during backprop there will be only 3 parameters to update. Then why does the RNN's backprop go all the way back to the 1st input, creating long-term dependencies and thereby the vanishing gradient problem????
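
    The three weight matrices are shared across time, so the gradient of the
    loss with respect to W sums a contribution from every earlier step, and
    each contribution passes through repeated Jacobians of the hidden state.
    A sketch in the Deep Learning book's notation [1][3], with h_j = \tanh(a_j):

        \frac{\partial L}{\partial W} = \sum_{t} \sum_{k \le t}
            \frac{\partial L_t}{\partial h_t}
            \Bigl( \prod_{j=k+1}^{t} \frac{\partial h_j}{\partial h_{j-1}} \Bigr)
            \frac{\partial h_k}{\partial W},
        \qquad
        \frac{\partial h_j}{\partial h_{j-1}} = \operatorname{diag}(1 - h_j^2)\, W

    The long product of \operatorname{diag}(1 - h_j^2)\, W factors is what
    shrinks (or blows up) as t - k grows; the vanishing gradient comes from
    that product, not from the number of parameters.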

  • @onurdikici453 1 year ago +2

    Hello, you said at 6:36 that all h's are actually the same. Are they really the same? As far as I knew, only the weights (V, W, and U) are kept constant while unfolding, not the states themselves.

    • @markgyao 5 months ago +1

      Unless what the original author was trying to say is that the function h(t) = tanh(a(t)) is the same, the value of h(t) will be different at each stage.
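
      For reference, in the Deep Learning book's unfolded recurrence [1][3]
      it is the weights and biases that are shared across time, while the
      states carry a time index:

          a^{(t)} = b + W h^{(t-1)} + U x^{(t)}, \qquad
          h^{(t)} = \tanh(a^{(t)}), \qquad
          o^{(t)} = c + V h^{(t)}

      The function applied at each step is identical (same U, W, V, b, c),
      but its value h^{(t)} generally differs from step to step.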

  • @tianyicao4417 4 years ago +3

    One of the most underrated channels on YouTube!

  • @danielhcarranza7599 5 years ago +3

    Hi man, I love your work! It would be great if you did a tutorial on image captioning.

    • @CodeEmporium 5 years ago +1

      I was thinking the same. Probably a future video. And thanks for watching!

  • @cyberguygame9096 3 years ago +1

    PLEASE! Please make a little image showing which variable represents what in your mathematical notation. I am a beginner and have little experience, but it would be extremely helpful if you did this!

  • @buttert5091 2 years ago

    Great Video

  • @cyberguygame9096 3 years ago

    It is confusing if you use "a" not for the activation but for the weighted sum... correct me if I am wrong.

  • @rembautimes8808 3 years ago

    Puppy in a cup - classic. Great video

  • @jodumagpi 5 years ago +1

    Omg! How do you know exactly what videos I want in the moment!??? Thanks!

    • @CodeEmporium 5 years ago +1

      I can read minds online. It's a thing you develop over time.

  • @shalinianunay2713 3 years ago +1

    Theoretically, very good. However, understanding the code is difficult. Please make videos as hands-on sessions so we can follow along better.

  • @RedShipsofSpainAgain 5 years ago +1

    Great video! One small correction: it's pronounced "Eeee-pock", not "eh pick". Epic describes something awesome; an epoch is one forward and backward pass over all training examples.

    • @CodeEmporium 5 years ago +3

      Thanks for the compliments! About the pronunciation of "epoch": I've looked at a number of sources. If you check Merriam-Webster, it can be either "ee-pock" or "eh pick" depending on whether you use American or British English. I have a bad habit of combining both ;)

    • @RedShipsofSpainAgain 5 years ago +2

      @CodeEmporium Yeah, I've read that as well. "Eh pick" has greater semantic ambiguity (~ higher Shannon entropy) since it has multiple meanings (more possible states), whereas "ee-pock" is much less ambiguous, so it makes more sense to go with that one if the goal is to communicate as clearly as possible. But that's a minor detail; really, I love your videos. I have learned so much from you! Keep up the great work.

    • @aakashagarwal2602 5 years ago +3

      @RedShipsofSpainAgain lol dude! Geek alert!!! peace out :) Great video indeed!

    • @RedShipsofSpainAgain 5 years ago +2

      @aakashagarwal2602 lol yup. Once you learn about Shannon entropy, you see its applications EVERYWHERE! It really is a cool theory.

  • @avananana 5 years ago +4

    Halfway through the video: do you want to marry me? Because I'd say yes any day. Beautifully made, and I'm not sure if I have more questions or fewer than before, but I can confidently say that you've helped me a ton.