A friendly introduction to Recurrent Neural Networks

  • Published Jan 26, 2025

COMMENTS • 622

  • @mctrjalloh6082
    @mctrjalloh6082 7 years ago +539

    Who else thinks Luis Serrano is a genius teacher? Wow!

    • @SerranoAcademy
      @SerranoAcademy  7 years ago +14

      Thank you. :)

    • @mctrjalloh6082
      @mctrjalloh6082 7 years ago +2

      You are welcome professor !

    • @NguyenDuy-jd6sm
      @NguyenDuy-jd6sm 5 years ago +1

      He explained quite complex problems in a very intuitive and easy-to-understand way

    • @SuperWiseacre
      @SuperWiseacre 5 years ago +1

      Amazing explanation

    • @realMuskDonald
      @realMuskDonald 5 years ago +1

      @@SerranoAcademy Really great lecture :) Thanks

  • @vijaypatneedi
    @vijaypatneedi 4 years ago +100

    "If you can't explain it simply, you don't understand it well enough..."
    You proved it can be done...!

    • @BigAsciiHappyStar
      @BigAsciiHappyStar 8 months ago

      if you can't explain it with at least one bad pun, then you don't understand the concept of humour well enough!

  • @ravishankerjonnalagadda1390
    @ravishankerjonnalagadda1390 6 years ago +66

    Watched 7:30 minutes, and before I completed the rest of the video I felt an overwhelming need to tell you that you taught this concept in a brilliant manner

    • @ahmedkhaled5852
      @ahmedkhaled5852 5 years ago +2

      Lol I actually did the same and went straight to the comment section in the same minute

  • @sp90009
    @sp90009 6 years ago +18

    Thank you Luis. It's a rare talent, to explain things in such a clear and simple way.

  • @sethweiss
    @sethweiss 1 year ago +1

    I've been "just getting through" my machine learning class for the last 9 weeks and now after watching this video I finally feel like I understand these concepts!

  • @dmitrykarpovich7579
    @dmitrykarpovich7579 6 years ago +94

    "The Vector of the Chicken." I wonder how many times in the history of humanity that phrase has been uttered.

    • @SerranoAcademy
      @SerranoAcademy  6 years ago +3

      Dmitry Karpovich
      hahaha, I wonder if it's the first time! :)

    • @giphe
      @giphe 3 years ago

      I feel like there is a joke or pun somewhere in there, but I cant find it...

  • @karl11154
    @karl11154 2 years ago

    I have looked at many videos and I rarely comment so my words carry a lot of weight. This is hands down the best tutorial I have seen yet for machine learning.

  • @BrandonRohrer
    @BrandonRohrer 7 years ago +249

    Brilliant. I love how you spell out the matrices that implement the rules of the neural network. Great job pulling back the curtain on the Wizard.
    Also Mt Kilimanjerror!

    • @BrandonRohrer
      @BrandonRohrer 7 years ago +6

      And thanks for the shoutout :)

    • @SerranoAcademy
      @SerranoAcademy  7 years ago +13

      Thanks! Coming from you, this is very high praise, higher than Mt. Kilimanjerror! (actually, was between that one and Mt. Rainierror... maybe for the next error function) :)

    • @Mr68810
      @Mr68810 5 years ago +3

      Before reading this comment I was just about to say that it's a cool approach with matrices!

  • @bubblesgrappling736
    @bubblesgrappling736 4 years ago

    Best video on the topic so far: first explaining the topic in an almost oversimplified manner, then gradually increasing complexity and difficult terminology. Perfect teaching style!

  • @krishnahappysmile
    @krishnahappysmile 5 years ago +27

    This is hands down one of the best tutorials I've ever seen on a Machine Learning topic. The quality and the ease of explanation with which the video was made and presented really helped me understand the scary concept of RNN in a very uncomplicated way. Thank you very much.

  • @deckplate1
    @deckplate1 4 years ago +1

    I have been through every single RNN video trying to understand it and you are the only one that has explained it well. I am sick of abstract topics such as NN's being unapproachable because of teachers who don't know how to explain things with a tamer vocabulary and EXAMPLES. Lots and lots of examples.

  • @samsontan1141
    @samsontan1141 4 years ago +6

    Dude you can teach this supposedly extremely advanced theory to a primary school kid with your brilliant way of explanation. Respect and thank you so much!

  • @akhilgangavarapu9728
    @akhilgangavarapu9728 4 years ago

    My hunt for clarity on RNNs ended with this video. I had read many Medium articles and watched the videos too. Putting all those together can't match this video. Thank you,
    Luis Serrano

  • @ricardosantos4900
    @ricardosantos4900 4 years ago

    Intelligence, simplicity, and didactics. Three ingredients of a genius Machine Learning teacher!

  • @blesucation
    @blesucation 1 year ago +1

    Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!

  • @_PremKharat
    @_PremKharat 1 year ago

    Watching this video for the first time, exactly 6 years after it came out.
    Simplest and most amazing explanation, thank you sir

  • @abhirampattarkine995
    @abhirampattarkine995 5 years ago +4

    I really love your ability to convert extremely complex concepts into simple things by giving day to day life examples. Hats off to you!!!!

  • @conghoannguyen446
    @conghoannguyen446 1 month ago

    You are the best. All of your videos are awesome. Even a 5-year-old can understand what you are saying. I respect your contribution!!

  • @smithcodes1243
    @smithcodes1243 2 years ago +1

    I have no words for this guy, what a legend! Thank you for being such a great teacher!

  • @himanshuladia9099
    @himanshuladia9099 7 years ago

    This is easily the best RNN explanation on the internet.

  • @ryan00005
    @ryan00005 3 years ago

    Wow! This is the exact tutorial I've been looking for for ages: starting with examples and motivation, then moving into matrix multiplication. Now I think even a dummy like me can understand how RNNs work...

  • @WoonCherkLam
    @WoonCherkLam 6 years ago +3

    All the other tutorials just explained NN as a black box. Your use of matrices for the explanation really helped strengthen the understanding! :)

  • @dipendraphuyal8914
    @dipendraphuyal8914 3 months ago

    after watching 50 videos about RNN, finally this one teaches me the idea.

  • @DanielRamBeats
    @DanielRamBeats 4 years ago +5

    I can't believe I got to learn this for free, thank you!

  • @rakeshsinghrawat6356
    @rakeshsinghrawat6356 7 years ago

    Sir, you have a talent for presenting complicated things in the simplest manner.

  • @christophersoelistyo1905
    @christophersoelistyo1905 6 years ago +1

    By far the clearest and most approachable intro to recurrent NNs I've come across!

  • @Bramsmelodic
    @Bramsmelodic 2 years ago

    This is a kickass explanation of RNNs. You are a genius at teaching. Trust me, I am a student at one of the renowned institutes of the world, but I never got to hear this simple and effective a style of teaching

  • @AniruddhaKalburgi
    @AniruddhaKalburgi 4 years ago

    Before I go any further, I really liked how you stated what Machine Learning does to us.
    Genius!!!

  • @amitbuch
    @amitbuch 5 months ago

    Real classic intro to a complicated topic. Love the smooth introduction. Just perfect.

  • @ashutoshtripathi1681
    @ashutoshtripathi1681 4 years ago

    I am normally very lazy in commenting, but this guy made me do it. You Sir are awesome!!!

  • @worldof6271
    @worldof6271 4 years ago

    Your PyTorch Udacity courses helped me out, but I still didn't fully understand the topic, so I came to YouTube and got help again. Thanks for the course. You explain so that even a child can understand

  • @merikidemas970
    @merikidemas970 2 years ago

    This is gold! How do you like a YT video more than once?
    The Errorest and Kilimanjerror pun was perfect!

  • @l2edz
    @l2edz 5 years ago +2

    The most intuitive introduction to RNNs that I've come across thus far! Thank you!

  • @neelkamal8729
    @neelkamal8729 4 years ago

    Your method of teaching with all those images is really awesome

  • @arifakhterrangon7030
    @arifakhterrangon7030 10 months ago

    17:50 is the most important figure of the whole video. The explanation was very good, simple and easy!

  • @AvinashReddy21
    @AvinashReddy21 6 years ago +1

    Luis,
    Great video as always! I am in the Udacity machine learning nanodegree program and I love your teaching style. Please keep making videos; you are making a big difference for people like us.

    • @SerranoAcademy
      @SerranoAcademy  6 years ago

      Thank you for your message, Avinash! Great to hear that you enjoy the program! Definitely, as time permits, I'll keep adding videos here. Cheers!

  • @mrmr4737
    @mrmr4737 3 years ago +1

    Fantastic!! By presenting simple neural network operations as matrix multiplications, you have explained the basics of RNNs to me in a way that no one on YouTube was able to do! You're fantastic Luis💛

  • @nazishjahan04
    @nazishjahan04 1 year ago

    Luis Serrano, you have an incredible ability to represent tough concepts in such an interesting way

  • @shobhitnair2
    @shobhitnair2 5 years ago

    Seriously, thank you for putting it in such an easy way. You've given a great idea of how it works internally through matrices rather than nodes and edges, which are way too difficult to understand. Thank you for making it so easy.

  • @mostinho7
    @mostinho7 4 years ago +1

    4:01 shows how a NN can map simple inputs to specific outputs.
    Uses the same notation as Stanford for the feedforward step, Wx.
    Weights matrix: each row holds the weights coming into a node in the hidden/output layer
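
The feedforward step this comment describes can be sketched in numpy; the matrix values below are made up for illustration, not taken from the video:

```python
import numpy as np

# Hypothetical 2x3 weight matrix: each row holds the weights
# coming into one node of the next layer, as the comment notes.
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 2.0, 3.0])  # input vector

# Feedforward step Wx: one dot product per output node.
y = W @ x
print(y)  # [7. 5.]
```

Each entry of y is the weighted sum flowing into one node, which is exactly what the row-per-node layout buys you.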

  • @capeandcode
    @capeandcode 6 years ago +15

    God! This one is a saviour. It changed my perspective towards NNs.

  • @Burst4All
    @Burst4All 3 years ago

    Now I can practically teach my students about gradient descent; very intuitive lessons here. Thanks a lot

  • @mobasshirbhuiyanshagor3611
    @mobasshirbhuiyanshagor3611 3 years ago

    Your voice and teaching skill both are soothing enough.

  • @chethan93
    @chethan93 5 years ago

    One of the best videos for beginning DNNs. It sets our psyche properly for all the things to come in Deep Neural Networks.

  • @robophil_
    @robophil_ 6 months ago

    I wish I had teachers like you in UNI. Thank you!!!

  • @somdubey5436
    @somdubey5436 4 years ago

    Best explanation of RNNs I found on YouTube. Thanks a tonne.

  • @kartikpodugu
    @kartikpodugu 5 years ago

    A must-watch video for everybody trying to understand RNNs. I really appreciate your work to make basic concepts simpler for the audience.

  • @babawaleojedapo7235
    @babawaleojedapo7235 4 years ago

    Hello Luis, please don't stop making these videos. Your NN series are awesome. I had to come back to comment on this. Thanks a lot man.

  • @MrNiceseb
    @MrNiceseb 4 years ago +2

    Can someone help @14:00: how does a sunny vector [1,0] get mapped to [1,1,1] and still be treated as sunny, when sunny is defined as [1,0]? Or is it that whichever entries are 1s let the sunny through?

    • @benthomas4624
      @benthomas4624 4 years ago

      My intuition is that the weather matrix simply expands the sunny / rainy vectors. So the [1,0] sunny vector becomes a [1,1,1,0,0,0] vector, and the [0,1] rainy vector becomes a [0,0,0,1,1,1] vector. These dimensions allow the matrix addition in the merge step. Functionally, the top 3 values in the expanded sunny vector being 1 means that the '2' that appears in the vector sum will always appear in the top half, i.e. the same food. When it is rainy, the bottom half is 1's, meaning the '2' will appear in the bottom half, and the next food in order will be made.
      The graphic is slightly confusing: the [1,0] isn't mapped just to [1,1,1], but to the entire 6x1 vector
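
This expansion-and-merge reading can be checked with a quick numpy sketch; the specific vectors below are illustrative assumptions in the spirit of the thread, not the video's exact numbers:

```python
import numpy as np

# "Weather" matrix that expands the 2-dim one-hot weather vector into
# 6 dimensions: sunny [1,0] -> [1,1,1,0,0,0], rainy [0,1] -> [0,0,0,1,1,1].
expand = np.array([[1, 0],
                   [1, 0],
                   [1, 0],
                   [0, 1],
                   [0, 1],
                   [0, 1]])

sunny = np.array([1, 0])
food = np.array([0, 1, 0, 0, 1, 0])  # hypothetical 6-dim food vector

# Merge step: expanded weather + food. The single '2' can only appear
# where the expanded weather vector has a 1, i.e. the top half when sunny.
merged = expand @ sunny + food
print(merged)  # [1 2 1 0 1 0]
```

Swapping in `rainy = np.array([0, 1])` pushes the 1s (and hence the possible '2') into the bottom half, matching the "next food in order" behavior described above.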

  • @Jabrils
    @Jabrils 7 years ago +16

    Aha, never mind, this was answered at 20:55. (Original, struck-through question: @11:50, when you show the food & weather matrices, practically speaking, how would these parameters be found? Via training your network, right? It's my understanding that these matrices represent the weights to, I guess I can call them, the first hidden-layer nodes; is this correct? Do you mind clarifying this a bit more please? I'd really like to make sure I understand the material, but I was born a visual learner haha :D)

  • @PedroTrujilloV
    @PedroTrujilloV 2 months ago

    Thanks!

  • @satrioem6162
    @satrioem6162 3 years ago +3

    3 years in AI engineering and I've never seen an interpretation of neural networks like this. Amazing, sir!

  • @young.4499
    @young.4499 5 years ago

    This is the best for beginners. You deserve more likes!

  • @jaynilpatel8700
    @jaynilpatel8700 7 years ago +9

    It really was an amazing video. It was really nice to see how such an esoteric topic was presented in a really simple way. Keep it going dude!

  • @grayyan5966
    @grayyan5966 2 years ago

    This is the best and most easily understood introduction I have ever heard. Fantastic!

  • @vishalmendekar2034
    @vishalmendekar2034 4 years ago

    You are one of the best tutors here. You make complex things look damn easy. Thanks a lot for all your videos

  • @younus6133
    @younus6133 6 years ago +4

    The best, easiest, and simplest explanation of RNNs.
    Keep up the great work. Thanks

  • @anummalik7733
    @anummalik7733 4 years ago

    I must say, what an excellent teacher you are. I had been searching this topic again and again and kept getting confused. Today I watched your video. Well done, it was clearly described. I am impressed, keep it up

  • @alfredolacayo5803
    @alfredolacayo5803 2 years ago

    Congrats Luis, what an awesome video! The concept of RNN was broken down to the bare minimum and the rest of the explanation stemmed from this simple principle, brilliant!

  • @Jabrils
    @Jabrils 7 years ago +93

    Luis! I love your NN series, but a question that threw me off a bit: @18:06 when you add the inputs, how did you get [0,1,0,1,2,1] when the first node is 1+0 & the second node is 0+0? Shouldn't it have equaled [1,0,0,1,2,1], or is there some other input that I am missing? I mean, ultimately it's irrelevant because after the non-linear function it transforms into a 0, but I just want to make sure I am not missing anything there aha.

    • @SerranoAcademy
      @SerranoAcademy  7 years ago +36

      Dang! Yes you're totally right, that's a typo, it should be 1,0,0,1,2,1... Thank you!
      And yes, also right that the non-linear function makes it 0 anyway, but yeah, I put the one in the wrong place.
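
The point that the stray 1 is harmless can be checked in a short sketch; the exact form of the video's non-linearity is an assumption here (entries that reach 2 become 1, everything else becomes 0):

```python
import numpy as np

def nonlinear(v):
    # Assumed squashing function: entries that reach 2 map to 1,
    # everything at or below 1 maps to 0.
    return (np.asarray(v) > 1).astype(int)

typo      = [0, 1, 0, 1, 2, 1]  # the vector shown in the video
corrected = [1, 0, 0, 1, 2, 1]  # Serrano's correction above

# Both collapse to the same one-hot vector, so the typo is harmless.
print(nonlinear(typo))       # [0 0 0 0 1 0]
print(nonlinear(corrected))  # [0 0 0 0 1 0]
```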

    • @Jabrils
      @Jabrils 7 years ago +14

      Phew, okay cool. & thanks for the video! I'd never come across a simple explainer on how to write an LSTM RNN & this did the trick for me, keep up the great work!

    • @remaithi
      @remaithi 6 years ago +18

      Jabrils I appreciate your recommendation for this video ;)

  • @edphi
    @edphi 3 years ago

    The best RNN tutorial, period. Thanks

  • @Molaga
    @Molaga 4 years ago

    This tutorial has reinforced my understanding and let me see it in a new light. Superb explanation. Very, very clear. Thanks very much.

  • @vijaypalmanit
    @vijaypalmanit 4 years ago

    Bingo ! My journey to understand RNN intuitively finally ends, thanks for this great video.

  • @pratimakalyankar2447
    @pratimakalyankar2447 4 years ago +1

    Great explanation sir, in very simple language. That's the sign of the best teacher

  • @patrickmatimbe18
    @patrickmatimbe18 6 years ago +3

    Thank you so much brother, you're a genius. You've enlightened my mind about RNNs. I've watched plenty of videos trying to figure out what's going on, but your video gave me hope. Thank you so much.

  • @64_bit80
    @64_bit80 6 years ago

    Thank you for making this video! Most articles on RNNs didn't explicitly explain how two inputs were added to make a proper output

  • @ngozik-opara4373
    @ngozik-opara4373 2 years ago

    Thank you for demystifying the RNN. It is really beginner-friendly, thank you.

  • @harrytaller9403
    @harrytaller9403 6 years ago

    This is the best video this user has seen that explains complex things in a simple way

  • @darshanbari2439
    @darshanbari2439 4 years ago

    The best YouTube video I've ever seen

  • @GiladIlani
    @GiladIlani 2 years ago

    Best explanation I found for RNN.

  • @Cruz-94
    @Cruz-94 5 years ago

    Of all the explanations I've seen on the internet so far, yours is by far the best. Unlike the conventional approach, you take care to clearly explain this concept, which is initially extremely abstract. A lot of complex material that I couldn't claim to understand, I now realize I'm starting to grasp. Thank you very much!

  • @Charles-rn3ke
    @Charles-rn3ke 5 years ago

    Hi. I think the thing that makes RNNs useful is when your output is of the same type as the input. Whether the input is a sequence is not that crucial, because a regular CNN can also extract features from sequential input properly.

  • @chandlerlabs2478
    @chandlerlabs2478 3 years ago

    Very nice knowledge transfer Luis! This is my first week of introduction to Neural Networks! I followed you completely, until we get into the food matrix. 13:25 minute mark and I "tapped out." This (simpleman) explanation encourages me to stick with my learning and I'm sure after watching this a few more times, I will gain a better understanding. Great job!! Thank you. P.S. Bought your book;-)

  • @kpmaynard
    @kpmaynard 7 years ago

    Thank you very much for that prompt response, Luis!! You really have the knack for clarifying these fundamental issues. I understand clearly now the motivation for recurrent neural networks.

  • @krishnateja4688
    @krishnateja4688 7 years ago

    Best explanation on RNN I have seen so far. Thanks for doing this

  • @tuhinmukherjee8141
    @tuhinmukherjee8141 4 years ago

    Sir, this is really amazing. Loved this example, because Neural Networks as linear transformations sound so cool!

  • @behysun
    @behysun 6 years ago

    I loved the introduction with your pictures; that's exactly what happened to me.

  • @afnanalali4285
    @afnanalali4285 7 years ago

    Your way of explaining is very cool; it's as if you open a person's mind and put the material inside. Please, we want more videos about deep learning applications like object tracking, and even videos on the programming needed to build deep learning systems

  • @lathifahdhiya723
    @lathifahdhiya723 5 years ago

    Whoa, this helps a lot. I watched a bunch of videos about this and kept getting confused. Glad I found this video. Thank you!

  • @suryabh7387
    @suryabh7387 6 years ago

    Amazing... wonderful... What a great teacher you are!! A lot of prep is required to explain a complicated subject in a few minutes with an easy example.

  • @diegomen
    @diegomen 3 years ago

    Congrats Luis! You explained quite complex problems in a very intuitive and easy-to-understand way

  • @drt-on-ai
    @drt-on-ai 5 years ago

    Hands down the best vid I have ever seen. Great job mate. Great job.

  • @susovan97
    @susovan97 6 years ago +3

    Thanks for your video, I did get a friendly introduction to RNNs :) It reminds me so much of Hidden Markov Models (HMMs for short); here, what he cooks is the hidden state and the weather is the observation in your diagram at 10:39 of this video. I guess I'll do some searching on how HMMs and RNNs are connected! Your comments are most welcome here!

  • @bhavikdudhrejiya4478
    @bhavikdudhrejiya4478 4 years ago

    Awesome presentation of RNNs. If that's the way people learn ML, then ML becomes a piece of cake.
    I've become a big fan of yours, bro.

  • @puraana1940
    @puraana1940 4 years ago +1

    Thank you for the video, your explanation is clear as crystal

  • @aza5338
    @aza5338 7 years ago

    A very clear explanation. You did a lot of work to come out with really clear teaching. Thank you very much

  • @rembautimes8808
    @rembautimes8808 3 years ago

    This video is great and a must-view for those interested in AI. But it's good to watch it while having some food 🥘 in case the examples of the perfect roommate make you jealous or hungry

  • @GeeksfromIndia
    @GeeksfromIndia 1 year ago

    It's 2023, and I still found this video very useful and interesting.

  • @raycrooks1240
    @raycrooks1240 5 years ago

    I have searched and searched and searched, and at last, an absolutely fantastic introductory suite of courses to dip your toes into that explains and shows what ML and RNNs are all about. My next challenge is to find the easiest way to build my own RNN or LSTM, and I would welcome any suggestions on how best to go about this challenge. Again, thank you so much for the time and effort you have clearly put into this.

  • @AnkitYadav-lf1ud
    @AnkitYadav-lf1ud 6 years ago +4

    This was an outstanding explanation of RNN... Thanks for making this :)

  • @thankgoditsover
    @thankgoditsover 4 years ago

    I am really enjoying learning from your Neural Networks playlist. Thank you so much for such amazing teaching and great quality content.

  • @xXLanyuzAnlunXx
    @xXLanyuzAnlunXx 6 years ago +9

    Love it! I discovered that a NN can be represented as a matrix.

    • @Grandremone
      @Grandremone 4 years ago

      Didn't know this either! Is this always the case?

  • @nurlanyusifli4386
    @nurlanyusifli4386 3 years ago

    I was looking for this the whole day

  • @zbynekba
    @zbynekba 5 years ago

    Especially the mapping between the operations on matrices and the network of nodes helps visualize the topic. Great job, sir, indeed! Thank you

  • @johannahultgren2887
    @johannahultgren2887 1 year ago

    Best video I found that explained RNN, thank you🙏🙏🙏

  • @pvsplpakhi
    @pvsplpakhi 4 years ago

    Excellent video. I understood it only after watching this one. I tried many earlier. Good service

  • @ancuta8430
    @ancuta8430 4 years ago

    Great explanation!!! It really can't get simpler than this. I've watched most of the videos on the subject, but this was the one that really made it clear. Thanks

  • @alexvass
    @alexvass 2 years ago

    Thanks

    • @alexvass
      @alexvass 2 years ago

      also great book

    • @SerranoAcademy
      @SerranoAcademy  2 years ago

      Thank you so much for your kind donation, Alex!! Have a great day!

  • @cameronscott4964
    @cameronscott4964 7 years ago +1

    Thank you for making this video! It's allowed me to understand RNNs in terms of matrices much more clearly!

  • @HazemAzim
    @HazemAzim 2 years ago +1

    Definitely one of the best videos, if not the best, on RNN concepts. It would be a great addition to link the intuition with mathematical rigor: how to map the last example to mathematical notation (input x, hidden state h(t), and predicted output ŷ(t)), and, more importantly, how the manually drafted weight matrices relate to Whh, Wxh, and the known equations of a vanilla RNN. Finally, it would be a great pedagogical addition to train an RNN using Keras / TF on this toy example, extract the weights, and compare them with the manually presented weights. This would greatly add value to your explanations
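
For readers who want that mapping now, a single vanilla RNN step in the standard notation the comment mentions can be sketched in numpy. The dimensions and random weights below are placeholders, not the video's hand-built matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): 2-dim input x (weather), 3-dim hidden
# state h, 3-dim output y (food).
n_x, n_h, n_y = 2, 3, 3
Wxh = rng.standard_normal((n_h, n_x))  # input-to-hidden weights
Whh = rng.standard_normal((n_h, n_h))  # hidden-to-hidden weights
Why = rng.standard_normal((n_y, n_h))  # hidden-to-output weights

def rnn_step(x_t, h_prev):
    """Vanilla RNN step: h_t = tanh(Whh h_{t-1} + Wxh x_t), y_t = Why h_t."""
    h_t = np.tanh(Whh @ h_prev + Wxh @ x_t)
    return h_t, Why @ h_t

h = np.zeros(n_h)  # initial hidden state
for x_t in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:  # sunny, then rainy
    h, y = rnn_step(x_t, h)
print(h.shape, y.shape)  # (3,) (3,)
```

The two matrices drawn in the video correspond to Wxh (weather into the merge) and Whh (yesterday's food carried forward); training in Keras/TF would learn these values instead of writing them by hand.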