A Short Introduction to Entropy, Cross-Entropy and KL-Divergence

  • Published 15 Nov 2024

COMMENTS • 470

  • @jennyread9464
    @jennyread9464 6 years ago +566

    Fantastic video, incredibly clear. Definitely going to subscribe!
    I do have one suggestion. I think some people might struggle a little bit around 2m22s where you introduce the idea that if P(sun)=0.75 and P(rain)=0.25, then a forecast of rain reduces your uncertainty by a factor of 4. I think it's a little hard to see why at first. Sure, initially P(rain)=0.25 while after the forecast P(rain)=1, so it sounds reasonable that that would be a factor of 4. But your viewers might wonder why you can’t equally compute this as, initially P(sun)=0.75 while after the forecast P(sun)=0. That would give a factor of 0!
    You could talk people through this a little more, e.g. say imagine the day is divided into 4 equally likely outcomes, 3 sunny and 1 rainy. Before, you were uncertain about which of the 4 options would happen but after a forecast of rain you know for sure it is the 1 rainy option - that’s a reduction by a factor of 4. However after a forecast of sun, you only know it is one of the 3 sunny options, so your uncertainty has gone down from 4 options to 3 - that’s a reduction by 4/3.
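
    A quick numerical check of the reduction factors above (a minimal Python sketch, not from the video, using the P(sun)=0.75 and P(rain)=0.25 from the example):

      import math

      p_sun, p_rain = 0.75, 0.25

      # A forecast of rain narrows 4 equally likely "quarter-day" outcomes down to 1.
      print(1 / p_rain, -math.log2(p_rain))  # 4.0, 2.0 bits of information
      # A forecast of sun only narrows the 4 outcomes down to the 3 sunny ones.
      print(1 / p_sun, -math.log2(p_sun))    # 1.333..., ~0.415 bits of information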

    • @AurelienGeron
      @AurelienGeron  6 years ago +60

      Thanks Jenny! You're right, I went a bit too fast on this point, and I really like the way you explain it. :)

    • @god-son-love
      @god-son-love 6 years ago +1

      Shouldn't one use information gain to check the extent of reduction? IG = H(prior) - H(posterior) = [-(3/4)log2(3/4) - (1/4)log2(1/4)] - [-(1)log2(1) - (0)log2(0)] ≈ 0.8113 - 0 = 0.8113 bits
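
      For reference, a small Python check of this information gain (a sketch, using the convention 0·log2(0) = 0):

        import math

        def entropy(ps):
            # Shannon entropy in bits, treating 0*log2(0) as 0
            return -sum(p * math.log2(p) for p in ps if p > 0)

        prior = [0.75, 0.25]    # P(sun), P(rain) before the forecast
        posterior = [1.0, 0.0]  # after a forecast of rain
        print(entropy(prior) - entropy(posterior))  # ~0.8113 bits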

    • @dlisetteb
      @dlisetteb 6 years ago +4

      thank youuuuuuuuuuuuuuuuu

    • @rameshmaddali6208
      @rameshmaddali6208 6 years ago +19

      Actually I understood the concept better from your comment than from the video itself :) thanks a lot

    • @maheshwaranumapathy
      @maheshwaranumapathy 6 years ago +8

      Awesome, great insight. I did struggle to get it at first. Checked out the comments and bam! Thanks :)

  • @ArxivInsights
    @ArxivInsights 6 years ago +260

    As a Machine Learning practitioner & YouTube vlogger, I find these videos incredibly valuable! If you want to freshen up on those so-often-needed theoretical concepts, your videos are much more efficient and clear than reading through several blogposts/papers. Thank you very much!!

    • @AurelienGeron
      @AurelienGeron  6 years ago +19

      Thanks! I just checked out your channel and subscribed. :)

    • @pyeleon5036
      @pyeleon5036 6 years ago +2

      I like your video too! Especially the VAE one

    • @fiddlepants5947
      @fiddlepants5947 5 years ago +5

      Arxiv, it was actually your video on VAE's that encouraged me to check out this video for KL-Divergence. Keep up the good work, both of you.

    • @grjesus9979
      @grjesus9979 4 years ago

      Thank you, at first I messed up trying to understand, but after reading your comment I understand it. Thank you! 😊

  • @revimfadli4666
    @revimfadli4666 4 years ago +459

    This feels like a 1.5-hour course conveyed in just 11 minutes, i wonder how much entropy it has :)

    • @grjesus9979
      @grjesus9979 3 years ago +2

      hahaha

    • @anuraggorkar5595
      @anuraggorkar5595 3 years ago +1

      Underrated Comment

    • @klam77
      @klam77 3 years ago +3

      ahhh....too clever. the comment has distracted my entropy from the video. Negative marks for you!

    • @Darkev77
      @Darkev77 3 years ago

      @@klam77 Could you elaborate on his joke please?

    • @ashrafg4668
      @ashrafg4668 3 years ago +4

      @@Darkev77 The idea here is that most other resources (videos, blogs) take a very long time (and more importantly say a lot of things) to convey the ideas that this video did in a short time (and with just the essential ideas). This video, thus, has low entropy (vs most other resources that have much higher entropy).

  • @xintongbian
    @xintongbian 6 years ago +43

    I've been googling KL Divergence for some time now without understanding anything... your video conveys that concept effortlessly. beautiful explanation

  • @agarwaengrc
    @agarwaengrc 1 year ago +2

    Haven't seen a better, clearer explanation of entropy and KL-Divergence, ever, and I've studied information theory before, in 2 courses and 3 books. Phenomenal, this should be made the standard intro for these concepts, in all university courses.

  • @SagarYadavIndia
    @SagarYadavIndia 1 year ago

    Beautiful short video, explaining the concept that is usually a 2 hour explanation in about 10 minutes.

  • @yb801
    @yb801 1 year ago +2

    Thank you, I have always been confused about these three concepts; you make them really clear for me.

  • @Dinunzilicious
    @Dinunzilicious 3 years ago

    Incredible video, easily one of the top three I've ever stumbled across in terms of concise educational value. Also love the book, great for anyone wanting this level of clarity on a wide range of ML topics.
    Not sure if this will help anyone else, but I was having trouble understanding why we choose 1/p as the "uncertainty reduction factor," and not, say, 1-p or some other metric. What helped me gain an intuition for this was realizing 1/p is the number of equally likely outcomes in a uniform distribution where every event has probability p. So the information, -log(p), is how many bits that event would be "worth" were it part of a uniform distribution. This uniform distribution is also the maximum entropy distribution that event could possibly come from given its probability... though you can't reference entropy without first explaining information.
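
    To make the point above concrete, a small sketch (not from the video; the probability is just an example): a uniform distribution in which every outcome has probability p contains 1/p outcomes, and encoding one of them takes log2(1/p) = -log2(p) bits.

      import math

      p = 1 / 8                               # example probability of one event
      n_outcomes = 1 / p                      # size of the uniform distribution it "belongs" to
      bits = math.log2(n_outcomes)            # bits needed to encode one of those outcomes
      print(n_outcomes, bits, -math.log2(p))  # 8.0, 3.0, 3.0 -> the information of the event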

  • @metaprog46and2
    @metaprog46and2 4 years ago +2

    Phenomenal explanation that turns a seemingly esoteric concept into one that's simple & easy to understand. Great choice of examples too. Very information-dense yet super accessible for most people (I'd imagine).

  • @JakeMiller2020
    @JakeMiller2020 4 years ago

    I always seem to come back to watch this video every 3-6 months, when I forget what KL Divergence is conceptually. It's a great video.

  • @billmo6824
    @billmo6824 2 years ago

    Really, I definitely cannot come up with an alternative way to explain this concept more concisely.

  • @chenranxu6941
    @chenranxu6941 3 years ago +1

    Wow! It's just incredible to convey so much information while still keeping everything simple & well-explained, and within 10 min.

  • @sushilkhadka8069
    @sushilkhadka8069 4 months ago

    Wow, best explanation ever. I found this while I was in college, and I come back once a year just to refresh my intuition.

  • @011azr
    @011azr 6 years ago +4

    Sir, you have a talent for explaining stuff in a crystal clear manner. You turn something that is usually explained with a huge pile of math equations into something this simple. Great job, please continue making more YouTube videos!

  • @glockenspiel_
    @glockenspiel_ 4 years ago +2

    Thank you, very well explained! I decided to get into machine learning during this hard quarantine period, though I didn't have high expectations. Thanks to the clear and friendly explanations in your book I am learning, improving and, not least, enjoying it a lot. So thank you so much!

  • @Rafayak
    @Rafayak 6 years ago +24

    Finally, someone who understands, and doesn't just regurgitate the Wikipedia page :) Thanks a lot!

  • @summary7428
    @summary7428 3 years ago

    This is by far the best and most concise explanation of the fundamental concepts of information theory that we need for machine learning.

  • @aa-xn5hc
    @aa-xn5hc 6 years ago +41

    you are a genius in creating clarity

  • @陈亮宇-m1s
    @陈亮宇-m1s 6 years ago +8

    I came to find Entropy, but I received Entropy, Cross-Entropy and KL-Divergence. You are so generous!

  • @jdm89s13
    @jdm89s13 5 years ago +1

    This 11-ish minute presentation so clearly and concisely explained what I had a hard time understanding from a one hour lecture in school. Excellent video!

  • @michaelzumpano7318
    @michaelzumpano7318 2 years ago +3

    Wow! This was the perfect mix of motivated examples and math utility. I watched this video twice. The second time I wrote it all out. 3 full pages! It’s amazing that you could present all these examples and the core information in ten minutes without it feeling rushed. You’re a great teacher. I’d love to see you do a series on Taleb’s books - Fat Tails and Anti-Fragility.

  • @paulmendoza9736
    @paulmendoza9736 1 year ago

    I want to like this video 1000 times. To the point, no BS, clear, understandable.

  • @s.r8081
    @s.r8081 4 years ago +1

    Fantastic! This short video really explains the concepts of entropy, cross-entropy, and KL-Divergence clearly, even if you knew nothing about them before.
    Thank you for the clear explanation!

  • @alirezamarahemi2352
    @alirezamarahemi2352 2 years ago

    Not only is this video fantastic at explaining the concepts, but the book "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems" (O'Reilly Media, 2019) by the same author (Aurélien Géron) is also the best book I've studied on the subject of machine learning.

  • @AladinxGonca
    @AladinxGonca 6 months ago +1

    You are the most talented tutor I've ever seen

  • @GreenCowsGames
    @GreenCowsGames 1 year ago

    I am new to information theory and computer science in general, and this is the best explanation I could find about these topics by far!

  • @frankcastle3288
    @frankcastle3288 3 years ago

    I have been using cross-entropy for classification for years and I just understood it. Thanks Aurélien!

  • @khaledelsayed762
    @khaledelsayed762 2 years ago

    Very elegant, which shows how cognizant the presenter is.

  • @hassanmatout741
    @hassanmatout741 6 years ago +2

    This channel will skyrocket, no doubt. Thank you so much! Clear, visualized and well explained at a perfect pace! Everything is high quality! Keep it up, sir!

  • @gowthamramesh2443
    @gowthamramesh2443 6 years ago +11

    Kinda feels like 3Blue1Brown's version of Machine learning Fundamentals. Simply Amazing

    • @AurelienGeron
      @AurelienGeron  6 years ago +5

      Thanks a lot, I'm a huge fan of 3Blue1Brown! 😊

  • @fahdciwan8709
    @fahdciwan8709 4 years ago +1

    Phew!! As a newbie to Machine Learning without a background in maths, this video saved me; I never expected to grasp the concept of entropy otherwise.

  • @jackfan1008
    @jackfan1008 6 years ago +1

    This explanation is absolutely fantastic. Clear, concise and comprehensive. Thank you for the video.

  • @ilanaizelman3993
    @ilanaizelman3993 5 years ago

    Thanks! For people who are looking for the ML connection: when the predicted probability of the true class is 0.25, the cross-entropy loss is -log(0.25).
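
    As a tiny illustration of this (a sketch; the 0.25 is just an example predicted probability for the true class):

      import math

      q_true_class = 0.25
      loss_bits = -math.log2(q_true_class)  # 2.0 bits
      loss_nats = -math.log(q_true_class)   # ~1.386 nats (what most ML libraries report)
      print(loss_bits, loss_nats)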

  • @fberron
    @fberron 3 years ago

    Finally I understood Shannon's theory of information. Thank you
    Aurélien

  • @CowboyRocksteady
    @CowboyRocksteady 1 year ago

    I'm loving the slides and explanation. I noticed the name in the corner and thought, oh nice, I know that name. Then suddenly... it's the author of that huge book I love!

  • @achillesarmstrong9639
    @achillesarmstrong9639 5 years ago +2

    This is the 3rd time I've watched this video: in April, September, and December 2018. The first time I watched it, I thought I understood the topic, but now I know that I knew nothing back then.

  • @colletteloueva13
    @colletteloueva13 1 year ago +2

    One of the most beautiful videos I've watched, and I actually understood the concept :')

  • @bingeltube
    @bingeltube 6 years ago +2

    Highly recommended! Finally, I found someone who could explain these concepts of entropy and cross-entropy in a very intuitive way.

  • @sagnikbhattacharya1202
    @sagnikbhattacharya1202 6 years ago +3

    You make the toughest concepts seem super easy! I love your videos!!!

  • @maryamzarabian4617
    @maryamzarabian4617 2 years ago

    Thank you for the useful video, and also really thanks for your book. You make very difficult machine learning concepts feel like a piece of cake.

  • @voraciousdownloader
    @voraciousdownloader 4 years ago +1

    Really the best explanation of KL divergence I have seen so far !! Thank you.

  • @DailyHomerClips
    @DailyHomerClips 5 years ago

    This is by far the best description of those 3 terms, I can't be thankful enough.

  • @misnik1986
    @misnik1986 3 years ago

    Thank you so much, Monsieur Géron, for this simple and crystal-clear explanation.

  • @sanjaykrish8719
    @sanjaykrish8719 4 years ago +3

    Aurelien has a knack for making things simpler. Check out his Deep Learning using TensorFlow course on Udacity. It's amazing.

  • @Dr.Roxirock
    @Dr.Roxirock 1 year ago +1

    I really enjoyed the way you explain it. It's so inspiring to watch and learn difficult concepts from the author of such an incredible book in the ML realm. I wish you could teach other concepts via video as well.
    Cheers,
    Roxi

  • @ashutoshnirala5965
    @ashutoshnirala5965 4 years ago

    Thank you for such a wonderful and to-the-point video. Now I know: entropy, cross-entropy, KL divergence, and also why cross-entropy is such a good choice as a loss function.

  • @-long-
    @-long- 5 years ago

    Guys, this is the best explanation of Entropy, Cross-Entropy and KL-Divergence.

  • @samzhao1827
    @samzhao1827 4 years ago

    Very few people can explain like you, to be honest! I read so many decision tree tutorials and they are actually all talking about the same thing (information gain), but after reading their articles I still had zero understanding. Big thanks for this video!

  • @timrault
    @timrault 6 years ago +5

    Hey Aurélien, thanks so much for this great video! I have a few questions:
    1/ I struggle with the concept of uncertainty. In the example where p(sun)=0.75 and p(rain)=0.25, what would be my uncertainty?
    2/ At 6:42, I don't understand why using 2 bits for the sunny weather means that we are implicitly predicting that it'll be sunny every four days on average.
    3/ Would it be a bad idea to try to use a cross-entropy loss for something different from classification (i.e. where the targets wouldn't be one-hot vectors)? I think there is a possibility that we could find a predicted distribution q different from the true distribution p which would also minimise the value of the cross-entropy, but I'm not sure.
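
    Regarding question 3 above, a quick numerical check (a sketch, not from the video): for a fixed target distribution p, the cross-entropy H(p, q) = -Σ p·log q is minimized only at q = p (Gibbs' inequality), so soft, non-one-hot targets still give a well-behaved loss.

      import numpy as np

      def cross_entropy(p, q):
          # H(p, q) = -sum_i p_i * log2(q_i); assumes q_i > 0 wherever p_i > 0
          p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
          return -np.sum(p[p > 0] * np.log2(q[p > 0]))

      p = np.array([0.5, 0.3, 0.2])  # soft (non-one-hot) target distribution
      for name, q in [("q = p", p),
                      ("q = uniform", np.array([1/3, 1/3, 1/3])),
                      ("q = skewed", np.array([0.7, 0.2, 0.1]))]:
          print(name, cross_entropy(p, q))  # q = p gives the smallest value, equal to H(p)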

  • @YYchen713
    @YYchen713 2 years ago

    Fantastic video! Now all the dots are connected! I had been using this loss function for NN machine learning without knowing the math behind it! This is so enlightening!

  • @jamesjenkins9480
    @jamesjenkins9480 2 years ago +1

    I've learned about this before, but this is the best explanation I've come across. And it was a helpful review, since it's been a while since I used this. Well done.

  • @swapanjain892
    @swapanjain892 6 years ago +1

    You have no idea how much this video has helped me. Thanks for making such quality content, and keep creating more.

  • @romanmarakulin7448
    @romanmarakulin7448 5 years ago +1

    Thank you so much! Not only did it help me understand KL-Divergence, it also makes the formula easy to remember. From now on I will place the signs in the right places. Keep it up!

  • @areejabdu3125
    @areejabdu3125 6 years ago

    This explanation really helps the learner understand such elusive scientific concepts. Thanks for the clear explanation!!

  • @LC-lj5kd
    @LC-lj5kd 6 years ago +2

    Your tutorials are always unbeatable: quite explicit, with great examples. Thanks for your work.

  • @sagarsaxena7202
    @sagarsaxena7202 5 years ago +1

    Great work on the explanation. I had been pretty confused by this concept and by the role of information theory in ML. This video does the trick, clarifying the concepts while connecting information theory to how it is used in ML. Thanks very much for the video.

  • @ykkim77
    @ykkim77 4 years ago +1

    This is the best explanation of the topics that I have ever seen. Thanks!

  • @matthewwilson2688
    @matthewwilson2688 6 years ago +2

    This is the best explanation of entropy and KL I have found. Thanks

  • @akshiwakoti7851
    @akshiwakoti7851 4 years ago

    Hats off! One of the best teachers ever! This definitely helped me better understand it both mathematically and intuitively just in a single watch. Thanks for reducing my 'learning entropy'. My KL divergence on this topic is near zero now. ;)

  • @m.y.s4260
    @m.y.s4260 4 years ago +7

    5:12 A tiny typo: the entropy should have a negative sign

  • @robinranabhat3125
    @robinranabhat3125 6 years ago +92

    You are a 3Blue1Brown kind of guy. Nowadays I see a lot of YouTubers making machine learning videos by repeating the words found in research papers and Wikipedia. You are different.

    • @bhargavasavi
      @bhargavasavi 4 years ago +2

      Grant Sanderson is like the Morgan Freeman of visual Mathematics.....I wish his videos existed during my earlier days in college

  • @VincentKun
    @VincentKun 1 year ago +1

    OK, maybe I should pay more attention when reading my books, but when I heard here that cross-entropy is entropy + KL divergence, it made sense. Then, when I read my notes, I saw I had written something similar without even realizing how big a deal it was.

  • @mohamadnachabe1
    @mohamadnachabe1 5 years ago +1

    This was the best intuitive explanation of entropy and cross entropy I've seen. Thanks!

  • @thegamersschool9978
    @thegamersschool9978 2 years ago

    I am reading your book! And oh man, what a book!!! At first I wondered why the book and the video have exactly the same example, until I saw your book in the later part of the video and realized it's you. It's so great to listen to you after reading you!!

  • @ekbastu
    @ekbastu 5 years ago +2

    I came here to learn how to correctly pronounce his name :).
    The content is simply great. Thanks a lot.

  •  1 year ago +1

    The best video on cross-entropy on YouTube so far.

  • @qeithwreid7745
    @qeithwreid7745 4 years ago +2

    Your book rocks.
    EVERYONE BUY THE HANDS ON GUIDE
    Edit: in fact, if you can't afford it, contact me and make a case; I might buy it for you.

  • @rampadmanabhan4258
    @rampadmanabhan4258 4 years ago +2

    Great video! However, I have a doubt about the part from around 7:11 onwards. I don't understand the point where you say that "the code doesn't use messages starting with 1111, and hence the sum of predicted probabilities is not 1". Could you explain this?
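
    One way to see this (a sketch; the codewords below are an assumed prefix code of the kind used in the video, not necessarily the exact one): a codeword of length L implicitly predicts a probability of 2^(-L), and if no codeword ever starts with 1111, those implicit probabilities cannot add up to 1.

      # Assumed example: a prefix code in which no codeword starts with 1111
      code = {"A": "0", "B": "10", "C": "110", "D": "1110"}

      implicit_probs = {state: 2 ** -len(word) for state, word in code.items()}
      print(implicit_probs)                # A: 0.5, B: 0.25, C: 0.125, D: 0.0625
      print(sum(implicit_probs.values()))  # 0.9375 < 1; the missing 0.0625 is the unused 1111... prefix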

  • @aiyifei4732
    @aiyifei4732 4 years ago

    Thanks, the explanation is clear. I found it clean and easy to understand compared with my lecture notes. I don't even think they mentioned the history and derivation/origin.

  • @unleasedflow8532
    @unleasedflow8532 3 years ago

    Nicely conveyed what there is to learn about the topic. I think I absorbed it all. Best tutorial, keep dropping videos like this.

  • @se123acabaron
    @se123acabaron 5 years ago

    Fantastic video! It made me understand and tie together many "loose" concepts. Thank you very much for this contribution!

  • @ramensusho
    @ramensusho 5 months ago

    The no. of bits I received is way higher than I expected!!
    Nice video

  • @salman3112
    @salman3112 6 years ago +2

    Your channel has become one of my favorite channels. Your explanation of CapsNet and now this is just amazing. I am going to get your book too. Thanks a lot. :)

  • @DEEPAKKUMAR-sk5sq
    @DEEPAKKUMAR-sk5sq 5 years ago +1

    Please do a video on 'PAC learning'. It seems very complex. Your way of explanation can make it easy!!

  • @GuilhermeKodama
    @GuilhermeKodama 5 years ago +1

    The best explanation I've ever had of the topic. It was really insightful.

  • @deteodskopje
    @deteodskopje 5 years ago

    Very nice. Really short, yet it clearly gets the point of these concepts across. Subscribed.
    I was really excited when I found this channel. I mean, the book Hands-On Machine Learning is maybe the best book you can find these days.

  • @tonywang7933
    @tonywang7933 3 months ago

    Still can't believe these can be taught in such a short video.

  • @shuodata
    @shuodata 5 years ago

    Best Entropy and Cross-Entropy explanation I have ever seen

  • @meerkatj9363
    @meerkatj9363 6 years ago +1

    I've seen all your videos now. You've taught me a lot of things, and these were some good moments. Can't wait for more. Thanks so much.

  • @julioreyram
    @julioreyram 3 years ago

    I'm amazed by this video, you are a gifted teacher.

  • @elvisng1977
    @elvisng1977 3 years ago

    This video is so clear and so well explained, just like his book!

  • @EinSteinLiT
    @EinSteinLiT 6 years ago

    Very clear and well-structured explanation. Your book is great, too! Thank you very much!

  • @ramonolivier57
    @ramonolivier57 4 years ago +1

    Excellent explanation and discussion. Thank you very much!!

  • @helelemamayku6302
    @helelemamayku6302 6 years ago +1

    Normally when I like a video, I just click the like button. Since this is sooooo helpful, I will also leave a comment to thank you for making this.

  • @leastactionlab2819
    @leastactionlab2819 4 years ago +1

    Great video to learn interpretations of the concept of cross-entropy.

  • @MrFurano
    @MrFurano 6 years ago

    To-the-point and intuitive explanation and examples! Thank you very much! Salute to you!

  • @danyalkhaliq915
    @danyalkhaliq915 4 years ago

    Super clear... never have I heard this explanation of Entropy and Cross-Entropy!

  • @Vladeeer
    @Vladeeer 6 years ago

    I have that book, didn't realize you wrote it until now.

  • @laura_uzcategui
    @laura_uzcategui 5 years ago

    Really good explanation, the visuals were also great for understanding! Thanks Aurelien.

  • @alyalyfahmy
    @alyalyfahmy 6 years ago +1

    Thank you very much for this excellent video. Looking for a similar one on the topic of Expectation Maximization.

    • @AurelienGeron
      @AurelienGeron  6 years ago +2

      Thanks Aly! Basically, you can think of EM as a generalization of K-Means.
      K-Means is a clustering algorithm that works like this: first you randomly select k points called "centroids" (there are various ways to do that, but the simplest option is to pick k instances randomly from the dataset and place the centroids there). Then you alternate two steps until convergence: (1) assign each instance to the closest centroid, (2) update each centroid by moving it to the mean of the instances that are assigned to it. I recommend you search for an animation of this process, it's really quite simple, fast and often very efficient. This is guaranteed to converge, since both steps always reduce the mean squared distance between the instances and their closest centroid (this number is called the "inertia"). Unfortunately, the algorithm may converge to a local optimum, so you would typically repeat the whole process multiple times and pick the best solution (i.e., the one with the lowest inertia).
      Okay, now EM is basically the same idea, but instead of just searching for the cluster centers, the algorithm also tries to find each cluster's density, size, shape and orientation. Typically, we assume that the clusters are generated from a number of Gaussian distributions (this is called a Gaussian Mixture Model), so basically the clusters look like ellipsoids. Like K-Means, the EM algorithm alternates between two steps: the Expectation step (assigning instances to clusters), and the Maximization step (updating the cluster parameters). However, there are a few differences: during the Expectation step, EM uses soft clustering rather than hard clustering: this means that each instance is given a weight for each cluster, rather than being assigned to the closest cluster. Specifically, the algorithm estimates (using the current cluster parameters) the probability that each instance was generated by each cluster (this is called the cluster's "responsibility" for that instance). Next, the Maximization step updates the cluster parameters, i.e., the centroid, the covariance matrix (which determines the ellipsoid's size, shape and orientation), and the cluster's weight (basically how many instances it contains relative to the other clusters; you can think of it as the cluster's density). For example, to update a cluster's centroid, the algorithm computes a weighted mean of all the instances, using the cluster's responsibilities for the weights (so if the algorithm estimated that a particular instance had a very small probability of belonging to this cluster, then it will not affect the update much).
      To summarize: EM is very much like K-Means, but using soft-clustering, and based on a probabilistic model that allows it to capture not only each cluster's center, but also its size, shape and orientation. Check out Scikit-Learn's user guide on GaussianMixture for more details. Hope this helps! :)
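
      A minimal sketch of the K-Means vs. Gaussian-Mixture/EM comparison described above, using scikit-learn (the toy data is made up for illustration):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture

        # Toy 2D dataset: two blobs of different size and spread
        rng = np.random.default_rng(42)
        X = np.vstack([
            rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),
            rng.normal(loc=[3.0, 3.0], scale=1.0, size=(200, 2)),
        ])

        # K-Means: hard assignments, learns only the centroids
        kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
        print(kmeans.cluster_centers_)

        # EM on a Gaussian Mixture: soft assignments, learns means, covariances and weights
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=42).fit(X)
        print(gmm.means_, gmm.weights_)
        print(gmm.predict_proba(X[:3]))  # per-cluster "responsibilities" for the first 3 points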

  • @JoeVaughnFarsight
    @JoeVaughnFarsight 1 year ago

    Thank you Aurélien Géron, that was a very beautiful presentation!

  • @sajil36
    @sajil36 5 years ago

    Dear Aurélien Géron,
    I have the following questions. It would be great if you could answer these as well.
    1. What about continuous systems, where the set of possible states is not discrete? Is it possible to use entropy in such cases?
    2. What if we have no idea about the probability distribution of the weather states? In that case, how can we assign more bits to rare events and fewer bits to frequent events?
    3. In the cross-entropy calculation, why is the same number of bits assumed for each state rather than a varying number of bits (more bits for rare events and fewer bits for frequent events)?

  • @chinmaym92
    @chinmaym92 6 years ago +2

    I rarely comment on videos, but this video is so good. I just couldn't resist. Thank you so much for the video. :)

  • @MrMijertone
    @MrMijertone 6 years ago +1

    I had to find a word for how well you explain. Perspicuous. Thank you.

    • @AurelienGeron
      @AurelienGeron  6 years ago

      I just learned a new word, thanks James! :)

  • @davidbeauchemin3046
    @davidbeauchemin3046 6 years ago

    Awesome video, you made the concept of entropy so much clearer.

  • @vaishanavshukla5199
    @vaishanavshukla5199 4 years ago +2

    great understanding
    and very good mentor

  • @andrewtwigg
    @andrewtwigg 3 years ago

    Thanks for the explanation, very clear and complements your excellent book

  • @mankaransingh2599
    @mankaransingh2599 5 years ago +3

    Lol, I was just reading your book when I searched for 'cross entropy', and boom, I never knew you had a YouTube channel too!

    • @AurelienGeron
      @AurelienGeron  5 years ago +3

      Haha, I hope you enjoy it! :) I haven't posted a video in months, because I've been busy moving to Singapore and writing the 2nd edition of my book, but as soon as I finish the book I'll get back to posting videos!

    • @mankaransingh2599
      @mankaransingh2599 5 years ago +2

      ​@@AurelienGeron Good luck with that ! You are a great teacher.

  • @PerisMartin
    @PerisMartin 6 years ago

    Your explanations are so much better than those of other "famous" ML vloggers (...looking at you, Siraj Raval!). You truly know what you are talking about; even my grandma could understand this!! Subscribed, liked and belled. More, please!

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks Martin, I'm glad you enjoyed this presentation! My agenda is packed, but I'll do my best to upload more videos asap. :)

  • @OmriHarShemesh
    @OmriHarShemesh 6 years ago +1

    I really enjoyed your book and these videos! Keep them coming! Even though part of my PhD had to do with Information Theory, I enjoyed the way you explain IT and cross-entropy in a very practical way. It helped me understand why it is used in machine learning the way it is. Looking forward to more great videos (and maybe a second book?)!

    • @AurelienGeron
      @AurelienGeron  6 years ago

      Thanks Omri, I'm glad you enjoyed the book & videos. :) I recently watched a great series of videos by Grant Sanderson (3Blue1Brown) about the Fourier Transform, and I loved the way he presents the topic: I thought I already knew the topic reasonably well, but it's great to see it from a different angle. Cheers!

    • @OmriHarShemesh
      @OmriHarShemesh 6 years ago

      Yes, the Fourier transform is a fascinating and multifaceted topic ;) In physics we use it very often for very surprising reasons. I'm looking for a book similar to yours which focuses specifically on NLP with Python and is very well written and modern. Do you have any recommendations? Thanks! Omri