Entropy & Mutual Information in Machine Learning

COMMENTS • 44

  • @nanthawatanancharoenpakorn6649

    I've watched tons of videos about entropy. This is the best!

  • @FreeMarketSwine
    @FreeMarketSwine 11 months ago +2

    You are one of the best data science/ML teachers that I've found on YouTube and I really hope you keep making videos.

    • @AlaphBeth
      @AlaphBeth  11 months ago +1

      Thank you so much for your feedback.

  • @SajjadZangiabadi
    @SajjadZangiabadi 1 year ago +1

    This video is absolutely fantastic! The presenter's clear explanations and engaging visuals made it a pleasure to learn from. I'm grateful for the valuable insights and real-world examples provided. Well done!

  • @RajivSambasivan
    @RajivSambasivan 1 year ago +1

    Absolutely fantastic video. This is the best information-theoretic feature selection explanation that I have come across - so accessible, so well explained. Kudos on a fantastic job.

    • @AlaphBeth
      @AlaphBeth  1 year ago

      Thanks for the feedback, much appreciated.

  • @MissPiggyM976
    @MissPiggyM976 2 months ago +1

    Very clear, many thanks!

  • @salonikothari7494
    @salonikothari7494 2 years ago +2

    I have been into this topic for a month... and so, so happy to find your lecture series!!! Thank you so much. So clear and precise... a treasure :)

  • @RafaelQuirinoVex
    @RafaelQuirinoVex 1 year ago +2

    What an excellent lecture. It's really hard to find such good ones. Hats off to you!

    • @AlaphBeth
      @AlaphBeth  1 year ago +1

      Thank you so much for your feedback, much appreciated.

  • @someonewhowantedtobeahero3206
    @someonewhowantedtobeahero3206 2 years ago +1

    Thanks a lot. I was really struggling with the relationship between MI and entropy, and also with the notation, until I came across your video.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback

  • @achyuthnandikotkur5647
    @achyuthnandikotkur5647 2 years ago +2

    This is very useful to me, as I'm planning to leverage this concept in my master's thesis. Thanks again!

    • @AlaphBeth
      @AlaphBeth  2 years ago +1

      Thanks, appreciate the feedback, and good luck with your master's studies.

  • @thousandTabs
    @thousandTabs 2 years ago +2

    Excellent lecture Dr. Khushaba! I was searching for a visual explanation of these concepts, now I feel like I actually understand how MI works! Thank you so much, please make more videos like this.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you for the feedback, much appreciated.

  • @omridrori3286
    @omridrori3286 2 years ago +1

    Amazing, amazing, amazing, wow! Please make more like this on information theory.

    • @AlaphBeth
      @AlaphBeth  2 years ago +1

      Thanks for the feedback

  • @sevdakhlzd2194
    @sevdakhlzd2194 2 years ago +1

    Amazing work, many thanks Dr. Khushaba.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thanks for the feedback

  • @djs749
    @djs749 2 years ago +1

    Simply put... excellent. A kind request to add more of this nature. Regards

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thanks a lot for the feedback, much appreciated. I added another video on transfer entropy yesterday; just look for “finding causal relationships” on my channel. It extends the concepts presented here to look for causality rather than just mutual information.

    • @djs749
      @djs749 2 years ago +1

      @@AlaphBeth Thank you so much. The videos and explanations are excellent. Excellent resources for learning. Many thanks. Warm Regards

  • @nadravface
    @nadravface 2 years ago +2

    Thank you!!! This is what I was looking for!!! If you update your microphone it will be ideal.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback.

  • @sevdakhlzd9397
    @sevdakhlzd9397 2 years ago +1

    Thanks a lotttt for this video!!!
    I highly appreciate your time and dedication.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate your feedback

  • @janschmidt1218
    @janschmidt1218 1 year ago +2

    Thank you for this great explanation! It helped me a lot. I have one question: at 31:20 you say what I(X,Y,Z) is in the Venn diagram. From what I understood from the example with just two variables, X and Y, the area for I(X,Y) was the envelope of the Venn diagram (at least it was marked like that). Why is it not the envelope of the three circles here?
    Thank you very much in advance!

    • @AlaphBeth
      @AlaphBeth  1 year ago +1

      Hello, thanks for your feedback. About your question: for the case of two variables, the exterior envelope of the two circles is H(X,Y), that is, the joint entropy of the two variables. The mutual information I(X;Y) is given by the overlapping region between the two circles. So you start from entropies, and the joint areas are the portions that one variable knows about another, which reduce entropy - this is mutual information.
      The same goes for the case of three variables: I(X;Y;Z) is the joint part shared by the three circles, and the outside envelope of the three circles is the joint entropy H(X,Y,Z).
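
      A minimal Python sketch (not from the video; the synthetic data and the choice of bins=20 are assumptions) that estimates these quantities from a 2-D histogram and recovers I(X;Y) as H(X) + H(Y) - H(X,Y):

        import numpy as np

        def entropy(p):
            """Shannon entropy in bits of a probability array (zero entries are skipped)."""
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(0)
        x = rng.normal(size=5000)
        y = x + 0.5 * rng.normal(size=5000)       # y shares information with x

        joint_counts, _, _ = np.histogram2d(x, y, bins=20)
        p_xy = joint_counts / joint_counts.sum()  # joint distribution p(x, y)
        p_x = p_xy.sum(axis=1)                    # marginal p(x)
        p_y = p_xy.sum(axis=0)                    # marginal p(y)

        H_x, H_y, H_xy = entropy(p_x), entropy(p_y), entropy(p_xy)
        print(f"H(X)={H_x:.3f}, H(Y)={H_y:.3f}, H(X,Y)={H_xy:.3f}")
        print(f"I(X;Y) = H(X) + H(Y) - H(X,Y) = {H_x + H_y - H_xy:.3f} bits")

      The value is a histogram estimate, so it depends on the binning; the Venn-diagram identity itself is exact.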

    • @janschmidt1218
      @janschmidt1218 1 year ago +1

      Thank you, ah yes I mixed up the entropy and the mutual information. It’s clear now, thanks!

  • @yuhuawei1763
    @yuhuawei1763 2 years ago +1

    Wonderful talk! Well explained!

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback.

  • @johnsondlamini3375
    @johnsondlamini3375 2 years ago +1

    Amazing!

  • @indrakishorebarman5267
    @indrakishorebarman5267 1 year ago

    Are you applying the histogram approach to discrete data or to continuous data?

  • @JunqiYan-m6v
    @JunqiYan-m6v 1 year ago +1

    Hi, first of all, this is an excellent video! I have a question on page 13: in each square of the 2-dimensional graph, what should the filled-in value be? For the lower-left square, is the value there equal to the sum of the number of data samples in the first bin of the feature on the horizontal axis and the number of data samples in the first bin of the feature on the vertical axis? Thanks for your clarification!

    • @AlaphBeth
      @AlaphBeth  1 year ago

      Thank you for your feedback. The answer is not the sum of the individual values; you just need to think multidimensionally rather than in a single dimension. The value for that lower-left bin is a count of the number of samples for which feature 1 falls within the range of its first bin while, for the same samples, feature 2 falls within the range of its first bin. In that example it is: how many samples have feature 1 within the range 0.1 to 0.34 while, at the same time, feature 2 has a value within the range 1 to 1.59.
      Think of this as driving on a highway with 4 lanes. How many times did the car in lane 1 drive at a speed of 60 to 70 km/h, for example, while, at the same time, the car in lane 2 had a speed between 80 and 90 km/h? When you count for individual cars, you just look at the car in lane 1 separately from the other cars and count the number of times it drove at a specific speed, regardless of the cars in the other lanes.
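
      A small sketch along these lines (synthetic data; the feature ranges and bin edges are made up for illustration, not taken from the slide), showing that the lower-left cell of a joint histogram counts samples that satisfy both bin conditions at once:

        import numpy as np

        rng = np.random.default_rng(1)
        feat1 = rng.uniform(0.1, 1.0, size=1000)   # hypothetical feature 1
        feat2 = rng.uniform(1.0, 3.0, size=1000)   # hypothetical feature 2

        edges1 = np.linspace(0.1, 1.0, 5)          # 4 bins for feature 1
        edges2 = np.linspace(1.0, 3.0, 5)          # 4 bins for feature 2

        # Joint (2-D) histogram: each cell counts samples landing in that pair of bins.
        joint_counts, _, _ = np.histogram2d(feat1, feat2, bins=[edges1, edges2])

        # The lower-left cell, counted by hand: feature 1 in its first bin AND,
        # for the same sample, feature 2 in its first bin.
        by_hand = np.sum((feat1 >= edges1[0]) & (feat1 < edges1[1]) &
                         (feat2 >= edges2[0]) & (feat2 < edges2[1]))

        print(joint_counts[0, 0], by_hand)         # the two counts agree

      Summing the first-bin counts of the two individual (marginal) histograms would give a different, larger number, which is the distinction made in the reply above.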

  • @KhaledMohammed-i9r
    @KhaledMohammed-i9r 1 year ago +1

    Nice explanation. Can you provide us with the slides?

  • @alessandrorossi1294
    @alessandrorossi1294 2 years ago +1

    Very nice!

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback

  • @kayeezhou9427
    @kayeezhou9427 8 months ago

    So ',' and ';' both represent 'and'; however, it is as if ';' has a higher priority than ','. They cover the same portion of the Venn diagram if there are only two variables.

  • @simawpalmer7721
    @simawpalmer7721 2 years ago +1

    Wonderful work, thanks. Can you share the slides for reference?

    • @AlaphBeth
      @AlaphBeth  2 years ago +2

      Thank you, appreciate the feedback. I will post a link to the slides soon.

  • @beelogger5901
    @beelogger5901 2 years ago

    Missing a formula: H(Y|X) = H(X,Y) - H(X) = Σ_{x,y} p(x,y) log(1/p(x,y)) - Σ_x p(x) log(1/p(x))

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thanks for your feedback. The conditional entropy is defined in the slides using a different formula than the one you wrote, and for a reason: it encourages anyone interested to read up and discover the other ways of writing these formulas, for further learning and development.
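
      A quick numerical check (using an assumed 2x2 toy joint distribution, not one from the slides) that the two ways of writing the conditional entropy agree, i.e. H(Y|X) = H(X,Y) - H(X):

        import numpy as np

        p_xy = np.array([[0.25, 0.25],
                         [0.40, 0.10]])        # joint distribution p(x, y); rows index x
        p_x = p_xy.sum(axis=1)                 # marginal p(x)

        H_xy = -np.sum(p_xy * np.log2(p_xy))   # joint entropy H(X,Y)
        H_x = -np.sum(p_x * np.log2(p_x))      # marginal entropy H(X)

        # Direct definition: H(Y|X) = -sum over x,y of p(x,y) * log2( p(x,y) / p(x) )
        H_y_given_x = -np.sum(p_xy * np.log2(p_xy / p_x[:, None]))

        print(f"H(X,Y) - H(X) = {H_xy - H_x:.4f} bits")
        print(f"H(Y|X)        = {H_y_given_x:.4f} bits")   # same value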