Gini Index and Entropy|Gini Index and Information gain in Decision Tree|Decision tree splitting rule

  • Published 13 Jan 2020
  • Gini Index and Entropy|Gini Index and Information gain in Decision Tree|Decision tree splitting rule
    #GiniIndex #Entropy #DecisionTrees #UnfoldDataScience
    Hi,
    My name is Aman and I am a data scientist.
    About this video:
    How does a decision tree work? A decision tree recursively splits the training data into subsets based on the value of a single attribute. Splitting stops when every subset is pure (all elements belong to a single class).
    This video explains Gini and entropy with examples.
    The following questions are answered in this video:
    1. What is the Gini index?
    2. What is information gain?
    3. What is entropy?
    4. What are the tree splitting criteria?
    5. How is a decision tree split?
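    The two impurity measures the video covers can be sketched in a few lines of Python. This is my own minimal illustration, not code from the video; the function names are made up:

    ```python
    from collections import Counter
    import math

    def gini(labels):
        """Gini impurity: 1 - sum(p_i^2) over the class proportions."""
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    def entropy(labels):
        """Shannon entropy: -sum(p_i * log2(p_i)) over the class proportions."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    # A pure node has impurity 0; a 50/50 node is maximally impure.
    print(gini(["yes"] * 4))       # 0.0
    print(gini(["yes", "no"]))     # 0.5
    print(entropy(["yes", "no"]))  # 1.0
    ```

    Note the different maxima for a two-class 50/50 node: 0.5 for Gini, 1.0 for entropy.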
    About Unfold Data Science: This channel helps people understand the basics of data science through simple examples. Anybody without prior knowledge of computer programming, statistics, machine learning, or artificial intelligence can get a high-level understanding of data science through this channel. The videos are not very technical in nature, so they can be easily grasped by viewers from different backgrounds.
    Join Facebook group :
    groups/41022...
    Follow on medium : / amanrai77
    Follow on quora: www.quora.com/profile/Aman-Ku...
    Follow on twitter : @unfoldds
    Get connected on LinkedIn : / aman-kumar-b4881440
    Follow on Instagram : unfolddatascience
    Watch Introduction to Data Science full playlist here : • Data Science In 15 Min...
    Watch python for data science playlist here:
    • Python Basics For Data...
    Watch statistics and mathematics playlist here :
    • Measures of Central Te...
    Watch End to End Implementation of a simple machine learning model in Python here:
    • How Does Machine Learn...
    Have a question for me? Ask me here: docs.google.com/forms/d/1ccgl...

COMMENTS • 292

  • @islamicinterestofficial
    @islamicinterestofficial 3 years ago +77

    There is a mistake in your video:
    You said to choose the attribute that has less information gain, but actually we have to choose the one that has high information gain...

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +32

      Yes Naat, thanks for pointing out. I have pinned the comments related to it in the video for everyone's benefit.

    • @islamicinterestofficial
      @islamicinterestofficial 3 years ago +5

      @@UnfoldDataScience Pleasure sir

    • @nikhilgupta4859
      @nikhilgupta4859 2 years ago +1

      If you are saying that we have to choose high information gain, then as per the video we should take the impure node. For a pure node the Gini would come to 0 and hence 0 IG. Isn't something wrong?

    • @DK-il7ql
      @DK-il7ql 2 years ago

      At what time that has been said and corrected?

    • @RaviSingh-xx2wq
      @RaviSingh-xx2wq 2 years ago +2

      @@DK-il7ql At 10:37 he said low information gain by mistake instead of high information gain
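    To make the pinned correction concrete: information gain is the parent's entropy minus the weighted entropy of the children, and the split with the *highest* gain is chosen. A small sketch (my own illustration; the variable names are made up):

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(parent, children):
        """IG = entropy(parent) - weighted average entropy of the child nodes."""
        n = len(parent)
        weighted = sum(len(ch) / n * entropy(ch) for ch in children)
        return entropy(parent) - weighted

    parent = ["yes"] * 4 + ["no"] * 4                      # entropy = 1.0
    pure_split = [["yes"] * 4, ["no"] * 4]                 # both children pure
    impure_split = [["yes", "no"] * 2, ["yes", "no"] * 2]  # both children 50/50

    # The tree picks the candidate split with the HIGHEST information gain:
    best = max([pure_split, impure_split],
               key=lambda s: information_gain(parent, s))
    print(best is pure_split)  # True
    ```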

  • @ahmedalqershi1245
    @ahmedalqershi1245 3 years ago +27

    I usually don't like commenting on YouTube videos. But for this one, I felt like I had to show appreciation, because this video was truly extremely helpful. University professors spend hours explaining what you just explained in 11 minutes. And you are the winner. Perfect explanation.
    Thank you so much!!!!

  • @malavikadutta1011
    @malavikadutta1011 3 years ago +15

    Institutes spend two hours explaining these two concepts and you made it clear in minutes. Excellent explanation.

  • @jehanbhathena6270
    @jehanbhathena6270 2 years ago +5

    This has become my favourite channel for ML/Data Science topics. Thank you very much for sharing your knowledge.

  • @zainahmed6502
    @zainahmed6502 3 years ago +1

    Wow! Not only was your explanation amazing but you also answered every single comment! True dedication. Keep it up!

  • @akhilgangavarapu9728
    @akhilgangavarapu9728 4 years ago +2

    If I feel any concept is hard to understand, the first thing I do is search for your videos. Very intuitive and easy to understand. Thank you so much!

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      Your comments are my motivation Akhil. Thanks a lot. Happy learning. Tc

  • @Guidussify
    @Guidussify 16 days ago

    Excellent, to the point, good examples. Great work!

  • @indrajithvasudevan8199
    @indrajithvasudevan8199 2 years ago +4

    Best channel to learn ML and Data science concepts. Thank you sir

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      Thanks Indrajit. Kindly share the video within data science groups if possible.

  • @__anonymous__4533
    @__anonymous__4533 10 months ago +1

    I have an assignment due tomorrow and this helped a lot!

  • @Pesions
    @Pesions 3 years ago +2

    You have really good explanation skills. Thank you, man, I finally understand it.

  • @travelbearmama
    @travelbearmama 4 years ago +14

    With your clear explanation, I finally understand what Gini index is. Thank you so much!

  • @shyampratapsingh4878
    @shyampratapsingh4878 3 years ago +1

    The simplest and best explanation so far.

  • @KASHOKKUMARgnitcECE
    @KASHOKKUMARgnitcECE 6 months ago

    Thanks bro... explained in an easy manner...

  • @kunaldhuria3935
    @kunaldhuria3935 3 years ago

    Short, simple and sweet, thank you so much.

  • @indronilbhattacharjee2788
    @indronilbhattacharjee2788 3 years ago

    Finally I am getting some clear explanations for various concepts.

  • @alexandre52045
    @alexandre52045 1 month ago

    Thanks for the video! It was really clear and well executed. It would have been great to detail the entropy calculation though; I find it a bit elusive without an example.

  • @joeycopperson
    @joeycopperson 6 months ago

    Thanks for the clear and easy explanation.

  • @abhijitkunjiraman6899
    @abhijitkunjiraman6899 4 years ago

    This is brilliant. Thank you so much!

  • @vishesh_soni
    @vishesh_soni 2 years ago +1

    Your first video that I came across. Subscribed!

  • @valor36az
    @valor36az 3 years ago

    I just discovered this channel, what a gem.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Thanks a lot. Please share with others in various data science groups as well.

  • @bhargavsai8181
    @bhargavsai8181 3 years ago

    This is on point, thank you so much.

  • @priyankabachhav5315
    @priyankabachhav5315 2 years ago +2

    Thank you so much, sir. Before watching this video I watched 4 videos related to impurity, but everyone mixes up entropy and impurity, and it was not really clear what exactly the formula is or how it works. After watching your video it is totally clear now. Thank you for this beautiful and clear explanation.

  • @hassangharbi3687
    @hassangharbi3687 2 years ago

    Very good and clear. I'm a French speaker and I understood almost everything.

  • @nalisharathod6098
    @nalisharathod6098 3 years ago

    Great explanation!! Very helpful. Thank you :)

  • @muhyidinarif9248
    @muhyidinarif9248 3 years ago

    Thank you so much, this helps me a lot!!!

  • @mavaamusicmachine2241
    @mavaamusicmachine2241 1 year ago

    Thank you for this video, very helpful.

  • @eiderdiaz7219
    @eiderdiaz7219 4 years ago

    Love it, very clear explanation.

  • @Shonashoni1
    @Shonashoni1 2 years ago

    Amazing explanation sir

  • @anandramm235
    @anandramm235 3 years ago +1

    Crystal Clear Sir!! Keep Going!!

  • @awanishkumar6308
    @awanishkumar6308 3 years ago

    I appreciate your explanation of the Gini and entropy concepts.

  • @9495tj
    @9495tj 2 years ago +1

    Awesome video.. Thank You so much!

  • @Kumarsashi-qy8xh
    @Kumarsashi-qy8xh 4 years ago

    Sir, your explanation really helps me very much. Thank you.

  • @ARJUN-op2dh
    @ARJUN-op2dh 3 years ago

    Simple & clear

  • @fromthenorthfromthenorth8224
    @fromthenorthfromthenorth8224 3 years ago

    Thanks for this clear and well-explained Gini index... Thanks...

  • @seanpeng12
    @seanpeng12 3 years ago +1

    Your explanation is awesome, thanks.

  • @deepikanadarajan3407
    @deepikanadarajan3407 3 years ago

    Very clear explanation and very helpful.

  • @Sagar_Tachtode_777
    @Sagar_Tachtode_777 3 years ago

    Thank you for your wonderful explanation.
    Please make a video on PSI and KS index.

  • @Kumarsashi-qy8xh
    @Kumarsashi-qy8xh 2 years ago

    You are doing a great job, sir.

  • @prernamalik5579
    @prernamalik5579 3 years ago

    It was very informative, Sir. Thank you :)

  • @zuzulorentzen8653
    @zuzulorentzen8653 6 months ago

    Thanks man

  • @RaviSingh-xx2wq
    @RaviSingh-xx2wq 2 years ago +1

    Amazing explanation

  • @response2u
    @response2u 2 years ago

    Thank you, sir!

  • @ece7700
    @ece7700 10 months ago

    thank you so much

  • @johnastli9250
    @johnastli9250 4 years ago

    Awesome work and very intuitive explanation! Thank you. I have an exam in Data Mining and you helped me sir!!

  • @MrKhaledpage
    @MrKhaledpage 4 years ago

    Thank you, well explained

  • @yyndsai
    @yyndsai 1 year ago

    Thank you, no one could have done better

  • @sandipansarkar9211
    @sandipansarkar9211 3 years ago

    Great explanation.

  • @reviewsfromthe60025
    @reviewsfromthe60025 2 years ago +1

    Great video

  • @lalitsaini3276
    @lalitsaini3276 3 years ago +1

    Nicely explained....! Subscribed :)

  • @kamran_desu
    @kamran_desu 3 years ago +1

    Very nice explanation, and icing on the cake for comparing their performance at the end.
    Just to confirm, is Gini/IG only for classification?
    For regression trees would we use loss functions like the sum of squared residuals?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      That's a good question. Since it's based on probability, it is applicable to classifiers. For regression, we minimize something like SSE or another error measure.
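      The regression criterion mentioned in this reply swaps impurity for squared error around the node mean. A hedged sketch of that idea (my own illustration, not from the video):

      ```python
      def sse(values):
          """Sum of squared errors around the node mean - the impurity
          analogue used when the target is continuous."""
          mean = sum(values) / len(values)
          return sum((v - mean) ** 2 for v in values)

      # A node whose targets are identical is "pure" (SSE = 0),
      # just as a single-class node has Gini = 0.
      print(sse([5.0, 5.0, 5.0]))  # 0.0
      print(sse([1.0, 2.0, 3.0]))  # 2.0
      ```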

    • @mannankohli
      @mannankohli 3 years ago +1

      @@UnfoldDataScience Hi sir, as per my knowledge, "information gain" is used when the attributes are categorical in nature, while the "Gini index" is used when the attributes are continuous in nature.

  • @23ishaan
    @23ishaan 3 years ago

    Great video!

  • @sadhnarai8757
    @sadhnarai8757 3 years ago +1

    Great content.

  • @mahimano4469
    @mahimano4469 2 years ago +1

    Thanks a lot.

  • @ranad2037
    @ranad2037 1 year ago

    Thanks a lot!

  • @jarrelldunson
    @jarrelldunson 3 years ago +1

    Thank you

  • @chrisamyrotos8313
    @chrisamyrotos8313 4 years ago

    Very Good!!!

  • @adityasrivastava78
    @adityasrivastava78 1 year ago

    Good teaching.

  • @yohanessatria2220
    @yohanessatria2220 2 years ago +1

    So, the only difference between Gini and information gain is the performance speed, right? I assume that, given the same data and decision state, both Gini and information gain will pick the same best attribute, right?
    Great video btw!

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      That is correct. Also, the internal mathematical formula is different.

  • @OverConfidenceGamingYT
    @OverConfidenceGamingYT 3 years ago +1

    Thank you ❣️

  • @shubhangiagrawal336
    @shubhangiagrawal336 3 years ago +1

    Very well explained.

  • @soheilaahmadi4807
    @soheilaahmadi4807 2 years ago

    Hi, great explanation. Thank you so much. Do you have any videos explaining the criteria for decision tree regression?

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      Thanks a lot. For regression, not yet; will upload soon.

  • @nikhildevnani9207
    @nikhildevnani9207 2 years ago

    Amazing explanation, Aman. I have one doubt: suppose there are 5 columns (4 independent and 1 target). For splitting I have used columns 1, 2, 4, 3 and another person is using 3, 2, 1, 4. On what factors can we decide whether my splits or the other person's are best?

  • @samhitagiriprabha6533
    @samhitagiriprabha6533 4 years ago +2

    Awesome Explanation, very sharp! I have 2 questions:
    1. Since this algorithm calculates Gini index for ALL splits in EACH column, is this process time-consuming?
    2. What if the algorithm finds TWO conditions where GINI Index is 0. Then how does it decide which condition to split on?
    Thank you in advance!

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      1. It is time-consuming, but internally it does not happen one by one for numerical columns; the algorithm smartly figures out in which direction it should move. For categorical columns it happens one by one and is time-consuming.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      2. A Gini of 0 means homogeneous sets, hence no further split will happen.
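      A brute-force version of what the thread describes — scanning candidate thresholds on a numeric column and keeping the one with the lowest weighted Gini — might look like this (my own sketch; real implementations search more cleverly, as the reply above notes):

      ```python
      from collections import Counter

      def gini(labels):
          n = len(labels)
          return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

      def best_threshold(values, labels):
          """Try the midpoint between each pair of adjacent sorted values and
          return the threshold with the lowest weighted Gini impurity."""
          order = sorted(set(values))
          best = (None, float("inf"))
          for lo, hi in zip(order, order[1:]):
              t = (lo + hi) / 2
              left = [y for x, y in zip(values, labels) if x <= t]
              right = [y for x, y in zip(values, labels) if x > t]
              n = len(labels)
              w = len(left) / n * gini(left) + len(right) / n * gini(right)
              if w < best[1]:
                  best = (t, w)
          return best

      # Hypothetical loan amounts and default labels, in the spirit of the video:
      loan = [100, 150, 180, 210, 250, 300]
      label = ["no", "no", "no", "yes", "yes", "yes"]
      t, impurity = best_threshold(loan, label)
      print(t, impurity)  # 195.0 0.0 - a perfectly pure split
      ```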

  • @stevenadiwiguna1995
    @stevenadiwiguna1995 3 years ago

    Hi! I want to make sure about the Gini index. You said that "the split criterion will be selected based on the minimum GINI INDEX among all possible conditions". Is it the "Gini index" or the "weighted Gini index"? Thanks a lot! Learned a lot from this video!

  • @SivaKumar-rv1nn
    @SivaKumar-rv1nn 3 years ago

    Thank you, sir.

  • @subhajitdutta1443
    @subhajitdutta1443 2 years ago

    Hello Aman,
    Hope you are well. I have a question; hope you can help me here.
    If probability (P) = 0,
    then Gini impurity becomes 1,
    as per the formula. Then why does it always range from 0 to 0.5?
    Thank you,
    Subhajit
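    For what it's worth, plugging p = 0 into the two-class formula actually gives 0, not 1, because the other class then has probability 1. A quick check (my own illustration, not a reply from the channel):

    ```python
    def gini_binary(p):
        """Two-class Gini impurity: 1 - p^2 - (1 - p)^2."""
        return 1 - p ** 2 - (1 - p) ** 2

    # With p = 0 the other class has probability 1, so impurity is 0:
    print(gini_binary(0.0))  # 0.0
    print(gini_binary(0.5))  # 0.5  (the maximum, hence the 0-0.5 range)
    ```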

  • @melvincotoner4878
    @melvincotoner4878 3 years ago +1

    thanks

  • @preranatiwary7690
    @preranatiwary7690 4 years ago

    Good one again! Please add more technical videos as well, where the audience is not laymen but people who are into data science.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Thanks for your feedback. I'll definitely cover advanced topics as we move forward with subsequent videos.

  • @vishalrai2859
    @vishalrai2859 3 years ago +1

    Thank you so much, sir. Please do some projects.

  • @sahilmehta885
    @sahilmehta885 1 year ago +1

    ✌🏻✌🏻

  • @datafuturelab_ssb4433
    @datafuturelab_ssb4433 3 years ago +1

    Great explanation.
    I have a question: can the Gini index be negative?

  • @anthonyamponsah1693
    @anthonyamponsah1693 4 years ago

    Hello, very insightful. You almost explained the best times to use either of the criteria. Can you shed more light on that? The best kind of criterion to use for data in a model.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Hi Anthony, it is usually not easy to say beforehand which method (Gini/entropy) works on what kind of data. Usually we check various options to see model performance and then choose one. Hope this clarifies. Thank you.

    • @anthonyamponsah1693
      @anthonyamponsah1693 4 years ago +1

      @@UnfoldDataScience Yeah, thank you.
      Can I get your email? I'd like to stay in touch.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Sure, it's there on my YouTube channel.

  • @karthikganesh4679
    @karthikganesh4679 3 years ago +1

    Sir, kindly explain entropy in detail, just like the way you presented the Gini index.

  • @mx1327
    @mx1327 4 years ago +1

    Does CART go through all the possible numerical values under "loan" to find the best condition? If you have a large amount of data, shouldn't that be very slow?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      That is a good question, thanks for asking. In general, for a numerical variable, the first split point is chosen randomly and then the point is optimized based on the direction in which the loss function is moving. Please note, the loss in this case is the node purity after the split.

  • @skvali3810
    @skvali3810 2 years ago

    I have one question, Aman: at the root node, is the Gini or entropy high or low?

  • @prasanthkumar632
    @prasanthkumar632 4 years ago

    Aman, can you please explain entropy with an example too, like you did for the Gini index?

  • @abhishekraturi
    @abhishekraturi 3 years ago +1

    Just to make clear, the Gini index ranges from 0 to 0.5, not 0 to 1. Jump to the video at 7:10.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Yes, this is a common comment from many users. You are right, Abhishek.

  • @bishwajeetsingh8834
    @bishwajeetsingh8834 1 year ago

    Which one to choose? How, by looking at the data, can I decide whether to use Gini or IG?

    • @UnfoldDataScience
      @UnfoldDataScience  1 year ago

      Can't decide in advance; it's more trial and error (there are some guidelines though).

  • @anildelegend
    @anildelegend 3 years ago +1

    Good explanation, but a correction is needed. Gini oscillates between 0 and 0.5. The worst split could be half positive, half negative; the Gini impurity for that wing is 0.5, and the overall weighted Gini would also be 0.5.
    It is entropy that oscillates between 0 and 1.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      You are right, Anil. This feedback is coming from other viewers as well; I may have stated this part wrong in the video. I am pinning your comment to the top for everyone's benefit. Thanks again.
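      The pinned correction is easy to verify numerically: for two classes both measures peak at p = 0.5, but at different heights. A small check (my own illustration):

      ```python
      import math

      # Binary-class impurities as a function of p = P(class 1).
      def gini(p):
          return 1 - p ** 2 - (1 - p) ** 2

      def entropy(p):
          if p in (0.0, 1.0):
              return 0.0  # lim p*log2(p) = 0, so pure nodes have zero entropy
          return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

      # Both peak at p = 0.5, but at different heights:
      print(gini(0.5))     # 0.5  -> Gini ranges over [0, 0.5]
      print(entropy(0.5))  # 1.0  -> entropy ranges over [0, 1]
      ```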

  • @rosh70
    @rosh70 2 years ago

    Can you show one numerical example using entropy? When the formula starts with a negative sign, how can the value be positive? Just curious.

  • @abhishekgautam231
    @abhishekgautam231 4 years ago

    Indeed the math is quite interesting. Thanks for sharing.

  • @bhagyashreemourya7071
    @bhagyashreemourya7071 3 years ago +1

    I'm a bit confused between Gini and entropy. Is it necessary to use both methods while analyzing, or can we go for just one of them?

    • @nikhilgupta4859
      @nikhilgupta4859 2 years ago +1

      We have to use only one of them. Which one to choose depends on the data.

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      Depends on the case; both are not to be used together.

  • @amnazakria3876
    @amnazakria3876 4 years ago

    Sir, how did you choose the loan amount as the root node? Do we have to find the Gini for all columns and then select the root node?

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago

      Hi Amna, this is a good question, thanks for asking. Yes, for all columns, and then select the optimal split.

  • @ykokadwar
    @ykokadwar 2 years ago

    Can you help explain the entropy equation intuitively?

  • @awanishkumar6308
    @awanishkumar6308 3 years ago

    But if we have datasets with many more columns than in this example, how do we decide which input column should be split?

  • @dracula5505
    @dracula5505 3 years ago +1

    Do we have to calculate both Gini and entropy to figure out which is best for the dataset?

  • @ruqaiyajaved6590
    @ruqaiyajaved6590 2 years ago

    Very informative video, sir. I would like to know whether to calculate the Gini index/entropy manually if we build a decision tree in RStudio. I basically want to know what to do after getting the decision tree in RStudio: should I stop there and report the decision tree as it is, or prune it? Can you please explain the concept of pruning regression and classification trees in RStudio with a simple example? It would be of great help 😇 Thank you, kindly revert back.

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago

      Hi Ruqaiya, very good questions:
      1. You don't need to calculate manually - the tool will calculate it.
      2. After getting the tree, your model is fit; you can use it for prediction.
      3. You must prune your tree - otherwise it may overfit.
      4. I will explain pruning in a separate video.

  • @GopiKumar-ny3xx
    @GopiKumar-ny3xx 4 years ago +1

    Nice presentation.. Keep going....

  • @gmcoy213
    @gmcoy213 4 years ago +1

    So if I am using the C5.0 algorithm, which separation technique will be used?

  • @geethanjaliravichandhran8109
    @geethanjaliravichandhran8109 3 years ago

    Well sir, how does root node selection work if two splits share the same, lowest Gini index value?

  • @umair.ramzan
    @umair.ramzan 3 years ago +1

    I think we select the split with the highest information gain when using entropy. Please correct me if I'm wrong.

    • @abdobourenane9294
      @abdobourenane9294 3 years ago +1

      You are right. When an internal node is split, the split is performed in such a way that information gain is maximized.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago +1

      Thanks Abdo. Yes, the maximum IG is considered for the split. I probably missed including that in the video.

    • @abdobourenane9294
      @abdobourenane9294 3 years ago

      @@UnfoldDataScience You are welcome. I also got some new information from your video.

  • @abelhirpo3109
    @abelhirpo3109 2 years ago

    It is a nice tutorial, sir! But how does such a category come about? Since you made it greater than or equal to 200, shouldn't it be inclusive in the Gini index?

  • @saumyamishra9004
    @saumyamishra9004 3 years ago

    Firstly sir, as far as I know, the higher the information gain, the better the split.
    And I want to know: is either of them for continuous variables?

  • @PrithivirajSaminathan
    @PrithivirajSaminathan 4 years ago +4

    Buddy, Gini does not lie between 0 and 1; it's entropy that lies between 0 and 1.
    Gini is always less than or equal to 0.5, so it always lies between 0 and 0.5.

    • @UnfoldDataScience
      @UnfoldDataScience  4 years ago +1

      I think yes, Gini lies between 0 and 1. Please help me with more details if you disagree.

  • @siddhantpathak3162
    @siddhantpathak3162 3 years ago

    I calculated the Gini index for a (4, 2) split and it came to 4/9. Shouldn't it come close to 1, since it is the worst-case scenario?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      Need to check with the data and calculate; however, it is not always mandatory that it will be close to 1.

  • @frosty2164
    @frosty2164 2 years ago +1

    Which model has less bias and high variance - logistic regression, decision tree or random forest? Can you please help?

    • @UnfoldDataScience
      @UnfoldDataScience  2 years ago +1

      Decision tree - high variance, low bias.
      Logistic regression - high bias, low variance.
      Random forest - tries to reduce the high variance of a decision tree. Bias is low.

    • @frosty2164
      @frosty2164 2 years ago

      @@UnfoldDataScience Thank you very much. Can you also share the reason behind this, or a link where I can understand it?

  • @shivanshjayara6372
    @shivanshjayara6372 3 years ago +1

    Sir, I am confused regarding the selection criterion for the root node. Somewhere I have studied that the feature whose information gain is maximum will be selected as the root node, and here you have said that the one whose IG is less will be selected as the root node. I am confused.

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      That's a good question.
      Entropy and IG are related. Understand it like this: entropy should be less and IG should be more.
      IG from a split = entropy of parent node - entropy of the child nodes created.
      The decision tree will try to split in such a way that IG is maximum; in other words, entropy is reduced to the maximum extent. Hope it's clear now.

    • @shivanshjayara6372
      @shivanshjayara6372 3 years ago

      @@UnfoldDataScience Thanks for this response. But is it true that the feature with the maximum gain will be selected as the root node, and after that splitting takes place based on that root node (feature)? If we get a pure split then no further splitting takes place, but if we get an impure split then splitting continues based on the feature whose gain is second highest. Is that right?

    • @shivanshjayara6372
      @shivanshjayara6372 3 years ago

      @@UnfoldDataScience And if possible, please give me your email id. I have a few more questions. I need to send images of some points so that you can help me out.

  • @subhajitdutta1443
    @subhajitdutta1443 1 year ago

    How does the Gini index range from 0 to 1? For the best case it is 0 and for the worst case it is 0.5, so how is that possible? Please explain.

  • @abhinai2713
    @abhinai2713 3 years ago

    @10:38 where the information gain is high, that is where we try to split the node, right?

    • @UnfoldDataScience
      @UnfoldDataScience  3 years ago

      That is a good question. The formula you see @10:38 is for the entropy of a node.
      Information gain for a split = entropy of the node - entropy of the child nodes after the split.
      A decision tree splits at the place where the information gain is highest. In other words, a decision tree splits where entropy is reduced to the largest extent.

  • @tanzeelmohammed9157
    @tanzeelmohammed9157 1 year ago

    Sir, is the range of the Gini index 0 to 1 or 0 to 0.5? I am confused.