Learning To See [Part 13: Heuristics]

  • Published 17 Mar 2017
  • In this series, we'll explore the complex landscape of machine learning and artificial intelligence through one example from the field of computer vision: using a decision tree to count the number of fingers in an image. It's gonna be crazy.
    Supporting Code: github.com/stephencwelch/Lear...
    welchlabs.com
    @welchlabs
  • Science & Technology

COMMENTS • 100

  • @mircoheitmann • 7 years ago +54

    How to watch Welch Labs:
    -See that pop-up every now and then
    -Get Popcorn
    -Go 1080p
    -Go Fullscreen
    Because you deserve that! :)
    EDIT: Be sad because it's already over :(

    • @maxzorazon • 7 years ago +3

      I am an ecologist; I go 480p, full sound, and a cup of water.

    • @DanielKivariTeacher • 7 years ago

      Watch the last one first, then the new one!

    • @fudgesauce • 7 years ago +14

      No, the way to watch is to wait three months, then binge watch them all.

    • @WelchLabsVideo • 7 years ago +10

      :)

  • @VulpeculaJoy • 7 years ago +22

    8:55 NO, DON'T YOU DARE TO END THE EPISODE HERE!
    9:10 FUCK!

  • @ppureni • 7 years ago +4

    This series of videos is the best I've seen, I love them; the only problem is waiting for the next one!

  • @victor-iyi • 7 years ago +2

    Welch Labs is amazing! I love♥️ his videos. Really entertaining, calming, motivating and, more importantly, informative!
    Keep doing what you're doing, man!
    5 stars for you ⭐️⭐️⭐️⭐️⭐️

  • @truppelito • 6 years ago +1

    Making shit up -> Heuristics
    Man, you cracked me up so much. There's literally no other way I could define heuristics ahahah

  • @alejandrofabiangarcia5917 • 4 years ago +1

    My friend, you won all my attention and then finished the video with "...but we will learn it in the next video!" :D
    I can't resist waiting until tomorrow, even though it's late now!

  • @user-jj2zv4ic5k • 7 years ago +3

    Please! I beg you do a video on trigonometry!
    The videos you made are awesome!

  • @Fireking300 • 7 years ago +3

    Thanks for the video! Really enjoying this series.

  • @evyatarbaranga5624 • 7 years ago +3

    great video! can't wait for the next one!

  • @andrejoseph3630 • 7 years ago +16

    Thank you I loved this video so much I pressed like and turned my monitor upside down so I could press it again

  • @jesusjimenes • 7 years ago +2

    This series is gonna make me cry because it's never simple and it's never over but that's computer science I guess

  • @roekeloos • 7 years ago +3

    Great series, can't wait for the next video.

  • @andrewatwell3905 • 5 years ago +1

    I want to start by saying these videos have been awesome; I will be sure to watch the full series. One thing I noticed: the equations seem to be flipped at 8:21 for the top row.

  • @yakov9ify • 7 years ago +1

    Guys if you haven't noticed we already saw what curve he is going to use in the first episode, it's just a semi-circle.

  • @nabeelkabeer1624 • 7 years ago +7

    always waiting for the next one...

  • @jojojorisjhjosef • 7 years ago +1

    I love these videos, I just wish they didn't boost my confidence in my knowledge so much more than my actual knowledge.

  • @bfournier1884 • 7 years ago +4

    This was awesome stuff, thanks!
    My guess for the impurity function is a nonlinear one: one that will make the average value for the whole node actually change by a lot when a choice has high impurities, heavily penalizing nodes that are even a bit impure (I believe you're going to use a Gaussian curve rather than a parabola, but I'm not sure).

  • @samovarmaker9673 • 5 years ago +3

    Question: would quantum computers be able to permit the 'long' method of machine learning (the one with exponential growth)?

  • @rolininthemud • 7 years ago +3

    I can't handle the suspense!

  • @mpete0273 • 7 years ago +2

    Random forests do shockingly well for how easy they are to train. At 1 / 1000th the train time of a neural network you get 90 - 95% of the accuracy on some tasks, and superior accuracy on some non-sensory tasks.

  • @arpyzero • 7 years ago +29

    Is the answer the normal distribution? If my Probability and Statistics course taught me anything, the answer's always the normal distribution.

    • @ryanmurray5973 • 7 years ago +4

      Immediately lose faith in statistics.

    • @yakov9ify • 7 years ago +2

      RandomPanda0 It's not; you can see which curve he is using in episode 1.

  • @88Nieznany88 • 7 years ago +3

    omgggggggggg i want next part so badly

  • @darwinkim1504 • 7 years ago +9

    Why take a weighted average? Why not just use the minimum of both impurities? Since we're classifying all the data, an increase in impurity on one side will always lead to a decrease on the other. Finding the node that makes the line clearer by having extreme impurities, i.e. the largest difference in impurity, seems like a smarter idea.

    • @Reddles37 • 5 years ago +1

      The minimum won't work because once you have a completely pure node it will say you have perfect purity, even if the rest of the tree is all mixed up.
      The sum doesn't work because as you add more nodes the sum will increase and you want it to be a normalized value. So you divide the sum by the number of nodes, which is just the average and has the problem he mentioned in the video where it doesn't work right if nodes have different numbers of entries. For example, if you had a node that separated out a single event on one side and left the rest all mixed up, it would get a high score from a simple sum or average because half of the nodes are classified correctly even though most of the data isn't.
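Reddles37's comparison of minimum, sum, and weighted average is easy to check in a few lines of Python (the language of the series' supporting code). A minimal sketch with made-up labels and my own function names, using misclassification error as the impurity:

```python
def impurity(labels):
    """Misclassification-error impurity: the fraction of minority-class examples."""
    if not labels:
        return 0.0
    p = labels.count(1) / len(labels)  # fraction of class-1 examples
    return min(p, 1 - p)

def weighted_average_impurity(left, right):
    """Weight each child's impurity by the fraction of examples it holds."""
    n = len(left) + len(right)
    return len(left) / n * impurity(left) + len(right) / n * impurity(right)

# A split that isolates a single example and leaves the rest all mixed up:
left, right = [1], [0, 0, 0, 1, 1, 1]
print(min(impurity(left), impurity(right)))    # 0.0: the minimum claims a perfect split
print(weighted_average_impurity(left, right))  # ~0.43: the weighted average does not
```

On this split the minimum reports zero impurity even though six of the seven examples are still all mixed up, while the weighted average correctly reports about 3/7.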

  • @FederationStarShip • 7 years ago +2

    Damn cliffhanger again! Is it about splitting in such a way as to maximise the normalised information gain? (Or entropy?)

  • @owendeheer5893 • 7 years ago +1

    Great video again!

  • @zhengqunkoo • 7 years ago +1

    I don't wanna wait until next time

  • @jacobs4728 • 7 years ago +59

    I absolutely love these videos, but I wonder. Why are the videos so short and infrequent? These are absolutely amazing, and I would love it so much more if we could get more.

    • @arpyzero • 7 years ago +30

      Well, you see, videos take time and effort to make. If the video seems effortless to make, then the video creator is doing a good job, but honestly, making videos like this is not a quick and simple thing to do.

    • @hokumisolated3551 • 7 years ago +6

      Good quality videos take a lot of time.

    • @Patapom3 • 7 years ago +18

      All the details and perfection in these videos require a lot of time!
      Although I would love to see them come up more often, I wouldn't like them being rushed and unfinished.
      He's doing an excellent job, I love it! :D

    • @aey2243 • 7 years ago +7

      Infrequent? They have been coming out every two weeks! So glad!

  • @mpete0273 • 7 years ago +1

    The weighted average impurity for every split is 2/5 because 2/5 of our training examples belong to the minority class (yellow).
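This observation can be verified numerically. A sketch under assumed data (five 1-D points, two yellow, interleaved so that yellow stays the minority class on both sides of every threshold; not necessarily the video's actual dataset):

```python
def misclass_impurity(labels):
    """Fraction of minority-class examples in a node (0 if empty)."""
    if not labels:
        return 0.0
    p = labels.count(1) / len(labels)
    return min(p, 1 - p)

# Hypothetical 1-D training set in left-to-right order: 0 = blue, 1 = yellow.
labels = [0, 1, 0, 1, 0]

# Try every threshold split and compute the weighted-average impurity.
for k in range(1, len(labels)):
    left, right = labels[:k], labels[k:]
    n = len(labels)
    score = (len(left) / n * misclass_impurity(left)
             + len(right) / n * misclass_impurity(right))
    print(k, score)  # every split scores 0.4 == 2/5
```

Every candidate split ties at 2/5, exactly the minority-class fraction, which is why the video needs a better heuristic.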

  • @barellevy6030 • 7 years ago +1

    I love these videos!

  • @stivenmax1809halocraft • 7 years ago +1

    Fantastic channel! I am a new subscriber from Colombia; I love science, physics, mathematics and chemistry xD

  • @DDranks • 7 years ago +7

    My heuristic answer for the problem: get rid of linearity!

  • @khai6009 • 7 years ago +1

    I'm in so much suspense

  • @AnastasisGrammenos • 7 years ago +3

    The hype!!!

  • @TylerMatthewHarris • 7 years ago +2

    yaay more videos

  • @martinw3621 • 6 years ago +2

    You should put "Machine Learning" in the title; it gets a lot more views. Great videos!

  • @justinward3679 • 7 years ago +2

    Time for some educated guesses!

  • @tisajokt7676 • 7 years ago +6

    Part 13, another cliffhanger ;_;

  • @jannis5641 • 7 years ago +1

    Great video, as always!
    What IDE do you use for your python code?

  • @ebigunso • 7 years ago +3

    My guess: instead of taking the total impurity of the whole classification, take the impurity of each labeled group and choose the classification that contains the least impure group.

  • @cooltv2776 • 7 years ago +6

    how long do these videos take to make?
    and is it realistic for you to upload more often?

    • @tapwater424 • 7 years ago +9

      +Alex Gregory
      He gotta do research. Make a script. Polish the script. Record the audio. Make time lapses for visualizing. Make computer graphics for visualizing. Edit the video and upload it.
      That's a lot of work.

    • @cooltv2776 • 7 years ago +3

      I can tell that these videos are a lot of work, I was just wondering how long it took to make one
      and I also know there are reasons to upload less often than you can (namely consistency)

    • @mpete0273 • 7 years ago +3

      I wonder if he works full time on the side. He should set up a Patreon account.

  • @simrnchahl • 7 years ago +10

    "Next Time"? Come on, seriously!

  • @95rockanglez • 7 years ago +2

    Mistake at 8:20: some of the equations don't match up the I_R and I_L.

  • @KeyserSoseRulz • 7 years ago +1

    No part 14 ? Damn it, I want my drug!

  • @paedrufernando2351 • 2 years ago

    @1:51 Since the branches are both classified as -, the classification errors are a summation of both... correct?

  • @m0mosenpai • 6 years ago

    I see comments complaining about there being less info in the videos and more examples/things to experiment with. But in my opinion, this is exactly the approach required to build scientific understanding and intuition for something. It is these examples that made me feel like I was the one discovering all of this myself, step by step, and it kind of simulates what actually goes on inside the minds of great scientists and researchers. I always thought this is the right way to learn something or to get someone interested/intrigued in something. So well done, Welch Labs, and keep making videos like you do! They are amazing! :D

  • @scottramsay3671 • 7 years ago +1

    I think you switched the I_Totals for x_1 and x_2 at 8:22. Maybe put an annotation because this confused me a bit. Great work though :)

    • @WelchLabsVideo • 7 years ago +2

      Sorry about that - can't believe I missed it!

  • @MrBroggolinb • 7 years ago +3

    damn.. that cliffhanger tho

  • @rikkertkoppes • 7 years ago +2

    Really enjoy these videos, but the music during the talk makes it very hard to follow

  • @jordangentges2631 • 7 years ago +2

    At 8:36 you put the calculations for I[total of 1] under I[2] and vice versa for I[1].

    • @DanielKivariTeacher • 7 years ago +1

      Noooooooooo! That would drive me insane! I think that I[3] could be off as well. That is so something I would do.

    • @WelchLabsVideo • 7 years ago +2

      Damn! I can't believe I missed this! I really need to do some type of peer review on the next series.

  • @ThomasLefort-JesuisuneIA • 7 years ago +5

    Where is your Patreon ?

    • @WelchLabsVideo • 7 years ago +6

      Coming later this year!

    • @michailnenkov • 7 years ago +1

      Any other way to support you in the meantime?

    • @WelchLabsVideo • 7 years ago +3

      I do have a book for sale: www.welchlabs.com/resources Thanks for watching!

  • @SirCutRy • 7 years ago +2

    Mean squared error?

  • @arielmetamorphosis • 7 years ago +1

    Ochi chernye

  • @strahd999 • 7 years ago +1

    Entropy!

  • @non-pe8xn • 7 years ago +1

    I came as fast as I heard

  • @mprphy6 • 7 years ago +1

    You sound like Ian Stewart.

  • @brianhack5806 • 6 years ago +1

    WATCH THIS ONE SIMPLE CRAZY TRICK THAT WILL MAKE AI SMART

  • @ryanmurray5973 • 7 years ago +1

    8:35 the equations are in the wrong places.

  • @duckymomo7935 • 7 years ago +1

    MLE?

  • @jsimone33 • 7 years ago +2

    but I wanna know nowwwwwwwwwwwwwwwwwwwww

  • @iustinianconstantinescu5498 • 7 years ago +4

    My intuition: you use a normal distribution instead of a triangular one.

    • @DanielKivariTeacher • 7 years ago +1

      As an approximation to the binomial distribution?!? I could totally see that!

    • @yakov9ify • 7 years ago +1

      Iustinian Constantinescu Nope, he uses a semi-circle, as can be seen in episode one.

    • @nibblrrr7124 • 7 years ago +1

      The impurity heuristic function _I(p)_ is not a probability distribution!
      While we could plug in parts of the formula that describes the shape of the normally distributed probability density function (the bell curvy thing) to kinda make it work (and maybe even get decent learning results by luck), I can't think of any theoretical justification for why that would make any sense. So that should make you feel suspicious about that intuition.
      (You're right about the rough shape of it though. We're looking for a concave function, and the middle part of the normal pdf is concave. ;) )

    • @iustinianconstantinescu5498 • 7 years ago

      nibblrrr Thank you!!
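To make nibblrrr7124's concavity point concrete, here is a sketch (not the video's actual solution) that swaps the triangular misclassification error for two strictly concave impurities, Gini (2p(1-p)) and Shannon entropy, on a made-up interleaved dataset where misclassification error ties every split:

```python
import math

def misclass(labels):
    """Triangular impurity: min(p, 1-p)."""
    p = labels.count(1) / len(labels)
    return min(p, 1 - p)

def gini(labels):
    """Gini impurity 2p(1-p): strictly concave."""
    p = labels.count(1) / len(labels)
    return 2 * p * (1 - p)

def entropy(labels):
    """Shannon entropy: also strictly concave."""
    p = labels.count(1) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def weighted(impurity, left, right):
    """Weighted-average impurity of a two-way split."""
    n = len(left) + len(right)
    return len(left) / n * impurity(left) + len(right) / n * impurity(right)

labels = [0, 1, 0, 1, 0]  # hypothetical interleaved data: 2 yellow (1) of 5
for k in range(1, len(labels)):
    left, right = labels[:k], labels[k:]
    print(k, round(weighted(misclass, left, right), 3),
             round(weighted(gini, left, right), 3),
             round(weighted(entropy, left, right), 3))
# Misclassification scores every split 0.4; Gini and entropy score splits 1
# and 4 (which each produce a pure node) lower than splits 2 and 3.
```

Because Gini and entropy are strictly concave, isolating a pure node lowers the weighted average even when the misclassification count is unchanged, which is exactly the tie-breaking behavior the triangular function lacks.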

  • @andrejoseph3630 • 7 years ago +3

    *is this bold*

  • @khai6009 • 7 years ago +1

    noooo not again

  • @fejfo6559 • 7 years ago +5

    After each of these videos I wonder: what have I learned? That the next episode will be really important.
    Please try to put more info in one episode. Like, all you said in this one is that all the examples seem equally bad.

  • @oussamanhairech5178 • 5 years ago +1

    I feel bad; I understand nothing.

  • @arekolek • 6 years ago +1

    In the previous video, you've said "Let's test our new strategy on real data. But before we do, let's consider how our new strategy might perform." But you didn't test that strategy in the previous video. You didn't test the strategy in this video. I doubt you will test that strategy in the next video, because in this one you've already changed the strategy. So why say "let's test our new strategy" in the first place?

  • @davidalexander7611 • 7 years ago +2

    The music is distracting. It might appeal to those who are just pretending to be smart, but not to those who are really trying to follow.
    Otherwise I'm super impressed with the production.

  • @SuperRalle123 • 7 years ago +3

    Are we ever gonna get to the point? I really like the videos, but every single time it ends, I'm like "wtf? That was just an introduction".

  • @luismiguelgallegogomez8000 • 7 years ago +1

    If it were the complex number series, it would have ended by now; maybe the heuristic that guides its path to an end is not admissible? PS: Already unsubscribed.