StatQuest: t-SNE, Clearly Explained

  • Published Dec 3, 2024

COMMENTS • 749

  • @statquest
    @statquest  5 years ago +72

    Corrections:
    6:17 I should have said that the blue points have twice the density of the purple points.
    7:08 There should be a 0.05 in the denominator, not a 0.5.
    Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/

    • @linweitao6470
      @linweitao6470 4 years ago +1

      Thanks very much for the informative lecture; it is really helpful. UMAP is more and more popular now. Could you explain it and compare it with t-SNE as well? Thanks in advance.

    • @statquest
      @statquest  4 years ago +6

      @@linweitao6470 I should have a UMAP StatQuest ready in a few weeks. I'm working on it right now.

    • @linweitao6470
      @linweitao6470 4 years ago +1

      @@statquest Thanks again!

    • @CompBioQuest
      @CompBioQuest 4 years ago +2

      @@statquest UMAP is great, though I don't know if it is more popular. There are more stringent reductions out there, like ICA. I wonder what Josh thinks about it?

    • @statquest
      @statquest  4 years ago +2

      @@CompBioQuest I guess it largely depends on the field. Right now, genetics and molecular biology are going bonkers over UMAP. However, ICA is very interesting. Thanks to your question, I found this article which is fascinating: gael-varoquaux.info/science/ica_vs_pca.html
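The 7:08 correction in the pinned comment concerns the denominator used to scale similarity scores. A minimal Python sketch of that scaling, with made-up distances rather than the video's numbers:

```python
import numpy as np

# Unscaled similarity: the height of a normal curve centered on one point,
# measured at its distance to a neighbor.
def unscaled_similarity(distance, sigma):
    return np.exp(-distance**2 / (2 * sigma**2))

# Made-up distances from one point to its three neighbors.
distances = np.array([0.5, 1.0, 4.0])
raw = unscaled_similarity(distances, sigma=1.0)

# Scaling: divide each raw score by the sum of all of them, so the scaled
# scores for this point add up to 1 (up to floating-point rounding). That
# sum in the denominator is the value the 7:08 correction is about.
scaled = raw / raw.sum()
```

Note how the far-away neighbor (distance 4.0) ends up with a scaled score near zero, while the closest neighbor gets the largest share.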

  • @tuongminhquoc
    @tuongminhquoc 2 years ago +3

    Thank you. I am not sure if you remember me from the PCA video. I have a job now. My job does not have a high salary, but I can now support you by donating. Thank you! 😊

    • @statquest
      @statquest  2 years ago +1

      WOW! Thank you so much. And congratulations on getting a job!!! HOORAY!!! TRIPLE BAM! :)

    • @tuongminhquoc
      @tuongminhquoc 2 years ago +1

      @@statquest Keep doing great work, sir! Also, it would be great if you could make a video comparing clustering methods. 😁

    • @statquest
      @statquest  2 years ago +1

      @@tuongminhquoc Thanks and I'll keep that in mind!

  • @abdulgadirhussein2244
    @abdulgadirhussein2244 4 years ago +87

    I am always blown away by how you make statistics & machine learning algorithms so simple to understand and how you graciously share your knowledge. Keep up the great work man, you are awesome!

    • @statquest
      @statquest  4 years ago +2

      Thank you very much! :)

  • @잠꾸러기-g1s
    @잠꾸러기-g1s 3 years ago +23

    Whenever I find statistics technique I have never seen in scientific article, I always visit your channel. Thanks a lot!!

  • @veronikaberezhnaia248
    @veronikaberezhnaia248 3 years ago +13

    I regret I can't put 1000 likes! I read about 20 articles about t-SNE; they are similar to one another, almost identical, and they don't get me closer to the point. But your video I watched 4 times (because the topic is hard, at least for me), taking some notes and drawing, and finally I understand how it works, up to the point that I can explain it to someone else. So many thanks to you!

    • @statquest
      @statquest  3 years ago

      HOORAY!!! TRIPLE BAM! I'm glad the video was helpful. BAM! :)

  • @RezaRob3
    @RezaRob3 4 years ago +5

    I'm writing this comment having watched only halfway through this video, which is pretty unusual for me!
    It is so clearly explained! I once glanced at the t-SNE paper and didn't understand it. If this is what it does, then this is how things like this should be explained!
    Really, we need people explaining science like this! It's possible to read scientific papers, but what they fail to do is properly communicate the core idea to the reader so that the reader quickly grasps the big picture and the intent of the mathematical details without getting lost in them.
    Frequently, even a missing definition can make reading papers much harder for non-experts.

    • @statquest
      @statquest  4 years ago +1

      I'm glad you liked this video so much! :)

  • @gustavomorais4489
    @gustavomorais4489 4 years ago +13

    I never leave comments, but I really feel the need to thank you for being able to explain this in such a simple way

  • @kass8036
    @kass8036 7 years ago +242

    I never knew machine learning could be as simple as... BAM

    • @thomasrad6296
      @thomasrad6296 4 years ago +1

      That's like the most important lesson.

    • @namimiable
      @namimiable 4 years ago +2

      Double bam 💥

    • @kalyanben10
      @kalyanben10 4 years ago +3

      Just a random comment so that someone can say triple bam

    • @kass8036
      @kass8036 4 years ago +5

      Triple bam 💥

    • @birenpatel894
      @birenpatel894 3 years ago +3

      Hurray, we have made it to the END!!!

  • @douglasaraujo9763
    @douglasaraujo9763 4 years ago +111

    As entertaining as watching a Walt t-SNE movie!

    • @statquest
      @statquest  4 years ago +15

      You made me laugh out loud! BAM! :)

    • @arenashawn772
      @arenashawn772 9 months ago +1

      Best stat-word-play of the year! 😂

  • @atakanekiz
    @atakanekiz 5 years ago +270

    Great explanations! Can you please do a video explaining UMAP and potentially how it compares to t-SNE? Thanks!

  • @jjlian1670
    @jjlian1670 5 years ago +12

    Josh is so far my favorite YouTuber who is able to explain complex stats concepts so smoothly.

    • @statquest
      @statquest  5 years ago +1

      Thank you so much! :)

  • @nanopore-sequence
    @nanopore-sequence 4 years ago +7

    I am a student in Japan.
    I'm not good at English, but it was very easy to understand and I learned a lot:)

  • @OnSightNoMore
    @OnSightNoMore 4 years ago +7

    It's impressive how you managed to explain the essential concepts of this chain of algorithms in such a clear way! I'm sharing this video with my beginner fellows, who normally flee as soon as I say words like nearest-neighbor or stochastic.
    Thank you very much!

    • @statquest
      @statquest  4 years ago

      Thank you very much! :)

    • @willykitheka7618
      @willykitheka7618 2 years ago +2

      🤣🤣🤣🤣 Is it that terrifying?!? Barbara Oakley, in her book "A Mind for Numbers," called them zombies 🤣🤣🤣

  • @edridgedsouza1170
    @edridgedsouza1170 4 years ago +55

    "This is Josh Starmer, and you're watching Tisney Channel!"

  • @gayathrikurada3315
    @gayathrikurada3315 4 years ago +5

    Josh.. your explanation is always "simple and easy to understand", even for a layman. You are simply "The Life Saviour"!!!
    Thank you so much :)

    • @statquest
      @statquest  4 years ago +1

      Hooray! I'm glad my video was helpful. :)

  • @sarangak.mahanta6168
    @sarangak.mahanta6168 2 years ago +1

    The only educational channel which brings a smile to my face.

  • @DoanQuocHoan
    @DoanQuocHoan 4 years ago +3

    I was so confused about t-SNE until I watched this. It's clear and very easy to understand! Thank you! Like your BAM. :D

  • @Ravi5ingh
    @Ravi5ingh 4 years ago +2

    It's rare to come across such a brilliant explanation.

  • @thedrunkprogrammer1474
    @thedrunkprogrammer1474 4 years ago +1

    I really can't appreciate you enough for your videos.
    Books and blogs only make sense after I watch your videos!

    • @statquest
      @statquest  4 years ago

      Thank you very much! :)

  • @snackbob100
    @snackbob100 4 years ago +4

    Josh, I literally love your videos; they are really helping me get through my ADV CS degree. I am going to buy one of your shirts and wear it on campus as a thank you!

    • @statquest
      @statquest  4 years ago +1

      That would be awesome!!! Thank you very much! :)

  • @Kmysiak1
    @Kmysiak1 4 years ago +1

    This explanation almost makes t-SNE sound like a clustering technique, not a reduction technique... That said, this was by far the best explanation I've heard to date.

    • @statquest
      @statquest  4 years ago +1

      That's a good observation. In many ways t-SNE is a hybrid method that reduces dimensions by clustering.

    • @Kmysiak1
      @Kmysiak1 4 years ago +1

      @@statquest Now if you can explain how to interpret a t-SNE plot. This would help immensely, as it's virtually impossible to determine the correct perplexity number without understanding how to interpret the plot. This seems like one of those "black box" methods which we just trust. Keep up the great work!

  • @nikhilgoparapu8183
    @nikhilgoparapu8183 4 years ago +2

    Very clearly explained!
    Loved the way you explained such a complicated concept so intuitively.
    Thank you.

  • @sagar_bro
    @sagar_bro 4 years ago +4

    I just love the way you start all your videos! Stat-Questtttttt :)

  • @abhaymathur9332
    @abhaymathur9332 5 years ago

    This is such an awesome explanation of t-SNE that I don't need to watch any other video or read any other website/book. I don't think there can be a better explanation. Superlike.

  • @ramnarasimhan1499
    @ramnarasimhan1499 7 years ago +1

    Fantastic video. I really appreciate all the slides that you made to get the animation effect. It really helped. Possibly the best explanation of t-SNE around. Keep up the good work.

  • @goeCK
    @goeCK 5 years ago +2

    Came here to understand the t-SNE plots used in single-cell transcriptomics, which I finally did, thanks! Overall, you have already helped me out plenty of times!
    To display cells during cell fate transition/acquisition, e.g. at different time points during neurodevelopment, pseudo-temporal ordering is often used.
    Since scRNA-seq is becoming more and more popular, this might be a good next topic.

    • @erazael
      @erazael 5 years ago

      Same here, and I did not expect to understand so fast and clearly!

  • @saiakhil4751
    @saiakhil4751 3 years ago +3

    Why I couldn't stop bamming the like button??!! You're the best Josh!!

  • @jannelis2845
    @jannelis2845 5 years ago +2

    Very well explained! Your video was recommended to us by our professors at ETH-Zürich. :)

  • @shanthinagasubramanian2866
    @shanthinagasubramanian2866 2 years ago +1

    Very nice way of teaching! ML concepts CLEARLY EXPLAINED, and BAM adds a lot of curiosity to the videos :) Thanks for your videos. And not to forget, your songs are really nice :)

  • @srishtikumar5544
    @srishtikumar5544 4 years ago +1

    Excellently explained! I really like your simple, clear, concise explanation - those 3 factors make a world of difference. And, great animations.

  • @lilmoesk899
    @lilmoesk899 7 years ago

    Great as always. I've heard of t-SNE before, but this was my first real introduction to it. Definitely want to go look at some more resources now.

  • @bright1402
    @bright1402 6 years ago

    This is the best video for t-SNE that I have ever seen. Thanks a lot, man

  • @alvarovs89
    @alvarovs89 2 years ago +1

    Just heard about t-SNE and did not quite understand how it works, so I crossed my fingers hoping that Josh had done a video on this, and of course he did!! haha
    I have my popcorn ready to enjoy this video :)

  • @axeleriksson8978
    @axeleriksson8978 7 years ago +45

    Hey, love your videos!
    Just a typo, but it should be 0.05 in the values on the right at 07:19. It confused me for a second, so this might clear things up for others.

  • @ImmutableHash
    @ImmutableHash 6 years ago

    Awesome explanation, thank you so much! I read a few papers/books multiple times and barely have a clue, but with your vid I understand the concept just by watching it once!

  • @scifimoviesinparts3837
    @scifimoviesinparts3837 3 years ago +1

    The Best tutorial and explanation for TSNE so far! It's of great help! Thanks a lot!

  • @parvezrafi4098
    @parvezrafi4098 6 years ago

    Thanks a lot. I really struggled to understand the concept first time I came across it in a book. Your video helped a lot. Great job!

  • @camilaarcu2254
    @camilaarcu2254 3 years ago +1

    You are incredible, Josh Starmer!! I loved this

  • @NirajKumar-hq2rj
    @NirajKumar-hq2rj 6 years ago

    Excellent explanation; this intuition helps to follow the math behind t-SNE.

  • @vishnumuralidharan9858
    @vishnumuralidharan9858 1 year ago +2

    Hi Josh, I can't thank you enough for how much I have benefitted from your videos even though I do data science as part of my day job. Thank you so much for sharing your knowledge!
    One request for a video: could you do a video of when to use which methods / models in a typical data science problem? Much appreciated.

  • @imamalva5603
    @imamalva5603 4 years ago +1

    You are the hero; keep explaining complex things simply. Thanks!

  • @pierrefoidart5368
    @pierrefoidart5368 4 years ago

    Thanks a lot!! These videos are much clearer than any article!
    A video explaining UMAP (related to t-SNE) would be awesome!

    • @statquest
      @statquest  4 years ago

      I'm working on UMAP. For now, however, know that it is almost 100% the same as t-SNE. The differences are very subtle.

  • @reedayoungblood
    @reedayoungblood 4 years ago +2

    Great video - thank you! One small insertion that I think would improve it: at ~2:07, right after showing what projecting on to the X or Y axis would look like, show one more example of projecting onto an arbitrary line to try to retain as much variance as possible (basically PCA). I think this could be done in 15-20 seconds, and would be helpful in comparing t-SNE to one of its most popular alternatives, which is helpful in deciding *when* to use an algorithm - one of the hardest things for beginners like myself.

  • @redaaitouahmed8250
    @redaaitouahmed8250 4 years ago +2

    Super Mega BAM !! So great at what you do as always ... Tons of love sent your way ! Keep up the amazing work :D

  • @chauphamminh1121
    @chauphamminh1121 5 years ago

    You make a complex idea become so simple and understandable! Great video. Thanks a lot.

  • @octour
    @octour 5 years ago

    Thanks for such a clear explanation. You know, your channel is already on my top list, and very soon I'll have watched all your videos.

  • @HR-yd5ib
    @HR-yd5ib 7 years ago +19

    Excellent video! Perhaps you could add another video where you go through the actual algorithm and how the moves are actually computed.

  • @carlosalfonso5829
    @carlosalfonso5829 6 years ago +1

    Oh God, this is a great explanation! As Radel mentioned below, it would be nice to have an extended video of the algorithm like the one for PCA!!

    • @statquest
      @statquest  6 years ago

      Thank you! Yes, one day I'll break the actual equations down and do "step-by-step" explanation of t-SNE.

    • @niteshturaga
      @niteshturaga 6 years ago

      Looking forward to this.

  • @RajeshSharma-bd5zo
    @RajeshSharma-bd5zo 4 years ago +1

    One word reaction after watching this video --> AWESOME!!

    • @statquest
      @statquest  4 years ago

      Thank you so much 😀!

  • @veeek8
    @veeek8 2 years ago +1

    Brilliant explanation, this has been bugging me all day, thank you!!

  • @soumitachel3844
    @soumitachel3844 4 years ago +1

    Hello Josh, thank you for coming up with such incredible videos. A data scientist's life becomes easy. 😬

    • @statquest
      @statquest  4 years ago

      Thank you! :)

    • @soumitachel3844
      @soumitachel3844 4 years ago

      StatQuest with Josh Starmer Hi, a request to do a tutorial on UMAP.

  • @rgarthwood3881
    @rgarthwood3881 5 years ago +20

    "Clearly Explained" indeed!

  • @abhijitkumbhar1
    @abhijitkumbhar1 1 year ago +1

    Difficult concept made so simple. Just brilliant!!!!

  • @UxJoy
    @UxJoy 4 years ago +1

    Dude this is super clear. Love the content! BAM

    • @statquest
      @statquest  4 years ago

      Thank you very much! :)

  • @benw4361
    @benw4361 5 years ago +1

    Love the vid. I was wondering how t-SNE works, and you broke it down great; the explanation for the t distribution was short and to the point.

  • @petersu4869
    @petersu4869 3 years ago +2

    "Bam, I made that terminology up" :D :D , great vid, keep up the good work.

  • @precisionimmunologyincubat2315
    @precisionimmunologyincubat2315 5 years ago +2

    Thank you so much! Right now everyone in our department (Systems Genetics at NYU Langone) is using UMAP. There aren't many great videos about it - it would be awesome if you could help us understand what all the hype is about!

    • @statquest
      @statquest  5 years ago +2

      UMAP is on the to-do list. I hope to get to it in the spring.

  • @hulaalol
    @hulaalol 3 years ago +1

    Thank you so much for this nice explanation. It will help me a lot in my exams.

  • @deepika3389
    @deepika3389 3 years ago +1

    Kudos, I understood so effortlessly... triple BAM!!!

  • @chaitanyakulkarni243
    @chaitanyakulkarni243 3 years ago +2

    Wish I could *Triple Bam* like this video! Such a simple explanation. Thanks a lot Josh :-)

  • @thoniageo
    @thoniageo 3 years ago +1

    I am a huge fan of this channel! Greetings from Brazil ^^

  • @DaniTeba
    @DaniTeba 4 years ago +1

    Thank you a lot for the video, Josh.
    Let me point something out: by minute 10:40, it looks like t-SNE performs a sort of the matrix, instead of minimizing the loss function by gradient descent.

    • @statquest
      @statquest  4 years ago +1

      Good point. I represented it as a matrix because, internally, all of the similarity scores are maintained that way.
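The matrix of similarity scores mentioned in the reply can be sketched in Python. This is a minimal illustration with made-up data and a fixed sigma of 1 for simplicity (t-SNE actually fits a separate sigma per point); the symmetrization step follows the t-SNE paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))  # 5 made-up points in 2 dimensions

n = len(X)
P = np.zeros((n, n))
for i in range(n):
    d2 = np.sum((X - X[i])**2, axis=1)  # squared distances from point i
    raw = np.exp(-d2 / 2.0)             # unscaled similarities, sigma = 1
    raw[i] = 0.0                        # a point is not its own neighbor
    P[i] = raw / raw.sum()              # scaled: each row sums to 1

# Average each score with its mirror image so that
# similarity(i, j) == similarity(j, i), as in the t-SNE paper.
P_sym = (P + P.T) / (2 * n)
```

The symmetric matrix `P_sym` is what the algorithm tries to reproduce with the low-dimensional similarity scores.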

  • @p.b.3697
    @p.b.3697 5 years ago +1

    Thank you very much Josh . You made it easier to understand.

    • @statquest
      @statquest  5 years ago

      Hooray! I'm glad the video was helpful! :)

  • @somethingandapie
    @somethingandapie 6 years ago +1

    Subscribed because that intro gave me life!

  • @sudortd
    @sudortd 1 year ago +1

    I need to watch 3 more times to fully understand. TRIPLE BAM!!!

  • @Elmirgtr
    @Elmirgtr 6 years ago +6

    You speak like Kevin from The Office. Great explanation, thanks a lot :)

  • @mic9657
    @mic9657 1 year ago +1

    Amazing work! Perfectly explained!!!

  • @simonandrews5604
    @simonandrews5604 5 years ago

    Incredibly helpful and well presented. Thank you.

  • @leixiao169
    @leixiao169 4 years ago +1

    Your explanation is very, very good! Thanks!!!

  • @DumplingWarrior
    @DumplingWarrior 3 months ago +1

    Hi Josh, great videos as always! I'm not sure if there's a video about this already, but could you do one with all the clustering or classification or dimensionality reduction methods compiled together and then compare their differences and similarities and talk about situations when we should use which? For example, after looking at many of the videos, I think I'm already a little lost on if I should use PCA or MDS or t-SNE on my data. Ty.

    • @statquest
      @statquest  3 months ago

      Thanks! I'll keep that in mind.

  • @abcdefghi2650
    @abcdefghi2650 2 years ago +1

    Great videos! Great channel! Big thumbs UP!

  • @BusinessScience
    @BusinessScience 5 years ago +2

    Hey, love your videos! We are actually using it to help explain key concepts in our application-focused courses. I'd love to see UMAP (similar to t-SNE), which is a bit more scalable.

    • @statquest
      @statquest  5 years ago +3

      Thank you so much! It's on the to-do list. :)

    • @BusinessScience
      @BusinessScience 5 years ago +1

      @@statquest Awesome! I'm using your content in my courses - Students love it. PCA, K-Means, & t-SNE. Will be using your ML videos as well. Your explanations are the best!

  • @sandipansarkar9211
    @sandipansarkar9211 4 years ago +1

    Great explanation, especially for beginners. Thanks!

  • @prateekyadav7679
    @prateekyadav7679 4 years ago

    I never thought I'd not understand a statquest video! :(

    • @statquest
      @statquest  4 years ago

      Bummer. What time point was confusing?

  • @MrCEO-jw1vm
    @MrCEO-jw1vm 5 months ago +1

    Thank you so much for this great resource and how much investment you have made into it. I have understood this well.

    • @statquest
      @statquest  5 months ago

      Glad it was helpful!

  • @nathalychicaizacabezas3055
    @nathalychicaizacabezas3055 3 years ago +1

    I am at the intro and love it already!

  • @YuanFuClausie
    @YuanFuClausie 6 years ago +2

    Great video. It would be wonderful if you could explain a bit how the shape of the normal curve is determined! I'm a bit confused there at 4:41.

    • @statquest
      @statquest  6 years ago

      The mean of the normal curve is 0, the distance from the point we are calculating similarities for to itself. The standard deviation is a function of the density of the points around it and, I believe, the perplexity fudge factor. I can't remember the formula off the top of my head, but the higher the density of points, the smaller the standard deviation, and the lower the density of points, the higher the standard deviation.
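The density/standard-deviation relationship described in that reply can be sketched numerically. This mirrors the standard t-SNE procedure (binary-searching sigma for each point until a perplexity target is hit); the squared distances and the target are made up, and this is not code from the video:

```python
import numpy as np

def perplexity(d2, sigma):
    """Perplexity of the similarity distribution for one point,
    given squared distances d2 to its neighbors."""
    p = np.exp(-d2 / (2 * sigma**2))
    p /= p.sum()
    entropy = -np.sum(p * np.log2(p))
    return 2**entropy

def fit_sigma(d2, target=2.0, lo=1e-3, hi=50.0, iters=60):
    """Binary-search sigma so the point's perplexity matches the target."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if perplexity(d2, mid) > target:
            hi = mid  # too many effective neighbors: shrink sigma
        else:
            lo = mid
    return (lo + hi) / 2

dense_d2 = np.array([1.0, 1.5, 2.0, 9.0])  # close neighbors
sparse_d2 = dense_d2 * 4                   # same layout, twice as spread out
```

With the same perplexity target, `fit_sigma(dense_d2)` comes out smaller than `fit_sigma(sparse_d2)`: denser neighborhoods get a narrower normal curve, exactly the relationship stated in the reply.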

  • @markcoffer9290
    @markcoffer9290 5 years ago +1

    Well done! I would love to see videos on handling data outliers for regressions. Thanks!

  • @iamsiddhantsahu
    @iamsiddhantsahu 7 years ago

    Nice explanation of t-SNE for beginners.

  • @daivazian
    @daivazian 6 years ago +1

    Fantastic explanation and comments. Thanks so much!

    • @statquest
      @statquest  6 years ago

      Thank you!! I'm glad you like the video. :)

  • @Bedivine777angelprayer
    @Bedivine777angelprayer 1 year ago +1

    Thanks, really great videos; I understood the concepts so well.

  • @l.pineau4709
    @l.pineau4709 5 years ago +2

    Thanks a lot, TRIPLE BAM for you!

  • @abarnaabalakrishnan1862
    @abarnaabalakrishnan1862 7 years ago

    VERY CLEAR EXPLANATIONS :) THANK YOU FOR ALL YOUR VIDEOS

  • @jamesang7861
    @jamesang7861 5 years ago

    The only info that's stuck clearly in my head is BAM..

  • @vikramreddy3699
    @vikramreddy3699 4 years ago

    Thank you, Josh. I love the way you present concepts with simple examples.
    Could you please explain how you decided to put the red dots on the left and the orange ones on the right side @5:30?

    • @statquest
      @statquest  4 years ago

      It doesn't matter which side of the curve the points are on, since the y-axis values on the curve will be the same (normal curves are symmetrical). However, in order for the points to be easily seen, I spread them out on different sides rather than piling them all up on top of each other.

    • @vikramreddy3699
      @vikramreddy3699 4 years ago +1

      @@statquest Thank you again

  • @IslamEldifrawi
    @IslamEldifrawi 3 years ago +1

    Thanks a million for this masterpiece !!!

  • @nguyenthituyetnhung1780
    @nguyenthituyetnhung1780 7 months ago +1

    Thanks for your great explanation. I just wonder, from 5:00 - 5:45, why, when you plot the distance on the normal curve, the red and the orange are on different sides of the curve? I thought distance didn't have direction. Can you please explain this different direction of the red and orange in more detail?

    • @statquest
      @statquest  7 months ago

      The normal curve is symmetrical, so we can put the dots on either side. In this case, I used both sides so that not all the dots would overlap.

    • @nguyenthituyetnhung1780
      @nguyenthituyetnhung1780 7 months ago +1

      @@statquest Yeah, I understood. Because we take p as the similarity values, right or left is the same. Thanks a lot. Your videos help me a lot in my machine learning studying.

  • @pabloruiz577
    @pabloruiz577 5 years ago +1

    Hi @StatQuest with Josh Starmer, great video!
    The thing I am missing is what happens in each of these steps to move each point. What are the actual 'attract' and 'repel' values, and how are they used to make the similarity matrices closer at each of these steps?

    • @statquest
      @statquest  5 years ago +1

      This is a good question. The actual math is a little too messy to put in this comment, however, the idea is that the matrices are made similar using Gradient Descent, and that's where the attractions and repulsions come in. Here's a quote from the original paper (the link to the paper comes after the quote):
      Physically, the gradient [minimized by gradient descent] may be interpreted as the resultant force created by a set of springs between the [low-dimensional point] y_i and all other [low-dimensional points] y_j. All springs exert a force along the direction (y_i − y_j). The spring between y_i and y_j repels or attracts the map points depending on whether the distance between the two in the map is too small or too large to represent the similarities between the two high-dimensional datapoints. The force exerted by the spring between y_i and y_j is proportional to its length, and also proportional to its stiffness, which is the mismatch (p_{j|i} − q_{j|i} + p_{i|j} − q_{i|j}) between the pairwise similarities of the data points and the map points.
      Here's the link to the paper: www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf
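The spring forces quoted from the paper correspond to the gradient of the t-SNE cost, dC/dy_i = 4 * sum_j (p_ij − q_ij) * (y_i − y_j) / (1 + ||y_i − y_j||^2). A minimal Python sketch of one gradient-descent step, using made-up map points and a made-up target matrix P (not the video's data):

```python
import numpy as np

def tsne_gradient(Y, P):
    """Gradient of the t-SNE cost with respect to the map points Y,
    given the (symmetric) high-dimensional similarity matrix P."""
    n = len(Y)
    # Low-dimensional similarities q_ij from the t distribution.
    d2 = np.sum((Y[:, None] - Y[None, :])**2, axis=2)
    num = 1.0 / (1.0 + d2)
    np.fill_diagonal(num, 0.0)
    Q = num / num.sum()
    grad = np.zeros_like(Y)
    for i in range(n):
        # Spring force along (y_i - y_j): attracts when p > q, repels when p < q,
        # with stiffness given by the mismatch (p_ij - q_ij).
        grad[i] = 4.0 * np.sum(((P[i] - Q[i]) * num[i])[:, None] * (Y[i] - Y), axis=0)
    return grad

rng = np.random.default_rng(1)
Y = rng.normal(size=(4, 2))        # 4 made-up points on a 2-D map
P = np.full((4, 4), 1 / 12.0)      # made-up uniform target similarities
np.fill_diagonal(P, 0.0)
Y_new = Y - 0.1 * tsne_gradient(Y, P)  # one small gradient-descent step
```

When P exactly equals Q, every spring is at rest and the gradient is zero; otherwise each step nudges the map points so Q moves toward P.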

  • @sau002
    @sau002 5 years ago +1

    Excellent intro to tSNE

  • @Tony-Man
    @Tony-Man 8 months ago

    Hi Josh, quality content! This channel continuously helps me understand the ideas behind things so that the dry textbook explanations actually make sense. I still have a question: when you calculate the unscaled similarity score, how exactly do you determine the width of your Gaussian? I get it in the example, where we already know the clusters. If I only want to visualize the data without having pre-defined clusters, what happens then?

    • @statquest
      @statquest  8 months ago

      I talk more about the details of t-SNE and how it works in my videos on UMAP: ua-cam.com/video/eN0wFzBA4Sc/v-deo.html and ua-cam.com/video/jth4kEvJ3P8/v-deo.html

  • @carlmemes9763
    @carlmemes9763 3 years ago +1

    Thanks for this wonderful video❤️

  • @MathPhysicsFunwithGus
    @MathPhysicsFunwithGus 1 year ago +1

    This is a great explanation, thank you!

  • @jarosawbachnio3753
    @jarosawbachnio3753 4 years ago +1

    Hi Josh, great video, many thanks! Anyway, I still don't get how you determine the distribution properties (like the standard deviation) for calculating the unscaled similarity between two points. When you introduced a cluster half as dense as the others, you used a normal distribution with the standard deviation doubled, which is quite intuitive. But you knew that this cluster was just half as dense as the others. The question is, how do you know the properties of these distribution curves?

    • @statquest
      @statquest  4 years ago

      You estimate it from the data.

  • @davidm7765
    @davidm7765 3 years ago +1

    Excellent work, thank you !!

  • @parthgupta1562
    @parthgupta1562 3 years ago +1

    Beautiful explanations! Please make a video on Locally Linear Embedding too.

    • @statquest
      @statquest  3 years ago

      I'll keep that in mind.

  • @ShubhamMajmudar
    @ShubhamMajmudar 5 years ago +1

    Made it look real simple.. thanks!

  • @daaronr
    @daaronr 3 years ago

    Love it! A few things could still be clarified (please?):
    At 07:40, which vector of distances must add up to 1 after scaling? The sum of distances from each point to all other points (regardless of cluster)?

  • @flossenking
    @flossenking 2 years ago +1

    Hey, great explanation! So, do the x and y values in a 2-D t-SNE plot mean anything, exactly? (Or, for that matter, the position value on the one axis in the video?) Our professor told us that they don't because it's reduced from higher dimensions.

    • @statquest
      @statquest  2 years ago +1

      They don't mean anything, but for a different reason than your professor told you. It's not that they are reduced from higher dimensions, it's the method of how they were reduced. PCA, in contrast, is a good example of dimension reduction where the axes have meaning. For details, see: ua-cam.com/video/FgakZw6K1QQ/v-deo.html

  • @alexeilazarev9576
    @alexeilazarev9576 5 years ago +1

    Amazing explanation! Thank you!

  • @techwellness6142
    @techwellness6142 5 years ago +1

    Excellent explanation. Triple BAM!!!