Introduction to Machine Learning - 11 - Manifold learning and t-SNE

  • Published 21 May 2024
  • Lecture 11 in the Introduction to Machine Learning (aka Machine Learning I) course by Dmitry Kobak, Winter Term 2020/21 at the University of Tübingen.

COMMENTS • 38

  • @justuslau2894 · 3 months ago +3

    Absolutely amazing video course. Especially after looking at other sources, I notice how valuable this is. Every video manages to combine intuition and math in a concise way.
    I recommend the videos to anyone who wants to learn about ML.

  • @graedy2 · 28 days ago

    The best video on this topic I have found so far by a large margin. Excellent work!

  • @woodworkingaspirations1720 · 9 months ago +2

    Worth every second. You are a blessing to humanity.

  • @oncedidactic · 1 year ago +1

    Excellent talk with spot on visuals and explanations. Thanks!

  • @Denverse · 3 years ago +7

    Amazingly explained; it's such a great resource.

  • @tusharv204 · 3 years ago +2

    Beautiful explanations!

  • @artemshcherbakov7550 · 2 years ago +1

    Amazing! Super interesting and understandable!

  • @brianchaplin278 · 2 years ago +2

    Great explanation with both details and good examples

  • @wenkangqi9877 · 2 years ago +2

    Amazing Lecture, very well explained! Thank you for sharing!

  • @migueldelvalle8975 · 1 year ago

    Incredibly explained. Congratulations!

  • @xwj7481 · 1 year ago +2

    It is an amazing course, worth the time to watch and learn from it.

  • @benjaminbenjamin8834 · 3 years ago +4

    What amazing explanations... well done... BRAVO!

  • @peterhemmings2929 · 2 years ago +2

    Top quality lecture, thanks for sharing

  • @berkoec · 2 years ago +3

    So well explained! The best video resource I have seen on t-SNE so far!

  • @brianlarocca4390 · 1 year ago

    Wonderful job. Really enjoy watching this.

  • @user-vm9hl3gl5h · 1 year ago +1

    23:20 perplexity: adjust sigma_i for each point i so that the conditional distribution reaches the target perplexity (e.g. 30); sigma_i ends up small in dense regions and large in sparse ones.
    43:40 crowding problem
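    The per-point bandwidth search described in the 23:20 note can be sketched in plain NumPy. This is an editor's illustration, not code from the lecture: the function name, tolerances, and bracketing bounds are invented for the example. It binary-searches the Gaussian bandwidth sigma_i for a single point so that the entropy of its conditional distribution p_{j|i} matches log(perplexity).

    ```python
    import numpy as np

    def sigma_for_perplexity(sq_dists, target_perplexity=30.0, tol=1e-5, max_iter=50):
        """Find sigma_i for one point via binary search, so that the entropy of
        p_{j|i} equals log(target_perplexity). `sq_dists` holds the squared
        distances from point i to every other point (i itself excluded)."""
        target_entropy = np.log(target_perplexity)
        lo, hi = 1e-10, 1e10
        sigma = 1.0
        for _ in range(max_iter):
            # Conditional probabilities p_{j|i} under the current sigma.
            p = np.exp(-sq_dists / (2.0 * sigma ** 2))
            p /= p.sum()
            entropy = -np.sum(p * np.log(p + 1e-12))
            if abs(entropy - target_entropy) < tol:
                break
            if entropy > target_entropy:
                # Distribution too flat (perplexity too high): shrink sigma.
                hi = sigma
                sigma = (lo + sigma) / 2.0
            else:
                # Distribution too peaked (perplexity too low): grow sigma.
                lo = sigma
                sigma = (sigma + hi) / 2.0 if hi < 1e10 else sigma * 2.0
        return sigma
    ```

    Because p_{j|i} depends only on the ratio of squared distances to sigma squared, shrinking all distances (a denser neighborhood) yields a proportionally smaller sigma_i, exactly as the note says.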

  • @robertzell8670 · 1 year ago

    Fabulous video! This was really helpful, thank you!

  • @vivaliberte · 1 year ago

    Awesome explanations. Thank you very much.

  • @Bwaaz · 7 months ago

    Amazing course with great visualizations! Thank you very much.

  • @warrenarnold · 7 months ago

    Thank you for your awesome explanation and illustrations. Very nice, thank you very much.

  • @willw4096 · 9 months ago

    t-SNE is 1) non-linear, 2) non-parametric, 3) stochastic (non-deterministic) 3:28-4:20
    8:46 MNIST
    9:22 PCA's visual
    17:14 17:57
    18:45 t-SNE's visual
    31:29❗two separate blue clusters cannot merge
    32:41 the fix: temporarily increase the early-exaggeration factor to strengthen the attractive forces, then lower it back
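    The early-exaggeration knob from the 32:41 note is exposed directly in scikit-learn's t-SNE, which multiplies the attractive affinities p_ij by this factor during the first optimization phase before reverting to the true values. A minimal sketch, assuming scikit-learn is installed; the subset size and parameter values here are illustrative choices, not the lecture's settings:

    ```python
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X = load_digits().data[:500]  # 500 handwritten-digit images, 64 features each

    emb = TSNE(
        n_components=2,
        perplexity=30,            # the target perplexity from the 23:20 note
        early_exaggeration=12.0,  # default; raising it (e.g. to 20-30) strengthens
                                  # early attraction and can merge split clusters
        init="pca",
        random_state=42,
    ).fit_transform(X)

    print(emb.shape)  # (500, 2)
    ```

    Fixing `random_state` and using PCA initialization makes the otherwise stochastic result reproducible across runs.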

  • @joswarbellidorosas3956 · 6 months ago

    Thank you for this course!

  • @aashishmalhotra · 2 years ago

    amazing content

  • @StarHeartsong · 1 year ago

    Bravo! Thank you very much.

  • @DucLe-kg5hx · 4 months ago

    amazing lecture. Please post more videos.

  • @MrDesperadus · 9 months ago

    Excellent lecture, thanks

  • @jiaxuanchen8652 · 1 year ago

    Amazing!

  • @juanete69 · 1 year ago +1

    Great lesson.
    How can you use t-SNE not just for visualization but also for classification?
    Does t-SNE take into account that some variables are more related to the formation of the clusters while others just add noise?
    I mean, in some models you can calculate the p-value and the SHAP value for each variable. Can you get this kind of information here?

  • @cihanulas5438 · 2 months ago

    Thanks a lot for the great content

  • @taraxmetodopilates3658 · 3 years ago +1

    Greetings from Spain

  • @dragolov · 2 years ago

    Respect!

  • @manfungyu9853 · 2 years ago

    Very good lesson

  • @automatescellulaires8543 · 9 months ago

    amazing, thx.

  • @aricanto1764 · 1 year ago +1

    This video is the bee's knees

  • @DM-py7pj · 6 months ago

    How can one get good results with PCA initialization? Don't we lose the valuable non-linear information?

  • @ccuuttww · 2 years ago

    It's like your cup when you add coffee powder into water

  • @1potdish271 · 2 years ago

    Where can we find lecture notes?