Chieh Wu

Videos

20. K-means, Agglomerative, Spectral Clustering
15 views · 14 days ago
19. Linear Discriminant Analysis (LDA)
18 views · 14 days ago
18. Principal Component Analysis (PCA)
17 views · 14 days ago
17. Covariance Corr NMI
9 views · 14 days ago
Lecture Example 3: sampling from p(x| U are in love with Prof. Wu and use the lyrics from example 2)
44 views · 14 days ago
An example of sampling from a conditional distribution of songs.
Lecture Example 2: sampling from p(x| "in love with Prof. Wu")
16 views · 14 days ago
An AI-generated song demonstrating sampling from a conditional distribution.
Lecture Example 1: sampling from p(x|"in love with prof wu")
24 views · 14 days ago
Lyrics Yo, Professor Wu is the best and her TA Arielle is not bad too. With Prof. Wu we’re diving deep, it’s a math crusade, Matrix, vectors - the tools of the trade. From calculus slides to the ML game, Simplify equations, let’s ignite the flame. (Verse 1) Big data vibes, I see the gradient flow, Partial derivatives, yeah, I’m in the know. Functions with inputs, multi-dimensional ride, Linear ...
Learning Machines (Phil and Emily)
5 views · 14 days ago
Math Crusaders: DS4400 Theme Song (Lyrics by Phil, Voice by Emily)
81 views · 14 days ago
16. Decision Trees
15 views · 14 days ago
15. Support Vector Machine (SVM)
10 views · 14 days ago
14. Logistic Regression
10 views · 14 days ago
13. Understanding Information theory
23 views · 14 days ago
Professor Chieh Wu's YouTube lecture and accompanying slides introduce information theory, starting with the concept of information as inversely related to probability. The lecture then progresses to entropy (measuring uncertainty before an event), cross-entropy (measuring the difference between two probability distributions), and KL divergence (the excess entropy added by using one distribution in place of another)...
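As a concrete companion to that summary, here is a minimal sketch of the four quantities; the two discrete distributions p and q are made up for illustration, not taken from the lecture:

import numpy as np

# Two made-up discrete distributions over the same three outcomes.
p = np.array([0.5, 0.25, 0.25])   # "true" distribution
q = np.array([0.4, 0.4, 0.2])     # approximating distribution

information = -np.log2(p)                   # rare events carry more information
entropy = np.sum(p * -np.log2(p))           # uncertainty before the event, under p
cross_entropy = np.sum(p * -np.log2(q))     # cost of describing data from p using q
kl_divergence = cross_entropy - entropy     # excess entropy from using q instead of p

print(information, entropy, cross_entropy, kl_divergence)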
12. Inversion, Rejection, Importance, KDE Sampling
7 views · 14 days ago
11. Maximum Likelihood (MLE) and Kernel Density Estimation (KDE)
25 views · 14 days ago
10. Recognizing the most common Distributions
10 views · 14 days ago
9. LASSO, Ridge Regression, and Elastic Net Regularization
10 views · 14 days ago
Theme Song (Lyrics by Phil and Voice by Emily)
107 views · 14 days ago
8. The lecture on Expectation Variance
9 views · 14 days ago
7. Review of Probability Concepts
15 views · 14 days ago
6. Data Preprocessing and Intro to regression
9 views · 21 days ago
3. L1, L2, L-infinity norms
12 views · 21 days ago
5. Gradient Descent Vs Closed form explained
42 views · 21 days ago
4. Matrix and Vector calculus
29 views · 21 days ago
2. Review of Vector Matrices operations
14 views · 21 days ago
1. Review of Calculus
64 views · 21 days ago
19 LDA eigendecomposition perspective
63 views · a month ago
18 PCA (The Variance Perspective)
92 views · 2 months ago
17 Covariance, Correlation, NMI
41 views · 2 months ago

COMMENTS

  • @lisajiang9400 · 3 days ago

    please debut

  • @ibenesultan · 8 days ago

    Simple and better

  • @ashwinkumar5223 · 22 days ago

    Superb 🎉

  • @DeemIsTaken · 22 days ago

    How'd I get to this video?

  • @مهیارجهانینسب · a month ago

    Thanks 👍

  • @مهیارجهانینسب · a month ago

    Amazing thank you sir

  • @korob20080 · a month ago

    At 15:00 the inverse (-1) is missing in the pseudoinverse

  • @franckalbinet · a month ago

30:50 Should the denominator when computing the Integral operator be 5 (the number of data points) instead of 2? It does not alter the reasoning anyway...

  • @franckalbinet · 2 months ago

TOC of the lecture:
    01:28 Going back to kernel Theory
    03:20 Operators & Functionals
    05:36 Linear Operators
    07:45 Integral Operator
    11:03 Practice your understanding of Integral Operator
    12:38 The dimensions of Integral Operators
    14:15 Integral Operators Solutions
    15:17 Let's quickly review the concept of Span
    16:40 Identify the Minimum basis vectors
    18:30 Solution to Minimum basis vector
    19:20 The minimum set of basis vectors
    22:04 Solution
    23:32 Symmetric matrices can be split into 3 matrices
    24:51 Let's apply the same idea to RKHS
    27:30 Eigenfunction allows us to use fewer basis functions
    29:35 Practice understanding Eigenfunctions
    31:32 More on Integral Operators
    34:23 Integral Operators and Eigenfunctions
    36:14 How to actually get the Eigenfunctions
    40:11 Rosasco's conclusion leads to a very amazing result
    42:02 Using Eigenfunctions instead of basis vectors
    45:03 Let's now construct the kernel matrix
    48:30 Understanding Mercer's Theorem
    49:27 Is Rosasco telling the truth?
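
    A quick sketch of the step at 36:14, under my own assumptions (a Gaussian kernel and a made-up 1-D sample, not the lecture's code): the eigenvectors of (1/n)K extend to eigenfunctions of the empirical integral operator.

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        # Pairwise k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    X = np.linspace(-2, 2, 30).reshape(-1, 1)   # assumed sample points
    n = len(X)
    K = gaussian_kernel(X, X)
    mu, U = np.linalg.eigh(K / n)               # eigenpairs of (1/n) K, ascending

    def eigenfunction(x, j):
        # phi_j(x) = (1 / (n * mu_j)) * sum_i k(x, x_i) * U[i, j]
        return gaussian_kernel(x, X) @ U[:, j] / (n * mu[j])

    # Sanity check: phi_j evaluated at the samples reproduces the eigenvector U[:, j].
    j = n - 1                                   # largest eigenvalue sits last
    print(np.allclose(eigenfunction(X, j), U[:, j]))   # True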

  • @franckalbinet · 2 months ago

TOC of the lecture:
    00:45 Recap & Warm up
    03:26 Answer
    04:10 Remembering the Span in Linear Algebra
    05:54 Let's extend the idea of Span to function Space
    07:43 Answer for Span in RKHS
    08:16 We previously learned ...
    10:27 The Moore-Aronszajn Theorem
    15:05 The Moore-Aronszajn Theorem in real life
    19:43 The Implications of the Moore-Aronszajn Theorem
    21:06 The Representer Theorem
    21:51 Let's now apply Kernels to Regression
    25:07 Where is the advantage?
    28:48 Example using Gaussian Kernel
    32:09 Evaluating after obtaining α
    33:27 Using α to get f(x)
    36:09 Answer these questions to consolidate what we learned
    37:46 Kernelizing Linear Algorithms
    40:12 Python code
    40:23 But program your own Kernel Regression & "Thinking through is the key to learning"
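
    A minimal companion sketch of the Gaussian-kernel regression steps in this TOC; the toy data and the ridge constant lam are my own assumptions, not the lecture's Python code:

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        # Pairwise k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (40, 1))                      # toy inputs
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)  # noisy targets

    lam = 1e-2                                           # ridge regularizer (assumed)
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(40), y)     # obtain alpha (32:09)

    X_new = np.array([[0.0], [1.0]])
    f_new = gaussian_kernel(X_new, X) @ alpha            # f(x) = sum_i alpha_i k(x, x_i) (33:27)
    print(f_new)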

  • @franckalbinet · 2 months ago

TOC of the lecture:
    00:48 A function is a combination of functions
    03:00 Naming the components of a function
    05:58 The basis tells you the space of possible functions, the coeffs tell you the exact function
    09:33 Changes how we understand functions
    14:16 Functions as points
    16:48 The perspective shift
    19:49 The Hypothesis perspective
    22:02 The wrong hypothesis
    25:10 Picking the correct hypothesis
    26:20 What is f(x)? And the reproducing property
    29:43 The visualization of the Reproducing Property
    32:25 Another Visualization of mapping the data into RKHS
    33:08 Consolidate these concepts
    33:28 Understanding spaces
    33:34 A set is a collection of distinct elements
    34:00 A space is just a set with some defined perspectives
    35:54 Here is a list of major spaces
    37:03 Reproducing Kernel Hilbert Space (RKHS)
    38:54 Let's define a Kernel
    39:52 Let's define a Reproducing Kernel
    42:12 What is a Reproducing Kernel?
    44:52 How do we use Reproducing Kernels?
    49:13 The kernel Matrix
    51:03 Try to answer these questions to consolidate the ideas
    51:15 Practice using the knowledge

    This lecture is a gem, thanks! Gretton's lecture referenced in this video: www.gatsby.ucl.ac.uk/~gretton/coursefiles/rkhscourse.html

  • @yintaizhang7237 · 2 months ago

Great video. Solved lots of puzzles for me.

  • @eliezeroliveira6105 · 2 months ago

    This is a very underrated video. Thanks, professor! I hope you keep posting and growing your channel.

  • @amandeepnokhwal2977 · 2 months ago

Number one..

  • @해위잉 · 3 months ago

    34:26 Integral operators and eigenfunctions

  • @rizwanullah9487 · 3 months ago

What a lovely way of explaining

  • @pnachtwey · 4 months ago

What the professors don't tell you is that in real life there isn't a formula that can be differentiated, and that the 'terrain' isn't like a bowl with an easy path to the minimum.

  • @TIENTI0000 · 4 months ago

    Thanks again

  • @TIENTI0000 · 4 months ago

    Thanks!

  • @dontknowwhataplantis · 4 months ago

Thank you for making these videos! My major is electrical engineering, so having these concepts explained via calculus and statistics makes learning machine learning more approachable. It turns a scary black box into small problems that can be solved.

  • @lisajiang9400 · 5 months ago

    really intuitive explanations!

  • @ayushroy6208 · 5 months ago

    Absolute wonder of a video!!!❤❤❤

  • @oldPrince22 · 6 months ago

    Thank you very much.

  • @arnavep21b004 · 7 months ago

Thanks a lot, Sir! I had been trying to understand the concept for 2 days, and this video gave a very easy and straightforward explanation. Thank You!

  • @darshanparsana7890 · 7 months ago

This is a great and refreshing video for me. I have been looking for some classy materials on kernels and Random Fourier topics in the past few days. Finally landed at a spot-on explanation - this is what I have been looking for. Thank you.🙇

  • @VikasSunkad · 7 months ago

    Super Explanation...

  • @arandomwho · 7 months ago

    Thanks for your incredible explanation in the simplest way!

  • @shagun5690 · 7 months ago

Is the ppt available?

  • @anish72771 · 8 months ago

please make it public

    • @ChiehWu · 8 months ago

The university asked me to take it down.

    • @ChiehWu · 8 months ago

I will republish it next semester.

  • @anish72771 · 8 months ago

Hi sir, why have you hidden the videos?

  • @reda9877 · 8 months ago

    Thank you very much

  • @KrishnaDwivedi-yd1hb · 8 months ago

Very good explanation, loved it. Can you make a video on PCA in machine learning?

    • @ChiehWu · 8 months ago

      Watch this lecture series. ua-cam.com/play/PLdk2fd27CQzQo1DEthJ5MDBHoC66_1aDN.html

  • @bbanahh · 8 months ago

    Brilliant!

  • @hkrish26 · 9 months ago

Thanks. May I know where I can download the lectures, or could you suggest a book?

    • @hkrish26 · 8 months ago

      Thanks for having everything here.

  • @louis9854 · 9 months ago

Very well explained!

  • @AliReza-su2uv · 9 months ago

    thank you for this! very useful

  • @John-wx3zn · 9 months ago

Just by looking at it, I know the minimum is at (1, 1), so why do you use gradient descent to find it?

  • @AliReza-su2uv · 9 months ago

Just learning kernels through ridge regression. Super helpful, thanks!

  • @ShikhaMallick · 10 months ago

    Timestamps: 15:24

  • @sau002 · 10 months ago

    Very nicely explained.

  • @shivamchhetry9594 · 10 months ago

    Hi @chiehwu, your teaching style has really helped me grasp the intricacies of derivatives and understand their role in various algorithms. Thank you for sharing these fantastic lecture videos that connect concepts to real-life problems. I've been taking notes by pausing the videos, but it interrupts my flow. Could you please share the slides or notes on platforms like GitHub or Drive for easy download and future reference? Thanks a lot!

  • @muhannedalogaidi7877 · 10 months ago

    That is really a smart explanation

  • @dylaningham2259 · 10 months ago

    Super informative

  • @Tn3m3lc · 10 months ago

Very nice explanation indeed. Yet I still have a question. At 21:02 you say that sum(q(z)) = 1 and can be excluded from the second member of the distribution. But why not from the first?

  • @cesar4508 · 10 months ago

    "Promo SM"

  • @sau002 · 11 months ago

    Very nicely explained. Thank you.

  • @amitk3474 · 11 months ago

    Please post links to slides and other materials.

  • @43SunSon · 11 months ago

No sound at all?

  • @moazesmail5517 · 11 months ago

    magnificent

  • @omerfarukcapknoglu8457 · a year ago

    thank you so so much :)