Ali Ghodsi, Lec 1: Principal Component Analysis

  • Published 2 Dec 2024

COMMENTS • 169

  • @muhammadsarimmehdi
    @muhammadsarimmehdi 5 years ago +73

    I seriously hope he teaches a lot more machine learning and those lectures get published here. He is the only teacher I found who actually dives into the math behind machine learning.

    • @logicboard7746
      @logicboard7746 3 years ago

      Agreed

    • @alizain16
      @alizain16 2 years ago

      You can also just take lectures directly from Statistics professors.

    • @ElKora1998
      @ElKora1998 2 years ago

      This guy owns Databricks now, the biggest AI startup in the world. He isn't coming back anytime soon, sadly 😂

    • @andrewmills7292
      @andrewmills7292 2 years ago +3

      @@ElKora1998 Not the same person

    • @ElKora1998
      @ElKora1998 2 years ago

      @@andrewmills7292 you sure? He looks identical!

  • @rizwanmuhammad6468
    @rizwanmuhammad6468 4 years ago +5

    He teaches at a level that really makes you understand. Best teacher. No showing off, no hand-waving. A genuine teacher whose goal is to teach. Thank you, thank you.

  • @andresrossi9
    @andresrossi9 4 years ago +6

    This professor is amazing. I'm Italian, so it's harder for me to follow a lesson in English than in my own language. Well, it was much, much easier to understand PCA here than in any other PCA lesson or paper in Italian. And not only that, he gave a more rigorous explanation too! Outstanding, really...

  • @pantelispapageorgiou4519
    @pantelispapageorgiou4519 3 years ago +6

    No words to describe the greatness of this professor!

  • @shashanksagarjha2807
    @shashanksagarjha2807 6 years ago +31

    If you ask my opinion, his videos are the best ones on ML and deep learning on YouTube.

    • @VahidOnTheMove
      @VahidOnTheMove 5 years ago +8

      I agree. Watch 21:00. The student answers that it's the eigenvalue and the eigenvector, and the instructor says, OK, this is correct, but why!?
      In most videos I have seen on YouTube, people who pretend to be experts do not know (or do not explain) the logic behind their claims.

    • @muratcan__22
      @muratcan__22 5 years ago

      @@VahidOnTheMove exactly

    • @nazhou7073
      @nazhou7073 5 years ago

      I agree!

  • @anirudhthatipelli8765
    @anirudhthatipelli8765 1 year ago +1

    Thanks, this is by far the most detailed explanation of PCA.

  • @leeris19
    @leeris19 3 months ago

    This is what I've been looking for! Every other explanation out there just raises more questions: "Get the covariance" (what for?), "Do the decomposition" (why?), "Use the eigenvectors" (huh??). Thank you for answering every question I had!

  • @MoAlian
    @MoAlian 7 years ago +71

    I'd have won a Fields Medal if I'd had a professor like this guy in my undergrad.

    • @crazyme1266
      @crazyme1266 6 years ago +4

      I think you can do that right now... age is just a number when it comes to learning and creating :D

    • @pubudukumarage3545
      @pubudukumarage3545 6 years ago +5

      Crazy || ME :) Only if you are not older than 40... the Fields Medal is only awarded to mathematicians under 40.

    • @crazyme1266
      @crazyme1266 6 years ago +1

      @@pubudukumarage3545 Oh, thanks for the information.... I didn't know that.... Guess I really am kinda crazy, huh?? XD

    • @slowcummer
      @slowcummer 3 years ago

      Yeah, a Garfield the cat medal I'm sure you can win. Just give him Lasagne.

    • @godfreypigott
      @godfreypigott 3 years ago

      Clearly you would not have won a medal for your English ability.

  • @mu11668B
    @mu11668B 4 years ago +1

    Wow. This IS what I'm looking for!! Thank you SO much!
    BTW the explanation for 20:21 is simple if you already have some experience manipulating linear algebra.
    Just decompose the matrix S into EΛ(E^-1); writing u in that eigenbasis turns the objective into a weighted sum of the eigenvalues, with weights that sum to 1 (assuming the data are already standardized, which is crucial).
    So to get the maximum you put all of the weight on the eigenvector of the largest eigenvalue, and the maximum value is that largest eigenvalue.
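
    A quick numerical check of the argument above (a minimal numpy sketch on a randomly generated covariance matrix S, not the lecture's data):

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 3))          # 500 samples, 3 features (synthetic)
      S = np.cov(X, rowvar=False)            # sample covariance matrix

      eigvals, eigvecs = np.linalg.eigh(S)   # S = E Lambda E^T, eigenvalues ascending
      u_best = eigvecs[:, -1]                # eigenvector of the largest eigenvalue

      # u^T S u attains the largest eigenvalue at that eigenvector...
      print(u_best @ S @ u_best, eigvals[-1])

      # ...and no other unit vector does better.
      u = rng.normal(size=3)
      u /= np.linalg.norm(u)
      assert u @ S @ u <= eigvals[-1] + 1e-12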

  • @YouUndeground
    @YouUndeground 5 years ago +1

    This is the best video about this subject, including the math behind it, that I've found so far.

  • @joshi98kishan
    @joshi98kishan 6 months ago

    Thank you professor. This lecture explains exactly what I was looking for - why principal components are the eigenvectors of the sample covariance matrix.

  • @najme9315
    @najme9315 2 years ago

    Iranian professors are fantastic! And Prof. Ali Ghodsi is one of them.

  • @ayushmittal1287
    @ayushmittal1287 2 years ago

    Teachers like him make more people fall in love with the topic.

  • @hassanebouzahir2653
    @hassanebouzahir2653 1 month ago

    3:30 We can always reduce the dimensionality of the features by projecting them onto an optimal subspace.

  • @Entilema
    @Entilema 6 years ago +5

    Thank you so much! I tried to learn the same topic from other videos and it was impossible to understand; this is so clear, well ordered, and intuitively explained. Awesome lecturer!

  • @jeffreyzhuang4395
    @jeffreyzhuang4395 1 year ago

    43:03 The entries in Σ are not eigenvalues of A transpose A, but square roots of eigenvalues of A transpose A.
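
    A small numpy check of this correction (using a random matrix A as a stand-in, since the lecture's A is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(6, 4))

      singular_values = np.linalg.svd(A, compute_uv=False)      # descending
      eig_AtA = np.linalg.eigvalsh(A.T @ A)[::-1]                # descending

      # Entries of Sigma are the square roots of the eigenvalues of A^T A.
      print(np.allclose(singular_values, np.sqrt(eig_AtA)))      # True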

  • @jaivratsingh9966
    @jaivratsingh9966 6 years ago +4

    Dear Prof, at 28:46 you say that the tangent of f and the tangent of g are parallel to each other; possibly you meant that the gradients, i.e. the normals, of f and g are parallel to each other. Anyway, it effectively means the same thing. Excellent video!
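
    For reference, the parallel-gradient (Lagrange) condition this comment refers to, written out in LaTeX for the PCA objective (with S the sample covariance and the unit-norm constraint from the lecture):

      \max_{u} \; f(u) = u^{\top} S u
      \quad \text{subject to} \quad
      g(u) = u^{\top} u - 1 = 0 ;
      \qquad
      \nabla f = \lambda \nabla g
      \;\Longrightarrow\;
      2 S u = 2 \lambda u
      \;\Longrightarrow\;
      S u = \lambda u .

    So the stationary points are eigenvectors of S, and since u^T S u = λ at such a point, the maximizer is the eigenvector with the largest eigenvalue.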

  • @هشامأبوسارة-ن7و
    @هشامأبوسارة-ن7و 5 years ago +1

    Good lecture. PCA tries to find the direction in space, namely a vector, that maximises the variance of the points or observations projected onto that vector. Once the above method finds the 1st principal component, the second component is the variance-maximising direction orthogonal to the first component.

  • @vamsikrishnakodumurumeesal1324
    @vamsikrishnakodumurumeesal1324 4 years ago

    By far, the best video on PCA

  • @alpinebutterfly8710
    @alpinebutterfly8710 3 years ago

    This lecture is amazing, your students are extremely lucky ...

  • @zhaoxiao2002
    @zhaoxiao2002 2 years ago +1

    At time 1:03:26, [U, D, V] = svd(X). Question: should we do svd(X - E(X)), since X contains pixel values in [0, 255] and the data points in X are not centered at E(X)?
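
    For what it's worth, the usual eigenfaces recipe does subtract the mean face before the SVD. A minimal numpy sketch of that step (the random array X below is only a stand-in for the face data, with one vectorized image per column as in the lecture):

      import numpy as np

      # Stand-in for the face data: one image per column (d x n), pixel values in [0, 255].
      X = np.random.rand(64 * 64, 400) * 255.0

      mean_face = X.mean(axis=1, keepdims=True)    # E(X), the average face
      X_centered = X - mean_face                   # center before the SVD

      U, D, Vt = np.linalg.svd(X_centered, full_matrices=False)
      # Columns of U are now directions of maximal variance around the mean face;
      # a k-component reconstruction projects onto U[:, :k] and adds mean_face back.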

  • @김성주-h1b
    @김성주-h1b 4 years ago

    This is the best pca explanation I've ever seen!! 👍👍

  • @usf5914
    @usf5914 5 years ago

    Now we see a teacher with a clear and open mind.

  • @YashMRSawant
    @YashMRSawant 5 years ago

    Sir, I have one question @1:01:50. Suppose I had only one face image, with each pixel's distribution independent of the others but with mean equal to the original face value at that pixel. I think the first, second, and later PCs would be dominated by noise, yet we would still be able to see the face?

  • @NirajKumar-hq2rj
    @NirajKumar-hq2rj 4 years ago +1

    Around 44:50, you explained M as the set of mean values of the x_i data points; shouldn't mean(x_i) use a factor of 1/d rather than 1/n, since the sum of x_i runs over i = 1 to d?

  • @alirezasoleimani2524
    @alirezasoleimani2524 1 year ago

    Amazing lecture. I really enjoyed every single second ....

  • @srishtibhardwaj400
    @srishtibhardwaj400 6 years ago +4

    That was an amazing lecture Sir! Thank you!

  • @kamilazdybal
    @kamilazdybal 5 years ago +2

    Great lecture! Tip to the camera person: there's no need to zoom in on the powerpoint. The slides were perfectly readable even when they were at 50% of the video area but it is much better to see the lecturer and the slide at the same time. Personally, it makes me feel more engaged with the lecture than just seeing a full-screen slide and hearing the lecturer's voice.

  • @ayouyayt7389
    @ayouyayt7389 2 years ago

    At 17:25, sigma should be sigma squared for it to be called the variance; otherwise we call it the standard deviation.

  • @rbr951
    @rbr951 6 years ago +1

    Wonderful lecture that's both intuitive and mathematically excellent.

  • @ferensao
    @ferensao 5 years ago

    This is a great tutorial video; I could grasp the idea behind PCA easily and clearly.

  • @josephkomolgorov651
    @josephkomolgorov651 4 years ago

    Best lecture on PCA!

  • @suhaibkamal9481
    @suhaibkamal9481 4 years ago

    Not a fan of learning through YouTube videos, but this was an excellent lecture.

  • @Amulya7
    @Amulya7 2 years ago

    43:20, aren't they the square roots of the eigenvalues of X^T X or X X^T?

  • @xuerobert5336
    @xuerobert5336 6 years ago

    This series of videos is so great!

  • @sudn3682
    @sudn3682 6 years ago

    Man, this is pure gold!!

  • @muratcan__22
    @muratcan__22 5 years ago +1

    everything is explained crystal clear. thanks!

  • @oguzvuruskaner6341
    @oguzvuruskaner6341 4 years ago

    There is more mathematics in this video than in a data science curriculum.

  • @abdelkrimmaarir5104
    @abdelkrimmaarir5104 5 months ago

    Thank you for this lecture. Where can we find the dataset used for the noisy faces?

  • @slowcummer
    @slowcummer 3 years ago

    He's an astute mathematician with virtuoso teaching skills.

  • @fish-nature-lover
    @fish-nature-lover 7 years ago +1

    Great lecture Dr. Ali...Thanks a lot

  • @asmaalsharif358
    @asmaalsharif358 5 years ago +2

    Thanks for this explanation. Please, how can I contact you? I have an inquiry.

  • @haideralishuvo4781
    @haideralishuvo4781 3 years ago

    Amazing lecture, fabulous!

  • @yuanhua88
    @yuanhua88 4 years ago

    Best video about the math of PCA, thanks!

  • @chaoyufeng9927
    @chaoyufeng9927 5 years ago +1

    it's amazing and it really made me understand clearly!!!!!!!!!

  • @iOSGamingDynasties
    @iOSGamingDynasties 3 years ago

    Great video! Really nice explanation

  • @streeetwall3824
    @streeetwall3824 6 years ago

    Thank you prof Ghodsi, very helpful

  • @ardeshirmoinian
    @ardeshirmoinian 4 years ago

    So, using the SVD, is it correct to say that the columns of U are similar to the PC loadings (eigenvalue-scaled eigenvectors) and V is the scores matrix?
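
    The answer depends on how the data matrix is oriented. A small numpy sketch under the assumption that rows are samples and columns are features (the opposite convention simply swaps the roles of U and V); terminology varies, but here the columns of V are the unit-length principal directions and U*D are the scores:

      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 5))            # rows = samples, columns = features (assumed)
      Xc = X - X.mean(axis=0)                  # center each feature

      U, D, Vt = np.linalg.svd(Xc, full_matrices=False)

      directions = Vt.T                        # unit-length principal directions (eigenvectors of S)
      scores = U * D                           # projections of the samples onto those directions
      loadings = Vt.T * (D / np.sqrt(len(X) - 1))   # eigenvalue-scaled eigenvectors, if that is what "loadings" means here

      # Consistency check against the covariance eigendecomposition:
      S = np.cov(Xc, rowvar=False)
      print(np.allclose(D**2 / (len(X) - 1), np.linalg.eigvalsh(S)[::-1]))   # True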

  • @qasimahmad6714
    @qasimahmad6714 3 years ago

    Is it important to show a 95% confidence ellipse in PCA? Why is that? If my data doesn't produce one, what should I do? Can I use a PCA score plot without the 95% confidence ellipse?

  • @ksjksjgg
    @ksjksjgg 2 years ago

    concise and clear explanation

  • @purushottammishra3423
    @purushottammishra3423 6 months ago

    I got answers to almost every "WHY?" that I had while reading books.

  • @GebzNotJebz
    @GebzNotJebz 5 months ago

    Amazing lecture

  • @kheireddinechafaa6075
    @kheireddinechafaa6075 3 years ago

    At 18:03, I think it should be "a squared times sigma squared", not sigma?

  • @aravindanbenjamin4766
    @aravindanbenjamin4766 3 years ago

    Does anyone know the proof for the second PC?

  • @bhrftm5178
    @bhrftm5178 5 years ago +1

    More power to you. It was excellent.

  • @CTT36544
    @CTT36544 5 years ago

    1:14 Be careful: this example is not the most appropriate one. Note that PCA is basically an axis rotation, so it usually does not work well for data with a "donut" (or Swiss roll) structure. A better way is to use either kernel PCA or MVU (maximum variance unfolding); a small kernel-PCA sketch follows this thread.

    • @YashMRSawant
      @YashMRSawant 5 years ago +1

      I think this is not about PCA but about the fact that distributions in higher dimensions can be projected to lower dimensions such that, as much as possible, there is a one-to-one correspondence between the higher-dimensional points and their lower-dimensional counterparts.

    • @anilcelik16
      @anilcelik16 3 years ago

      He already mentions that; the assumption is that the data lie close to a plane, like a sheet of paper.
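
    A rough illustration of the kernel-PCA point above (a minimal sketch with synthetic Swiss-roll data; scikit-learn's KernelPCA is one of several options, and the rbf gamma value is arbitrary):

      from sklearn.datasets import make_swiss_roll
      from sklearn.decomposition import PCA, KernelPCA

      X, t = make_swiss_roll(n_samples=1000, random_state=0)

      linear_2d = PCA(n_components=2).fit_transform(X)       # a rotation followed by a projection
      kernel_2d = KernelPCA(n_components=2, kernel="rbf", gamma=0.05).fit_transform(X)

      # linear_2d can only rotate and project, so the roll stays folded onto itself;
      # kernel_2d works in an implicit feature space and can unfold the sheet better
      # (plot each embedding colored by t to compare).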

  • @godfreypigott
    @godfreypigott 3 years ago

    Why has lecture 3 been deleted? How do we watch it?

  • @monalisapal6586
    @monalisapal6586 3 years ago

    Can I find the slides online ?

  • @obsiyoutube4828
    @obsiyoutube4828 5 years ago

    Could we get code and application areas for PCA?

  • @lavanya7339
    @lavanya7339 3 years ago

    wow....great lecture

  • @debayondharchowdhury2680
    @debayondharchowdhury2680 5 years ago

    This is Gold.

  • @lmurdock1250
    @lmurdock1250 4 years ago

    mind blown in the first two minutes

  • @Anil-vf6ed
    @Anil-vf6ed 7 years ago

    Dear Prof, Thanks for the lecture. Is it possible to share the lecture materials? Thank you!

  • @mojtabafazli6846
    @mojtabafazli6846 7 years ago

    That's great, can we have access to that noisy dataset?

  • @betterclever
    @betterclever 6 years ago

    The lecture is great, but that struggle to find the image size, though.

  • @bhomiktakhar8226
    @bhomiktakhar8226 3 years ago

    Oh what an explanation!!

  • @Dev-rd9gk
    @Dev-rd9gk 6 years ago

    Amazing lecture!

  • @rajeshreddy3133
    @rajeshreddy3133 4 years ago

    Amazing lecture..

  • @rabeekhrmashow9195
    @rabeekhrmashow9195 4 years ago

    Thank you, it's the first time I understand PCA. I am studying a master's in financial mathematics; sorry, I just want to do every step manually. Is that possible?

  • @alexyakyma1479
    @alexyakyma1479 3 years ago

    Good lecture. Thank you.

  • @mariasargsyan5170
    @mariasargsyan5170 5 years ago +2

    he is great

  • @jimm9465
    @jimm9465 6 years ago +1

    the best ever, thanks!

  • @abhilashsharma1992
    @abhilashsharma1992 4 years ago

    At 19:32, why is Var(u_1^T x) = u_1^T S u_1?

    • @khubaibraza8446
      @khubaibraza8446 4 years ago +3

      S is just notation: S is the covariance matrix of the original data X.
      u1 is a constant (we can say). Pulling a constant out of a variance squares it, but in the vector case that "square" becomes a u1 on each side (u1 transpose on the left, u1 on the right),
      so the final expression is u1^T S u1.
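
      Spelled out a bit more formally (a short LaTeX derivation in the lecture's notation, with S the covariance matrix of x):

        \operatorname{Var}(u_1^{\top} x)
        = \mathbb{E}\big[\big(u_1^{\top}(x - \mathbb{E}[x])\big)^2\big]
        = \mathbb{E}\big[u_1^{\top}(x - \mathbb{E}[x])(x - \mathbb{E}[x])^{\top} u_1\big]
        = u_1^{\top}\, \mathbb{E}\big[(x - \mathbb{E}[x])(x - \mathbb{E}[x])^{\top}\big]\, u_1
        = u_1^{\top} S\, u_1 .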

  • @msrasras
    @msrasras 7 years ago

    Great lecture, thank you sir

  • @rampage14x13
    @rampage14x13 5 years ago

    Around 22:00 can someone explain why the function is quadratic?

    • @yannavok7901
      @yannavok7901 5 years ago +2

      t = transpose, ^2 = squared.
      This function is quadratic because of u and ut:
      a quadratic function of one variable has the form ax^2 + bx + c,
      and a quadratic function of two variables has the form ax^2 + bxy + cy^2 + dx + ey + g.
      Let's consider an example:
      1- Suppose the vector u = [x1, x2]t, so ut = [x1  x2], and the matrix S = [[1/2, -1], [-1/2, 1]].
      2- ut S gives us the row vector [1/2*x1 - 1/2*x2,  -x1 + x2].
      3- ut S u gives the following function, which is a scalar once the vector u is known:
      ut S u = 1/2*x1^2 + x2^2 - 3/2*x1*x2,
      so ut S u is quadratic.

    • @rbzhang3374
      @rbzhang3374 5 years ago

      Definition?

  • @husamatalla8912
    @husamatalla8912 7 years ago

    ALL Thanks Dr.Ali

  • @mayankkhanna9644
    @mayankkhanna9644 3 years ago

    How??? How is the variance of the projected data equal to u^T S u?

    • @mayankkhanna9644
      @mayankkhanna9644 3 years ago

      Got it

    • @venkatk1591
      @venkatk1591 3 years ago

      I am not clear on this. Can you explain?

    • @mayankkhanna3284
      @mayankkhanna3284 3 years ago

      @@venkatk1591 you can see the explanation here - ua-cam.com/video/WpYoKsWKS7w/v-deo.html

  • @PradeepKumar-tl7dd
    @PradeepKumar-tl7dd 8 months ago

    Best video on PCA.

  • @logicboard7746
    @logicboard7746 3 years ago

    Jump to @17:20, then @29:30 and @42:10.

  • @arnoldofica6376
    @arnoldofica6376 6 years ago

    English subtitles please!

  • @yuanhua88
    @yuanhua88 4 years ago

    great lecture thanks!!!

  • @bosepukur
    @bosepukur 6 years ago

    excellent lecture

  • @ProfessionalTycoons
    @ProfessionalTycoons 6 years ago +1

    AMAZING!

  • @bodwiser100
    @bodwiser100 2 years ago

    Awesome!

  • @Rk40295
    @Rk40295 2 years ago

    Best of luck, God willing.

  • @arsenyturin
    @arsenyturin 4 years ago

    I'm still lost :(

  • @VanshRaj-pf2bm
    @VanshRaj-pf2bm 7 months ago

    Which kid is this lecture meant for?

  • @chawannutprommin8204
    @chawannutprommin8204 6 years ago

    This gave me a moment of epiphany.

  • @maurolarrat
    @maurolarrat 7 years ago

    Excellent.

  • @masor17utm86
    @masor17utm86 5 years ago

    Why is Var(u_1^T x) = u_1^T S u_1?

    • @yannavok7901
      @yannavok7901 5 years ago +4

      t = transpose, ^2 = squared.
      To show that Var(ut X) = ut S u, I will use
      - the König form of the variance: Var(X) = E(X^2) - E^2(X)
      - and this form of the covariance matrix: COV(X) = E(X Xt) - E(X) [E(X)]t
      So let's start, using the König form to write the variance:
      1- Var(ut X) = E((ut X)^2) - E^2(ut X)
      * Since (ut X)^2 = (ut X) [(ut X)]t, the first term is E( (ut X) [(ut X)]t ), and the second term is E^2(ut X) = E(ut X) [E(ut X)]t. So we get:
      2- Var(ut X) = E( (ut X) [(ut X)]t ) - E(ut X) [E(ut X)]t
      * Since [ut]t = u and [(ut X)]t = Xt u (notice that the transpose reverses the order of multiplication), the first term becomes E( (ut X) (Xt u) ). So we get:
      3- Var(ut X) = E( (ut X) (Xt u) ) - E(ut X) [E(ut X)]t
      * The expectation of a constant vector (or matrix) is that vector (or matrix) itself, so E(u) = u and E(ut) = ut, while X stays random.
      Hence the first term is E(ut X Xt u) = ut E(X Xt) u, and the second term is E(ut X) [E(ut X)]t = ut E(X) [ut E(X)]t = ut E(X) [E(X)]t u. So we get:
      4- Var(ut X) = ut E(X Xt) u - ut E(X) [E(X)]t u
      * Factor out ut on the left and u on the right:
      5- Var(ut X) = ut [ E(X Xt) - E(X) [E(X)]t ] u
      * Since COV(X) = E(X Xt) - E(X) [E(X)]t, we get:
      6- Var(ut X) = ut COV(X) u
      * Here S = COV(X), so finally:
      7- Var(ut X) = ut S u
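
      The same identity is easy to sanity-check numerically (a minimal numpy sketch on random data; np.cov and np.var both use the n-1 normalization here):

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(10000, 4))       # rows are samples of the random vector X
        u = rng.normal(size=4)
        u /= np.linalg.norm(u)

        S = np.cov(X, rowvar=False)           # sample covariance matrix (n-1 normalization)
        lhs = np.var(X @ u, ddof=1)           # Var(ut X) estimated from the projected samples
        rhs = u @ S @ u                       # ut S u

        print(np.isclose(lhs, rhs))           # True up to floating point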

  • @BelkacemKADRI-u6d
    @BelkacemKADRI-u6d 11 months ago

    He's a great one.

  • @raduionescu9765
    @raduionescu9765 3 years ago

    WITH THE HELP OF GOD WE ADVANCE IN A STRAIGHT LINE THINKING SPEAKING BEHAVIOR ACTIONS LIFE TO THE HIGHEST STATE OF PERFECTION GOODNESS RIGHTEOUSNESS GOD'S HOLINESS EXACTLY AS WRITTEN IN THOSE 10 LAWS

  • @usf5914
    @usf5914 5 years ago

    Thanks.

  • @andrijanamarjanovic2212
    @andrijanamarjanovic2212 3 years ago

    👏👏👏👏👏👏👏👏👏👏👏👏👏

  • @aayushsaxena1316
    @aayushsaxena1316 7 years ago

    perfect :)

  • @berknotafraid
    @berknotafraid 5 years ago

    You're the man!

  • @nazhou7073
    @nazhou7073 5 years ago

    That's amazing!

  • @ProfessionalTycoons
    @ProfessionalTycoons 5 years ago

    2

  • @mortezaahmadpur
    @mortezaahmadpur 6 years ago

    viva Iran

  • @parameshwarareddypalle6013
    @parameshwarareddypalle6013 5 years ago

    worst lecture