Machine learning - linear prediction

  • Published 18 Dec 2024

COMMENTS • 32

  • @manzooranoori2536 2 years ago +2

    I just love your lectures. Thank you. I'm learning a lot from you.

  • @forughghadamyari8281 9 months ago +1

    Hi. Thanks for the wonderful videos. Please recommend a book to study alongside this course.

  • @panayiotispanayiotou1469 6 years ago +1

    57:17 The dimensions of the X matrix are n × (d + 1), while those of the theta matrix are d × 2. The first x inside the X matrix should be renamed to x_12 so that the number of columns stays d.

    • @dhoomketu731 4 years ago

      You are right. The parameter matrix to the right of the data matrix X should have d+1 rows.
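
    As a quick check on the shapes discussed in this thread, here is a minimal numpy sketch (all values made up for illustration): once a bias column is prepended, X is n × (d + 1), so the parameter vector theta needs d + 1 entries for the product X·theta to be defined.

        import numpy as np

        n, d = 5, 3                               # n examples, d raw features
        X_raw = np.random.randn(n, d)             # raw data, shape (n, d)
        X = np.hstack([np.ones((n, 1)), X_raw])   # prepend bias column -> shape (n, d + 1)
        theta = np.random.randn(d + 1)            # parameters must have d + 1 entries
        y_hat = X @ theta                         # predictions, shape (n,)
        print(X.shape, theta.shape, y_hat.shape)  # (5, 4) (4,) (5,)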

  • @kumorikuma 8 years ago +1

    Great lectures. Thanks so much! I learn much better when I can watch a pre-recorded lecture and pause it / go back at my own pace. (Also, I can skip the 9:30 AM class, heh.)

  • @joehsiao6224 4 years ago

    @45:59 The right-hand side of the proof looks wrong, specifically x_i^T Θ: x_i is 1×n and Θ is n×1, so there is no need to transpose x_i.
    Unless x_i is n×1, but x_i is not defined anywhere before this point.

    • @joehsiao6224 4 years ago

      I found that the term is corrected in another video: ua-cam.com/video/QGOda9mz_yA/v-deo.html&ab_channel=NandodeFreitas&t=22m46s
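
    For anyone checking the shapes here: the notation is consistent if each x_i is treated as a column vector, so x_i^T θ is a scalar. A tiny numpy sketch with made-up vectors:

        import numpy as np

        d = 3
        x_i = np.random.randn(d, 1)    # x_i as a column vector, shape (d, 1)
        theta = np.random.randn(d, 1)  # theta as a column vector, shape (d, 1)
        pred = x_i.T @ theta           # x_i^T theta -> shape (1, 1), i.e. a scalar
        print(pred.shape)              # (1, 1)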

  • @bach1792 9 years ago +8

    Great courses, man. Thanks.

  • @el-ostada5849 2 years ago

    Thank you for everything you have given to us.

  • @charlescoult 2 years ago

    This was an excellent lecture. Thank you.

  • @parteekkansal 6 years ago +1

    The matrix differentiation result d(θ^T A θ)/dθ = 2Aθ is valid when A is symmetric; for a general A the derivative is (A + A^T)θ.
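
    A rough numerical check of this point, with a made-up A and theta (not from the lecture): for a general A the gradient of θ^T A θ is (A + A^T)θ, and it reduces to 2Aθ exactly when A is symmetric.

        import numpy as np

        def grad_fd(f, theta, eps=1e-6):
            """Central-difference gradient of a scalar function f at theta."""
            g = np.zeros_like(theta)
            for i in range(theta.size):
                e = np.zeros_like(theta)
                e[i] = eps
                g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
            return g

        A = np.random.randn(4, 4)           # general, non-symmetric matrix
        theta = np.random.randn(4)
        f = lambda t: t @ A @ t             # quadratic form theta^T A theta
        print(np.allclose(grad_fd(f, theta), (A + A.T) @ theta))      # True for any A

        A_sym = A + A.T                     # a symmetric matrix
        f_sym = lambda t: t @ A_sym @ t
        print(np.allclose(grad_fd(f_sym, theta), 2 * A_sym @ theta))  # True when A is symmetric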

  • @amitav1978 10 years ago

    This looks OK, but it would help if you could also show how to derive the R² (Pearson) coefficient and the details of when it holds.
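
    On the R² question: the lecture itself does not derive it, but as a hedged sketch on made-up data, R² of a linear least-squares fit is one minus the ratio of residual to total sum of squares, and for a linear fit with an intercept it coincides with the squared Pearson correlation between y and the fitted values.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.normal(size=50)
        y = 2.0 * x + 1.0 + 0.3 * rng.normal(size=50)    # noisy linear data (made up)

        X = np.column_stack([np.ones_like(x), x])        # design matrix with bias column
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares fit
        y_hat = X @ theta

        ss_res = np.sum((y - y_hat) ** 2)                # residual sum of squares
        ss_tot = np.sum((y - y.mean()) ** 2)             # total sum of squares
        r2 = 1.0 - ss_res / ss_tot
        print(r2, np.corrcoef(y, y_hat)[0, 1] ** 2)      # the two agree for a linear fit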

  • @theneuralshift 10 years ago

    Hi Nando,
    Thanks for the lecture. How are the undergraduate and the regular machine learning lectures different?

    • @adityajoshi5 10 years ago

      The undergrad course covers fundamental topics, from basic probability through Bayes' theorem. If you don't have a background in advanced statistics, the undergrad one will be better for you.

    • @theneuralshift 10 years ago

      Thanks, Aditya.

    • @adityajoshi5 10 years ago +1

      I think I was a little - almost three quarters of a year - late in replying ;)

    • @lradhakrishnarao902 8 years ago

      Undergrad courses include concepts like linear algebra. You can skip those if you already know how to differentiate and work with matrices.

  • @ShaunakDe 11 years ago +1

    Thanks for the informative lecture.

  • @riccardopet 8 years ago

    Hello, I am following your lectures to learn a bit about ML. I was trying to differentiate the expression (LaTeX code) \sum_{i=1}^n (y_i - \vec{x}^T_i \theta)^2 that you present on the slide "Optimization Approach". I cannot reproduce the result on the slide, because I get a very similar expression in which \vec{x} is not transposed. Actually, that would agree with \theta being defined as a column vector and x_i being a row vector (as also shown in one of the last slides).
    Thank you very much, your lectures are very well done!

    • @ilijahadzic7468 8 years ago

      I noticed the same. The transposition in the last expression on that slide is probably an error. x_i is a row vector of dimension d, so multiplying it by the column vector \theta yields a scalar; no need to transpose. The transposition in the first (leftmost) expression is correct.

    • @lradhakrishnarao902 8 years ago

      You must be getting the expression y_i^T x_i θ. All you need to do is write it as (x_i θ)^T y_i and your problem is solved. It requires a bit of rearrangement, and that's why that particular exercise was given.
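
    Tying the two forms in this thread together, here is a minimal numpy check on made-up data that the summed objective sum_i (y_i - x_i^T θ)^2 equals the vectorized form (y - Xθ)^T (y - Xθ), with each x_i^T being a row of X:

        import numpy as np

        rng = np.random.default_rng(1)
        n, d = 6, 3
        X = rng.normal(size=(n, d))          # each row x_i^T is one example
        y = rng.normal(size=n)
        theta = rng.normal(size=d)

        summed = sum((y[i] - X[i] @ theta) ** 2 for i in range(n))  # sum_i (y_i - x_i^T theta)^2
        vectorized = (y - X @ theta) @ (y - X @ theta)              # (y - X theta)^T (y - X theta)
        print(np.isclose(summed, vectorized))                       # True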

  • @letranvinhtri 7 years ago

    Does anyone know why at 53:17 the derivative d(θ^T X^T X θ)/dθ = 2 X^T X θ?
    I think it should be (T+1) X^T X θ^T

    • @naveedtahir2463 7 years ago

      Combine the theta vectors and you'll understand

    • @LunnarisLP 7 years ago

      Make sure you understand what θ^T X^T X θ is. It's really just that.
      The T means a transposed vector, not a power.

    • @LunnarisLP 7 years ago

      And check out what X^T X means and what a dot product of a vector with itself creates ;-)
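
    A numeric illustration of this reply chain (matrices made up, numpy assumed): A = X^T X is symmetric, so the general identity d(θ^T A θ)/dθ = (A + A^T)θ gives exactly 2 X^T X θ, and a finite-difference gradient agrees.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(5, 3))
        theta = rng.normal(size=3)
        A = X.T @ X                                 # X^T X is symmetric
        print(np.allclose(A, A.T))                  # True

        f = lambda t: t @ A @ t                     # theta^T X^T X theta
        eps = 1e-6
        grad_fd = np.array([(f(theta + eps * e) - f(theta - eps * e)) / (2 * eps)
                            for e in np.eye(3)])
        print(np.allclose(grad_fd, 2 * A @ theta))  # matches 2 X^T X theta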

  • @chihyanglee4473 7 years ago

    Really appreciate this!!

  • @pradeeshbm5558 7 years ago

    I didn't understand how to draw the slope, or how to find theta.

    • @LunnarisLP 7 years ago

      Pick any theta, then you get your loss function, then you minimize the loss :)
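
    Expanding slightly on this reply: for the squared loss used in the lecture you can set the gradient to zero and solve the normal equations X^T X θ = X^T y directly. The sketch below (made-up data, numpy assumed) does that and also runs a few gradient-descent steps for comparison; both land on essentially the same theta.

        import numpy as np

        rng = np.random.default_rng(3)
        n, d = 100, 2
        X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])  # bias + one feature
        y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)         # made-up targets

        # Closed form: solve the normal equations X^T X theta = X^T y
        theta_closed = np.linalg.solve(X.T @ X, X.T @ y)

        # Gradient descent on the same squared loss, for comparison
        theta = np.zeros(d)
        lr = 0.01
        for _ in range(5000):
            theta -= lr * 2 * X.T @ (X @ theta - y) / n   # gradient of mean squared error
        print(theta_closed, theta)                        # the two estimates agree closely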

  • @nikolamarkovic9906 1 year ago

    49:40, p. 46

  • @KrishnaDN 8 years ago

    Fantastic

  • @ashishjain2901 8 years ago

    Anyone who has ML material related to the insurance industry, please share.

  • @yunfeichen9255 6 years ago

    Calculus 101 anyone???? lolz???? a lot of calculus functions...