Logistic Regression

  • Published 18 Sep 2024

COMMENTS • 23

  • @chaotic_singer13
    @chaotic_singer13 4 months ago +1

    I wonder what kind of people they were whose minds were given the power to understand this;
    whenever we tried to learn it, all we got was struggle *crying emoji *

  • @arundasari773
    @arundasari773 5 years ago +3

    Nice video.
    I am learning data science and I have a doubt about logistic regression: can you explain how to calculate the intercept value?

    • @rishabhghosh155
      @rishabhghosh155 3 years ago

      Use the method of maximum log-likelihood to fit the model; the numerical method actually used to calculate beta (the coefficients of x, including the intercept) is the Newton-Raphson method (see the book The Elements of Statistical Learning).
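
      A minimal Newton-Raphson (equivalently, IRLS) sketch of that fit, assuming a design
      matrix X whose first column is all ones so that beta[0] is the intercept; the names
      X, y and fit_logistic_newton are illustrative, not from the video:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fit_logistic_newton(X, y, n_iter=25, tol=1e-8):
            beta = np.zeros(X.shape[1])        # start from beta = 0
            for _ in range(n_iter):
                p = sigmoid(X @ beta)          # current predicted probabilities
                W = p * (1.0 - p)              # IRLS weights
                grad = X.T @ (y - p)           # gradient of the log-likelihood
                hess = X.T @ (X * W[:, None])  # X'WX = negative Hessian
                step = np.linalg.solve(hess, grad)
                beta += step                   # Newton-Raphson update
                if np.max(np.abs(step)) < tol:
                    break
            return beta                        # beta[0] is the fitted intercept

        # Example call: beta_hat = fit_logistic_newton(np.column_stack([np.ones(len(x)), x]), y)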

  • @tolifeandlearning3919
    @tolifeandlearning3919 2 years ago +1

    Great lecture

  • @adityarazpokhrel7626
    @adityarazpokhrel7626 3 years ago

    Thank you, ma'am. Very useful.
    Greetings from Nepal, T.U.

  • @arundasari773
    @arundasari773 5 years ago +4

    Hi ma'am,
    How do I calculate the intercept value in logistic regression by hand? Is there any formula for the intercept value?
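
    There is no general closed-form formula for the intercept once predictors are included
    (it comes out of the iterative maximum-likelihood fit), but in the intercept-only model
    the MLE does have a simple hand-computable form; here p-hat denotes the sample proportion
    of ones (notation added for this note):

      \hat{\beta}_0 = \log\frac{\hat{p}}{1-\hat{p}}, \qquad \hat{p} = \frac{1}{n}\sum_{i=1}^{n} y_i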

  • @rameshlanka171
    @rameshlanka171 4 years ago

    Thank you for this explanation, madam.

  • @srikanthtammina1900
    @srikanthtammina1900 4 years ago

    Really great explanation, madam.

  • @ravimishra339
    @ravimishra339 6 years ago +2

    Great video. Can you explain why
    the derivative of g(beta^T x) = g(beta^T x) * (1 - g(beta^T x)) * (derivative of beta^T x)?
    Video: 17:29
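
    For reference, that step is the standard sigmoid-derivative identity combined with the
    chain rule, with g(z) = 1/(1 + e^{-z}):

      g'(z) = \frac{e^{-z}}{(1+e^{-z})^2}
            = \frac{1}{1+e^{-z}} \cdot \frac{e^{-z}}{1+e^{-z}}
            = g(z)\,\bigl(1 - g(z)\bigr),
      \qquad
      \frac{\partial}{\partial \beta}\, g(\beta^\top x) = g(\beta^\top x)\bigl(1 - g(\beta^\top x)\bigr)\, x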

  • @manaspeshwe8297
    @manaspeshwe8297 4 years ago +1

    Why are we learning P(Y|X), and what do we mean by "beta parameterizes X"?
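
    For context, logistic regression models the conditional distribution of the label given
    the features, and beta is the parameter vector of that conditional distribution (standard
    formulation, not a quote from the video):

      P(Y = 1 \mid X = x;\ \beta) = \frac{1}{1 + e^{-\beta^\top x}},
      \qquad
      P(Y = 0 \mid X = x;\ \beta) = 1 - P(Y = 1 \mid X = x;\ \beta)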

  • @rajeshreddy3133
    @rajeshreddy3133 4 years ago +2

    For gradient descent, the update has to be beta = beta - alpha * (gradient of the loss with respect to beta).
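
    A minimal sketch of that update, written as gradient descent on the average negative
    log-likelihood; the names X, y, alpha and fit_logistic_gd are illustrative assumptions,
    not from the video:

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def fit_logistic_gd(X, y, alpha=0.1, n_iter=5000):
          beta = np.zeros(X.shape[1])
          n = len(y)
          for _ in range(n_iter):
              p = sigmoid(X @ beta)
              grad = X.T @ (p - y) / n     # gradient of the average negative log-likelihood
              beta = beta - alpha * grad   # the descent update: beta = beta - alpha * grad
          return beta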

  • @sharatpc2371
    @sharatpc2371 3 years ago +11

    Disappointed to see such a poor, non-intuitive explanation of such a beautiful method. I can only imagine how IIT students would have studied. Videos from channels like StatQuest etc. are way, way better.

    • @SuryaBoddu
      @SuryaBoddu 1 year ago +1

      These were my exact thoughts going through this video

  • @arunselvabio
    @arunselvabio 4 years ago

    Thank you

  • @alluprasad5976
    @alluprasad5976 5 years ago

    How is this different from convex optimization?

  • @thecodingdice3107
    @thecodingdice3107 1 year ago

    At z = 0 the value is 0.5 (3:21)
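
    That value follows directly from the sigmoid formula evaluated at zero:

      g(0) = \frac{1}{1 + e^{0}} = \frac{1}{2}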

  • @sivanandapanda9793
    @sivanandapanda9793 5 years ago

    Why do we take the log of this expression at 12:52?

    • @AK-lp3ze
      @AK-lp3ze 5 years ago +4

      It's for handling numerical underflow and for reducing the number of multiplications, which are computationally expensive.

    • @akashsaha3921
      @akashsaha3921 4 years ago

      Log is a monotonic function, so maximizing the log of the likelihood gives the same optimum as maximizing the original likelihood. Also, the log form is computationally convenient when we use SGD on the logistic optimization objective.

    • @sankarse1162
      @sankarse1162 3 years ago +1

      To simplify computation, since the likelihood involves products and powers of numbers; the log turns them into sums.
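
      A small numerical illustration of the underflow point above (the probabilities are
      made up for the example):

        import numpy as np

        probs = np.full(2000, 0.5)              # 2000 hypothetical per-sample likelihoods
        likelihood = np.prod(probs)             # 0.5**2000 underflows to exactly 0.0
        log_likelihood = np.sum(np.log(probs))  # sums of logs stay representable

        print(likelihood)       # 0.0
        print(log_likelihood)   # about -1386.29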