Logistic Regression for Binary Classification

  • Published Sep 6, 2024
  • Gradient Descent:
    • Supervised Learning : ...
  • Maximum Likelihood Estimation:
    • Maximum Likelihood Est...
  • Naive Bayes:
    • Naive Bayes : A simple...
  • Linear Regression:
    • Linear Regression : No...
  • Python Codes:
    towardsdatasci...
    www.stat.cmu.e...
  • Logistic Regression (Handwritten Digit Recognition):
    realpython.com...
  • Logistic Regression (University admission):
    towardsdatasci...
  • Logistic Regression (Consumer Purchase):
    www.geeksforge...
  • Logistic Regression (Multi-Class):
    • Logistic Regression fo...

    #DataScience #MachineLearning #LogisticRegression #LinearRegression #SupervisedLearning #NormalEquation #GradientDescent #MaximumLikelihoodEstimation

COMMENTS • 6

  • @HrichaAcharya 3 years ago +2

    Sir, at 17:35 in the video, how do we arrive at the relation between \theta (the model parameters) and the function h?

    • @EvolutionaryIntelligence 3 years ago +1

      Yes, that's right! :)
      So, in other words, we are explicitly designing our hypothesis function so that it satisfies the stated condition.
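      A minimal sketch of that design, assuming the standard sigmoid hypothesis h_\theta(x) = 1 / (1 + e^{-\theta^T x}) used for logistic regression (the variable names below are illustrative, not taken from the video):

      import numpy as np

      def sigmoid(z):
          # Squashes any real number into the interval (0, 1).
          return 1.0 / (1.0 + np.exp(-z))

      def hypothesis(theta, x):
          # The hypothesis is built directly from the parameters theta:
          # h(x) = sigmoid(theta . x), so its output can be read as P(y = 1 | x; theta).
          return sigmoid(np.dot(theta, x))

      theta = np.array([0.5, -1.2, 2.0])   # example parameters (theta_0, theta_1, theta_2)
      x = np.array([1.0, 0.3, 0.7])        # example input; first entry is the bias term 1
      print(hypothesis(theta, x))          # a probability strictly between 0 and 1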

  • @VijayChakravarty-pz4qv 3 years ago

    Is Theta0 the first entry in the vector Theta, as in Theta = (Theta0, Theta1, ...)?

    • @EvolutionaryIntelligence 3 years ago +1

      That depends on how theta and the input vector are defined. If theta_0 is included in the theta vector, then the input vector needs to be augmented with a unit element (value = 1). It's just a matter of convention, but one has to be careful when implementing the code from scratch.
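      For illustration, a small sketch of the two conventions (hypothetical names, not the exact code from the video):

      import numpy as np

      X = np.array([[0.3, 0.7],
                    [1.5, -0.2]])             # raw inputs, one row per example

      # Convention 1: keep the intercept theta_0 separate from theta.
      theta0 = 0.4
      theta = np.array([0.5, -1.2])
      scores_separate = theta0 + X @ theta

      # Convention 2: fold theta_0 into theta and prepend a unit element to every input.
      X_aug = np.hstack([np.ones((X.shape[0], 1)), X])
      theta_full = np.array([0.4, 0.5, -1.2])  # (theta_0, theta_1, theta_2)
      scores_augmented = X_aug @ theta_full

      # Both conventions produce the same linear scores theta_0 + theta^T x.
      print(np.allclose(scores_separate, scores_augmented))   # True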

  • @ShivanBhatt-sx5ib 3 years ago +1

    Sir, are there any advantages to expressing the probability as log odds, or to the log odds of the hypothesis being a linear function?

    • @EvolutionaryIntelligence 3 years ago

      Certainly, yes! First, linear functions are easier to interpret. Second, linear functions and their gradients are easier to compute.
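      As a concrete check of that point (a sketch, assuming the usual sigmoid hypothesis), taking the log odds of the predicted probability recovers exactly the linear function \theta^\top x:

      h_\theta(x) = \frac{1}{1 + e^{-\theta^\top x}}
      \;\Longrightarrow\;
      \frac{h_\theta(x)}{1 - h_\theta(x)} = e^{\theta^\top x}
      \;\Longrightarrow\;
      \log \frac{h_\theta(x)}{1 - h_\theta(x)} = \theta^\top x

      So each coefficient \theta_j can be read as the change in the log odds per unit change in x_j, which is what makes the linear form easy to interpret.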