Complete Machine Learning In 6 Hours | Krish Naik

  • Published 29 Jul 2024
  • All the materials are available in the link below
    github.com/krishnaik06/The-Grand-Complete-Data-Science-Materials/tree/main
    Visit krishnaik.in for data science blogs
    Time Stamp:
    00:00:00 Introduction
    00:01:25 AI Vs ML vs DL vs Data Science
    00:07:56 Machine Learning and Deep Learning
    00:09:05 Regression And Classification
    00:18:14 Linear Regression Algorithm
    01:07:14 Ridge And Lasso Regression Algorithms
    01:33:08 Logistic Regression Algorithm
    02:13:52 Linear Regression Practical Implementation
    02:28:30 Ridge And Lasso Regression Practical Implementation
    02:54:21 Naive Bayes Algorithms
    03:16:02 KNN Algorithm Intuition
    03:23:47 Decision Tree Classification Algorithms
    03:57:05 Decision Tree Regression Algorithms
    04:02:57 Practical Implementation Of Decision Tree Classifier
    04:09:14 Ensemble Bagging And Boosting Techniques
    04:21:29 Random Forest Classifier And Regressor
    04:29:58 Boosting, Adaboost Machine Learning Algorithms
    04:47:30 K Means Clustering Algorithm
    05:01:54 Hierarchical Clustering Algorithms
    05:11:28 Silhouette Clustering - Validating Clusters
    05:17:46 DBSCAN Clustering Algorithms
    05:25:57 Clustering Practical Examples
    05:35:51 Bias And Variance Algorithms
    05:43:44 XGBoost Classifier Algorithms
    06:00:00 XGBoost Regressor Algorithms
    06:19:04 SVM Machine Learning Algorithm
    ---------------------------------------------------------------------------------------------------------------------
    ►Data Science Projects:
    • Now you Can Crack Any ...
    ►Learn In One Tutorials
    Statistics in 6 hours: • Complete Statistics Fo...
    Machine Learning In 6 Hours: • Complete Machine Learn...
    Deep Learning 5 hours : • Deep Learning Indepth ...
    ►Learn In a Week Playlist
    Statistics: • Live Day 1- Introducti...
    Machine Learning : • Announcing 7 Days Live...
    Deep Learning: • 5 Days Live Deep Learn...
    NLP : • Announcing NLP Live co...
    ►Detailed Playlist:
    Stats For Data Science In Hindi : • Starter Roadmap For Le...
    Machine Learning In English : • Complete Road Map To B...
    Machine Learning In Hindi : • Introduction To Machin...
    Complete Deep Learning: • Why Deep Learning Is B...

COMMENTS • 355

  • @krishnaik06
    @krishnaik06 10 months ago +40

    All the materials are given below github.com/krishnaik06/The-Grand-Complete-Data-Science-Materials/tree/main

  • @SahilRajput-xe6uj
    @SahilRajput-xe6uj 4 months ago +88

    Not recommended for beginners, but if you already have some knowledge and want to revise concepts, this is the best video. Very clear and concise explanation.

    • @ahmedhaigi2900
      @ahmedhaigi2900 4 months ago +9

      Any suggestions for beginners?

    • @SahilRajput-xe6uj
      @SahilRajput-xe6uj 4 months ago

      @@ahmedhaigi2900 Get a machine learning syllabus from somewhere; if you don't have one, email me and I'll send it. Then study everything topic by topic.

    • @MridulChaudhary17
      @MridulChaudhary17 4 months ago

      @@ahmedhaigi2900 Andrew Ng's course on Coursera; you can audit that course.

    • @tricksforsolving2804
      @tricksforsolving2804 3 months ago +1

      Thanks

    • @Dubsmashvideoss4099
      @Dubsmashvideoss4099 2 months ago +5

      Then what do you recommend for beginners?

  • @ytritik-st7gr
    @ytritik-st7gr 7 months ago +73

    When you draw a line on your screen and it automatically becomes straight, that is a great example of an application of the best-fit line (linear regression).

    • @pratyush_779
      @pratyush_779 1 month ago +1

      You are definitely an anime watcher, aren't you!? That observation was something!!!

  • @hedithameur2383
    @hedithameur2383 5 months ago +17

    The world needs more people like you, sir. Your way of teaching is outstanding. Thank you for your time educating the world.

  • @RabiaAbdulQahar
    @RabiaAbdulQahar 2 years ago +104

    I'm amazed by your understanding of every algorithm 👏👏. One day I'll be able to do the same.

  • @vipinsainilol
    @vipinsainilol 2 years ago +6

    Excellent session...everything about ML is summarised in a single video, which provides the complete picture of the elephant!

  • @rajeshdronavalli3636
    @rajeshdronavalli3636 2 years ago +13

    Your explanation is really good and the content is excellent, sir. Thanks for sharing your videos, roadmaps, and end-to-end explanations from an interview point of view.

  • @triptabhattacharjee7004
    @triptabhattacharjee7004 1 year ago +10

    Thoroughly enjoyed the videos. I was able to get over the fear of learning ML as it made my learning process smooth. Thank you ❤️

  • @theadnhsn
    @theadnhsn 10 months ago +5

    Really great content right here; everything from the rudiments to practical application is covered for all the traditional ML algorithms! Just amazing, period.

  • @solomonrajkumar5537
    @solomonrajkumar5537 2 years ago +8

    The way you teach makes it a cakewalk... even a beginner starting from scratch can shine in DS if they watch all your videos... Thank you!!!

  • @moinalisyed4515
    @moinalisyed4515 8 months ago +49

    ALERT!!!!!
    For anyone new here exploring ML and wondering whether this video is good or just another one that will waste your time: believe me, it's the best video on YouTube from an Indian creator. It's totally worth watching and taking notes. From now on I am a big fan of Krish Naik.

    • @sandeepyadav8397
      @sandeepyadav8397 5 months ago

      Is it enough? Please reply

    • @JohnCena-uf8sz
      @JohnCena-uf8sz 4 months ago

      Hoping this is not a paid comment; I'm gonna watch this video.

    • @MoosaMemon.
      @MoosaMemon. 3 months ago

      @@sandeepyadav8397 Yes, trust me it is more than enough.

    • @MoosaMemon.
      @MoosaMemon. 3 months ago +7

      @@JohnCena-uf8sz I assure you that it is not. I've been religiously following his ML and AI related content and I'm just so grateful that I found him. You can learn all of ML and AI by watching his videos, with simple explanations. No need for any other channel.

    • @user-sv7tr4bt8l
      @user-sv7tr4bt8l 2 months ago +3

      Bruh, I'm from Chile and watching this. This is the best teacher and the clearest explanation I could find! All of his courses!!

  • @PriyaMishraEngineer
    @PriyaMishraEngineer 2 years ago +6

    Thank you, Krish and team, a million-dollar course free of cost. Thank you.

  • @navaneethstark5966
    @navaneethstark5966 1 year ago +121

    6hrs ago, I don't know machine learning 💀💥. Classic✨

    • @Krish_krishna3.
      @Krish_krishna3. 6 months ago +1

      Really??

    • @roninbromine1670
      @roninbromine1670 5 months ago +1

      *I didn't knew

    • @vipprmudgal712
      @vipprmudgal712 5 months ago +12

      If it took you only 6 hours, that means you did not understand it fully. If you just watched it without practicing, you could say you still don't know ML.

    • @manaschopra8998
      @manaschopra8998 4 months ago

      @@roninbromine1670 didn't know*

    • @vlogwithtanishaa__
      @vlogwithtanishaa__ 3 months ago

      @@roninbromine1670 I didn't know!!

  • @sagarbadiger5554
    @sagarbadiger5554 8 months ago +2

    Hi Krish, I really appreciate your work. Your delivery is great, easy to understand and remember.
    Thanks for the great content.

  • @syedfayeqjeelani54
    @syedfayeqjeelani54 7 months ago +1

    Krish, thank you for these wonderful lectures! Much love.

  • @kamalch8928
    @kamalch8928 3 days ago +1

    An excellent and valuable 6-hour session on ML algos. Very handy for making the ML learning process smoother for people who are new to it. Thank you, sir!!

  • @tirtharoy4542
    @tirtharoy4542 2 years ago +104

    One of the best ML videos available on the internet. It is crisp yet covers most ML topics. I also like the way Krish explains the theory first and then explains the same using practical examples.

    • @zaafirc369
      @zaafirc369 2 years ago +21

      The video is an aggregation of the live machine learning community sessions that Krish did.
      But he has edited out all the time-wasting discussions and kept only the most important bits where the topics are explained.
      A lot of time and effort has gone into compiling and editing these videos. Kudos to him for that.

    • @anexocelisia9377
      @anexocelisia9377 1 year ago +1

      Brother, can you tell me, does this video cover the whole ML syllabus?

    • @yes.0
      @yes.0 1 year ago

      @@anexocelisia9377 Of course not, there's so much more to ML.

    • @Kavi-learn
      @Kavi-learn 11 months ago

      @@yes.0 is the content in this video enough to crack data science interviews?

  • @shailendrasen602
    @shailendrasen602 2 years ago +6

    That's exactly what I was waiting for. Thank you so much, sir, for sharing that much knowledge. 😍🙏🏼🙏🏼

  • @zeroxia3642
    @zeroxia3642 1 year ago +3

    The perfect ML video on all of YouTube... Your explanation is just amazing 🤩... Thank you so much (I'm only at the beginning 😅... many more to go).

  • @zaafirc369
    @zaafirc369 2 years ago +3

    Great job, Krish!
    Thanks for adding the timestamps 💯

  • @devangijuneja1790
    @devangijuneja1790 6 months ago +1

    Thank you sir for explaining the concepts in such a manner that they seem easy to understand...

  • @101_avikghosh6
    @101_avikghosh6 2 years ago +1

    Much needed video, sir..... there are videos everywhere, but yours.... ❤️🔥🔥

  • @alabibusuyi4492
    @alabibusuyi4492 1 year ago +3

    Your presentation and teaching are excellent!

  • @thusharapadmanabhan9356
    @thusharapadmanabhan9356 4 months ago +1

    You are great!! That's all I need to say after this class.

  • @nsgodgaming
    @nsgodgaming 1 year ago +7

    Hi Krish, thanks for making this. In this video you missed the PCA topic; can you please make a video on that? And some detailed videos on model selection, feature selection & feature engineering.

  • @deepcontractor6968
    @deepcontractor6968 2 years ago +18

    Perfect binge watch for interview preparation. Thanks for uploading this Krish.

    • @amrdel2730
      @amrdel2730 1 year ago +1

      At least you have interviews and workplaces in this field; you are lucky to get to apply your knowledge and earn a living with it // where I live there are none.

    • @rishav144
      @rishav144 1 year ago +1

      @@amrdel2730 Apply to other countries, bro... Simple... if the opportunities are not there where you live, you have to go abroad.

  • @RBSTREAMS
    @RBSTREAMS 1 year ago +7

    Thank you for this amazing lecture, sir. It's currently 2:30 am and I just finished the whole lecture... I must say I gained a lot. Thank you ❤❤❤❤

  • @ishuman
    @ishuman 7 months ago +8

    04:08:29
    The Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. The Gini impurity can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category.
    In the case of the Iris dataset, the root node contains all the instances, and if they are evenly distributed among the three classes (setosa, versicolor, virginica), the Gini impurity will be 0.667. This is because the probability of choosing an instance from any class is 1/3, and the probability of misclassifying it is 2/3 (since there are two other classes). The calculation is as follows:
    Gini Impurity = 1 - (1/3)^2 - (1/3)^2 - (1/3)^2 = 0.667
    This indicates that there is a 66.7% chance of misclassifying a randomly chosen element from the dataset if it was labeled according to the distribution of labels in the entire dataset.
    The code you provided is plotting the decision tree. The Gini impurity for each node is calculated during the creation of the decision tree, not during the plotting. The Gini impurity is shown on the plot for each node.
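
    A quick numerical check of that 0.667 figure, as a minimal Python sketch (illustrative, not taken from the video):

      import numpy as np

      def gini_impurity(labels):
          # Gini = 1 - sum(p_k^2) over the class proportions p_k
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return 1.0 - np.sum(p ** 2)

      # Evenly distributed 3-class node, as at the Iris root: 1 - 3*(1/3)^2
      print(round(gini_impurity([0] * 50 + [1] * 50 + [2] * 50), 3))  # 0.667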

    • @mdserajali538
      @mdserajali538 6 months ago

      How do I get the notes for this video?

  • @drramab7055
    @drramab7055 2 years ago +2

    I really appreciate you, sir. Your explanation is very easy to understand.
    Thank you, sir.

  • @Mani_Ratnam
    @Mani_Ratnam 11 months ago +6

    The explanation of logistic regression was the most awesome I have ever found. Thank you for the session, Krish.

  • @mindofmagnet3373
    @mindofmagnet3373 8 months ago +2

    Please be patient with this course. Definitely an awesome course.

  • @Musk-Singhal
    @Musk-Singhal 3 months ago +1

    2:49:15 -> We set class_weight='balanced' when we want the class weights to be automatically assigned according to the class distribution.
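
    For context, a minimal scikit-learn sketch of that option (the toy data here is illustrative, not from the video):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Imbalanced toy data: 90 samples of class 0, 10 of class 1
      X = np.random.RandomState(0).randn(100, 2)
      y = np.array([0] * 90 + [1] * 10)

      # class_weight='balanced' reweights each class by n_samples / (n_classes * class_count)
      clf = LogisticRegression(class_weight='balanced').fit(X, y)
      print(clf.score(X, y))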

  • @ajaykushwaha4233
    @ajaykushwaha4233 2 years ago +3

    Hi Krish, we have the PyCaret library, which you showed in one of your videos. Is it advisable to use it, or do we need to create individual models, compare them, and finalise one? Kindly advise.

  • @ShobhaSharma-kq9hy
    @ShobhaSharma-kq9hy 7 months ago +1

    Thanks krish. Superb delivery.

  • @mdodamani642
    @mdodamani642 2 years ago +2

    Thank you, Krish, so helpful. As I am from a commerce background I find it tough, but I am understanding the concepts.

  • @pratikjanani743
    @pratikjanani743 4 months ago +1

    Great video, thanks Krish!

  • @shekharawate5898
    @shekharawate5898 2 years ago +7

    For me this is the best video on Krish's channel... The knowledge and its presentation are first class... mastery of both the major and the minute things at its best. May Lord Shiva bless you with happiness, brother. Kudos...

  • @mohomednafras8509
    @mohomednafras8509 2 years ago +6

    Clear information, clarifying every important point and covering all topics. Thanks, Krish; I participated in your live sessions also... 👍

  • @ewnetuabebe5059
    @ewnetuabebe5059 1 year ago +14

    Thank you, Krish, for such an incredible tutorial. Have you made all the PDF files available?

  • @pratiknaikwade95
    @pratiknaikwade95 2 years ago +2

    Very well explained....thank you sir 🥰💐💐

  • @neerajasrinivasan3429
    @neerajasrinivasan3429 1 year ago +15

    Hi Krish. This video is very helpful and lots of fun to watch, and it's amazing that within such a short span of time you've completed sort of a bridge course on ML. Kudos to you 👏🏻! However, I have a doubt that I would like to raise here. You mentioned in your video that Lasso regularisation helps with feature selection. If the theta or slope values are negligible, say close to zero, then squaring them wouldn't increase the values but decrease them further, right? Why can't we do feature selection using Ridge regularisation then? For slopes greater than 1 this would make sense; however, in those cases we would not be able to neglect them, right?

    • @user-yr9dh6so8m
      @user-yr9dh6so8m 7 months ago

      We use the modulus in Lasso to do feature selection; with Lasso we have multiple features and slopes, and the penalty on their sum itself drives the features that are not of use towards zero.

    • @Abhishek-we4xg
      @Abhishek-we4xg 6 months ago

      We do feature selection using Lasso regularisation because it shrinks the coefficients that are not important to our analysis exactly to zero. Maybe that's the reason Lasso is used for feature selection.
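
      A small scikit-learn sketch (illustrative, not from the video) makes this concrete: on the same data, Lasso drives the coefficients of uninformative features exactly to zero, while Ridge only shrinks them:

        import numpy as np
        from sklearn.linear_model import Lasso, Ridge

        rng = np.random.RandomState(0)
        X = rng.randn(200, 5)
        # Only the first two features carry signal
        y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(200)

        print(Ridge(alpha=1.0).fit(X, y).coef_)  # small but nonzero everywhere
        print(Lasso(alpha=0.1).fit(X, y).coef_)  # uninformative coefficients land at 0.0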

  • @khatiwadaAnish
    @khatiwadaAnish 2 months ago

    Thanks for bringing multiple live streams together into the same video 👍👍

  • @ewnetuabebe5059
    @ewnetuabebe5059 1 year ago +2

    The most amazing tutorial I have ever seen. Thank you, Krish. But have you kindly put up all the PDF materials?

  • @sethusaim1250
    @sethusaim1250 2 years ago +6

    Thank you for putting everything together ☺️

  • @garvitsapra1328
    @garvitsapra1328 7 months ago +2

    For underfitting models we have high bias and low variance, as bias means wrong predictions and variance measures how flexibly the model adapts to different datasets.

  • @sohamnaik8264
    @sohamnaik8264 2 years ago +7

    Sir, I just want to say thank you for helping us gain this knowledge and encouraging us to start our data science journey.

  • @kmishy
    @kmishy 2 years ago +1

    Thanks sir, thank you for merging all videos

  • @rosnawatiabdulkudus6435
    @rosnawatiabdulkudus6435 1 year ago +2

    You are the best teacher 🥰. Regards from Malaysia.

  • @padhaidotcom9276
    @padhaidotcom9276 10 months ago +1

    Very nice voice, no confusion while listening.

  • @tarabalam9962
    @tarabalam9962 1 year ago +1

    great explanation of so many algorithms in a short time

  • @HirvaMehta01
    @HirvaMehta01 2 years ago +3

    Thank you so much, Krish, for summarising everything here.

    • @anexocelisia9377
      @anexocelisia9377 1 year ago

      Brother, can you tell me, does this video cover the whole ML syllabus?

  • @chandramoulireddy9636
    @chandramoulireddy9636 2 years ago +1

    Sir, these are very useful algorithms. I am following this. Thanks.

  • @mukunthans3600
    @mukunthans3600 11 months ago +5

    Great explanations, Krish. I just started my data science prep and have been following you for a few days. This will be my second marathon after just finishing your statistics tutorial. It is a fun learning experience watching your lectures. Thanks again for your efforts!
    Please let me know if I am wrong. I have a query about the adjusted R-squared performance metric, explained around 1 hour into the video. According to the formula, when we substitute p=2, the value of adjusted R-squared should be the same as that of R-squared, right? However, you've shown it as lower in your example; or is there a condition that we should only use adjusted R-squared when p is greater than 2?
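
    For reference, the usual formula is adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1), with n samples and p predictors, so it sits below R² for any p ≥ 1 whenever R² < 1; a quick sketch of the arithmetic (toy numbers, not from the video):

      def adjusted_r2(r2, n, p):
          # n = number of samples, p = number of predictors
          return 1 - (1 - r2) * (n - 1) / (n - p - 1)

      print(adjusted_r2(0.90, n=50, p=2))  # ~0.8957, slightly below R^2 = 0.90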

    • @vira5995
      @vira5995 2 months ago

      did you get a job ???

  • @jiyabyju
    @jiyabyju 11 months ago +1

    While you might encounter Gini impurity values higher than 0.5 in the context of the Iris dataset, this is due to the multiclass nature of the problem and the specific calculation used for multiclass Gini impurity. It doesn't imply that the maximum impurity for multiclass problems is 0.5; that limit applies to the binary case.

  • @srikanthnimmala4457
    @srikanthnimmala4457 1 year ago +1

    Thank you so much, sir, for your great explanation.

  • @AnuragGupta-19
    @AnuragGupta-19 1 month ago +1

    Sunday well spent!! ♥️

  • @arvinthsss7959
    @arvinthsss7959 2 years ago +2

    This is an excellent collection, thanks krish for this:)))

    • @Rahul-lg2xn
      @Rahul-lg2xn 1 year ago

      bro do you have the notes for this lecture?

  • @entertainment8067
    @entertainment8067 2 years ago +27

    Sir, please make a separate playlist on reinforcement learning, deep reinforcement learning, and imitation learning. Thanks.

  • @akashyadav5891
    @akashyadav5891 2 years ago +1

    Thank you so much, sir, for your efforts ☺

  • @littlecreative4097
    @littlecreative4097 6 months ago +1

    Thank you for such an informative video, Krish Naik. Can you make a video on standard scaling, feature transformation, and other preprocessing of data before model implementation?

  • @maths_impact
    @maths_impact 1 year ago +2

    Hello sir, can you please make a video on the improved Gini index algorithm for feature selection? I have read many research papers where the improved Gini index algorithm is used, not simply the Gini index. I know you have very good knowledge and can make it easily. My friends and I will wait for the video.

  • @a2yautomobile931
    @a2yautomobile931 1 year ago +2

    wow! very useful content❤❤

  • @bajrangsharma3308
    @bajrangsharma3308 1 year ago +3

    I am watching this video now but could not fetch the Boston housing prices dataset, as the scikit-learn maintainers strongly advise against using it. How can I complete this tutorial now?? @krishnaik sir
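
    One common workaround (an assumption about your setup, not something shown in the video) is to swap in the California housing dataset that scikit-learn still ships; it downloads on first use:

      from sklearn.datasets import fetch_california_housing
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split

      # fetch_california_housing is the maintained regression dataset in scikit-learn
      X, y = fetch_california_housing(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
      print(LinearRegression().fit(X_train, y_train).score(X_test, y_test))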

  • @QasimsDesk
    @QasimsDesk 11 months ago +1

    Wah Wah - Excellent Video

  • @huntingboy4278
    @huntingboy4278 11 months ago +2

    1:13:13. Underfitting gives high bias & low variance, @krishnaik

  • @rupalikhare3330
    @rupalikhare3330 7 months ago +1

    You are a really good teacher.

  • @adipurnomo5683
    @adipurnomo5683 2 years ago +1

    Clearly explained 👍

  • @footballfreez3846
    @footballfreez3846 13 days ago

    Amazing teaching skills
    🤝🤝

  • @yogitgurram825
    @yogitgurram825 10 months ago

    Sir, I hope this reaches you and you reply, please. I have a doubt: in your complete ML playlist, which consists of 153 videos, I have seen some videos where you explain the concepts and the mathematics behind them. I need to ask whether the code implementation of every algorithm is also in that playlist, or what do you suggest for the code implementation part?

  • @TreBlass
    @TreBlass 11 months ago

    I don't understand, at around @2:32:00, why we are not using linear regression directly, as there's no case of overfitting. Let me know if I make sense here, but overfitting would be present if our trained model had shown ~100% training accuracy. We haven't checked that, yet we are using Ridge and Lasso. Please help me understand; I believe we should have used only linear regression for our purpose. Thanks.

  • @gmanjunathbe
    @gmanjunathbe 1 year ago +1

    Hi, I am not able to locate the community code and PowerPoint. Please enable them.

  • @taufiq12334
    @taufiq12334 4 months ago +1

    5:38:30 You're interchanging the definitions of high bias and low bias.

  • @CodeSnap01
    @CodeSnap01 2 years ago

    A thousand-dollar course, just free. Thank you, Krish sir.

  • @TreBlass
    @TreBlass 11 months ago +5

    I have a question around 1:25:40. You mentioned that we use Lasso to drop less important features. The lower the slope, the lower the modulus of that slope (or theta).
    If I consider the mathematical definitions,
    in L2 regularization: cost = J(theta) + lambda * (sum of squares of the thetas)
    and in L1 regularization: cost = J(theta) + lambda * (sum of moduli of the thetas).
    So, if the absolute value of the slope is less than one, its square would be even smaller, and hence we would be able to discard that feature more prominently.
    E.g., (0.5)^2 = 0.25 < |0.5|
    Correct me if my understanding is wrong. Thanks.

    • @TheWaylays
      @TheWaylays 9 months ago

      So, that is partially true, but the logic is flawed a bit. Yes, x^2 makes numbers less than 1 smaller and numbers greater than 1 larger, and that's the whole point. If we want to decide whether a certain theta parameter is suitable to omit (meaning we don't want to select that feature), we want to look at the sole value of that parameter (or its absolute value, in this case), not the square, the reason being that squaring makes small values smaller and large values larger. Discarding a feature based on the square of its parameter would be more prone to mistakes. In other words, it gets increasingly difficult to tell well-suited and badly-suited parameters apart based on the squares of their values rather than the moduli of their values, as the values deviate from 1 more and more. That's how L1 differs from L2 and why it can help with feature selection. We can square the value of the slope, but that doesn't change the slope's value itself, just how we look at it. Otherwise, we could just raise the slope to some astronomical power and discard all slopes that were smaller than 1 (because all of them would end up close to 0 after raising to some huge power), but that does not reflect reality. If we want to look at slope values in L1 to imply some feature selection, we don't want to make those values artificially smaller or larger, because there is no benefit to that; we would basically be losing information. You usually want to apply that transformation to errors, because when it comes to predictions, an error of 4 (2^2) is obviously worse than an error of 2, and an error of 0.1 is not that bad, so making it 0.01 (0.1^2) isn't a big deal. So you focus on minimizing the error of 4 rather than the error of 0.01 (the actual errors being 2 and 0.1). Ridge basically treats slopes the same way it treats errors, and Lasso does not.
      And by the way, that is a big reason behind choosing loss functions of the form (error)^2. We punish large errors and diminish small errors, because at the end, when we look at our cost function and the value it produces (when we sum up our losses), the small errors/losses don't add up to much, but the large ones do, so we want to focus on them a bit more. So (error)^2 is especially good for linear regression, because it serves three purposes. One: squaring makes negative values positive, so the errors don't cancel out but add up. Two: as stated previously, squaring helps disregard small errors and focus on large errors, because that's where the gain in performance is. Three: it's convex, because y_hat (the estimator of y) is linear, and linear functions are both convex and concave, so L(theta) = [y_hat(theta) - y]^2 is also convex (y_hat(theta) simply doesn't impact convexity here, which is not true in general, as in DL or logistic regression). That grants us the ability to use the regular gradient descent algorithm without any issues. This cannot be said for things like logistic regression or many squared-loss setups in deep learning, because the estimate itself is not linear, so the square may not be convex and we might introduce multiple local minima (the loss L(theta, y_i) is basically a composition of the estimation function for the ith observation minus that observation with some other function, like x^2). Therefore, for logistic regression you adjust the loss and cost functions (in reality they come directly from MLE), and for neural networks you can use things like the Adam optimizer and so on; x^2 in those cases is still nice and still leaves us with the benefits from points 1 and 2.
      Hope that clears it up, but if not, I'm sure there's someone better than me at relaying this information somewhere on the internet. Cheers.
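
      In symbols, the two penalised costs this thread is discussing are the standard ones (written here for reference, not quoted from the video):

        $$\text{Cost}_{L2}(\theta) = J(\theta) + \lambda \sum_{j=1}^{p} \theta_j^{2}, \qquad \text{Cost}_{L1}(\theta) = J(\theta) + \lambda \sum_{j=1}^{p} \lvert\theta_j\rvert$$

      The L1 penalty has a constant-magnitude gradient, so it can push a coefficient all the way to zero; the L2 penalty's gradient shrinks along with the coefficient, so it only ever shrinks coefficients toward zero without eliminating them.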

  • @jaimin_sagar_nividous
    @jaimin_sagar_nividous 10 months ago

    Hi Sir, Can you please help me with the content notes? I tried accessing the given link but it is not working.

  • @thefacts3627
    @thefacts3627 1 year ago +1

    Your materials link no longer exists... please, Krish sir, send a new link to study this lecture. Please, Krish sir, this is a humble request.

  • @deepakvdesale
    @deepakvdesale 3 months ago

    Krish, you have been like a brother to me when it came to understanding machine learning. Hope to meet you some day.

  • @manasranjansethi4372
    @manasranjansethi4372 1 year ago

    Is theta-zero considered a slope? You said that when theta-zero is zero, the line passes through the origin; I think it is the intercept.

  • @tanvirhossain5475
    @tanvirhossain5475 2 years ago +1

    No option to give more likes... love from Bangladesh.

  • @sum89763
    @sum89763 4 months ago

    Sir, is a Windows laptop or a MacBook Air M2 better for AI or ML? Which has easier solutions available, like getting help, and is future-proof?????

  • @user-op6gb4bw3h
    @user-op6gb4bw3h 1 year ago +1

    Definitely a good and great refresher for anyone who has exposure to ML, stats, and math (calculus and algebra), but not for absolute beginners... If you want to learn ML without prior knowledge, Andrew Ng's course on Coursera is the best; you can audit the course for free over there.

    • @ritamsantra2372
      @ritamsantra2372 1 year ago

      I've completed the stats part. Should I watch this, or should I learn the extra math parts first and then start here? I mean the algebra and calculus parts.

    • @Kavi-learn
      @Kavi-learn 11 months ago

      do you know any free resources to learn machine learning?

    • @Kavi-learn
      @Kavi-learn 11 months ago

      @@ritamsantra2372 Did you watch this video, or how did you go about it? And where did you learn the stats part?

  • @nareshdhiman2640
    @nareshdhiman2640 2 years ago

    Hi Krish, where is the GitHub link? I need to practice the clustering practical examples.

  • @codewithemmaprime
    @codewithemmaprime 10 months ago

    Best Machine learning content out there.😊😊😊

    • @DeepuDeepu-wz4fe
      @DeepuDeepu-wz4fe 10 months ago

      Can this course help complete beginners?? Pls reply 😊

  • @harshitalalwani8127
    @harshitalalwani8127 24 days ago

    AMAZING CONTENT!!!

  • @vinothkumar7531
    @vinothkumar7531 7 months ago

    Sir, one clarification. In the practical implementation of Ridge and Lasso regression [02:40:00], while applying the r2 score you entered the parameters as y_pred, y_test, but the sklearn documentation shows the parameters as y_true, y_pred in that order. I tried both, and they give different r2 scores. Kindly clarify whether either order is correct or only one of them. Help me with this, sir. Waiting for your reply.
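
    For what it's worth, r2_score is not symmetric in its arguments, so the order genuinely matters; a tiny sketch with toy numbers (not from the video):

      from sklearn.metrics import r2_score

      y_test = [3.0, 5.0, 7.0, 9.0]
      y_pred = [2.8, 5.3, 6.9, 9.4]

      # The documented order is r2_score(y_true, y_pred); swapping the
      # arguments normalises by the wrong variance and changes the result
      print(r2_score(y_test, y_pred))
      print(r2_score(y_pred, y_test))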

  • @muhammadzakiahmad8069
    @muhammadzakiahmad8069 1 year ago +1

    Why is random forest not affected by outliers?
    Answer found on Google:
    The intuitive answer is that a decision tree works on splits, and splits aren't sensitive to outliers: a split only has to fall anywhere between two groups of points to separate them.
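
    A tiny sketch of that intuition (toy data, an illustration rather than a proof): one extreme target outlier drags a linear fit, but barely moves a tree's split-based prediction for a middle point:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.tree import DecisionTreeRegressor

      X = np.arange(10, dtype=float).reshape(-1, 1)
      y = X.ravel().copy()
      y[-1] = 1000.0  # one extreme outlier

      lin = LinearRegression().fit(X, y)
      tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
      # The tree's splits isolate the outlier in its own leaf, so the prediction
      # at x=4 stays near the local values; the linear fit is pulled upward
      print(lin.predict([[4.0]]), tree.predict([[4.0]]))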

  • @ashutoshanand5484
    @ashutoshanand5484 11 months ago

    Sir, the materials link for this course has expired. Can you please provide a fresh link to the annotated notes of what you explained in this video?

  • @subhajitpatra3665
    @subhajitpatra3665 2 years ago +1

    Sir, I have a doubt about underfitting: should its variance be low or high?

  • @shashikumarkulkarni9102
    @shashikumarkulkarni9102 7 days ago

    In the topic of bias and variance (5:38:38), if the model performs well on the training dataset, then it has low bias, right?

  • @niladribiswas5644
    @niladribiswas5644 1 year ago

    Just a question, if any of you could reply: for Naïve Bayes, do the features and target value have to be categorical?

  • @akshayattri
    @akshayattri 6 months ago

    Linear regression: if you want to keep all values positive, why not use the modulus instead of the square? And dividing by 2 doesn't affect the cost function?? Any reasons?? These 2 points are hard to digest.
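
    Two standard answers exist (background knowledge, not something stated in this thread): the square is differentiable everywhere, unlike the modulus at zero, and the 1/2 is cosmetic because it cancels against the 2 that appears when differentiating; scaling a cost by a positive constant does not move its minimiser:

      $$J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x_i) - y_i\bigr)^{2}
      \quad\Longrightarrow\quad
      \frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x_i) - y_i\bigr)\,x_{ij}$$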

  • @genai142Kumar
    @genai142Kumar 1 year ago +1

    Thank you Krish, this is very helpful. I'm a beginner; is it possible to get the notes for the video?

  • @alibinfaizan8718
    @alibinfaizan8718 10 months ago

    Sir, the number of samples can never go below zero. Then why do we write 'S(v)' and 'S' in modulus, when we try to find the Information Gain?

  • @Sanyat100
    @Sanyat100 2 years ago +1

    u r da best !!!!!

  • @eswarchandvuppala621
    @eswarchandvuppala621 2 years ago

    Thanks a lot for these complete ML lectures

    • @anexocelisia9377
      @anexocelisia9377 1 year ago

      Brother, can you tell me, does this video cover the whole ML syllabus?

  • @pragneshsolanki8243
    @pragneshsolanki8243 2 years ago +2

    Please upload in-depth NLP tutorials in 6-7 hours.

  • @SantoshKumar-hr3jz
    @SantoshKumar-hr3jz 2 years ago

    Yes The Best Video on ML

  • @gauravkumar-rq4oe
    @gauravkumar-rq4oe 1 year ago +2

    Unable to find the class materials in the community section.

  • @rahulpalivela1449
    @rahulpalivela1449 2 years ago

    Sir, but wasn't it underfitting in the case of linear and ridge regression? When you used R-squared, the accuracy on the testing data was high compared to the training data.

  • @sudhirmallick1716
    @sudhirmallick1716 1 year ago

    Thank you, sir, for this video.