7:15 AI vs ML vs DL vs DS
17:00 Machine Learning
17:49 Types of ML
19:00 Supervised ML
21:50 Regression intro (brief)
24:04 Classification (brief)
25:35 Unsupervised ML
26:08 Clustering (brief)
28:49 Dimensionality Reduction
33:10 First Algo - Linear Regression
Thank you Krish.. You are the Krish without the cape and mask in the edtech sector.. Thank you
R-squared and adjusted R-squared too
00:14 Introduction to machine learning algorithms for data science
02:48 Introduction to machine learning algorithms and interview preparation
07:32 Artificial intelligence applications can perform tasks without human intervention.
09:57 Applications of AI in providing personalized recommendations and services
14:24 Deep learning aims to mimic the human brain and has enabled solving complex use cases.
16:26 Machine learning involves supervised and unsupervised algorithms.
20:27 Understanding independent and dependent features in supervised machine learning
22:28 Dependent feature determines problem type
26:33 Clustering helps in grouping similar data for customer segmentation
28:27 Introduction to Machine Learning Algorithms and Data Science
32:28 Introduction to linear regression for modeling data
34:56 Introduction to linear regression and hypothesis in machine learning
38:49 Understanding the concepts of theta 0 and theta 1 in machine learning.
40:53 Understanding the concept of slope in linear regression
44:47 Cost function for machine learning algorithms
46:37 Cost function helps in finding the best fit line by minimizing the distance.
50:53 Minimizing the squared error function with parameters theta 0 and theta 1
53:00 Understanding the hypothesis with intercept at origin
56:47 Theta 1 values impact cost function
58:38 Calculation of cost function using gradient descent
1:02:57 Explaining the concept of gradient descent in machine learning.
1:04:59 Convergence algorithm for reaching global minima in gradient descent.
1:09:02 Positive slope indicates weight update in convergence algorithm
1:10:53 Learning rate is crucial for reaching the global minima.
1:14:51 Cost function prevents local minima in linear regression
1:16:50 Understanding the derivative of theta j
1:21:22 Convergence algorithm for gradient descent
1:24:14 Understanding R-squared and adjusted R-squared for model performance evaluation
1:29:26 Explanation on why the mean of a particular value distance will be higher
1:31:36 Adding uncorrelated features can increase R-squared value.
1:35:41 Number of samples, predictors, and their impact on R-squared and adjusted R-squared.
1:37:33 Adjusted R-squared value decreases as the number of predictors increases
1:42:34 Discussion on research papers and future topics in the course.
1:44:39 Encouragement to share and join community session
For us, you're our Andrew Ng
Yea🎉
Founder of deep learning 😊
True
Who is Andrew Ng?
1:03:19 Assignment 1: for theta1 = 2, J(theta1) = 2.33
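For anyone verifying this assignment answer, here is a minimal check; the dataset is an assumption on my part (the classic toy set (1,1), (2,2), (3,3) usually used with this example), with hypothesis h(x) = theta1 * x and the 1/(2m) cost:

```python
# Hedged check of the assignment answer above. The dataset is the classic
# toy set (1,1), (2,2), (3,3) -- swap in the session's actual points if
# they differ.
x = [1, 2, 3]
y = [1, 2, 3]
m = len(x)
theta1 = 2
# J(theta1) = 1/(2m) * sum of squared errors for h(x) = theta1 * x
J = sum((theta1 * xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)
print(J)  # 14/6 = 2.333..., matching the 2.33 above
```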
After three years I feel like a real school student again; I miss those days 😃 I have a pen in my hand, my notebook, and a teacher like you, and you teach everything the way we used to note and learn in school. Thank you so much for giving us this environment. 🤞
I am pursuing my data science degree, but I never learned the way I learned from you 🖤
Very nice recap of all the stuff in one place. Thank you very, very much 😊🙏
Really amazing tutorial.
On YouTube, sufficient content is available for entry level in data science, and there are some good teachers like Krish.
But we have to maintain consistency and always be excited to learn new things, skills, and anything.
And enjoy while learning, because you will always grow in your professional career if you adopt learning as a hobby.
You can also check out Campus-X; he has very good in-depth lectures.
Thanks Krish, your blue book inspired me to start writing notes in my book.
What's the blue book?
7:50
Superb sir, I thoroughly enjoyed learning machine learning. Your energy and practical way of teaching create more eagerness towards learning data science.
Sir, a lot of respect to you, Excellent session :)
KRISH is a very good teacher
So easy to understand because of your teaching
A lot of thanks from the bottom of my heart 💓💓💓💖💖
It's an awesome start, hats off @Krish Sir.
Excellent 1st session. Excited to do the 2nd part tomorrow. I am trying to revise my ML knowledge to get back to competing in Kaggle.
Hi Joseph, how exactly are you using Kaggle? Can you tell me? I was thinking of participating in a hackathon, but I have never used Kaggle before.
All this knowledge and insight for free, thanks Krish.
Krish,You are amazing!
Super, thank you Krish for the great content and explanation
Bring 7-day sessions on DL, Maths for DS, and NLP, one on each of these topics.
Man, great work you have been doing. It requires a lot of motivation to teach online; a lot of work needs to be done behind the screen to reach students in online mode, and in-depth understanding of the concepts is required. Keep going, I am a fan of your writing and teaching.
Best content I have come across so far. Thank you so much.
A lot of respect, sir 🙏 Can't thank you enough for these live sessions.
Lot of respect sir
Superb explanation. Thank you so much.
46:34 .. the cost function is wrong, Sir. The right one is mentioned here:
Cost Function: J=\frac{1}{n}\sum_{i=1}^{n}\left(h_{\theta}(x^{(i)})-y^{(i)}\right)^{2}
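If it helps settle the 1/n vs 1/(2m) debate above: any positive scaling of the squared-error sum gives the same best-fit theta, only the numeric cost value changes. A minimal sketch with assumed toy data:

```python
# Sketch (toy data assumed) showing that scaling the squared-error cost
# by 1/m or 1/(2m) does not change which theta1 minimizes it.
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # hypothetical inputs
y = np.array([1.0, 2.0, 3.0])   # hypothetical targets
m = len(x)

def cost(theta1, scale):
    errors = theta1 * x - y      # h_theta(x) = theta1 * x, no intercept
    return scale * np.sum(errors ** 2)

for theta1 in [0.5, 1.0, 1.5]:
    print(theta1, cost(theta1, 1 / m), cost(theta1, 1 / (2 * m)))
# Both columns hit their smallest value at the same theta1 (here 1.0);
# the 1/2 is just a convention that cancels the 2 from differentiation.
```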
Amazing explanation, I loved the way you taught . ❤
Thank you for wonderful session
Wow, respect for the way you taught here. Thank you.
Very, very good explanation. Thank you.
Thank you sir for this amazing session
It is a really great session. It took me around 4 hours to complete.
Creating my Orange book XD
Thank you so much sir for making these useful videos. Awesome explanation.
Happy Teacher's Day Sir🙏
Thank you sir for such amazing community classes
🙏 thank you, sir
Fantastic, Krish bhai...
As always best lecture
Very good information
Such excellent content
Great lecture 😊😊
Nicely explained, a fan of yours!
Great efforts (🙏)..... but the presentation is lacking, and the explanation is lengthy & confusing.
This virtual notebook is amazing 💪🏻🔥 People like me will start writing notes now 😂
Name of this notebook?
1:33:33 that moment of realisation when life starts to make sense.
Awesome , Thank you Krish!
Unable to find the notes in the community session. If anyone has them, please share a Google Drive link. 🙏🙏🙏🙏🙏🙏
Even I am looking for the same. I emailed them last week but didn't get any reply yet. Please let me know if you get the files.
Sir, I don't know why people use optimisation algorithms in regression to find the coefficients when it is already proven that coefficient = inv(X'X) * (X'y), which follows from the maximum likelihood method.
Just plug in the values of X and y and you will get the coefficient estimates; there is no need to do iterative optimization.
The matrix inverse is not always defined, and it is also a very costly operation.
This method normally works when the dimension of X is within a certain limit. Say, if the size of X is 10^4 or greater, then finding the inverse becomes expensive and the machine fails to do so. But if we use another algorithm like gradient descent, then we can find the optimal coefficients even for large X.
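A minimal sketch of the trade-off discussed in this thread, on synthetic data (all names and numbers here are illustrative, not from the video):

```python
# Normal equation vs. batch gradient descent on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
m = 100
X = np.column_stack([np.ones(m), rng.uniform(0, 10, m)])  # bias + 1 feature
y = X @ np.array([2.0, 3.0]) + rng.normal(0, 0.5, m)      # true theta = [2, 3]

# Normal equation: theta = inv(X'X) X'y. One shot, but the inverse is
# O(d^3) in the number of features and may not exist if X'X is singular.
theta_ne = np.linalg.inv(X.T @ X) @ X.T @ y

# Batch gradient descent: iterative, scales to feature counts where the
# inverse is impractical.
theta_gd = np.zeros(2)
alpha = 0.01                                  # learning rate
for _ in range(5000):
    grad = (X.T @ (X @ theta_gd - y)) / m     # gradient of the 1/(2m) cost
    theta_gd -= alpha * grad

print(theta_ne, theta_gd)  # both should land close to [2, 3]
```

(In practice np.linalg.solve or np.linalg.pinv is preferred over an explicit inverse, but the point about cost and scaling stands.)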
Very nice explanation Krish. Enjoyed this session.
KNN should come under Supervised ML
Great job Krish, I really appreciate it
A lot of respect to you sir, thank you sir
I like this information
Love you Krish, you just nailed it
You are a very good teacher because you make me understand the ML algorithms 😁🙏
You are a great teacher....
Krish sir, at 1:13:44 why are we taking the slope at the local minimum? Obviously it will be 0 even at the global minimum.
Shouldn't we check for that particular green point?
Can anyone clarify this doubt????
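On this doubt: the slope is indeed 0 at any stationary point, but for plain linear regression with a squared-error cost, J(theta1) is convex (a single bowl), so the only stationary point is the global minimum; there is no separate local minimum to worry about. A minimal sketch with assumed toy data:

```python
# Sketch: J(theta1) for linear regression is a parabola, so it has
# exactly one minimum. Toy data assumed, not the session's.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
m = len(x)

for t in np.linspace(-1.0, 3.0, 9):
    J = np.sum((t * x - y) ** 2) / (2 * m)
    print(f"theta1={t:+.1f}  J={J:.3f}")   # values fall then rise: one bowl
```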
Why 1/2m? The derivative dJ/dθ1 measures the small change in J(θ1) with respect to θ1. We divide the sum of squared errors (actual minus predicted) by m to average over the data points, and the extra 2 in the denominator is there so that the 2 produced by differentiating the square cancels out. This scaling does not move the minimum, so minimizing the function still pulls the best-fit line as close as possible to all the data points; this is the cost function.
Sir, please do a community session on deep learning in the future too.
Please do the coding of each algorithm
Very nice session Sir
1:00:00 During the calculation of the cost function you used 1/2m, which is a normal sum calculation.
But you said previously that we use 1/2m to simplify the differentiation, so isn't the 1/2 unnecessary here? Just 1/m would be sufficient.
Also, in standard books the cost is calculated using 1/(total data points), or in your case 1/m.
Please clear this up.
The 2 in 1/2m is there to cancel out the 2 that comes from differentiating the square.
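To make that cancellation explicit (standard form of the derivation, written here for reference):

J(\theta)=\frac{1}{2m}\sum_{i=1}^{m}\left(h_{\theta}(x^{(i)})-y^{(i)}\right)^{2}
\quad\Rightarrow\quad
\frac{\partial J}{\partial \theta_{j}}=\frac{1}{m}\sum_{i=1}^{m}\left(h_{\theta}(x^{(i)})-y^{(i)}\right)x_{j}^{(i)}

With 1/m instead of 1/2m, the gradient would carry an extra factor of 2, which only rescales the learning rate; the minimizing theta is the same either way.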
Excellent session
Thanks Krish
Very good explanation Krish ❤️
Thanks @krish
It was very good
Please make more and more such video lecture series in the future, because your lectures, unlike others, contain problems based on very recent real-world scenarios and are very practical in nature, and by watching them one can become job ready. The biggest thing is that you don't give bookish examples; based on your real-world experience you impart practical & handy knowledge. Please keep working in the same way.
Best lessons... love it
Sir, please spend 1:30 on the theoretical part and the remaining 15 minutes on implementation.
Thank you...
Great course.. Krish, can you open this for downloading??
Thank you👏
After the video, please comment the syllabus of the day, or at least the names of the algorithms covered. Thank you!
Check the thumbnail😀😀😀
Great explanation
If this could come with the practical in a Jupyter notebook, that would be amazing too.
But still, thank you sir, these efforts are amazing.
Sir, will we be doing the coding portion of the algorithms as well, in Python, within these 7 days?
Yes, I will be doing an implementation part, don't worry.
@@sudhanshuedu Wow, we're so excited.... Some of us can't afford OneNeuron, but you all are helping us, and I'm so blessed that I found iNeuron.
I don't have a system yet, so I'm just practicing Python on my Android phone.... Hope by April I get my own PC or laptop, and you guys will launch some FSDS job-guarantee program around summer 😭😭❤️❤️❤️❤️
Hats off to you guys 👍🌹
@@sudhanshuedu Where will you upload the coding?
@@sudhanshuedu Sudhanshu sir, thank you for the OneNeuron platform.. I started the Data Science Masters. Again, thank you for the amazing concept of affordable education.
@sudhanshu kumar thank you sir
Please also cover other important topics like LDA, PCA, t-SNE, etc. A lot of good companies ask these questions and test the in-depth understanding of the candidates.
@Krish Naik.
You are amazing ...
Krish is the Guru
#KingKrish
My guru🙏
At 1:23:13, when he says "Oh my god" after writing the derivation 🙈
Sir, your teaching is simply awesome.
You make complex concepts easy and nice.
Your hard work on that Blue Book is worthwhile for us also ❤🔥🔥👏🏻🙏
Really helpful Sir.
Super sir.
Are notes available for this??
Very nice
For theta1 = 2, the output should be ~2.6
Sir, thank you for this great job. Please sir, I have a question: are we always to assume theta one while finding the gradient descent curve? Thank you so much. I love the way you break everything down for us. You are appreciated so much.
Good evening sir
Thanks krish sir
Hi sir, thanks for sharing your amazing knowledge. I have a doubt at 43:00: which difference do we take to get the perfect line? Is it the difference between predicted points, or between a predicted point and the existing dataset values?
Sir, please use y = mx + c in linear regression, which is clearer; the theta notation is quite confusing.
Mr. Krish Naik. It was wonderful listening to the theory. Really enjoyed it. I come from a mechanical background and am a beginner in AI.
I have one doubt, sir. In finding R-squared you said "y cap" is the difference between the "actual" and the "predicted" point. Then "y cap" is nothing but "h_theta(x) - y", right? But you said y cap is nothing but h_theta(x). It is confusing me. Please clear this up tomorrow.
@@ArsalanKhan-qyy Sorry I am not clear
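For anyone still stuck on this thread: y cap (ŷ) is the prediction h_theta(x) itself; the residual is y − ŷ, and R-squared uses both. A minimal sketch with made-up numbers:

```python
# Sketch separating the two quantities: y_hat is the prediction itself;
# the residual is y - y_hat. Numbers below are made up for illustration.
import numpy as np

y     = np.array([3.0, 5.0, 7.0, 9.0])   # actual values
y_hat = np.array([2.8, 5.1, 7.3, 8.9])   # predictions h_theta(x)

ss_res = np.sum((y - y_hat) ** 2)         # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)                                 # close to 1 => good fit
```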
Love you sir
Can you create a video on SHAP values and their use in interpreting machine learning models?
finished watching
I can't find the resources/notes in the community session.
Hello sir,
I am already an enrolled student of OneNeuron and have also started watching your ML series on YouTube. I have one query regarding gradient descent: though I have understood the maths behind it, I really want to know whether I need to understand its implementation in Python in depth too, or whether just understanding the algorithm would suffice in interviews and on the job?
Great session Krish
God bless you always
Excellent session sir!! But where will I get the notes? They are not there on Community Live.
They provided them
@@nileshsen9290 Can you please tell me where I can find the notes?
@@nileshsen9290 Hi sir.. I could not find the notes in Community Live... Can you please tell me where I can get them? Thanks
@@nileshsen9290 Where is it, bro? Can you tell?