Live Day 1 - Introduction To Machine Learning Algorithms For Data Science

  • Published 18 Dec 2024

COMMENTS • 278

  • @walidhossain3655
    @walidhossain3655 2 years ago +174

    7:15 AI vs ML vs DL vs DS
    17:00 Machine Learning
    17:49 Types of ML
    19:00 Supervised ML
    21:50 Regression intro (brief)
    24:04 Classification (brief)
    25:35 Unsupervised ML
    26:08 Clustering (brief)
    28:49 Dimensionality Reduction
    33:10 First Algo - Linear Regression

  • @naveedarshad6209
    @naveedarshad6209 9 months ago +2

    00:14 Introduction to machine learning algorithms for data science
    02:48 Introduction to machine learning algorithms and interview preparation
    07:32 Artificial intelligence applications can perform tasks without human intervention.
    09:57 Applications of AI in providing personalized recommendations and services
    14:24 Deep learning aims to mimic the human brain and has enabled solving complex use cases.
    16:26 Machine learning involves supervised and unsupervised algorithms.
    20:27 Understanding independent and dependent features in supervised machine learning
    22:28 Dependent feature determines problem type
    26:33 Clustering helps in grouping similar data for customer segmentation
    28:27 Introduction to Machine Learning Algorithms and Data Science
    32:28 Introduction to linear regression for modeling data
    34:56 Introduction to linear regression and hypothesis in machine learning
    38:49 Understanding the concepts of theta 0 and theta 1 in machine learning.
    40:53 Understanding the concept of slope in linear regression
    44:47 Cost function for machine learning algorithms
    46:37 Cost function helps in finding the best fit line by minimizing the distance.
    50:53 Minimizing the squared error function with parameters theta 0 and theta 1
    53:00 Understanding the hypothesis with intercept at origin
    56:47 Theta 1 values impact cost function
    58:38 Calculation of cost function using gradient descent
    1:02:57 Explaining the concept of gradient descent in machine learning.
    1:04:59 Convergence algorithm for reaching global minima in gradient descent.
    1:09:02 Positive slope indicates weight update in convergence algorithm
    1:10:53 Learning rate is crucial for reaching the global minima.
    1:14:51 Cost function prevents local minima in linear regression
    1:16:50 Understanding the derivative of theta j
    1:21:22 Convergence algorithm for gradient descent
    1:24:14 Understanding r square and adjusted r square for model performance evaluation
    1:29:26 Explanation on why the mean of a particular value distance will be higher
    1:31:36 Adding uncorrelated features can increase R-squared value.
    1:35:41 Number of samples, predictors, and their impact on R-squared and adjusted R-squared.
    1:37:33 R-squared value decreases as the number of predictors increases
    1:42:34 Discussion on research papers and future topics in the course.
    1:44:39 Encouragement to share and join community session

  • @kaustubhkapare807
    @kaustubhkapare807 2 years ago +75

    For us, you're our Andrew Ng

  • @currentcse1556
    @currentcse1556 1 month ago

    1:03:19 Assignment 1: for theta1 = 2, J(theta1) = 2.33
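
    A quick numerical check of this value, as a minimal sketch. It assumes the lecture's toy dataset is the usual three points (1,1), (2,2), (3,3) and an origin-passing hypothesis h(x) = theta1 * x; both are assumptions, not confirmed from the video:

        # assumed toy dataset (not confirmed from the video)
        x = [1, 2, 3]
        y = [1, 2, 3]
        theta1 = 2
        m = len(x)
        # J(theta1) = (1/(2m)) * sum_i (theta1 * x_i - y_i)^2
        J = sum((theta1 * xi - yi) ** 2 for xi, yi in zip(x, y)) / (2 * m)
        print(J)  # 14/6 ≈ 2.33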

  • @Neuroage24
    @Neuroage24 2 years ago +7

    After three years I feel like a real school student again, and I miss those days 😃 I have a pen in my hand, my notebook, and a teacher like you, and I take notes and learn just like we did in school. Thank you so much for giving us this environment. 🤞

    • @Neuroage24
      @Neuroage24 2 years ago

      I'm pursuing my data science degree, but I never learned the way I learn from you 🖤

  • @prabhamelady1680
    @prabhamelady1680 2 years ago +3

    Very nice recap of all the stuff in one place. Thank you very, very much 😊🙏

  • @radhekrashna2148
    @radhekrashna2148 1 year ago +6

    Really amazing tutorial.
    On YouTube, plenty of content is available for entry-level data science,
    and there are some good teachers like Krish here.
    But we have to maintain consistency and always be excited to learn new things and skills,
    and enjoy learning, because you will always grow in your professional career if you adopt learning as a hobby.

    • @rubayetalam8759
      @rubayetalam8759 1 year ago

      You can also check out Campus-X; he has very good in-depth lectures.

  • @data_analytics_study
    @data_analytics_study 2 years ago +8

    Thanks Krish, your blue book inspired me to start writing notes in my own book.

  • @nithyapalanisamy4904
    @nithyapalanisamy4904 2 years ago +5

    Superb sir, I thoroughly enjoyed learning machine learning. Your energy and practical way of teaching create more eagerness towards learning data science.

  • @sasumanavidyasagar373
    @sasumanavidyasagar373 2 years ago +7

    Sir, a lot of respect to you. Excellent session :)

  • @kennyfifo9643
    @kennyfifo9643 2 years ago +1

    KRISH is a very good teacher

  • @ankitakadam8991
    @ankitakadam8991 2 years ago +1

    so easy to understand because of your teaching

  • @yogeshsapkal2593
    @yogeshsapkal2593 2 years ago +1

    A lot of thanks from the bottom of my 💓💓💓💖💖

  • @naveedarshad6209
    @naveedarshad6209 9 months ago

    It's an awesome start, hats off @krish Sir.

  • @benvelloor
    @benvelloor 2 years ago +3

    Excellent 1st session. Excited to do the 2nd part tomorrow. I am trying to revise my ML knowledge to get back to competing on Kaggle.

    • @harshraj8391
      @harshraj8391 2 years ago

      Hi Joseph, how exactly are you using Kaggle? Can you tell me? I was thinking of participating in a hackathon, but I have never used Kaggle before.

  • @kyeiwilliam8463
    @kyeiwilliam8463 9 months ago

    All this knowledge and insight for free, thanks Krish.

  • @rwagataraka
    @rwagataraka 2 years ago +2

    Krish, you are amazing!

  • @balakrishnay07
    @balakrishnay07 10 months ago

    Super, thank you Krish for the great content and explanation

  • @aasimgandhi69
    @aasimgandhi69 2 years ago +7

    Bring 7-day sessions on DL, Maths for DS, and NLP, one for each of these topics.

  • @saanviranjith2564
    @saanviranjith2564 11 months ago

    Great work you have been doing. It requires a lot of motivation to teach online; a lot of work needs to be done behind the screen to reach students in online mode, and an in-depth understanding of the concepts is required. Keep going, I am a fan of your writing
    and teaching.

  • @meenakshisharma7936
    @meenakshisharma7936 2 years ago

    Best content I have come across so far. Thank you so much.

  • @preetgarach8451
    @preetgarach8451 2 years ago +3

    A lot of respect, sir 🙏 Can't thank you enough for these live sessions.

  • @gayanath009
    @gayanath009 11 months ago

    Superb explanation. Thank you so much.

  • @SidIndian082
    @SidIndian082 2 years ago

    46:34 .. the cost function is wrong, sir. The right one is mentioned here:
    Cost Function: J(\theta)=\frac{1}{n}\sum_{i=1}^{n}\bigl(h_{\theta}(x^{(i)})-y^{(i)}\bigr)^{2}
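
    A note on conventions, since both forms appear in this thread: the commenter's 1/n version and the lecture's 1/(2m) version differ only by a positive constant factor, so they are minimized by the same theta. A LaTeX sketch (taking m = n = number of samples):

        \[
        \frac{1}{n}\sum_{i=1}^{n}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)^2
        \;=\; 2\cdot\frac{1}{2n}\sum_{i=1}^{n}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)^2,
        \]

    and scaling a function by a positive constant does not move its argmin.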

  • @birappagoudanavar326
    @birappagoudanavar326 2 years ago +2

    Amazing explanation, I loved the way you taught. ❤

  • @harshitsamdhani1708
    @harshitsamdhani1708 1 year ago

    Thank you for the wonderful session

  • @diwaker_yadav
    @diwaker_yadav 2 years ago

    Wow, respect for the way you taught here. Thank you.

  • @sanabraham5690
    @sanabraham5690 2 years ago

    Very, very good explanation. Thank you.

  • @gh504
    @gh504 2 years ago +2

    Thank you sir for this amazing session

  • @TheRandomPerson49
    @TheRandomPerson49 4 months ago

    It was a really great session. It took me around 4 hours to complete.
    Creating my own orange book XD

  • @_sama_ashwitha_
    @_sama_ashwitha_ 1 year ago

    Thank you so much sir for making these useful videos. Awesome explanation.

  • @Snesvlog
    @Snesvlog 2 years ago

    Happy Teacher's Day Sir🙏

  • @pratikjadhav5244
    @pratikjadhav5244 2 years ago

    Thank you sir for such amazing community classes

  • @ajay2347
    @ajay2347 2 years ago +1

    🙏 Thank you, sir

  • @gauravpardeshi6056
    @gauravpardeshi6056 2 years ago

    Fantastic, Krish bhai...

  • @gajendra1987
    @gajendra1987 6 months ago

    As always, the best lecture

  • @maths-tricks801
    @maths-tricks801 1 year ago

    Very good information

  • @MatheusHenrique-gh9pk
    @MatheusHenrique-gh9pk 5 months ago

    Such excellent content

  • @smtbhd32
    @smtbhd32 3 months ago

    Great lecture 😊😊

  • @sohildoshi2655
    @sohildoshi2655 2 years ago

    Nicely explained, a fan of yours!

  • @akshays4943
    @akshays4943 1 year ago

    Great efforts (🙏)... but the presentation is lacking, and the explanation is lengthy and confusing.

  • @nemeziz_prime
    @nemeziz_prime 2 years ago +1

    This virtual notebook is amazing 💪🏻🔥 people like me will start writing notes now 😂

  • @wellwhatdoyakno6251
    @wellwhatdoyakno6251 2 years ago +1

    1:33:33 that moment of realisation when life starts to make sense.

  • @Sharmaji_Youtubewale
    @Sharmaji_Youtubewale 2 years ago

    Awesome, thank you Krish!

  • @saturdaywedssunday
    @saturdaywedssunday 1 year ago +3

    Unable to find the notes in the community session. If anyone has them, please share a Google Drive link. 🙏🙏🙏🙏🙏🙏

    • @anshikakhandelwal_
      @anshikakhandelwal_ 9 months ago

      I am looking for the same. I emailed them last week but didn't get any reply yet. Please let me know if you get the files.

  • @shrikantdeshmukh7951
    @shrikantdeshmukh7951 2 years ago +2

    Sir, I don't know why people use optimization algorithms in regression to find the coefficients, when it is already proven that coefficient = inv(X'X)(X'y), which can be derived by the likelihood function method.
    Just plug in the values of X and y and you will get the coefficient estimates; no need to do iterative optimization.

    • @subrahmanyamkv8168
      @subrahmanyamkv8168 2 years ago

      The matrix inverse is not always defined, and it is also a very costly operation.

    • @adityagole9790
      @adityagole9790 2 years ago +2

      This method normally works when the dimension of X is within a certain limit. If the size of X is 10^4 or greater, finding the inverse becomes expensive and the machine fails to do so. But if we use another algorithm like gradient descent, we can find the optimal coefficients even for a large X.
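
      To make this thread concrete, a minimal NumPy sketch on made-up toy data (my own illustration, not from the video), contrasting the closed-form normal equation with gradient descent; both recover roughly the same coefficients:

          import numpy as np

          # Toy data: y = 1 + 2x plus noise (illustrative, not the lecture's data)
          rng = np.random.default_rng(0)
          X = np.c_[np.ones(100), rng.uniform(0, 10, 100)]  # first column of ones = intercept
          y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 100)

          # Closed form: solve (X'X) theta = X'y rather than inverting explicitly
          theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

          # Iterative: gradient descent on J(theta) = (1/(2m)) * ||X theta - y||^2
          theta_gd = np.zeros(2)
          alpha, m = 0.01, len(y)
          for _ in range(5000):
              theta_gd -= alpha * (X.T @ (X @ theta_gd - y)) / m

          print(theta_ne, theta_gd)  # both approximately [1, 2]

      Note that np.linalg.solve avoids forming the inverse explicitly, which is cheaper and numerically safer, echoing the replies above.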

  • @sumaraj6626
    @sumaraj6626 2 years ago

    Very nice explanation, Krish. Enjoyed this session.

  • @dheerendradayal3465
    @dheerendradayal3465 2 years ago +5

    KNN should come under Supervised ML

  • @himabindugottipati
    @himabindugottipati 2 years ago

    Great job Krish, I really appreciate it

  • @rajunayak6427
    @rajunayak6427 1 year ago

    A lot of respect to you, sir. Thank you, sir.

  • @amitkhatri5614
    @amitkhatri5614 2 years ago +1

    I like this information

  • @swatisaini2426
    @swatisaini2426 2 years ago

    Love you Krish, you just nailed it

  • @mohamedatef4266
    @mohamedatef4266 2 years ago +2

    You are a very good teacher, because you're making me understand the algorithms of ML 😁🙏

  • @shreyadhumal944
    @shreyadhumal944 2 years ago

    You are a great teacher....

  • @lakshsinghania
    @lakshsinghania 1 year ago

    Krish sir, at 1:13:44 why are we taking the slope at the local minima? Obviously it will be 0 even at the global minima.
    Shouldn't we check at that particular green point?
    Can anyone clarify this doubt?
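
    For reference, the standard gradient descent update and a convexity note (textbook facts, not specific claims about this video):

        \[
        \theta_j := \theta_j - \alpha\,\frac{\partial J(\theta)}{\partial \theta_j}
        \]

    For linear regression with squared error, J is convex, so the slope is zero only at the single global minimum; there is no separate local minimum at which the green point could get stuck.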

  • @Ranganadhamkrishnachaitanya27

    Why 1/2m? The derivative dJ(theta1)/d(theta1) measures the small change in J(theta1) with respect to theta1. Dividing the sum of squared errors by m averages it over the data points, and the extra 2 in the denominator is there purely for convenience: when you differentiate the squared term, the 2 that comes down from the power rule cancels it (see the derivation below). Scaling the cost this way does not change which theta1 minimizes it, so the best fit line that brings all the data points closest is the same; this whole quantity is the cost function.
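
    A short LaTeX sketch of that cancellation:

        \[
        \frac{\partial}{\partial \theta_j}\,\frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)^2
        = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)\,x_j^{(i)},
        \]

    so the 1/2 exactly cancels the 2 from the power rule, and since it only rescales J by a positive constant it does not change the minimizing theta.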

  • @priyanshumalaviya4441
    @priyanshumalaviya4441 2 years ago +2

    Sir, please do a community session on deep learning in the future too.

  • @ayeshavlogsfun
    @ayeshavlogsfun 2 years ago +2

    Please do the coding of each algorithm

  • @saiharika6088
    @saiharika6088 2 years ago

    Very nice session, sir

  • @ashu030991
    @ashu030991 2 years ago

    1:00:00 During the calculation of the cost function you used 1/2m, which is a normal sum calculation.
    But you said previously that we use 1/2m to simplify the differentiation, so isn't the 1/2 unnecessary here? Just 1/m would be sufficient.
    Also, in standard books the cost is calculated using 1/(total data points), or in your case 1/m.
    Please clear this up.

    • @shrirangjage999
      @shrirangjage999 1 year ago

      The 2 in 1/2m is there to cancel out the 2 that comes from differentiating the square.

  • @tukaramugile573
    @tukaramugile573 2 years ago

    Excellent session

  • @jatinnandwani6678
    @jatinnandwani6678 2 years ago

    Thanks Krish

  • @dipti2075
    @dipti2075 2 years ago

    Very good explanation Krish ❤️

  • @1111Shahad
    @1111Shahad 2 years ago

    Thanks @krish

  • @purna9212
    @purna9212 2 years ago

    It was very good

  • @sudhanshukulshrestha6006
    @sudhanshukulshrestha6006 2 years ago

    Please make more video lecture series like this in the future, because your lectures, unlike others, contain problems based on real-world scenarios happening very recently and are very practical in nature; watching them, one can become job-ready. The biggest thing is that you don't give bookish examples: based on your real-world experience, you impart practical, handy knowledge. Please keep working the same way.

  • @1mintechskills
    @1mintechskills 2 years ago

    Best lessons, love it

  • @arbaazkhan9985
    @arbaazkhan9985 2 years ago +2

    Sir, please spend 1 hour 30 minutes on the theoretical part and the remaining 15 minutes on implementation

  • @kunallokhande4397
    @kunallokhande4397 2 years ago

    Thank you...

  • @rakeshkumarrout2629
    @rakeshkumarrout2629 2 years ago +4

    Great course. Krish, can you open this for downloading?

  • @tejareddy1692
    @tejareddy1692 1 year ago

    Thank you👏

  • @ammar46
    @ammar46 2 years ago +2

    After the video, please comment the syllabus of the day, or at least the names of the algorithms covered. Thank you!

    • @krishnaik06
      @krishnaik06  2 years ago +3

      Check the thumbnail😀😀😀

  • @jyotinigam9091
    @jyotinigam9091 2 years ago

    Great explanation

  • @ModernHumanarbaz
    @ModernHumanarbaz 2 years ago +1

    If this could come with a practical in a Jupyter notebook, that would be amazing too.
    But still, thank you sir, these efforts are amazing.

  • @kaushikdey7567
    @kaushikdey7567 2 years ago +11

    Sir, will we be doing the coding portion of the algorithms as well, in Python, within these 7 days?

    • @sudhanshuedu
      @sudhanshuedu 2 years ago +15

      Yes, I will be doing an implementation part, don't worry.

    • @abhisheksinghmahra446
      @abhisheksinghmahra446 2 years ago +5

      @@sudhanshuedu Wow, we're so excited... some of us can't afford OneNeuron, but you all are helping us, and I'm so blessed that I found iNeuron.
      I don't have a system yet, so I'm just practicing Python on my Android phone... I hope by around April I get my own PC or laptop, and that you guys will launch some FSDS job-guarantee program around summer 😭😭❤️❤️❤️❤️
      Hats off to you guys 👍🌹

    • @ayeshavlogsfun
      @ayeshavlogsfun 2 years ago +2

      @@sudhanshuedu Where will you upload the coding?

    • @vijay_rangvani
      @vijay_rangvani 2 years ago +1

      @@sudhanshuedu Sudhanshu sir, thank you for the OneNeuron platform. I started the Data Science Masters. Again, thank you for this amazing concept of affordable education.

    • @kaushikdey7567
      @kaushikdey7567 2 years ago

      @sudhanshu kumar thank you sir

  • @marsrover2754
    @marsrover2754 2 years ago +8

    Please also cover other important topics like LDA, PCA, t-SNE, etc. A lot of good companies ask these questions and test candidates' in-depth understanding.
    @Krish Naik.

  • @dharagajera2902
    @dharagajera2902 2 years ago

    You are amazing ...

  • @vagheeshmk3156
    @vagheeshmk3156 1 year ago

    Krish is the Guru
    #KingKrish

  • @deepanjalib.n487
    @deepanjalib.n487 2 years ago

    My guru🙏

  • @snehalpophale6287
    @snehalpophale6287 1 year ago

    At 1:23:13, when he says 'Oh my god' after writing the derivation 🙈

  • @woblogs2941
    @woblogs2941 2 years ago +3

    Sir, your teaching is simply awesome.
    You make complex concepts easy and nice.
    Your hard work on that blue book is worthwhile for us too ❤🔥🔥👏🏻🙏

  • @jankipatel2338
    @jankipatel2338 2 years ago

    Really helpful Sir.

  • @Dikshu_Bhavi_Bros
    @Dikshu_Bhavi_Bros 1 year ago

    Super sir.

  • @dhawan177
    @dhawan177 11 months ago +2

    Are notes available for this??

  • @rangarajankasturi8756
    @rangarajankasturi8756 2 years ago

    Very nice

  • @beinginnit
    @beinginnit 1 year ago

    For theta1 = 2, the output should be ~2.6

  • @adeyanjumusliyu5086
    @adeyanjumusliyu5086 2 years ago +1

    Sir, thank you for this great job. Please sir, I have a question: are we always to assume theta one while finding the gradient descent curve? Thank you so much. I love the way you break everything down for us.

  • @gauthamvenkat3218
    @gauthamvenkat3218 1 year ago

    Good evening sir

  • @mohammedmohuddin2586
    @mohammedmohuddin2586 1 year ago

    Thanks Krish sir

  • @rajeswariirugula
    @rajeswariirugula 7 months ago

    Hi sir, thanks for sharing your amazing knowledge. I have a doubt at 43:00: which difference do we take to get the perfect line? Is it the difference between predicted points, or between the predicted point and the existing dataset values?

  • @shaiksuleman3191
    @shaiksuleman3191 2 years ago

    Sir, please use y = mx + c in linear regression, which is clearer; the theta notation is quite confusing.
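
    For anyone bridging the two notations, they describe the same line:

        \[
        h_\theta(x) = \theta_0 + \theta_1 x \;\equiv\; y = c + mx,
        \qquad \theta_0 = c \ (\text{intercept}), \quad \theta_1 = m \ (\text{slope}).
        \]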

  • @stephenmanova7091
    @stephenmanova7091 2 years ago +1

    Mr. Krish Naik, it was wonderful listening to the theory. Really enjoyed it. I come from a mechanical background and am a beginner in AI.
    I have one doubt, sir. In finding R-squared you said ŷ (y-cap) is the difference between the actual and the predicted point. Then ŷ would be h_theta(x) - y, right? But you also said ŷ is nothing but h_theta(x). It is confusing me; please clear this up tomorrow.
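
    The standard definitions may help with the ŷ confusion: ŷ is the prediction itself, and the residual is y minus ŷ. In LaTeX:

        \[
        \hat{y}^{(i)} = h_\theta(x^{(i)}), \qquad
        R^2 = 1 - \frac{\sum_i \bigl(y^{(i)} - \hat{y}^{(i)}\bigr)^2}{\sum_i \bigl(y^{(i)} - \bar{y}\bigr)^2}.
        \]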

  • @raj-nq8ke
    @raj-nq8ke 2 years ago +1

    Love you sir

  • @milindtakate5987
    @milindtakate5987 1 year ago

    Can you create a video on SHAP values and their use in interpreting machine learning models?

  • @sandipansarkar9211
    @sandipansarkar9211 2 years ago

    finished watching

  • @codebond
    @codebond 2 years ago +2

    I can't find the resources/notes in the community session.

  • @nayanshewkani9858
    @nayanshewkani9858 2 years ago +3

    Hello sir,
    I am already an enrolled student of OneNeuron and have also started watching your ML series on YouTube. I have one query regarding gradient descent: though I have understood the maths behind it, I really want to know whether I need to understand its Python implementation in depth, or whether understanding the algorithm would suffice in interviews and on the job.

  • @MohamedFazanNismy
    @MohamedFazanNismy 9 months ago

    Great session Krish

  • @charanpoojary4804
    @charanpoojary4804 2 years ago

    God bless you always

  • @xplorewithdeba7869
    @xplorewithdeba7869 2 years ago +1

    Excellent session, sir!! But where will I get the notes? They are not on the community live.

    • @nileshsen9290
      @nileshsen9290 2 years ago

      They provided

    • @manishamahapatro9978
      @manishamahapatro9978 2 years ago

      @@nileshsen9290 Can you please tell me where I can find the notes?

    • @LaxmiNarayan-31014
      @LaxmiNarayan-31014 2 years ago

      @@nileshsen9290 Hi sir, I could not find the notes in the community live... can you please tell me where I can get them? Thanks

    • @saketram690
      @saketram690 2 years ago

      @@nileshsen9290 Where is it, bro? Can you tell?