Saptarsi Goswami
Advanced Machine Learning Course (Introductory Video)
Views: 352

Відео

S4DS Ideathon
275 views · 3 years ago
This video talks about the S4DS Ideathon brought for S4DS Student Members. Submit your ideas at forms.gle/fzJKcMydw4fnScdk7 and get the opportunity for a dedicated mentor. Not a member yet? Follow this link: www.townscript.com/v1/widget/s4ds-membership-123331 (the yearly fee is just Rs 500 for students).
GAN using Tensorflow
776 views · 3 years ago
- We discuss the basics of GAN - Explain the different functions required for the setup - Create the model - Train and generate #GAN #GenerativeAdversarialNetwork #Tensorflow
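A minimal sketch of the generator/discriminator setup discussed here, using tf.keras (the 100-dimensional noise vector and the 28x28 image shape are illustrative assumptions, not taken from the video):

    import tensorflow as tf
    from tensorflow.keras import layers

    # Generator: maps a 100-dimensional noise vector to a 28x28x1 image
    def build_generator():
        return tf.keras.Sequential([
            layers.Dense(128, activation="relu", input_shape=(100,)),
            layers.Dense(28 * 28, activation="tanh"),
            layers.Reshape((28, 28, 1)),
        ])

    # Discriminator: scores an image as real (near 1) or generated (near 0)
    def build_discriminator():
        return tf.keras.Sequential([
            layers.Flatten(input_shape=(28, 28, 1)),
            layers.Dense(128, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])

    generator, discriminator = build_generator(), build_discriminator()
    fake = generator(tf.random.normal((16, 100)))  # a batch of generated images
    scores = discriminator(fake)                   # real/fake scores for the batch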
GAN - A simple introduction
355 views · 3 years ago
- Basics of Generative models - Adversarial Machine Learning - GAN few success stories - Generator and Discriminator
Genetic Algorithm Overview
160 views · 3 years ago
- GA Overview - Operators - GA Algorithm
Data warehouse Overview
88 views · 3 years ago
- Data warehouse overview
Permutation and Combination with repetition
324 views · 3 years ago
We follow Rosen's book and discuss - Permutation with repetition - Combination with repetition - Permutation with indistinguishable objects - Example Problems
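For reference, the two Rosen formulas used here are n^r for r-permutations with repetition and C(n + r - 1, r) for r-combinations with repetition; a quick check in Python (n and r are made-up example values):

    import math

    n, r = 4, 6  # e.g. selecting 6 items from 4 kinds, repetition allowed

    perms_with_rep = n ** r                    # n^r ordered selections: 4096
    combs_with_rep = math.comb(n + r - 1, r)   # C(9, 6) unordered selections: 84
    print(perms_with_rep, combs_with_rep)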
Insertion, Deletion, Update Anomaly
95 views · 3 years ago
- Insertion, Deletion, Update Anomaly - Link with normalization
Smote with Python
11K views · 3 years ago
- What is the class imbalance problem - Examples of class imbalance - Context of SMOTE - SMOTE application with a sample dataset - SMOTE parameters - Other algorithms. Other materials: Notebook: www.kaggle.com/saptarsi/smote-notebook Article: towardsdatascience.com/class-imbalance-smote-borderline-smote-adasyn-6e36c78d804 Video: ua-cam.com/video/mKG7lnZNAOk/v-deo.html #SMOTE #SMOTEUsingPython
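For a quick feel before opening the notebook linked above, here is a minimal sketch of applying SMOTE with imbalanced-learn; the synthetic dataset and the 90/10 class split are assumptions for illustration, not the data used in the video:

    from collections import Counter
    from sklearn.datasets import make_classification
    from imblearn.over_sampling import SMOTE

    # Build an artificially imbalanced two-class dataset (10% minority)
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
    print("Before:", Counter(y))

    # SMOTE synthesises new minority points between existing minority neighbours
    X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
    print("After:", Counter(y_res))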
Discrete Maths Counting Sum, Product, Inclusion and Exclusion
40 views · 3 years ago
Discrete Maths Counting Sum, Product, Inclusion and Exclusion
Discrete Maths Set, Functions, Composite Function
1.1K views · 3 years ago
Discrete Maths Set, Functions, Composite Function
Relational Algebra Select, Project, Union
48 views · 3 years ago
Relational Algebra Select, Project, Union
Relational Database Management Relation, Schema, Instance, Keys
64 views · 3 years ago
Relational Database Management Relation, Schema, Instance, Keys
Oracle LiveSQL Overview
192 views · 3 years ago
How to use Oracle LiveSQL Environment
DBMS SQL Tutorial
68 views · 3 years ago
In this lecture, we discuss the use of the Oracle LiveSQL Environment: 1) About DDL and DML 2) SELECT statement: WHERE, ORDER BY, GROUP BY, HAVING 3) Insert, Delete, Update 4) Create Table
Python Lists
57 views · 3 years ago
Python Lists
Python List Class 1
70 views · 3 years ago
Python List Class 1
Data Mining Class 2021 01 06 Clustering
115 views · 3 years ago
Data Mining Class 2021 01 06 Clustering
How to store documents in Google Drive for sharing
168 views · 3 years ago
How to store documents in Google Drive for sharing
K Means and Hclust using R Programming
562 views · 3 years ago
K Means and Hclust using R Programming
AI State Space Search, heuristic function
132 views · 3 years ago
AI State Space Search, heuristic function
A Tutorial on Semi Supervised Learning
6K views · 3 years ago
A Tutorial on Semi Supervised Learning
Applying kNN using R
1.9K views · 3 years ago
Applying kNN using R
Improving Hill Climbing and Simulated Annealing
266 views · 3 years ago
Improving Hill Climbing and Simulated Annealing
How to perform Linear Regression using R
199 views · 3 years ago
How to perform Linear Regression using R
How to use SMOTE, Borderline SMOTE, ADASYN to handle class imbalance
11K views · 3 years ago
How to use SMOTE, Borderline SMOTE, ADASYN to handle class imbalance
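All three oversamplers named in the title live in imbalanced-learn and share the same fit_resample interface; a sketch under the same illustrative assumptions as above (synthetic data, default parameters):

    from sklearn.datasets import make_classification
    from imblearn.over_sampling import SMOTE, BorderlineSMOTE, ADASYN

    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
    for sampler in (SMOTE(random_state=0),            # interpolate between minority neighbours
                    BorderlineSMOTE(random_state=0),  # focus near the class boundary
                    ADASYN(random_state=0)):          # adapt to local minority density
        X_res, y_res = sampler.fit_resample(X, y)
        print(type(sampler).__name__, len(X_res))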
How to use VGG16 as a pre-trained model in 8 minutes
18K views · 3 years ago
How to use VGG16 as a pre-trained model in 8 minutes
Lec 23 CNN Lecture 8 VGGNet a brief introduction
265 views · 3 years ago
Lec 23 CNN Lecture 8 VGGNet a brief introduction
Lec 22 CNN Architectures 2 AlexNet
269 views · 3 years ago
Lec 22 CNN Architectures 2 AlexNet
How to scan your document into a PDF (in Hindi)
1.3K views · 4 years ago
How to scan your document into a PDF (in Hindi)

COMMENTS

  • @muhammadrauhan3727
    @muhammadrauhan3727 1 month ago

    Great video! Sir, what are your PC or laptop specifications?

  • @RamandeepSingh_04
    @RamandeepSingh_04 4 months ago

    Thank you sir, really helpful

  • @vijaykumar-od7kx
    @vijaykumar-od7kx 4 months ago

    Thanks Saptarsi, this video helped me to understand SMOTE much better. Thanks again

  • @nk-dy4hc
    @nk-dy4hc 5 months ago

    Very good explanation. You deserve more subscribers, sir. YouTube Shorts might bring some users. Unfortunately, the algorithm works that way. All the best.

  • @sumanbasak3883
    @sumanbasak3883 5 months ago

    What a formidable subject!! 🙂🙂

  • @iitncompany
    @iitncompany 8 months ago

    Wrong at 10:40, the S1 and S2 matrix calculations are both wrong.

  • @shatiswaranvigian9349
    @shatiswaranvigian9349 9 months ago

    Sir, any idea how VGG performs on noisy samples?

  • @malihehheydarpour104
    @malihehheydarpour104 10 months ago

    Thanks for your video. Could you please help me find the next video, in which you discussed the different methods of SMOTE?

  • @aadiljamshed5239
    @aadiljamshed5239 10 months ago

    Sir, when applying ANOVA for feature selection, do we need to apply a normality test to demonstrate whether our data follows a normal distribution? Or can we apply it to any type of dataset without checking for normality? Could you please clarify?

  • @abhay9994
    @abhay9994 11 months ago

    Wow, this video on Linear Discriminant Analysis (LDA) by Instructor Saptarsi Goswami is incredibly informative and well-explained. I truly appreciate how he breaks down the concepts and compares LDA to PCA, highlighting the advantages of LDA. The explanations of the Fisher discriminant ratio, inter-class scatter, within-class scatter, and eigenvalue decomposition have given me a solid understanding of LDA. Thank you, Instructor Saptarsi, for sharing your expertise and helping me improve my knowledge in this area!

  • @SumitGoswami
    @SumitGoswami 1 year ago

    Hey @Saptarsi, amazing!! How can I get your contact details?

  • @tyronefrielinghaus3467
    @tyronefrielinghaus3467 1 year ago

    The English is too painful to listen to, I'm afraid.

  • @memoonashehzadi9660
    @memoonashehzadi9660 1 year ago

    In SMOTE, on what basis do we identify a point from the minority class in step 1?

  • @sinan_islam
    @sinan_islam 1 year ago

    Did anyone have a case where SMOTE made ML model performance even worse?

  • @UMARFARUK-qu3vc
    @UMARFARUK-qu3vc 1 year ago

    Thanks Sir, it is very useful.

  • @TheJuniorApollo
    @TheJuniorApollo 1 year ago

    Thank you sir

  • @朱天-h8m
    @朱天-h8m 1 year ago

    Hello sir, how can I solve a problem with three categories using the SMOTE algorithm?

  • @nintishia
    @nintishia 1 year ago

    Excellent exposition, thanks.

  • @komalsangle8259
    @komalsangle8259 1 year ago

    Are Mahalanobis and Manhattan distances the same?

  • @mostafakhazaeipanah1085
    @mostafakhazaeipanah1085 1 year ago

    Great video. I can't understand how the scores get calculated, can you help me?

  • @neoblackcyptron
    @neoblackcyptron 1 year ago

    Wow this is one of the most insightful deep explanations on the origins and mechanics of GMM, EM algorithm. Great job.

  • @TobiShoyinka
    @TobiShoyinka 1 year ago

    How can I add random_state to agglomerative clustering? My cluster numbers keep changing every time I re-run the model.

  • @lipe5331
    @lipe5331 1 year ago

    Thank you very much sir

  • @bulusuchanakyachandrahas7380

    top tier and simple

  • @theNeuo13
    @theNeuo13 1 year ago

    In the SWAP stage, do we drop the medoid node that is replaced by a non-medoid node, so that it is not SWAP-checked against the other medoid nodes? In other words, once a medoid is replaced by a non-medoid, does it become a non-medoid and get SWAP-checked against the other initial medoids, or is it removed from the lists of medoids and non-medoids?

  • @Nadia-db6nb
    @Nadia-db6nb 1 year ago

    Hi. May I know why the number of components for LDA is based on the number of classes when we're trying to reduce the number of features?

  • @awon3
    @awon3 1 year ago

    What are X_train1 and y_train1? You use them but they were never defined.

  • @abhijit777999
    @abhijit777999 2 years ago

    Nice explanation in the ANOVA lectures that you have given

  • @thejaswinim.s1691
    @thejaswinim.s1691 2 years ago

    great job...

  • @pcooi7811
    @pcooi7811 2 years ago

    Thank you sir.

  • @nehabhullar913
    @nehabhullar913 2 years ago

    Sir, how to use VGG16 for grayscale images?

  • @hsumin3302
    @hsumin3302 2 years ago

    Thank you for sharing this video. I have learned much from it.

  • @bruteforce8744
    @bruteforce8744 2 years ago

    Excellent video... just a small correction: the mean of the y components of X1 is 3.6 and not 3.8

  • @solwanmohamed9400
    @solwanmohamed9400 2 years ago

    I need the material.

  • @piyukr
    @piyukr 2 years ago

    It was a very helpful lecture. Thank you, Sir.

  • @waqaralam7519
    @waqaralam7519 2 years ago

    Finally my doubt is resolved, keep going Sir

  • @RupshaliDasgupta
    @RupshaliDasgupta 2 years ago

    Can anyone please provide the link to the dataset?

  • @chrisleivon8567
    @chrisleivon8567 2 years ago

    Why is there 22 in acc = np.empty(22)? I mean, can we put some lower number instead of 22? I am stuck in re-training labelled and pseudo-labelled data.

  • @iheleanbeefpatty
    @iheleanbeefpatty 2 years ago

    Thank you for this video sir.

  • @sajadms4121
    @sajadms4121 2 years ago

    Thank you so much for the video, but I have a question: in ADASYN, do we choose the most distant instances to have a higher chance of being sampled, to avoid overfitting? If yes, what if such an instance is a noisy one?

  • @musmanmureed3728
    @musmanmureed3728 2 years ago

    Thanks, very informative, but my question is: can we use any CSV file with t-SNE?

  • @jahanvi9429
    @jahanvi9429 2 years ago

    Thank you, very helpful

  • @junaidali5129
    @junaidali5129 2 years ago

    You didn't share the notebook for this code, sir.

  • @startrek3779
    @startrek3779 2 years ago

    Very informative and clear. Thank you for your effort! The following are the steps of the self-learning algorithm:
    1. Train a supervised classifier on the labelled data.
    2. Use the resulting classifier to make predictions on the unlabelled data.
    3. Add the most confident of these predictions to the labelled data set.
    4. Re-train the classifier on both the original labelled data and the newly obtained pseudo-labelled data.
    5. Repeat steps 2-4 until no unlabelled data remain.
    There are two hyperparameters to set: the maximum number of iterations and the number of unlabelled examples to add at each iteration. One issue with self-learning is that if we add many examples with incorrect predictions to the labelled data set, the final classifier may be worse than a classifier trained only on the original labelled data. I hope this answer helps someone interested in semi-supervised learning.
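    A compact sketch of those steps, assuming scikit-learn conventions and marking unlabelled examples with y = -1 (both are assumptions of this sketch, not taken from the comment or the video):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def self_train(X, y, max_iter=10, n_add=10):
            """Self-learning; unlabelled examples carry the label -1."""
            y = y.copy()
            for _ in range(max_iter):                 # hyperparameter 1: max iterations
                unlabelled = np.where(y == -1)[0]
                if len(unlabelled) == 0:              # step 5: stop when none remain
                    break
                # steps 1 and 4: (re-)train on all currently labelled data
                clf = LogisticRegression().fit(X[y != -1], y[y != -1])
                proba = clf.predict_proba(X[unlabelled])          # step 2
                # step 3: pseudo-label the n_add most confident predictions
                best = unlabelled[np.argsort(proba.max(axis=1))[-n_add:]]  # hyperparameter 2
                y[best] = clf.predict(X[best])
            return LogisticRegression().fit(X[y != -1], y[y != -1])

    scikit-learn also ships this loop ready-made as sklearn.semi_supervised.SelfTrainingClassifier, which uses the same y = -1 convention for unlabelled points.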

  • @rajatrautela6257
    @rajatrautela6257 2 years ago

    Thanks a lot sir. I have a doubt. I used LDA on the Water Potability dataset from Kaggle. I did all the data cleaning and proceeded with the methods you taught. Since my data was a binary classification, I had only 1 component. So, which graph should I plot in such cases to show the classification, and how? Also, the accuracy for my dataset was quite low, around 61%. Any suggestions on why the accuracy is so low?

  • @saswatisahoo6235
    @saswatisahoo6235 2 years ago

    Sir, it's fantastic. Thanks a lot

  • @arghyakusumdas54
    @arghyakusumdas54 2 years ago

    Thanks Sir for the video, which was very easy to understand. However, I was thinking: if the labelled dataset contains samples of only 2 classes (and no sample of a possible 3rd class), and the unlabelled data contains samples of that 3rd class, then I think the classifier trained on the labelled data cannot predict them properly and its confidence for both classes would be low. Can any strategy be adopted in this case?

  • @AnkitSingh-cg3rp
    @AnkitSingh-cg3rp 2 years ago

    Thank you very much for such an informative video

  • @zaheerabbas4718
    @zaheerabbas4718 2 years ago

    The new point I learnt is to calculate the cluster purity with respect to ground labels. Thanks for the knowledge sharing and please keep going!

  • @KaushikJasced
    @KaushikJasced 2 years ago

    Thank you sir for giving a wonderful lecture. Can you tell me how I can put the sampling ratio as per my choice instead of 1:1 using SMOTE?

    • @SaptarsiGoswami
      @SaptarsiGoswami 2 years ago

      Please go through the parameters: imblearn.over_sampling.SMOTE(*, sampling_strategy='auto', random_state=None, k_neighbors=5, n_jobs=None). The sampling_strategy parameter will give you the handle.
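      For instance, passing a float to sampling_strategy sets the desired minority-to-majority ratio after resampling (the 0.5 below is an illustrative value, not taken from the reply, and float values only apply to binary problems):

          from sklearn.datasets import make_classification
          from imblearn.over_sampling import SMOTE

          X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
          # sampling_strategy=0.5: grow the minority class to half the majority
          # class size, instead of the default 1:1 ('auto')
          X_res, y_res = SMOTE(sampling_strategy=0.5, random_state=42).fit_resample(X, y)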