I am not a girl who generally comments on YouTube videos, but I am learning from your videos and this is my genuine comment: you are amazing, and your concepts in data science are very clear and to the point. I am very happy that a teacher like you is here. Superb job, Sir!
Who asked whether you generally comment or not?
@@amankukar7586 good one bro
Get out of here at the first opportunity; don't be so formal here.
"I am not a girl" okay can't say these days
"who generally comments on youtube videos" first of all youtube doesn't have any comment history data to prove this
second How dare you call this another youtube video? How dare you generalised an educational video that free of cost while people pay an hefty amount of price for such contents?
shame on you!
I was trying to understand NLP concepts by referring to various books and videos for the last two months, but the concepts were not clear to me. This explanation is really awesome, explained in a very easy way. Thanks Krish.
I have seen a lot of YouTube tutorials, but I can't find tutorials like yours, which are clear and more precise. Keep going.
You are a great teacher, sir; it's very difficult to find a good channel that explains the code line by line ❤💥👏
Thank you, the whole NLP playlist is very helpful!
Exactly
The best possible tutorial on Data Science/Machine Learning on YouTube. Cheers to you brother! :D
Sir, my penance has finally been fulfilled by watching this lecture of yours ❤️ Thank you so, so much, sir ❤️❤️❤️❤️
Great work Krish. You have this knack for explaining things in a pretty simple manner.
You are an excellent teacher. Thanks for making/uploading these videos
🎯 Key Takeaways for quick navigation:
00:00 📚 *Introduction to Spam Classifier Project*
- Creating a spam classifier using natural language processing.
- Overview of the dataset from UCI's SMS Spam Collection.
- Reading and understanding the dataset structure.
01:47 📂 *Exploring the Dataset and Data Preprocessing*
- Explanation of the SMS spam collection dataset.
- Reading the dataset using pandas and handling tab-separated values.
- Data cleaning and preprocessing steps using regular expressions and NLTK.
05:46 🧹 *Text Cleaning and Preprocessing*
- Using regular expressions to remove unnecessary characters.
- Lowercasing all words to avoid duplicates.
- Tokenizing sentences, removing stop words, and applying stemming.
13:52 🎒 *Creating the Bag of Words*
- Introduction to bag-of-words representation.
- Implementation of count vectorization using sklearn's CountVectorizer.
- Selecting the top 5,000 most frequent words as features.
17:27 📊 *Preparing the Output Data*
- Converting the categorical labels (ham and spam) into dummy variables.
- Finalizing the output data with one column representing the spam category.
- Overview of the preprocessed data for training the machine learning model.
21:04 📊 *Data Preparation for Spam Classification*
- Data preparation involves creating independent (X) and dependent (Y) features.
- Explanation of dummy variable trap in categorical features.
- Introduction to the train-test split for model training.
22:30 🛠️ *Addressing Class Imbalance and Train Spam Classifier*
- Discussion on class imbalance issue in the data.
- Introduction to Naive Bayes classification technique.
- Implementation of the Naive Bayes classifier using multinomial Naive Bayes.
24:22 📈 *Evaluating Spam Classifier Performance*
- Explanation of the prediction process using the trained model.
- Introduction to confusion matrix for model evaluation.
- Calculation of accuracy score for the spam classifier (98% accuracy).
27:50 🔄 *Improving Spam Classifier Accuracy*
- Suggestions for improving accuracy, including the use of lemmatization.
- Mention of addressing class imbalance for better performance.
- Recommendation to explore TF-IDF model as an alternative to count vectorization.
Made with HARPA AI
Best NLP videos of all time. A complete gist; mind you, not for the faint-hearted. Excellent job Krish. Initially I had given up on NLP completely, but now I have renewed vigour after such exemplary teaching.
I would say that to prevent leakage we should split our data before we fit_transform on the corpus. In other words, we are teaching vocabulary to our model on the whole dataset, which defeats the purpose of splitting into train and test afterwards. The whole purpose of the test set is to test our model on data that our model has never seen before. Please correct me if I am wrong! Cheers!!
I agree, we should split before fit_transform to prevent leakage.
+1
Agree, split before getting BOW.
Hi. CountVectorizer is not an ML model; it just converts text to vectors (a matrix of counts).
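For reference, a minimal sketch of the split-first approach (assuming corpus and y have already been built with the same preprocessing loop as in the video):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
# Split the raw texts first, so the test messages never influence the vocabulary
corpus_train, corpus_test, y_train, y_test = train_test_split(corpus, y, test_size=0.20, random_state=0)
cv = CountVectorizer(max_features=5000)
X_train = cv.fit_transform(corpus_train).toarray()  # learn vocabulary from train only
X_test = cv.transform(corpus_test).toarray()        # reuse that vocabulary for test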
You are a genius at explaining, Krish Naik Ji; you're the best 👍👌👌👌
I am so addicted to his videos that sometimes I even forget to like the video. 😂
Excellent, very happy to see this kind of explanation @Krish Naik, we will definitely do well.
Thank you very much, sir; your videos are really very helpful. I am learning NLP from your channel for the first time. I don't know machine learning, that's why I'm facing a little difficulty.
Best playlist to learn NLP. Thank you Krish.. 🙂
I used Lemmatization and TF-IDF in text preprocessing and got an accuracy score of 0.971.
It's really a fantastic video, sir. You explained many things in a manner that is very easy to understand. Thanks a lot, sir!!!
Just amazing, sir; I can't praise you enough. Very useful sessions, thank you.
I am getting these accuracy values for different combinations:
Stemming and CountVectorizer: accuracy = 98.5650%
Lemmatization and CountVectorizer: accuracy = 98.29596%
Lemmatization and TfidfVectorizer: accuracy = 97.9372197309417%
Stemming and TfidfVectorizer: accuracy = 97.9372197309417% (same as Lemmatization and TfidfVectorizer)
You are really great, sir. You have explained each and every topic very well. Hats off to you.
Thank You Krish for sharing the knowledge.
Hi Krish, I am the newest subscriber to your channel and I hope this video of yours will help me complete a project of my own. Thank you so much. Will continue to learn.
You are an awesome teacher; this is really helpful for me... God bless you.
Really, no words to describe you... lots of love, sir ❤️ Thank you so much, sir; it means a lot.
I used logistic regression with multiclass specified, and I achieved 94.3% accuracy on test data and 95.7% accuracy on training data.
Hello, sir,
I am very happy that you are making videos. Please make more videos on Kaggle competitions...
Thank you so much for sharing your knowledge with us
Good job Krish with the NLP playlist
Thank you so much Krish Sir...!!!
Thank you, sir, for the wonderful explanation.
It was so clear and helpful, thank you so much.
Thanks Krish, superb explanation once again. All my concepts about NLP are now crystal clear. I know a career in NLP is superb, but can you explain its exact value in terms of a data science career? Please guide, and feel free to reply, as I am eagerly waiting. Thanks once again.
Very good video, sir... thank you.
Keep up the good work. Thanks.
Here the dataset is highly imbalanced (i.e. ham: 4825, spam: 747), which is why we get such high accuracy.
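For context, a classifier that always predicted "ham" would already score 4825/5572 ≈ 86.6% accuracy, so precision and recall on the spam class are more telling than raw accuracy here.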
Please make videos on word embedding like word2vec/GloVe/BERT/Elmo/GPT/XLNet etc
You are simply amazing
Sir please make videos on LDA, NMF, SVD and Word2Vec Models
You are like God to me, Sir.
Sir, in this model why have we used MultinomialNB and not BernoulliNB? And can we use BernoulliNB instead of MultinomialNB?
Hi Krish, Excellent explanation
Boss, please also add sentiment analysis and topic modelling to your already wonderful repertoire!
Brother, you are making helpful content for us. Can you tell me how to remove stopwords for other languages like Bangla or Hindi?
Thank you krish sir
Hi Krish,
Why are we hard-coding max_features=5000? What if this code is migrated to production as-is and faces more tokens/features in live data (e.g. if live data has 0.1 million (1 lakh) features)?
In this scenario, does our model fail?
I have created the model and saved it using joblib, but I am not getting how to use the model for prediction. Is there any way I can pass the email text to the model so it can detect spam or ham? I am a newbie, please help. Thanks.
Have you figured out how to do it? If yes, please let me know too.
Hi Krish, good session. I have one comment: for getting the test corpus, the better practice may be to use transform, i.e. fit_transform on train and only transform on test. And the train-test split should be done before we build the corpus. Let me know what you think.
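One way to make this hard to get wrong is to wrap the vectorizer and classifier in a scikit-learn Pipeline; a minimal sketch, assuming corpus and y are the cleaned texts and labels from the video:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
# The pipeline fits the vectorizer on the training texts only,
# so the test texts never leak into the vocabulary
clf = make_pipeline(CountVectorizer(max_features=5000), MultinomialNB())
corpus_train, corpus_test, y_train, y_test = train_test_split(corpus, y, test_size=0.20, random_state=0)
clf.fit(corpus_train, y_train)
print(clf.score(corpus_test, y_test))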
Sir, you are too good.
Hi Krish, supposing we need to implement functionality for identifying spam afresh, how can we come up with a solution? The sample data used here has already been tagged as spam and ham by someone, sometime, somewhere. In practice, do we need to have sample data upfront? Can you please advise?
Thankyou sir❤️🔥
How do we decide when to use CountVectorizer versus TF-IDF?
How do we decide whether/when to use stemming or lemmatization?
For example, in this case why didn't you use TF-IDF instead of bag of words? And why wasn't lemmatization used instead of stemming?
Wonderful
Legend ❤️
Sir, I have tried running the code, but the shapes of X and y are not the same, so train_test_split is not working. It says:
Found input variables with inconsistent numbers of samples: [11144, 5572]
Hello sir,
I would like to know how to classify a new message as ham or spam after building the NB model.
You can do it like this (a sketch assuming cv, the fitted CountVectorizer, and spam_detect_model are still in memory from the training script):
import re
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()
# Wrap the new message in a DataFrame so the same cleaning loop applies
df = pd.DataFrame(['this message is a spam'], columns=['message'])
corpus = []
for i in range(0, len(df)):
    review = re.sub('[^a-zA-Z]', ' ', df['message'][i])  # keep letters only
    review = review.lower()
    review = review.split()
    review = [ps.stem(word) for word in review if word not in stopwords.words('english')]
    review = ' '.join(review)
    corpus.append(review)
# Use transform (not fit_transform) so the training vocabulary is reused;
# words outside that vocabulary are simply ignored
X_new = cv.transform(corpus).toarray()
pred = spam_detect_model.predict(X_new)
print('Spam' if pred[0] == 1 else 'Ham')
@@yogeshprajapati7107 How does the model handle the 2500 features when doing predict? I believe there will be a mismatch between the number of features from the new message and the number of features in the trained model. Can you share how to overcome this?
Sometimes an error is good for your health 😂
I have 2 questions. First: why only MultinomialNB, is there a specific reason? Can't we use BernoulliNB or GaussianNB?
Second: if the dataset is imbalanced we would use ComplementNB, but how do we know whether the dataset is balanced or imbalanced?
BernoulliNB - used when spam classification is done with a binary presence/absence approach, i.e. if 'X' is present, then 'spam', else 'not spam'
GaussianNB - used when the feature values are continuous
MultinomialNB - counts the presence of words and their frequency of occurrence to decide the decision boundary
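A quick way to compare the two text-friendly variants on the same bag-of-words features (a sketch assuming X_train, X_test, y_train, y_test from the video's split):
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
# MultinomialNB uses the raw word counts; BernoulliNB binarizes them to presence/absence
for model in (MultinomialNB(), BernoulliNB()):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
# To check for imbalance, inspect the label distribution, e.g.
# messages['label'].value_counts() (assuming the dataframe name from the video)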
Sir, why did we go for bag of words and not TF-IDF? Is TF-IDF only used for sentiment analysis?
Hi Krish, please also make a project relating to bigrams and unigrams. Thank you.
Sure I will do that
Hello sir, why have we not used lemmatization here? Stemming may or may not give meaningful words, but we need meaningful words here, right?
Good one... actually you may need to use the Bernoulli Naive Bayes model, as it deals with binary values 0 and 1... correct me if I am wrong.
Sir, is deep learning necessary to learn before coming to this playlist (as I see Keras and LSTM in the last videos)?
I have an error saying 'unhashable type: list' even though all my steps are the same.
Hello Sir... can you please make a video on Topic Analysis - LDA? There aren't any clear-cut videos on YouTube yet like yours.
Awesome work sir !!
I have typed the code you explained in this topic, "Implementing a Spam classifier in python | Natural Language Processing", but I am not getting the corpus list... I got an empty corpus,
i.e. [' ', ' ', ' ', ...]
Sir, can you please make a video on 'Generating paraphrases from text using NLP'?
Can we just use an if-else condition on the label column to derive the 0-1 (spam-ham) column? What is the purpose of using the get_dummies function for a binary class column?
Awesome
Can't we make this code work in a Jupyter notebook instead of Spyder? Because I can't really see any output in Spyder.
I don't have a dependent variable like "spam" in my imported document. How will I train the dataset?
Very nice, kindly post new videos.
We could have used drop_first in get_dummies on the label instead of iterating over the whole array.
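For reference, the one-line version (assuming the dataframe and column are named messages and 'label' as in the video):
y = pd.get_dummies(messages['label'], drop_first=True)  # drops 'ham', leaving a single 0/1 'spam' column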
Hi sir, please correct me if I'm wrong.
In line number 30 you are applying the transform to the whole dataset; won't that be data leakage? The transform has to be applied after splitting the data, right?
Thank you.
Hi. CountVectorizer is not an ML model; it just converts text to vectors (a matrix of counts).
Excellent...
Sir, instead of using the max_features parameter at 16:43... what if we apply PCA or LDA on the total set of columns?
To predict whether a new message is spam or ham, write this code (a sketch assuming cv, the fitted CountVectorizer, and spam_detect_model are still in memory from the training script):
import re
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()
# Wrap the new message in a DataFrame so the same cleaning loop applies
df = pd.DataFrame(['this message is a spam'], columns=['message'])
corpus = []
for i in range(0, len(df)):
    review = re.sub('[^a-zA-Z]', ' ', df['message'][i])  # keep letters only
    review = review.lower()
    review = review.split()
    review = [ps.stem(word) for word in review if word not in stopwords.words('english')]
    review = ' '.join(review)
    corpus.append(review)
# Use transform (not fit_transform) so the training vocabulary is reused
X_new = cv.transform(corpus).toarray()
pred = spam_detect_model.predict(X_new)
print('Spam' if pred[0] == 1 else 'Ham')
Sir, I am getting an error while downloading the stopwords:
Parse Error: not well-formed (invalid token): line 1, column 0
Thanks a lot!
Sir,
What is the reason behind choosing the Naive Bayes classifier? Why not another classifier?
very helpful...!!!
I have gone through the 7 videos in the playlist. Well explained in every video. Can you please tell me how to implement this program in a real scenario? Everyone finishes their videos by only building the models, so please try to explain how we can use this model. If I have a text message, how do I find whether it is spam or not using this model?
Check my deployment playlist, you will get to know.
With lemmatization and max_features, the accuracy is 97%.
Hi, nice lecture.
I have a dataset with 1.3 million rows. I used your code, but when I perform bag of words my Google Colab crashes. Any solution?
How to make a GUI for this project?
Any idea about it? It would be of great help!
You can use the Streamlit framework; without any knowledge of HTML or CSS you can make beautiful web apps.
Hi Krish, nice video. Just had a question: what if I put the model in production and a new message has a word which is not part of my training dataset? Then the features won't match and the model will give an error?
Hello sir, if we have a different number of labels or categories, such as business, sports, entertainment, politics, tech, history, then how can we get the dummy variables and the bag of words, and how do we find which labels are present?
Instead of pd.get_dummies, sklearn.preprocessing.LabelEncoder can be used.
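A sketch of that alternative (assuming the dataframe and column are named messages and 'label' as in the video):
from sklearn.preprocessing import LabelEncoder
# Classes are ordered alphabetically, so ham -> 0 and spam -> 1
y = LabelEncoder().fit_transform(messages['label'])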
I tried with TF-IDF, but my score is better with bag of words. Is that possible, or am I making some mistake?
Why didn't you use a label encoder for the target column (spam/ham)?
Hello Krish, how can we handle multi-label classification problems?
How do we determine that "top features" are selected when you pass max_features=5000 in CountVectorizer?
nice
TypeError: cannot use a string pattern on a bytes-like object
This error is shown when lines 17-20 are executed... How do I rectify it... someone help please.
I get the same error!! What did you do?
How can we visualize the actual result for clarification? Thanks.
Wouldn't there be a data leakage problem if we use fit_transform on the entire data?
There are some drawbacks to the bag-of-words model: it assumes the words are independent, the meaning of the sentence is lost, and the structure of the sentence has no importance. So why use this model?
Is there any other model/classifier which will give good results with text?
Sir, how do I upload the file in Spyder? Can this coding be done in a Jupyter notebook?
How can we check the model on user-provided input?