Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced
Sir, please, please reply. I am doing a project on PCB defect detection using a CNN model. Please help me out, I am not getting it.
From Brazil, you are the best ML teacher!!! Thank you.
Thanks Luciano for your kind words
From South Korea. Learning much faster and more accurately than at university. Thanks.
🤗🤗🙏
We Asians are for us ❤
Your tutorials are truly outstanding, surpassing many paid online courses. I want to express my deep appreciation for the invaluable support they've offered. Your detailed explanations of each code line have been incredibly helpful, particularly when I'm teaching machine learning to my students. Your videos provide a level of comprehension and utility that distinguishes them from other machine learning resources. Your efforts are greatly appreciated... Cheers!! 💥💫💢
Excellent tutorials, much better than many highly paid courses floating around online. Thanks a lot sir, your videos helped me a lot.
I am a young AI and machine learning engineer from an IIIT, and your videos are like food for me: if I don't eat, I can't live. Great explanation...
Finally I commented after watching tons of your videos daily. Salute to your spirit, sir. You will reach 10M subs soon, because AI and ML are growing exponentially and your channel is the no. 1 YouTube channel for simple explanations of practical AI/ML coding. More and more people will join you soon.
Ha ha .. thanks for your kind words of appreciation my friend :)
You are so much better than my university tutors :-D Thanks a lot for your help!
I started to learn ml after getting inspirations from your videos. Thank you !
Happy to hear that, Sabrina!
The important CNN concepts are explained superbly and in a simple-to-understand way. Thanks a lot.
This was my first introduction to machine/deep learning, as I had to do an assignment. Still, I understood it very well and now I'm able to do CNNs on my own. Thanks to the tutor :)
Indeed you are an excellent tutor. Your efforts are greatly appreciated. I am a fan of yours. I do AI and machine learning outreach, and you paved the way for me. Thanks a lot for your support.
You teach all the concepts so that even a primary school student can understand them easily. Seriously, big fan of your teaching style.
Mr. Modi (Mr. Patel) is on one side and the rest of the Opposition (every other data science YouTuber) is on the other.
I really envy you (ONIDA TV) and you command that envy with your highest excellence.
I am a retired senior citizen and love data science (not because I understand it) but because of the amazing things that Amazon and Tesla and Google are doing.
Please keep going, and may God give you a very long life.
Thank you so much. From the knowledge I gained from this video, I decided to also increase the number of epochs in the first network (ANN) from 5 to 10, which led to a slight increase in the training accuracy (0.49 to 0.54). For the CNN, I intentionally used the SGD optimizer first and later Adam, which gave two different but better results than the ANN. I also adjusted the epochs in each case. This has given me some more ideas to play around with regarding this model. Once again, thank you for being such a great teacher.
Me too. I searched for my issue: accuracy was 10% with no increase even though I increased hidden layers and epochs. What helped me was changing softmax to sigmoid and the number of hidden units: it was 4 in my project, but here I found it was 3000, and that increased my accuracy too. But based on what did he choose 3000 and 1000 hidden units?
@@ahmedhelal920 More hidden units can recognize more patterns and features, which helps if your images contain many patterns and objects. A common heuristic is to start with more hidden units and decrease the count layer by layer to reach a better solution.
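For anyone wondering about the cost of those layer sizes: the 3000 and 1000 are heuristic choices (a wide first layer, then a funnel down to the 10 output classes), not derived values. A quick back-of-the-envelope parameter count for that stack (`dense_params` is just an illustrative helper, not a library function):

```python
def dense_params(n_in, n_out):
    """Weights plus biases for one fully connected layer."""
    return n_in * n_out + n_out

# CIFAR-10 input flattened (32x32x3), then the 3000 -> 1000 -> 10 funnel
layer_sizes = [32 * 32 * 3, 3000, 1000, 10]
total = sum(dense_params(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))
print(total)  # 12230010, about 12.2 million trainable parameters
```

Most of that cost sits in the first layer, which is why wide Dense stacks on raw pixels get expensive fast.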
Someone give this man a life elixir; he must pass this knowledge on to all future generations.
For digits: the ANN gives 90%, and the CNN gives 99+% on the train dataset and 99% on test data. Thanks, sir.
Excellent tutorials, much better than my professor! You are the best! Thank you so much! Your videos helped me a lot.
Thank you...this course has been inspiring
I found something very important: when you reshape your y into 1 dimension, save it in a different variable and use the original (2D) one in the training and test process. Otherwise, the results change a lot.
Why do the results change a lot?
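For reference, the reshape the comment above is talking about looks like this (a minimal numpy sketch; with `sparse_categorical_crossentropy` Keras generally accepts both the `(n, 1)` and `(n,)` shapes, so whether it actually changes results can depend on the loss and metric setup):

```python
import numpy as np

# CIFAR-10 labels from keras.datasets arrive with shape (n, 1)
y_train = np.array([[3], [7], [1]])
y_flat = y_train.reshape(-1)  # shape (n,); keep it in a separate variable
print(y_train.shape, y_flat.shape)  # (3, 1) (3,)
```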
You are superb at teaching. Please make a video on how to deploy such trained models to production.
How come YouTube did not recommend this to me before? Your videos are just perfect for people who want to learn deep learning and overcome the fear of AI.
Exciting Times!! May this series long continue😁
Yes it will. My goal is to cover all the topics and make this your one-stop place for deep learning.
Really a good video, I finally understood implementing a CNN using CIFAR-10.
I love your way of teaching
The video is really helpful, but you should also show where your dataset is stored, because I am unable to access my dataset from my computer.
You are the best teacher of mine. I'm grateful to you always. Thanks a lot, sir.
Zeenat, thanks for your kind words.
Your classes are really beginner friendly. I have a doubt: will adding more layers improve the accuracy?
Yes, it might; you can try adding them. However, too many layers can overfit a model: while accuracy improves on the training set, it might perform poorly on the test set. You can use regularization techniques such as adding a dropout layer to partially tackle these issues.
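In Keras, the dropout suggestion above is a one-liner, `layers.Dropout(0.3)`, placed between Dense layers. Mechanically, inverted dropout just zeroes random units during training and rescales the survivors so the expected activation stays the same; a small numpy sketch (the `dropout` function here is illustrative, not a Keras API):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate=0.5):
    """Inverted dropout: zero a fraction `rate` of units, scale the rest up."""
    mask = rng.random(x.shape) >= rate
    return np.where(mask, x / (1 - rate), 0.0)

out = dropout(np.ones(8))
print(out)  # a mix of 0.0 and 2.0 entries
```

At inference time nothing is dropped, which is why the survivors are scaled up during training.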
Hi, thanks for the clear explanation. I was wondering why you did not use softmax activation function in the last layer instead of sigmoid? As far as I know, softmax is preferred in multiclass problems (like in this case) and sigmoid is used for binary classification problems. Let me know and I appreciate your answer in advance.
It's not the case, Dhaval, that the ANN is performing badly: if you change y_train/y_test to categorical and use loss='categorical_crossentropy', it gives 91% accuracy. I feel the CNN will certainly perform better, but we may need a much bigger dataset.
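The switch described above is mostly about label format: integer labels pair with `sparse_categorical_crossentropy`, one-hot labels with `categorical_crossentropy`. `keras.utils.to_categorical` does the one-hot conversion; the numpy equivalent is a one-liner:

```python
import numpy as np

y = np.array([2, 0, 1])   # integer labels -> loss='sparse_categorical_crossentropy'
one_hot = np.eye(3)[y]    # one-hot labels -> loss='categorical_crossentropy'
print(one_hot)
```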
Sometimes I wonder whether this is really happening (I am not judging or even assuming): courses of this type are paid, cost huge amounts of money, and are in high demand, so how can you give this away for free?
How, sir, how?
Hats off 👍👍 and big thanks 👌
🙏🙏
I think this learning will never stop coming from you.
Ha ha... that's a nice way of appreciating my work, Ajay. Thank you. Well, this course is not free: the fee you need to pay is to share it with as many people as you can (via LinkedIn, WhatsApp, Facebook groups, Quora, etc.) :)
@@codebasics will do it definitely
✌✌Long live developers👍👍👍
Sir, I really appreciate the way you teach all the concepts related to CNNs and how to build one.
Sir, how can I get more accuracy using the Keras Tuner? Please make a video on that.
Thank you
model.evaluate:10000/10000 [==================] - 1s 57us/sample - loss: 0.0275 - accuracy: 0.9910
Very good explanation with a clear easily understandable video. Thank you for your tutorial. Loved it.
Your approach is very good. You explain the topics so well and make complex topics easy to understand.
Glad to hear that, I am happy this was helpful to you.
Very nice explanation on CNN....
How can you simplify such complex topics? You must have rich experience in this field... 😊
You are doing amazing work. I really got interested in ML after watching your video explanations.
Sir, I'm working on a project, "image classification using deep neural networks". The dataset is *CIFAR-10*. The paper I'm working from already reports 80.2% accuracy. So by using deep neural network algorithms, can I push the accuracy beyond 80%?
Thank you sir! Teaching is also a skill and you nailed it!
Such good content.
I am really excited for the upcoming videos.
Glad to hear that
Thank you sir, excellent explanation
Amazing tutorial, thanks a lot for sharing! Saludos desde Argentina! 🇦🇷
No one in the universe can teach like this.
Thanks zain for your kind words
Thanks a lot, sir, for your explanation. I got an accuracy of 98.97% using the CNN model.
Great job
Can you share your code?
Sir, I've a question. Suppose I have 3 categories: cat, dog, and hen. Each has 10 images stacked one after another, i.e. the first 10 for cat, then 10 for dog, and then hen.
My doubt is: if I train my neural network on the first 10 images of cat, my accuracy for cat will be high, but when I then train that network on dog images, its accuracy on the cat data will decrease, and it will decrease further again with the hen images.
Suppose we have over 1000 categories; the accuracy for any one of them will be very low. How do we deal with this problem? And also, am I getting a correct feel for ANNs and CNNs?
Thank you very much, sir ♥️
Can you suggest some good final-year project ideas related to image classification?
I'll be grateful.
Dear Sir, I have a dataframe with shape 6500 rows by 146 columns. It is not 3D data. How can I set the input_shape parameter to use a CNN model?
Thank you so much, sir, this video is very helpful. 😍❤️🌹👍🥰🇮🇳
Thank you, It is a great tutorial😍 on CNN
Very lucid explanation
Thank you so much for detailed tutorial. Can you please make a video on Object detection? Specially Faster RCNN and Yolo models.
Could you explain the reshaping process in detail, and why it is necessary?
Excellent demo, saved my time.
Thank you for the effort you put into all these videos; it gives us a clear picture of what is happening in each part. Thanks a lot.
Your videos are very good... you explain every line of code... it really helps me a lot to teach ML to my students... your videos are even more useful than other ML videos... 👌😊
Glad you like them!
Please explain why you reshape x_train in the exercise, and also change input_shape in Conv2D.
Hi, thank you for all your tremendous work; you make me fall in love with machine learning. Don't you dare stop ;) Thank you so, so, so much.
Thanks for your kind words, Khan ☺️ and yes, after reading your comment I am not going to stop 😉
@@codebasics bless you.
great job sir.....
Excellent explanation. 👏
Really good explanations. Thanks for your great help.
Great video, thank you for your effort in creating this. Just a small doubt: when I replicated the ANN model and ran the code without normalizing X_train and X_test, I got 100% accuracy on train as well as test, whereas after normalizing it comes down to 50%. In this video you said normalization is done to increase accuracy, so how is this happening? (Thank you in advance for your answer)
Thanks for the nice explanation; the concepts were easy to understand. Can you make a video on region-based CNNs (R-CNN) and Faster R-CNN?
Really nice video... it helped me a lot...
I want you to start an audio/video processing tutorial too, because I like your teaching skills.
Glad it was helpful!
I hope you are doing well. I had an image classification assignment where we were supposed to make a confusion matrix. I searched your channel and couldn't find anything related to confusion matrices. Please make one on that.
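Until such a video exists: `sklearn.metrics.confusion_matrix` (often plotted with a seaborn heatmap) is the usual one-call route. Mechanically it is just a count table; a minimal numpy sketch:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts samples whose true class is i and predicted class is j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 0, 1, 2, 2], [0, 1, 1, 2, 0], n_classes=3)
print(cm)
```

The diagonal holds correct predictions; off-diagonal cells show which classes get confused with which.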
Permission to learn, sir. Thank you.
We should use softmax for multiclass classification, right? But here we used sigmoid. How does it still work?
I haven't watched the entire video yet. Is there a way to type a command that looks for statistical outliers in the data that don't match anything well, so they can be eliminated to improve the model?
You are really inspirational and have so much to idolize. Thank you!
Glad it was helpful!
Excellent content! Thank you very much.
Thanks a lot for your great courses. Is it possible for you to answer my question? How should we add non-image features (features like cat and dog prices) to the flatten layer of our CNN model? And how does the model know which input image the newly added features belong to?
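On the "which image do the features belong to" part of the question above: the pairing is by row index, i.e. row i of the tabular array must describe image i. In Keras this is typically done with the functional API (a second `Input` plus `layers.Concatenate` after `Flatten`); the alignment idea in plain numpy:

```python
import numpy as np

# stand-in for the flattened CNN output, one row per image
conv_features = np.arange(12.0).reshape(4, 3)
# one tabular feature per image, in the SAME row order as the images
prices = np.array([[10.0], [12.0], [9.0], [15.0]])
merged = np.concatenate([conv_features, prices], axis=1)  # row i stays image i
print(merged.shape)  # (4, 4)
```

If you shuffle the images, you must shuffle the tabular rows with the same permutation, or the pairing breaks silently.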
Nice tutorial, sir. Can you create a chatbot using an ANN? I would like to know how you would test that. Thanks!
I’m from Taiwan. It’s really helpful
Glad it was helpful!
I love your lectures, sir! Thank you for your efforts and work. I have a question: how did you get ~90% accuracy with 10 epochs, while I hardly get 10% with 25+ epochs?
Facing the same problem.
All the way superb!!!! All videos.
Thank you a lot! You helped me with my project!
Glad it was helpful!
GREAT SIR
@codebasics why flatten again in the model when reshape() was already used to do it?
I basically need to give an AI images paired with numerical values, and then predict the numerical value from an image. Is this kind of model suitable? What do you suggest?
I have a question: How would you deal with multiclass image classification for an imbalanced dataset? For instance, there are 8 classes, with some having very few samples (around 200+), while others have more than 2000 samples. Additionally, positive and negative classes complement each other. For example, if there are 800 samples where Class A has 100 samples, then the 'not Class A' category has 700 samples. So, when you balance one class, others become imbalanced. How should one deal with such data? One example of such dataset could be Ocular Disease Intelligent Recognition (ODIR) in Kaggle.
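One common answer to the imbalance question above, which sidesteps the "balancing one class unbalances another" problem for a single multiclass head, is class weighting rather than resampling: each class contributes roughly equally to the loss. A sketch of the "balanced" heuristic (the same one sklearn's `compute_class_weight` uses) for the two-class example given:

```python
import numpy as np

counts = np.array([100, 700])  # class A vs. not-A from the 800-sample example
# 'balanced' heuristic: n_samples / (n_classes * count_per_class)
weights = counts.sum() / (len(counts) * counts)
print(dict(enumerate(weights)))  # rare class up-weighted, common class down-weighted
# In Keras you would then pass: model.fit(..., class_weight={0: 4.0, 1: 0.571})
```

The same formula extends to 8 classes directly; each of the 8 gets its own weight from its own count.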
You are the best by far
I am happy this was helpful to you.
@codebasics please make a video on the implementation of transfer learning models like Inception v3, VGG16, and ResNet152 for image classification.
Please, I am waiting for your response.
And can you make one more video on feature concatenation of all the models and an ensemble method over all of them?
Hello, can you please give guidance on using k-NN, MLP, CNN, decision trees, k-means clustering, and regression to solve this CIFAR-10 dataset problem, and compare the accuracies of each of the methodologies used?
Thank you very much sir for this 😊
I have one doubt: since we are working with colored images here, we have 3 RGB channels. Do we need different filters for each channel, or will there be only 1 filter?
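To the doubt above: you don't pick per-channel filters yourself. Each Conv2D filter automatically spans all input channels, so one "filter" on an RGB input is really a 3×3×3 block of weights. The arithmetic, which you can check against `model.summary()`:

```python
# Each Conv2D filter has kernel shape (height, width, in_channels).
# For Conv2D(32, (3, 3)) on an RGB image:
h, w, in_ch, n_filters = 3, 3, 3, 32
weights_per_filter = h * w * in_ch                    # 27 weights per filter
total_params = n_filters * (weights_per_filter + 1)   # +1 bias per filter
print(total_params)  # 896, matching the Keras summary for this layer
```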
Hello Sir, as per your suggestion I redid the MNIST handwritten digit recognition using a CNN, but didn't achieve a better result than with the ANN.
ANN accuracy after 10 epochs is around 99.38%:
Epoch 10/10
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0198 - accuracy: 0.9938
While CNN accuracy after 10 epochs is 93.94%:
Epoch 10/10
1875/1875 [==============================] - 13s 7ms/step - loss: 0.1838 - accuracy: 0.9394
What conclusions can we draw from this experiment?
How can you get more accuracy? I have messed around with the hyperparameters a lot, but I can't seem to find something that gets me a good accuracy (above 80-85%).
Should we download the whole GitHub repo, or just the notebook? Please reply.
How do I split the image data into training and testing folders?
Sir... just one request... can you make a video of a project done right from downloading the data, saving it on the device, uploading it to the notebook, preprocessing the data, and training a model? It would be very helpful... 😊
Dear, please try to explore it yourself. Just type your questions into Google: how to download data for network training, then how to save it to the device, then uploading it to the notebook, then the preprocessing. You will learn a lot, believe me. If the problem still persists, let me know; I will help you.
@@khanwali9672 thank you 😊
Very nicely explained, brother. Loved the teaching style and followed the explanation.
😊😊👍
Thank you so much, sir, for continuing this series. Amazing content, superb, nice explanation.
You're most welcome sathiya
great sir thank you
Thank you for the awesome tutorial. I have one question: is there a way I could give a path to a folder and have the model classify the images in it?
Yes, you can use the TensorFlow dataset pipeline for that. Watch the TF data pipeline tutorial in this same playlist.
@@codebasics Thank You, I'll definitely watch it.
Hi, beautiful video! I have some special black-and-white images to classify. I have two questions:
1. Do you think it is better to colorize them in order to improve the prediction?
2. If yes to the first question, what is a suitable technique to add colors?
Thanks a lot.
Not necessary, as long as your test or prediction images are also B&W.
Sir, I have a problem: when I run the same code as yours on my computer, it takes more time to compute. Can you help me, please?
Awesome, I really like the face-to-face introduction.
Glad you like it
My prof does not teach anything about coding but gives a bunch of homework :( But this was healing!
There are a lot of great ML and AI scientists in the world, but Dhaval Patel is the best at bringing ML skills to ordinary people.
Thank you so much for this great tutorial; it is really helpful. I have a question: you used 'sparse categorical crossentropy', which is supposed to work with class numbers, but the output y_pred is an array of per-class probabilities, and to get the predicted class we used the argmax function to get the index of the maximum value?
Hello Bassem, sparse categorical cross entropy is the loss function to use when the actual output y in the dataset is not in one-hot encoded format. Sir used softmax as the final activation function in the code shown in the video; it is because of this function that the final output, y_pred, is an array of per-class probabilities. Hence, to get the index of the maximum probability value, which is the output class predicted by the CNN model, sir used the np.argmax function.
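The argmax step from the reply above, in isolation:

```python
import numpy as np

y_pred = np.array([[0.1, 0.7, 0.2],   # softmax output, one row per image
                   [0.8, 0.1, 0.1]])
classes = np.argmax(y_pred, axis=1)   # index of the highest probability per row
print(classes)  # [1 0]
```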
Really, sir, I like your way of teaching.
Thanks and welcome
Great video. I was hoping you'd visualize the CNN kernels so we could see what they look like. You specified 32 of them. Does this mean that all 32 are applied to every image, and thus are meaningful in every case? That is, you won't have one that has a koala's eyes just because the input images also include, say, rocks, buildings, and GPU cards?
The ANN model refuses to run, saying keras has no attribute 'sequential'.
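This error is usually just capitalization: the class is `keras.Sequential` with a capital S (Python attributes are case sensitive). A minimal check, assuming a standard TensorFlow install:

```python
# "module 'keras' has no attribute 'sequential'" -> use keras.Sequential
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(10, activation="softmax"),
])
print(type(model).__name__)  # Sequential
```

If the capitalization is already correct, check that the installed tensorflow and keras versions actually match.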