I can't believe you are so underrated; your channel is filled with so much knowledge!
I watched most of your videos from 1 to 71 and I still want to continue. This is great, I applied it too. Thanks a lot.
Keep it up
One of the most amazing tutors I have ever met. 🙂😍🧑💻✌
Thanks a lot for all your videos, Sreeni! I have been using python for other purposes for the last one year or so and your videos are the perfect gateways into the world of image segmentation using python.
Thanks for your kind feedback.
You are a good teacher. Always rooting for you!
I appreciate that!
Amazing!! I was struggling to understand neural networks, but this tutorial made them very easy to understand!! Thank you, Sreeni.
Hi, thanks for your very informative videos. Please suggest which DL model/video I should follow for crop detection and identification in the field.
Very clearly described.
Amazing video. I would love it if you could do a video on EfficientNet and MobileNetV2 implementations, like this clear video on CNN classification.
Amazing work!
I don't understand why you use "2" in "out = keras.layers.Dense(2, activation='sigmoid')(drop4)"
instead of "1". This is in contrast with tutorial #144.
Really awesome and clean tutorials! ❤️❤️❤️🔥
Can you please tell me how to predict the class of a single image?
Thank you for your efforts for this video.
My pleasure!
Thanks for your amazing tutorial videos.
THANK YOU so much, sir. I am doing my MSc thesis on disease detection and stage classification, and I am watching your deep learning videos. Can you recommend the best algorithm for detection and classification (either traditional or deep learning)? Thank you again, sir.
Can we predict on images and compare actual vs. predicted values (i.e., using x_test and y_test as a test generator)?
Sir, can you make a video in which you apply this model to a sample image for prediction, please?
Thanks for the video. That helps a lot.
Thanks very much, it is very helpful.
Sir, can you tell us how to load an image and check whether it is infected by malaria or not? Please.
Thanks just great !!!!
Hello Sreeni, nice presentation. I have a question. I have a dataset of 3000 images (description of the dataset: around 1800 are distinct, and the rest are prepared by zooming and rotating). Is there a way of removing the augmented images from my test dataset? Generally, the test data should not contain augmented data. Thank you.
Hi Sreeni,
1. On Line 84, why do you convert only X_train to np.array() and NOT y_train?
2. The Pixel Data ranges from 0 - 255. Don't we need to normalize the values by dividing it by 255?
1. Please run one line at a time and see the parameter type in the variable explorer; you'll find the answer. In summary, y has already been converted to a numpy array during the train and test data separation step (using to_categorical). Therefore, we convert only X to a numpy array, as it is a list and not an array.
2. Normalization is not a necessity, as I am using relu activation, which can deal with real-valued numbers. Also, I am using batch normalization in the hidden layers of my model, so all values will be normalized before going to the next layer. Maybe I should record a video on parameters and normalization. Thanks for asking questions.
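For anyone following along, here is a minimal sketch of the steps referred to above. The variable names dataset and label are assumed to match the tutorial script; treat this as an illustration rather than the exact code.
import numpy as np
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split
# dataset is a Python list of resized images; label is a list of 0/1 class labels
X_train, X_test, y_train, y_test = train_test_split(dataset, to_categorical(np.array(label)), test_size=0.20, random_state=0)
# y_train/y_test are already numpy arrays (via to_categorical), so only X needs converting
X_train = np.array(X_train)
X_test = np.array(X_test)
# Optional: scale pixels from 0-255 to 0-1. Not strictly needed here (relu + batch norm), but harmless.
# X_train, X_test = X_train / 255.0, X_test / 255.0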
Hello, why do you use an activation function in the Conv2D layer?
1) In line 48, why did you write 2 inside the Dense layer? Shouldn't it be one (1), since it is binary classification?
2) How did it work without expanding the dimensions of the dataset, for example with np.expand_dims(tdataset)?
Hello Dr. Sreeni, your videos are very useful. Can we add the Gabor filter that you described before into the CNN model?
Amazing, thank you so much.
You're welcome 😊
I thought it was a conda package and not pip!! I have installed it with conda; is it the same thing?
Really enjoy watching your videos. Applaud the thorough job you are doing in putting this content together. Hard to find such informative and detailed videos. I downloaded the cell_images dataset from Kaggle, but when I try to unzip the folder onto my machine, it gives me an error and the WinZip program shuts down. Not facing this with any other files. Wondering what the problem could be. These files are not password protected, are they?
As far as I remember, there are no tricks with the dataset, just a regular compressed folder. Maybe your WinZip has issues. I use WinRAR for these types of tasks; please give that a try.
@@DigitalSreeni Thank you, will give that a try.
Please suggest how I can segment infected red blood cells from the images that you used in this tutorial.
This tutorial explains classification. If you want segmentation you may want to look at U-net for semantic segmentation.
@@DigitalSreeni I want to segment only the infected red blood cells from malaria blood smear images. Please suggest: will the semantic U-Net technique work, or do I need to apply another technique?
@@shankaraggarwal4234 Start watching from video 71/72; a very good example of cell segmentation using U-Net is given there.
Sir, can you upload a video on performing multiclass classification of images in different folders using a CNN?
Yes sir, please upload a video on multiclass classification of images.
Thank you so much for your fantastic tutorial videos. My question for this particular one is that you train your model for only two classes (parasitized_images, uninfected_images); what if we have four or five categories? I am still struggling to modify this code for several classes.
Please check out my videos on the topic of multiclass classification.
@@DigitalSreeni Thank you!
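For readers with more than two categories, a rough sketch of the changes usually involved (num_classes and the integer label array y_int are placeholders, and drop4 is the layer name from the video):
from keras.utils import to_categorical
num_classes = 5  # e.g. five categories instead of two
y_cat = to_categorical(y_int, num_classes=num_classes)  # integer labels 0..num_classes-1, one-hot encoded
out = keras.layers.Dense(num_classes, activation='softmax')(drop4)  # one output unit per class
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])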
Thank you so much, sir, for the amazing videos. Can you please do some tutorials on 3D deep learning too?
Hello, I tried performing this and it's giving me a (None, 0) / (None, 1) incompatible shapes error. Any idea why?
Hi Sreeni sir, I love learning from you. Thanks for this awesome tutorial. Sir, can you please make a tutorial on the classification of HeLa cells or cell cultures in Python? Looking forward to your reply.
Classification of HeLa cells into what? Do you want to classify entire cells, or organelles within cells?
@@DigitalSreeni Entire cells: classify images of HeLa cells as dead or live.
Hi, you wrote Image.fromarray(image, 'RGB') in your code. Shouldn't it be 'BGR', since cv2 reads in BGR by default, as you mentioned in the previous video? Thank you.
Sir, please do kidney stone detection using the Xception model. I can link the dataset if you want me to.
big like
Sir, why didn't you apply image segmentation techniques before training the model for classification? Does image segmentation help improve accuracy?
I am not sure I understand your point. If you are referring to segmenting images, then I should mention that these types of images can be challenging to segment. They also come in various shades of color, which means you first have to find a way to normalize the images and then segment them. Also, segment them for what? And who is going to label them manually? In summary, classification problems do not need the complexity of pixel segmentation.
Hello sir, can you please tell me how to run a prediction on a single image?
please help sir.
Sir, this code works perfectly on Spyder. But when I try to execute the same code on Google Colab, it shows the error "ValueError: Shapes (None, 1) and (None, 2) are incompatible". After that, I changed the Dense units to 1, but now its epoch results are very poor, like this: loss: 0.0000e+00 - accuracy: 0.4980 - val_loss: 0.0000e+00 - val_accuracy: 0.5600. Please help me remove this error and get good results on Google Colab.
Have you fixed this? I'm getting the same first error
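For anyone else hitting this error: it usually means the labels and the final Dense layer disagree, so the fix is to keep them consistent rather than changing the Dense units alone. A rough sketch of the two consistent setups, reusing the layer names from the video (an illustration, not the exact tutorial code):
# Option A: two output units with one-hot labels, e.g. y_train from to_categorical()
out = keras.layers.Dense(2, activation='sigmoid')(drop4)  # as in the video; softmax is the more common choice here
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Option B: a single sigmoid unit with plain 0/1 integer labels
out = keras.layers.Dense(1, activation='sigmoid')(drop4)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Mixing one option's layer with the other option's labels or loss typically produces
# the incompatible-shapes error, or the flat zero loss reported above.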
Hello, I tried this on Spyder and it works fine, but when I use Google Colab it does not work out right. For example, when appending to dataset[] there are more items than there are parasitized images, and there are also more appended labels, just 0s; it cannot append the 1s. What could be the issue?
It works fine for me. Please watch my video 147, where I just copied the Spyder code and executed it on Colab without changing a thing. It works with CPU and GPU. Please check your code.
Great
Hi Dr. Sreeni, I am confused about two places: the first line of the image resizing and the first line of the model.
The images you downloaded are saved in the 'cell_images' folder (3:21), so why is the image_directory name 'cell_images2/'?
Then, once the resizing is done, the images are moved to the dataset folder. How do they go into the model? As far as I can see, the first line of the model is INPUT_SHAPE = (SIZE, SIZE, 3), and there is nothing from the dataset folder.
The entire dataset has about 13K images for parasitized and another 13K for uninfected. My system does not have the type of resources required to process all images. Therefore, I took a subset of those images (about 500 each) for the tutorial. I dumped them in a folder called cell_images2. Sorry for any confusion.
After resizing, the images are not saved locally; the information is captured in an array called 'dataset'. This dataset is divided into training and testing sets for training the network. Of course, the train part of the data was used for training. Based on the code, it appears that I did not use the test part for validation; instead, I further split the training data into 90% for training and 10% for validation.
Please go through the code carefully as it forms the basis for most deep learning approaches.
@@DigitalSreeni Thanks a lot. I thought the images would be saved locally; that's why I was confused. Now it's clear.
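To illustrate the 90/10 split described above, a minimal sketch of the fit call (the batch size and epoch count are assumptions; X_train and y_train are the training arrays from the train/test split, and numpy is assumed imported as np):
history = model.fit(np.array(X_train), y_train,
                    batch_size=64,
                    epochs=25,
                    validation_split=0.1)  # Keras sets aside 10% of the training data for validation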
Whenever I run my neural network I get a different result, but I need the same result every time I run the code. How can I do that?
Neural networks are stochastic; they use randomness as part of their algorithms. This means you will get slightly different results each time, but on average your results should be close enough if the training converges. You can try to minimize the randomness by fixing the random number seed. All random number generators use a seed; if you fix this seed you'll generate the same 'randomness' each time. I suggest placing these 4 lines at the top of your code.
from numpy.random import seed
seed(20) #Can be any number but use the same number to repeat experiments.
from tensorflow import set_random_seed
set_random_seed(42) #Can be any number but use the same number to repeat experiments.
Remember that this only controls part of your randomness but there are other sources for randomness.
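A small side note: from tensorflow import set_random_seed is the TensorFlow 1.x API. If you are on TensorFlow 2.x, the equivalent call is:
import tensorflow as tf
tf.random.set_seed(42)  # TF 2.x replacement for set_random_seed(42); use the same number to repeat experiments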
Hello sir, can you do a segmentation tutorial for MRI images, for example for lung cancer detection?
I recorded many videos on techniques that can help with lung cancer detection. But I understand that you are asking for a video on the application rather than the technique. I will see if I can record a series of videos on various applications. Thanks for the suggestion.
By the way, at work we already have prepackaged binary classification deep learning modules for training and prediction. These can be used for lung cancer detection.
www.apeer.com/app/modules?page=1&q=DL%20Binary
(It is free so please sign up and explore)
Sir, please tell me how I can predict on a single image after training.
To predict, you need to load the model and apply it to your other images, very similar to how you apply it to test images. My video number 131 gives information on how to load a model.
Sir, how do I load an image and check the prediction?
Just use model.predict(img).
Of course, you need to preprocess your images just the way you did your training data. Please watch my recent videos (Videos 128 and later) for more information. Here is some sample code to predict for 2 images.
# For single image prediction
from keras.models import load_model
from keras.preprocessing.image import img_to_array, load_img
# load the trained model
model = load_model('malaria_augmented_model.h5')
# load and resize one image of each class
img1 = load_img('cell_validation/Parasitized/your_image1.png', target_size=(150, 150))
img2 = load_img('cell_validation/Uninfected/your_image2.png', target_size=(150, 150))
x1 = img_to_array(img1)  # this is a numpy array with shape (150, 150, 3)
x2 = img_to_array(img2)
x1 = x1.reshape((1,) + x1.shape)  # add a batch dimension: (1, 150, 150, 3)
x2 = x2.reshape((1,) + x2.shape)
# predict
X1 = model.predict(x1)
X2 = model.predict(x2)
print("Prediction for parasitized is: ", X1, " where 0 indicates parasitized and 1 indicates uninfected")
print("Prediction for uninfected is: ", X2, " where 0 indicates parasitized and 1 indicates uninfected")
#Parasitized, value 0
#Uninfected, value 1
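One small follow-up on reading the output above (a sketch, assuming numpy is imported as np): a two-unit final layer returns two scores per image, while a single sigmoid unit returns one probability.
import numpy as np
predicted_class = np.argmax(X1, axis=1)[0]  # for a two-unit output layer: index of the highest score
# predicted_class = int(X1[0][0] > 0.5)     # for a single sigmoid unit: threshold the probability at 0.5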
Hi, I get a "cannot import name 'Keras'" error on Google Colab. Any idea how I can solve this? Thanks.
Just checked, it works fine. Please check the spelling; keras is imported with a lowercase k.
@@DigitalSreeni Thanks, it worked. Do you know how to perform augmentation on a csv file type?
@@reegee8321 Hello, I tried this on Spyder and it works fine, but when I use Google Colab it does not work out right. For example, when appending to dataset[] there are more items than there are parasitized images, and there are also more appended labels, just 0s; it cannot append the 1s. What could be the issue?
Sir, I'm getting a "the system cannot find the path specified: 'cell_images/Paraitized'" error, sir.
And I am stuck on it; can you help me?
Obviously, it appears that the path name does not exist. So please check your current working directory and make sure the path you are defining is correct.
@@DigitalSreeni I got the answer, sir, thank you ☺
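For anyone hitting the same path error, a quick way to sanity-check it (a small sketch; 'cell_images/Parasitized' is assumed to be the intended folder name):
import os
print(os.getcwd())                                # the directory Python is actually running from
print(os.path.exists('cell_images/Parasitized'))  # should print True if the path is correct
print(os.listdir('cell_images'))                  # lists the subfolders actually present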
May the God of Might bless you!
Not really a big believer in God but I appreciate your thoughts!
I am unable to download the dataset from the website
See if you can download from here: www.kaggle.com/iarunava/cell-images-for-detecting-malaria
@@DigitalSreeni Yes, I was able to. Thanks a lot!
Please allow us to watch video 72, sir.
It has wrong information, so I had to hide it. It will not be useful for you.