For new viewers following this tutorial: the accuracy metric calculation has changed in newer versions of TensorFlow. If you're getting low accuracy scores, you can simply change metrics = ['accuracy'] to metrics = ['binary_accuracy']. You will get close to the same values as in this tutorial, and training the model will also be faster. :)
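The change is a one-line edit to the compile step. A minimal sketch, assuming the optimizer and loss match what the video uses for this multi-label model:

model.compile(optimizer='adam',
              loss='binary_crossentropy',   # assumed from the tutorial's multi-label setup
              metrics=['binary_accuracy'])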
HERO
TY
Great video, Sir.... Almost all of my doubts are cleared.. keep making such videos...
Thank You..!!!
Thank you so much for watching.
Very good video and amazing explanation. Thanks a lot, Sir. Please keep making more videos on deep learning, object detection and object tracking.
Excellent video tutorial on multi label classification. Love it :)
Thank you so much for watching it. Please keep watching and happy Learning. Let us know if you need a lesson on any other topic.
@@KGPTalkie It's nice to watch this video.
I wanted to know how I can predict the rating of a movie from its poster image.
If you can extend the same video with rating prediction, it will be very helpful.
Great video, Sir. I haven't seen anyone explain hands-on practicals with such clarity before. Really worth the watch. Can you explain in some video how exactly convolutional layers are able to extract the necessary details from images? I saw so many convolutional layers lined up as hidden layers, but how does such stacking of layers increase accuracy, especially in semantic segmentation?
Very clear and easy to understand. Many thanks.
Thank you ❤️ 😍
Sir, can you do a tutorial on deploying this model in a web app using Flask?
I've got the RAM usage issue; the high-RAM option is not showing. What should I do?
great work sir
Thank you ❤️
great tutorial. thank you for the good job.
Excellent sir ❤️❤️❤️
❤️ 😍
Beautiful sir... I have been using allot of pretrained model lately. Would love to get my hands dirty on building my own model and see how well it works
Very helpful tutorial, thank you. Can you please do a multi-class classification tutorial as well?
I think using softmax as the output activation function solves most multi-class problems.
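To illustrate that point, a small sketch of a single-label, multi-class output head (num_classes stands for however many classes you have):

from tensorflow.keras.layers import Dense

# One unit per class, softmax so the outputs form a probability distribution,
# and categorical cross-entropy as the matching loss (use
# sparse_categorical_crossentropy if the labels are plain integers).
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])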
There is a problem with increasing the RAM to 25 GB: once the RAM crashes in Colab, I am not able to continue. What could be the cause of this? Help me out.
Your way of teaching is excellent, sir; I have watched almost all the videos on your channel. I have a problem, sir: can you tell me how to make a CSV file from the original data, like the CSV file in this tutorial? It would be a great help, as your GitHub repository is showing as invalid now. Thank you, waiting for your reply, sir.
Sir, if possible please share the CSV file on your GitHub or any other platform; it would be a great help.
The default way of calculating accuracy has changed, so do not expect the same values without changing the metric used in your code if you are on a current version of TF + Keras. (The old one gives far too much weight to 0 values, which are the most common in the true labels.)
Hello, what can we do instead?
@@Rushield3981cc I know you commented this a year ago, but just in case you're still wondering (and for future viewers who are getting the same problem of low accuracy): you need to change metrics from 'accuracy' to 'binary_accuracy' to get the same values shown in the video.
Hi,
Thanks for the tutorial. I am getting 'nan' as the validation loss; any idea what it means or what the error is?
Is this still working? I had to purchase Colab Pro for increased RAM, and it keeps crashing during the training phase. I've reviewed the code, and it all matches. Any idea why this is happening?
It needs lots of RAM. Once your notebook crashes, it will ask you to increase your RAM. You need to accept with Yes and the RAM will be increased. It will work.
Hello, your video is very helpful for multi-label image classification. I have a problem; could you help me with this type of issue? "Train on 11021 samples, validate on 1946 samples
11021/11021 [==============================] - 13s 1ms/sample - loss: nan - accuracy: 0.8918 - val_loss: nan - val_accuracy: 0.8912". The training and validation loss show "nan". How can I overcome this problem?
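A nan loss usually points to bad values in the data or an exploding update. A rough checklist, assuming X and y are the image array and label matrix built earlier in the notebook:

import numpy as np
from tensorflow.keras.optimizers import Adam

# Any NaN or inf in the inputs or labels will poison the loss immediately.
print(np.isnan(X).any(), np.isinf(X).any())
print(np.isnan(y).any())

# If the data is clean, a smaller learning rate often stops the loss from
# blowing up to nan.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='binary_crossentropy',
              metrics=['binary_accuracy'])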
Can you share more on multi-label classification using CNNs for image prediction?
Thanks for this video. Please correct the blog link for this video.
Thanks. Correction done.
@@KGPTalkie I love the way you explain; keep posting videos.
Google Colab is not giving any option to increase the RAM size for free, so is there any other alternative way possible?
Sir, your blog's link is not working; can you please take a look into it?
Thanks for the video. Please post the tutorial/link/code for how to construct a confusion matrix for this multi-label image classification. I've tried, but I couldn't find a way to build a confusion matrix for this dataset.
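There is no single confusion matrix for multi-label targets, but scikit-learn can produce one 2x2 matrix per label. A sketch, assuming X_test, y_test and the trained model from this tutorial:

from sklearn.metrics import multilabel_confusion_matrix

# Threshold the sigmoid outputs at 0.5 to get hard 0/1 predictions,
# then build one 2x2 confusion matrix per label.
y_pred = (model.predict(X_test) > 0.5).astype(int)
cms = multilabel_confusion_matrix(y_test, y_pred)
print(cms.shape)   # (number_of_labels, 2, 2)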
Hi sir :) At 23:57 in this video, in my case it just keeps restarting the runtime automatically instead of asking me to add RAM, which causes it to keep crashing and getting stuck at this step. Can you figure out how I can stop it from restarting automatically? Thanks in advance! By the way, your tutorial is so useful for me!
And it keeps asking me to view the runtime logs.
Hi,
Google has recently changed its policy; now they do not allow you to increase the RAM. There is a Pro version of Colab, but it is available only in the US. You might need to reduce your dataset and load it in chunks to reduce memory overhead.
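One way to load in chunks is a tf.data pipeline that reads images from disk batch by batch instead of holding the whole array in RAM. A sketch, where paths and labels are hypothetical lists of file paths and multi-hot vectors, and the resize target should match whatever the notebook uses:

import tensorflow as tf

def load_image(path, label):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, (224, 224)) / 255.0   # adjust to the tutorial's image size
    return img, label

ds = (tf.data.Dataset.from_tensor_slices((paths, labels))
        .map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(32)
        .prefetch(tf.data.AUTOTUNE))

model.fit(ds, epochs=5)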
@@KGPTalkie omg I see... Thank you for the reply!
@@KGPTalkie Sorry for disturbing you; can I have the code for creating the CSV file?
If I am given food data, then how will we test the model? In that case it's not efficient to predict just three or four categories. How can we approach the problem?
Hello brother. I want to say thank you for this video. Let me ask one question: as you show in this tutorial, there is a CSV file for the image dataset. If I want to use my own new dataset, how do I prepare the corresponding CSV file?
Thanks for watching. I think you have to do it manually. Mostly, such data is prepared manually by someone.
@@KGPTalkie thanks
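If the images are already sorted on disk into one folder per label, a small script can generate a tutorial-style CSV automatically. A rough sketch; the 'data/' layout and the 'Id' column name are assumptions:

import os
import pandas as pd

rows = []
for label in os.listdir('data'):                       # one sub-folder per label
    for fname in os.listdir(os.path.join('data', label)):
        rows.append({'Id': fname, label: 1})

# Images that appear under several label folders are merged into one
# multi-hot row; missing labels become 0.
df = pd.DataFrame(rows).groupby('Id', as_index=False).max().fillna(0)
df.to_csv('my_dataset.csv', index=False)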
Recall and precision are better metrics for this problem, especially when you have an unbalanced dataset like mine.
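Keras has both metrics built in, so they can be tracked during training. A small sketch of the compile call, with the loss assumed from the multi-label setup:

import tensorflow as tf

# Element-wise precision and recall over the multi-hot outputs.
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])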
Hello Laxmi, I have learned a lot from your YouTube videos. I am trying to build a similar multi-label classification, but my output layer is 2D instead of 1D (my input layer includes images similar to yours). Just wondering how to define the dense layer for a 2D output. Specifically, my output layer consists of the coordinates of 100 points in a discretized 2D domain (10, 10). Thank you for your impressive video. @KGP Talkie
Dense layers expect a flattened input, so no matter what the dimensions of your data are, you can flatten it before the Dense layer. Then use some trial and error to find the optimal performance.
@@KGPTalkie Thanks for your feedback. Let me put it this way:
my labels (y_train) are not a vector; they form a 3D array.
output_dimensions = y_train.shape = 1500x15x15, where 1500 is the number of inputs.
How do I implement that in the output layer?
I tried units=(15, 15) in the dense layer, but it didn't work.
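One common way to get a 2D output from a Dense head is to predict all 15x15 values as one flat vector and then reshape. A sketch, under the assumption that each grid cell is an independent target:

from tensorflow.keras.layers import Dense, Reshape

# Predict the 225 grid values with a Dense layer, then reshape to (15, 15)
# so the output matches y_train of shape (1500, 15, 15).
model.add(Dense(15 * 15, activation='sigmoid'))
model.add(Reshape((15, 15)))
model.compile(optimizer='adam', loss='binary_crossentropy')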
Hi, can you make a video on food classification? How can you extract the label of a classified image when working with a large dataset? I would like to classify food images and recommend recipes based on the identified label, but I don't understand how to do this. I would kindly ask for a tutorial on this matter; there is barely any information on food-related projects, please.
Why did you not use VGG16?
Can we use methods other than a CNN, such as adapted algorithms like KNN or Naive Bayes, for this specific dataset?
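In principle yes: scikit-learn's k-NN accepts multi-label targets directly, although classical models on raw pixels usually do much worse than a CNN. A rough sketch, assuming X_train, X_test and y_train are the arrays from this tutorial:

from sklearn.neighbors import KNeighborsClassifier

# Classical models need flat feature vectors, so the images are flattened first.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train.reshape(len(X_train), -1), y_train)
pred = knn.predict(X_test.reshape(len(X_test), -1))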
I'm getting the crash notice but I'm not being allotted free RAM; it's asking me to upgrade to Colab Pro.
Hi,
Google is no longer providing the free RAM upgrade. Reduce the batch size by half.
Hi, I copied your code exactly and my results are not the same as yours, my results are super wrong and I don't know what could have caused it. Do you have any clue?
Great channel by the way. Discovered it yesterday :)
Thanks for watching it. You need to share more info.
Experiencing the same. By any chance, have you already discovered why? :)
Same here. Both accuracy and val_accuracy are extremely low.
I have the same problem; do you have a solution?
The default accuracy calculation has changed; now it gives more realistic values. Previously, predicting 0 for every label would have looked "great", since most of the true values are 0 for every movie (if, say, a poster carries 3 of 25 possible genres, predicting all zeros already scores 22/25, roughly 88%, element-wise). You can get the "same" result by manually changing the metrics.
Please give a link to a multi-class classification tutorial. Thanks.
Sir, I am learning DL and I am impressed with your tutorial. How can I reach you in case I have a doubt about the code?
Thanks for watching ❤️. You can comment below your doubts.
@@KGPTalkie Thanks for your reply. I wrote code for multi-label image classification; it works well and gives the right predicted value when I use an image from data/horse/horse.1.jpg, but when I tried to predict with a horse image downloaded to my Downloads folder, it couldn't predict it. Why? What was the issue? Why couldn't it predict an image from a different location? Please help me.
Code:
from __future__ import generator_stop
import os, cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
from keras import backend as K
K.set_image_dim_ordering('th')
from keras.utils import np_utils
from tensorflow.keras.models import Sequential, load_model
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import plot_model
from keras.optimizers import SGD, RMSprop, adam

## ----------------------
PATH = os.getcwd()
# Define data path
data_path = PATH + '/data'
data_dir_list = os.listdir(data_path)

img_rows = 128
img_cols = 128
num_channel = 1
num_epoch = 20

# Define the number of classes
num_classes = 4

# Load every image, convert to grayscale and resize to 128x128
img_data_list = []
for dataset in data_dir_list:
    img_list = os.listdir(data_path + '/' + dataset)
    print('Loaded the images of dataset-' + '{}\n'.format(dataset))
    for img in img_list:
        input_img = cv2.imread(data_path + '/' + dataset + '/' + img)
        input_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2GRAY)
        input_img_resize = cv2.resize(input_img, (128, 128))
        img_data_list.append(input_img_resize)

img_data = np.array(img_data_list)
img_data = img_data.astype('float32')
img_data /= 255
print(img_data.shape)

# Add the channel axis according to the backend's dimension ordering
if num_channel == 1:
    if K.image_dim_ordering() == 'th':
        img_data = np.expand_dims(img_data, axis=1)
        print(img_data.shape)
    else:
        img_data = np.expand_dims(img_data, axis=4)
        print(img_data.shape)
else:
    if K.image_dim_ordering() == 'th':
        img_data = np.rollaxis(img_data, 3, 1)
        print(img_data.shape)

## -----------------------------------------------------------------------------------------------------------------
num_classes = 4
num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,), dtype='int64')
labels[0:202] = 0
labels[202:404] = 1
labels[404:606] = 2
labels[606:] = 3
names = ['cats', 'dogs', 'horses', 'humans']

# Convert class labels to one-hot encoding
Y = np_utils.to_categorical(labels, num_classes)

# Shuffle the dataset
x, y = shuffle(img_data, Y, random_state=2)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=2)
input_shape = img_data[0].shape

## -----------------------------------------------------------------------------------------------------------------
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
# model.add(Convolution2D(64, 3, 3))
# model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()

hist = model.fit(X_train, y_train, batch_size=16, epochs=num_epoch, verbose=1,
                 validation_data=(X_test, y_test))

########## TEST IMAGE #########################
test_image = cv2.imread('data1/dogs/dog.103.jpg')
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)
test_image = cv2.resize(test_image, (128, 128))
test_image = np.array(test_image)
test_image = test_image.astype('float32')
test_image /= 255
print(test_image.shape)

# Reshape the single test image to match the model's input shape
if num_channel == 1:
    if K.image_dim_ordering() == 'th':
        test_image = np.expand_dims(test_image, axis=0)
        test_image = np.expand_dims(test_image, axis=0)
        print(test_image.shape)
    else:
        test_image = np.expand_dims(test_image, axis=3)
        test_image = np.expand_dims(test_image, axis=0)
        print(test_image.shape)
else:
    if K.image_dim_ordering() == 'th':
        test_image = np.rollaxis(test_image, 2, 0)
        test_image = np.expand_dims(test_image, axis=0)
        print(test_image.shape)
    else:
        test_image = np.expand_dims(test_image, axis=0)
        print(test_image.shape)

# Predicting the test image
print(model.predict(test_image))
print(model.predict_classes(test_image))
Hello sir, if I want to predict multiple unseen images instead of one, then how do we do that?
Yes, you can do that.
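Stacking the preprocessed images into one batch lets model.predict score them all at once. A sketch, where preprocess is a hypothetical helper that resizes and normalises an image exactly like the training data, and the file names are placeholders:

import numpy as np

paths = ['img1.jpg', 'img2.jpg', 'img3.jpg']        # hypothetical file names
batch = np.stack([preprocess(p) for p in paths])    # shape: (3, height, width, channels)
probs = model.predict(batch)                        # one row of label probabilities per image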
Great video, but I didn't get an option to get more RAM :(
My accuracy with the same code is coming out at less than 40%. Can you just share the repo link?
Experiencing the same. By any chance, have you already discovered why? :)
@@dave.jammin Same here; do you know why?
Hello, please, I can't find the video about multi-class classification that you're talking about. I need multi-class, not multi-label, image classification on a neural network. Thank you in advance.
Hi, thanks for watching. This is a multi-class example:
TensorFlow 2.0 Tutorial for Beginners 15 - Malaria Parasite Detection Using CNN
@@KGPTalkie Hi, thanks a lot for the quick response, but isn't that considered binary classification? I work with 10 classes. Doesn't that affect the code? Do I need to make changes? Please help, I'm really struggling. Thanks a lot!
Yes, you need to make some changes. Please see the CIFAR tutorial for how to change the last layer.
Can somebody please clarify why two fully connected layers with 128 units each are added instead of one?
Hi,
Deep learning models need lots of experimentation. I found that two layers were performing better.
@@KGPTalkie Thank You
Laxmikant sir, I request you to make a video on training a model on more than 5 or 6 persons and then predicting multiple persons at the same time on a live camera feed using OpenCV. Please, if you can make a video on this, make it as soon as possible.
Waiting for your positive reply.
Hi thanks for watching ❤️. I will make one once I come back from my leave.
@@KGPTalkie Okay sir, I will be waiting for that video on face recognition for multiple faces on a live feed using a CNN model.
@@KGPTalkie You are the best, please keep contributing! Please start a series on OpenCV, a request from your fan!
What should I do if the folder containing images has both '.jpg' and '.jpeg' extensions,
and also some with '.png'?
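glob handles mixed extensions easily. A small sketch; 'images/' is a placeholder folder:

from glob import glob

files = []
for pattern in ('*.jpg', '*.jpeg', '*.png'):
    files.extend(glob('images/' + pattern))
print(len(files), 'images found')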
Sir, there is no option to increase the RAM; the session just crashes.
Then I can't say much about the Pro version, but in normal Colab the prompt used to come every time: initially the RAM size was 12 GB and later it got increased to 25 GB. Check your RAM size.
Hello sir,
Increasing the RAM on Google Colab is not working now.
Is there any other way to complete this tutorial?
Reduce the batch size and then train; otherwise, reduce the input image size.
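Halving the batch size is a one-argument change in model.fit. A sketch, assuming the X_train and y_train arrays from the tutorial (shrinking the resize target when loading the images saves even more memory):

# Smaller batches lower peak memory; halve again if the session still crashes.
history = model.fit(X_train, y_train,
                    batch_size=32,
                    epochs=5,
                    validation_data=(X_test, y_test))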
Whoa, my 1080 Ti's RAM is at its limit for this project; is that expected?
Yes. It is a huge dataset.
@KGP Talkie Please give me a solution....
Hello there, the video was very helpful. Thank you so much for making the entire video. It will be a great help if you can provide me with the link to your GitHub profile. Please respond to my comment and share the link if possible. It will be a great help, thank you so much