Great tutorial. Thanks for the effort you put into providing such lessons in a simplified and understandable manner
This channel is highly underrated. What amazing content!
Excellent videos. Thank you, and keep uploading new videos.
Great educational videos! Keep up your great contribution to people!
Thank you!
Dear Sir,
When I use a custom dataset to create synthetic images with a GAN, I get the following warning after making the changes below to your code, and training then does not start.
1. Are the changes I made correct?
2. Can I ask your opinion about the error?
Would you consider posting a video where a custom-size dataset is used with a GAN?
def define_discriminator(in_shape=(512,512,3)):
....
n_nodes = 512 * 8 * 8 # 32768 nodes
model.add(Dense(n_nodes, input_dim=latent_dim)) # Dense layer so we can work with a 1D latent vector
model.add(LeakyReLU(alpha=0.2))
model.add(Reshape((8, 8, 512))) # 8x8x512 feature map from the latent vector
# upsample 8x8 -> 16x16 -> 32x32 -> 64x64 -> 128x128
model.add(Conv2DTranspose(512, (4,4), strides=(2,2), padding='same')) # 16x16x512
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(512, (4,4), strides=(2,2), padding='same')) # 32x32x512
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(512, (4,4), strides=(2,2), padding='same')) # 64x64x512
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(512, (4,4), strides=(2,2), padding='same')) # 128x128x512
model.add(LeakyReLU(alpha=0.2))
# upsample 128x128 -> 256x256 -> 512x512
model.add(Conv2DTranspose(512, (4,4), strides=(2,2), padding='same')) # 256x256x512
model.add(LeakyReLU(alpha=0.2))
model.add(Conv2DTranspose(512, (4,4), strides=(2,2), padding='same')) # 512x512x512
model.add(LeakyReLU(alpha=0.2))
......
train(generator, discriminator, gan_model, dataset, latent_dim, n_epochs=200)
WARNING:TensorFlow: Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
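Not the video's code — just for context, a self-contained sketch of a complete generator that maps a latent vector to a 512x512x3 image in the same Keras Sequential style (the tapering filter counts and the function name define_generator_512 are illustrative assumptions, not something from the tutorial):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape, Conv2D, Conv2DTranspose, LeakyReLU

def define_generator_512(latent_dim=100):
    # Illustrative sketch only: project and reshape the latent vector to an
    # 8x8x512 seed, then upsample six times (8 -> 16 -> 32 -> 64 -> 128 -> 256 -> 512).
    model = Sequential()
    model.add(Dense(512 * 8 * 8, input_dim=latent_dim))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Reshape((8, 8, 512)))
    for n_filters in (512, 512, 256, 256, 128, 64):  # assumed widths, tapered to save memory
        model.add(Conv2DTranspose(n_filters, (4, 4), strides=(2, 2), padding='same'))
        model.add(LeakyReLU(alpha=0.2))
    model.add(Conv2D(3, (3, 3), activation='tanh', padding='same'))  # 512x512x3 output in [-1, 1]
    return model

The discriminator's in_shape=(512,512,3) and the training images must match this output size. The compile_metrics message by itself is usually just an informational warning printed when a compiled model is loaded before being trained or evaluated.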
Sir, could you please post tutorials on object detection algorithms? I would love to know how the data preprocessing should be done for those.
I'm getting this error when running the exact same code either with 2 or 250 Epochs.
“W tensorflow/core/data/root_dataset.cc:266] Optimization loop failed: CANCELLED: Operation was cancelled”
I’m using Python 3.9 | Tensorflow 2.10.0 | Tensorflow-gpu 2.10.0 | keras 2.10.0 | Windows 10
Any idea?
@DigitalSreeni Excellent. I have two questions though:
1. If you are selecting (batch_size or half batch_size) samples randomly from the dataset in each epoch, how do you make sure that training uses all the available images in the dataset during training?
2. What is the point of using half the batch size? Why not just train the discriminator with the same number (e.g. batch_size) of real and fake images?
Thank you for the detailed tutorial as always.
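For context, here is a rough sketch (not the video's exact code; d_model, g_model, and dataset are assumed to be the compiled discriminator, the generator, and a NumPy array of scaled images) of why half batches are used: each discriminator update sees half a batch of real and half a batch of fake images, so one step still amounts to one full, 50/50-balanced batch.

import numpy as np

def train_discriminator_step(d_model, g_model, dataset, latent_dim, n_batch=128):
    # Sketch of a single discriminator update: half real images (label 1)
    # plus half generated images (label 0) = one full batch per step.
    half_batch = n_batch // 2
    ix = np.random.randint(0, dataset.shape[0], half_batch)      # random real samples
    X_real, y_real = dataset[ix], np.ones((half_batch, 1))
    z = np.random.randn(half_batch, latent_dim)                  # random latent vectors
    X_fake, y_fake = g_model.predict(z), np.zeros((half_batch, 1))
    d_loss_real = d_model.train_on_batch(X_real, y_real)
    d_loss_fake = d_model.train_on_batch(X_fake, y_fake)
    return d_loss_real, d_loss_fake

On question 1: because the indices above are drawn randomly each step, there is no guarantee that every image is used in a given epoch; if full coverage matters, you would shuffle the dataset and iterate over it in order instead.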
Great Video sir...Thanks a lot
Thank you for the amazing explanation 🌸
I want to ask why you used tanh as the activation function? In the end, we have to scale it to be between 0 and 1.
I am trying to save the generator and discriminator models after training, then load them and train for some more epochs, but the GAN loss seems to go to zero. How can I retrain the GAN?
I'm having the exact same problem, did you solve it?
I'm really trying my best to solve the problem, but I just couldn't figure out how to incrementally train the generator, discriminator, and GAN.
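One common pattern for resuming (a sketch, not from the video; define_gan is assumed to be the tutorial's helper that stacks the generator on the frozen discriminator): save both models, reload them, rebuild the combined GAN, and call train again.

from tensorflow.keras.models import load_model

# after the first training run: save BOTH models, not just the generator
g_model.save('generator.h5')
d_model.save('discriminator.h5')

# later: reload, rebuild the stacked GAN, and continue training
g_model = load_model('generator.h5')      # may print the harmless compile_metrics warning
d_model = load_model('discriminator.h5')
gan_model = define_gan(g_model, d_model)  # assumed helper from the tutorial
train(g_model, d_model, gan_model, dataset, latent_dim, n_epochs=50)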
Great work. Could you please tell me why you chose 100 for latent_dim, or what the criterion is for choosing 100?
It's fairly arbitrary and you can take any length; there is no specific reason, it's just that most researchers use 100.
Just as Mr. Gulab Patel mentioned, you can choose a vector of any dimension. 100 is used because this was the dimension that many papers used. Do you really want another hyperparameter for your deep learning model :)?
Why is the latent vector always 100? And how does it converge to a 32 by 32 image?
Thanks for the great tutorials. I wonder if you could do a custom training loop for CGANs using GradientTape in TensorFlow?
What is the minimum number of images needed in a dataset?
Sir, could you please make a video on DCGANs for medical imaging augmentation?
Thanks a lot for the awesome video! So if you use sigmoid instead of tanh, must the scaling be min-max scaling to [0, 1]?
tanh goes from -1 to 1, so you need any scaling that can scale values between -1 and 1.
@@DigitalSreeni Thanks a lot!
Technically, YES! But the authors recommend using LeakyReLU and tanh activation functions.
@@nitinbommi1867 Thanks buddy!
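A tiny sketch of the scaling point above (the array X here is just a stand-in for your images in [0, 255]): match the pixel range to whatever activation the generator's output layer uses.

import numpy as np

X = np.random.randint(0, 256, (4, 32, 32, 3)).astype('float32')  # stand-in for real images in [0, 255]
X_tanh = (X - 127.5) / 127.5   # scale to [-1, 1] to match a tanh output layer
X_sigmoid = X / 255.0          # scale to [0, 1] to match a sigmoid output layer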
Great explanation as always. Thank you. Just some questions: 1- How can we evaluate the quality of generated images? 2- Can we plot the generated images one by one instead of in a grid? 3- Regarding feeding our own images, what if there is only one class (a folder of images)? Which part of the code should be changed?
Ans 1: You can check visually, or you can do a similarity check (real images vs. corresponding generated images), but a vanilla GAN generates randomly, so don't expect a one-to-one mapping; the random vectors (after going through g_model) can become any images (similar to the real ones).
2. Yes, you can do it; take a single image instead of 25 together.
3. It's not a classification problem, so the class doesn't matter; just save your images in a folder and feed them into training. Help on reading images and feeding them into GANs is available on Stack Overflow.
Mr. Gulab Patel seems to have answered your questions very well. Thank you man!
@@DigitalSreeni Yes, Thank you.
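On question 2, a short sketch (the file name is the one mentioned elsewhere in this thread; rescaling assumes a tanh generator) for generating and plotting one image at a time instead of a 5x5 grid:

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model

g_model = load_model('cifar_generator_250epochs.h5')  # generator saved after training
latent_dim = 100
z = np.random.randn(1, latent_dim)        # one random latent vector
img = g_model.predict(z)[0]               # single generated image in [-1, 1]
plt.imshow((img + 1) / 2.0)               # rescale to [0, 1] for display
plt.axis('off')
plt.show()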
Okay, so I was able to train my GAN network on custom image data. After training, when I loaded the model and generated images, it seems that it generates the same images all the time, and the images also look pretty random, containing random patterns. Can anyone tell me the reason or help me with it?
Have you solved your problem?
@@KashifShaheed Yes, I used a different approach, i.e. GradientTape with more epochs. In the end I understood that the images are generated randomly, but this randomness depends on the last epoch of training the GAN.
Can we do image classification for 128 x 128 images?
Thanks!
Nice tutorial. Could you please let me know if you are planning to discuss some topics about forecasting using charts? It would be useful and interesting :).
Not yet on my list. Any specific applications you are looking for?
How can I load my custom dataset? I'm using ImageDataGenerator.
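A minimal sketch (assuming a folder layout like my_images/<one subfolder>/*.jpg and a tanh generator, so pixels are rescaled to [-1, 1]; the folder name is hypothetical) of loading a custom dataset with ImageDataGenerator for a GAN:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(preprocessing_function=lambda x: (x - 127.5) / 127.5)
it = datagen.flow_from_directory('my_images/',      # parent folder containing one subfolder of images
                                 target_size=(32, 32),
                                 class_mode=None,    # no labels needed for a plain GAN
                                 batch_size=64,
                                 shuffle=True)
X_batch = next(it)                                   # one batch of images scaled to [-1, 1]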
Thank you so much Sir you're amazing. #pakistan
Shukriya (thank you) :)
@@DigitalSreeni Dear Sir, do you speak Urdu? 🥰🤣
Thank you!
Thank you too!
Thanks a lot
Thanks
Thank you very much. Please keep learning!!!
Thank you for the great tutorials, I have learned a lot from your videos.
Where can I get the trained model with 250 epochs, "cifar_generator_250epochs.h5", just for testing?
Did you figure out where to get the model?