Finally! Something for TF users on this channel after such a long time.
Could you please make a video explaining the theory and math in detail behind each layer and function used in the video?
Yay, TF content is back!
Love you man
Maybe you can make a video on training time comparison between tf and torch
Man, you're on fire
Great content, keep up the good work!
I like your video
Hi, thanks for the nice, clean, and understandable tutorial. Can you provide the research paper related to this or a similar implementation?
Hi! First of all, great video and very clear explanation! I have a question: is it possible to modify the model to use 1024x1024 px images as inputs instead of the 64x64 in this example? How should I modify the layers to make it work?
Thanks in advance!
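For anyone wondering how that scaling might look: each strides=2 Conv2DTranspose doubles the spatial size, so reaching 1024x1024 is mostly a matter of stacking more upsampling blocks. A minimal sketch, assuming a Keras Sequential generator like the video's (the latent size and channel counts here are illustrative, and note that plain DCGANs tend to become unstable at that resolution, which is why the reply below points to StyleGAN2-ADA):

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 128  # assumed latent size, not necessarily the video's

def build_generator(target_size=1024):
    model = tf.keras.Sequential([
        layers.Input(shape=(latent_dim,)),
        layers.Dense(4 * 4 * 512),
        layers.Reshape((4, 4, 512)),
    ])
    size, channels = 4, 512
    while size < target_size:
        # each strides=2 transposed conv doubles height and width
        model.add(layers.Conv2DTranspose(channels, 4, strides=2,
                                         padding="same", activation="relu"))
        size *= 2
        channels = max(channels // 2, 32)  # taper channels as resolution grows
    # final layer maps to 3 color channels without changing the size
    model.add(layers.Conv2D(3, 5, padding="same", activation="sigmoid"))
    return model

print(build_generator().output_shape)  # (None, 1024, 1024, 3)
```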
@@saddamhussainelectricaleng910 Hi Saddam, I ended up using NVIDIA's StyleGAN2-ADA model (PyTorch version). It's pretty much the state-of-the-art standard for high-resolution, high-quality image generation. Check out their paper and repo on GitHub!
@@saddamhussainelectricaleng910 Hmm, I don't really know how to do that level of customization. You should try preprocessing all your images and using the standard models.
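A minimal sketch of that preprocessing idea, assuming a folder of JPEGs (the file pattern is illustrative): resize everything down to the 64x64 that the standard model from the video expects.

```python
import tensorflow as tf

def load_and_resize(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)  # force 3 color channels
    img = tf.image.resize(img, (64, 64))         # down to the model's input size
    return img / 255.0                           # scale pixels to [0, 1]

dataset = (tf.data.Dataset.list_files("images/*.jpg")  # illustrative path
           .map(load_and_resize, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))
```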
@@saddamhussainelectricaleng910 Of course: ua-cam.com/video/HgSfKfBMAaY/v-deo.html, that's an example of it.
How can we generate images of a specified shape? For instance, the celeb_a dataset images have shape (218, 178, 3). Is there a way to make the generator output the same shape?
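One common workaround, sketched below: strided transposed convolutions naturally produce power-of-two sizes, so you can generate at a convenient size and then resize or crop to (218, 178) afterwards. The 256x256 intermediate size and tensor shapes here are assumptions, not from the video.

```python
import tensorflow as tf

# pretend generator output at a power-of-two size (shape is an assumption)
fake = tf.random.uniform((8, 256, 256, 3))

resized = tf.image.resize(fake, (218, 178))                 # exact CelebA shape
cropped = tf.image.resize_with_crop_or_pad(fake, 218, 178)  # crop instead, avoids distortion
print(resized.shape)  # (8, 218, 178, 3)
```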
is there a dcgan for mobilnetv2?
Hey can you try to implement Visual Transformer (ViT)
Thank you very much!
Thanks for the video! I don't quite understand how the loss function produces the gradients that are backpropagated through the generator. I think I possibly don't quite understand GradientTape.
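A sketch of the generator update that may help, assuming Keras models named `generator` and `discriminator` and a binary cross-entropy loss, as in typical DCGAN code: everything inside the `with` block is recorded on the tape, including the pass through the discriminator, so `tape.gradient` can differentiate the loss with respect to the generator's variables.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def generator_train_step(generator, discriminator, g_optimizer,
                         batch_size, latent_dim):
    noise = tf.random.normal((batch_size, latent_dim))
    with tf.GradientTape() as tape:
        # every op in here is recorded, including the discriminator pass
        fake_images = generator(noise, training=True)
        preds = discriminator(fake_images, training=True)
        # the generator wants the discriminator to say "real" (1) for fakes
        g_loss = bce(tf.ones_like(preds), preds)
    # differentiate the loss w.r.t. the generator's weights only; gradients
    # flow back *through* the discriminator, but its weights are untouched here
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return g_loss
```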
Will this complete GAN series be in TF?
I thought about it, but I don't think so; I get kinda bored doing the same videos in both frameworks.
What if I use my existing dataset on Google Drive? I'm confused because the examples always seem to use the MNIST and CelebA datasets, so I still haven't figured out the code for my own data.
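A minimal sketch for Colab, assuming your images live in a Drive folder (the path is illustrative): mount Drive, then load your own images the same way the MNIST/CelebA examples do.

```python
from google.colab import drive
import tensorflow as tf

drive.mount("/content/drive")

dataset = tf.keras.utils.image_dataset_from_directory(
    "/content/drive/MyDrive/my_images",  # illustrative path to your folder
    labels=None,                         # no labels: a GAN only needs the images
    label_mode=None,
    image_size=(64, 64),                 # resized on load
    batch_size=32,
).map(lambda x: x / 255.0)               # scale pixels to [0, 1]
```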
Good job
They did that to get 3 color channels that had the desired image size.
Hi Aladdin, I have a short question: at 4:00, why do the second & third Conv2D layers both have channels = 128? Could the third layer have channels = 256? If I wanted to add more Conv2D layers, what channel numbers should e.g. the 4th & 5th layers have? Thanks in advance!
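The usual DCGAN convention is to double the channel count each time a strided convolution halves the spatial resolution; the exact numbers are a capacity choice rather than a hard rule, so 256 in the third layer would also work. A sketch of that pattern with hypothetical 4th and 5th layers (kernel sizes and alpha are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

discriminator = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(64, 4, strides=2, padding="same"),   # 64x64 -> 32x32
    layers.LeakyReLU(0.2),
    layers.Conv2D(128, 4, strides=2, padding="same"),  # 32x32 -> 16x16
    layers.LeakyReLU(0.2),
    layers.Conv2D(256, 4, strides=2, padding="same"),  # 16x16 -> 8x8 (a 4th block)
    layers.LeakyReLU(0.2),
    layers.Conv2D(512, 4, strides=2, padding="same"),  # 8x8 -> 4x4 (a 5th block)
    layers.LeakyReLU(0.2),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),             # real/fake probability
])
```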
I don't understand why we're using Conv2D with sigmoid as the final layer of the generator. Sigmoid returns an output between 0 and 1, but we want a 64x64x3 image as output, right?
The final Conv2D layer in the generator model will still output an image with these dimensions (not a single decimal value between 0 and 1 like the final Dense layer in the discriminator model). The sigmoid function just makes sure that the pixel values in this image are all between 0 and 1 (which we can later multiply by 255 for example to get normal pixel values).
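A tiny demonstration of that point (the feature-map shape here is made up): the Conv2D keeps the spatial dimensions and only squashes each pixel value into [0, 1].

```python
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 64, 64, 128))  # pretend penultimate feature map
out = layers.Conv2D(3, 5, padding="same", activation="sigmoid")(x)
print(out.shape)                                             # (1, 64, 64, 3), still an image
print(float(tf.reduce_min(out)), float(tf.reduce_max(out)))  # both within [0, 1]
pixels = tf.cast(out * 255, tf.uint8)                        # back to normal pixel values
```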