TensorFlow DCGAN Tutorial

  • Published Dec 26, 2024

COMMENTS • 28

  • @ambujmittal6824 3 years ago +2

    Finally! Something for TF users on this channel after such a long time.

  • @taheralipatrawala7300 2 years ago +4

    Could you make a video explaining in detail the theory and math behind each layer and function used in the video?

  • @apurbasarkar6918 3 years ago

    Yay, TF content is back!

  • @yourboyfazal 1 year ago

    Love you man

  • @kirtipandya4618 3 years ago +3

    Maybe you could make a video comparing training times between TF and PyTorch.

  • @teetanrobotics5363 3 years ago

    Man, you're on fire

  • @programmerrdai 3 years ago

    Great content, keep up the good work!

  • @sundayamolegbe14 4 months ago

    I like your video

  • @Dheemantha 2 years ago

    Hi, thanks for the nice, clean, and understandable tutorial. Can you point me to the research paper this is based on, or a similar implementation?

  • @luisgarciatiscar3056 3 years ago +3

    Hi! First of all, great video and very clear explanation! I have a question: is it possible to modify the model so I can use 1024x1024 px images as inputs, instead of the 64x64 in this example? How should I modify the layers to make it work?
    Thanks in advance!

    • @luisgarciatiscar3056 3 years ago +1

      @@saddamhussainelectricaleng910 Hi Saddam, I ended up using NVIDIA's StyleGAN2-ADA model (PyTorch version). It's pretty much the state-of-the-art standard for high-resolution, high-quality image generation. Check out their paper and repo on GitHub!

    • @luisgarciatiscar3056 3 years ago +1

      @@saddamhussainelectricaleng910 Hmm, I don't really know how to do that level of customization. You should try preprocessing all your images and using the standard models.

    • @luisgarciatiscar3056 3 years ago +1

      @@saddamhussainelectricaleng910 Of course: ua-cam.com/video/HgSfKfBMAaY/v-deo.html is an example of it.
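    For readers with the same question: a common way to scale a DCGAN generator up (an assumption about the usual recipe, not something shown in the video) is to stack more stride-2 Conv2DTranspose blocks, each doubling the spatial size, while halving the channel count toward the output and capping it in the early layers. A quick sketch of the layer plan in plain Python:

    ```python
    import math

    def upsampling_plan(start=4, target=1024, base_channels=64, cap=512):
        """Plan the stride-2 Conv2DTranspose blocks needed to grow a
        start x start feature map to target x target. Each block doubles
        the spatial size; channels shrink toward the output and are capped
        so the earliest (widest) layers don't explode in parameter count."""
        ratio = target // start
        if start * ratio != target or ratio & (ratio - 1):
            raise ValueError("target must be start * 2**k")
        n_blocks = int(math.log2(ratio))
        plan = []
        for i in range(n_blocks):
            size = start * 2 ** (i + 1)
            channels = min(cap, base_channels * 2 ** (n_blocks - 1 - i))
            plan.append((size, channels))
        return plan

    # Eight blocks for 4 -> 1024: sizes 8, 16, ..., 1024 with widths 512, ..., 128, 64
    plan = upsampling_plan(4, 1024)
    ```

    A final convolution then maps to 3 color channels. That said, plain DCGANs rarely train well at 1024x1024, which is why the thread ends up at StyleGAN2-ADA.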

  • @architsrivastava8196 3 years ago +1

    How can we generate images of a specified shape? For instance, the celeb_a dataset images have the shape (218, 178, 3). Is there a way to make the generator output the same shape?
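    One workaround (my own suggestion, not something from the video): sizes like 218x178 don't fall out of a stack of doubling layers, so you can generate the nearest reachable larger size and center-crop. The arithmetic, sketched in plain Python:

    ```python
    def crop_offsets(gen_h, gen_w, out_h, out_w):
        """Center-crop offsets for trimming a generated image down to a
        dataset's native size (e.g. CelebA's 218 x 178)."""
        if gen_h < out_h or gen_w < out_w:
            raise ValueError("generated image must be at least the target size")
        return (gen_h - out_h) // 2, (gen_w - out_w) // 2

    # Start from a 7 x 6 seed and apply five stride-2 upsampling blocks:
    h, w = 7, 6
    for _ in range(5):
        h, w = 2 * h, 2 * w              # 7x6 -> 14x12 -> ... -> 224x192
    top, left = crop_offsets(h, w, 218, 178)   # -> (3, 7)
    ```

    In TensorFlow the crop itself could be a `tf.keras.layers.Cropping2D(((3, 3), (7, 7)))` layer at the end of the generator; the simpler alternative is just resizing the dataset to 64x64 during preprocessing.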

  • @Mrduirk 2 years ago

    Is there a DCGAN for MobileNetV2?

  • @ashispaul0013 3 years ago +1

    Hey, can you try to implement the Vision Transformer (ViT)?

  • @neighboroldwang 3 years ago

    Thank you very much!

  • @boggo3848 2 years ago

    Thanks for the video! I don't quite understand how the loss function produces the gradients that backpropagate through the generator. I think I don't quite understand GradientTape.
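    For anyone stuck on the same point, here is a minimal sketch of the generator update, with toy Dense models standing in for the DCGAN networks (an illustrative simplification, not the video's code). The key idea: the fake images are produced *inside* the tape, so the whole chain noise -> fake -> discriminator logits -> loss is recorded, and `tape.gradient` can differentiate the loss with respect to the generator's weights even though the loss is computed on the discriminator's output.

    ```python
    import tensorflow as tf

    # Tiny stand-in models; the real DCGAN uses Conv2D / Conv2DTranspose stacks.
    generator = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="sigmoid")])
    discriminator = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    opt = tf.keras.optimizers.Adam(1e-3)

    noise = tf.random.normal([4, 2])
    with tf.GradientTape() as tape:
        fake = generator(noise, training=True)       # generator forward pass
        logits = discriminator(fake, training=True)  # D's opinion of the fakes
        # The generator wants D to say "real" (label 1). Because both forward
        # passes happened on the tape, gradients flow through the frozen
        # discriminator computation back into the generator's weights.
        g_loss = bce(tf.ones_like(logits), logits)

    grads = tape.gradient(g_loss, generator.trainable_variables)
    opt.apply_gradients(zip(grads, generator.trainable_variables))
    ```

    Note that only the generator's variables are passed to `tape.gradient`, so this step updates the generator alone; the discriminator gets its own tape and loss in a separate step.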

  • @kirtipandya4618 3 years ago +1

    Will this complete GAN series be in TF?

    • @AladdinPersson 3 years ago +3

      I thought about it, but I don't think so. I get kind of bored doing the same videos in both frameworks.

    • @yusufdigitalent2627 2 years ago

      What if I want to use my own dataset stored on Google Drive? I'm confused because the examples always use the MNIST and CelebA datasets, so I still haven't got the code working.

  • @garikhakobyan3013 3 years ago

    Good job

  • @DHAtEnclaveForensics 8 months ago

    They did that to get 3 color channels that had the desired image size.

  • @neighboroldwang 3 years ago +1

    Hi Aladdin, I have a short question: at 4:00, why do the second and third Conv2D layers both have 128 channels? Could the third layer have 256 channels instead? If I wanted to add more Conv2D layers, what would the channel numbers be for, e.g., the 4th and 5th layers? Thanks in advance!
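    The repeated 128 is a design choice rather than a rule, and yes, 256 in the third layer would also work. The common DCGAN convention (an assumption about typical practice, not a quote from the video) is to double the channel count each time a stride-2 conv halves the spatial size, capped so deep stacks stay affordable:

    ```python
    def discriminator_channels(n_layers, base=64, cap=512):
        """Typical DCGAN discriminator widths: double the channels at each
        stride-2 downsampling step, capped to bound the parameter count."""
        return [min(cap, base * 2 ** i) for i in range(n_layers)]

    # With 5 conv layers: [64, 128, 256, 512, 512]
    widths = discriminator_channels(5)
    ```

    So hypothetical 4th and 5th layers would usually get 512 channels each under this scheme; what matters most is that capacity grows (or at least doesn't shrink) as resolution drops.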

  • @KhayamGondal 3 years ago

    I don't understand why we use Conv2D with sigmoid as the final layer of the generator. Sigmoid returns an output between 0 and 1, but we want a 64x64x3 image as output, right?

    • @malek3764 3 years ago +4

      The final Conv2D layer in the generator model will still output an image with these dimensions (not a single decimal value between 0 and 1 like the final Dense layer in the discriminator model). The sigmoid function just makes sure that the pixel values in this image are all between 0 and 1 (which we can later multiply by 255 for example to get normal pixel values).
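      To make the reply above concrete, here is a small NumPy check (mirroring the idea, not quoting the video's code) that an elementwise sigmoid keeps the image shape and only squashes the values into (0, 1):

      ```python
      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Raw conv output for a fake 64 x 64 x 3 image (batch of 1).
      raw = np.random.randn(1, 64, 64, 3)
      img = sigmoid(raw)                        # same shape, values in (0, 1)
      pixels = (img * 255).astype(np.uint8)     # back to ordinary pixel values
      ```

      The activation is applied per pixel per channel, so a 64x64x3 tensor goes in and a 64x64x3 tensor comes out; only the discriminator ends in a single scalar.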

  • @DHAtEnclaveForensics 8 months ago

    Overrode