"For those of you that can't watch a video longer than 5 mins" Actually compelled me to watch the full video😂. Thanks!
I have watched a lot of videos and read books, but the way you explain the theory and the architecture of the networks is the best I have seen. Thank you, thank you, thank you!
Sir, you are the best. I can't thank you enough. Before your video I could not understand this concept at all. You saved me. Wish you all the best.
This explanation is so neat that the information simply flows into your brain as if these topics were simple. Thanks a lot, Dr. Best videos ever!
Hi, thank you so much! I have an assignment at my college to implement a GAN architecture, but the receptive field must not be 70x70. How will I know what the best size is, other than that one? Do you have any suggestions from your experience?
Hey, I am preparing to train my first pix2pix GAN on a Google Colab virtual GPU for facial emotion transition. Do you have any advice on how to set it up properly?
I think by CD in the decoder block the authors mean the deconvolutional layer (or transposed convolution layer), not dropout! However, dropout is usually applied in the decoder part of the U-Net.
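For reference, here is a minimal sketch of a pix2pix-style U-Net decoder block in Keras, as found in common implementations; the layer choices and hyperparameters below are assumptions, not the video's exact code. It uses both a transposed convolution for upsampling and dropout, so it is consistent with either reading of "CD".

```python
# Minimal sketch of a pix2pix-style U-Net decoder block (assumed hyperparameters,
# not the video's exact code).
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import (Activation, BatchNormalization, Concatenate,
                                     Conv2DTranspose, Dropout)

def decoder_block(layer_in, skip_in, n_filters, dropout=True):
    init = RandomNormal(stddev=0.02)
    # Upsample with a transposed convolution (the "deconvolution" in the comment above).
    g = Conv2DTranspose(n_filters, (4, 4), strides=(2, 2), padding='same',
                        kernel_initializer=init)(layer_in)
    g = BatchNormalization()(g, training=True)
    if dropout:
        # Dropout is typically kept on in the first few decoder blocks of pix2pix.
        g = Dropout(0.5)(g, training=True)
    # Skip connection from the matching encoder block, then ReLU.
    g = Concatenate()([g, skip_in])
    g = Activation('relu')(g)
    return g
```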
When I train on my own custom dataset, I get this error: FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/flir_v2ir/latest_net_G.pth'. Any ideas?
Hello Mr. Sreenivas,
I have watched your great videos, and I find them super informative, especially regarding the Pix2Pix model. I would like to use this model, but I am wondering if it would work if both input images have different sizes.
Thank you in advance, and best regards,
El
Excellent tutorial as always, Dr. Bhattiprolu, thank you very much.
Sir, the GitHub repository does not have the code for this presentation. Please upload it.
Hi, thank you so much for the informative explanation. I have a question and would be thankful if you could advise and help! If I have three modalities in dataset 1 with ground truth, and dataset 2 is missing one random modality, what can we do in this case? Should we train three models, each trained on image pairs, to predict the missing image?
How will the generator and discriminator functions change for an image size of 80x380?
Quite a helpful video, @DigitalSreeni! Could you please help me and let me know how I can give my source image as input? Should I give the URL of the image in quotes in the line in_src_image = Input(shape=image_shape) if it is present in my Google Drive?
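For what it's worth, Input(shape=image_shape) only declares the shape of the model's input tensor; it is not where a file path goes. The image itself is loaded separately and passed to the trained generator, roughly as in the sketch below (an assumed workflow; the Drive paths and model filename are hypothetical placeholders, not from the video).

```python
# Minimal sketch (assumed workflow, hypothetical paths): load a source image from
# Google Drive and run it through a trained Keras pix2pix generator.
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array, load_img

model = load_model('/content/drive/MyDrive/pix2pix/generator.h5', compile=False)
img = load_img('/content/drive/MyDrive/pix2pix/my_source.jpg', target_size=(256, 256))

# Scale pixels to [-1, 1] to match the tanh range the generator was trained with.
src = (img_to_array(img) - 127.5) / 127.5
src = np.expand_dims(src, axis=0)        # add a batch dimension: (1, 256, 256, 3)

gen = model.predict(src)                 # (1, 256, 256, 3), values in [-1, 1]
gen = (gen + 1) / 2.0                    # rescale to [0, 1] for display or saving
```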
Hey Sreeni, do you think a GAN with a U-Net generator could outperform a simple U-Net for semantic segmentation?
Try it
Hey, I have one dataset that contains mask images, but another dataset that doesn't. Is there any possible solution for getting masks for the other one?
How do we measure the performance of a GAN?
Hello!
Thank you for such a great explanation.
Can I get these slides?
Hello Sreeni, thanks for such a great video. I'm curious to know your thoughts on employing a Pix2Pix GAN to refine segmentation model outputs as a post-processing step. Pix2Pix GAN seems promising in its ability to utilise both predicted and ground-truth masks, training the generator to refine the predicted mask and align it with the characteristics of the ground-truth mask by removing small blobs, fixing gaps in the mask, etc. Your insights would be greatly appreciated!
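If anyone wants to experiment with that idea, one way to frame it (an assumption, not something from the video) is to treat each predicted mask as the pix2pix source image and the matching ground-truth mask as the target:

```python
# Minimal sketch (assumed setup): packaging predicted segmentation masks and their
# ground-truth masks as (source, target) pairs for a mask-refinement pix2pix model.
import numpy as np

def make_refinement_pairs(pred_masks, gt_masks):
    """pred_masks, gt_masks: arrays of shape (N, H, W), values in [0, 1]."""
    # Add a channel axis and scale to [-1, 1], the usual pix2pix range.
    src = np.expand_dims(pred_masks, axis=-1).astype('float32') * 2.0 - 1.0
    tgt = np.expand_dims(gt_masks, axis=-1).astype('float32') * 2.0 - 1.0
    return src, tgt   # train the generator to map src (noisy mask) -> tgt (clean mask)

# Hypothetical usage with random placeholder data:
pred = np.random.randint(0, 2, (8, 256, 256))
gt = np.random.randint(0, 2, (8, 256, 256))
X_src, X_tgt = make_refinement_pairs(pred, gt)
print(X_src.shape, X_tgt.shape)   # (8, 256, 256, 1) (8, 256, 256, 1)
```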
Thank you so much for this amazing video! Why is the code not on GitHub? :( I cannot see the code! Can anyone advise, please?
Sir, is your ML playlist specifically for image-related work, or can we also follow it for learning ML in general?
Thank you so much for making such videos; they are really helpful!
Hi, can I use text-image pairs instead of image-image pairs?
Thank you, Sir, for your teaching. This video has certainly broken it all down on how to use it.
Glad it was helpful!
Sir, I am extremely grateful for your efforts.
Excellent!!
Hi Sreeni,
Can you do a tutorial with an example on classification and clustering of multivariate time series after transforming the time series into its Fourier transform? I have been searching for complete examples, but most of the tutorials end at the basics, with no end-to-end example.
Thank you for this explanatory video. An intelligible explanation.
You are welcome!
Amazing! Thanks for these videos. You are doing great work here :)
Glad you like them!
Sir, your videos are awesome, but could you please provide the slides or the PPT file from which you showed those specific illustrations in this video? Not all of them were fully clear because of the mini-screen that shows your face. I would be truly glad, and thanks in advance.
He just doesn't share the slides; everything else is fine.
Machine learning today is like playing an MP3 file in 1995. We had to know everything about perceptual encoding and write the player from scratch to play the file. There are no abstraction layers like there eventually were for MP3 players.
Why is the discriminator output 16x16x1 and not 1x1x1?
That network reduces a 70x70 input to 1x1. When given a 256x256 input, it returns a 16x16 output.
Cool, thanks a lot for the detailed explanation!
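To make the 16x16x1 point above concrete, here is a minimal PatchGAN-style discriminator sketch in Keras. The filter counts and kernel sizes are assumptions based on common pix2pix implementations, not the video's exact code; building it and printing the output shape confirms the patch grid for a 256x256 input.

```python
# Minimal sketch of a PatchGAN-style conditional discriminator (assumed layout):
# two 256x256x3 images in, a 16x16x1 grid of per-patch real/fake scores out.
from tensorflow.keras.layers import (Activation, BatchNormalization, Concatenate,
                                     Conv2D, Input, LeakyReLU)
from tensorflow.keras.models import Model

def build_discriminator(image_shape=(256, 256, 3)):
    in_src = Input(shape=image_shape)          # source image
    in_tgt = Input(shape=image_shape)          # target image (real or generated)
    d = Concatenate()([in_src, in_tgt])        # conditional input: channel-wise concat

    # Downsampling stack: spatial size 256 -> 128 -> 64 -> 32 -> 16 -> 16.
    for filters, stride in [(64, 2), (128, 2), (256, 2), (512, 2), (512, 1)]:
        d = Conv2D(filters, (4, 4), strides=stride, padding='same')(d)
        if filters > 64:                       # no batch norm on the first block
            d = BatchNormalization()(d)
        d = LeakyReLU(0.2)(d)

    d = Conv2D(1, (4, 4), padding='same')(d)   # one score per patch
    patch_out = Activation('sigmoid')(d)
    return Model([in_src, in_tgt], patch_out)

print(build_discriminator().output_shape)      # (None, 16, 16, 1)
```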
Thank you very much, I like how you really care about the details
Thank you for the amazing content.
High-quality content, thanks a lot.
Thank you so much, Sir!
Awesome, thanks a lot for the great content!
Such a great video!!!!!
Thank you for sharing.
Great
Thx