First, thank you Alexander and Ava for sharing the knowledge.
After watching these videos, I realized that learning machine learning is not just a skill; teaching is a much bigger skill.
I would love to see Lecture 6 on Diffusion Models!
Brilliant, Ava. You explained some of the most complex concepts, GANs and CycleGANs, brilliantly.
I got lost from 32:00 onwards about what the different terms phi, q_phi, etc. meant...
I think she meant that if we mix the stochasticity with the latent variables in z, we cannot backpropagate through the network, as the added stochasticity makes the gradients untraceable. If, however, you detach the stochasticity from the variables in the latent space, you can trace it back, because the latent variables and the stochasticity are clearly separated.
It's like warming up the pasta and the carbonara sauce in separate containers instead of first mixing them and then warming them up.
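To make the separation concrete, here is a minimal PyTorch sketch of the reparameterization trick the lecture describes (function and variable names are just illustrative): all the randomness lives in eps, its own "container", while z = mu + sigma * eps stays differentiable with respect to the encoder outputs.

```python
import torch

def sample_latent(mu, log_var):
    # Reparameterization trick: all randomness lives in eps, which carries
    # no learnable parameters, so gradients flow through mu and log_var.
    std = torch.exp(0.5 * log_var)   # sigma = exp(log_var / 2)
    eps = torch.randn_like(std)      # separate noise source, N(0, I)
    return mu + std * eps            # z is differentiable w.r.t. mu and std

# Toy check that gradients reach the encoder outputs through the sample.
mu = torch.zeros(4, requires_grad=True)
log_var = torch.zeros(4, requires_grad=True)
z = sample_latent(mu, log_var)
z.sum().backward()
print(mu.grad, log_var.grad)  # both populated, so backprop works
```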
Thank you so much for the course. So interesting.
Thanks for your course. What I want to ask is whether you can upload the practice course files or related documents to the website, etc. It may help all of those who want to follow the course and do some practice. Many thanks!
Cool and well-sorted.
so excited for this!
Not an MITian, but learning at MIT.
What an amazing lecture it was. Really enjoyed it tbh.
Does anyone know if we can actually expect Lab 3 to be released or if there's a way to access it?
I am curious: regarding CycleGANs for audio generation, would the output from the model be better if the person creating the input audio tried to mimic the person the model was trained on as closely as possible? For example, if an Obama impersonator supplied the input audio, would the output even more closely resemble Obama's true voice? The same question applies to video content: if the body language more closely mimicked the target, does the model generate an output that more closely resembles the target? My hunch is that it would indeed improve the prediction.
Thank you for the video. What are deterministic and stochastic nodes?
Brilliant ❤
awesome, many thanks for your initiative !
keep up the great work
Awesome lecture. 🎉
First, thank you Ava for sharing the knowledge.
I'm not able to understand why the standard autoencoder performs a deterministic operation.
I guess it's because once training is done the neural network weights are fixed; since there is no backpropagation, etc. involved after training, the weights can't change, and thus for every input you get the same output, as the learned function doesn't involve any probabilistic element.
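A tiny sketch of that point, assuming a plain PyTorch encoder standing in for a trained, frozen autoencoder (names and shapes are illustrative): with fixed weights and no sampling step, the same input always maps to the same code, whereas a VAE-style sampling step makes the encoding stochastic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Linear(8, 2)  # stand-in for a trained, frozen encoder
encoder.eval()

x = torch.randn(1, 8)
z1, z2 = encoder(x), encoder(x)
print(torch.equal(z1, z2))  # True: fixed weights, same input -> same code

# A VAE-style encoder adds a sampling step, so repeated encodings differ:
mu, log_var = encoder(x), torch.zeros(1, 2)
z_a = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
z_b = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
print(torch.equal(z_a, z_b))  # False (almost surely): stochastic
```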
Couldn't bear to live without tech and AGI.
What is q_phi?
What is phi?
Thank you for the amazing content. Please add the slides for this lecture to the website; they're still not there. Cheers :)
I have a dataset of 120 cell phone photographs of the skin of dogs sick with 12 types of skin diseases, with a distribution of 10 images per disease.
What type of Generative Adversarial Network (GAN) is most suitable for augmenting my dataset with quality so I can train my DL model? DCGAN, ACGAN, StyleGAN3, CGAN?
Just try them out.
Try fine-tuning the models on your data.
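For context, a heavily simplified sketch of what the "conditional" in CGAN/ACGAN means here, in PyTorch with made-up shapes (illustrative only, not a recipe; with just 120 images, fine-tuning a pretrained model or classic augmentation will likely matter more than architecture choice): the generator is conditioned on the disease label, so you can request synthetic samples of a specific class.

```python
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, IMG_DIM = 12, 64, 32 * 32 * 3  # illustrative sizes

class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition on the disease label by concatenating its embedding.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

gen = CondGenerator()
z = torch.randn(4, LATENT_DIM)
labels = torch.randint(0, NUM_CLASSES, (4,))
fake = gen(z, labels)  # 4 synthetic images for the requested classes
print(fake.shape)      # torch.Size([4, 3072])
```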
32:33 *_"and so with ????? they employ this really clever trick that effectively"_* Did any body catch what she was saying here, thanks
VAEs
@@陈刘佳学 thanks
Is there such a thing as a Generative Modelling Agency???
The website hasn't been working for a few days :/
It's back lesgooo
5 mins more let's gooooo
This proves Plato's idealism is working.
How so?
Very rushed description.
28:20 A Joe Biden moment. Does anyone know what she was attempting to communicate here? Even the closed captions fail to make it coherent.
OK, I lied. Her hair is AI too.
Nice teaching, Amini ❤ and your curly hair is nice 😮
Spellbound by the lecture, great insights. Is she Indian?
She's Persian
@@dragonartgroup6982 it's just geography... analogous to padding...
when gpt 4o lectures :D
IMHO commenting about appearance is a bit sexist. Wake up boys!!
She's an AI, but her hair is real
Oh really?
Completely lost me in this lecture.
At what point? Surely the basic AE stuff up front made sense, and at a high level many of the concepts made sense even if there wasn’t a good explanation of how, for example, CycleGAN works.