Thank you, Professor. What I still don't understand: the output of the encoder for a specific input image X cannot be a distribution; it's just a single value, no? This value then moves on to the decoder side and is expressed using the reparameterization trick; things then propagate through the decoder until the reconstructed X, and then backpropagation runs through the decoder and the encoder. Am I correct? So how do you calculate the distribution? Do you collect information about the mean and the variance as you iterate over the input X's? I mean, you would need a big batch of iterations to be able to say anything about the distribution, am I right?
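For context, here is a minimal sketch of the step this question is about, in PyTorch style (layer sizes and names are hypothetical, not from the lecture): the encoder emits \mu and \log\sigma^2 for each individual X in a single forward pass, so the distribution's parameters are predicted per input rather than estimated over a batch.

```python
import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
x_dim, h_dim, z_dim = 784, 256, 20

# The encoder maps a SINGLE input x to the PARAMETERS of q(z|x):
# a mean vector mu(x) and a log-variance vector logvar(x).
backbone  = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
to_mu     = nn.Linear(h_dim, z_dim)
to_logvar = nn.Linear(h_dim, z_dim)

x = torch.randn(1, x_dim)             # one flattened input image
h = backbone(x)
mu, logvar = to_mu(h), to_logvar(h)   # distribution parameters for THIS x

# Reparameterization trick: z ~ N(mu, sigma^2), written so that
# gradients flow back through mu and logvar (eps carries no gradient).
eps = torch.randn_like(mu)
z = mu + torch.exp(0.5 * logvar) * eps
```

So no batch statistics are collected: for every new X, the network directly outputs the mean and variance of that X's latent distribution, and the only randomness per forward pass is the freshly drawn eps.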
I still cannot understand the difference between a deterministic autoencoder and a probabilistic one. The sketches of the two networks look alike. Does it mean that the vector h = (\mu, \sigma, \cdots) is the output of the encoder but not the literal input of the decoder? Or, more fundamentally, what accounts for sampling in this network? If the output consists of the parameters of a Gaussian distribution for each pixel, wouldn't the samples be similar to just taking the mean of each Gaussian?
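A minimal side-by-side sketch of the two forward passes (again PyTorch style, with hypothetical toy layers): the only structural difference is the sampling step between encoder and decoder.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 20                 # hypothetical toy sizes

# Deterministic autoencoder: the code h IS the decoder's literal input.
enc = nn.Linear(x_dim, z_dim)
dec = nn.Linear(z_dim, x_dim)

# VAE encoder heads: they output (mu, logvar), i.e. the h = (mu, sigma)
# vector from the sketch; the decoder never sees these directly.
enc_mu     = nn.Linear(x_dim, z_dim)
enc_logvar = nn.Linear(x_dim, z_dim)

x = torch.randn(1, x_dim)

x_hat_det = dec(enc(x))                # deterministic: same x -> same x_hat

mu, logvar = enc_mu(x), enc_logvar(x)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # the sampling step
x_hat_prob = torch.sigmoid(dec(z))     # per-pixel Bernoulli means
```

So yes: h = (\mu, \sigma, \cdots) is the encoder's output but not the decoder's literal input; the decoder sees a sample z drawn from N(\mu, \sigma^2). That latent-space sampling is what makes the model probabilistic; whether one then samples each pixel from the decoder's output distribution or simply takes its mean is a separate, downstream choice.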
Hello Professor, there is one thing I am not able to understand. Here we are modelling P(Z|X) and P(X|Z) with the encoder and decoder respectively. However, we are assuming P(Z|X) follows a certain distribution and P(X|Z) follows a certain distribution (say, Bernoulli in both cases). But this formulation will not satisfy the Bayes relation, for which we need to know the priors P(Z) and P(X). Therefore, only if we choose the likelihood P(X|Z) to be Gaussian and the prior P(Z) to be Gaussian can we take P(Z|X) to be Gaussian by Bayes' rule. However, the network architecture you specified doesn't take this into account. Can you please clarify what I am missing? Thank you.
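For reference, the standard identity behind the VAE objective (the usual textbook derivation, not specific to this lecture) shows why exact consistency with Bayes' rule is not required. The encoder's q_\phi(Z|X) is a variational approximation to the true posterior p_\theta(Z|X), and the mismatch between them appears as the final KL term:

```latex
\log p_\theta(X)
  = \underbrace{\mathbb{E}_{q_\phi(Z \mid X)}\big[\log p_\theta(X \mid Z)\big]
      - \mathrm{KL}\big(q_\phi(Z \mid X)\,\big\|\,p(Z)\big)}_{\text{ELBO (the training objective)}}
  \;+\; \mathrm{KL}\big(q_\phi(Z \mid X)\,\big\|\,p_\theta(Z \mid X)\big)
```

Maximizing the ELBO implicitly shrinks that posterior-mismatch term, and the prior P(Z) enters through the KL term inside the ELBO, so the Gaussian choice for q(Z|X) is an approximation the objective penalizes, not an exact Bayes-rule consequence.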
Could they be called GANs?
There is no adversary in autoencoders.