Thank you so much for these videos and all that hard work Jeff, this is really opening up a whole new world of coding to me, really appreciate it! :)
Hello. Can we use autoencoders to generate a more accurate dataset where the NaN values are gone? In other words, can we use autoencoders to fill in the missing values?
Thanks
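One way this idea is often sketched: train an autoencoder on the fully observed rows, then replace the NaN cells with the network's reconstruction. Below is a minimal, runnable illustration using a tiny linear autoencoder written in plain NumPy on hypothetical synthetic data (the layer sizes, learning rate, and data are all made up for the sketch, not taken from the video):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 samples, 4 correlated features
z = rng.normal(size=(200, 2))
X = z @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(200, 4))

# Knock out ~10% of entries as NaN
mask = rng.random(X.shape) < 0.10
X_missing = X.copy()
X_missing[mask] = np.nan

# Train a tiny linear autoencoder (4 -> 2 -> 4) on fully observed rows only
complete = X_missing[~np.isnan(X_missing).any(axis=1)]
W1 = rng.normal(scale=0.1, size=(4, 2))
W2 = rng.normal(scale=0.1, size=(2, 4))
lr = 0.01
for _ in range(2000):
    H = complete @ W1                      # encode
    R = H @ W2                             # decode (reconstruction)
    G = 2 * (R - complete) / len(complete) # gradient of mean squared error
    gW2 = H.T @ G
    gW1 = complete.T @ (G @ W2.T)
    W1 -= lr * gW1
    W2 -= lr * gW2

# Impute: start from column means, reconstruct, copy predictions into NaN slots
col_means = np.nanmean(X_missing, axis=0)
X_filled = np.where(np.isnan(X_missing), col_means, X_missing)
for _ in range(10):  # a few refinement passes
    recon = (X_filled @ W1) @ W2
    X_filled[mask] = recon[mask]

print("mean abs error at imputed cells:", np.abs(X_filled - X)[mask].mean())
```

Because the autoencoder learns the correlations between features, the imputed cells land much closer to the true values than a plain column-mean fill would; a deeper Keras autoencoder would follow the same train-then-reconstruct pattern.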
Hi, I study physics in Germany and I really enjoy your videos! Honestly, I doubt that the network can generalize denoising to other pictures, even ones similar to the pictures used. I think the memory of the decoder, with 50*100 + 100*128*128 weights (plus biases), is more than enough to store the 10 pictures. I will try putting in some other pictures.
Greetings
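The capacity argument above is easy to check with quick arithmetic. A short sketch, assuming the layer sizes the comment describes (bottleneck 50, hidden layer 100, 128x128 output; these sizes come from the comment, not from a verified model summary):

```python
# Parameter count for a decoder shaped 50 -> 100 -> 128*128
bottleneck, hidden, out_pixels = 50, 100, 128 * 128

weights = bottleneck * hidden + hidden * out_pixels  # 5,000 + 1,638,400
biases = hidden + out_pixels

print(weights, biases)  # 1643400 16484
```

Roughly 1.6 million weights against 10 images of 16,384 pixels each (163,840 values total) supports the point: the decoder has about ten parameters per training pixel, ample room to memorize rather than generalize.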
Hi, Professor Jeff! :)
I’m just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is highly correlated with the training images?
Will this be better, or even recommended, instead of using traditional denoising filters of OpenCV for real-time videos?
I’d love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation and demo, by the way! Subscribed! :)
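The real-time question mostly comes down to per-frame inference cost. A minimal sketch of the frame loop, with a stand-in box filter playing the role of the trained autoencoder so the example is self-contained (in practice the `denoise` body would be a call like `model.predict(frame[None, ..., None])` on the trained Keras model; the frame data here is simulated, not a real camera feed):

```python
import numpy as np
import time

def denoise(frame):
    """Stand-in for autoencoder inference on one grayscale frame.

    A 3x3 box filter substitutes for model.predict() so the loop runs
    without a trained model or camera.
    """
    padded = np.pad(frame, 1, mode="edge")
    h, w = frame.shape
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

# Simulated 'live feed': noisy 128x128 grayscale frames
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
frames = [clean + 0.1 * rng.normal(size=clean.shape) for _ in range(5)]

t0 = time.perf_counter()
denoised = [denoise(f) for f in frames]
fps = len(frames) / (time.perf_counter() - t0)
print(f"throughput: {fps:.0f} frames/s")
```

With OpenCV, the same loop would read frames from `cv2.VideoCapture`, resize them to the network's input shape, and display the reconstruction; whether the learned denoiser beats classical filters depends heavily on how closely the live frames match the training distribution.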
This is not something that I've tried, but it sounds like a valid approach. I've added this idea to my future video list, I want to do more "video" videos.
Professor Heaton, I am trying the single-image autoencoder and found that the accuracy is always 0, while the loss decreased from 12481.3857 to near 0 (after 200 epochs). Did I set the model up wrong? (I used the same setup and Sequential model as in your code.) Thank you! Great video!
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
history = model.fit(img_array, img_array, verbose=1, epochs=200)
This is how I set up the training.
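An accuracy of 0 here is expected rather than a bug: reconstruction is a regression task, and the string metric `'accuracy'` counts exact matches between prediction and target, which continuous outputs essentially never produce. A small NumPy illustration of why the metric reads 0 even when the reconstruction is nearly perfect (the data is synthetic, used only to demonstrate the metric behavior):

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(1000)
pred = target + 1e-6 * rng.normal(size=1000)  # near-perfect reconstruction

# Exact-match 'accuracy': continuous predictions almost never equal targets
accuracy = np.mean(pred == target)
# A meaningful regression metric instead
mae = np.mean(np.abs(pred - target))

print(accuracy, mae)
```

So the falling loss is the signal that training works; dropping the metric, or compiling with `metrics=['mae']`, gives a number that actually tracks reconstruction quality.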
Jeff, you are next level, buddy.
Thanks! Appreciate it.