DeepBean
United Kingdom
Joined 22 Feb 2023
Deep dives into physics, artificial intelligence and all other sorts of things.
Understanding GANs (Generative Adversarial Networks) | Deep Learning
GANs use an elegant adversarial learning framework to generate high-quality samples of everything from images to audio. Here, we explore the theoretical underpinnings, as well as some practical problems that can plague training, such as non-convergence and mode collapse. (A minimal training-loop sketch follows the links below.)
Timestamps
--------------------
00:00 Introduction
01:28 Generative modelling
04:46 The GAN approach
07:37 Loss function
12:14 Game theory perspective
13:18 Optimal discriminator
15:33 Optimal generator
17:26 Training dynamics
19:45 Optimal discriminator problem
21:39 Training steps
22:13 Non-convergence
23:39 Mode collapse
Links
--------
- Original GAN paper (arxiv.org/abs/1406.2661)
- Analysis of vanishing/unstable gradients (arxiv.org/abs/1701.04862)
- Analysis of mode collapse (arxiv.org/abs/1606.03498)
- Wasserstein GAN paper (arxiv.org/abs/1701.07875)
- Keras CGAN tutorial (keras.io/examples/generative/conditional_gan/)
- PyTorch DCGAN tutorial (pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html)
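As a companion to the loss-function discussion, here is a minimal training-loop sketch on a toy 2-D "dataset", using the non-saturating generator loss that the original paper recommends in practice. The model sizes and toy data are my own placeholders, not code from the video:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy 2-D points stand in for images; the point is the alternating updates.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0   # samples from the 'data' distribution
    fake = G(torch.randn(64, 16))           # generator samples from latent noise

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_real, d_fake = D(real), D(fake.detach())
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (non-saturating): push D(fake) towards 1.
    g_out = D(fake)
    g_loss = F.binary_cross_entropy(g_out, torch.ones_like(g_out))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Detaching fake in the discriminator step stops its gradients from reaching G; training G to maximize log D(G(z)) rather than minimize log(1 - D(G(z))) sidesteps the vanishing-gradient issue discussed at 19:45.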
Views: 2,824
Videos
Vanishing Gradients: Why Training RNNs is Hard
Views: 820 · 4 months ago
Here, we run down how RNNs are trained via backpropagation through time, and see how this algorithm is plagued by the problems of vanishing and exploding gradients. We present an intuitive and mathematical picture by flying through the relevant calculus and linear algebra (so feel free to pause at certain bits!) Timestamps 00:00 Introduction 00:46 RNN refresher 03:42 Gradient calculation of W 0...
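To make the vanishing/exploding behaviour concrete, a small numpy sketch (mine, not the video's): the backpropagated gradient involves a product of Jacobians, so its norm shrinks or blows up roughly with the spectral radius of the recurrent weight matrix.

import numpy as np

rng = np.random.default_rng(0)
for scale in (0.5, 1.5):   # contractive vs. expansive recurrent weights
    W = scale * rng.standard_normal((32, 32)) / np.sqrt(32)
    grad = np.eye(32)
    for t in range(50):    # product of 50 Jacobians
        grad = grad @ W.T  # (tanh derivative <= 1 omitted; it only shrinks this further)
    print(scale, np.linalg.norm(grad))   # tiny for 0.5, enormous for 1.5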
Vector-Quantized Variational Autoencoders (VQ-VAEs) | Deep Learning
Views: 5K · 4 months ago
The Vector-Quantized Variational Autoencoder (VQ-VAE) forms discrete latent representations by mapping encoding vectors to a limited-size codebook. But how does it do this, and why would we want to do it anyway? Link to my video on VAEs: ua-cam.com/video/HBYQvKlaE0A/v-deo.html Timestamps 00:00 Introduction 01:09 VAE refresher 02:42 Quantization 04:46 Posterior 06:09 Prior 07:06 Learned prior ...
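A minimal sketch of the quantization step described here (codebook size and dimensions are arbitrary; the straight-through gradient trick is omitted):

import numpy as np

def quantize(z_e, codebook):
    # z_e: (n, d) encoder outputs; codebook: (K, d) learned entries.
    # Snap each encoding vector to its nearest codebook entry (L2 distance).
    d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (n, K)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

codebook = np.random.randn(512, 64)   # K = 512 codes of dimension 64
z_e = np.random.randn(10, 64)         # 10 encoder vectors
z_q, idx = quantize(z_e, codebook)    # z_q feeds the decoder; idx is the discrete code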
Disentanglement with beta-VAEs | Deep Learning
Views: 1.1K · 4 months ago
Link to my VAE video for a refresher: ua-cam.com/video/HBYQvKlaE0A/v-deo.html In this video, we explore how and why modifying the VAE loss function enables us to achieve disentanglement in the latent space, with different latent variables corresponding to different semantic features of the data. We take a look at the original beta-VAE formulation, as well as controlled capacity increase, and th...
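The modification itself is small; a sketch of the two objectives covered in the video, with recon and kl assumed to be computed elsewhere as in a standard VAE:

def beta_vae_loss(recon, kl, beta=4.0, capacity=None):
    # Original beta-VAE: weight the KL term by beta > 1 to encourage disentanglement.
    if capacity is None:
        return recon + beta * kl
    # Controlled capacity increase: penalize distance from a target capacity C
    # (in nats), with C annealed upward during training.
    return recon + beta * abs(kl - capacity)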
Convolutional Neural Networks (CNNs) | Deep Learning
Views: 2.8K · 7 months ago
CNNs are a go-to deep learning architecture for many computer vision tasks, from image classification to object detection and more. Here, we take a look at the basics, and see how they use biologically-inspired hierarchical feature extraction to do what they do. Timestamps Introduction 00:00 Kernel convolutions 00:41 Common kernels 02:30 Why flipping? 03:30 Convolution as feature extraction 04:...
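A minimal numpy sketch (mine, not the video's) of the kernel convolution at the core of a CNN, including the kernel flip that distinguishes true convolution from cross-correlation:

import numpy as np

def conv2d(image, kernel):
    # Valid (no-padding) 2-D convolution with the kernel flipped,
    # as in the mathematical definition.
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i+kh, j:j+kw] * k).sum()
    return out

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # a common edge-detection kernel
edges = conv2d(np.random.rand(8, 8), sobel_x)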
Understanding Variational Autoencoders (VAEs) | Deep Learning
Views: 17K · 8 months ago
Here we delve into the core concepts behind the Variational Autoencoder (VAE), a widely used representation learning technique that uncovers the hidden factors of variation throughout a dataset. Timestamps Introduction 0:00 Latent variables 01:53 Intractability of the marginal likelihood 05:08 Bayes' rule 06:35 Variational inference 09:01 KL divergence and ELBO 10:14 ELBO via Jensen's inequalit...
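A minimal sketch of the reparameterization trick and the two ELBO terms discussed in the video, with toy linear layers standing in for real encoder/decoder networks:

import torch
import torch.nn as nn

enc = nn.Linear(10, 4)   # toy encoder: outputs [mu, logvar] for 2 latent dims
dec = nn.Linear(2, 10)   # toy decoder

x = torch.randn(8, 10)
mu, logvar = enc(x).chunk(2, dim=-1)                       # parameters of q(z|x)
z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparameterization: z = mu + sigma * eps
recon = -((x - dec(z)) ** 2).sum(dim=-1)                   # Gaussian log-likelihood, up to constants
kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1)   # analytic KL(q(z|x) || N(0, I))
elbo = (recon - kl).mean()                                 # maximize this (i.e. minimize -elbo)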
The Geiger-Marsden Experiments | Nuclear Physics
Views: 1.7K · 1 year ago
In 1908-13, nuclear physics was born as Hans Geiger and Ernest Marsden embarked on the experiments that would discover the atomic nucleus and revolutionise our understanding of atomic structure. Here we explore why and how they carried out the famous gold-foil experiment, as well as how Ernest Rutherford arrived at his startling conclusions. CHAPTERS Introduction 00:00 Alpha Particles 00:20 The...
Dijkstra's Algorithm: Finding the Shortest Path
Views: 984 · 1 year ago
Dijkstra's algorithm is a neat way of finding the minimum-cost path between any two nodes in a graph. Here we briefly see how we can use it to optimize our path through a graph, and also explore why it performs as well as it does. Feel free to like, comment and subscribe if you appreciate what I do!
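For reference, a compact heap-based implementation in Python; the adjacency-dict graph format is my own choice:

import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbour: edge_cost}}. Returns minimum cost to every reachable node.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue   # stale heap entry; skip
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({'A': {'B': 1, 'C': 4}, 'B': {'C': 2}, 'C': {}}, 'A'))   # {'A': 0, 'B': 1, 'C': 3}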
Einstein's Ladder Paradox; Simply Explained
Views: 14K · 1 year ago
In special relativity, the ladder paradox (or "barn-pole" paradox) occurs due to the symmetry of length contraction. Here we explore how this apparent paradox can be solved using the relativity of simultaneity. If you're interested in more special relativity content, check out the series below! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.ht...
Solving Einstein's Twin Paradox
Views: 3.5K · 1 year ago
Many solutions have been proposed to Einstein's twin paradox, but most of them miss the vital reasons why the Earth twin is correct and the Spaceship twin is wrong. Here we condense the solution of the twin paradox to its essentials, and also discuss why applying general relativity to the problem is unnecessary. CHAPTERS What is the Twin Paradox? 00:00 Time Dilation 00:43 The "Paradox" 01:17 Th...
Relativistic Velocity Addition | Special Relativity
Views: 6K · 1 year ago
Here, we briefly derive the equation for relativistic velocity addition, using only the Lorentz transformation equations we derived back in Part 3. Please like, subscribe and leave a comment if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.html) II. Time Dilation and Length Contraction (ua-cam.com/video/bArTzG3Mkmk/v-d...
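For reference, the result the video derives: two collinear velocities u and v combine not as u + v but as

w = \frac{u + v}{1 + uv/c^2},

which never exceeds c.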
What is Spacetime? | Special Relativity
Views: 2.9K · 1 year ago
In this fifth video, we explore how space and time can be treated as one interrelated entity, and how the Lorentz transformations can be given a geometric interpretation. We also explore how paradoxes are avoided by the preservation of causality. Please like, comment and subscribe if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFE...
Deriving the General Lorentz Transformation | Special Relativity
Views: 9K · 1 year ago
In this fourth video of the Special Relativity series, we derive the general (matrix) form of the Lorentz transformations for an arbitrary boost velocity in 3D space. Please like, comment and subscribe if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.html) II. Time Dilation and Length Contraction (ua-cam.com/video/bArT...
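For reference, the general boost derived here, acting on (ct, x, y, z)^{\mathsf{T}}, with \vec{\beta} = \vec{v}/c and \gamma = 1/\sqrt{1 - \beta^2}, can be written in block form as

\Lambda = \begin{pmatrix} \gamma & -\gamma\vec{\beta}^{\mathsf{T}} \\ -\gamma\vec{\beta} & I_3 + (\gamma - 1)\,\vec{\beta}\vec{\beta}^{\mathsf{T}}/\beta^2 \end{pmatrix}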
Deriving the Lorentz Transformations | Special Relativity
Views: 25K · 1 year ago
In this third video of the Special Relativity series, we derive the Lorentz transformations, which map events in one reference frame to another reference frame that moves at a constant relative velocity. We also demonstrate how these transformations can be used to derive the phenomena of time dilation and length contraction that we explored more informally in the last video. Please like, commen...
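For reference, the transformations derived here, for a boost at speed v along x, with \gamma = 1/\sqrt{1 - v^2/c^2}:

t' = \gamma\,(t - vx/c^2), \quad x' = \gamma\,(x - vt), \quad y' = y, \quad z' = z

from which time dilation and length contraction follow as special cases.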
Time Dilation and Length Contraction | Special Relativity
Views: 7K · 1 year ago
Here we explore how Einstein's postulates imply that moving clocks tick slower and moving trains become shorter. Please like, comment and subscribe if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.html) II. Time Dilation and Length Contraction III. Deriving the Lorentz Transformations (ua-cam.com/video/FvqutkaPmas/v-de...
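The two headline results in symbols, with \gamma = 1/\sqrt{1 - v^2/c^2} \ge 1: a clock's proper time interval \Delta t_0 is observed dilated as \Delta t = \gamma\,\Delta t_0, and a rod's proper length L_0 is observed contracted as L = L_0/\gamma.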
The Michelson-Morley Experiment | Special Relativity
Views: 34K · 1 year ago
Optimization for Deep Learning (Momentum, RMSprop, AdaGrad, Adam)
Views: 61K · 1 year ago
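As a companion to the optimizer video, a minimal numpy sketch of a single Adam update in its standard published form (not code from the video); Adam combines a momentum-like first moment with an RMSprop-like second moment, plus bias correction:

import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad        # first moment: running average of gradients (momentum-like)
    v = b2 * v + (1 - b2) * grad ** 2   # second moment: running average of squared gradients (RMSprop-like)
    m_hat = m / (1 - b1 ** t)           # bias correction (t counts updates, starting at 1)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v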
Backpropagation: How Neural Networks Learn
Views: 4.3K · 1 year ago
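A toy numpy sketch of backpropagation for a one-hidden-layer network with squared-error loss (my own example, not the video's), showing the chain-rule pass from the loss back to each weight matrix:

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal((4, 3)), rng.standard_normal((4, 1))
W1, W2 = rng.standard_normal((3, 5)), rng.standard_normal((5, 1))

for _ in range(100):
    h = np.tanh(x @ W1)                  # forward pass: hidden activations
    y_hat = h @ W2                       # forward pass: output
    d_yhat = 2 * (y_hat - y)             # dL/dy_hat for L = sum((y_hat - y)^2)
    dW2 = h.T @ d_yhat                   # gradient for the output weights
    d_h = (d_yhat @ W2.T) * (1 - h**2)   # back through W2, then through tanh
    dW1 = x.T @ d_h                      # gradient for the hidden weights
    W1 -= 0.01 * dW1
    W2 -= 0.01 * dW2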
Transformers, Simply Explained | Deep Learning
Views: 4.9K · 1 year ago
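And for the transformer video, a minimal numpy sketch of scaled dot-product attention, the operation at the heart of the architecture (the standard formula, not code from the video):

import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V, with a numerically stable softmax.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

Q = K = V = np.random.randn(4, 8)   # self-attention over 4 tokens of dimension 8
out = attention(Q, K, V)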
If the train were moving away from the tree, would it be elongated instead of being contracted?
So I guess they found out that light travels at the same speed no matter which direction they looked, despite the Earth clearly not standing still?
Matrices in this case are absolutely evil; it takes a lot of effort to obscure such a simple idea from 0:56 to 3:00. Thanks for a great video!
7:50 'Say we have a neuron i connected to every neuron k in the following layer': then proceeds to calculate dL/dyi??? How does the output yi depend on the weights of the next layer??? A few minutes before, with the first example, you calculated dL/dyj?
I have a doubt. At 5:30 of this video, will the light ray path change by 90 degrees or by less than 90 degrees (as shown in the diagram)? If it changes by 90 deg., then the length remains L.
YOLO to this video! Great explanation.
Thanks for the clear step-by-step explanation!
That was awesome, extremely helpful. Thank you!
That is extraordinary and clear! This video and another one of yours I saw recently have really clarified special relativity for me. Thank you! I'm going to save these so I can watch them again later. Hopefully at some point it will stick.
Very glad they've helped you!
4:42 and also at 18:10 - Small correction: Not N(0, I), but N(0, 1). This is a multivariate unit Gaussian: centred at the 0-vector with a unit standard deviation of 1, not capital I.
Absolutely amazing video! Honestly, perfect explanation!
What would happen if Jack were going fast enough to make the ladder shorter than the barn, and once he's inside, the barn doors shut, and jack instantly stops?
This makes a lot of sense.
Thank you! You did such a great job explaining this! I've finally understood how all these terms and concepts come together. <3
Unduly complicated. Uses the stupid third twin.
If we consider mu mesons from the upper atmosphere, they shower onto Earth from all sides. In a way, we can compare this to the outward and return journeys of one single meson. There is no reversal acceleration, yet there is time dilation in both directions. This meson would see the 16th generation of mesons in Earth's lab. So time dilation must be due to absolute motion and not relativistic. In multidimensional astronomical motion with comparable speeds it is difficult to say which one is faster in an absolute sense, but a mu meson at 99.8% the speed of light is faster than the celestial bodies in whatever direction we consider. This would answer the twin paradox, I hope.
Chad
one of the best explanations of VAE..👌
perfect
Can w be considered as a vector that represents all adjustable parameters? I.e., not just the weights of one linear transformation matrix from input to hidden layer, but all of them plus the bias values. So when you compute the gradient of L with respect to w, you compute a vector for which each entry is the partial derivative of L with respect to w_i?
Yup, that's correct!
Pleasure to watch
Nice, will revisit this because the maths is overwhelming right now
Where can I study probability for deep learning, specifically what's used here? I have studied probability, but not to this depth. If anyone can suggest resources, it would be very helpful.
I'm not sure of the best resources for deep learning in general, but for theory related to VAEs I found this helpful ermongroup.github.io/cs228-notes/
Your videos are amazing for understanding the subject of relativity! Unfortunately, I have no background in physics or mathematics, so I often struggle to understand a few things. One question I have from the video is: I can grasp that Michelson and Morley's experiment proved that there is no special medium in relation to which light propagates. But how does it also prove that light's speed is constant in relation to all other things, including whatever emitted it? Did the experiment also move the light source in relation to the detector? If the interference pattern remained the same even when the emitter and the detector were moving relative to each other, I think that would be a more straightforward way to establish that light's speed remains constant regardless of relative velocity. Sorry if that's a stupid question, but I'd really like to understand the subject.
It is not a stupid question. The video-makers always leave important details aside in a hurry to explain things and thus leave the viewers confused.
At 8:46 why is joint probability tractable? Why are others not tractable?
The joint probability is tractable under our model because it's easy to evaluate the probability of, say, a particular x given a particular z; however, the marginal probability of x requires integration over all z, which makes it intractable.
The video is woow
Excellent analysis
My brother, please, can you create a course to learn the manim library for beginners? ☺️🤍🤍
I'm not too familiar with it, but 3Blue1Brown has a tutorial!
One question. Why is max{ \int_x {f(x) dx} } the same thing as max{ f(x) }?
Good question! This is legitimate because, by finding our optimal D(x), we directly maximize the integrand at every point x. Since the integrand can never be higher for any other D(x) at any specific point x, finding this maximum for every x guarantees that we maximize the integral as a whole.
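For reference, carrying out that pointwise maximization of p_{data}(x)\log D(x) + p_g(x)\log(1 - D(x)) gives the optimal discriminator from the original GAN paper:

D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)}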
Always expect the earliest video from your channel
Well, I can’t miss your video
Well done! Very thoroughly explained.
Fantastic explanation. Small erratum: the variable 𝜖 is epsilon, not eta (η) ua-cam.com/video/HBYQvKlaE0A/v-deo.htmlsi=k6EBUeCbMUl4JYWw&t=970
Ah, that's right!
You know, 65 billion solar neutrinos pass per second per square centimeter, and they didn't detect that interaction either. The Ether is likely even smaller. Discounting the existence of an Aether using stone knives and bearskins. What a joke. This is why you don't discount anything, and instead try to evolve your experiments.
Great video. A good balance of professional, modern and unique style with in-depth technical description.
Confusing; there must be a simpler derivation.
thanks
I’m here because of Lord Jamar. His interview on the Godfrey Comedy channel was very interesting….
Really great videos, very interesting and easy to follow. It's just that translating them isn't always easy. Greetings from Russia ❤
Yours is the best explanation out there, in my opinion. I appreciate you!!
Why wouldn’t going against the aether be the opposite of going with it? They should cancel out.
Predicted Ci is calculated with IoU if the cell has an object; how do we then calculate the predicted Ci if the cell doesn't have an object?
Beautiful video. Thank you
Your error is at ua-cam.com/video/FvqutkaPmas/v-deo.html. If the spherical wave is centred on and expanding from K's origin, it can't also be expanding from a different centre at K prime's origin, which is also in motion. Your whole explanation has failed at this point.
Thank you so much for this, really cleared up how VAEs work
Thank you so much for these videos
This is all BS. For Jack, his ladder doesn't shrink, because he can say the barn is doing the moving, so the barn is not as big as it was before. So the ladder certainly can't fit. Now that we know it doesn't fit for two reasons, it's not going to fit if you invent a third option. Your error is in your stupid simultaneity example. Make up your mind; it can't be two things that light does. The centre of an expanding sphere of light can't have two different origins, one not moving and the other moving. That is where you make the error of simple logic and simple physics.