DeepBean
Understanding GANs (Generative Adversarial Networks) | Deep Learning
GANs use an elegant adversarial learning framework to generate high-quality samples of everything from images to audio. Here, we explore the theoretical underpinnings, as well as some practical problems that can plague training, such as non-convergence and mode collapse.
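For reference, the minimax objective from the original paper (linked below), with D the discriminator and G the generator:
    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
D is trained to tell real samples from generated ones, while G is trained to fool it.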
Timestamps
--------------------
00:00 Introduction
01:28 Generative modelling
04:46 The GAN approach
07:37 Loss function
12:14 Game theory perspective
13:18 Optimal discriminator
15:33 Optimal generator
17:26 Training dynamics
19:45 Optimal discriminator problem
21:39 Training steps
22:13 Non-convergence
23:39 Mode collapse
Links
--------
- Original GAN paper (arxiv.org/abs/1406.2661)
- Analysis of vanishing/unstable gradients (arxiv.org/abs/1701.04862)
- Analysis of mode collapse (arxiv.org/abs/1606.03498)
- Wasserstein GAN paper (arxiv.org/abs/1701.07875)
- Keras CGAN tutorial (keras.io/examples/generative/conditional_gan/)
- PyTorch DCGAN tutorial (pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html)
Views: 2,824

Videos

Vanishing Gradients: Why Training RNNs is Hard
Views: 820 · 4 months ago
Here, we run down how RNNs are trained via backpropagation through time, and see how this algorithm is plagued by the problems of vanishing and exploding gradients. We present an intuitive and mathematical picture by flying through the relevant calculus and linear algebra (so feel free to pause at certain bits!) Timestamps 00:00 Introduction 00:46 RNN refresher 03:42 Gradient calculation of W 0...
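A compact statement of the core problem (standard analysis; notation may differ slightly from the video's): with hidden state h_i = \sigma(W h_{i-1} + U x_i + b), backpropagation through time multiplies Jacobians across timesteps,
    \frac{\partial h_t}{\partial h_k} = \prod_{i=k+1}^{t} \operatorname{diag}(\sigma'(a_i)) \, W, \qquad \left\| \frac{\partial h_t}{\partial h_k} \right\| \le \left( \|W\| \max|\sigma'| \right)^{t-k}
so gradients shrink (vanish) or blow up (explode) geometrically in the time gap t - k.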
Vector-Quantized Variational Autoencoders (VQ-VAEs) | Deep Learning
Views: 5K · 4 months ago
The Vector-Quantized Variational Autoencoder (VQ-VAE) forms discrete latent representations by mapping encoding vectors to a limited-size codebook. But how does it do this, and why would we want to do it anyway? Link to my video on VAEs: ua-cam.com/video/HBYQvKlaE0A/v-deo.html Timestamps 00:00 Introduction 01:09 VAE refresher 02:42 Quantization 04:46 Posterior 06:09 Prior 07:06 Learned prior ...
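A minimal sketch of the quantization step described here (NumPy; the names are illustrative, not from the video): each encoder output is snapped to its nearest codebook entry. In training, the non-differentiable argmin is handled with the straight-through estimator, which copies gradients past it.

    import numpy as np

    def quantize(z_e, codebook):
        # z_e: (N, D) encoder outputs; codebook: (K, D) embedding vectors
        dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
        idx = dists.argmin(axis=1)  # index of the nearest codebook vector per input
        return codebook[idx], idx

    rng = np.random.default_rng(0)
    z_q, idx = quantize(rng.normal(size=(8, 16)), rng.normal(size=(64, 16)))
    print(z_q.shape, idx[:4])  # (8, 16) plus the first few chosen indices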
Disentanglement with beta-VAEs | Deep Learning
Views: 1.1K · 4 months ago
Link to my VAE video for a refresher: ua-cam.com/video/HBYQvKlaE0A/v-deo.html In this video, we explore how and why modifying the VAE loss function enables us to achieve disentanglement in the latent space, with different latent variables corresponding to different semantic features of the data. We take a look at the original beta-VAE formulation, as well as controlled capacity increase, and th...
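The loss modification itself is a single weighting factor: the beta-VAE objective is
    \mathcal{L} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta \, D_{KL}(q_\phi(z|x) \,\|\, p(z))
and setting \beta > 1 pushes the approximate posterior towards the factorized prior, which encourages disentangled latent variables.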
Convolutional Neural Networks (CNNs) | Deep Learning
Views: 2.8K · 7 months ago
CNNs are a go-to deep learning architecture for many computer vision tasks, from image classification to object detection and more. Here, we take a look at the basics, and see how they use biologically-inspired hierarchical feature extraction to do what they do. Timestamps Introduction 00:00 Kernel convolutions 00:41 Common kernels 02:30 Why flipping? 03:30 Convolution as feature extraction 04:...
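A minimal sketch of the kernel convolution the video opens with ("valid" padding, NumPy; written from the standard definition, not the video's code):

    import numpy as np

    def conv2d(image, kernel):
        # True convolution flips the kernel (this is the flipping discussed at 02:30),
        # then slides it over the image and takes dot products.
        k = np.flipud(np.fliplr(kernel))
        H, W = image.shape
        kh, kw = k.shape
        out = np.empty((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (image[i:i + kh, j:j + kw] * k).sum()
        return out

    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # a common edge-detection kernel
    print(conv2d(np.random.rand(8, 8), sobel_x).shape)  # (6, 6)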
Understanding Variational Autoencoders (VAEs) | Deep Learning
Views: 17K · 8 months ago
Here we delve into the core concepts behind the Variational Autoencoder (VAE), a widely used representation learning technique that uncovers the hidden factors of variation throughout a dataset. Timestamps Introduction 0:00 Latent variables 01:53 Intractability of the marginal likelihood 05:08 Bayes' rule 06:35 Variational inference 09:01 KL divergence and ELBO 10:14 ELBO via Jensen's inequalit...
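The centrepiece identity (covered in the ELBO timestamps): since the marginal likelihood is intractable, we instead maximize the lower bound
    \log p_\theta(x) \ge \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - D_{KL}(q_\phi(z|x) \,\|\, p(z))
where q_\phi(z|x) is the encoder's approximate posterior and p(z) is the prior.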
The Geiger-Marsden Experiments | Nuclear Physics
Views: 1.7K · 1 year ago
In 1908-13, nuclear physics was born as Hans Geiger and Ernest Marsden embarked on the experiments that would discover the atomic nucleus and revolutionise our understanding of atomic structure. Here we explore why and how they carried out the famous gold foil experiment, as well as how Ernest Rutherford arrived at his startling conclusions. CHAPTERS Introduction 00:00 Alpha Particles 00:20 The...
Dijkstra's Algorithm: Finding the Shortest Path
Views: 984 · 1 year ago
Dijkstra's algorithm is a neat way of finding the minimum-cost path between two nodes in a graph. Here we see briefly how we can use it to optimize our path through a graph, and also explore why it performs as well as it does. Feel free to like, comment and subscribe if you appreciate what I do!
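A minimal sketch of the algorithm with a binary heap (illustrative, not the video's code; all edge costs must be non-negative):

    import heapq

    def dijkstra(graph, source):
        # graph: {node: [(neighbor, edge_cost), ...]}
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry; u was already settled more cheaply
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    print(dijkstra({"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}, "A"))
    # -> {'A': 0, 'B': 1, 'C': 3}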
Einstein's Ladder Paradox; Simply Explained
Views: 14K · 1 year ago
In special relativity, the ladder paradox (or "barn-pole" paradox) occurs due to the symmetry of length contraction. Here we explore how this apparent paradox can be resolved using the relativity of simultaneity. If you're interested in more special relativity content, check out the series below! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.ht...
Solving Einstein's Twin Paradox
Views: 3.5K · 1 year ago
Many solutions have been proposed to Einstein's twin paradox, but most of them miss the vital reasons why the Earth twin is correct and the spaceship twin is wrong. Here we condense the solution of the twin paradox to its essentials, and also discuss why applying general relativity to the problem is unnecessary. CHAPTERS What is the Twin Paradox? 00:00 Time Dilation 00:43 The "Paradox" 01:17 Th...
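The essential point can be stated with one formula: along any worldline, a clock accumulates the proper time
    \tau = \int \sqrt{1 - v(t)^2/c^2} \, dt
and between the two meeting events the spaceship twin's worldline accumulates strictly less \tau than the Earth twin's, however the turnaround is modelled.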
Relativistic Velocity Addition | Special Relativity
Views: 6K · 1 year ago
Here, we briefly derive the equation for relativistic velocity addition, using only the Lorentz transformation equations we derived back in Part 3. Please like, subscribe and leave a comment if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.html) II. Time Dilation and Length Contraction (ua-cam.com/video/bArTzG3Mkmk/v-d...
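The derived result, for collinear velocities: if a frame moves at speed v and an object moves at u' within that frame, then in the original frame
    u = \frac{u' + v}{1 + u'v/c^2}
which stays below c whenever u' and v do.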
What is Spacetime? | Special Relativity
Views: 2.9K · 1 year ago
In this fifth video, we explore how space and time can be treated as one interrelated entity, and how the Lorentz transformations can be given a geometric interpretation. We also explore how paradoxes are avoided by the preservation of causality. Please like, comment and subscribe if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFE...
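The geometric picture rests on the invariant spacetime interval
    \Delta s^2 = c^2 \Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2
which every Lorentz transformation preserves; because timelike ordering is invariant, cause precedes effect in every frame, which is how the paradoxes are avoided.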
Deriving the General Lorentz Transformation | Special Relativity
Views: 9K · 1 year ago
In this fourth video of the Special Relativity series, we derive the general (matrix) form of the Lorentz transformations for an arbitrary boost velocity in 3D space. Please like, comment and subscribe if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.html) II. Time Dilation and Length Contraction (ua-cam.com/video/bArT...
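For reference, the matrix form derived here: with \vec{\beta} = \vec{v}/c and \gamma = 1/\sqrt{1 - \beta^2}, the boost acting on (ct, x, y, z)^\top is
    \Lambda = \begin{pmatrix} \gamma & -\gamma\vec{\beta}^\top \\ -\gamma\vec{\beta} & I_3 + (\gamma - 1)\,\vec{\beta}\vec{\beta}^\top/\beta^2 \end{pmatrix}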
Deriving the Lorentz Transformations | Special Relativity
Views: 25K · 1 year ago
In this third video of the Special Relativity series, we derive the Lorentz transformations, which map events in one reference frame to another reference frame that moves at a constant relative velocity. We also demonstrate how these transformations can be used to derive the phenomena of time dilation and length contraction that we explored more informally in the last video. Please like, commen...
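The transformations in question, for a boost of speed v along the x-axis:
    t' = \gamma\left(t - \frac{vx}{c^2}\right), \quad x' = \gamma(x - vt), \quad y' = y, \quad z' = z, \quad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
with time dilation and length contraction following as special cases.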
Time Dilation and Length Contraction | Special Relativity
Views: 7K · 1 year ago
Here we explore how Einstein's postulates imply that moving clocks tick slower and moving trains become shorter. Please like, comment and subscribe if you appreciate what I do! SPECIAL RELATIVITY SERIES I. The Michelson-Morley Experiment (ua-cam.com/video/DFQtVFEp_3E/v-deo.html) II. Time Dilation and Length Contraction III. Deriving the Lorentz Transformations (ua-cam.com/video/FvqutkaPmas/v-de...
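The two headline results, in symbols: a clock moving at speed v shows proper time \Delta\tau against coordinate time \Delta t, and a rod of rest length L_0 measures L in the frame it moves through:
    \Delta t = \gamma \, \Delta\tau, \qquad L = L_0/\gamma, \qquad \gamma = 1/\sqrt{1 - v^2/c^2}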
The Michelson-Morley Experiment | Special Relativity
Views: 34K · 1 year ago
The Physics of Nuclear Weapons
Views: 2K · 1 year ago
How YOLO Object Detection Works
Views: 53K · 1 year ago
Optimization for Deep Learning (Momentum, RMSprop, AdaGrad, Adam)
Views: 61K · 1 year ago
Backpropagation: How Neural Networks Learn
Views: 4.3K · 1 year ago
Transformers, Simply Explained | Deep Learning
Views: 4.9K · 1 year ago

COMMENTS

  • @nightmareTomek · 4 days ago

    If the train were moving away from the tree, would it be elongated instead of being contracted?

  • @nightmareTomek · 4 days ago

    So I guess they found out that light travels at the same speed no matter which direction they were looking at, despite the earth clearly not standing still?

  • @Darkness_7193 · 5 days ago

    Matrices in this case are absolutely evil; it takes a lot of effort to obscure such a simple idea from 0:56 to 3:00. Thanks for the great video!

  • @apianoadventure · 13 days ago

    7:50 'say we have a neuron i connected to every neuron k in the following layer': then proceeds to calculate dL/dy_i??? How does the output y_i depend on the next layer's weights??? A few minutes before, with the first example, you calculated dL/dy_j?

  • @sbijapure · 14 days ago

    I have a doubt: at 5:30 of this video, will the light ray path change by 90 degrees or by less than 90 degrees (as shown in the diagram)? If it changes by 90 degrees, then the length remains L.

  • @yanickmedina6343 · 14 days ago

    YOLO to this video! Great explanation.

  • @KrizTahimic · 17 days ago

    Thanks for the clear step-by-step explanation!

  • @collinmccarthy · 18 days ago

    That was awesome, extremely helpful. Thank you!

  • @philochristos · 23 days ago

    That is extraordinary and clear! This video and another one of yours I saw recently have really clarified special relativity for me. Thank you! I'm going to save these so I can watch them again later. Hopefully at some point it will stick.

    • @deepbean · 22 days ago

      Very glad they've helped you!

  • @MooseOnEarth · 23 days ago

    4:42 also at 18:10 - Small correction: not N(0, I), but N(0, 1). This is a multivariate unit Gaussian: centred at the 0-vector with unit standard deviation of 1, not capital I.

  • @janerikbellingrath820 · 24 days ago

    Absolutely amazing video! Honestly, perfect explanation!

  • @philochristos · 26 days ago

    What would happen if Jack were going fast enough to make the ladder shorter than the barn, and once he's inside, the barn doors shut and Jack instantly stops?

  • @philochristos · 26 days ago

    This makes a lot of sense.

  • @elenamacedo7338 · 26 days ago

    Thank you! You did such a great job explaining this! I've finally understood how all these terms and concepts come together. <3

  • @qualquan · 27 days ago

    Unduly complicated. Uses the stupid third twin.

  • @seetharama7254 · 29 days ago

    If we consider mu mesons from the upper atmosphere, they shower on Earth from all sides. In a way, we can compare it to the go-and-return journey of one single meson. There is no reversal acceleration, yet there is time dilation in both directions. This meson would see the 16th generation of mesons in Earth's lab. So time dilation must be due to absolute motion and not relativistic. In multidimensional astronomical motion with comparable speeds, it is difficult to say which one is faster in an absolute sense, but a mu meson at 99.8% the speed of light is faster compared to celestial bodies in whatever direction we consider. This would answer the twin paradox, I hope.

  • @CameronC-ML · 1 month ago

    Chad

  • @sathyanarayanan7268 · 1 month ago

    One of the best explanations of VAEs... 👌

  • @arashakbari6986 · 1 month ago

    perfect

  • @na50r24 · 1 month ago

    Can w be considered as a vector that represents all adjustable parameters? I.e., not just the weights of one linear transformation matrix from input to hidden layer, but all of them plus the bias values. So when you compute the gradient of L with respect to w, you compute a vector for which each entry is the partial derivative of L with respect to w_i?

    • @deepbean · 1 month ago

      Yup, that's correct!

  • @adamchelchowski · 1 month ago

    Pleasure to watch

  • @rishidixit7939 · 1 month ago

    Nice, will revisit this because the maths is overwhelming right now

  • @rishidixit7939 · 1 month ago

    Where can one study probability for deep learning? Specifically the kind used here? I have studied probability, but not to this depth. If anyone can give resources it would be very helpful.

    • @deepbean · 1 month ago

      I'm not sure of the best resources for deep learning in general, but for theory related to VAEs I found this helpful: ermongroup.github.io/cs228-notes/

  • @TheKenigham · 1 month ago

    Your videos are amazing for understanding the subject of relativity! Unfortunately, I have no background in physics and mathematics, so I often struggle to understand a few things. One question I have from the video: I can grasp that the Michelson and Morley experiment proved that there is no special medium in relation to which light propagates. But how does it also prove that light's speed is constant in relation to all other things, including whatever emitted it? Did the experiment also move the light source in relation to the detector? If the frequency of the interference remained the same even when the emitter and the detector were in motion relative to each other, I think that would be a more straightforward way to establish that light's speed remains constant regardless of relative velocity. Sorry if that's a stupid question, but I'd really like to understand the subject.

    • @sbijapure · 14 days ago

      It is not a stupid question. The video-makers always leave important details aside in a hurry to explain things and thus leave the viewers confused.

  • @AbhayShuklaSilpara · 1 month ago

    At 8:46, why is the joint probability tractable? Why are the others not tractable?

    • @deepbean · 1 month ago

      The joint probability is tractable under our model because it's easy to evaluate the probability of, say, a particular x given a particular z; however, the marginal probability of x requires integration over all z, which makes it intractable.
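      In symbols: the joint p(x, z) = p(x|z) p(z) is built from quantities the model defines directly (the decoder likelihood and the prior), whereas the marginal p(x) = ∫ p(x|z) p(z) dz integrates over the entire latent space, which generally has no closed form.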

  • @ThomasHaron · 1 month ago

    The video is woow

  • @klausziegler60 · 1 month ago

    Excellent analysis

  • @3B1bIQ · 2 months ago

    My brother, please, can you create a course to learn the manim library for beginners? ☺️🤍🤍

    • @deepbean · 1 month ago

      I'm not too familiar with it, but 3Blue1Brown has a tutorial!

  • @dmitriizhilenkov2673 · 2 months ago

    One question: why is max{ \int_x f(x) dx } the same thing as max{ f(x) }?

    • @deepbean · 2 months ago

      Good question! This is legitimate because, by finding our optimal D(x), we are directly maximizing the integrand at every point x. Since the integrand can never be higher for any other D(x) at a specific point x, if we find this maximum for every x then we are guaranteed to maximize the integral as a whole.
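      Concretely: maximizing the integrand p_data(x) log D(x) + p_g(x) log(1 - D(x)) at each point x gives the optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)), the standard result from the original paper.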

  • @nguyenvothanh9230 · 2 months ago

    Always expect the earliest video from your channel

  • @nguyenvothanh9230 · 2 months ago

    Well, I can't miss your video

  • @rafa_br34 · 2 months ago

    Well done! Very thoroughly explained.

  • @RuizSerra · 2 months ago

    Fantastic explanation. Small erratum: the variable 𝜖 is epsilon, not eta (η) ua-cam.com/video/HBYQvKlaE0A/v-deo.htmlsi=k6EBUeCbMUl4JYWw&t=970

    • @deepbean · 2 months ago

      Ah, that's right!

  • @GreenyX1 · 2 months ago

    You know, there are 65 billion solar neutrinos per second per square centimeter, and they didn't detect that interaction either. The Ether is likely even smaller. Discounting the existence of an Aether using stone knives and bearskins. What a joke. This is why you don't discount anything, and try to evolve your experiments.

  • @BenjaminEvans316 · 2 months ago

    Great video. A good balance of professional, modern and unique style with in-depth technical description.

  • @HaiderAli-l5z1c · 2 months ago

    Confusing; there must be a simpler derivation.

  • @nabinbk1065 · 2 months ago

    thanks

  • @gilrosario7224 · 2 months ago

    I’m here because of Lord Jamar. His interview on the Godfrey Comedy channel was very interesting….

  • @Kir-f4j · 2 months ago

    Really great videos, very interesting and clear to watch. It's just not always easy to translate them. Greetings from Russia ❤

  • @fzigunov · 2 months ago

    Yours is the best explanation out there, in my opinion. I appreciate you!!

  • @lambda4931 · 2 months ago

    Why wouldn't going against the aether be the opposite of going with it? They should cancel out.

  • @raihanpahlevi6870 · 3 months ago

    Predicted Ci is calculated with IoU if the cell has an object; then how do we calculate predicted Ci if the cell doesn't have an object?

  • @oinotnarasec · 3 months ago

    Beautiful video. Thank you

  • @everythingisalllies2141 · 3 months ago

    Your error is at ua-cam.com/video/FvqutkaPmas/v-deo.html If the spherical wave is centred on and expanding from K's origin, it can't also be expanding from a different centre at K prime's origin, which is also in motion. Your whole explanation has failed at this point.

  • @bradleymorris161 · 3 months ago

    Thank you so much for this, really cleared up how VAEs work

  • @nielsniels5008 · 3 months ago

    Thank you so much for these videos

  • @everythingisalllies2141 · 3 months ago

    This is all BS. For Jack, his ladder doesn't shrink, because he can say the barn is doing the moving, so the barn is not as big as it was before. So the ladder certainly can't fit. Now that we know it doesn't fit for two reasons, it's not going to fit if you invent a third option. Your error is in your stupid simultaneity example. Make up your mind; it can't be two things that light does. The centre of an expanding sphere of light can't have two different origins, one not moving and the other moving. That is where you make the error of simple logic and simple physics.
