Generative Memory Lab
  • 25 videos
  • 35,458 views

Videos

Discrete diffusion modeling by estimating the ratios of the data distribution
1.7K views · 1 month ago
Aaron Lou presents the paper "Discrete diffusion modeling by estimating the ratios of the data distribution" arxiv.org/abs/2310.16834v2
Generalization, hallucinations and memorization in diffusion models
710 views · 2 months ago
The "question and discussion" section after the talk from Rylan Schaeffer became a very interesting conversation on learning and memorization in diffusion models.
Bridging Associative Memory and Probabilistic Modeling
3.4K views · 2 months ago
Rylan Schaeffer gives a presentation on the link between Associative Memory and Probabilistic Modeling.
Spontaneous symmetry breaking in generative diffusion models
1.7K views · 6 months ago
Gabriel Raya presents his paper "Spontaneous symmetry breaking in generative diffusion models" openreview.net/forum?id=lxGFGMMSVl. Blog post: gabrielraya.com/blog/2023/symmetry-breaking-diffusion-models/ GitHub repo: github.com/gabrielraya/symmetry_breaking_diffusion_models
Action Matching: Learning Stochastic Dynamics from Samples
362 views · 7 months ago
Kirill Neklyudov presents his paper "Action Matching: Learning Stochastic Dynamics from Samples" arxiv.org/abs/2210.06662
Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces
660 views · 8 months ago
Yen Ting Lin presents his paper "Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces" arxiv.org/abs/2305.11089
Reflected Diffusion Models
385 views · 9 months ago
Aaron Lou presents his paper "Reflected Diffusion Models" arxiv.org/abs/2304.04740
Diffusion models as plug-and-play priors
1.8K views · 1 year ago
Alexandros Graikos presents his paper "Diffusion models as plug-and-play priors" arxiv.org/pdf/2206.09012.pdf
Building Normalizing Flows with Stochastic Interpolants
2.2K views · 1 year ago
Michael S. Albergo presents his paper "Building Normalizing Flows with Stochastic Interpolants" arxiv.org/abs/2209.15571
Hierarchically branched diffusion models
435 views · 1 year ago
Alex M. Tseng presents the paper "Hierarchically branched diffusion models for efficient and interpretable multi-class conditional generation" arxiv.org/abs/2212.10777
Planning with Diffusion for Flexible Behavior Synthesis
4.2K views · 1 year ago
Yilun Du, PhD student at MIT EECS, presents the paper "Planning with Diffusion for Flexible Behavior Synthesis" arxiv.org/pdf/2205.09991.pdf
Diffusion Models for Inverse Problems
14K views · 1 year ago
Hyungjin Chung presents his papers "Diffusion posterior sampling for general noisy inverse problems" (arxiv.org/pdf/2209.14687.pdf) and "Improving diffusion models for inverse problems using manifold constraints" (arxiv.org/pdf/2206.00941.pdf)
Back to the Manifold: Recovering from Out-of-Distribution States
205 views · 1 year ago
Alfredo Reichlin presents his paper: "Back to the Manifold: Recovering from Out-of-Distribution States"
KALE Flow: A Relaxed KL Gradient Flow For Probabilities With Disjoint Support
69 views · 1 year ago
Pierre Glaser presents his paper "KALE Flow: A Relaxed KL Gradient Flow For Probabilities With Disjoint Support" proceedings.neurips.cc/paper/2021/file/433a6ea5429d6d75f0be9bf9da26e24c-Paper.pdf
Probabilistic Circuits and Any-Order Autoregression
247 views · 1 year ago
Transport Score Climbing: Variational Inference Using Forward KL and Adaptive Neural Transport
96 views · 2 years ago
Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices
180 views · 2 years ago
A learning gap between neuroscience and reinforcement learning, Samuel Wauthier and Pietro Mazzaglia
121 views · 2 years ago
Recurrent Model-Free RL is a Strong Baseline for Many POMDPs
248 views · 2 years ago
Scalable Bayesian Deep Learning with Modern Laplace Approximations
485 views · 2 years ago
ADAVI: Automatic Dual Amortized Variational Inference Applied To Pyramidal Bayesian Models
50 views · 2 years ago
Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions
1.8K views · 2 years ago
Continuously-Indexed Normalizing Flows - Increasing Expressiveness by Relaxing Bijectivity.
97 views · 3 years ago
Targeted likelihood-free inference for dark matter searches in strong lensing images.
208 views · 3 years ago

COMMENTS

  • @MilesBellas · 21 days ago

    I wish the audio had been processed to eliminate the compression aberrations.

  • @asdf_600 · 1 month ago

    Please continue producing these talks and publishing them here; they are incredibly useful!

  • @mooncop · 2 months ago

    gradually, then suddenly 🦆🧬⚡

  • @michaelcharlesthearchangel · 2 months ago

    Your team would benefit from watching my AI Self Awareness videos.

  • @aresaurelian · 2 months ago

    Thank you. These are most useful seeds for continued work.

  • @Deathington. · 2 months ago

    This is fascinating! These presentations are very easy to follow, and even if you are not in this field, they still show an exceptional understanding of systems thinking.

    • @lucaambrogioni · 2 months ago

      Thanks! These are fascinating topics!

  • @joe_hoeller_chicago · 3 months ago

    Super interesting. Thanks for posting this.

  • @huanranchen · 5 months ago

    Gold video!

  • @lucaambrogioni · 6 months ago

    Amazing presentation Gabriel!

  • @stathius · 7 months ago

    Amazing talk, thanks!

  • @RoboticusMusic · 9 months ago

    I think I missed the high-level picture: what is the SoTA technology here, and what are the applications? Mostly reversing complicated smudges and blurring? Other applications?

  • @user-xc4jk6vn2h · 10 months ago

    I have one question: why can we factorize as shown at 12:44, given that x_0 is independent of y and x_t?

  • @maerlich · 11 months ago

    Excellent talk. Very enlightening! ❤

  • @edvinbeqari7551 · 11 months ago

    On the minus-sign comment: the confusion arises from the fact that we call this a reverse diffusion process. It's not; it's conditioned on the highest probability of the distribution function, or some transformation of it. If you were to plot the two diffusions (forward and conditional), they would look completely different. Anyway, the minus sign is there because the gradient reverses your sign to keep you on the highest-probability ridge.

    • @akhilpremk · 7 months ago

      dt is negative in the reverse SDE and positive in the forward SDE; see the paragraph under Eq. (6) of arXiv:2011.13456v2. Intuitively, we can understand the sign by taking g(t) to 0. Then the evolution is deterministic and governed only by the drift force f(x,t) in the forward direction. Since this process is Markovian, the reverse process is simply dx = -f(x,t) |dt|.
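
      For reference, here is a sketch of the sign convention in the notation of arXiv:2011.13456 (the symbols f, g, w, and p_t follow that paper, not this talk). The forward SDE is

          dx = f(x,t)\,dt + g(t)\,dw,

      and the reverse-time SDE is

          dx = \left[ f(x,t) - g(t)^2 \nabla_x \log p_t(x) \right] dt + g(t)\,d\bar{w},

      integrated from t = T back to t = 0, so dt is a negative increment. Taking g(t) \to 0 drops both the score term and the noise, leaving dx = f(x,t)\,dt; writing dt = -|dt| for reverse time gives dx = -f(x,t)\,|dt|, matching the comment above.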

  • @chenningyu · 1 year ago

    great talk, thanks for sharing! (LHS in slides 18-21 should be p(y|x_t))

  • @user-em4qz2ov4c · 1 year ago

    Wonderful work and wonderful talk!

  • @tianweini6969 · 1 year ago

    Hi, this paper was accepted to ICML 2022, and this is the official talk: ua-cam.com/video/tpofgxbi8pU/v-deo.html

  • @kimchi_taco · 2 years ago

    This video is GOLD. Bad news: only 130 views. Good news: I found it. Thank you for sharing this awesome seminar publicly.

  • @kenanmorani9204 · 2 years ago

    There are interesting methods in those papers. Thank you for a short and clear presentation. I wish you all the best in your research.