MIT 6.S191 (2023): Deep Learning New Frontiers

  • Published 21 Apr 2023
  • MIT Introduction to Deep Learning 6.S191: Lecture 7
    Deep Learning Limitations and New Frontiers
    Lecturer: Ava Amini
    2023 Edition
    For all lectures, slides, and lab materials: introtodeeplearning.com​
    Lecture Outline - coming soon!
    Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!
  • Science & Technology

COMMENTS • 65

  • @manohariisc
    @manohariisc 1 year ago +26

    This is wonderful. The speakers in this series are very generous in sharing their insights and knowledge. And they are immensely talented in the art of exposition. Many thanks.

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 23 days ago +1

    44:00 Forward Noising

  • @fencerlacroix2512
    @fencerlacroix2512 1 year ago +15

    In Lecture 7 of MIT's Introduction to Deep Learning course, the instructor discusses the limitations of deep learning and explores some new frontiers in the field.
    One of the limitations of deep learning is that it often requires a large amount of labeled data to train a model. This can be difficult to obtain in certain domains, and it can also be expensive and time-consuming to annotate the data. Another limitation is that deep learning models can be difficult to interpret, which can make it challenging to understand why a model is making a particular prediction or decision.
    To address these limitations, researchers are exploring new frontiers in the field of deep learning. One area of focus is unsupervised learning, which involves training models on unlabeled data. This can be useful in domains where labeled data is scarce or expensive to obtain. Another area of focus is explainable AI, which aims to make deep learning models more transparent and interpretable. This can help to build trust in the models and ensure that they are making decisions that align with ethical and legal standards.
    The instructor also discusses some new frontiers in deep learning research, including reinforcement learning and generative models. Reinforcement learning involves training models to make decisions based on rewards and punishments, and it has been used to develop autonomous agents that can play games and navigate complex environments. Generative models involve training models to generate new data that is similar to a given dataset. This has applications in fields such as art, music, and natural language processing.
    Overall, the lecture provides a broad overview of the limitations of deep learning and the new frontiers that researchers are exploring in the field. By understanding these limitations and exploring new approaches to deep learning, researchers can continue to push the boundaries of what is possible with this powerful technology.

    • @xmohd2011
      @xmohd2011 1 year ago +7

      Is this generated using AI?
      It looks like it, tbh

  • @eddiejennings5262
    @eddiejennings5262 5 months ago +1

    Thank you all again. I have personally followed this series and have many times recommended it to friends and colleagues. I look forward to, and will follow up on, materials on encoding structure and prior knowledge during learning and extrapolation.

  • @bluealchemist6776
    @bluealchemist6776 1 year ago +5

    MIT, thank you for the incredible knowledge shared!

  • @gapsongg
    @gapsongg 1 year ago +1

    Thank you

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 23 days ago +1

    43:00 Diffusion model rather than

  • @vikrambhutani
    @vikrambhutani 1 year ago +5

    Great insights into GCNs in deep learning, well done.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago +4

    Really enjoyed the presentation. Well structured and organized.

  • @loremipsum9071
    @loremipsum9071 10 months ago

    Really love the way Ava explains the materials ❤

  • @pavalep
    @pavalep 1 year ago +1

    Thanks, Ava, for this great lecture!!!

  • @theawesomeharris
    @theawesomeharris 1 year ago +1

    This lecture is so informative, I'm fully blown away!

  • @nikteshy9131
    @nikteshy9131 1 year ago +3

    Thank you 🙏💕 😊

  • @alexis91459
    @alexis91459 1 year ago +2

    Just awesome, can't wait for lecture 8

  • @yuqiwang3296
    @yuqiwang3296 1 year ago +1

    can't wait for the whole course😍

  • @krajanna
    @krajanna 1 year ago +2

    Nice lecture with updated syllabus. Superb.

  • @nsteblay
    @nsteblay 1 year ago +1

    Thanks for making these lectures available. Well worth the time of anyone wanting to understand the current state of ML/AI. Funny: a Mrs. Davis commercial popped up in the middle of me watching this lecture. The result of an applied AI algorithm? Who knows!

  • @mPajuhaan
    @mPajuhaan 1 year ago +2

    This was thoughtfully structured and meticulously organized👌

  • @saulsaitowitz6023
    @saulsaitowitz6023 9 months ago

    When a diffusion model produces a generated image, how close is that to some image that was in the training data? Like was there a turtle swimming in the ocean in the training data (54:40) that the model just recreated? Or is the output brand new?

  • @codingWorld709
    @codingWorld709 1 year ago +1

    Love you Sir and Ma'am ❤❤❤❤

  • @weiyicho8209
    @weiyicho8209 9 months ago

    This is really an amazing course. I've got one question: it seems I can't install capsa in Google Colab in Lab 3. Is there any way to solve this problem?

  • @jennifergo2024
    @jennifergo2024 5 months ago

    Thanks for sharing

  • @RajabNatshah
    @RajabNatshah 10 months ago

    Thank you :)

  • @fencerlacroix2512
    @fencerlacroix2512 1 year ago +5

    Ayo, just dropped in to say thanks to the MIT crew for putting together this dope lecture on deep learning limitations and new frontiers! Ava Amini and Alex Amini really killed it with the presentation, and I learned a lot from this video.
    Keep doing your thing, MIT! You're setting the standard for educational content and helping us all stay ahead of the curve.
    Peace out!

    • @biniyam106
      @biniyam106 1 year ago +1

      This looks AI-generated

    • @fencerlacroix2512
      @fencerlacroix2512 1 year ago

      @@biniyam106 Damn right, it is!! Imagine typing all that.
      But this reply is human-generated

    • @AAmini
      @AAmini 1 year ago +1

      😂

  • @johnpaily
    @johnpaily 1 month ago

    In which direction is the time flow studied, vertical or horizontal? Do you consider the overall time direction?

  • @RedTooNotBlue
    @RedTooNotBlue 4 months ago

    Thank you for this awesome content! Would be good to see some code examples alongside the models talked about, nonetheless awesome stuff guys!

  • @schevischenko
    @schevischenko 1 year ago +1

    Can a diffusion model create any image out of noise?

  • @bigbud369
    @bigbud369 21 days ago

    51:37 - Is the ultimate "noise case" actually the cosmic microwave background (CMB)?

  • @liftingmysoul
    @liftingmysoul 11 months ago +1

    Where can we get the shirt if we are watching online? Thanks!

  • @Phliee
    @Phliee 1 year ago +1

    For those who wonder how diffusion models are trained, here's what I figured out (correct me if I'm wrong): first, noise the image over t time steps with a noising equation, so that for each time step you have a ground-truth noise. Then train your network a bit like a recurrent net that has t time steps but one set of weights: at each time step, input the noised image from the previous time step, and the network predicts what the noise in this image looks like. The difference between the predicted noise and the real noise is your loss function. After training is done, the network "knows" what the noise looks like at each time step, so given a completely random noise image, it can subtract the noise time step by time step until the image is completely denoised. So iterate over t time steps, predict the noise at each step, use a denoising equation someone else has already figured out to get a cleaner image as the next input, and finally create a new image.

    • @connorkapooh2002
      @connorkapooh2002 1 year ago

      Whooooaaaaa, that makes complete sense too! Denoising the image at different levels has really made that click for me, thank you very much!
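
      The forward-noising and noise-prediction loss described in the comment above can be sketched in a few lines of Python. This is a toy illustration under assumptions (the function names and the simple linear variance schedule are ours, not from the lecture), with a dummy "network" that predicts zero noise:

```python
import numpy as np

def make_noise_schedule(T=100, beta_min=1e-4, beta_max=0.02):
    # Linear variance schedule; alpha_bar[t] is the cumulative
    # fraction of the original signal remaining at step t.
    betas = np.linspace(beta_min, beta_max, T)
    return np.cumprod(1.0 - betas)

def forward_noise(x0, t, alpha_bar, rng):
    # q(x_t | x_0): mix the clean image with Gaussian noise at step t.
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps  # eps is the "ground-truth noise" the network must predict

def training_loss(predicted_eps, true_eps):
    # MSE between the predicted noise and the real noise.
    return np.mean((predicted_eps - true_eps) ** 2)

rng = np.random.default_rng(0)
alpha_bar = make_noise_schedule()
x0 = rng.standard_normal((8, 8))   # stand-in for one training image
xt, eps = forward_noise(x0, t=50, alpha_bar=alpha_bar, rng=rng)
loss = training_loss(np.zeros_like(eps), eps)  # dummy predictor: all zeros
```

      Replacing the dummy zero predictor with a trained network, and running the reverse (denoising) equation step by step from pure noise, is what a full diffusion model does at sampling time.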

  • @savantofillusions
    @savantofillusions 1 year ago

    Hey, Alexander and Ava - I'm a savant who has some valuable data for training datasets. I draw perfectly sideways without knowing what I'm constructing in the illustration, and don't know until I see it rotated 90 degrees. I have potential eye-tracking, finger-tracking, and coordinate data, as well as the math of the pixels as the drawing is made. I am technically a "demon of science," and I break a few psychophysical "laws" as I do art, like Maxwell's demon opens a trap door (without really knowing what it's doing). The demon just has to get really lucky in order to break "nature," if you ask me. There's no other way to truly break the 2nd law of thermodynamics. That aside, I just seem to be very lucky like this when I do a profoundly automatic drawing humans cannot visualize until seeing it in its best-fit perspective for viewing the contained objects.
    I am looking for a home for my work. I'm trying to get Adobe to set up a lab for me in Cambridge. Fingers crossed.
    I don't want Elon's help. lol
    I'm open to other ideas if staff at the schools have them and want to get the drop on it. I'm ready to move there from Richmond, VA. I'm working on my own lab software, but can't make it what it should be with what I've got right now. It's a really good candidate for an NSF PAC grant project for someone qualified (other than me).

  • @johnpaily
    @johnpaily 1 month ago

    It then exposes the black hole singularity and exposes the parallel world

  • @johnpaily
    @johnpaily 1 month ago

    Deep learning calls us to go beyond the mind and the five sensory organs, to connect to the mind of the heart and beyond, to the INNER SPACE.

  • @michaelbuckers
    @michaelbuckers 2 months ago

    The reason people often see incorrectly generated hands is that human vision is extremely good at recognizing hands. Errors like that are everywhere in equal amounts; it's just that your brain exaggerates them greatly when it comes to hands.

  • @edmonda.9748
    @edmonda.9748 9 months ago

    Quick question to all DL enthusiasts:
    I read somewhere (I've obviously forgotten where!) that there is a new architecture that can capture long-distance relationships and far-apart features. Let me explain: a CNN can capture features in an image which are adjacent or very close to each other, so that the sliding filter can capture them as a bigger-scale feature. This new DL architecture can capture features that are in the same image but very far apart from each other, much farther than the size of the filter. Does anybody know about this architecture?
    Hope I made some sense! 😂
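
    The locality of a sliding filter that the question above describes can be made concrete with a toy convolution (an illustrative sketch, not lecture code; the `conv2d_valid` helper is a hypothetical name we introduce here):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Each output value depends only on the small window under the kernel,
    # which is why one conv layer cannot relate far-apart features.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[0, 0] = 1.0   # one feature in the top-left corner
image[7, 7] = 1.0   # another feature far away, in the bottom-right
kernel = np.ones((3, 3))
feat = conv2d_valid(image, kernel)
# No single 3x3 window covers both distant features, so no output
# position ever "sees" them together; relating them requires stacking
# many layers (or a different architecture).
```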

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 23 days ago

    35:00

  • @dennissdigitaldump8619
    @dennissdigitaldump8619 11 months ago

    AI plus researcher data should be included. It should never run via diffusion only; it also needs result data fed back.

  • @erkinalp
    @erkinalp 1 year ago +1

    Did you get any help from AI while updating the syllabus?

  • @rohanchess8332
    @rohanchess8332 11 months ago +1

    Hey, is there any way to buy MIT Deep Learning t-shirts? I really liked them!

  • @andrewgoodrich3530
    @andrewgoodrich3530 3 months ago

    Those bubbles are linked to the stock market and finance. The more hype, the more money you can get as a sector, as a company.

  • @raiso9759
    @raiso9759 1 year ago +1

    Thank you

  • @AbhishekVerma-kj9hd
    @AbhishekVerma-kj9hd 4 months ago

    What does it mean to "spatially share the parameters" of each filter?

  • @ojasvisingh786
    @ojasvisingh786 1 year ago +2

    👏👏

  • @johnpaily
    @johnpaily 1 month ago

    Cross-link the mind of the body with the mind of the heart and explore the INNER SPACE

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago +1

    If I understand correctly, that means, if you want to generate random images of dogs, you need a diffusion model trained on dogs. It's not like you can train on 100 different classes of animals, and get random images of those animals. Just want to clarify that.

    • @AAmini
      @AAmini 1 year ago +1

      You can do either! In the case where you train on all 100 classes of animals, you will also need a way to tell the model that you now want to generate a new image *specifically* of a dog. This is called "conditioning" -- check out Conditional Diffusion Models for more details.
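
      Mechanically, conditioning can be as simple as feeding a class embedding into the noise predictor alongside the noisy image and the timestep. A toy sketch (all names here are illustrative assumptions, not an actual Conditional Diffusion Model implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 100   # e.g. 100 animal classes
EMBED_DIM = 16

# Hypothetical class-embedding table: one learned vector per class.
class_embed = rng.standard_normal((NUM_CLASSES, EMBED_DIM))

def predict_noise(xt, t, class_id, w):
    # A toy "conditional" noise predictor: the class embedding is
    # concatenated with the flattened noisy input and the timestep,
    # so the model knows *which* class it should denoise toward.
    features = np.concatenate([xt.ravel(), [t], class_embed[class_id]])
    return (w @ features).reshape(xt.shape)

# Toy weights, and a call asking the model to denoise "toward class 7".
xt = rng.standard_normal((4, 4))
w = rng.standard_normal((xt.size, xt.size + 1 + EMBED_DIM)) * 0.01
eps_hat = predict_noise(xt, t=10, class_id=7, w=w)
```

      In a real model the predictor is a deep network and the conditioning signal can also be a text embedding rather than a class index, but the mechanism is the same: extra inputs steer the denoising.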

  • @forheuristiclifeksh7836
    @forheuristiclifeksh7836 1 year ago +1

    6:19

  • @misesliberty
    @misesliberty 2 months ago

    "Garbage in, garbage out." How about LLMs? Aren't they a good counterexample?

  • @sonamshrishmagar6035
    @sonamshrishmagar6035 1 year ago

    Alex, could we get timestamps?

  • @ZK-iu5gl
    @ZK-iu5gl 1 year ago

    I suggest creating a group to discuss the topics

  • @ayushhhhhh
    @ayushhhhhh 11 months ago +2

    I need that shirt 🙇

    • @faridsaud6567
      @faridsaud6567 4 months ago +1

      Same here!! Really, @Alexander Amini, any way of getting one?? 🙏 🙏

  • @johnpaily
    @johnpaily 1 month ago

    The greatest intellectual of the last century, Max Planck, said, "A conscious and intelligent mind is the matrix of matter." Einstein went on to call us to look deep into nature and search for the mind of God. We need to look deep into life and unravel consciousness and the root of creativity from the atomic level. This would be a stepping stone for deep learning. It can unravel the truth of nature and life and lead humanity from darkness to light.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago +1

    Are all unfolded proteins the same? That is, can a folded protein that performs a certain biological function come from any unfolded protein?

    • @AAmini
      @AAmini 1 year ago +2

      The unfolded state would have to have the same amino acid sequence as the folded state. But the unfolded protein could occupy a number of different conformations (i.e., initial states) before folding to the final folded state. And there may also be slight variations on the folded state.

  • @wobblynl1742
    @wobblynl1742 1 year ago +2

    Not me watching this and being low-key jelly of the lab prizes and t-shirts 🫠

  • @acerhigh09
    @acerhigh09 1 year ago

    How can overhype be "very dangerous"?