DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning

  • Published 18 Apr 2024
  • Unsupervised learning is one of the three major branches of machine learning (along with supervised learning and reinforcement learning). It is also arguably the least developed branch. Its goal is to find a parsimonious description of the input data by uncovering and exploiting its hidden structures. This is presumed to be more reminiscent of how the brain learns compared to supervised learning. Furthermore, it is hypothesised that the representations discovered through unsupervised learning may alleviate many known problems with deep supervised and reinforcement learning. However, because it lacks an explicit ground-truth goal to optimise towards, progress in unsupervised learning has been slow. In this talk, DeepMind Research Scientist Irina Higgins and DeepMind Research Engineer Mihaela Rosca give an overview of the historical role of unsupervised representation learning and the difficulties of developing and evaluating such algorithms. They then take a multidisciplinary approach to thinking about what might make a good representation and why, before giving a broad overview of the current state-of-the-art approaches to unsupervised representation learning.
    Download the slides here:
    storage.googleapis.com/deepmi...
    Find out more about how DeepMind increases access to science here:
    deepmind.com/about#access_to_...
    Speaker Bios:
    Irina is a research scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina won a British Psychological Society Undergraduate Award for her achievements as an undergraduate student in Experimental Psychology at Westminster University, and went on to a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also worked on developing poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research.
    Mihaela Rosca is a Research Engineer at DeepMind and a PhD student at UCL, focusing on generative modelling and probabilistic modelling research, from variational inference to generative adversarial networks and reinforcement learning. Prior to joining DeepMind, she worked for Google on using deep learning to solve natural language processing tasks. She has an MEng in Computing from Imperial College London.
    About the lecture series:
    The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, providing us with the ability to learn complex functions from raw data at unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications touch all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.
    In this lecture series, research scientists from DeepMind, a leading AI research lab, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks, through advanced ideas around memory, attention and generative modelling, to the important topic of responsible innovation.
  • Science & Technology

COMMENTS • 24

  • @leixun
    @leixun 3 years ago +26

    *DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning*
    *My takeaways:*
    *1. Plan for this lecture 0:57*
    -In this lecture, unsupervised learning also refers to self-supervised learning 1:23
    *2. What is unsupervised learning 2:13*
    -In this lecture, supervised learning refers to both supervised learning and reinforcement learning
    2.1 Do we need it? Clustering; Dimensionality reduction 4:13
    2.2 How do we evaluate it? 5:45
    *3. Why is it important 6:51*
    3.1 History of representation learning 7:30
    3.2 Shortcomings of supervised learning 9:46
    -Data efficiency; Robustness; Generalization; Transfer; "Common sense"
    3.3 What Geoff Hinton, Yann LeCun and Yoshua Bengio have said about unsupervised representation learning 15:10
    *4. What makes a good representation 16:41*
    *5. Evaluating the merit of representations 34:13*
    *6. Techniques & applications 42:47*
    -Downstream tasks to evaluate representation quality: semi-supervised learning; reinforcement learning; model analysis 44:13
    6.1 Generative modelling 49:22
    6.2 Contrastive learning 1:23:06
    6.3 Self-supervision 1:34:38
    *7. Future 1:42:38*

  • @Marcos10PT
    @Marcos10PT 3 years ago +11

    When the presentation starts getting a little confusing and esoteric, you know we're reaching the edges of our current knowledge 😁

  • @adityabagwadkar472
    @adityabagwadkar472 3 years ago

    Thank you so much, Irina Higgins and Mihaela Rosca and DeepMind for giving intuition on Unsupervised Representation Learning : )

  • @wy2528
    @wy2528 3 years ago

    Thank you for sharing the research. Is there any paper that you will recommend?

  • @pervezbhan1708
    @pervezbhan1708 3 years ago

    Thank you so much, Irina Higgins and Mihaela Rosca and DeepMind for giving intuition on Unsupervised Representation Learning : ) ppt

  • @lukn4100
    @lukn4100 3 years ago

    Great lecture. Very interesting topic. Thx Irina and thx Mihaela!

  • @freemind.d2714
    @freemind.d2714 3 years ago

    Thank you so much. Every day you just find out there is more you don't know, always.

  • @lizgichora6472
    @lizgichora6472 3 years ago

    Excellent lecture on neural networks, physics and maths, reinforcement learning and DeepMind. Thank you very much.

  • @sandipk1632
    @sandipk1632 3 years ago +1

    Brilliant! 2 months, 2 weeks....

  • @bingeltube
    @bingeltube 3 years ago

    Please publish a list of the referenced papers in the description of the video! These yellow boxes are hard to read!

    • @irinahiggins5657
      @irinahiggins5657 3 years ago +1

      The boxes are more readable if you change the streaming quality to HD. You could also check out the slides directly here: bit.ly/3eqYlyt

  • @muckvix
    @muckvix 3 years ago +2

    I found it completely impossible to understand anything without first reading the linked papers (or at least watching detailed talks about them). Once I know the paper, however, this lecture provides valuable high-level commentary on how that paper fits into the overall research.
    Also, I'm not sure why, when describing ways to learn representations, the talk didn't start with the simplest one: learn a classification model, then use its penultimate layer as your representation (sketched after this thread).

    • @Marcos10PT
      @Marcos10PT 3 years ago

      I agree: complex equations are presented with little explanation, so they are very hard to learn from unless you already know what they are.

    • @ssssssstssssssss
      @ssssssstssssssss 3 years ago

      That's not really the simplest approach. KMeans, agglomerative clustering and many other clustering algorithms are simpler.
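
    A minimal sketch of the penultimate-layer approach mentioned in the comment above, assuming a PyTorch setup; the architecture, layer sizes and dummy data below are illustrative assumptions, not taken from the lecture:

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Supervised classifier whose penultimate layer doubles as a representation."""
    def __init__(self, in_dim=784, hidden_dim=128, num_classes=10):
        super().__init__()
        # "encoder" ends at the penultimate layer; "head" maps features to class logits
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, hidden_dim), nn.ReLU(),
        )
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

    def represent(self, x):
        # Penultimate-layer activations used as the learned representation
        return self.encoder(x)

model = Classifier()
x = torch.randn(32, 784)                 # dummy batch of flattened images
labels = torch.randint(0, 10, (32,))     # dummy class labels

# Ordinary supervised training step (optimizer omitted for brevity)
loss = nn.functional.cross_entropy(model(x), labels)
loss.backward()

# After training, discard the head and keep the 128-d features for downstream tasks
with torch.no_grad():
    features = model.represent(x)        # shape: (32, 128)
```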

  • @henryvanderspuy3632
    @henryvanderspuy3632 3 years ago

    very important lecture

  • @aBigBadWolf
    @aBigBadWolf 2 years ago

    Those are cool robot drawings!

  • @mohammadelassal8079
    @mohammadelassal8079 3 years ago +2

    I love the diversity at DeepMind

  • @i4ko95
    @i4ko95 3 years ago

    Damn, this machine learning is cool. I've been doing machine learning for a few years and I'm really good, yeah!

  • @iinarrab19
    @iinarrab19 3 years ago +2

    This is great! Although why does Irina's voice creep me out, as if it were AI-generated 😱

  • @rubenhayk5514
    @rubenhayk5514 3 years ago

    disabling comments is weak