DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning

  • Published 22 Jun 2020
  • Unsupervised learning is one of the three major branches of machine learning (along with supervised learning and reinforcement learning). It is also arguably the least developed branch. Its goal is to find a parsimonious description of the input data by uncovering and exploiting its hidden structures. This is presumed to be more reminiscent of how the brain learns compared to supervised learning. Furthermore, it is hypothesised that the representations discovered through unsupervised learning may alleviate many known problems with deep supervised and reinforcement learning. However, lacking an explicit ground truth goal to optimise towards, progress in unsupervised learning has been slow. In this talk DeepMind Research Scientist Irina Higgins and DeepMind Research Engineer Mihaela Rosca give an overview of the historical role of unsupervised representation learning and the difficulties of developing and evaluating such algorithms. They then take a multidisciplinary approach to think about what might make a good representation and why, before giving a broad overview of the current state-of-the-art approaches to unsupervised representation learning.
    Download the slides here:
    Find out more about how DeepMind increases access to science here:
    Speaker Bios:
    Irina is a research scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also worked on developing poker AI, applying machine learning in the finance sector, and working on speech recognition at Google Research.
    Mihaela Rosca is a Research Engineer at DeepMind and PhD student at UCL, focusing on generative models research and probabilistic modelling, from variational inference to generative adversarial networks and reinforcement learning. Prior to joining DeepMind, she worked for Google on using deep learning to solve natural language processing tasks. She has an MEng in Computing from Imperial College London.
    About the lecture series:
    The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved as the leading artificial intelligence paradigm providing us with the ability to learn complex functions from raw data at unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications are touching all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.
    In this lecture series, research scientists from leading AI research lab, DeepMind, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks via advanced ideas around memory, attention, and generative modelling to the important topic of responsible innovation.
  • Science & Technology


  • Luk N
    Luk N 1 month ago

    Great lecture. Very interesting topic. Thx Irina and thx Mihaela!

  • Liz Gichora
    Liz Gichora 1 month ago

    Excellent lecture on neural networks, physics and maths. Reinforcement learning and DeepMind, thank you very much.

  • Sandip k
    Sandip k 5 months ago

    Brilliant! 2 months, 2 weeks....

  • Marcos Pereira
    Marcos Pereira 5 months ago +8

    When the presentation starts getting a little confusing and esoteric, you know we're reaching the edges of our current knowledge 😁

  • muckvix
    muckvix 5 months ago +1

    I found it completely impossible to understand anything without first reading the linked papers (or at least watching detailed talks about them). Once I know the paper, however, this lecture provides valuable high-level commentary on how that paper fits into the overall research.
    Also, I'm not sure why, when describing ways to learn representations, the talk didn't start with the simplest one: train a classification model, then use the penultimate layer as your representation.
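    The baseline the commenter describes is easy to sketch. Below is a minimal, illustrative NumPy example (toy data, sizes, and learning rate are all assumptions, not from the lecture): train a tiny one-hidden-layer classifier, then discard the output layer and keep the hidden ("penultimate") activations as the learned representation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-class data: two Gaussian blobs in 2-D (illustrative only).
    X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    # One-hidden-layer MLP: 2 -> 8 -> 2.
    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)

    def forward(X):
        h = np.tanh(X @ W1 + b1)                     # penultimate layer
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(1, keepdims=True))
        return h, p / p.sum(1, keepdims=True)        # softmax probabilities

    # Full-batch gradient descent on cross-entropy.
    Y = np.eye(2)[y]
    for _ in range(500):
        h, p = forward(X)
        g = (p - Y) / len(X)                         # dLoss/dlogits
        gW2, gb2 = h.T @ g, g.sum(0)
        gh = g @ W2.T * (1 - h ** 2)                 # backprop through tanh
        gW1, gb1 = X.T @ gh, gh.sum(0)
        W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
        W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

    # The representation is the penultimate activation, not the class scores.
    representation, probs = forward(X)
    accuracy = (probs.argmax(1) == y).mean()
    print(representation.shape)                      # (100, 8)
    ```

    The `representation` array can then feed any downstream task (clustering, linear probes, transfer), which is exactly why this supervised-pretraining baseline is a common point of comparison for unsupervised methods.
    
    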

    • Marcos Pereira
      Marcos Pereira 5 months ago

      I agree, complex equations are presented with poor explanations; it's very hard to learn from them unless you already know what they are.

  • Iliya Zhechev
    Iliya Zhechev 6 months ago

    Damn, this machine learning is cool. I've been machine learning for a few years and I'm really good, yay!

  • Free Mind.D
    Free Mind.D 6 months ago

    Thank you so much. Every day you find out there is more you don't know, always.

  • Henry Vanderspuy
    Henry Vanderspuy 6 months ago

    very important lecture

  • James-Andrew Sarmiento
    James-Andrew Sarmiento 7 months ago +2

    This is great! Although why does Irina's voice creep me out as if it is AI generated 😱

    • Vincent Prince
      Vincent Prince 2 months ago

      I'd say Mihaela's voice is not bad either :)

  • pervez bhan
    pervez bhan 7 months ago

    Thank you so much, Irina Higgins, Mihaela Rosca and DeepMind for giving intuition on Unsupervised Representation Learning : )

  • W Ya
    W Ya 9 months ago

    Thank you for sharing the research. Is there any paper you would recommend?

  • Ruben Hayk
    Ruben Hayk 9 months ago

    disabling comments is weak

  • Lei Xun
    Lei Xun 9 months ago +15

    DeepMind x UCL | Deep Learning Lectures | 10/12 | Unsupervised Representation Learning
    My takeaways:
    1. Plan for this lecture 0:57
    - In this lecture, unsupervised learning also refers to self-supervised learning 1:23
    2. What is unsupervised learning 2:13
    - In this lecture, supervised learning refers to both supervised learning and reinforcement learning
    2.1 Do we need it? Clustering; dimensionality reduction 4:13
    2.2 How do we evaluate it? 5:45
    3. Why is it important 6:51
    3.1 History of representation learning 7:30
    3.2 Shortcomings of supervised learning 9:46
    - Data efficiency; robustness; generalization; transfer; "common sense"
    3.3 What Geoff Hinton, Yann LeCun and Yoshua Bengio have said about unsupervised representation learning 15:10
    4. What makes a good representation 16:41
    5. Evaluating the merit of a representation 34:13
    6. Techniques & applications 42:47
    - Downstream tasks to evaluate representation quality: semi-supervised learning; reinforcement learning; model analysis 44:13
    6.1 Generative modelling 49:22
    6.2 Contrastive learning 1:23:06
    6.3 Self-supervision 1:34:38
    7. Future 1:42:38

  • Thomas Bingel
    Thomas Bingel 9 months ago

    Please publish a list of the referenced papers in the video description! The yellow boxes are hard to read!

    • Irina Higgins
      Irina Higgins 9 months ago

      The boxes are more readable if you change the streaming quality to HD. You could also check out the slides directly here:

  • Mohammad El Assal
    Mohammad El Assal 9 months ago +2

    I love the diversity at DeepMind

  • aditya bagwadkar
    aditya bagwadkar 9 months ago

    Thank you so much, Irina Higgins, Mihaela Rosca and DeepMind for giving intuition on Unsupervised Representation Learning : )