DeepMind x UCL | Deep Learning Lectures | 12/12 | Responsible Innovation

  • Published 8 Jul 2020
  • What can we do to build algorithms that are safe, reliable and robust? And what are the responsibilities of technologists who work in this area? In this talk, Chongli Qin and Iason Gabriel explore these questions, connected by the theme of responsible innovation, in two parts. In the first part, Chongli explores why and how we can design algorithms that are safe, reliable and trustworthy through the lens of specification-driven machine learning. In the second part, Iason looks more closely at the ethical dimensions of machine learning, at the responsibility of researchers, and at processes that can structure ethical deliberation in this domain. Taken together, they suggest that there are important measures that we can, and should, put in place if we want to build systems that are beneficial to society.
    Download the slides here:
    storage.googleapis.com/deepmi...
    Find out more about how DeepMind increases access to science here:
    deepmind.com/about#access_to_...
    Speaker Bios:
    Chongli Qin is a research scientist at DeepMind. Her primary interest is in building safer, more reliable and more trustworthy machine learning algorithms. Over the past several years, she has contributed to developing algorithms that make neural networks more robust to noise. A key part of her research focuses on functional analysis: properties of neural networks that can naturally enhance robustness. She has also contributed to building mathematical frameworks that verify, or guarantee, that certain properties hold for neural networks. Prior to DeepMind, Chongli studied at Cambridge, where she read the Mathematical Tripos and studied scientific computing before doing a PhD in bioinformatics.
    Iason Gabriel is a Senior Research Scientist at DeepMind where he works in the ethics research team. His work focuses on the applied ethics of artificial intelligence, human rights, and the question of how to align technology with human values. Before joining DeepMind, Iason was a Fellow in Politics at St John’s College, Oxford, and a member of the Centre for the Study of Social Justice (CSSJ). He holds a doctorate in Political Theory from the University of Oxford and spent a number of years working for the United Nations in post-conflict environments.
    About the lecture series:
    The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, providing us with the ability to learn complex functions from raw data with unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications touch all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.
    In this lecture series, research scientists from DeepMind, a leading AI research lab, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks, through advanced ideas around memory, attention and generative modelling, to the important topic of responsible innovation.
  • Science & Technology

COMMENTS • 29

  • @leixun
    @leixun 3 years ago +15

    DeepMind x UCL | Deep Learning Lectures | 12/12 | Responsible Innovation
    My takeaways:
    1. Overview 0:16
    2. Motivation 0:50
    2.1 Risk 2:25
    2.2 What are our responsibilities? 6:25
    3. Specification-driven ML 7:25
    4. Building adversarially robust networks 9:36
    4.1 Adversarial training 12:28
    4.2 Adversarial evaluation: finding the worst case 16:00 (see the sketch after this list)
    4.3 Gradient obfuscation 24:58
    4.4 Verification algorithm 27:20
    4.5 Other specifications 33:30
    5. Ethics and technology 34:49
    5.1 Ethical training data 36:52
    5.2 Algorithmic bias 38:31
    5.3 Power and responsibility 40:13
    5.4 Science and value 41:28
    5.5 Responsible innovation 42:21
    6. Principles and processes 42:21
    6.1 Principles 43:38
    6.2 A five-step process 46:10
    6.3 Two final tests 56:18
    7. The path ahead 58:30
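
    For item 4.2, here is a minimal sketch of what "finding the worst case" can look like in code: a projected-gradient-descent (PGD) attack under an L-infinity bound. This is only an illustration assuming a PyTorch-style image classifier; the names and settings (model, eps, alpha, steps) are assumptions for the sketch, not taken from the lecture.

        # Illustrative PGD adversarial evaluation; assumes a PyTorch classifier
        # `model` mapping image batches to logits. All values here are assumed.
        import torch
        import torch.nn.functional as F

        def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=40):
            # Search for a worst-case perturbation of x inside the L-inf ball
            # of radius eps around x: ascend the loss, then project back.
            x_adv = x.clone().detach()
            for _ in range(steps):
                x_adv.requires_grad_(True)
                loss = F.cross_entropy(model(x_adv), y)
                grad, = torch.autograd.grad(loss, x_adv)
                x_adv = x_adv.detach() + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
                x_adv = x_adv.clamp(0.0, 1.0)                          # stay in valid pixel range
            return x_adv

        # Adversarial accuracy is then the model's accuracy on pgd_attack(model, x, y);
        # adversarial training (item 4.1) minimises the training loss on such examples.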

    • @neurophilosophers994
      @neurophilosophers994 3 years ago +1

      Thank you for these summaries

    • @leixun
      @leixun 3 years ago +1

      @@neurophilosophers994 You are welcome!

    • @pervezbhan1708
      @pervezbhan1708 2 years ago

      ua-cam.com/video/r_Q12UIfMlE/v-deo.html

  • @lukn4100
    @lukn4100 3 years ago +4

    Great lecture and big thanks to DeepMind for sharing this great content.

  • @datta97
    @datta97 3 years ago +4

    Is there a lecture on Graph Neural Networks (GNNs)?

  • @fabb802
    @fabb802 9 months ago +1

    Awesome lecture, thanks! The adversarial evaluation part was especially enlightening :)

  • @joshhickman77
    @joshhickman77 3 years ago +3

    I'd never heard of gradient obfuscation before. Optimizers are challenging to reason about!

  • @minervaaurora8109
    @minervaaurora8109 3 years ago +1

    Just an idea my friend keeps talking about: wouldn't it be easier and more commercially/personally useful to begin with botanical flora - to use AI and big data to create a camera app that comprehensively identifies (a) comprehensive nomenclature, (b) the parts of the plant (roots, seeds, leaves, bark), (c) ecological best practices and geographical and optimal agricultural growing conditions, (d) known medicinal/nutritional uses, (e) molecular breakdown, etc. - and then store the info in a botanical cloud-based blockchain to ensure redundancy and integrity.

  • @damiendeedunne351
    @damiendeedunne351 3 years ago

    I would like to see AI that can mix music - using compressors and EQ, balancing the levels and so on - as music mixing is a matter of human taste to our hearing sense.

  • @billyf3346
    @billyf3346 3 years ago

    The fact that this company doesn't have a monthly demo release cycle is deeply troubling to me. :|

  • @chesslover1016
    @chesslover1016 3 years ago

    AlphaZero vs updated Stockfish - who's with me?...

  • @FabianRoling
    @FabianRoling 3 years ago +7

    This lecture taught me surprisingly little after already having watched previous DeepMind videos. The first half basically says "if your AI is bad at what it's doing, then it can't do good reliably", which is just common sense, and then explains the generator+discriminator model as one of the many improvements to AI - something that was already explained in an older video (a bit better, in my opinion) and that has basically nothing to do with ethics. The second half basically said "behave morally, please" and explained something that's basically just utilitarianism.

    • @msheart2
      @msheart2 3 years ago

      This shite is all about mind-control AI and enslaving the population.

    • @mikesully110
      @mikesully110 3 years ago

      Yeah, basically I don't think they'd have any idea how to control one of these neural networks if it started misbehaving, other than throwing out all the training data and starting again, hoping it doesn't come up this time. It's not a problem with AIs that play StarCraft, but what if we had a similar AI that ran automated crop harvesters?

    • @FabianRoling
      @FabianRoling 3 years ago +3

      Why do none of the replies here have anything to do with what I said? If you want to comment on the video directly, please do that.

    • @mikesully110
      @mikesully110 3 years ago

      @@FabianRoling lol bald

    • @PainYvonWedel
      @PainYvonWedel 3 years ago +4

      "The second half basically said "behave morally, please" and explained something that's basically just utilitarianism."
      But that's actually a very interesting point. In the end, there is not much more we can do than hope that researchers do not make immature iterations accessible. Imitating voices is a good example, where he also names solutions, or at least improvements. Ultimately, it is probably unavoidable that sooner or later someone will no longer adhere to "behave morally, please". No law, no rule will help here. But we can limit circulation if the powerhouses stick to these seemingly basic rules.
      Let's also not forget that he only had 25 minutes available. It's kinda hard to go beyond the basics.

  • @Zarozian
    @Zarozian 3 years ago

    Bring it to more games like Warframe.

  • @yusufmujeeb9693
    @yusufmujeeb9693 3 years ago

    Nice video as always. I have a question though. I'm trying to buy a laptop for CAD as well as programming. I've only just moved into the world of computer vision/machine learning and was wondering if the HP that is supposedly coming later this month, with a Ryzen 4800H and a 2060, would be an intelligent buy. I read that Ryzen doesn't support AVX-512 instructions etc. Can you throw more light on this for me? Thank you

  • @jfbooke447
    @jfbooke447 3 years ago +2

    We need Silicon Valley to tell us more about ethics 😂

    • @jamesle4330
      @jamesle4330 3 years ago +1

      DeepMind is in London

    • @msheart2
      @msheart2 3 years ago

      They have none, no ethics, no morals, be they in SV or London. I don't need them to tell me about ethics.

    • @AZ-zy8sz
      @AZ-zy8sz 3 years ago

      @@jamesle4330 It's owned by Google

    • @pervezbhan1708
      @pervezbhan1708 2 years ago

      ua-cam.com/video/r_Q12UIfMlE/v-deo.html