Speaking of intelligence - DeepMind: The Podcast (S2, Ep2)

  • Published 14 May 2024
  • Hannah explores the potential of language models, the questions they raise, and whether teaching a computer about language is enough to create artificial general intelligence (AGI). Beyond helping us communicate ideas, language plays a crucial role in memory, cooperation, and thinking - which is why AI researchers have long aimed to communicate with computers using natural language. Recently, there has been extraordinary progress using large language models (LLMs), which learn how to speak by processing huge amounts of data from the internet. The results can be very convincing, but they pose significant ethical challenges.
    For questions or feedback on the series, message us on Twitter @DeepMind or email podcast@deepmind.com.
    Interviewees: DeepMind’s Geoffrey Irving, Chris Dyer, Angeliki Lazaridou, Lisa-Anne Hendriks & Laura Weidinger
    Credits
    Presenter: Hannah Fry
    Series Producer: Dan Hardoon
    Production support: Jill Achineku
    Sound design: Emma Barnaby
    Music composition: Eleni Shaw
    Sound Engineer: Nigel Appleton
    Editor: David Prest
    Commissioned by DeepMind
    Thank you to everyone who made this season possible!
    Further reading:
    GPT-3 Powers the Next Generation of Apps, OpenAI: openai.com/blog/gpt-3-apps/
    web.stanford.edu/class/lingui...
    Never Mind the Computer 1983 about the ELIZA program, BBC: www.bbc.co.uk/programmes/p023...
    How Large Language Models Will Transform Science, Society, and AI, Stanford University: hai.stanford.edu/news/how-lar...
    Challenges in Detoxifying Language Models, DeepMind: deepmind.com/research/publica...
    Extending Machine Language Models toward Human-Level Language Understanding, DeepMind: deepmind.com/research/publica...
    Language modelling at scale, DeepMind: deepmind.com/blog/article/lan...
    Artificial general intelligence, Technology Review: www.technologyreview.com/2020...
    A Definition of Machine Intelligence by Shane Legg, arXiv: arxiv.org/abs/0712.3329
    Stuart Russell - Living With Artificial Intelligence, BBC: www.bbc.co.uk/programmes/m001...
    Find Seasons 1 & 2 on YouTube: dpmd.ai/3geDPmL
    Or search “DeepMind: The Podcast” and subscribe on your favourite podcast app:
    Apple Podcasts: dpmd.ai/2Rzlmcu
    Google Podcasts: dpmd.ai/3geDjp5
    Spotify: dpmd.ai/3w29cb4
    Pocket Casts: pca.st/30m1
  • Science & Technology

COMMENTS • 40

  • @RedWinePlease · 2 years ago +8

    What I especially like about your podcasts is that you boil down very technical ML concepts into simple, but still meaningful, ideas for the general public. Links to more detail are provided in the notes if desired. It has the recording quality of an NPR podcast. Thanks.

  • @bid6413 · 2 years ago +1

    Fascinating to have some understanding of the topic. As a layperson I can't begin to understand the complexities of AGI, but this podcast was very well produced. I look forward to more of these podcasts about DeepMind's projects.

  • @aiwithr · 1 year ago

    Simply beautiful. Great audio recording.

  • @cbaloch · 6 months ago

    DeepMind podcasts take me nearer to nature.

  • @marrrtin · 2 years ago +2

    Who doesn't love Hannah's husky contralto? She's a great explainer for DeepMind. I'm not convinced language AIs are there yet. They might dazzle superficially but have deep deficiencies. For a quick example, turn on YouTube's auto-generated subtitles for this video and it will make errors, down to misinterpreting proper nouns or anything novel, or just text that's slurred or homophonous with something else. I'm a translator in an age when machine translation is widely available, but it is not yet at a level to replace me; I have more work than I can handle. Just one issue with the script: the metaphor "clever parrot" does a disservice to parrots, extremely intelligent birds who may have more of an emotional handle on content than an AI.

  • @mikeg9b · 2 years ago +6

    I'm surprised the question wasn't posed, "Which language?" Should the AI think in English and then translate to other languages as needed? Or should the AI have a separate language model for each language? My instinct is that ideas are what's important and the words used to encode those ideas are arbitrary.

    • @triton62674 · 2 years ago +1

      There is a vast ocean of cultural and social phenomena that wouldn't have the same impact if translated literally, but I feel they might lack sufficient training data for those languages to reach the same level of quality and nuance as their English model.

    • @connorhenderson9821 · 2 years ago +2

      Your instinct is correct, though natural language models do indeed account for that through the use of semantic embeddings! Likely an AI wouldn't think in *any* language, but would learn representations in some underlying latent space that we could, in theory, use to translate thoughts between languages. This is, in fact, how modern translation engines work!
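The reply above describes a shared latent (embedding) space in which meaning, not surface language, determines a word's position. A minimal sketch of that idea, using invented toy vectors (real multilingual models learn such embeddings from data; none of the numbers below come from an actual model):

```python
import math

# Toy "embeddings": hand-picked vectors purely for illustration.
# In a real multilingual model these would be learned from data.
embeddings = {
    ("en", "dog"):   [0.90, 0.10, 0.00],
    ("es", "perro"): [0.88, 0.12, 0.02],
    ("en", "car"):   [0.10, 0.05, 0.95],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means nearby in the latent space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Same meaning, different languages: vectors land close together.
sim_translation = cosine(embeddings[("en", "dog")], embeddings[("es", "perro")])
# Different meanings, same language: vectors land far apart.
sim_unrelated = cosine(embeddings[("en", "dog")], embeddings[("en", "car")])
print(f"dog/perro: {sim_translation:.3f}, dog/car: {sim_unrelated:.3f}")
```

In this picture, translation becomes a lookup in the shared space: encode a word or sentence into a vector, then decode that vector into the target language.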

  • @coleli-sz5kn · 11 months ago

    Pure prophecy - everything came true a year after this video.

  • @antharian8053 · 2 years ago +2

    Nice summary of the current state of linguistic AI.
    It also points to the main problem of AI as a whole, which is the absence of real experience.
    The narrator also has a very welcoming voice tone.

  • @MrMichiel1983 · 2 years ago

    Colourless, green ideas (boring new ideas) sleep furiously (remain dormant but potent). Meaning is for the beholder to find.

  • @waterbot · 2 years ago

    Why wasn't Gopher or RETRO talked about? I'm curious how DeepMind approaches, sees, and tackles the challenges of language models. Episode 3, maybe?

  • @andrewhill2570 · 1 year ago

    Toxicity vs non-toxicity has a lot to do with context, which is more than just the current sentence being said. Many of the cues that people use to be "tactful" in their language have to do with context: the reason for the conversation, visual cues (age, ethnicity, gender, micro-expressions, and general appearance of the speaker), train of thought, etc. AIs must be able to understand not just how to speak but why they are responding, or whether they should even respond to certain inputs. Language (or text) alone, in the absence of other senses and knowledge of the person to whom it is speaking, greatly limits an AI's ability to perceive the context of the conversation and therefore to respond in an appropriate (non-toxic) way.

    • @dannygjk · 1 year ago

      All of this also applies to humans.

  • @dewakarprasad6100 · 2 years ago

    It was nice to understand how language is important for AGI, but reaching AGI will require more sophisticated ways of learning to understand context.

    • @aerobique · 2 years ago

      Check out the insane capabilities of GPT-3, a meta monster in terms of context(ualisation).

  • @jeremywvarietyofviewpoints3104

    Why don't some robots in fiction abbreviate words such as saying 'don't' instead of 'do not'?

  • @GursewakSingh-jz5th · 2 years ago

    One way to solve the toxicity problem is to teach models about different cultures.

  • @FrancescoDiMauro · 2 years ago +1

    So we are nowhere near having AGI, but we are already thought-policing it? 🙄

    • @0113Naruto · 2 years ago

      Yea we’re a couple centuries behind on AGI 😂

  • @dac545j · 2 years ago

    I slept furiously last night, or so I'm told.

  • @whitepony8443 · 2 years ago

    It's so true, language increases our intelligence. I was in a stupid student group in high school; maybe I was the worst of them. I have a learning disability and I'm not interested in my native language at all; I feel like it's not global knowledge. I might miss something I really need, such as cool robots and everything about Elon Musk. And somehow I hit the switch-language button and changed my language to English. I'm kidding - I tried really hard to learn English by myself. That's not the point; the point is I can access global knowledge now and I feel smarter.

  • @dannygjk · 1 year ago

    'Homing in on' *not* 'honing in on'.

  • @davidestreete · 1 year ago

    My impression is that they should drop any work on "toxicity". This is a side issue which seems to waste resources that should be going into perfecting the core. "Toxicity" is a cultural notion and can be looked at later (by which time no doubt there will be new toxicities, and I suspect many current "toxicities" will become neutralised or even positive - the example of "queer" was already given in this podcast). However, if the system is trawling the internet for input data then it will likely learn to speak "internet" and should be allowed to do so - trolls, toxics and all. Maybe then go on to indicate an improvement to the human condition whereby no words are offensive?! So, when we hear an AI speaking we will know not to take any offence.

  • @jeremywvarietyofviewpoints3104

    It seems hard to make AI better than ourselves morally.

  • @XOPOIIIO · 2 years ago +3

    You can't evaluate misinformation if it has a sensitive or political character, because the emotions and personal beliefs of the evaluators will be involved. What we can learn from history and from authoritarian societies is that political censorship is never good, even when the censors are right.

    • @triton62674 · 2 years ago +1

      Moreover, the unconscious biases of the academics involved will almost certainly be encoded into the models.

    • @aerobique · 2 years ago

      But (especially for such a machine) we quite objectively can - and only need to - distinguish between incoherent/fallacious and logically sound information.
      The problem you describe is much more a symptom of a psycho-cultural disorder / immature confusion [...]
      And an advanced GPT-4 kind of machine will tell us something along those lines, and that it needs us to realize and fix our false, ass-backwards premises - before it could even start to make sense.

    • @XOPOIIIO · 2 years ago

      @aerobique Believe me, you can be completely logically sound while expressing opinions that are completely unacceptable. I could give you many examples.

    • @v-sig2389 · 2 years ago

      What the AI tells people is not the problem. People already believe what is said by the TV and by politicians. Censoring what an AI reflects is just another chapter of political censorship. The solution is to stop powerful people from turning the population into farm animals; then they will be able to handle what the TV, politicians, or AI say.

  • @ernestoramos520 · 2 years ago +1

    Good luck with the language detox; I don't see an unbiased solution in the end.
    It's very cultural - AI needs to learn cultural context first.

  • @v-sig2389 · 2 years ago

    It is really disappointing that intelligent people have to fall into the censorship frenzy. But OK, thanks for warning people that the models are biased by small groups of people for political reasons.

  • @Jan.Feldmann · 2 years ago +1

    Xylophone

  • @goodstuff4096 · 2 years ago

    AI that passes the Turing Test already exists, but it is not being revealed.

    • @waterbot · 2 years ago

      What do you mean? What do you know??

    • @v-sig2389 · 2 years ago

      Simple: there are bots everywhere saying the current vaccines are effective against the pandemic, and half of the people believe them xD

    • @mgtproductions9524 · 2 years ago

      @waterbot Developed by the saucer people under direction from reverse vampires.