#83 Dr. ANDREW LAMPINEN (Deepmind) - Natural Language, Symbols and Grounding [NEURIPS2022 UNPLUGGED]

  • Published 13 Jun 2024
  • First in our unplugged series from #NeurIPS2022
    We discuss natural language understanding, symbol meaning and grounding, and Chomsky with Dr. Andrew Lampinen from DeepMind.
    We recorded a LOT of material at NeurIPS; keep an eye out for the uploads.
    Pod: anchor.fm/machinelearningstre...
    Video sponsored by XRAI Glass! xrai.glass/ We are hiring ML software engineers!
    Support us! / mlst
    References
    [Paul Cisek] Beyond the computer metaphor: Behaviour as interaction
    philpapers.org/rec/CISBTC
    Linguistic Competence (Chomsky reference)
    en.wikipedia.org/wiki/Linguis...
    [Andrew Lampinen] Can language models handle recursively nested grammatical structures? A case study on comparing models and humans
    arxiv.org/abs/2210.15303
    [Fodor & Pylyshyn] Connectionism and Cognitive Architecture: A Critical Analysis
    ruccs.rutgers.edu/images/pers...
    [Melanie Mitchell et al] The Debate Over Understanding in AI's Large Language Models
    arxiv.org/abs/2210.13966
    [Gary Marcus] GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about
    www.technologyreview.com/2020...
    [Bender et al] On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
    dl.acm.org/doi/10.1145/344218...
    [Adam Santoro, Andrew Lampinen et al] Symbolic Behaviour in Artificial Intelligence
    arxiv.org/abs/2102.03406
    [Ishita Dasgupta, Lampinen et al] Language models show human-like content effects on reasoning
    arxiv.org/abs/2207.07051
    [Yao et al] ReAct: Synergizing Reasoning and Acting in Language Models
    arxiv.org/pdf/2210.03629.pdf
    ai.googleblog.com/2022/11/rea...
    [Fabian Paischer] HELM - History Compression via Language Models in Reinforcement Learning
    ml-jku.github.io/blog/2022/helm/
    arxiv.org/abs/2205.12258
    [Laura Ruis] Large language models are not zero-shot communicators
    arxiv.org/pdf/2210.14986.pdf
    [Kumar] Using natural language and program abstractions to instill human inductive biases in machines
    arxiv.org/pdf/2205.11558.pdf
    Juho Kim
    juhokim.com/

COMMENTS • 30

  • @AICoffeeBreak
    @AICoffeeBreak A year ago +4

    ☕ Loved every second of this! Thanks for making it.

  • @davidw8668
    @davidw8668 A year ago +15

    Great interview. He's such a grounded and intelligent guy.

  • @mikenashtech
    @mikenashtech A year ago +1

    Wonderful conversation, Tim and Dr Lampinen. I am excited to see the evolution of LLMs, especially when driven by so many perspectives. The wider the conversation, the better the technology. Thanks for your time and the links to the papers too. Mike Nash

  • @rjScubaSki
    @rjScubaSki A year ago +3

    Thanks for all the references. So useful!

  • @DelandaBaudLacanian
    @DelandaBaudLacanian A year ago +1

    Thanks for this amazing interview and production, and for getting the references into the description... you rock, Tim!

  • @siarez
    @siarez A year ago +1

    Thank you, Tim! This was great.

  • @dr.mikeybee
    @dr.mikeybee A year ago +2

    Great job, Tim!

  • @snarkyboojum
    @snarkyboojum A year ago

    That audio artefact at 6:45 was a bit spooky :D

  • @Soul-rr3us
    @Soul-rr3us A year ago

    Love that he speaks about Peirce's semiotic triad. I try to bring it up on Discord whenever the syntax/semantics discussion appears. It's a powerful model (a toy sketch follows this thread).

    • @dr.mikeybee
      @dr.mikeybee A year ago

      But how can it be utilized? My initial reaction is to use it in the curation of training sets, but how can that filtering process be automated?
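
    For readers unfamiliar with the triad mentioned above, here is a toy Python encoding of its three parts, purely illustrative and not from the discussion:

        # Toy illustration of Peirce's semiotic triad: a sign stands for an
        # object by virtue of an interpretant that mediates the relation.
        from dataclasses import dataclass

        @dataclass
        class SemioticTriad:
            sign: str          # the signifier, e.g. the word "cat"
            obj: str           # the referent, e.g. an actual cat
            interpretant: str  # the sense a reader makes of the sign

        example = SemioticTriad(
            sign='the word "cat"',
            obj="a particular furry animal",
            interpretant="the concept formed on reading the word",
        )
        print(example)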

  • @dr.mikeybee
    @dr.mikeybee A year ago +1

    Agency for an LLM would be a self-prompting system running in a loop while maintaining history.
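
    A minimal sketch of such a loop in Python, assuming a hypothetical call_llm placeholder standing in for any text-completion API:

        # Hypothetical self-prompting agent loop that maintains history.
        def call_llm(prompt: str) -> str:
            """Placeholder: send the prompt to a language model, return its completion."""
            raise NotImplementedError("Wire this up to a real LLM API.")

        def self_prompting_loop(goal: str, max_steps: int = 10) -> list:
            history = ["Goal: " + goal]
            for _ in range(max_steps):
                # Feed the accumulated history back in as the next prompt,
                # so the model conditions on its own earlier outputs.
                prompt = "\n".join(history) + "\nNext step:"
                step = call_llm(prompt)
                history.append("Next step: " + step)
                if "DONE" in step:  # let the model signal completion
                    break
            return history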

  • @XOPOIIIO
    @XOPOIIIO A year ago +2

    But Chomsky didn't speak against deep learning in your interview; he spoke against language models specifically, and he noted that protein folding and that kind of work is actually promising. That's how I remember it.

    • @TimScarfe
      @TimScarfe A year ago +1

      He poured scorn on language models in particular, quite rightly. On deep learning in general he says it's useful engineering but not a contribution to science. He uses automated captioning software, and we used deep learning to rescue his interview 😅

  • @davidw8668
    @davidw8668 A year ago +1

    Re the paper on implicatures (Laura Ruis): I had to try how the new davinci-3 model does at understanding implicatures. I tried a few tricky, context-heavy ones and it got all of them right, including a perfect explanation. This is a big step from OpenAI.
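
    A sketch of how such a test could be scripted, assuming the pre-1.0 openai Python client that was current at the time; the prompt imitates the question style of the Ruis et al. paper:

        # Probe an implicature with text-davinci-003 via the pre-1.0
        # `openai` client's completion endpoint.
        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder

        # The literal reply never says "no", but pragmatically it means "no".
        prompt = (
            "Esther asked, 'Can you come to my party on Friday?' "
            "Juan replied, 'I have to work.' "
            "Did Juan accept the invitation? Answer yes or no:"
        )

        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=3,
            temperature=0,  # deterministic output for a cleaner test
        )
        print(response["choices"][0]["text"].strip())  # expected: "no"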

  • @vectoralphaAI
    @vectoralphaAI A year ago +1

    How old is he? He looks really young.

  • @dr.mikeybee
    @dr.mikeybee A year ago

    Because of bandwidth limitations, we necessarily build imperfect control systems based on poor statistical models and social history. AI can build much better statistical models, so it doesn't need fuzzy, imperfect systems.

  • @justinlloyd3
    @justinlloyd3 A year ago

    Something weird is going on with this video. It looks like time is stretched out.

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 A year ago

    Everyone in informatics who is interested in AI should study neuroanatomy much more: there is no need to reinvent the wheel. Broca's area and Wernicke's area are comparable to LLMs. So imagine a minimal thinking human brain and reproduce it in silico. You need very few subcortical structures to do it, and you can eliminate many brain structures and still have intelligence. For example, big parts of the brain are useful for movement but are absolutely not necessary for intelligence; localized strokes show unequivocally that many parts of the brain are not necessary for it. Even bilateral structures are basically there just for redundancy, so you could produce an AGI in silico by vaguely reproducing the systems of a minimal single brain hemisphere.

  • @klammer75
    @klammer75 A year ago +3

    Embodiment is key... meaning doesn't exist outside of context, IMHO, and you can't get much more contextual than embodiment 🤔🧐😉

    • @dr.mikeybee
      @dr.mikeybee A year ago

      If, indeed, we exist in a simulation, then our embodiment is simulated. Does that mean we can embody our AI agents in a simulated environment and get great results? Is language alone an environment? What are the minimum attributes that make up an environment? Why?

  • @MassDefibrillator
    @MassDefibrillator A year ago

    Isn't it somewhat of a contradiction to suggest that these language models can give us scientific insight while also saying that the field will probably be entirely focused on practical uses in the coming years? I don't see a way to resolve both of these things as true.

    • @davidw8668
      @davidw8668 A year ago

      They can help to discover scientific insights. They are tools.

    • @MassDefibrillator
      @MassDefibrillator A year ago

      @@davidw8668 That would be a potential way to resolve the apparent contradiction. But a tool in the scientific sense is one designed to give a transparent measurement of something. These tools are designed as black boxes; almost by their entire design, they cannot give you insights in that scientific sense of a tool. So it's not as if they can be used to measure language and then give some useful output back.
      The only insight that can be gained from them is how to make a better language model for practical purposes. And if what he says is accurate, that any discussion around the philosophical, theory-of-mind side of things will disappear, then where are the scientific insights supposed to come from?

    • @davidw8668
      @davidw8668 A year ago

      @@MassDefibrillator Good points. I agree there are issues with definitions in the field of AI, starting with calling ML "AI" and including the word "science" in CS even though it's a classic engineering field. Some computer engineers seem to deny the utility of definitions per se, yet try to answer scientific questions and make predictions from them, which of course is doomed to fail.
      Re tools: LLMs are tools that, in their various applications, extend the cognitive abilities of humans. That's it. Amazing, but nothing to do with superhuman capabilities in a bottle, and they certainly won't answer questions about neurology, human intelligence or consciousness. But apparently this fuels popular discussions, which is great for learning.
      However, there is remarkable scientific research being done with LLMs in cognitive science and linguistics, isn't there? E.g. the research questions related to how much syntax can be extracted from text alone. And I would say that despite not knowing the exact math of neural nets, you can certainly run scientific experiments to answer such questions. But again, the scientific method is imperative for that.

    • @MassDefibrillator
      @MassDefibrillator A year ago

      @@davidw8668 "However, there is remarkable scientific research being done with LLMs in cognitive science and linguistics, isn't there? E.g. the research questions related to how much syntax can be extracted from text alone. And I would say that despite not knowing the exact math of neural nets, you can certainly run scientific experiments to answer such questions. But again, the scientific method is imperative for that."
      Sure, but I was going off his statement, where he said that this side of AI was going to disappear; but he also said that he thinks these models can provide scientific insight. Hence the apparent contradiction.

  • @JasonDrWho
    @JasonDrWho A year ago

    He needs to spend more time outside.