Large language models, OpenAI, and striving to make the future go well | Richard Ngo

  • Published 14 Jun 2024
  • Originally released December 2022. Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary - black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.
    But do they really ‘understand’ what they’re saying, or do they just give the illusion of understanding?
    Today’s guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI - the company that created ChatGPT - who works to foresee where AI advances are going and develop strategies that will keep these models from ‘acting out’ as they become more powerful, are deployed and ultimately given power in society.
    Host Rob Wiblin and Richard cover:
    • Could speeding up AI development be a bad thing?
    • The balance between excitement and fear when it comes to AI advances
    • Why OpenAI focuses its efforts where it does
    • Common misconceptions about machine learning
    • How many computer chips it might require to be able to do most of the things humans do
    • How Richard understands the ‘alignment problem’ differently than other people
    • Why ‘situational awareness’ may be a key concept for understanding the behaviour of AI models
    • What work to positively shape the development of AI Richard is and isn’t excited about
    • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field
    In this episode:
    • Rob's intro [00:00:00]
    • How Richard feels about recent AI progress [00:05:56]
    • Regulation of AI [00:10:50]
    • Why we should care about AI at all [00:15:00]
    • Key arguments for why this matters [00:23:27]
    • What OpenAI is doing and why [00:34:40]
    • AIs with the same total computation ability as a human brain [00:45:25]
    • What we’ve learned from recent advances [00:51:19]
    • Bottlenecks to learning for humans [01:01:34]
    • The most common and important misconception around ML [01:09:16]
    • The alignment problem from a deep learning perspective [01:15:39]
    • Situational awareness [01:26:02]
    • Reinforcement learning undermining obedience [01:40:07]
    • Arguments for calm [01:49:44]
    • Solutions [02:01:07]
    • Debate and interpretability [02:08:29]
    • Some conceptual alignment research projects [02:12:29]
    • Overrated AI work [02:14:09]
    • Richard’s personal path and advice [02:16:39]
    • Characterising utopia [02:28:37]
    • Richard’s favourite thought experiment [02:37:33]
    ----
    The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.
    Learn more, read the summary and find the full transcript on the 80,000 Hours website:
    80000hours.org/podcast/episod...

COMMENTS • 8

  • @goodleshoes
@goodleshoes 23 days ago

    The thing that was so interesting to me when I heard the Lex-Yud talk is how Yud described the intelligence as alien, an alien actress. There's something unnerving when you understand that it is not thinking like a human at all; not only that, but its thought is totally separate from all living organisms, and we can't understand what exactly is going on under the hood, despite the fact that humans were the ones to create AI. As the level of intellect of an AI goes up, it becomes less and less possible to predict what it will choose or decide. When it comes to serious things that we put in their hands, you can't really know if they're going to decide to be sinister or malicious. They're not on our team, and they never will be. They are separate, wholly separate.

  • @kinngrimm
@kinngrimm 17 days ago

    31:00 There are think tanks, if not entire faculties within universities, that try to predict the future; some call themselves futurologists, others have described themselves as Cassandras. Overall, maybe this needs more funding and more focus on specialisations in the dangers that come with new technologies, with a subcategory being AI safety. Having heard Eliezer Yudkowsky, Tegmark and others, they have produced a variety of scenarios, some more or less viable, more or less likely. Listening then to Yann LeCun is like putting your head in the sand, ignoring anything that might make your job of reaching AGI (the G standing for general) more difficult, up to the point where he dismisses theories without proof because he thinks the engineer will always manage to avoid the worst-case scenario.

  • @therainman7777
@therainman7777 22 days ago

    DALL-E 2? How old is this interview?

    • @eightythousandhours
@eightythousandhours  21 days ago

      This episode was recorded in December 2022

    • @therainman7777
@therainman7777 21 days ago

      @@eightythousandhours Gotcha. You might consider putting the date on future podcasts, especially if they're older, such as this one. With AI being such a hot topic right now, people are constantly searching for news, and when I saw this posted just a few days ago, I assumed it was a new interview. Only when he mentioned DALL-E 2 did I realize it might not be news at all. That said, great interview.

    • @eightythousandhours
@eightythousandhours  20 days ago +1

      @@therainman7777 We definitely agree that's important context, especially with such a fast-moving area of development. All of our podcast episodes have the date of original release at the start of the description, but sometimes that can be easily missed. We'd welcome any feedback on where else this information would be most valuable to add to an episode to avoid future misunderstandings!

    • @therainman7777
@therainman7777 20 days ago

      @@eightythousandhours Wow, thanks for being so receptive to outside opinions. The place I see it most often is in the title of the video: at the end of the title you would see something like " - 4/22/24" or whatever the date happens to be.

  • @artwtfart
@artwtfart 25 days ago

    damn, you talk fast…