OpenAI’s huge push to make superintelligence safe | Jan Leike
- Published 21 Aug 2023
- In July 2023, OpenAI announced that it would be putting a massive 20% of its computational resources behind a new team and project, Superalignment, with the goal of figuring out how to make superintelligent AI systems aligned and safe to use within four years.
Today's guest Jan Leike, Head of Alignment at OpenAI, will be co-leading the project.
---------
The 80,000 Hours Podcast features unusually in-depth conversations about the world’s most pressing problems and what you can do to solve them.
Learn more, read the summary and find the full transcript on the 80,000 Hours website: 80000hours.org/podcast/episod...
And now he's gone. It will be interesting to see: 1. what he and Ilya do in the future, and 2. what Altman does about the gaping hole left in their alignment team, and how he handles the publicity and speculation about why they left.
Yeah, very good questions. I can't help but feel rather frustrated now that such a forward-thinking mind has left OpenAI... So many experiments and tests are not done, so many alignment approaches are left unexplored... I think OpenAI should rethink its approach deeply...
@@enlightenment5d alignment is the single most important hard question facing humanity right now, so of course we're gonna skip it on the road to market.
Godspeed, Jan.
I would not be surprised to learn that the superalignment issue is contentious within OpenAI. I don't think it becomes a problem in the current regime of autoregressive GPT. Maybe in 2 or 3 generations, when the system has a degree of agency, the ability to run by itself, or does some form of self-improvement.
My guess is that the current generation of the system, not yet released, is the crux of the issue where alignment is needed and not happening, hence the resignations. Isn't agency something they have been hinting at regarding GPT-5? AI tech is certainly determined to achieve such systems, because how else can these AI revolutionaries achieve their dream of having their AI systems run entire companies that they reap the profits from?
Well, that didn't age well.
I guess the program didn't go so well. We need more whistleblowers in AI, that's for damn sure.
Keep it up!
Unbelievable this has so few views
And only one comment!
Why does he have two microphones?
This one aged nicely
This sounds to me like raising teenagers 😂
This didn’t age well…
This didn't age well... It looks like it's an all-out race to AGI with no safety at all, and I doubt it's just OpenAI. Seems to me it's practically here; I mean, LLMs have beaten the Turing test already, and these things are just plain smart in ways we don't even understand.