The Fastest Way to AGI: LLMs + Tree Search - Demis Hassabis (Google DeepMind CEO)
- Published 26 Feb 2024
- Full Episode: • Demis Hassabis - Scali...
Transcript: www.dwarkeshpatel.com/p/demis...
Apple Podcasts: podcasts.apple.com/us/podcast...
Spotify: open.spotify.com/episode/6SWb...
Follow me on Twitter: / dwarkesh_sp - Science & Technology
The way Demis communicates is definitely multi-modal
😂😂
People with heritage from southern Europe do have this trait, and he is half Greek-Cypriot, it seems.
Really really ridiculously smart.
Use Perplexity AI, it does AI plus search // end video
I've no idea how you're getting these guests so early in your content creation journey, but you're doing an amazing job -- really insightful interviews. I expect your channel to be much larger by the end of the year.
The only guy asking hardcore technical questions.
They probably listen to this lol (or people working at OpenAI / Deepmind def do)
yea it's totally mind boggling that content which falls in line with the visions of corporations marketing "AI", parroting their line and their spiel, exists. wow, so weird, I wonder how this happens. I'm stupidly naive and don't know about industry plants and adjacent concepts, by the way. wow the world is so complicated and amazing! everything is a series of coincidences and incompetencies!
This is the content that makes me question why I pay for cable.
Is cable still a thing? I thought fiber was everywhere now.
You, sir, are my favorite interviewer. Keep up the great work!
My favourite too. He does a great job.
The way LLMs build a world model is very inefficient, if not impossible. They may not be able to find the accurate data distributions of world events. I think it would be better to make use of active reinforcement learning to help build the world model, maybe based on the raw models built by the LLMs.
This is probably going to be your best interview 😅. Demis is great
Cannot wait for this!
Love the interviews man. Keep em coming!
FYI the audio seems slightly off from the video. Looking forward to the whole episode!
If it were possible, I would have invested 40% of my capital in Demis Hassabis... You are just a motivation, sir.. hats off!!
I love that Demis and I agree on how to achieve AGI to the point where we both literally use the word “bootstrap” to describe the approach to get System 2 thinking from LLMs.
Looking forward to the release of the full podcast episode!
This video deserves a subscribe to this channel.
cool, looking forward
demis, a legend
Great interview. But also, are we hearing what's being said? Super sobering: we just casually end on this note of, "We built bots that can win board games. Oh, and now we're moving on to building & deploying systems that billions will inevitably treat as if they were intelligent beings... and we're still clarifying what values to reward in it. I mean, not that the rest of the world has *any* trouble identifying and living out planetarily-beneficial values or anything..."
Eep! This is *the* question. Sure, as a computer scientist, I'm as down as the next hacker to geek out about architectures, RL, etc. But this isn't just an engineering problem; these are companies building AI because they're *legally obliged* to grow as much as possible, irrespective of the strain on society or our planet.
Let me just say the quiet part of this clip out loud: the more we talk about (and approach) AGI, the more we need to think less about engineering, and more about shaping our existing super intelligences (society, states, culture, corporate giants) to be more ecologically sustainable & centered on human flourishing, not legally bound exploitation for market growth. I hope we're hearing that clearer every time we listen to interviews like this with leaders in AI.
I'm starting to be convinced we'll just destroy ourselves. Best case, AGI will avoid it and keep us in check.
It's the mass of adolescent computer scientists with no world model of themselves that's just going to make this wash over us.
It already happened with it. Seriously, is the world a better place? I doubt that.
It will be worse with the additional, exponentially increased power of AGI in humanoid robots like Figure 01.
The way I read this: "LLMs (plus) Tree Search (minus) Demis Hassabis"... Are you trying to tell us something?
XDDDD
Where's the full interview? Great questions; should've pushed to get his actual thoughts on the reward function.
Psyched for this one!
Is there a full interview?
Feel so vindicated seeing this, I have been thinking this for 3+ months
DeepMind, with all its head start, has obviously failed to produce an AGI system.
🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, Amazing interview!
Where is the full interview of this? Still coming or did I miss it?
tomorrow
It's interesting that Demis is thinking about adding tree search to LLMs, while the chess guys right now seem interested in *removing* tree search and getting by with policy alone (several papers on this recently).
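As a toy illustration of the tradeoff this comment describes (the "policy" table and all scores here are invented for the sketch, not any real model): a policy that scores next tokens can be decoded greedily, or a shallow tree search can look ahead and find a higher-value sequence that the greedy policy misses.

```python
# Hypothetical toy "policy": scores for the next token given a prefix.
# A real LLM policy would be a neural net; this lookup table just
# illustrates why search on top of a policy can help.
POLICY = {
    (): {"a": 0.6, "b": 0.4},      # policy slightly prefers "a" first...
    ("a",): {"x": 0.1, "y": 0.2},  # ...but "a" has only weak continuations,
    ("b",): {"x": 0.9, "y": 0.3},  # while "b" leads to a strong one.
}

def greedy(depth=2):
    """Policy alone: pick the top-scoring token at each step."""
    seq = ()
    for _ in range(depth):
        scores = POLICY[seq]
        seq += (max(scores, key=scores.get),)
    return seq

def tree_search(depth=2):
    """Exhaustive lookahead: maximize the summed score over the whole path."""
    def best(seq, d):
        if d == 0:
            return 0.0, seq
        candidates = []
        for tok, score in POLICY[seq].items():
            sub_score, sub_seq = best(seq + (tok,), d - 1)
            candidates.append((score + sub_score, sub_seq))
        return max(candidates)
    return best((), depth)[1]

print(greedy())       # ('a', 'y') -- total score 0.8
print(tree_search())  # ('b', 'x') -- total score 1.3
```

The chess result the commenter mentions is the converse bet: with a strong enough policy, the lookahead buys little. In this toy, the policy is deliberately shortsighted so the search wins.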
Someone please interview Vaswani, the guy behind transformers
AlphaGeometry???
My thinking was more small language models with SFP, ... But I'm a little biased to small models 😉
I am very interested to learn whether we need Neuro-Symbolic AI like Gary Marcus says.
Depending on how you define symbolic, these systems are already neuro-symbolic. For example, tree search, or search in general as discussed here, and as used in, say, AlphaGo, is a symbolic operation. We could also say that LLMs map symbolic inputs to symbolic outputs (natural language), but more meaningfully, we could call systems like ChatGPT neuro-symbolic as they combine LLMs with existing symbolic tools, e.g. external math solvers like Wolfram Alpha.
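A minimal sketch of that last point (the "LLM" and the router here are stand-ins invented for illustration, not any real API): arithmetic is parsed and evaluated exactly by a symbolic tool, while everything else falls through to the neural model.

```python
import ast
import operator

# Map supported AST operator nodes to exact arithmetic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def symbolic_eval(expr: str) -> float:
    """Exactly evaluate +, -, *, / expressions via the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def fake_llm(prompt: str) -> str:
    """Stand-in for a neural model: fluent but not guaranteed exact."""
    return "I think the answer is around 40."

def answer(prompt: str) -> str:
    try:
        # Route to the symbolic tool when the prompt parses as arithmetic...
        return str(symbolic_eval(prompt))
    except (SyntaxError, ValueError, KeyError):
        # ...otherwise fall back to the neural component.
        return fake_llm(prompt)

print(answer("6*7"))           # "42" -- exact, from the symbolic tool
print(answer("what is 6*7?"))  # plausible guess from the "LLM"
```

The design choice mirrors the comment: the neural part handles open-ended language, and the symbolic part handles the cases where exactness matters.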
@@hunteroffire9 Do you know Gary Marcus? He is one of the proponents of neuro-symbolic AI and has been consistently criticising OpenAI and other popular LLMs as gimmicky auto-predict, saying they should focus on neuro-symbolic AI instead.
Idk why he doesn't consider present AI, based on next-word prediction, to be of much value.
Gary Marcus is wrong about most things, IMO. And I definitely believe he’s wrong about this. I recommend reading “The Bitter Lesson” by Rich Sutton. It’s a short essay available for free online. It explains how time and time again, researchers wanted to think that we needed to build our human ideas and knowledge into AI in order to improve its intelligence, but time and time again that turned out to be incorrect, and all that was really needed was scale, both in data and compute (this is what he calls the “bitter lesson”). Gary Marcus is a perfect example of someone making this mistake. He’s also been wrong about most AI-related claims he’s made in recent memory, and it’s worth noting that few top-level AI researchers agree with him (and in many cases they don’t even seem to respect him).
One reason not to bootstrap with existing data: to avoid inheriting biases
An LLM already is a tree search on steroids.
When's the full version coming?
tomorrow
When he talks about going full Alpha Zero on an AGI model, how would that work without feeding it any human knowledge? For example, how could it independently come up with the English language if there are infinite possible languages?
Well, I don't know exactly what they'll do, but since the model is an RL model, it all comes down to the rules of the simulation they put the model in and the reward function. I'm not an expert on RL, and I don't know if you're looking for a more detailed answer or just the basic one I gave.
They could train robots virtually using some physics engine, then unleash them into the real world with physical embodiment. The problem is we need to inject some kind of goal into it, and it must be able to understand it. That is probably why Hassabis stated that things get easier by incorporating a prior world model (based on some multi-modal architecture trained on 'real' data).
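A toy sketch of the point above about the simulation's rules and reward function (the environment, reward, and all constants here are made up for illustration): a tabular Q-learner, given nothing but the rules and a reward signal, discovers the path to the goal by pure trial and error, with no human data at all.

```python
import random

random.seed(0)
GOAL = 2                                   # states are 0, 1, 2
Q = [[0.0, 0.0] for _ in range(GOAL + 1)]  # Q[state][action] value estimates

def step(state, action):
    """The 'rules of the simulation': action 1 moves right, action 0 moves
    left. The only reward is 1.0 for reaching the goal state."""
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(500):          # episodes of pure trial and error
    s = 0
    for _ in range(50):       # cap episode length
        if s == GOAL:
            break
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < 0.2:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        nxt, r = step(s, a)
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[nxt]) - Q[s][a])
        s = nxt

# The learned greedy policy heads straight for the goal from every state:
print([max((0, 1), key=lambda act: Q[s][act]) for s in (0, 1)])  # [1, 1]
```

Everything the agent ends up "knowing" is implicit in `step` and the reward, which is the AlphaZero-style bet: get the simulation and reward right, and no human knowledge needs to be fed in.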
I don't see it happening. In fact, the AGI avenue feels to me like a dead end, like fusion energy: the power source of the future, and it always will be.
Seems like AI development is becoming so complex that it needs a huge AI computer to analyse different methodologies to take it forward at pace. Just a short matter of time before AGI.
They should use alpha zero to win the game of creating a better LLM.
He just explained why Sora is such a HUGE deal as a world model simulator
Why did you speed up the video?
Yeah, I feel like I'm having a panic attack watching this.
@@matthewburson2908 😭right?!
the guy who killed google not sundar
The lack of a reward system/win condition is what makes LLMs so hard. Wonder what the win condition is for LLMs... Probably a mix of manual human approval and some kind of political correctness/safety sentiment metric.
Dwarkesh Patel the hottest podcaster ...=)
An LLM will never be an AGI
Please, please, god no. No AGI from Google, please 🙏 Guys are incompetent even in testing basic LLMs. Literally every release was a fuck up one way or another. Unacceptable shit that you don’t see even from open-source teams with a bunch of randoms or 10 people startups 🤦
Obviously, they are not even close to achieving it hahaha
Dwarkesh, please speak a bit slower, so we can process what you are saying.
Ah come on Dwarkesh, releasing stuff à la Williamson piecemeal before the full interview is incredibly annoying. I know the algorithm game must be played, but I'm still annoyed.
Woke AGI? No thanks!
what does that even mean?
What’s your icon pic?
your brain is cooked lil bro, log off
@@ataraxia7439Night Vision camera.
@@conall5434 Liberal ideological bias embedded into the AI, like what's happened with Gemini.
Tomorrow is now