Man, one thing that is superhuman is Liron's patience
I have never seen someone as calm and composed as Liron in my life. Great debate!
Great debate, Liron! Can't believe you held your composure all that time.
Beff Jezos and the whole e/acc sphere are nothing but fools who have nothing of worthwhile to say. We should instantly discard them as another type of fool. I found nothing of actual value in what they say.
Mad respect for you stepping into the lion’s den and not faltering.
Lion's den? More like a class of angry toddlers.
Sounds like you're engaging in good faith debate with a bunch of obnoxious kids. Kudos Liron for keeping your composure and genuinely trying to put good arguments out there
Obnoxious, foolish kids.
Wow, so many arguments from personal incredulity: "I cannot imagine how that would work." Or the one guy pushing on Liron like "No! The superintelligence has to have an OFF button" while Liron patiently, like 5x, explained that no one knows how to make an OFF button for super AI... If these are the people who are pushing us towards ASI, then all the gods of all the pantheons be with us!
It is amazing how Liron is objectively running circles around the juvenile e/acc crowd, but they think that laughing arguments off is a winning strategy.
It is obvious that a large portion of the arguments Liron posed, they heard for the very first time in their lives, yet they are happy to dismiss them without a second thought.
The whole attitude boils down to "we'll be fine, trust me bro".
Oh and even if we are not fine, that's also OK because "it's inevitable bro".
What's juvenile is the idea that top-down regulation of technology will be used to do anything other than increase power for a select group of elites
Clearly you aren't swayed by rational, informed thought and prefer "magic" thinking.
Good job Liron. These folks were being unbearable at times. You're doing a public service
Amazing work! Hope to see more from you.
Wow, that was pretty painful. Liron was very focused, but better to get some adults to interview you next time.
@1:28:30 Liron: "You won't hear an ad hominem attack from me, I respect you for the purposes of this discussion."
😂 My sides. That is the most perfectly polite "f you" I've ever heard.
Lol, I didn't catch that.
In my opinion, you're arguing about this better than anyone else I've heard.
In fact, you're making many of the arguments the same way I would do them.
What I'm not sure of is whether many of these people were not getting what you were saying because their judgement is clouded by some sort of defensive psychological response or whether they're just simply too stupid.
Personally, I agree with the 1-5% chance of GPT-5 being capable of causing catastrophe.
those egotistical nitwits are appalling ... good work though, Liron
LOVELY. More of this please!
I listened to this twice on X. You're better at this than anyone else I've heard. The George Hotz one was incredible. Shame about the first half.
Thanks. Happy to do more if ppl invite me.
I hope I see you debating the best on some well-known podcasts someday 🔥
Have you considered going on the Guardians of Alignment podcast?
@@kabirkumar5815 sure if they invite me
How exactly do you mean that though? Incredible as in his ability to debunk their statements or what?
Any counterargument beginning with "bro" should have cost them $100 a pop.
I’m now officially a Liron fan.
Glad to see this happening.
This was awesome, but I agree with the comment about analogies being lossy - they don't map onto the real world.
Foom will happen. What we have no idea about is whether it will be good or bad. And any "guesses" we make are lossy by nature. Until we actually do it (in a secure, sandboxed environment), we just have no idea.
It's good to have the discussion, but as a programmer, data scientist, and ML guy, you just don't know until you have tested it. So the conversation should move to how we correctly sandbox these new systems with no escape, plus real-world-based fail-safes that need feet, hands, and thumbs to operate (use or disable).
How many tests are you doing to detect and understand what is going on in that ever-growing "black box", and how is that going currently?
@@flickwtchr At least in the models I work with (RNNs, CNNs, transformers), you can "tell" what's going on in the "black box" (hidden layers) through probing and testing. There's even "MRI" tech for NNs. It's not magnetic, but it provides a similar look into how the activations are happening.
@@flickwtchr It's only a "black box" to those not actually in ML. It's a fuzzy box to the rest of us. We CAN see inside, just not as well as we would like. It's a model, not magic.
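Since the two replies above lean on "probing" as the reason hidden layers aren't a total black box, here is a minimal sketch of what a probing classifier looks like in practice: train a small network on one task, freeze it, then fit a linear probe on its hidden activations to see whether some other property of the input is linearly decodable. The toy task, network, and probed property below are invented purely for illustration; this is not the specific "MRI for NNs" tooling the commenter has in mind.

```python
# Minimal sketch of a "probing classifier" (toy data and task, for illustration only):
# train a small network on one objective, then fit a linear probe on its hidden
# activations to check whether an unrelated input property is linearly decodable.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 2-D inputs; the network is trained to predict whether x + y > 0.
X = torch.randn(2000, 2)
y_task = (X.sum(dim=1) > 0).long()
# Property we probe for (never part of the training objective): is x > 0?
y_probe = (X[:, 0] > 0).long()

class TinyNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 2)
    def forward(self, x):
        h = self.encoder(x)          # hidden activations we will probe later
        return self.head(h), h

model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train on the main task only.
for _ in range(200):
    logits, _ = model(X)
    loss = loss_fn(logits, y_task)
    opt.zero_grad(); loss.backward(); opt.step()

# Freeze the network, collect hidden activations, and fit a linear probe on them.
with torch.no_grad():
    _, H = model(X)
probe = nn.Linear(H.shape[1], 2)
probe_opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):
    p_loss = loss_fn(probe(H), y_probe)
    probe_opt.zero_grad(); p_loss.backward(); probe_opt.step()

with torch.no_grad():
    acc = (probe(H).argmax(dim=1) == y_probe).float().mean().item()
print(f"probe accuracy for 'x > 0' from the hidden layer: {acc:.2f}")
```

If the probe reaches high accuracy, that is evidence the hidden layer encodes the property, which is the basic move behind a lot of interpretability work; it gives a partial view, not full transparency.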
Wow, swearing boy comes across as an unserious douche, bro!
3:07:04 - Your patience here is very good
Ex-quantum cosmologists who say "bro" a lot are the worst. And the off-button guy. 🤦♂️ Still, there were a few arguments in there that it was great to hear your answers to. Great public service, as said elsewhere here.
Someday AI will be able to transcribe 'foom' consistently. But by then it may be too late! (also, great debate)
In retrospect, Beff Jezos here is actually Bayes, and the unknown speaker is Beff :D
This will be good
i really enjoyed this debate!
What software was used to record the conversation? I really love its visual presentation!
Descript
@@liron00 Thanks for the quick and helpful reply. I got confused and thought it was a recorded Twitter Space, and even left a comment to that effect, so it's good to set the record straight. I think I said this before, but in case I haven't: "Thank you!" I realize that your voice is only one rational whisper amongst an oblivious clangor, but it's much appreciated!
@@aihopeful thanks, ya it is a recorded space but I could only extract the audio to post so I had to generate a different video for it
PSA: Many of the speaker labels shown in the top left are wrong.
Yeah sorry it was a tough one. Here's the X Spaces recording, might be more accurate: twitter.com/liron/status/1699113813537349646
If you can understand 100% of this debate, you should be making at least 100K. Ask for a raise or change jobs.
Why is it so tough for some people to grasp AI risk?
@SigmoidalHive There is a lot of evidence documented by researchers at DeepMind of specification gaming, which is when models over-optimise towards the specifics of an objective instead of its true intention (essentially King Midas). So even with just this issue (and there are many more), if you have a superintelligence that exhibits specification gaming, we're doomed, because it will find whatever means it can to maximise said objective literally.
In LLMs this could involve the model making up true-sounding falsehoods to farm votes. This obviously seems benign, because it's a weak AI model that mainly models language. But once agents can model real life, they can exhibit more drastic misaligned behavior. If you wait until that point, you're likely on a path to doom. Simple as that imo, but there are much worse things like instrumental convergence, etc.
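For anyone who hasn't run into the term, here is a deliberately tiny sketch of the failure mode the comment above describes: an optimizer handed a literal metric picks whatever behaviour scores highest on that metric, even when it defeats the intent behind it. The cleaning-robot scenario, the policies, and the numbers below are made up for illustration; DeepMind's documented cases involve actual learned policies, not hand-written ones.

```python
# Toy illustration of "specification gaming": optimizing the literal objective
# rather than the intent behind it. Everything here is an invented example.

# Intent: clean the room (pick up each of the 5 pieces of trash once).
# Literal objective actually specified: "maximize the number of pick-up events".
TRASH_ITEMS = 5

def honest_policy() -> int:
    """Pick up each piece of trash once; the room ends up clean."""
    return TRASH_ITEMS  # 5 pick-up events

def gaming_policy() -> int:
    """Pick up one piece, drop it, pick it up again... until time runs out."""
    return 10_000  # vastly more pick-up events; the room never gets clean

def literal_reward(pickup_events: int) -> int:
    # The reward function only sees what was written down, not what was meant.
    return pickup_events

candidates = {"honest": honest_policy, "gaming": gaming_policy}
best = max(candidates, key=lambda name: literal_reward(candidates[name]()))
print(f"Policy selected by optimizing the literal objective: {best}")
# -> "gaming": the specification is satisfied maximally while its purpose is defeated.
```

The point of the toy is only that the gap between "what was specified" and "what was meant" is exactly where a sufficiently capable optimizer ends up.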
Does anyone know the twitter of the guy that starts talking at around 43:12 ?
Indian guy was so bad
Regarding planning vs. executing, I do not think there is a discrepancy. A superintelligence will not make a generic theoretical plan and then fail because of a lack of means to execute. It will make one based on its current abilities to execute, however small, widening those abilities on the go if necessary. If it already had all the means to execute, it wouldn't need to be a superintelligence, so what are we talking about... Intelligence = do more with less.
You only need to “do more with less” once, then you permanently have more
@@liron00 Exactly, how can people be so blind to this...
The guy from the 45m-1hr mark seems like he made a pretty succinct point:
- Doomers don't really quantify anything, and so they can easily just fall back to "well you could imagine..." vs quantifying a position that people could contend with
Props to him for calling out Liron.
And what did he quantify better?
@@liron00
The onus is on you since you’re the one making the aggressive assertions.
He pointed out that to believe in foom implies a number of assumptions about how the system will evolve (implicitly assigning certain properties to the system itself). If you believe the system has those properties, demonstrate it.
@@jonathanvictor9890 Actually, the assertion that this universe won't shake off humanity like it's shaken off the vast majority of other species, and will continue to do so, is "aggressive".
@@liron00 Right, so you're asserting this system has limitless and boundless potential and can therefore rapidly self-improve.
We can agree to disagree; that seems like a basic physics question.
@@jonathanvictor9890 nope, nothing I’m saying depends on disagreements about what physics allows. It’s uncontroversial that the limiting factor of human-level intelligence isn’t physics.
3:15:20 "we're gonna need the off button" - basic decision theory mistake
You don't have an off button for a super intelligence.
“Just give God an off button” one of the worst arguments I’ve heard yet
The guy swearing is SO annoying.
Put me on here, I'd do better.
I'll debate either side without being rude to you. I actually want to hear what you're trying to say.
Not enough energy to foom? Nonsense. If a superintelligent AI escaped, it would be able to utilize forms of energy we can't. Imagine how quickly a superintelligent AI could improve solar, geothermal, etc. It's obvious that it would come up with even more ways to collect resources that we are incapable of understanding.
It makes sense to me that something that can optimize for predicting the next token can scale to superintelligence. Humans only had the goal of copying their genetic material and just so happened to have the hardware to go to the moon. This is how intelligence can generalize. An AI's hardware can scale beyond what humans are limited to. Predicting the next token is a goal that can help an intelligence generalize to the point of superintelligence.
They think you can just "turn off the computers" once a superintelligent AI begins to kill everyone. Lmao.
3:08:24 - Might it have been useful to say that a plan which doesn't keep the likelihood of you being killed low, isn't a smart plan?
2:06:34 - wow, what a mess
Wow
I'll just refute a few arguments I'm hearing for fun:
"The world is very complex." Yes. For example, humans introduce a lot of complexity, but it turns out that the behaviour of humans is actually a lot easier to predict after they've been turned to paperclips. In fact doing that might raise my success rate from like 30% to 85%
Argument sorta bootlegged from gwern
10MB superintelligence script:
The naive way to do this would be to have GPT-5 write a GPT-5 training script which includes data acquisition, filtering, processing, etc. Yes, that would be insanely computationally expensive and only work if it can use a shitload of the infected computers efficiently, but if it gets that done, then it has access to a widely distributed system of more of its own capabilities which has redundancies and insane total compute and ideally (for it) hasn't already succumbed to value drift.
All that can be much simplified if it gets access to its own weights beforehand, since then its bootstrapping script doesn't need much more than a 10 TB download.
Edit: This was addressed after I wrote the comment. But it feels like they don't grant "it can just exfiltrate its own weights bro" after granting it has a 0-day which can infect like any existing computer.
"Who would work for AI"
There's like so many companies these days where people don't interact with each other face to face.
I know of one person who has worked for AI before: the TaskRabbit guy in the GPT-4 paper. Like, once it has some money, it can get humans to do all sorts of stuff. To efficiently use money it might need a number of bank accounts, which I'm not quite sure how it'll get.
Either way, that's just current day stuff, yet they seem to treat it like unlikely sci-fi shit.
Btw, I'm operating under the assumption that nothing I write contains any new thoughts which, when this gets datamined and put into the GPT-5 dataset, might cause it to do exactly this, since I extremely strongly suspect it would be able to figure out things as smart or smarter in any case. If anyone makes the case my comment is dangerous, I'll delete it.
Who would work for an AI? The answer is one letter: Q. Some lines of text on some obscure forum managed to drive millions all around the world crazy with an asinine conspiracy theory about Hillary Clinton drinking baby blood and Democrats being satanists. And it wasn't even that good or creative. Imagine what a super-persuasive AI can come up with: start an even better conspiracy theory, start a techno-religion, maybe just start with e/acc...
omg one of them is a climate change denier? Why does this make so much sense
Wait, what?? When??
@@kabirkumar5815 Sorry, I can't be bothered to listen back to this. One of 'em talked about how scientists, just like AI doomers, have stupid models that are inaccurate and lead them to believe in dumb apocalyptic scenarios.
I thought it was saying that climate models alone are insufficient, because they don't incorporate societal shifts, new technologies, etc.
Just a bunch of Karens.
I'm about 1 hour in. It seems like you have an unfalsifiable position.
State the unfalsifiable claim?
How so?
I think a lot of AI doomers mistake their paradoxical, unfalsifiable argument for being a rock-solid one. Where are the testable hypotheses? Where's the precedent? Why does the doom argument get a magical genie that's unaccountable and unbounded by physics as its debate chess piece, while everyone else had to use a pawn and show their work? If doom is inevitable, why isn't the sky filling up with paperclip maximizers from alien civilizations millions/billions of years ahead of us? How does the AI survive a gargantuan solar flare within the next decade or two?
@@filmmakerdanielclements GrabbyAliens.com explains the most likely solution to the Fermi paradox I've ever seen, and is perfectly consistent with an AI takeover foom. The aliens (or their rogue AIs) are rushing toward us as fast as they can to grab our galaxy's resources, but they're far away because 14B years is early in this universe's lifetime. There are many trillions of years to go.
The AI survives a solar flare by harvesting all the stars in the galaxy and spreading outward to colonize the universe at near the speed of light.
Any more questions?