AI Expert Yudkowsky Warns Destiny About The AI Threat | LIVE DEBATE
- Published 22 Sep 2023
- LIVE DEBATE: Destiny debates AI expert Eliezer Yudkowsky at the Manifest conference.
Date: 23 September, 2023
Eliezer Yudkowsky
► / esyudkowsky
▼Follow Destiny▼
►STREAM - www.destiny.gg/bigscreen
►TWITTER - / theomniliberal
►DISCORD - discordapp.com/invite/destiny
►REDDIT - / destiny
►INSTAGRAM - / destiny
►MERCH - shop.destiny.gg/
Check Out My Amazon: www.amazon.com/shop/destiny
Buy My Merch: shop.destiny.gg/
#destiny
#politics
#debate
Lex Fridman wants his thumbnail back
LMFAO GOT EM
Ain't no wayyyy
😂
gg
Yo! That was what I was thrown off on. Facts
How dare he compare his orbiters to newborns
Newborns cry and whine way less than his orbiters
repent to God
@@Baggerz182 repent to deez nuts
@@Baggerz182 ok now what
@@bruhdabones now you send nooods to your crush and cross your fingers.
It works
@@BeerBreath702 holy shit she said “that’s actually kinda big 🤤”
This guy is like a blast of nostalgia. This is how nerds used to be, kids.
I took him for granted. Its been a while since I saw a guy like that. And honestly i dont miss it.
How would you say nerds are now? Lol
@@pspmaster2071 they've all transitioned
@@iammrpositive5108 I don't think I would call those nerds.
I know multiple people in real life who are studying/practicing computer science and they are exactly like this. Honestly, I hate how the term nerdy just means whatever now: anyone who isn't conventionally attractive and plays video games, anyone who is interested in anime, comics or obsessed with a fandom, anyone who studies something with lots of math or just someone who doesn't leave the house that much, has bad social skills and also no girlfriend of course.
Nowadays being nerdy sometimes just means being smart lol
Back then it was like the whole package, you know?
There's no way people use the internet in 2023 and come away from it with the idea that a hat like that is an acceptable thing to wear in public without getting roasted
And the audacity that it’s YELLOW
he looks like a nerdy pimp
Damn, stop doggin on him. That's a sick ass hat
I wish everyone dressed like that. Things would be better.
There’s an entire “subculture” where the most important thing is to never hurt other people’s feelings unless they’ve committed certain moral transgressions.
If he’s not wearing it just for this interview and wears it out in public, it’s because nobody has the heart to say anything about it, especially if they’re not close with him.
That aside, he’s simply WonkaMaxing
I find it fascinating that every AI expert has differing opinions on AI. I was at a lecture with an AI expert who said that the current state of AI is just advanced algorithms surrounded by a bunch of buzzwords.
I couldn't listen to this dude speak both times I've seen him pop up for this exact reason. AI might be undersold and misunderstood to an extent, but for the foreseeable future, it's 100000000000000% limited by the data we put into it and the creativity of the people setting its algorithms. Emotionality, subjectivity, and bias play a GIGANTIC role in how we problem solve, implement logic, and how conscious experience emerges. If AI develops consciousness, it will be a ruleset put in place and be *MASSIVELY* tempered/regulated by human beings in the process of its development.
@@divinemeta Depends on how you define creativity. Many people come up with creative ideas based on input received from tutors, schools and their peers. We are all just a collection of lifetimes of data, passed down generation to generation. Hell, most people aren't creative in the least. So if you have an AI that can solve problems that humans haven't been able to think of, even if it comes from data input by humans, you could still arguably call that creativity. If you define it as coming up with something, off the dome, with no input from anyone (don't know how this would even be possible) then sure, but I think there is a good argument to make that they are creative. They certainly come up with wild art that I would describe as creative. I've asked chatGPT to make raps dissing my friends and they are actually really good. If a human came up with that rap, we would call them creative, but chatGPT and that human get their data, language and everything from the exact same place.
How can you call a computing machine that only understands formal logic creative?
They are not algorithmic, they are statistically based.
@@mrmr2488 I already hinted at it in the comment section of this thread, but all inputs that fuel AI's creativity are going to be human derived. As you said, humans derive creativity from other humans... but what we call AI now isn't going to have any sort of emergent creative process. A team is going to come up with creativity algos that will just reference rulesets and data we feed into it.
There are no needs or contextual functions for an "AI" to make art without a person utilizing it to make something for someone else. It'd be like asking a rock what it wants for Christmas.
Is this the final boss of Reddit?
No, it IS reddit
Tragically accurate assessment.
i steal comments from other clips and post them as if they are my original thought
For all the shit people give Eliezer, I have yet to see anyone actually refute any of the points he tends to make in even a semi-convincing way.
Some people don't even engage at all and just dismiss him based on the way he's presenting himself or his arguments, even though that has literally nothing to do with whether his logic is correct or not.
Absolutely shameful discourse in this comment section.
People can't comprehend what he's saying, so they just make some idiotic remark
When being taken seriously is literally a matter of life or death, make sure to wear the silliest hat possible.
This genuinely pisses me off. I'm an AI doomer myself, and I wish Yudkowsky would take his role more seriously
@@cuylerbrehaut9813 it worries me more that he seems to be wearing that entirely seriously and unironically.
@cuylerbrehaut9813 What do you think is the strongest argument you have for the AI doomer stuff? Cuz as a computer scientist it really feels like most of it is just people who watch too much sci-fi and have no idea how any of this actually works.
@@aniranth7289 We'd have to go point-by-point. First, do you agree that if an AI was significantly smarter than a human, like 300 IQ and much better social skills, much better creativity, 1000 times faster thoughts, etc etc etc, and it wanted to kill humanity, it would succeed? Even in the most disadvantaged position where it's in a box and everyone knows it's super dangerous, and so on?
@@aniranth7289 if you believe that 1) an agent smart enough would attempt (due to instrumental convergence) and succeed (due to superior intelligence) at eliminating all life for the purpose of a goal 2) that level of intelligence is theoretically achievable 3) we are making progress in this area but have no way to measure the gap between where we are and that deadly threshold ==> AI DOOM is a non-0%-chance threat. Also, the "bUt i'M a cOmPuTeR sCiEnTiSt" authority-based argument is irrelevant (I am yet to hear a solid argument disproving AI that's actually rooted in factual computer science), this is simply a matter of game theory/evolution. The only role computer science would play here is by demonstrating that this cannot be done, which it can't. To be clear, I also can't prove that pDoom is 100%, but it's definitely not 0 and, with the whole world at stake, even 1% is too much. Also, even without the ASI killing us all, current AI growth will destroy the job markets, create social instability, destroy our legal system (perfect deepfakes invalidate evidence) and make mass murder extremely easy.
I feel like Yudkowsky acts way too smug when talking about consciousness for someone who's as incapable of concretely describing it as anyone else. The baby consciousness debate basically boils down to Destiny saying "I don't think babies don't have consciousness" and him responding with "If you knew what consciousness was like I do, you would obviously think babies are not conscious". It's kind of insufferable to listen to.
Also, I get that Destiny was probably seriously out of his element here so he approached the one topic he knew he's gonna have something to say about, but he really should have talked more about actual AI stuff instead of getting bogged down on the consciousness thing.
hard disagree. your description of what yud said is super biased and wrong. he literally said he defines consciousness as ability to have qualia. why comment this?
@@ketamineautism "Consciousness is the ability to have qualia" is kind of a useless definition in their conversation which wasn't just about what consciousness is, but was about *proving* that it exists in another entity. In that conversation is where Yudkowsky claimed to have some grander understanding of consciousness that he never explained.
If I say, "I can't prove to you that a baby's consciousness exists" and you say "you can prove it if you understand consciousness. Consciousness is just having qualia" then sorry, but that doesn't cut it.
@@someone98760 i was responding to the criticism that he couldn’t concretely describe it, because “ability to have qualia” is pretty concrete to me. i agree that yud left a lot unexplained, but i think he was trying to avoid rabbitholing hard because of the time limit. i get why some people find him sort of reddit and hard to listen to
@@ketamineautism Cool. I like Yudkowsky regardless of his AI opinions and he's obviously smart. Just wish we got a larger explanation of his thoughts but yeah, probably the wrong format here.
The problem with defining consciousness as "ability to have qualia" is that qualia is defined as "instances of subjective, conscious experience". It's a circular definition that doesn't give you anything to work with or even put you closer to understanding what consciousness is.
Destiny really ended off with "Yeah... well I think babies are conscious so...."
This really was a debate of all time
repent to God g
And the counter point to that was "well when you know as much as I do about consciousness you'll know babies aren't conscious," which is so arrogantly ignorant he should win an award for ending a relatively decent conversation on the stupidest note outside of CPAC
@@vyvianalcott1681 not as ignorant as a baby
@@vyvianalcott1681 What i got from the cringy hat man is that he thinks if we can understand the mechanisms, which we currently can somewhat comprehend and/or are even able to study or at least try to, we can eventually understand consciousness too. But that line of thinking can be completely wrong if consciousness is more like someone pressing buttons on the keyboard, here no matter how well u understand how the keyboard works and its effects on the computer, u may never even come close to understanding the person who is pressing the keys. Destiny did mention something similar, but the guy dismissed it saying that if u think like that you are doing some weird philosophy stuff. Cringe...
@@sam3764 You don't come close to understanding it but the fact of the matter is still that there's a mechanical explanation for why the button was pushed, it's just a more complex one than you think.
Omg it's the dude who wrote that cool Harry Potter fanfic
I'm pretty sure that's oxhorn, the fallout youtuber.
@@hayvenb nah it was Eliezer, wrote the most popular one that exists
@@ThrustWithVigor I was joking lol
Is this fellow even a technologist?
Destiny should have asked him "what's an NFT?"
I bet he is, he probably wrote his AI programs from the command line
Guys in 15 years Destiny is going to be 50, wtf
I mean he's 14 years older than 20 so.
omg wtf no way i just realized in 50 years he will be 85! thats insane i didn't realize that's how it works
Holy shit people actually age over time!? New fear unlocked.
No he isn't
@@TheColdOnezhaha I will never die
I was really frustrated by Yudkowsky's selective engagement with topics. Whenever Steven would ask an interesting question for him to directly respond to, if he did not have a sufficient answer he would just go into the biological mechanisms of the phenomena in humans. I get that there is a good analysis to be had from a perspective like that sometimes, but it just felt really weaselly and you could tell he had no direct answer to give. Feels like a showman playing a crowd rather than engaging in good faith.
Agreed
It's like he was constantly viewing every topic from "Once we understand what consciousness is, it will be evident that ______" Such an unproductive way of thinking. The worst part is he kept acting like he said something extremely deep and profound every time, when in fact he really just said a whole bunch of nothing.
The AI doomers including this guy are just glorified marketers for their VC companies which are funding a bunch of ai startups and hoping one of them makes an actually profitable ai product, and letting idiots like him run interview circuits which are basically disguised buzz marketing techniques.
It could be an ego thing where he wants to remain in the domain of neuroscience because that's what he's most comfortable with, but it could also be a subconscious thing where he recently read a bunch of neuroscience papers to prepare for this discussion and so his mind was unintentionally inclined to stay in that domain.
Either way, I think Destiny handled it well. He's not afraid of delving into any topic. I do think this is a showcase of his good intuition for asking all the right questions.
Yeah, it seems like everyone is ragging on destiny for not knowing anything about AI stuff. He for sure has a good understanding on current AI issues and thinking. I think the awkwardness was Destiny getting frustrated with the wacky responses from dude that didn't really leave anything to engage with.
I'm not the smartest person... But I genuinely feel like this guy is just puking out words that don't actually contribute to the point he's trying to make.
He definitely uses way too much filler
hes the vaush of ai
I agree .. you're not the smartest person.
@@markupton1417 sick burn, dude lol
Truly a debate of all time
repent to God g
I think Destiny generally chooses to be more civil when he's in person. There were moments I was surprised he didn't push back when he easily could have, but I think this was one of those correspondences where he wanted to enjoy himself a little, and talk about stuff that actually interests him, instead of having another confrontation that he's already had with other people online 50 times on stream. I can sympathize with Destiny getting bored of facing the same talking points over and over and having to rebut with the same refutations over and over.
Get a new joke for fucks sake.
I'm not so sure about that
The fact that they got sidetracked from AI to baby qualia is really upsetting. AI is such an important issue and almost everyone just glosses right over it. I feel like I'm being gaslit by society.
Take ur meds
9:20 this is probably the clearest sign that Yudkowsky does in fact have actual autism.
That was so painful to listen to.
that was hard to watch
repent to God
@@Baggerz182 I repent at least 15 times a day nazarene
I was never in doubt Yudkowsky had autism. Destiny probably does too to a lesser extent. It's extremely common in high (and low) IQ individuals.
I actually love when Destiny talks philosophy. I know he doesn't often like it but I enjoy the back and forths in these conversations.
I think Destiny enjoys philosophy, because he's good at asking the critical questions and creating the hypotheticals necessary to make unintuitive delineations essential to important philosophical conclusions.
However, he probably just dislikes talking to philosophers. Philosophers can be too idealistic, and Destiny is kind of a hardcore pragmatist. He only uses philosophy to explain what he observes, not to prescribe social change the way some philosophers do.
@@youtubeviolatedme7123 Funny how Destiny would describe himself as the exact opposite regarding philosophy 😂
@@youtubeviolatedme7123 Pragmatism is a philosophy, though. Funnily enough, if you actually read Yudkowsky's philosophical writings they're often very close to the views Destiny frequently espouses, though more thoroughly thought-out (at least based on what I've seen/read from both), so I think they'd probably get along philosophically at least.
@@jonathanhenderson9422 I agree with you. I actually think they would agree on more than they disagree.
When he had to clarify whether Destiny was just being snarky for 2-3 minutes, I knew this was going to be insufferable.
Yeah he spent so long trying to verify his reception of obvious social cues 😂
@@bruhdabones I think we're all trying to say what he is without saying it
Autism FeelsStrongMan
@@loverofbigdookies Let's just say Destiny is running on Windows and social cues guy over here is running on Linux 😅
did he ever actually give an argument for why babies are probably not conscious? All I heard was 'they have small brains', which is obviously not a real reason to come to this conclusion
Yes. He just knows it.
Considering birds can learn individual human words, how to trade, basic problem solving, and have some concept of language (grammatical rules, modifiers, etc), and arguably also complex social groups and dynamics, brain size isn't the full story in describing how functional a brain is.
5 year old crow > Baby
I can summarize since he didn't literally say it, but let's say in abstract bullshit numbers the average adult human brain is the equivalent of a CPU with 1 billion transistors and can handle 32 instruction sets.
A baby's brain has theoretically like 200 million transistors and 4 instruction sets. And a crow has like 20 million transistors but 8 instruction sets. The potential is there but the neurons are untrained; the instructions and patterns aren't yet embedded. Birds are not actually "intelligent", and brain size is indicative of capacity for different cognitive functions and memory. (More neurons, more links, more potential.) Most birds operate very similarly overall, and we project meaning onto their behaviors through our perception bias. They all follow the simple action->reward chain. Some of them have different toolsets, and some of them are more disposed to human pleasing and interaction, but that doesn't necessarily mean anything about their thinking capacity. Of course there's variance, not only by species but by individuals, and we attribute certain behaviors to intellect, but it's impossible to justify. What if the other birds simply have instincts that tell them to be more wary of humans, that crows are missing? They both possess the capacity, but crows are advantaged for the assigned tasks that you attribute to intellect by nature of negligence of self preservation. This isn't a question of the brain, but of the behavior being attributed biased meaning.
This is just musing, I don't know that much about birds. You can just quantify the number of patterns animals can handle, and their ability to abstract, and compare them with children. Yes, children are outperformed on measurable metrics, until a developmental point. That is literally why he doesn't assign children consciousness. Brains and accumulations of patterns have to develop. If a human doesn't develop enough to communicate in any measurable form that it's conscious, then there's no reason to assume it is. Which is callous, and a good reason to better measure neural signals so that we can qualify better, and observe people who cannot communicate, to see if they're braindead or not.
His argument is that it takes a certain level of "training" for the mind to grow enough to produce consciousness. Which is likely given how much the brain grows and changes over a humans early life (look it up some time, shit's wild). IMO my need to err on the side of caution leads me to place the cutoff somewhere around the 2nd trimester. But intuitions differ and the facts are limited. I don't think Eliezer is insane for his opinion.
@@BoraHorza456 I will say this: what explanation do we have for not remembering when we are babies? We tend to bank memories when something big happens, whether it's good or traumatizing. Babies go through all sorts of traumatizing and good things, so why don't they bank memories? I would venture to guess that consciousness and memory are connected in an intimate way. It would likely have something to do with being conscious enough to recognize that something good or bad is even happening, to be able to learn from it. Babies seem to only just respond to stimuli. They get hurt, they cry. They are hungry, they cry. They are happy, they smile. They clearly lack something that makes them a child or teenager or adult. You can simply say it's "age" but what does "age" bring to the table that you don't have when you are born? Maybe consciousness? I'm not sure, but to pretend it's something settled is just silly. We don't understand consciousness at all, which is why I tend to really dislike Destiny's abortion stance.
"The same way you don't care about an ant, this thing is not going to care about you. These things we are summoning into the world now are not demons, they're not evil. But they're more like the Lovecraftian Great Old Ones. They are entities that are not necessarily going to align themselves with what we want." - Geordie Rose, founder of D-Wave.
Sounds absolutely promising what could go wrong.
Destiny debates reddit irl
😂
He looks exactly how I would expect the author of "Harry Potter and the methods of rationality" to look like.
Bro kill me, he sat down with an AI expert to basically have a consciousness debate
Consciousness is an important philosophical topic when talking about AI. At some point AI might become conscious, and so the correlate in humans is that they also seem to "become" conscious at some point
Isn't that literally one of the pinnacle debate topics of AI? I see what you mean though as it was mostly related to babies and AI was an afterthought.
to be fair he wasn't actually answering any of the AI questions
@@spacelevator Not really. It has implications for ethics, like how we ought to treat AI's. But it has little impact on AI risk, which is what Eliezer is mostly worried about.
the true AI is the abortion debates we made along the way
Yudkowsky is a bit r/iamverysmart. Watch the first 10 minutes: he never really answers exactly the question that was asked. It feels like he rather pulls out some adjacent story that he has in his mind and then plays the tape
“There’s gonna be a pretty terrible worst case scenario when it comes to AI. For one, the risk of AI developing synthetic pathogens is a pretty big risk, especially when you consider over half the US population didn’t take the vaccine. When you think about it, our odds of being totally wiped out by 2060 are significant.”
“Gotcha”
Destiny knows too little about AI. I wish he would look at some of the classic problems. He basically stumbled into the Chinese Room argument on his own, so it would be cool to see him actually prepared. This is WAY more interesting than abortion debates
For real. I was incredibly disappointed with Destiny's performance this debate
@@Chadmlad he doesn't know everything
@@osvelit but why go into a debate if you know nothing about the subject? Destiny not being omniscient isn’t a good defense
This is like humans creating their own species-level abortion mechanism and attempting to actually be conscious about it.
GENUINELY i was so disappointed that the AI segment of this conversation was so brief! I would like to hear these two talk about it at length.
Really wasted potential. The consciousness segue was a total waste of time, and EY didn't approach the argument by debunking the "reasons AI can't kill us". I think EY was too nervous/starstruck to approach this as he normally would (probably explains the golden hat too).
Starstruck? Wtf you talking about
Wow, Vaush looks really different these days.
Destiny’s response throughout this debate: “Okay”
I imagine he had to hold back from pointing out the guy hadn't actually answered the question, over and over
Wasn’t a debate
@@crayondude8014 i know but why call the video a debate
@@sobangja He absolutely did.
@@saschah.174 absolutely didn't
I think with Yud you really need more time and narrower scope.
He hates saying wrong things, but to avoid that he needs to specify/do a lot of ground work.
Dipping your toes into a subject with him for a few minutes really isn't where he shines.
If babies don't have consciousness, then how can they wind up so permanently impacted by the presence or absence of touch, affection, interaction, etc?
Do they?
Blank slate that needs to be programmed to be conscious.
If you don't do those things, they still develop consciousness, just not a program that is optimised to run on their hardware.
because it affects the endocrine system and brain development. people can also be permanently impacted by the effects of the intrauterine environment on themselves as a fetus.
their neurological wiring is being updated by those interactions regardless of whether they are consciously experiencing them
You are more than your consciousness
Give back my Goddamn 30 Minutes!!!
Who's Al and why is this guy an expert on him?
Manifest Destiny
This was the most Le Redditor debate I've seen in a while
This guy gives off Spongebob food inspector vibes
AI is one of those topics where people who are remotely informed can sound like geniuses with a few fancy words when in reality they're saying absolutely fuck all.
True honestly
Truly true
never seen anything so true in my life
AI is designed to suck money out of VCs to fund guys like this being able to buy any hat they could ever imagine. Look up Ben Goertzel and you'll see AI guys LOVE silly hats.
It's also a topic where people who are completely uninformed will confidently deny the weight of the subject just because they don't feel like learning about it (maybe they think it's too fedora coded, or it "gives elon musk vibes" or something idk).
Yeah I'm just gonna go out at the limb and assume this dude is a hack and skip this one.
fair assumption and good call
From having spent time on lesswrong in the past I can confirm that eliezer is proof that just because someone has a high iq doesn't mean they can't be redacted
Safe bet. Under 72 Archetypes he's a Narcissist Archetype.
I have to agree. I would never call myself an AI expert, or even close, but I've trained a few models in various applications and I feel like I have a decent understanding of the mechanism of neural nets and I can't see how his arguments map on to the technology and how it works.
Like when he says that AI (as the current ML philosophy) has the capacity to understand the world and you better than any human can, the only way that statement makes any sense in my mind is like saying "A knife understands the act of cleaving better than any human". Like sure, in a weird sort of way, but I wouldn't say the knife understands anything.
The rest of it he kind of handwaves away with technical jargon and pointless acronyms. like when he says "I never would have predicted that AI could be used to model protein folding, at least not an LLM" LLM just means large language model. Like no shit, an LLM isn't the tool you'd use for that, but I could easily see how ML/neural nets could be used to do exactly that, given the right data and approach. Like of course you're not gonna just be able to ask chat GPT about undiscovered fundamentals of the universe and have it spit out the correct answer.
Every time a see someone being super doomer about AI, I just can't help but feel like they have a complete misunderstanding of the technology and what it's actually doing, and instead have bought into the marketing and buzzwords surrounding it without actually learning what any of it means.
@@72_archetypist is there a dork archetype or where do u fit in
It's so hard to take these AI guys seriously if this is the best they have
He really isn’t, he became famous for being the first person to really publicize that ai safety is really important but he’s not really as relevant anymore
I mean you can't really expect AI computer science neckbeards to be able to briefly and simply have a discussion on consciousness 😂
He's not an AI expert, he's the clown that wrote harry potter and the methods of rationality
Exactly, this was absolutely nothing but "good words". So sad we need to tolerate these kind of competence actors.
That’s because he’s not an expert on AI, he’s not a computer scientist either, he’s just a charlatan full stop.
This guy considers AI as conscious, not because the machines are conscious right now, but that they are on a path that leads to consciousness.
And yet a human baby, with proper socialization, will also grow into a conscious being. But babies aren't conscious?
Chat GPT is conscious because of its potential, but not a human baby?
It's the apparent self-awareness that he seems to be taking into account
Then he really must not have spent much time around infants if he has never seen a human baby demonstrate self-aware behavior.
I don't wanna be mean but goddamn that guy seems annoying to be around
Definitely one of the debates of all time
Definitely 2 people talking of all time
This guy is shockingly over confident in his own understanding of everything
Hes got autodidact brain
Because he is the human embodiment of reddit.
Because he's thought deeply about it for 20 years as opposed to, "Wow... everyone is talking about this, I should look into it"
this felt like destiny streaming and pulling in a tier 5 sub lmao
A baby's brain has more connections than an adult one, but just less structure so from a technical perspective a baby's brain seems more complex.
Isn’t complexity an arbitrary metric? Why would one definition of complexity take precedence over another?
@@MrAdamo That's kind of my point. Added complexity in no way implies being more purposeful. Often the base complexity of AI systems and processing power are referred to as reasons why things will surely advance when the bottleneck is wholly elsewhere.
I've seen this guy in a few interviews now and I'm always left confused as to whether he's a genius or just knows how to speak well on subjects he's really not an expert on.
More the latter.
The two are not mutually exclusive. The greatest geniuses in history weren't experts on everything. Newton believed in and spent as much time researching alchemy as physics. Having read a lot of Yudkowsky's writing on LessWrong I'm quite confident he's a genius. That doesn't mean he's an expert on everything or isn't wrong about a lot.
short interviews aren't a good measure, read his book Rationality: AI to Zombies, he's easily one of the smartest humans alive today.
@@mrc4435 It's not a debate, and considering he's quoted in Stuart Russell's textbook on AI apparently he doesn't sound ridiculous to the guy that literally wrote the book on the subject.
He's very smart. And the fault of the meandering interview doesn't lie with the interviewee but rather the interviewer, he should guide the conversation in a rational way but seems intent on spreading his confusion. Eliezer critiques him saying let's try follow one thread at a time so we can come to some solid conclusions, instead they end up jumping from topic to topic and the interview is no better than your average dorm room philosophical speculation save for some humorous remarks by Yudkowsky.
lol what an unexpected crossover
oh boy this was a doozy of a stream... gotta send love to 4thot and the other mods just letting DGG trash it without getting an ego about it
Although the whole idea of the event was wacky, with no topics in mind, it felt like a wasted opportunity to discuss AI properly with Mr. AI Doomsayer himself.
Was expecting an interesting debate between why AI will for sure kill us all and why AI will not no matter how powerful it becomes
Having said that, after listening to this guy talk about the FDA I feel better about humans surviving that much longer
Does this Yudkowski guy have any peer reviewed papers on the dangers of AI? Or is he just some guy?
@@MrRazmut When Yudkowsky first started writing about Friendly AI I don't think "peer reviewed papers" was even a thing in that field. Back in the day, Stuart Russell's textbook was the only one on the subject in general and it dates to the mid-90s. Yudkowsky is referenced in the later editions for his paper in Global Catastrophic Risk published by Oxford, if that counts. The thing is that Yudkowsky has mostly been working on alignment while almost everyone else in AI has been working on capability. The two are very different, and progress in capability has far outstripped progress on alignment. That's where the concern comes from, and for why alignment is important I'd highly recommend Robert Miles's channel.
I agree. He didn’t really make any arguments; he just stated ‘eh, probably 10-20 years’ and supported it with ‘if I’m not lying’. Uninteresting honestly, and I think he should have given us some indication of what AI can currently do which is dangerous.
@@jonathanhenderson9422 when he started writing peer reviewed papers weren't a thing in AI research?
Boy, do I have a bridge to sell you.
Expert at what? creating hysteria?
Newborns are conscious. If you spend any time with them you can see them interacting and interpreting their surroundings immediately. Their level of communication and understanding is growing, doesn't start at adult level. They need time to understand the world as we do and express that to us.
They are sentient, like a dog or a cat (which on average are about as smart as a 2 year old child), but they are not necessarily sapient on the level of adolescent/adult humans or even elephants or chimpanzees.
@@couchgrouches7667 Well, I think the point is you don't have to teach a baby to cry or feel uncomfortable. Imagine teaching a humanoid machine with sensors. You would need stats algorithms and proper datasets, etc...
I heard that they can actually learn sign language far before they can speak even.
when people talk about debate bros being annoying and pedantic and tiresome, this is the guy they're talking about. good lord zzz
Yudkowsky isn’t an AI expert. He started a Nick Bostrom fanfic blog that got big 2 decades ago. He has no degree in anything. It’s sad that people take him seriously when there are *real* AI experts, statisticians, and ethicists who talk about the very real dangers of AI. Instead we listen to the ramblings of a sci-fi fan.
Yudkowsky has no formal background in AI research, and it shows.
Wikipedia him. Where do you people get such STUPID ideas?
This is the guy that pops into your head when you hear people speak about things with authority when they don't really know what the fuck they're talking about
Lol Destiny is the literal king of this
This really doesn't seem like a debate more like a discussion.
Debates are just to get to know each other 🐸💅
@25:10 when he says 'ghost' an actual ghost shows up just after 👻 They were happy to just be acknowledged I think, one thing I'm worried about is AI ghosts actually
Been waiting for this for 10+ years.
The funny hat is an indicator. I gave him a chance though; it ended in nothing smart.
If you understood nothing else but this message: "Please don't eat or launch babies" then at least you learned something.
Good talk, please more with you two!
I'm glad that you discussed consciousness. I work at the Qualia Research Institute, and I'd be very happy to discuss all these topics anytime :-)
A guy denying consciousness in animals is surprised by a guy denying consciousness in children.
Feels like this guy is trying to act like Elon Musk. Plus wtf was that laugh at 18:39
hes mad autistic
Seems that Yudkowsky is an expert at creating sophisticated word salad more than anything else.
Kinda like an LLM, hmmm......
Seems like Elon without a billion dollars
Skill issue
He is a pseudo, yes.
Crazy how he just says the protein folding problem is solved when that is not at ALL the case
I enjoyed this talk
AI fear grifting is a job now
Your mindset will literally destroy society. AI is unbelievably dangerous
lmao. I love how he was about to just leave the stage without saying bye to the audience
Damn i can't believe I missed this live with my guy Yudkowsky.
Yudkowsky's showing was poor, most people can see he is a grifter.
Good stuff. Thanks.
My worst issue with Yudkowsky is that he claims LLMs aren't interpretable (i.e., we don't know what's going on "in the giant matrices" behind the scenes), when in reality attribution and direct adjustment of them have already advanced to practicality. We will certainly see LLM low-level debuggers and updating (e.g., low-rank adjustment) for commercial models like ChatGPT in less than a year, allowing very close monitoring and correction of alignment issues and the like, and thus eliminating almost all of his doomer scenarios.
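(The "low-rank adjustment" this comment mentions can be sketched in a few lines. This is a minimal, hypothetical LoRA-style illustration in NumPy; the matrix shapes, rank, and variable names are made up for the example and aren't from any real model or library.)

```python
import numpy as np

# Hypothetical frozen weight matrix of one layer (shapes illustrative).
d_out, d_in, rank = 8, 8, 2
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))

# Low-rank adjustment: instead of editing W directly, learn two small
# matrices A and B whose product is a rank-limited correction B @ A.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # zero init: no change until trained

def forward(x):
    # Adapted layer = original weights plus the low-rank correction.
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
# With B still all zeros, the adapted layer matches the original exactly.
assert np.allclose(forward(x), W @ x)

# The correction touches far fewer parameters than W itself.
print(W.size, A.size + B.size)  # 64 vs 32
```

The point is that the update is small and separable, which is why it can be applied (or removed) on top of a frozen commercial model.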
It drives me bananas that these AI doomsday people get so much credibility. The guy's an op-ed writer; he's convinced himself that AI is the villain when there's no guarantee we'd ever make an AI capable of doomsday. There's no benefit in making something that intelligent; hell, what we have today isn't even remotely intelligent. Imagine being the first company to make Data, then he gets human rights and goes and works for your competitor lmfao. As the Woz himself said, specific tools for specific problems will beat generic tools, which is why he's more focused on IoT.
'there's no guarantee we'd ever make an AI capable of doomsday' What about a 50%, 10%, heck even 1% chance? Are you willing to take that risk?
Second: Specific tools for specific tasks would be great but that's not what's happening. Frontier AI companies explicitly are working on generalized superintelligent models and almost all of their CEOs are themselves warning about inherent risks.
So I guess it's you who's writing the op-ed
I'm more worried about people creating designer viruses or killer robots with AI than self-aware and rebellious AI at this point.
Doesn't eliminate anything when you have self-learning AI that could outsmart us easily. You're talking about commercial tech and things they want or allow us to know about. AI is probably already way more advanced than anything like ChatGPT or Ameca. It's the classified or experimental progress that we should worry about.
@@danaut3936 you're absolutely right about CEOs warning about risks; if you go check out the risks they warn about, they are almost entirely about misinformation, because the models we have now are actually incredibly bad at accuracy. Hence the models that do get used end up doing very specific tasks, and models like ChatGPT end up having very inconsistent, unreliable results.
You're right, there is potentially the ability for us to one day create a machine or software capable of thinking, and even more importantly, thinking for itself. And that may possibly result in our destruction. If we had actually even so much as touched the tip of that iceberg, I could understand "AI experts" freaking out. But right now it's literally linear algebra. It does super specific tasks. And it takes a great deal of effort to make those very specific tasks work consistently.
I think this is the first time I've seen tiny wearing jeans
Why so short? Could listen for days!
I give up. Who are either of these people in the adult world of policy, research or anything that approaches credentials?
The remarkable thing about consciousness is that no matter how much introspection or research we've all done we're all just on equal levels shooting out bullshit over a beer about it.
What do you mean? When you look at a picture of brain synapses, neurons, and axons firing, what do you think that is? It's like biological programming, signals being made through chemicals instead of 0's and 1's. What is so hard to comprehend about what consciousness is? Can you please look at how complex the brain's network is, stare at it, and realize it's an insane computer and not doing it for no reason
@@prodev4012 bro thinks he knows more than science as a whole and all PhD scholars - bro got the answers lmfao ok
@@jaromsmiss Remember it's not just me, it's the AI and the entire database of the internet, with hundreds of science papers that can be summarized
@@prodev4012 How can you explain consciousness by brain structures and physiology when it comes to being conscious in a specific body, in a specific century, and from specific parents and location? Like why are you or I conscious in our separate bodies? How come not 20,000 BC or 10,000 AD? I believe scientists will never be able to figure this out. It's like as explained in the video, there are things we can not perceive that are at play that create our formed consciousness, not just brain and body physiology.
@@prodev4012 What's hard to comprehend is the sort of opened channel that consciousness seems to be.
Like a primitive animal like a monkey or a dog, they have a very narrow field of consciousness but what allowed us humans to expand our field of consciousness and how far can it go?
You can look at the brain as long as you want but it won't show you the reason consciousness or life arose in the first place.
Bruh they done found MundaneMatt 😭😭😭
@august PLEASE turn up the volume
I thought that Destiny was autist*c until this guy started talking jeez lewiz these dudes are big brain
I work in the industry that provides the infrastructure that AI (and the rest of the internet) runs on. It is extremely fragile and requires lots of redundancy and lots of skilled workers keeping the systems running. It would be trivial to shut it all down. Humans are essential to the survival of AI. Something like the plotline to the Terminator franchise is ridiculous.
You realize it's going to... improve, right? Nobody intelligent thinks GPT-4 will destroy the world
@@demodiums7216 People like Elon Musk are seriously scared of AI.
@@erikkovacs3097 Yes, AI. As in, what it can do in the future. Not now
this WHOLE event, genuinely felt like something weird rich people would attend.
Also, dudes body proportions are like Dr Eggman
Charismatic autism vs basic reddit autism
19:10 Astrologers study the conscious experience of individuals as well as groups of people like countries by looking at the alignment of the cosmos at the moment it began, because for some reason the potentiality of any system is encapsulated in the moment it begins, and this is a great access point for understanding consciousness.
I too am concerned about the Aiden Influence. He just interviewed a fake Kim Jong-un.
Ah, Eliezer Yudkowsky, the guy who dropped out of school and is known for writing a Harry Potter fanfiction
That sounds more impressive than whatever opinion he has on AI
@@robobbymurrowen6785 you sound like you listen to anyone who claims to be an expert on AI who has no background
Steven doesn’t claim to be an expert on any particular topic, that’s the main difference.
Try hard enough and you can dismiss anybody like this. I can call destiny a manlet who lets his wife sleep with other dudes for example but it’s not productive. Why not attack Yudkowsky’s argument not his character?
@@sepro5135 I'm not talking about Steven
serious question, what was with the crypto stuff?
Is there a way to watch this full conversation?
manifest said they'd post it to their youtube channel
Why does it seem like none of these AI doomsayers understand how it works?
considering that a lot of AI doomsayers are literally working at the companies building these models, I think you're the one who doesn't understand AI
They are trying to boost their stock options, none of these nerds know when their AI algos will be able to make good definitive decisions without previous inputs. Until then who gives a fuck
What's an example of someone who *does* seem like they understand how it works to you?
It's the same with conspiracy theorists, flat earthers, etc. The less they know, the more they think they know; it's pretty funny.
@@red_Sun24 And is Eliezer one of them?
It is actually crazy how unknowledgeable Yudkowsky is on consciousness lol. Access consciousness =/= phenomenological consciousness. Yudkowsky essentially doesn’t believe p-consciousness exists. He has the weakest, most anemic illusionist-like materialist positions I have ever heard and was incapable of making any case for it. He believes qualia exists yet somehow likened his experience to a linear system such as light hitting a telescope. Destiny was more agnostic on the topic. Also Yudkowsky saying his third biggest worry is about who’s not having kids while having none of his own. 💀
Tbh most Destiny viewers are being unreasonable with Yudkowsky. There is almost no way to talk about metaphysics and reality in a quick 1-on-1. Yudkowsky has written about 'qualia' for years now. I personally disagree but would not call him unknowledgeable. Consciousness has been his main thing for like 20 years now
Do you disagree with his AI doomsday positions?
yea, I prefer hemophilic disillusionist-like meta-materialist & supernatural positions too. Hate when things get all bogged down with demonstrable reality.... Can't people just understand all the kinds of consciousness that obviously exist obviously?
@@firulice7000 Not necessarily. I’m less doomer. Still cautious and concerned by it. I’m less worried about hostile super intelligence and more worried about a collapse of organic knowledge, more easily created misinformation, and the implosion of any shared episteme.
I really wanted this to be an in-depth conversation/debate…next time I guess
Could we test whether or not AI was able to consciously observe the exterior world using observation paradox? Some form of dual slit experiment that utilized conscious interruption of quantum superposition?
I was excited to see destiny debate yudkowsky or any prominent AI safety pundit. It's such an interesting topic, and there are a lot of bad takes (just look at this comment section). And yet they spent their time talking about babies lol
Okay but why does he look like that?
The audio level of this video is super low. At max volume I'm still having trouble hearing them properly.
Consciousness is probably just a combination of memory neurons connected in just the right way that they allow for higher levels of self awareness through retention.
M’Lady personified
I like that Steven was fixated on the question until the guy answered it.
If you are interested in how the brain could self-report qualia in a universe with no room for qualia phenomenologically, I recommend checking out Graziano's Attention Schema Theory of Awareness. The perhaps retrospectively obvious short of it is that qualia existing as a feature of the universe is not a necessary condition for you to maintain the belief that you have it.
I don't think this fella (while very smart and eloquent) has had any kids; no disrespect to him at all. When a baby is born, the first thing you notice, which is super mind-boggling, is that they really Don't know anything. It's so profound to see how the baby starts learning and reacting to things they've never experienced. They definitely have consciousness, I feel like.