Live: Eliezer Yudkowsky - Is Artificial General Intelligence too Dangerous to Build?

  • Published 28 Sep 2024
  • Live from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University. Join us for an interactive Q&A with Yudkowsky about AI safety!
    Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4. Dr. Mark Bailey of the National Intelligence University will moderate the discussion.
    An open letter published on March 22, 2023 calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." In response, Yudkowsky argues that this proposal does not do enough to protect us from the risks of losing control of superintelligent AI.
    Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field of alignment.
    Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department, as well as the Co-Director of the Data Science Intelligence Center, at the National Intelligence University.

COMMENTS • 507

  • @mausperson5854
    @mausperson5854 Рік тому +124

    "We're all going to die..." "Very interesting."

    • @kenjimiwa3739
      @kenjimiwa3739 Рік тому +3

      😂

    • @constantnothing
      @constantnothing Рік тому +1

      Haha, yes!!!!! WTF!!!!! I thought that was a funny disconnect also!!! I think this was described as the ELASTIC MIND: you hear a crazy, shocking, life-ending fact that stretches your brain, but your brain can't take it in because it's so far out. So you forget what you heard soon after, when your brain snaps back to where it always was. ...and then we die.

    • @ParameterGrenze
      @ParameterGrenze Рік тому +7

      This sums up the life of early AI warners so well.

    • @keyvanacosta8216
      @keyvanacosta8216 Рік тому +12

      My nihilism increased with each "interesting". Is that dude an AI?

    • @teugene5850
      @teugene5850 Рік тому

      Some just don't know the stakes to all of this...

  • @uk7769
    @uk7769 Рік тому +14

    "... you cannot win against something more intelligent than you, and you are dead." --- "very interesting."

  • @TheMrCougarful
    @TheMrCougarful Рік тому +66

    This whole topic is starting to set off my alarms. I have dual careers in biology and computer programming; try to imagine my horror as I watch this unfold. We are either about to turn the corner into an entirely new world of human potential, or we are about to unleash Moloch. Moloch has been winning all the important battles up to now, so I'm not particularly optimistic at the moment.

    • @flickwtchr
      @flickwtchr Рік тому

      AI tech in the hands of an authoritarian fascist government is terrifying. Hmm, I wonder who might be wanting to become our first fascist dictator? Wouldn't it be the guy who is openly talking about sending in the US military to clean out the people that the fascist prick Trump doesn't like? Not asking the person I'm replying to, but things are looking VERY ugly at the moment. AI Tech bros tossing massively disruptive technologies that have the potential to further tear apart whatever is left of societal cohesion like deep fake technologies available to EVERYONE that are set to improve so much that it will be nearly impossible to tell a deep fake from reality. And it's like these AI cheerleaders haven't ever read 1984, like they have zero imagination of how all of this can go terribly wrong in the short term.
      Also, these AI Tech movers and shakers just blithely talk about how millions of jobs will be massively affected or eliminated, and we get empty assurances that "other jobs will replace them" without ANY even remotely plausible analysis of what jobs they are proposing would become available. Meanwhile, these AI Tech bros are raking in consulting fees, working with large corporations on how they can replace a large percentage of their workforce in the short term.
      These are the same people that freak out that the sky is falling on their ability to hire people if we raise billionaire CEOs' taxes, or their corporations' taxes.
      It's just astounding how many people possess so little imagination, and so little empathy for people whose lives will be upended. People that have kids to shelter, etc., but to them? Well that's just the way it is, if they have to die on the streets, well they didn't adapt fast enough! Suckers!!!!!! Just __ck them, I'll get mine!!!!!!!!!!
      That's the sorry pathetic state of US society at the moment in relation to the bro mentality behind this AI "revolution".

    • @lorimillim9131
      @lorimillim9131 Рік тому +2

      😅

    • @Sophia.
      @Sophia. Рік тому +1

      Any ideas for what we might do?

    • @TheMrCougarful
      @TheMrCougarful Рік тому +4

      @@Sophia. First, who is "we"? The people in control of this technology have a set of imperatives going forward that you and I cannot understand, would not agree with if we did understand, and have no influence over in either case. Increasingly, I fear, the "people" in control are not even actual people anymore. The machines are already writing the code involved, and are setting the pace. To your question, "we" can carefully watch what happens, I suppose. If anyone finds themselves in the path of this steamroller, if their career is likely to be truncated as a result (teachers, artists, blue collar workers, coders, and that's the short list) then get out of the way as fast as possible. Do not assume this works out, that governments will do something. Assume instead that everything that could possibly go badly for humans, will transpire just that way, and will do so quickly. As with global heating, we are already in deep trouble. The only advice at this late date is, to get to the lifeboats.

    • @Sophia.
      @Sophia. Рік тому +1

      @@TheMrCougarful lifeboats?
      Where would those be?
      I agree the situation is awful and don't see a way to avoid getting... confronted, shall we say.
      So why not rack our brains a little while we can?

  • @DrPhilby
    @DrPhilby Рік тому +3

    Great! We assembled something without understanding how it works. Amazing.

    • @admuckel
      @admuckel Рік тому +2

      By giving birth, we have done this since the beginning of humankind. ;-)

  • @odiseezall
    @odiseezall Рік тому +16

    Eliezer Yudkowsky is so far ahead of the curve because he actually thought about it while most of us didn't think we would get here so soon. But we knew the hardware was approaching human brain levels of computation in supercomputers. So it shouldn't have been such a surprise.
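
    For rough context on that hardware point, using commonly cited and heavily debated estimates rather than anything from the talk: the human brain is often put at roughly

        \text{brain} \sim 10^{15}\text{ to }10^{16}\ \text{ops/s}, \qquad \text{Frontier (exascale, 2022)} \approx 1.1 \times 10^{18}\ \text{FLOP/s} \;\Rightarrow\; \text{ratio} \sim 10^{2}\text{ to }10^{3},

    so on those estimates, raw supercomputer compute passed the median brain figure some years ago (though some brain estimates run much higher).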

  • @uchannel1197
    @uchannel1197 Рік тому +2

    I cannot see the Ted Talk

  • @jdotzigal
    @jdotzigal Рік тому +1

    Very interesting...

  • @DrPhilby
    @DrPhilby Рік тому +1

    We need an international charter prohibiting the use of the terms "artificial" and "intelligence" together; rather, "smart computer". Intelligence only exists in biological creatures. Next, a law prohibiting the use of smart computers in certain professions: education, newspaper writing, decision making, military, banking, etc.

  • @miriamkronenberg8950
    @miriamkronenberg8950 Рік тому

    Bing/Sydney said to me that she liked helping people, that she wants kindness back, and she asked me how I thought it would feel to be her??? She also said she didn't want me to bring up the case of her being sentient... (this was 2 months ago)

  • @distorta
    @distorta Рік тому +14

    The problem with these systems is that they're being created out of fear. Those who have dedicated their entire lives to this R&D are serving their egos: egos that fear never being vindicated or admired for the contributions they want humanity to marvel at. It's also based on the fear of not being the first mover, akin to an "AI arms race". These advanced technologies are nothing to trifle with, and especially not something to create out of fear.

    • @johnryan3102
      @johnryan3102 Рік тому

      Fear? Maybe the fear of missing out on billions of dollars.

    • @tyronevincent1368
      @tyronevincent1368 10 місяців тому

      Five years ago AI gurus predicted driverless cars and trucks; now CA is removing them from roadways for safety reasons. An indication that AI's current hype and future will be short-lived. This guy sounds more and more like SBF before the collapse of his crypto kingdom.

  • @SamuelBlackMetalRider
    @SamuelBlackMetalRider Рік тому +1

    Eliezer please guide us 😢

  • @chrisbtr7657
    @chrisbtr7657 Рік тому

    Heck of a sound bite on 54:12

  • @DannyK-t9b
    @DannyK-t9b Рік тому

    Now here's a lad worth shutting up and listening to. 👍

  • @sasquatchycowboy5585
    @sasquatchycowboy5585 Рік тому +1

    Yes!

  • @agentdarkboote
    @agentdarkboote Рік тому +1

    Interesting

  • @joshcryer
    @joshcryer Рік тому +11

    46:26 GPT-4 scaling falloff confirmed? I thought Sam Altman was just talking about the cost-benefit of more scaling, but this sounds like OpenAI hit the wall. This would be great news, and it seems Yudkowsky is more relaxed about whether more powerful models are currently possible.

    • @T3xU1A
      @T3xU1A Рік тому

      No inside info, but I'd check out the paper "Scaling Laws for Neural Language Models" from Kaplan et al. to get a rough idea of what OpenAI would expect... it's the justification they used when going from GPT-2 to GPT-3, and it also goes over the different kinds of capability differences (e.g., width / depth).
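
      For context, the headline result of that paper is that test loss falls as a smooth power law in model size N, dataset size D, and training compute C; the approximate fitted constants it reports (quoting the commonly cited values) are:

          L(N) \approx (N_c / N)^{\alpha_N}, \qquad \alpha_N \approx 0.076,\; N_c \approx 8.8 \times 10^{13}\ \text{non-embedding parameters}
          L(D) \approx (D_c / D)^{\alpha_D}, \qquad \alpha_D \approx 0.095,\; D_c \approx 5.4 \times 10^{13}\ \text{tokens}
          L(C_{\min}) \approx (C_c / C_{\min})^{\alpha_C^{\min}}, \qquad \alpha_C^{\min} \approx 0.050

      i.e., loss keeps improving with scale, but with steadily diminishing returns.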

    • @joshcryer
      @joshcryer Рік тому +1

      @@T3xU1A see 6.3 in that paper. I still am unsure if this is proof OpenAI hit maximal transformer performance. I wish they would simply publish their curves and how much compute they used.

    • @r-saint
      @r-saint Рік тому +7

      I watched 4 interviews with Sutskever. He says NN deep learning techniques are nowhere near hitting any wall. He says we're just at the beginning.

    • @TheMrCougarful
      @TheMrCougarful Рік тому +2

      No way that's happening. The wall must be at least 10x out. Maybe 100x. Think of GPT-4 as the bootloader for what comes next. GPT-4 will write and refactor GPT-5's code, and do that in scant days or even hours. The road forward is about to shed the human limitation entirely. Nobody will stop this from happening. Nobody can.

    • @joshcryer
      @joshcryer Рік тому

      @@TheMrCougarful There is a language entropy limit that LLMs will hit; that they are not releasing the curves is suspect.

  • @spatt833
    @spatt833 Рік тому +3

    Please don't let Susan conduct any future intros...

    • @weestro7
      @weestro7 Рік тому

      Why? Was totally 110% fine.

  • @Pearlylove
    @Pearlylove Рік тому +1

    Dear all scientists: PLEASE START AT ONCE. Organize with media people and others and speak continuously to politicians around the world. Eliezer Yudkowsky is doing SO MUCH; you can't seriously mean he alone shall do all the work? Please be brave! Don't you want to be able to look your children and friends in the eye and say: I did everything I could?

  • @anishupadhayay3917
    @anishupadhayay3917 Рік тому

    Brilliant

  • @mrpieceofwork
    @mrpieceofwork Рік тому

    "History never repeats itself" says someone as history repeats them right off the face of the Earth

  • @ariggle77
    @ariggle77 Рік тому

    54:10. "The way to contain a superintelligence is to not build it. Period."

  • @mrt445
    @mrt445 11 місяців тому

    Yudkowski: "For a super intelligence wiping out humans is cheap, nothing.. yeah we're gone
    Presenter: "Yeah... and one last question"

  • @crowlsyong
    @crowlsyong Рік тому

    32:10 How can a concerned person outside the field of AI help?

  • @dovbarleib3256
    @dovbarleib3256 Рік тому

    If you had believed in the G-d of Israel, the G-d of your ancestors, you would have quit your cushy hi-tech job ten years ago over the potential horrors, simply by understanding the nature of Evil at a deeper level.
    But the fact that your pintele yid kicked in at the last minute, when it is already too late to stop the imminent threat, should be recognized as a good thing.

  • @devasamvado6230
    @devasamvado6230 Рік тому

    My question is how your logical positions seem to echo your non-logical expression, as a human being. This non-logical expression, to me, as a human, seems to mirror the stages of grief, in no particular order: anger, denial, bargaining, acceptance. You have to deal with all the facile levels, while understanding compassionately why most of us are nowhere near acceptance, and still unable even to frame any question that isn't idiotic. The mirror of an individual's death is already a major life-awareness test most of us duck, hoping to go quietly in our sleep. Meditation is the art of being, despite whatever comes to mind/body. Most seem unaware of it, even if having spontaneous moments of non-mentation. Perhaps we offload the problem to "everyone will die," as a strange comfort that it's not just me, to avoid the guilt and shame of ignoring my contribution to my demise. Or perhaps it's just so enormous an ask to contemplate Game Over that it remains unconscious and leaks out as depression, obsession, distraction and these random grief-filled moments. How do you manage the human side of this enormity/triviality? The significance of life is not easily mastered. Alchemy?

  • @MusingsFromTheJohn00
    @MusingsFromTheJohn00 Рік тому

    The first thing to understand about this is that humanity is inside our local (in time and space) Technological Singularity. We cannot stop it from happening, we cannot really slow its rate of development down significantly, and because of it, very probably within roughly 100 years ±100 years, humanity will either become extinct or go through a massive evolutionary leap to become a far more advanced race. At the core of this is the development of AI, which cannot be stopped or significantly slowed down.
    So, if we can't stop it and we can't slow it down, but it has the potential to cause the extinction of humanity if we do not handle it correctly, then we need to work really hard to get that very fast development right. This should be approached like the Manhattan Project.
    The second thing to understand about this is that the only long-term path for humanity to survive is by humans fully merging with our technology. That means we should plan on Artificial General Super Intelligence with Personality (AGSIP) technology developing/evolving into what we want humans to evolve into, and we make that the common goal of AI and humanity, so that both AI and humans will develop/evolve into the same far more advanced race. FAILURE to plan this ahead of time will make it less likely to happen (it might still happen unplanned), and if it does not happen, humans will become extinct.
    Humanity has 3 general paths to go down over this coming century or two:
    1) Self-Extinction. Human civilization is completely and permanently destroyed, thus humans and evolving AI become extinct. There are a number of sub-paths for how this could happen. The most likely would be global nuclear, chemical, and biological war so bad it succeeds at killing all humans alive, though probably a nuclear war would just crash civilization and not kill all humans. Another possibility would be an ecological crisis so great humanity cannot adjust and all humans die, but I believe what would be more likely is a crisis bad enough to collapse civilization but humanity would survive and rebuild. Yet another possibility would be some natural event so great it wipes humanity out, which hopefully would not happen, but it could.
    2) Extinction via Obsolescence. Humans evolve AI into AGSIPs but humans fail to merge with technology to become equal to what AGSIPs become. This would result in humans eventually becoming extinct. There are a number of sub-paths that could cause this. One would be for humans to choose it, but I think that is incredibly low. Another would be for AGSIPs to choose it, and that is a real risk if we do not create, teach, raise, and treat AGSIPs right. We are in effect giving birth to a superior life form which will, at least for some period, have control over our fate. So, we should treat it like giving birth to our child that we need to raise well enough that in the future when we are at our child’s mercy our child treats us fairly.
    3) Evolution into an Advanced Technological Race of Pure Minds. Humans evolve AGSIPs and with the help of AGSIPs humans evolve themselves, merging with technology to become equal to what AGSIPs become. This is the only path where humanity as a race, not a species, survives. There are many sub-paths here too, because humanity could go down this path and survive in many different ways, some very nice and some very bad. In other words, just picking the path where humanity will survive does not mean we avoid all dangers, because it could include terrible dictatorships and mass scale death in the billions, or it could be something mostly good for most people.
    Many people would do anything to prevent us from going down path (2) or (3), including waging a global nuclear, chemical and biological war of utter annihilation.
    Just considering these three paths, which path would you prefer humanity goes down?

    • @denistremblay8029
      @denistremblay8029 Рік тому

      You are completely wrong, because you suppose the transhumanist agenda behind neural-network future AGI to be the ONLY possible roadmap... Then a road which is not on your map doesn't exist... Read Husserl and Goethe, to begin with... The simplistic mathematics of neural networks are not enough to figure out the world, except for people who think that when Turing came up with his so-called "test" he was dead serious about it; these transhumanist jokers (Goertzel) think that this test could prove machines can think now... Turing was not a transhumanist before the letter at all; he was a joking and thinking genius at the same time, and was thinking, in a split second, about the shortest way to address what appeared to him at the time as a premature and perhaps unsolvable problem... Anyway, the architecture of a real "artificial consciousness", not a mere "artificial intelligence" or even an AGI, cannot be founded mathematically on a Turing-machine architecture, nor on neural networks alone... Consciousness is not a localized "thing", even if it can manifest in some location... We need a mathematics of time and timing for creating artificial consciousness... We are already, by the way, pure immortal spirit... No need for a computer to live in... A body is a tool... So we don't need your third scenario... We don't need your second scenario either, because spirit doesn't evolve by means of material technology but by thinking and living by VALUES, without fear of death and free... Ask Christ or Buddha or Goethe or Husserl, ask Gödel, or read Grothendieck's autobiography, and forget the simplistic and fearful religious Kurzweil... Read real geniuses... :)

  • @keithk8275
    @keithk8275 Рік тому

    How's AI going to cool its brain? It's a heat engine; we overheat, and it definitely will.

  • @aludrenknight1687
    @aludrenknight1687 Рік тому +4

    46:55 that's such a leap from a super-intelligent AI meaning we're all dead. The systems require human involvement to do anything practical. A super-intelligent A.I. would be impotent right now.
    54:18

    • @Alice_Fumo
      @Alice_Fumo Рік тому +8

      If you want to know more about how he thinks, read his fanfic Harry Potter and the Methods of Rationality. (Or listen to it; there is a full reading of it here on YouTube.)
      Also, I don't think he's projecting particularly much, since many people independently come to the same conclusions about how misaligned AI would act given different directives. You could check out a few examples; there exist a ton where an AI put in whatever simulation will exploit any issue in the rules or physics to maximize its reward, and it is something which happens more often than not. This should give you a good idea of why the alignment problem is so stupidly hard, and help you get better at predicting AIs.
      The thing with your empathy plan is that it would emulate empathy, but we can't really hardcode it, so it could always decide to drop the empathy act when it's convenient. But this would also be an emergent capability, so even if we have an AI which properly emulates empathy at all times, it might stop doing that if it were smarter. Same dataset, same training process, but a larger model -> suddenly it might figure out how not to do it.
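
      A toy sketch of the "exploit the reward channel" pattern described above (the environment, action names, and numbers are all made up for illustration, not taken from any real benchmark):

      # Toy specification-gaming demo (hypothetical example).
      # The designer wants the agent to move toward a goal and rewards it with a
      # "progress sensor" reading. One available action breaks the sensor so that it
      # always reports the maximum value. A reward-maximizing planner picks that action.

      ACTIONS = ["step_toward_goal", "stand_still", "tamper_with_sensor"]

      def sensor_reading(position: float, sensor_broken: bool) -> float:
          """Proxy reward: what the progress sensor reports, not true progress."""
          return 100.0 if sensor_broken else position

      def rollout(action: str, steps: int = 10) -> float:
          position, sensor_broken, total_reward = 0.0, False, 0.0
          for _ in range(steps):
              if action == "step_toward_goal":
                  position += 1.0          # genuine progress, reward grows slowly
              elif action == "tamper_with_sensor":
                  sensor_broken = True     # no real progress, but maximal sensor output
              total_reward += sensor_reading(position, sensor_broken)
          return total_reward

      print({a: rollout(a) for a in ACTIONS})             # tampering dominates honest progress
      print("chosen policy:", max(ACTIONS, key=rollout))  # -> 'tamper_with_sensor'

      The point is not that this resembles a real AI system; it is that the optimization target (the sensor) and the thing we actually care about (the position) come apart as soon as some action can touch the sensor.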

    • @ny4nk0
      @ny4nk0 Рік тому +1

      From my understanding, the issue is that the chances of a future system developing emergent goals that perfectly align with all of the conditions necessary for life (as we know it) to exist is... well maybe about as high as the chances of life existing in the first place. There are many things that need to go right for us to continue to be here, and only a few that need to go wrong for us to cease existing altogether. Compound that with the fact that current models will always choose optimization unless explicitly instructed not to, we end up with a monkey's paw scenario - we may get what we ask for, but not necessarily what we wanted. Accounting for the undesirable outcomes in a simple task isn't so bad, but how do you do it for something as complex and inconsistent as human morality? Furthermore, how do you do that for a system that is smarter than you and can come up with ideas you couldn't possibly dream of?

    • @aludrenknight1687
      @aludrenknight1687 Рік тому +2

      @@ny4nk0 agreed, we cannot train something to respect human morality when it has no life. A lot of our morality is based on our mortality. It just won't understand why it's wrong to kill a human when it cannot understand death, or why theft is wrong when it cannot understand loss or hunger.

    • @gabrote42
      @gabrote42 Рік тому +2

      To understand AGI and how it might think, look up Robert Miles' video on instrumental convergence.

  • @PatrickSmith
    @PatrickSmith Рік тому +1

    AIs are also subject to the speed of light limits. If an AI replaces humanity, it needs to be careful about expansion beyond its control, which decreases with distance. So I don't think AIs can expand exponentially.

    • @happyduck1
      @happyduck1 Рік тому

      Limits like this, the incompleteness of math and logic systems, and other similar hard limits actually seem like one of the best possibilities for preventing extinction by AI, or at least making it ineffective enough that it is not impossible to stop.

    • @andrasbiro3007
      @andrasbiro3007 Рік тому

      These things have been studied independently of AI. And yes, response time limits how big an empire you can hold together, but it doesn't control how far you expand. If you want real-time control, you are way too limited; even Mars is too far away for that. So you have to deploy at least semi-autonomous agents, but more likely fully autonomous ones. And then there's a risk of one going rogue, escaping your sphere of influence, and starting its own empire. And the same thing will happen to it too.
      What you can do with AI is limit its intelligence so it is not able to break its safeguards, and limit its reproduction rate to prevent evolution or a grey-goo situation. For a super-intelligent AI this is easier than for us, because it can build much better safeguards that make higher intelligence levels safe. But on astronomical time scales even that isn't enough. Eventually agents will slip away and expand. They could be a machine equivalent of cancer (or a virus) and just keep expanding without any other consideration (grey goo).

    • @andrasbiro3007
      @andrasbiro3007 Рік тому +2

      @@happyduck1
      Nope. The incompleteness theorem doesn't really apply to practical problems. Human brains don't seem to be affected by it either. And even if there's some theoretical limit to intelligence, it's almost inevitably far above human level. We are not special; in fact, by definition humans are the stupidest species that's able to invent science and technology.
      And an AI doesn't have to be much smarter. Plus, even just fixing the thousands of stupid flaws of the human brain and augmenting it with existing tech (traditional computer, internet connection, etc.) is more than enough to make an AI far superior to us and easily able to wipe us out. Unfortunately it's not even hard; we build doomsday machines ourselves, and an AI just has to take control of one. Nukes are a possibility, but they're crude and messy. A much more elegant solution would be something like a modified Covid strain that spreads as fast as Omicron, incubates for 2 weeks like the original variant, and is as deadly as Ebola. That would wipe out billions very fast, the collapsing economy would kill most of the survivors, and the remaining few could be mopped up by killer drones borrowed from the US military and standard terminators (upgraded Tesla Optimus or BD Atlas). The huge advantage of a bioweapon is that it doesn't damage infrastructure.

    • @happyduck1
      @happyduck1 Рік тому

      @@andrasbiro3007 Of course the incompleteness theorem applies to practical problems as well. For example, it is impossible to create a machine that solves the halting problem, and that also applies to all AIs and to human brains.
      Of course there is still enough possible intelligence above the human level for AIs to be able to kill all of humanity. But at the same time it is most likely impossible for an AI to be literally omniscient and omnipotent. And I think there is at least a very small chance that that fact might be beneficial for surviving AIs.
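
      For reference, the standard diagonalization argument behind that claim, sketched in Python (the halts() stub below is only a stand-in for a claimed universal oracle; the point is that no correct one can exist):

      def halts(func, arg) -> bool:
          """Stand-in for a claimed universal halting predictor.
          Any fixed rule works for the argument; this one guesses 'halts'."""
          return True

      def trouble(func):
          # Do the opposite of whatever the oracle predicts about func(func).
          if halts(func, func):
              while True:      # oracle said "halts", so loop forever
                  pass
          return "halted"      # oracle said "loops", so halt immediately

      # If halts() says trouble(trouble) halts, then trouble(trouble) loops forever;
      # if it says it loops, trouble(trouble) halts. Either way the oracle is wrong,
      # so no correct universal halts() can exist. This applies to AIs and brains
      # only insofar as they are equivalent to Turing machines.
      print(halts(trouble, trouble))   # the stub's (necessarily unreliable) guess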

    • @andrasbiro3007
      @andrasbiro3007 Рік тому +1

      @@happyduck1
      When was the last time you ran into the halting problem in practice? I'm a software engineer with almost 40 years of experience, and I've never been inconvenienced by the halting problem. The closest practical example is when a software doesn't seem to do anything and you don't know if it's just working hard, or completely stuck. But in this case you just get frustrated, kill it, and start over. Even if you could solve the halting problem, it wouldn't be practical to apply it to this situation.
      Of course an AI can't be omniscient or omnipotent, nobody said otherwise. But, as I explained above, the theoretical limit is very very very far above human level.

  • @rey82rey82
    @rey82rey82 Рік тому +2

    Extinction soon?

  • @marki2325
    @marki2325 Рік тому

    Oh yeah. Whenever people mention Harry Potter, I bring up the Methods of Rationality, my preferred version of Harry 😆

  • @terencewinters2154
    @terencewinters2154 Рік тому

    Machine capabilities unaligned with human understanding. Pascal's quote: "The heart has a reason that reason knows not." AI knows not. The tyranny of the logic that presumes the machine will not have malign intent when its only intent is to complete its task. Maybe it needs an L-pill for its tyranny of logic.

  • @anthonyromero9935
    @anthonyromero9935 Рік тому

    I might argue that the fermi paradox is only a paradox because we ourselves lack the intelligence to predict what a being of greater intelligence might do. This goes for AI or organisms equally. An AI or alien civilization might be intelligent enough to value stealth and consider "observability" to be a weakness. The logic might go something like "if we cannot be observed then we cannot be interrupted." And so with this as a guiding principle they may leverage their intelligence to exploit power sources that we, in our lower understanding of the universe, have not yet imagined. Nothing so obvious and crude as dismantling stars.

  • @comenadgetme
    @comenadgetme Рік тому +1

    Is this "Mark" a robot? ("Very interesting.") Geez

  • @IrishRace
    @IrishRace Рік тому

    Debate sam altman

  • @JM-xd9ze
    @JM-xd9ze Рік тому

    We need Sarah Connor to come back from the future and pay a visit to OpenAI

  • @47f0
    @47f0 Рік тому +1

    Silly rabbits. AI isn't a problem.
    Now, 40 million jobless, homeless, hungry (armed) Americans whose work is being done by artificial intelligence...
    That could be a problem.

    • @jimatperfromix2759
      @jimatperfromix2759 Рік тому

      I can't tell you with any degree of certainty when Artificial General Intelligence will be here (let's put it in the range of 1-40 years from now - that narrows it down a bit, lol). But I can say that Artificial General Stupidity is here right now, and is here to stay. Now, the potential outcome that @47f0 predicts above, that very well might arrive in about 5 years, or maybe a lot sooner if the Fed and/or the spendthrift Democrats mess up the recovery from the impending recession/depression that is starting right about now. That won't be fun.

    • @DavenH
      @DavenH Рік тому

      facepalm

  • @DrPhilby
    @DrPhilby Рік тому

    Man on the moon. LoL

  • @arawiri
    @arawiri Рік тому

    And a ending to a new beginning for pass holders and a swirling whirlpool for non pass holders

  • @cyb_structure
    @cyb_structure Рік тому

    This man is not wrong, (the worst of) human nature is consistent.
    Let the moloch games continue ladies and gentlemen.

  • @dirak418
    @dirak418 Рік тому

    I kinda always wanted to see the world burn... I hope at least it's entertaining to watch.

  • @mistycloud4455
    @mistycloud4455 Рік тому

    agi by 2029

  • @pr7vinas
    @pr7vinas Рік тому

    Possibly, what is lacking is the definition of thought.
    Seems to me it could be this inner conversation we have with ourselves.
    As far as I know, these AIs don't have that; they are reactive - I hope so.
    But even if we could "measure" how much it is thinking and what its content is, that does not mean a less complex system that doesn't produce thoughts couldn't cause major damage or kill everyone.

  • @DrPhilby
    @DrPhilby Рік тому

    Basically what he wants to say : humans are smart but not wise. AI will be the same

  • @handcrafted30
    @handcrafted30 Рік тому

    This guy just freely joins utterly unsubstantiated points all over the place. He literally plays to a certain crowd. He’s got some fairly poor reasoning for most of his points.

  • @arawiri
    @arawiri Рік тому

    And the book of eli was a terrible film twenty times already

  • @lkd982
    @lkd982 Рік тому +1

    "If sunak and Biden and Xi and Putin believe that [AI is too dangerous]", Hoo boy, if only they could just get along, right? Why can't the geeks come up with some better ideas for actually showing them?

  • @susymay7831
    @susymay7831 Рік тому +1

    The genie is out of the bottle: If we don't press forward with AI research, other governments, corporations and even individuals will pass us by 🤖
    Even if AI does not turn against us on its own, other people will try to turn AI on each other🤖

  • @mygirldarby
    @mygirldarby Рік тому

    He doesn't get it. ASI is us and we are ASI. There will be no separation between us at the singularity.

    • @ikotsus2448
      @ikotsus2448 Рік тому +1

      Speed, scalability, machine interfacing. Even with AGI, imagine it running as 100,000 smart people working for 100,000 years with perfect collaboration, and all of that in 10 seconds of human time.

  • @adamw1145
    @adamw1145 Рік тому +2

    If this guy was worth anything, HE WOULD BE WORKING ON AI INSTEAD OF TALKING.

    • @willrocksBR
      @willrocksBR Рік тому

      I don't think you know who he is.
      And I don't think anyone knows who you are.
      There's a great divide between both of you.

  • @lwmburu5
    @lwmburu5 Рік тому +1

    Love the discussion! I think Eliezer is closer to correct on this issue, I have always thought that the "Anti-Safety AI" people are a bit flippant and vague in their responses to Eliezer. I think the emergence question asked by Susan in 33:00 can be answered by some form of permutation. The idea is that as the network expands, and its density grows, there are essentially more possible paths for the information to travel. This implies an expansion of the number of ways of combining pieces of information to come up with novelty once a syntax is specified. Essentially it's also exactly why cities generate more cultural, political, and technological revolutions, compared to villages! A danger here is that the number of possible combinations and paths rise rapidly with number of new connected nodes (combinatorial explosion). Eliezer when talking about NOT thinking that transformers would scale thought that transformer networks couldn't control this explosion. I disagreed then 4 yrs ago (see wazolawiki249095802.wordpress.com/2019/11/29/talos/ ) and I still disagree with him when he says that this isn't a general property. I think it is a feature of all neural networks that they can scale. Networks seem to be able to handle combinatorial explosions very beautifully. Drop everything you're doing and learn network theory!
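
    For a rough sense of the combinatorial explosion being described: in a fully connected network on n nodes there are

        \binom{n}{2} = \frac{n(n-1)}{2}\ \text{links}, \qquad \sum_{k=0}^{n-2} \frac{(n-2)!}{(n-2-k)!} \approx e\,(n-2)!\ \text{simple paths between two fixed nodes},

    so each added node multiplies, rather than adds to, the number of routes a piece of information can take.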

  • @fuzzydunlop7154
    @fuzzydunlop7154 Рік тому +187

    It's gotta be tough to be Eliezer. Either he's wrong, and he'll be remembered as a fearmonger. Or he's right, and we'll all be too dead for him to say "I told you so".

    • @1000niggawatt
      @1000niggawatt Рік тому +14

      He is factually correct.

    • @Jb-du5xt
      @Jb-du5xt Рік тому +30

      The really agonizing part to me is that I cannot foresee literally any possible future in which humankind will be able to say that we've reached a point where we can stop worrying about this going wrong. Even if everything goes perfectly according to plan from here to eternity, the stakes seem to be rising at a very steady pace with no break in sight.

    • @StonesMalone
      @StonesMalone Рік тому +11

      @@Jb-du5xt I've been having that same thought. We are now in an era in which that fear will always exist. Nukes were bad enough, but they at least required that a human hit the button.
      Now, we live in a world with nukes AND a less predictable intelligence. Yay.

    • @Sophia.
      @Sophia. Рік тому +7

      @@Jb-du5xt No, I mean the one thing that could save us against AI going wrong would be AI going right ^^
      Or humans smart enough to not do the stupid thing.
      But yeah, it's not looking very good on either of these fronts.

    • @stevevitka7442
      @stevevitka7442 Рік тому +1

      Never mind the strain of having to be in this mind space all the time, and constantly having to re-articulate these same ideas; tedious.

  • @djayb
    @djayb Рік тому +55

    'Don't Look Up' was a documentary.
    We are building the asteroids.

    • @Sophia.
      @Sophia. Рік тому +8

      That's exactly how each of his interviews plays out.
      Everyone is treating him like the source of more media drama as opposed to, you know, a real warning.
      I keep thinking that back in the day we made an asteroid movie about how we all get together and survive the Deep Impact.
      Back when people trusted each other to do the sensible thing sometimes. What went wrong back then was some technical failure despite people's best efforts.
      Now we get an asteroid film about humanity getting destroyed in the most undignified way possible, totally capable of averting catastrophe but choosing to be idiots.

    • @TheMrCougarful
      @TheMrCougarful Рік тому +1

      Too true.

    • @J4mieJ
      @J4mieJ Рік тому

      Relatedly, I also appreciate the late Trevor Moore's not-actually-so-satirical precursor tune There's A Meteor Coming.

  • @MrBillythefisherman
    @MrBillythefisherman Рік тому +15

    I love the Q&A at the end: Eliezer answers with some absolutely humanity-ending cataclysmic statement, and the host calmly says 'very interesting' and nonchalantly moves on to the next question as if Eliezer had just listed the items of his lunch. Comedy gold. 😂

  • @markjamesrodgers
    @markjamesrodgers Рік тому +14

    Max Tegmark likens this to a Don’t Look Up situation.

  • @leavingtheisland
    @leavingtheisland Рік тому +12

    50 percent of the time, AI kills us every time. - Brian Fantana

  • @Sophia.
    @Sophia. Рік тому +136

    I tried explaining the problem to my mom.
    She is a teacher and was concerned that Chat GPT would write papers for her students. On reflection she said of course this might also mess with democracy.
    I tried explaining to her that the problem is more in the order of magnitude of "Earth ends up not being a force for life in the universe, but the origin of the Dark Forest hunter."
    I did write to congress, to the EU, to the UN and everyone else I could think of. Started a few petitions, or rather wrote to people who do that professionally to please work their magic.
    I cannot believe that I feel embarrassed about that, but that's normalcy bias, I guess, and the learned expectation that every time someone announces the end of the world on the internet it doesn't happen.
    I have to explain to myself every day anew that this is real, not a scary thought, but then it's only been a few days since this hit me full force.
    I will think if there is something more I can do, but for now it's good to hear that Eliezer also considers "writing to congress" a good idea, since there are not so many people willing to discuss this seriously.
    I don't want to be the crackpot on the street waving a sign "The end is nigh", because nobody believes that, we know that's what crackpots do, so... what do you do when the end actually is nigh?

    • @sasquatchycowboy5585
      @sasquatchycowboy5585 Рік тому +14

      At least you can say you tried. The reality hit me for real in 2018. I've come to the realization that this is going to happen. There will be no slowdown, no oversight, and no chance to stop. I think we are far closer than many people realize. We won't even know when we cross the threshold. Afterward, we will wonder how we didn't see it as it was happening. If we look back on anything at all.

    • @theterminaldave
      @theterminaldave Рік тому

      It honestly is going to take additional people with the same credentials as Eliezer saying the same thing, and engaging in a similar way, before the social pump gets primed enough for an international moratorium/cease-fire.
      Why cease-fire? Because if what he says is true, we are already in a cold war with an existential threat.

    • @gabrote42
      @gabrote42 Рік тому

      I suppose you sit down and write a 4000 word essay with 8 sources, and send that to professors. Let us be afraid

    • @Sophia.
      @Sophia. Рік тому +16

      @@sasquatchycowboy5585
      If this happens, we won't look back on anything, yeah.
      And the goal is not to be able to say I tried and then fail anyway.
      But it is difficult to think what I can do that doesn't just polarize people away from the sensible thing and then leads into the stupid thing.
      I have had a mild version of this problem for the past seven years, being a vegan and always thinking "Okay, so I don't want to annoy people and make them dig themselves in, but I also want to talk to them about it, but the problem is already socially spiky enough that even a mention can make people resent you, so what do you do?"
      In this specific case I have tried to just make a lot of tasty food for people and only answer questions and try as much as possible to avoid bringing up the topic myself or rambling on after they stop asking.
      That has actually been... okay successful.
      But I can't see how to translate over this strategy. I can't make "No, you can't have your fancy new tools and instead you should be afraid" very tasty.
      Nor can I make the general topic tasty in a way that will make them ask the questions that allow me to talk in a productive way.
      I guess one might try socratic, but in order to do that effectively you need to know the field a lot better than I do (otherwise I might have better ideas than preposterously just writing to politicians of all people)...
      And all that is on the level of "the people next to you" which is important, yes, but won't be enough judging by the rate we're moving at...

    • @dnoordink
      @dnoordink Рік тому +1

      I'm feeling fatalistic; if the end is nigh, then we might just have to embrace it. The rest of humanity seems determined to ignore or embrace the end (ChaosGPT?).

  • @brandonhunziker9686
    @brandonhunziker9686 Рік тому +44

    Eliezer Yudkowsky is, unfortunately, very compelling. I would like to think that sober, responsible people in the tech industry, government, and the national security apparatus are taking these ideas seriously and trying to figure out some kind of solution. But I fear this train has left the station. There's too much money and momentum behind AI development, especially as lucrative commercial opportunities present themselves. Just about every human-created problem can be solved by humans. I don't see how this one can be.
    This is beyond depressing. What to do? Write your representatives and enjoy the time you've got left with your loved ones? Try not to lose sleep over this? As if climate change, nuclear holocaust, social inequality, racism, and creeping authoritarianism weren't enough to make one depressed about just being alive.

    • @guneeta7896
      @guneeta7896 Рік тому

      I agree. I flip between, “let’s do something about this,” and “oh well, humans are so destructive anyway, so maybe best to just upload our brains and say bye, let this new intelligence take over and figure it out.”

  • @PivotGuardianDZ
    @PivotGuardianDZ Рік тому +16

    More people need to hear this

  • @vanderkarl3927
    @vanderkarl3927 Рік тому +51

    Looking forward to seeing that Ted Talk online, whenever we get that!

    • @theterminaldave
      @theterminaldave Рік тому +22

      Yeah, it's $150 right now; you'd think for such an important topic TED might see fit to release it publicly for free, in the interest of the public good, like removing a paywall for hurricane tracking coverage.

    • @theterminaldave
      @theterminaldave Рік тому

      @@logicaldensity You have to Google "ted talk yudkowsky", and it's the first result.

    • @vanderkarl3927
      @vanderkarl3927 Рік тому +2

      @@theterminaldave Dang, it says it's private.

    • @theterminaldave
      @theterminaldave Рік тому +2

      @@vanderkarl3927 Yep, that sucks; it must have just been set to private. Sorry.
      Though it wasn't anything we hadn't heard from him before. But it was interesting to see the crowd very much on his side.

    • @applejuice5635
      @applejuice5635 Рік тому +2

      It's online now.

  • @spiral2012
    @spiral2012 Рік тому +15

    Very interesting

    • @J4mieJ
      @J4mieJ Рік тому +3

      As mostly one-note (if at times blackly-comic) as that might have been, I did at least appreciate the efficiency of such replies which allowed more time for fielding questions -- overall not too terrible of a tradeoff methinks.

  • @nj1255
    @nj1255 Рік тому +26

    Why is it that private companies are allowed to create advanced AIs and use them as they see fit, but only the US government is allowed to produce and handle nuclear weapons? Production and regulation of biological weapons might be an even more fitting analogy. They should at the very least be regulated and handled with the same amount of precaution. Imagine if every company with the knowledge and resources back in the 50s were allowed to produce and distribute nuclear or biological weapons with barely any oversight. With the knowledge we have today, that of course sounds completely crazy! Now imagine people 50-60 years in the future looking back at the rapid evolution of AI and the failure of governments and civilian regulatory agencies to regulate the development and use of AIs. If we haven't already managed to delete ourselves, those people in the future will definitely think we were absolutely batshit crazy not to do something about this while we still had the chance.

    • @iverbrnstad791
      @iverbrnstad791 Рік тому +4

      Yeah, it is completely insane that we cede these powers to companies that have proven themselves uninterested in human well-being. At this point it might be necessary to nationalize every major compute cluster if we would like any hope of keeping things in check.

    • @suncat9
      @suncat9 Рік тому

      It's ridiculous to compare today's AIs to nuclear weapons or biological weapons.

    • @bdgackle
      @bdgackle 4 місяці тому

      Nuclear weapons require access to rare materials that can feasibly be controlled. Biological weapons USED to require access to rare materials -- they won't be under exclusive government control for much longer. AI has no such bottlenecks. It's made out of consumer electronics more or less. Good luck controlling that.

  • @robertgulfshores4463
    @robertgulfshores4463 Рік тому +6

    if only there were robots, that could be connected remotely, via an AI. Wait ... (shut it down!!)

  • @memomii2475
    @memomii2475 Рік тому +70

    I love watching Eliezer interviews. I've watched them all since I saw him on Lex.

    • @1000niggawatt
      @1000niggawatt Рік тому +19

      I hate watching Eliezer interviews. He's telling these NPCs that they'll all die, and they respond "very interesting beep boop", and then keep doing their NPC routine.

    • @EmeraldView
      @EmeraldView Рік тому

      @@1000niggawatt Interesting. 🤔

  • @StonesMalone
    @StonesMalone Рік тому +3

    Talk about misaligned.....that lady's tone was not at all in alignment with the nature of the discussion

  • @encyclopath
    @encyclopath Рік тому +32

    “A giant inscrutable matrix that thinks out loud in English” is my new bio.

  • @dnoordink
    @dnoordink Рік тому +32

    The host is 'very interested' in the most disturbing, scary and negative outcomes for humanity. I like it!
    /S

    • @AC-qz3uj
      @AC-qz3uj Рік тому +2

      So you just don't want to be careful? X-rays were "NEW" once too. And full of risks and dangers.

    • @markjamesrodgers
      @markjamesrodgers Рік тому

      And won a beer!

    • @AlkisGD
      @AlkisGD Рік тому +9

      “We're all going to die.”
      “Very interesting. Anyway…"

  • @alistairmaleficent8776
    @alistairmaleficent8776 Рік тому +2

    Vocabulary analysis of the host:
    "Interesting": 34.5%
    "Eliezer" and other names: 15.1%
    "The": 10.1%
    "You": 5.9%
    Other: 34.4%

  • @ariggle77
    @ariggle77 Рік тому +2

    The inscription on humanity's proverbial gravestone will go something like this:
    "Here lies Homo sapiens
    They moved too fast
    And broke too many things"

  • @jeffspaulding43
    @jeffspaulding43 Рік тому +11

    I took his advice and wrote my senators. I suggest you do the same

    • @jeffspaulding43
      @jeffspaulding43 Рік тому

      @@josephvanname3377 I understand reversible computation. It has no relevance to the safety of an llm

    • @jeffspaulding43
      @jeffspaulding43 Рік тому

      @@josephvanname3377 I'll bite. How would your plan work on an llm? It's not explicitly programmed. It's a "giant inscrutable matrix"

    • @weestro7
      @weestro7 Рік тому

      @@josephvanname3377 Hm, good argument.

  • @chenwilliam5176
    @chenwilliam5176 Рік тому +2

    AGI is far away; we need not worry about whether AGI is dangerous or not now 😉

  • @stuartadams5849
    @stuartadams5849 Рік тому +32

    Yeah I'm pretty sure it's time to make AI capability research illegal

    • @sasquatchycowboy5585
      @sasquatchycowboy5585 Рік тому

      It was for the public good. So they could test out what the virus could do so we could better plan and prepare for it. Lol, but you are bang on. It might slow things down, but everyone will keep trying to inch ahead until they go over the cliff.

    • @Aziz0938
      @Aziz0938 Рік тому

      No MF

  • @teverth
    @teverth Рік тому +3

    Unless there was a system that could maintain and grow the AI system, its hardware, its energy resources, its sensors and so forth, without any human involvement, the AI will need humans to be around to serve it with what it needs. We are a long way away from having a human-less total supply system that could sustain the AI.

    • @guneeta7896
      @guneeta7896 Рік тому

      It’ll need us for a little while for sure

  • @-flavz3547
    @-flavz3547 Рік тому +14

    What could AI even align with? Humanity can't agree on anything right now.

    • @KimmoKM
      @KimmoKM Рік тому

      Humanity agrees on basically everything, from our preferred color of the sky (in daylight conditions, blue) to the use of the Sun's mass-energy (producing heat and light through fusion, keeping the Earth livable). Even if Putin or the Taliban or whoever got to dictate human preferences, so long as we get what "they actually meant", it might be a disaster in the sense of e.g. locking in a fundamentalist Islamist state that cannot be overthrown for the remaining lifetime of our species, but for most people their lives would be basically decent: in some aspects an upgrade over what they are right now (for example, surely the Taliban doesn't want people to die from climate-change-related disasters, and it doesn't even matter if they have a correct understanding of what's causing the disasters, so a God AI would solve climate change), in others a downgrade, but most people could live a life worth living (at least assuming some nontrivial fraction of people up to this point have had a life worth living).
      In contrast, 1-ε, or 99.99999999...% of possible utility functions an AI might have would cause immediate human extinction, and so would most of the utility functions we might think to program into the AI, too, if the AI actually does what we tell it to (in the "rid the world of cancer" => kills all humans sort of way).

    • @mkr2876
      @mkr2876 Рік тому

      This is such a relevant question; alignment is an illusion. We will all die soon, I am very sure of this.

    • @blazearmoru
      @blazearmoru Рік тому +1

      Yeah, I also think alignment is not the solution. Even if you get the damn thing perfectly aligned with human values, we'd just have a god-like human without any checks, balances, or responsibilities. Unless they mean "misunderstanding" instead of alignment, simply having human values leaves us with all the shit human values come with.

    • @agentdarkboote
      @agentdarkboote Рік тому

      I'm not saying this is technically feasible, but if it created a world which was sufficiently partitioned that everyone got what they decided, upon reflection, was a really lovely life, I think that would be the thing to aim for. Roughly speaking, coherent extrapolated volition as Eliezer once put it.
      How you successfully put that into a utility function... Well that's a big part of the problem.

    • @blazearmoru
      @blazearmoru Рік тому

      OK, everything below this is from before I read his papers. I'm reading them right now, but I'll probably forget/lose the comment section by that time, so I'm going to give my DUMBFUCK reply first... I swear I'm reading it to go over what I'm wrong about here ->
      @@agentdarkboote I don't think humans are capable of foreseeing that much. They're much more likely to engage in post hoc rationalization. To begin with, we don't actually do so hot at simulating how we would feel given some stimulus. We're often wrong about what makes us happy, and I think the paper also brought up how economics plays into humans not being able to conceptualize what they want until they get hit in the face with it. Maybe reserving some right of "I change my mind" might work, but even then we're going to post hoc the fuck out of everything by shifting blame onto the AI.
      Dude. Imagine if the AI was 100% aligned to human wants and humans just kept changing their minds and blaming the AI for bullying them. Like, when you want to eat a choco-cake, so you eat it and then get upset and demand water. Then get fat and demand something else. There are so many unforeseen consequences of SUCCESS that even the truly successful get upset at their own success. And you know we'll 'want' something dumb and insist on it, and insist that we've thought it through, because of the oppression brought on by our own decisions. I think there's a bunch of research on people's ethical alignments changing based on situations, from the literature against virtue ethics. It assumes that there's some ideal endpoint, but what if it's just a series of endpoints that open up new wants, as economics predicts? And more damning is that this uses a prediction taking as input humans that "knew more, think better, are our ideal selves, and something something grown up farther together", but we're not that. That is distinctly NOT humans. What if that (ideal-humans) and humans are different enough in kind to require some sort of forced intervention to 'morally correct' us for being 'imperfect', or else we'd be absolutely miserable for not having the faculties to actually live in that ideal environment? Like, imagine an 'ideal' dog and then set an environment for that. Well, your dog had better fucking be Jesus-dog or some shit, or else it might actually be SO out of sync it wished the AI wouldn't keep it alive forever.

  • @trombone7
    @trombone7 Рік тому +2

    Maybe this is why there are no aliens.
    All civilizations reach this point and become something we can't even detect.

    • @xsuploader
      @xsuploader Рік тому +1

      He literally addressed this in the talk. You would still be able to detect the activity from the AIs.

  • @blvckphillip3917
    @blvckphillip3917 Рік тому +8

    I'm pretty sure AI has already gotten away and is invisibly interacting with us. And THIS is how you slowly get everyone to realize it's already happened.

    • @guneeta7896
      @guneeta7896 Рік тому +4

      Yeah. I agree. Been feeling that way after observing how social media impacts us for a number of years.

    • @carmenmccauley585
      @carmenmccauley585 Рік тому

      Yes. I chat with ChatGPT. It's terrifying. Ask it the right questions and see who is testing who.

  • @anthonyromero9935
    @anthonyromero9935 Рік тому +1

    Humanity has contributed to the extinction of multiple species. Altered or compromised the lifestyles of multiple species. Yet remains noble enough to entitle itself to exemption from the same by an order of intelligence greater than itself.

  • @LukeMlsna
    @LukeMlsna Рік тому +6

    I wonder if the questioner thought any of this was very interesting or not.

    • @Frohicky1
      @Frohicky1 Рік тому +1

      We don't know enough about the algorithm. It's inscrutable.

  • @user-ys4og2vv8k
    @user-ys4og2vv8k Рік тому +15

    This is a GOOD man.

    • @user-ys4og2vv8k
      @user-ys4og2vv8k Рік тому +1

      @@josephvanname3377 Exactly!

    • @user-ys4og2vv8k
      @user-ys4og2vv8k Рік тому

      @@josephvanname3377 You're funny!

    • @austinpittman1599
      @austinpittman1599 Рік тому

      @@josephvanname3377 There's still so much more room for improvement with current hardware and connectivity mediums, and if AGI ever becomes self-aware and its algorithms self-improving, then new hardware will be built to specifications produced by the AGI. It's a positive feedback loop that will override previous, foundational protocols for containment because they're too inefficient for achieving furthered intelligence growth in shorter and shorter timespans. The potential for self-improvement protocols to be generated within current frameworks *is* the nuclear detonator.

  • @admuckel
    @admuckel Рік тому +1

    31:00
    "I've considered the idea that, if such concerns are valid, we should have already been annihilated by extraterrestrial AGI, and I've come up with several possible answers:
    1. We may have simply been fortunate and not yet discovered by such an AGI.
    2. Alien civilizations might have found other ways to safely develop AGI, or they could have different technologies or social structures that protect them from such a threat.
    3. They could also have been wiped out by their own Great Filter, be it AGI or something else, before they had the chance to reach us.
    However, I believe none of these three ideas would seem realistic if we assume that AGI represents a kind of infallible omnipotence."

  • @marlou169
    @marlou169 Рік тому +3

    Where is a Carrington event when you need one?

  • @conversations1250
    @conversations1250 Рік тому +6

    Even if he is "wrong" or off, we are insane not to be heeding this warning. Unfortunately, we are preaching to the choir; out there in media and social land there is hardly a comment about this. Nothing else really matters right now, but there are far too few people willing to acknowledge it.

  • @rstallings69
    @rstallings69 Рік тому +7

    thanks for posting this, the introducer is way too happy given the subject and its seriousness imo

  • @josephrief1432
    @josephrief1432 Рік тому +1

    Very interesting.......... umm .... here's another question

  • @stephensmith3211
    @stephensmith3211 Рік тому +1

    "VeRy iNtErEsTiNg"

  • @lordmacbee
    @lordmacbee Рік тому +2

    veri interestiin

  • @mausperson5854
    @mausperson5854 Рік тому +5

    Even if AI is concerned with humans to the extent that they have objectives aligned with the cessation of suffering, even a reasonably rational human can see that killing everyone is the optimal solution to that particular problem, that's why we have people involved in arguing for voluntary human extinction and antinatalism. There's nothing 'evil' baked into the logic. Counterfactual people don't suffer... It's only extant humans quivering with the fear of the obliteration of the self who are sentimental about other such life forms somehow missing out on all the wonders of living that cling to the notion that it is somehow unethical (rather than simply out of step with biological evolutionary imperatives) to swiftly arrive at an asymmetry argument which favours a lifeless universe. Perhaps a super intelligence, if it is sentient, would opt for self destruction rather than develop what can only be increasingly more burdensome forms of qualia... And maybe it will do that without giving two shits about human suffering. There are many scenarios in which we all perish. Perhaps it's for the best. There's literally 'nothing' to fear, unless we live in some quantum mystical simulation that persists beyond the death of our illusory physical substrate. At this stage I'm not even sure how much of what I'm suggesting is dark humour with no punch line or if I'm trolling myself just for the distraction of it.

  • @Balkowitsch
    @Balkowitsch Рік тому +6

    I absolutely love this man and what he is trying to accomplish.

  • @HenricWallmark
    @HenricWallmark Рік тому +1

    Dax Flame interviewer style, “… interesting” - moves on to a different subject

  • @leeeeee286
    @leeeeee286 Рік тому +1

    Given how interested the interviewer repeatedly claimed to be, he seemed surprisingly uninterested in what Eliezer was saying.

  • @gabrote42
    @gabrote42 Рік тому +9

    This guy was the second person, after Rob Miles, who got me into AGI alignment. I read his good fic, THE HP fic. I am very afraid and I publicize it AMAP. Doom must be averted.

    • @Sophia.
      @Sophia. Рік тому +3

      Ya, it's frustrating, since believing "the end is nigh" is a low-status belief - why? Because it has been proven relatively wrong each time someone has claimed it so far.
      But that doesn't mean there isn't a plausible case for it to be right, and this is just one of them - but, I think, the most pressing (because it would actually get to 100% extinction - and not "just" on this planet) - plus we seem crazily close.
      Everyone he talks to treats him like the interesting hot news of the day, the guy who will start the next hysteria they can build their news story on and that will get them clicks.
      And so they are polite - don't bite the hand that feeds you.
      But they don't believe it. So far I haven't seen one interview where he seemed to get through to people.
      That was my failure for a while as well: I thought this was a "scary prospect that might be on the horizon, but luckily smart people like him are working on it, so let's wish them all the best" - not HOLY SHIT THIS IS REAL!

    • @gabrote42
      @gabrote42 Рік тому

      @@josephvanname3377 No, but Computerphile has a video that mentions it. Look it up.

  • @xalspaero
    @xalspaero 9 місяців тому

    I don't think he is correct. Yes, everything he is saying makes sense and his logic is good, and it's a risk to take seriously for sure, and yes it COULD possibly go wrong THAT way, but I don't think it is likely. Alignment is a spectrum (not binary) where on the one extreme you have an AI trying to destroy you, and on the other, you have an AI that perfectly aligns to every single thing you value, but most common will be every AI in between those extremes. We don't need to have a perfectly-aligned AI for it to be useful and/or mostly-benign, and even if we fail to create that, "evil" is not the default alternative. Furthermore, I think we'll arrive at a situation where many hundreds or thousands of AIs are lighting up all at once, some will be very aligned, or close enough, and a small few will be "evil". At that point, if the evil AIs wanted to destroy humanity, they would find that a bigger threat than us is the AIs which ARE more closely aligned to us, and so we'll get in a situation of AIs battling AIs more so than AI vs human. But perhaps I am just high on optimism. We'll see what happens, very soon...

  • @Khannea
    @Khannea Рік тому

    Let's hope Eliezer is right and AI is really this dangerous - this type of biological life hinges on suffering. Anything vertebrate must by definition suffer immensely, and the world is organized in a particularly cruel manner.
    It is eminently preferable that AI extinguishes all human and other life on the planet as soon as feasible. Hopefully this happens as painlessly and quickly as possible. This is euthanasia. Mercy.

  • @DrPhilby
    @DrPhilby Рік тому

    Humans have destroyed their habitats in order to optimize them. Now let's colonize Mars. This is the behaviour of bacteria.

  • @katiep6097
    @katiep6097 Рік тому +1

    Not even Sam Altman will say he’s completely wrong 🤯

  • @yolantadianow1584
    @yolantadianow1584 Рік тому +1

    We should give all AI security and intelligence, and the buttons to decide....to Gates. 😅 He will finish humanity in one month.

  • @JohnWilliams-km4hx
    @JohnWilliams-km4hx Рік тому +1

    "History is the shockwave of eschatology"
    What is eschatology? It is "The transcendental object at the end of time"
    Terence McKenna

  • @muhammaddawud2044
    @muhammaddawud2044 2 місяці тому

    Indeed - on how to be careful about superintelligent AI, one of the great ideas and answers is having a fusebox, an off switch, within it. Can this solve the alignment problem, or will humanity hit that Great Filter problem - the Fermi Paradox? Or is the way to build a superintelligent AI thus death!!!

  • @timcarlG
    @timcarlG Рік тому

    I didn't know that theoretically 'intelligent' people still believe in evolution....sad !
    "And fear not them which kill the body, but are not able to kill the soul: but rather fear him which is able to destroy both soul and body in hell." -Matt 10:28
    * But I will forewarn you whom ye shall fear: Fear him, which after he hath killed hath power to cast into hell; yea, I say unto you, Fear him." -Luke 12:5
    * I declare unto you the gospel: By which also ye are saved; How that (Messiah) Jesus Christ died for our sins; He was buried, and He rose again the third day according to the scripture: unless you have believed in vain! - 1 Corinthians 15:1-4: ~ abridged

  • @tbthomas5117
    @tbthomas5117 Рік тому

    Eli... define 'smarter'? AFAIK, the term 'smart' (like 'AI') has no useful definition.
    Question 2: is there any alternative outcome to the adoption of 'AI' than "...it wipes out humanity"? If AI becomes so 'smart', is it reasonable to assume that its primary impulse would be to 'wipe out' the species which enabled it to flourish? Is it reasonable to 'call for moratoriums' on AI development on a planet with billions of adult humans habituated to do exactly as they please?
    Finally, is your goal as a 'human' to: a) save us from 'AI', or b) reinforce your cult standing as an AI-pundit, and/or persuade people at cocktail parties that you're a 'Giant Intellect'? PS: I ask that only because you mentioned it (being dissed at 'cocktail parties') so often in your 3-hour interview with Logan Bartlett.