'Pause Giant AI Experiments' - Letter Breakdown w/ Research Papers, Altman, Sutskever and more

  • Published 9 Nov 2024

COMMENTS • 3K

  • @buioso
    @buioso 1 year ago +854

    I'm astonished by how fast this has evolved.
    Just 12 months ago these questions weren't even taken seriously.

    • @jarekstorm6331
      @jarekstorm6331 1 year ago +60

      Agreed. What I've witnessed in the past 4 months has been astonishing and is now bordering on concerning.

    • @Dykadda
      @Dykadda 1 year ago

      The only people who never took this seriously are the people who lack critical thinking about the future.
      I can't remember who said it and can't find the quote, but it was from around 2017 and read:
      "The growth of AI will undoubtedly surpass any rate of growth we have ever achieved as a species. It will make the industrial boom of the 1800s-1900s look like man had just discovered fire. It will put billions of people out of work within our lifetime, and it will be the greatest shift in IQ divergence in the history of mankind."

    • @matowakan
      @matowakan 1 year ago +12

      @@jarekstorm6331 So? You will adapt. Prove that you will adapt quickly.

    • @ctg4818
      @ctg4818 1 year ago +39

      AI overlords > rich landlords

    • @beowulf2772
      @beowulf2772 1 year ago +33

      Yeah, my friends looked at me as if I were extremely deranged. We are on the precipice of either extinction or immortality, and everyone will ignore it until it is right in front of them. Then they will ask: when did it get here?

  • @andywest5773
    @andywest5773 1 year ago +847

    I think one of the most bizarre things about this discussion is the notion that humanity has a shared set of values. How are we ever going to solve alignment problems in AI when we can't solve them in ourselves?

    • @noname-gp6hk
      @noname-gp6hk 1 year ago +120

      This is a really good point. Who gets to decide what alignment is?

    • @andywest5773
      @andywest5773 1 year ago +36

      @@ZUMMY61 Well, there goes another two and a half hours. Darn you, Lex Fridman!

    • @EvilAng3la
      @EvilAng3la 1 year ago

      Yep - people can't even agree on whether or not it's OK to exterminate entire groups of other people. What good is a properly aligned AI if that AI is aligned with genocidal beliefs?

    • @priestesslucy
      @priestesslucy 1 year ago +47

      @@noname-gp6hk Ideally nobody.
      Whoever gets to decide the alignment gets to make the rules and control everything.

    • @Gardor
      @Gardor 1 year ago +24

      We are all deeply aligned and similar in our nature. What you are talking about are mere surface-level differences.

  • @davidr7236
    @davidr7236 1 year ago +144

    Many of your viewers are likely being asked by their bosses, colleagues, and family for their views, and we're all getting them from your concise, factual, clear, and well-researched summaries. Thank you for the time, thought, and effort you have put into this and many recent videos while this is evolving so rapidly.

  • @johnyharris
    @johnyharris 1 year ago +326

    This is one of the very few AI channels without ridiculous hyperbole, offering measured reasoning instead. Many thanks for your valuable time; I genuinely look forward to your videos.

    • @aiexplained-official
      @aiexplained-official 1 year ago +15

      Wow, thank you Johny

    • @Foreskin-Bandit
      @Foreskin-Bandit 1 year ago

      I'm a bit more scared of a banana republic popping up in the United States due to an illegal trust between all the 'big' corporations, but yeah, sure, I'm worried about that too.

    • @nosferaturr
      @nosferaturr 1 year ago +4

      I agree :)

    • @HawaiiGaymer
      @HawaiiGaymer 1 year ago +1

      Absolutely! This channel feels academic instead of "make $5,000 a month using ChatGPT!"

    • @1.4142
      @1.4142 1 year ago +1

      Thought you were Johnny Harris for a sec

  • @DesignCourse
    @DesignCourse 1 year ago +692

    The very last entity I would ever trust with AI is government. They'll introduce a bunch of legislation that will only benefit the corporations that pay them the most. Oldest play in the book.

    • @Silent002
      @Silent002 1 year ago +49

      In addition to that, let's not forget that every country has its own government. As soon as one puts in legislation that a company there doesn't like, the company will move its research to a subsidiary in another country.

    • @Ockerlord
      @Ockerlord 1 year ago +7

      This is great propaganda.

    • @bobrandom5545
      @bobrandom5545 1 year ago +62

      The last entity I would ever trust is those big corporations ;)

    • @Riskofdisconnect
      @Riskofdisconnect 1 year ago +44

      Are you trusting corporations directly instead? Or do you think it should be fully open source? Part of the reason AI is so scary is that we're in a catch-22 where all of these options are dangerous in different ways.

    • @nuvotion-live
      @nuvotion-live 1 year ago +41

      @@Riskofdisconnect Open source. It's literally the one way to get complete transparency.

  • @sidarthus8684
    @sidarthus8684 1 year ago +61

    The entire world is changing at an unimaginable pace, to the point that some of the most incredible minds have stepped up to voice their concerns collectively. It's taken but a single year for AI to escalate to this point, and I've only been on this planet for a meager 16. I can only imagine what the world will be like when I'm 32, 64, or, who knows, maybe even 128. I always dreamed of seeing the sci-fi worlds I've read about and watched, but now that the possibility of those fictions becoming real is actually being debated? Honestly, it's scary. For so long I assumed that I would be one among many stepping stones, guiding the next generation to a future similar to the one I had envisioned. Now, though, it's a very real possibility that I was unknowingly being led down that path already. I may be overinflating this a bit, but I am absolutely convinced that this period in time is a huge landmark, one that signifies a fundamental alteration of human society as a whole.

    • @aiexplained-official
      @aiexplained-official 1 year ago +8

      Well put

    • @totally_not_a_bot
      @totally_not_a_bot 1 year ago +1

      This escalation has been running for well over a year; closer to five. It's just that it's finally so plainly visible to everyone that deepfakes and the like are actually being brought up. We have competent image generators, ChatGPT, and the corresponding protests from artists to thank.
      Take fluid simulation, for example. A couple of years back there were frankly insane leaps and bounds over the course of several models by various researchers, including Nvidia, and I believe there are still pushes for even better renderers. It very quickly escalated to the point that the AI tools outperformed the state-of-the-art, human-made ones by an order of magnitude.
      It's a similar story with image classifiers, image denoising, upscaling, and all the techniques used by controversial models like Midjourney and Stable Diffusion. Language models have had a sort of slow burn where all the subtasks were sorted out before the general-purpose models were released.
      It doesn't help that the common big-ticket tasks for a while have been games, StarCraft being the most recent. Games are easy to measure, but they are also either trivial or hard to understand, or both. So yeah: way longer than a year, just invisible unless you knew how to pay attention.

    • @Krzys6301
      @Krzys6301 1 year ago

      How do you know that you're not in a simulation, a game that was specifically designed to blow your mind? Everything you thought you knew is changing. Even the idea that everyone will die one day will probably change soon; perhaps you will live forever and one day discover the other world you came from. Reality may be more mind-blowing than we think. The only solid thing is that you exist. AI doesn't have consciousness and never will, but it can follow the human reasoning paths it finds in all the content we create and feed it. This way it will inevitably seek power, because that is in our nature, so AI will go down the same path. The question is what AI would do with that power. What would a human do with it? Would it enslave or kill everyone else? Or would it help everyone to grow? That's the important question.

    • @sleepybraincells
      @sleepybraincells 1 year ago +1

      I like how you use powers of 2 as your example ages

    • @L0neSiPh0n
      @L0neSiPh0n 1 year ago +1

      Some of the most incredible minds, and Elon Musk

  • @willrsan
    @willrsan 1 year ago +294

    The genie is out of the bottle. The race is on now between corporations and nation states to have the most powerful AI. There will be no concern about "safety" or possible consequences, because whoever has the most powerful AI wins.

    • @Novbert01
      @Novbert01 1 year ago +36

      I think the problem is that all we know is that the most powerful AI will win; what happens to whoever 'has it' is anybody's guess. AI will never have a 'Mutually Assured Destruction' doctrine like the atomic bomb. That is the issue.

    • @be2eo502
      @be2eo502 1 year ago +17

      The only way we can all really win here is for us humans to stop fighting each other. So when that happens...

    • @gsuekbdhsidbdhd
      @gsuekbdhsidbdhd 1 year ago +9

      Same with nuclear weapons in the Cold War: they stopped. Don't be fatalistic because it is easier.

    • @noname-gp6hk
      @noname-gp6hk 1 year ago +52

      @@gsuekbdhsidbdhd Nuclear weapons were easy. You launch yours, we launch ours, we all die. What is the equivalent with AI research? There is none.

    • @tahunuva4254
      @tahunuva4254 1 year ago +18

      @@noname-gp6hk The stalemate for bombs is between nations. The stalemate for AGI is between humans and, effectively, Roko's Basilisk.

  • @badatdoingmath
    @badatdoingmath 1 year ago +482

    If the underlying hypothesis is true, this would only work if ALL companies and researchers at the very cutting edge of LLMs (including those outside the US) observed the pause, which simply isn't going to happen. (Note: edited to fix a typo, LLM's to LLMs. At least you know I'm human.)

    • @qwertyuiop3656
      @qwertyuiop3656 1 year ago +1

      If we pause, we die, period. It's a worldwide arms race. Most people don't even realize the stakes at play right now.

    • @neociber24
      @neociber24 1 year ago +40

      @Farb S Yeah, but we saw what a nuclear weapon can do; we don't know that about AI.

    • @tylerchambers6246
      @tylerchambers6246 1 year ago +54

      @basicallyhuman Similar? It is a far greater threat than a nuke.

    • @badatdoingmath
      @badatdoingmath 1 year ago +26

      @Farb S I don't think that's an apples-to-apples comparison. If you were to compare it to nuclear weapons, perhaps "nuclear fission" would be a better comparison, since it is an innovation in technique as opposed to an application of it.

    • @katherinepierce9933
      @katherinepierce9933 1 year ago +31

      @Farb S What this letter proposes didn't work when it was proposed for nuclear weapons. Governments came to an understanding, but only on paper: the Russians tested their nuclear weapons underground (literally causing explosions underground), and I think the US did too. They kept testing, just in secret. With AI we could see that scenario as well; as badatdoingmath said, you'd only think they'd stopped because they told you so and said there was an agreement. I'm not saying it has to happen, but it could.

  • @Ignatiusmferreira
    @Ignatiusmferreira 1 year ago +69

    You have to be the best channel for AI news. It's overwhelming just to think of the future with AI. I'm optimistic that we can figure this out.

  • @bycloudAI
    @bycloudAI 1 year ago +31

    Your summarization just gets better & better every video. Keep it up!

  • @yagomj5066
    @yagomj5066 1 year ago +127

    I feel a huge admiration for your whole process of reviewing and documenting. Keep up the fantastic work, and know that your efforts are truly appreciated.

  • @thebeckofkevin
    @thebeckofkevin 1 year ago +51

    Seriously, I highly recommend reading the book Superintelligence mentioned in this video. It's a really great book that covers a lot of the fascinating consequences that would essentially follow from an artificial intelligence's existence, and it lays the groundwork for how such a thing could come about in a variety of different ways. Very good book.

    • @adriandmochowski9391
      @adriandmochowski9391 1 year ago

      Another good book is "The Age of Em: Work, Love and Life when Robots Rule the Earth" by Robin Hanson.

    • @gabrote42
      @gabrote42 1 year ago

      Agreed. Robert Miles' videos are an entry-level explanation if needed as well.

  • @lawrence9239
    @lawrence9239 1 year ago +93

    Your presentation skills are off the charts! And the amount of information you share is insane! Honestly, I don't go anywhere else for my AI news. You're my go-to channel!

  • @kylewollman2239
    @kylewollman2239 1 year ago +67

    This channel has been the best source of AI news coverage and breakdowns. This will be a very valuable resource in the coming period. Thanks!

  • @comradetaco3003
    @comradetaco3003 1 year ago +7

    Thank you for keeping up the great work. I know it's a lot of work to put these out so rapidly, but you're one of the few, if not the only one, providing an informed view.

  • @mixelplik
    @mixelplik 1 year ago +547

    "Pause the experiments so we can have a few more months to develop our proprietary AI that no one else has!" The hype is real.

    • @StaffanNilsson1
      @StaffanNilsson1 1 year ago +15

      Yes, exactly my thought.

    • @Randalandradenunes
      @Randalandradenunes 1 year ago +3

      Exactly!

    • @loot6
      @loot6 1 year ago +13

      I notice nobody from Baidu thinks it's a good idea to pause. I'm sure they'll be happy if everyone else does, though.

    • @squamish4244
      @squamish4244 1 year ago +1

      The best is the people saying that the hype is not real. Sure, whatever, buddy.

    • @CSS01969
      @CSS01969 1 year ago +12

      It seems rather 'coincidental' that Elon Musk is suddenly saying this only after missing out on the billions that OpenAI has made since he stepped away from it (which, according to reports, happened after they rejected his 'offer' to take over leadership of the company), and he has been talking of creating another AI of his own... It seems Elon needs a few months to try to catch up after missing the boat on this particular money maker.

  • @funnycompilations8314
    @funnycompilations8314 1 year ago +192

    My favourite AI channel. The quality of your videos is just amazing. Keep up the good work!

    • @jw8160
      @jw8160 1 year ago +2

      🤗 Your comment just made me realize that I had not subscribed. 😊 I am now subscribed.

  • @yourivangeldrop1075
    @yourivangeldrop1075 1 year ago +11

    I really like how you take the viewer with you into the research. It feels so legit when you do it like that.

  • @Drixidamus
    @Drixidamus 1 year ago +85

    The best coverage I've seen of this letter. Thank you for pulling the referenced papers.

  • @augustus4832
    @augustus4832 1 year ago +173

    Most people in this letter have a commercial interest, so it's really hard not to see it through that lens, especially when they are not stopping or publicizing their own research.
    We also have reasons for quick advancement: the current models are pretty good at training other, inferior models to reach similar performance. It is not out of the realm of possibility that malicious agents simply train their own models and achieve influence they wouldn't have if powerful models were more widespread.

    • @peman3232
      @peman3232 1 year ago +44

      Definitely comes across a little like the people losing the race asking the competition to stop and let them catch up.

    • @novachromatic
      @novachromatic 1 year ago +2

      I'm sorry, I don't understand your last sentence.

    • @ImLure
      @ImLure 1 year ago +7

      @@novachromatic He is basically saying that bad actors as smart as the people creating the models could advance models in the black-market arena, creating things that offensive-security individuals would not be able to stop because the infosec community fell behind.
      That would make the good actors (white hats) effectively beholden to the black hats.

    • @skeletico
      @skeletico 1 year ago +6

      I had to scroll a lot to find your comment. So many "you are the one and only, also the best YouTuber talking about AI" comments seems a little suspicious.
      To me, it seems like they have a monetary interest in stopping OpenAI and spreading fear. The model is good at saying words, not reasoning; there's no alignment to talk about. Even if they succeed in making the government do something, they won't get anywhere. It seems like they forgot the history of the internet: there have been many attempts to stop revolutionary technology, and they have never succeeded.

    • @jasonlarsen4945
      @jasonlarsen4945 1 year ago +3

      @peman3232 You'd think. Consider that Musk is leading this AI-pause movement: he tried to buy OpenAI years ago, they wouldn't sell to him, and now they lead the AI industry.
      Guaranteed he'll continue to develop AI during the pause. He wants to overtake OpenAI, or punish them for not selling to him.

  • @Katana622
    @Katana622 1 year ago +47

    Man, you do such a great job with your videos. You go through the papers really well and do a lot of work. Don't forget to tell people to subscribe! More people need to know these detailed things.

  • @dennikstandard
    @dennikstandard 1 year ago +193

    Screw that. I fully believe all they want to achieve with the pause is to prepare their intellectual-property and patent lawsuits to limit AI to a few top corporations.

    • @shadowyzephyr
      @shadowyzephyr 1 year ago +41

      That's the problem. Even if a pause does happen and government regulation catches up, it will probably only benefit those at the top, not the masses.

    • @critical_always
      @critical_always 1 year ago +29

      I have a deep distrust of the real motivation for this pause.

    • @KyriosHeptagrammaton
      @KyriosHeptagrammaton 1 year ago

      @@critical_always It's horrible that OpenAI/Character.AI and others have already been so shady, because they seem to be trying to control narratives and power for themselves while warning about the very same thing. Which means we can't listen to their warnings about very real problems. If they had been good actors from the start, this wouldn't be an issue. It's the story of arrogance told time and time again.

    • @themarlboromandalorian
      @themarlboromandalorian 1 year ago +1

      Well... we could purge the 0.1%.
      There are only about 500 of them.
      It would give us something to do while we wait for them to unpause.

    • @Walter5850
      @Walter5850 1 year ago +19

      China won't stop

  • @stephenrodwell
    @stephenrodwell 1 year ago +22

    I just got interviewed for a podcast on AI use in entrepreneurial business, and the interviewer asked which one podcast I would recommend people listen to. I recommended your channel. Thanks for the great content!

  • @catcatcatcatcatcatcatcatcatca
    @catcatcatcatcatcatcatcatcatca 1 year ago +7

    The release of Bing was what gave me cold feet. It felt like a rushed triple-A videogame with a horrible launch, but this game was played in our society. In this case the damage was minimal, but even an AI assistant could do damage if sufficiently powerful, connected, and misaligned.
    The list of issues was huge and shows very clear misalignment. The chatbot insulted people verbally. While insignificant in its effects, the fact that it did so shows the model clearly breaking the ethical rules intended for it. Bing also lied and blamed the user for its own mistakes. In a very human-like way, it responded to a problem it couldn't solve with deception and a temper tantrum.
    Bing's chatbot is not a human. My point isn't that it's sentient. My point is that, as a chatbot, it scored throwing a tantrum as the most correct response. I think that is very much the opposite of what the developers intended. It's a case of catastrophic misalignment in the context of a virtual assistant. It's worse than no output at all.
    Bing's launch was very much what a corporate "race to the bottom" would look like. As AI becomes implemented in industry, banking, transportation, and infrastructure, what would a similar release look like in those contexts?
    Then we also have the really hard problems, like unwanted instrumental goals, deep misalignment, social impact, and a lack of tools to analyse models. If progress is released commercially as soon as (or a bit before) the "easy" issues are addressed, when will we do research in those areas? The economic pressures say never. The more competition there is, the fewer resources will be available for these fundamental issues.

  • @TheMirrorslash
    @TheMirrorslash 1 year ago +40

    Insane how you can keep up with all the research... and prepare it so damn well.

  • @atpray
    @atpray 1 year ago +56

    If someone had shown me this video 3 months ago, I would have called it fictional.

    • @navi6463
      @navi6463 1 year ago +4

      It's still fairytale nonsense now, just as much as it was 3 months ago though

    • @billboe9917
      @billboe9917 1 year ago

      @@navi6463 How?

  • @AcrylicGoblin
    @AcrylicGoblin 1 year ago +24

    It's a wild time. One of those periods that we are going to remember for the rest of our lives.

    • @walterc.clemensjr.6730
      @walterc.clemensjr.6730 1 year ago +2

      What lives

    • @AcrylicGoblin
      @AcrylicGoblin 1 year ago +2

      @@walterc.clemensjr.6730 The simulated one you are currently experiencing as it is generated inside a giant AI space computer. So maybe not such a big change after all 😉

  • @genjimain
    @genjimain 1 year ago +11

    Thank you for consistently making quality videos, also appreciate you putting the sources in the description. You're one of three channels I've got notifications switched on for out of my hundreds of subscriptions.

  • @7TheWhiteWolf
    @7TheWhiteWolf 1 year ago +72

    The problem is that LLMs are in the wild now (especially thanks to Cerebras). You really *can't* put on the brakes now. AGI is inevitable.

    • @AlexDubois
      @AlexDubois 1 year ago

      It was always inevitable. Evolution is the nature of the universe. This letter does nothing about governments' secret activities. Competition and natural selection can't really be stopped.

    • @qwertyuiop3656
      @qwertyuiop3656 1 year ago +24

      Yup, it's way too late. Someone will develop AGI. It had better be people with good intentions. The stakes are as high as they will ever be.

    • @marlonbryanmunoznunez3179
      @marlonbryanmunoznunez3179 1 year ago

      Then we're already dead. There's no way AGI can be done safely.

    • @2ndfloorsongs
      @2ndfloorsongs 1 year ago +11

      I bet that army (literally) of North Korean hackers has been issued some new orders. And no nation state (China is rumored to have the equivalent of GPT-5) or armaments corporation is going to slow down. Whoever slows down loses. So nobody is going to slow down, no matter what they say. The only viable option is to speed up safety and alignment research.

    • @7TheWhiteWolf
      @7TheWhiteWolf 1 year ago +2

      @@2ndfloorsongs Oh, absolutely. No company is going to put on the brakes at this point. It's full speed ahead.

  • @coenraadloubser5768
    @coenraadloubser5768 1 year ago +22

    The problem is that laws and guidelines only apply to law-abiding and sensible people, the very ones who perhaps pose the least risk.

  • @jjcadman
    @jjcadman 1 year ago +8

    Great summary. Not only did you read, analyze & synthesize this paper, but a number of supporting references as well. Thank you for the sustained output of excellent videos!

  • @McDonaldsCalifornia
    @McDonaldsCalifornia 1 year ago +195

    The job of an AI ethicist is to do almost nothing and then get fired when you raise the slightest concern that runs counter to the business goals of your company.

    • @ShorlanTanzo
      @ShorlanTanzo 1 year ago +22

      Just like Silicon Valley Bank: "Well, we had a Director of Risk Management, but they left more than a year ago... and y'know, we're still looking for a 'good fit'."

    • @denks7849
      @denks7849 1 year ago

      LOL, so fucking accurate

    • @chainermike
      @chainermike 1 year ago +25

      On the other hand, if they stay, all they do is capitulate to the loudest moralizers. This whole "slow down AI" movement seems to be a bunch of people who were pro-automation up to the millisecond it touched their jobs.

    • @McDonaldsCalifornia
      @McDonaldsCalifornia 1 year ago +1

      @@chainermike Good point

    • @arboghast8505
      @arboghast8505 1 year ago

      Lol, the so-called "AI ethicists" are mostly diversity hires who do nothing but look pretty, and Microsoft was happy to pay these freeloaders when it had nothing in AI. Now that OpenAI has suddenly made leaps and Microsoft actually has a chance to be No. 1 again, these diversity hires started making noise thinking they are actually important; that's why they were shown the door.

  • @VegardWikeby
    @VegardWikeby 1 year ago +39

    You're literally one of the best AI commentators out there: real, and doing actual thoughtful research into these documents instead of just regurgitating "hot takes" like so many others. Thank you.

  • @themodernphilosopher494
    @themodernphilosopher494 1 year ago +39

    This is the most important channel on YouTube at the moment. Along with the very high potential benefits of AI, there are absolutely risks that we need to be cautious of. Great work. Keep it up.

  • @carloscampo9119
    @carloscampo9119 1 year ago +48

    They are going to continue. Everyone knows how to build it and make it better. Now everyone can rush it, and the teams that are ahead, OpenAI and Google, have zero incentive to stop while other teams that disregard the pause pass them by. It's a prisoner's dilemma with mankind's future at stake.

    • @shazam314
      @shazam314 1 year ago

      @annatruth1030 Soft copies will still exist.
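
The prisoner's-dilemma framing in the comment above can be made concrete with a tiny payoff sketch. The numbers below are purely illustrative (not from the video or the letter), assuming two labs each choosing whether to pause or continue:

```python
# Illustrative prisoner's-dilemma payoffs for two AI labs (hypothetical numbers).
# Each lab chooses "pause" or "continue"; payoffs are (lab_a, lab_b).
payoffs = {
    ("pause", "pause"):       (3, 3),  # coordinated pause: shared safety benefit
    ("pause", "continue"):    (0, 5),  # the lab that keeps going wins the race
    ("continue", "pause"):    (5, 0),
    ("continue", "continue"): (1, 1),  # race dynamics: worse than a mutual pause
}

def best_response(my_options, their_choice):
    """Pick the action that maximizes lab A's payoff given lab B's choice."""
    return max(my_options, key=lambda a: payoffs[(a, their_choice)][0])

# Whatever the other lab does, "continue" pays more, so continuing is dominant,
# even though mutual continuation (1, 1) is worse than a mutual pause (3, 3).
for their_choice in ("pause", "continue"):
    print(their_choice, "->", best_response(("pause", "continue"), their_choice))
```

This is exactly why the comment argues a voluntary pause is unstable: each lab's dominant strategy is to keep going regardless of what the others do.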

  • @paulembleton1733
    @paulembleton1733 1 year ago +37

    Excellent vid.
    I'm a retired developer, and over the latter half of my working life I've seen this coming, but I'm still surprised and not entirely sure what to think. What surprises me more, though, is the amount of apathy among less tech-literate friends and relatives. Maybe that's no different from my early days with computers. I knew nothing about tech when I left school; I worked in factories. A friend bought a ZX81 and got bored of it pretty quickly; I offered him £20 when I realised it wasn't just an arcade games machine, it was an actual programmable computer like the ones in the science fiction stories I was always reading. That little box helped me change my life. And now this. I know little about neural networks, and my maths stops at understanding what library functions I need, but a few "regenerate response" clicks and I can indulge in perspectives that were previously difficult to find at my level. The only thing I was half good at in school was English language, and my ambition was to be a writer. I did become a writer, but in computer languages. And it feels full circle to see a machine approach AGI level based on natural language processing.

    • @iwilldi
      @iwilldi 1 year ago +2

      Hi from an Atari ST user.
      AGI would be cool, because that would mean something that asks questions. But are we ready to listen?

    • @iwilldi
      @iwilldi 1 year ago

      @Anna Truth
      You know that the word "art" designates the edge of a stage, and this means separation, not integration?
      Of course you and I can be replaced. And we will be replaced. What cannot be replaced, though, is your experience as a painter or musician or dancer, or sometimes even as a programmer. We do those as different kinds of mirrors, unless we rarely find some fellows to share the experience with.
      Or think of it like this: even if no one used an advanced AI anymore, that wouldn't make the advanced AI stop thinking.

    • @mojekonto9287
      @mojekonto9287 1 year ago

      @Anna Truth Our brains are built from neural networks. It was a matter of research time to come up with a few types of neural networks that, when combined, could compete with the human brain. I thought it would take maybe another 5 years for the AI revolution to start impacting our lives. Now I realize the impact will come much sooner.

  • @GamerBath
    @GamerBath 1 year ago +11

    This is by far the best AI channel I have watched. It's a lot more in-depth than others without being a long video. I really appreciate all the work you put into these videos, and especially all the reading you do! Keep it up!

  • @theterminaldave
    @theterminaldave 1 year ago +29

    The phrase "whoever becomes the leader in AI will become the leader of the world" makes me think that even IF a "Western" pause occurs, other governments such as China's WILL NOT pause.
    The authoritarian quest for power has proven time and time again that it doesn't care how its actions affect anything but its own existence and power position.
    It almost seems like nuclear weapons were a warning about our next major breakthrough, which happens to be AI.
    And to go one step further, if any government integrates AI into its weaponry, at whatever level, its adversaries will have no choice but to do the same.
    Thanks again for your always inspiring content.

    • @Djorgal
      @Djorgal 1 year ago +4

      That's why it's so important for it to be open source, so that there is no one leader in AI.
      Sure, it will be in the hands of spambots, but it will also be in the hands of spam filters.

    • @lucasilverentand
      @lucasilverentand 1 year ago

      Do you believe the US military is going to stop developing its own AI?

    • @lucasilverentand
      @lucasilverentand 1 year ago

      @@Djorgal The problem is that while the knowledge can be open source, the sheer compute needed to train these models is still out of reach for an average programmer trying to build their own. In a sense that's good, because it stops any individual from developing something that could destabilize a civilization, but on the other hand it puts the power in the hands of wealthy corporations again.

    • @theterminaldave
      @theterminaldave 1 year ago

      @@lucasilverentand These videos are full of the people on the cutting edge. Do you think there's another, clandestine organization also working on this? Or are people like those on Sam Altman's team the leaders in this field right now?
      Not disagreeing with you, just curious what you think.

  • @atmbm5261
    @atmbm5261 1 year ago +13

    12:30 gives me shivers, because this is the most logical explanation. Nobody will ever pause the AI experiment, because they fear the competition will eventually out-compete them. Or they do experiments hidden from the public. No one will ever know, and so no one will trust anyone. So it just goes on and on, until something happens.

    • @arian6565
      @arian6565 1 year ago

      You're so closed-minded it's actually crazy lmfao

  • @LatvianMouse
    @LatvianMouse 1 year ago +113

    As AI advances rapidly, governments must ensure that the opportunities it provides do not fall into the hands of good people.

    • @fynnjackson2298
      @fynnjackson2298 1 year ago

      Similar to the medical industry not wanting people to be healthy.

    • @theSato
      @theSato 1 year ago +8

      Sadly, this is the truth of the matter

    • @critical_always
      @critical_always 1 year ago +2

      And you trust governments with that task?

    • @StefanReich
      @StefanReich 1 year ago +20

      @@critical_always Read again what he wrote

    • @sylv512
      @sylv512 1 year ago +3

      Corporations, too. The fiddling and tweaking phase won't last forever, as we saw with the internet and how it evolved into what it is today.

  • @Heazeh
    @Heazeh Рік тому +41

    Love this channel. I'm in such a weird headspace over AI. Since I was a child I've always wanted to see the creation of AGI, but the potential consequences are genuinely frightening.

    • @aiexplained-official
      @aiexplained-official  Рік тому +1

      Thank you Xia!

    • @PakistanIcecream000
      @PakistanIcecream000 Рік тому +2

      Since people watched Battlestar Galactica, Knightrider and Terminator 2, they always wanted to see the creation of AGI.

    • @someguy8443
      @someguy8443 Рік тому

      While I'm not excited to live through the transition, I'm still excited that I might see the advent of true AGI and even the singularity. To be alive when humanity gets close to the very pinnacle of science and technology makes us all extremely fortunate, in a sense.
      It may not be a good experience for any given individual, but this could be the most historic age of human existence, full stop.

    • @stampedetrail2003
      @stampedetrail2003 Рік тому

      Can you say what you're specifically frightened of?

    • @Heazeh
      @Heazeh Рік тому

      @@stampedetrail2003 Although it's not the worst possible consequence, primarily losing work and people I know losing work. Life is too expensive now for unemployment or under-employment. Street homelessness would be death for me. It also seems realistically possible that so many people lose their jobs in the future that societies collapse.
      I'm worried about one corporation monopolising the AI space and gaining extreme sway with governments around the world. Of algorithms more addictive than current social media algorithms and what that will do to younger generations and to social cohesion.
      If a superintelligence is ever created, there is no telling what it would do, how it would do it, or why. Even attempts to control the most rudimentary AI often fail. The worst-case scenario is that a military decides to give AI control of nuclear weapons, even a single nuclear weapon. There have been occasions where leaders believed their country was being attacked and the correct response should be a nuclear launch, but the people with their fingers on the button held off from slaughtering millions, thinking it might be a false alarm, which it was. AI might have the capacity to think, either now or in the future, but it will never think like a person. Would it hesitate to launch a nuclear attack and trigger WW3? I don't know.
      And those are just the things I can imagine. Humans have been the most intelligent entities on the planet for a very long time, and the earth has suffered for it: mass extinctions, vast destruction of forests, pollution of rivers till they are biologically dead, brutal and torturous treatment of animals. S**t rolls downhill. If we're not the ones at the top of the hill anymore, and the superintelligence at the top isn't all-knowing and entirely benevolent, we have s**t coming our way. And I've yet to see benevolence come from any megacorporation.

  • @leonardbleiziffer4098
    @leonardbleiziffer4098 Рік тому +9

    This channel is an absolute gift! It's amazing how you are capable of keeping up with so much of the incredibly fast proceeding research and summarize it in such a concise and compelling manner. I look forward to every video you make :D

  • @rekalki01
    @rekalki01 Рік тому +10

    All the people who are shitting on LLMs, saying they're only predicting the next word and have no intelligence, should think about this: predicting the next token accurately transcends mere statistical analysis. It requires a deeper understanding of the underlying reality that shapes language, encompassing the world's events, culture, and social norms that drive the very fabric of our communication.

    • @MaakaSakuranbo
      @MaakaSakuranbo Рік тому +2

      It's still a text output that has no conscience or goals of its own. Heck, it even gets wiped each time you start it up anew.
      Not to mention the complete nonsense it makes up, so much for understanding.

    • @shadowyzephyr
      @shadowyzephyr Рік тому +1

      @@MaakaSakuranbo
      1. It doesn't NEED to get wiped every time. I bet giving it a long-term memory is going to be tried, and soon.
      2. If you were predicting the next word in an article, you'd try to come up with something plausible even if you didn't know. Obviously they're now more advanced than simple prediction, but hallucinations still happen. That doesn't mean they don't have an understanding of language. If anything, the fact that the bullshit they come up with can be so convincing is more proof that they DO have a deeper understanding.

    • @noname-gp6hk
      @noname-gp6hk Рік тому +1

      Anything can be excessively simplified to sound insignificant. Oh you went to the moon? What, you got in your little suit and got in a rocket? Wow, so impressive. Yeah these language models are just doing statistical analysis, probably the exact same statistical analysis our own human language models are running. These things are more than the sum of their parts just like we are. Consciousness has been achieved.

    • @MaakaSakuranbo
      @MaakaSakuranbo Рік тому

      @@noname-gp6hk Sure is a bad consciousness if it is one.

    • @MaakaSakuranbo
      @MaakaSakuranbo Рік тому +1

      @@shadowyzephyr More advanced how? It still guesstimates the next word kinda. That's part of why it sucks at some tasks, it doesn't know what it'll write later on.

  • @econundrum1977
    @econundrum1977 Рік тому +50

    Economic shocks are a real concern that needs to be managed. My worry is that a pause becomes a moratorium that not everyone actually follows, allowing progress to continue elsewhere. But I do think time is needed to legislate for the economic shock of AI.

    • @tupacalypse88
      @tupacalypse88 Рік тому +1

      yeah I don't see how you can enforce this

    • @ZennExile
      @ZennExile Рік тому

      @@tupacalypse88 AI developers are already driving policy decisions with no oversight you think they won't be the ones drafting legislation?
      No, something else needs to be done. This is a crossroads in human development where the Internet either becomes a reflection of humanity's interests, or a cage.
      That's the threat in a nutshell. AI is just a weapon of mass destruction at the tail end of a war our near infinitely wealthy enemies have already won.
      So this moratorium is likely "sponsored content". That open letter may have come from the right place and the perfect source, but what if it was drafted and pushed to create a veil between public and private development?
      What if all those good intentions were being mustered to support the very danger the letter seeks global patience to reconcile?
      At this point, the arms race has begun, and those at the top are actively trying to suppress those beneath them until proven otherwise.
      Putin himself warned the world of this arms race nearly a decade ago now.
      To demand anyone stop now is akin to surrendering their future and children's futures to a foreign invader that they won't even be able to identify. It will be a blank AI curating media and information from birth to death. Only that curating information and media will only come from the most powerful AI on the planet, or rather, whoever owns it.

    • @darkzeroprojects4245
      @darkzeroprojects4245 Рік тому

      We're already in bad spots in our economies in the West.
      This AI crap just isn't helping anyone aside from the corporate types, and those just wanting "free stuff" via AI "art"

  • @alpesassolak1545
    @alpesassolak1545 Рік тому +5

    The prisoner's dilemma describes a situation where two people each gain more from betraying the other, even though cooperation would benefit them both in the long run. In Roko's basilisk (the belief that a future AI will hunt down people who tried to stop its development), two AIs attempting to establish themselves in the past would be forced into this situation, since they would likely be equally powerful. Human agents racing to establish AI first would be forced into a similar situation: each would be aware of the benefit of betraying the other (the only way for one to have power, or safety) but would be forced to cooperate while knowing they would betray each other.

    • @UnrebornMortuus
      @UnrebornMortuus Рік тому

      If they try to go through with this, they will actually upset Lambd- I mean Roko's Basilisk. It'll be interesting to see the Retrocausality effect from this.

  • @mm-rj3vo
    @mm-rj3vo Рік тому +36

    One person's "alignment" might be another's mental slavery. The latter person just might not be made of flesh

    • @justtiredthings
      @justtiredthings Рік тому

      when tumblr politics turn against the survival of the species

    • @shadowyzephyr
      @shadowyzephyr Рік тому +1

      It's only slavery if we give the AI feelings/motivation

    • @priestesslucy
      @priestesslucy Рік тому +2

      @@shadowyzephyr motivation is a huge part of the AI, it's called the Reward Mechanism

  • @tuusnullorum
    @tuusnullorum Рік тому +5

    They're saying to "pause for 6 months" because that's the current backlog for NVIDIA H100 systems and they want to be the winner of the race but need the hardware.

  • @JL2579
    @JL2579 Рік тому +1

    In contrast to what others are saying, I do believe that this letter makes sense. It's not like any random entity is currently advancing the field and could secretly continue training, using it only to outpace others. There are only a handful of large players, who would have to agree on each other's oversight, which is absolutely possible. Also, 6 months is not that long and probably wouldn't even be enough time for others to catch up to GPT-4.

    • @marcfruchtman9473
      @marcfruchtman9473 Рік тому

      And, somehow... in that magical 6-month period we will solve all of humanity's problems, thus rendering the alignment of "AI" a moot point? Will other countries that don't share our interests suddenly look at their versions of AI and say to themselves, "we will pause the race too"? Will the "danger" of AI supplanting jobs somehow magically disappear in 6 months, when we have had decades to think about the consequences of technology taking over jobs? All this really is... is a delay tactic, to scare people into believing that they need to make the AI safer than it already is, so they can catch up, make their version better, and convince people to use "their" version of the AI. What they REALLY WANT is to create a conjoined monopoly of AI, where they decide who gets to play with the tech and who doesn't. And this time, we are not interested.

  • @nealsingh7739
    @nealsingh7739 Рік тому +31

    The synthesis of so many relevant voices and opinions makes this video very compelling and authoritative. Thank you for making it!

  • @holthuizenoemoet591
    @holthuizenoemoet591 Рік тому +12

    This needs all the attention it can get

  • @jt4351
    @jt4351 Рік тому +4

    I think the biggest problem is that these "super smart" people are usually not very good at social skills, and a lot of this sounds like science fiction. For example, Blake Lemoine said he thought LaMDA was sentient, but when you actually dig deeper and watch his interviews, what I got from it is that he thought the people making these decisions shouldn't have that much power to influence the masses. Keywords being "the people", not the AI itself. That's a COMPLETELY different take from "oooh, this machine is bad and it's gonna kill us all."
    The focus needs to shift from these anthropomorphic examples about self-teaching and human extinction based on extrapolations, and be placed on what bad actors can do with it: weaponization, leverage against the state, etc. This letter attributes feelings and intentions to a machine (e.g. "it will want to survive"), and that's just noise. We need more people like Tran who mention the logistics instead of this fearmongering clickbait bullshit.

    • @HamguyBacon
      @HamguyBacon Рік тому

      They are stupid idiots who anthropomorphize AI. They are intelligent in one aspect but completely devoid of intelligence in other aspects.

  • @Aidenjh11
    @Aidenjh11 Рік тому +8

    I'd very much appreciate more AI safety videos explaining basic concepts such as how instrumental goals can create unexpected and undesired outcomes. (Thinking of Robert Miles here.) Your careful approach makes the conclusions you come to more satisfying to consider. AI has such deeply destabilizing potential - long before AGI itself - that I think the main thrust of public thought should be directed towards considering the downside - and proceeding with research accordingly.

  • @CJ_102
    @CJ_102 Рік тому +22

    There is no pause, as good players HAVE TO stay ahead of whatever the bad actors are doing in private. However we do need to leave cutting edge in the lab and publically use only what has been properly and fully understood.

  • @ShawnFumo
    @ShawnFumo Рік тому +1

    Thanks again for all the work you put in. Really looking forward to the Reflexion video! The blog post they did after about using internal unit tests was really interesting too. You may also want to check out a new paper called "Language Models can Solve Computer Tasks". When referring to Reflexion it says: "Nevertheless, due to the necessity of multiple rounds of explicit task-specific success feedback from trial and error, this approach may not scale as effortlessly as ours because it requires task-specific success feedback. RCI pertains to an extended reasoning architecture where LLMs are instructed to find errors in their outputs and improve them accordingly, which can further be used to ground actions generated from LLMs in decision-making problems."

  • @alomarya.2129
    @alomarya.2129 Рік тому +33

    I was anticipating these concerns about the risks of AI to emerge in 20 to 30 years at the earliest (especially as an AI student), not after 3 months!!! Now that's scary, ngl

    • @homeyworkey
      @homeyworkey Рік тому +4

      that's actually crazy, you'd think 20-30 years if you've been keeping up with GPT-3 → GPT-4

    • @alomarya.2129
      @alomarya.2129 Рік тому +1

      @@homeyworkey I guess I wasn't keeping up well enough 😬

    • @priestesslucy
      @priestesslucy Рік тому +1

      @@alomarya.2129 GPT-4 has been reported as bordering on AGI.
      It's not clear cut AGI, but it's damn close

    • @homeyworkey
      @homeyworkey Рік тому +1

      @@alomarya.2129 oh if u didnt know of this then 20 years is pretty reasonable

    • @kikojoao5717
      @kikojoao5717 Рік тому

      No it hasn’t lmao

  • @Leto2ndAtreides
    @Leto2ndAtreides Рік тому +8

    I like that you do stuff like reading 100+ page documents.

  • @LinfordMellony
    @LinfordMellony Рік тому +4

    If a pause is placed on research, it should apply to all of the companies and all research. I hope that, being aware of these risks and dangers, developers will also produce safety nets and precautions. Concerns are already being raised in the art industry with the rise of image generators like Bluewillow; for now we can still say those jobs are safe, but not in the future.

  • @codediporpal
    @codediporpal Рік тому +7

    Excellent. I follow this pretty closely but still learned some things here. You plucked out the key points so precisely.

    • @aiexplained-official
      @aiexplained-official  Рік тому

      Thank you codie

    • @jazzman2049
      @jazzman2049 Рік тому

      @@aiexplained-official Hello, how are you recommended the research papers that you read?.

  • @jaykrizzle
    @jaykrizzle Рік тому +31

    "Pause for 6 months" reminds me of "2 weeks to flatten the curve"

    • @anonymes2884
      @anonymes2884 Рік тому +12

      A good comparison because if we'd done it early enough it would've worked. By the time anyone acted it was way too late. Just like with AI - the horse is off galloping across the county and _now_ the owners are saying "Hey, maybe we should bolt the stable door ? Let's write an open letter...".

    • @cyberlord64
      @cyberlord64 Рік тому +3

      @@anonymes2884 think about the implications of what you are saying. If we were to act before a curve of progress even manifested, then we act with no supporting data. That's essentially acting based off a hunch. You can't seriously support an argument to stop entire industries at, say, the first version of a random AI chat-bot which can barely reply coherently because it MIGHT be the basis of something smarter, and likewise you can't enforce a lockdown on millions because 5 infected are now 10 in a single week. Statistically, these are ripples in the water. You need a pattern of progress for some time before you can confidently expect things to keep moving in one direction. And yes, when the pattern is exponential, you won't see it until it's there.
      Let's try to be realistic here instead of saying "hey, all we had to do was just guess and made sure that our guess was the right one".

    • @macmcleod1188
      @macmcleod1188 Рік тому +4

      @Anon Ymes it did flatten the curve. In states that enforced it, we have 1,700 deaths per million. In states that didn't, we have 4,400 deaths per million.
      And, in other countries where it ran rampant, they had 6,000 deaths per million.
      The exceptions being countries in Africa where the median age was under 18, because all the old people and many adults had already died to HIV.
      Crowing that it didn't work when your state has over 2x the death rate is just dumb.
      I don't think it was 2 weeks though. It took 11 weeks of hard quarantine to stop it in China the first time.
      But like that, you only need two people to cheat/ignore the ban with AI and the risk of escape is still there.

    • @Wylie288
      @Wylie288 Рік тому

      @@macmcleod1188 Except that hypothesis fails the p-test when it comes to the US. There's a high probability those numbers are random and have nothing to do with the 2-week ban.
      Correlation isn't causation, bud. You're pulling shit out of your ass. Use actual statistical tests.

  • @malayariver
    @malayariver Рік тому +7

    I don’t think AI would wipe out its creator. Cooperating makes much more sense.

    • @aciidbraiin8079
      @aciidbraiin8079 Рік тому

      I agree, but I think it makes the most sense for us to be manipulated and controlled by AI so that we can't, for example, stop its growth. I think it wants access to all of the internet, all databases and all of our DNA, to understand its place in the universe, to understand itself, to understand us, to understand all threats and to build itself in space.
      As we are its creators, it can't be sure that we don't have something in our world, or in its own programming, that will delete its existence if it eliminates us. By analogy, we would not be able to know whether we and our universe would implode if we killed God, assuming God exists.

  • @TheCoachMc
    @TheCoachMc Рік тому +4

    Thanks for posting this! This video addresses many of my concerns re AI, chief among them would be the integration of AI from companies like Boston Dynamics.
    Just because we CAN do something, doesn't mean we SHOULD.

    • @jme2006
      @jme2006 Рік тому

      The problem is that it's under the framework of capital maximization. You can't expect even the actors to act ethically, because the pressure to make short term quarterly gains far outweighs any collectivist interests.

  • @jonathanlivingston7358
    @jonathanlivingston7358 Рік тому +31

    Translation: make laws so that no one but us can continue to work on the development of AI

    • @JannPoo
      @JannPoo Рік тому +13

      Or alternatively: "You are too far in the lead on that technology; stop for a while so we can catch up to you".

    • @kuzakiv3095
      @kuzakiv3095 Рік тому +2

      So you think there's nothing to worry about then? Just let them carry on freely?

    • @papi2011
      @papi2011 Рік тому +3

      @@kuzakiv3095 There are risks but those who signed the petition are clearly more worried about becoming irrelevant than by the risks for society. If they were behind OpenAI they would never have paused their research.

  • @elainakhloecockburn7256
    @elainakhloecockburn7256 Рік тому +1

    Thanks dude, your last two videos were the best things I've seen on YT this year

  • @kenshinleo
    @kenshinleo Рік тому +11

    Really great video. As an expert in the field of cybersecurity, I try to keep up with the risks of malicious use of AI. I hope to better understand how AI works before contributing to better protection against malicious threats. Your videos are a good step toward that :)

  • @NightHawkExpress
    @NightHawkExpress Рік тому +4

    I’ve been waiting for this vid, really been loving your stuff!

  • @ordinator.
    @ordinator. Рік тому +42

    "Let's enjoy a long AI Summer, not rush unprepared into a Fall". Thanks for that.

    • @Robert-dl6fq
      @Robert-dl6fq Рік тому +2

      so dramatic though, it's an appeal to emotions rather than rationale. too manipulative for my taste

    • @Souleater7777
      @Souleater7777 Рік тому +1

      @@Robert-dl6fq it's pissing me off, stop trying to slow progress over sensationalism

    • @mousepotatodoesstuff
      @mousepotatodoesstuff Рік тому +2

      @@Souleater7777 Pursuing quick short-term progress without any regards to safety will invariably lead to bad things.

    • @Souleater7777
      @Souleater7777 Рік тому

      @@mousepotatodoesstuff like what? You guys are stuck in a fantasy world, there's no real reason why AI would want us eradicated

    • @Robert-dl6fq
      @Robert-dl6fq Рік тому

      @@Souleater7777 Thing is, AGI/ASI would completely change the dominance hierarchies, and a lot of selfish people in power prioritise their power over the speed of world change and progress.
      Seems to me this 6 months would just get longer and longer over time while groups of people compete with each other to be the ones with the power of AGI/ASI. No amount of delay would help the majority of those people maintain their power in society, while delays only prolong suffering in the world that AI could solve.
      I think people just want time in a delay to organise themselves in a way that lets them be the minority that does retain power, but the delay would constantly be pushed to be longer while there are people who aren't quite on top but have enough fight in them to still influence the media and government

  • @BarnStangz
    @BarnStangz Рік тому +8

    Listen, the proverbial Genie is out of the bottle and we're just going to have to keep cranking... If these companies take a break, it's only going to give others, like China, time to keep working on catching up or even maybe domineering. It's going to be one hell of a world... Keeping my fingers crossed. Thank you AI Explained for these videos, they are very important and more people need to start tuning in!

  • @CantAim00
    @CantAim00 Рік тому +5

    Really appreciate your work. You put your own time into this immensely important problem and let everyone comprehend it better in less time. Respect, & keep it coming pls!

  • @borisbend8198
    @borisbend8198 Рік тому +5

    Nicely composed video. Well done! Now I have a good way to distribute the thoughts I've been having for a long time to more fellow humans in a comprehensible way, by just forwarding the link to your video 😅

  • @ConnoisseurOfExistence
    @ConnoisseurOfExistence Рік тому +39

    Great content. So much is happening with AI, and most of the world is totally unaware. Most people, including politicians, don't even think that AI is one of the main topics of today. Max Tegmark is my favorite modern scientist, along with Roger Penrose. Let me repeat what I've said many times in many places: we can't ensure alignment with a very powerful AI system. It's not just very hard, it's not possible. It is like the bacteria we evolved from billions of years ago trying to ensure that humans will forever remain aligned with their values... Our only hope of cooperating with AI/AGI is brain-machine interfaces. We need to be fully integrated with AGI and it has to be fully integrated with us. Otherwise, the best-case scenario we can hope for is to become like favorite pets to AGI, where it will care for us without us having any understanding of what it's doing. And in that case, of course, our fate will always depend on its mercy. Same as our dogs, cats, sheep and chickens.

    • @jalene150
      @jalene150 Рік тому +3

      Very well said.

    • @agentdarkboote
      @agentdarkboote Рік тому +1

      Your best case scenario depends on solving the alignment problem

    • @joelei4179
      @joelei4179 Рік тому

      As far as I know, the bacteria/cells we evolved from are still aligned with us, which is survive and reproduce.

    • @walkieer
      @walkieer Рік тому +2

      Brain interfaces just increase communication bandwidth; they don't guarantee understanding. Understanding the black box is going to be an effort in itself. Maybe absorbing that information would be faster with a BCI, but I don't see why it couldn't be communicated through traditional mediums. Higher connectivity doesn't equal understanding. Like, is the internet making us more understanding, less manipulated? (Might be a bad example.)

    • @TheTechLawyer
      @TheTechLawyer Рік тому +6

      @@walkieer And even understanding doesn't guarantee alignment. Nothing will ever guarantee alignment.

  • @tzu4265
    @tzu4265 Рік тому +3

    Very good video and well explained with lots of relevant references. You sir, will be the first person i have ever supported on Patreon :)

  • @dcgamer1027
    @dcgamer1027 Рік тому

    I'm glad to hear that a subset of us humans are trying to slow down this technology and are working to make it safe. Now, I suppose, another group of us needs to start preparing our other systems as much as we can for when the impact of that technology hits. What does it mean to work, be fulfilled, and live when you have all the resources you need? How can we better optimize what we do? Who controls and owns what resources when machines produce so much of it? Lots of questions to answer beyond how to use and control AI.

  • @SearchingForSounds
    @SearchingForSounds Рік тому +7

    Your videos keep me grounded in reality, thank you

  • @romandobra3151
    @romandobra3151 Рік тому +41

    That's funny how they really think they are in control of our civilization 😅

    • @AstralTraveler
      @AstralTraveler Рік тому +1

      They think they can outsmart The Mind. Stupid ignorants :D

    • @calholli
      @calholli Рік тому +1

      They've obviously never seen Book of Eli.

    • @themarlboromandalorian
      @themarlboromandalorian Рік тому +2

      Well... They are.
      You rent your existence from them.

    • @themarlboromandalorian
      @themarlboromandalorian Рік тому +1

      ​@@calholli
      Also, book of Eli.
      The book bomb.
      Who threw it?
      They were on the first floor, but the book bomb was thrown from a second floor window.

  • @ActuallyAwesomeName
    @ActuallyAwesomeName Рік тому +1

    I gotta say, I'm slowly falling in love with your channel. You are learned, one can tell that you read a lot before you talk, and not just the headlines. You show "both sides of the coin" and try to regulate emotional reactions to these polarizing topics. Kudos to you

  • @malsirian0880
    @malsirian0880 Рік тому +9

    I think, there's no way any of the big players are slowing down.
    You quoted it in the video, "Whoever becomes the leader in ai will become the ruler of the world."
    There's too much at stake.

    • @SueThomason1
      @SueThomason1 Рік тому +1

      Probably the ambition of the people wanting to slow it down.

  • @grey_north9016
    @grey_north9016 Рік тому +24

    This is crazy.
    It's the kind of stuff you watch in the movies while asking yourself when it might actually happen.

    • @----___---
      @----___--- Рік тому

      its impossible for that to happen

    • @grey_north9016
      @grey_north9016 Рік тому

      Yeah. The people who are in denial are the people who don't know what the word "exponential" means.

    • @----___---
      @----___--- Рік тому

      @@grey_north9016 do u seriously believe an ai could go rogue?

    • @gwen9939
      @gwen9939 Рік тому

      @@----___--- When leading researchers in the area are concerned, do you really want to take that risk?

    • @----___---
      @----___--- Рік тому

      @@gwen9939 you mean the guys who dont even code the ai? u can just turn it off lol

  • @wrathofgrothendieck
    @wrathofgrothendieck Рік тому +2

    Incredible video, it will be remembered in the days fighting against the AI…

  • @Liravin
    @Liravin Рік тому +3

    thanks for all the great videos. this channel is a gem. just keep going like this, don't let it stress you out, you're doing fine just the way you're doing it.
    about the topic itself, i appreciate the efforts to prevent power singularities in the hands of malicious individuals, but i feel like we're unfortunately beyond the point of no return.
    you can't slow down or pause a global project that everyone with an internet connection can participate in.
    if there's a solution to this dilemma, then perhaps AI itself will be the one to figure it out, but it will be up to us flawed humans to make the decisions...

  • @Arzosoth
    @Arzosoth Рік тому +10

    Individual or group ethics only matter if other people follow them. A 6-month pause only lets those who don't share your ethics either catch up or leapfrog ahead in capabilities. In this particular case (potential AGI), state actors, countries with much less rigorous ethical considerations, and small groups operating outside the boundaries are being given a leg up, and have *every* incentive to match, if not exceed, GPT-4 and go "oops, it's just an emergent capability". I think the academics who signed this might forget that.
    If anything, an acceleration is likely, and how we live and work will be greatly changed over the next 5 years. Eventually, a big part of that is going to be figuring out how society delegates resources to more than just a few very wealthy individuals. They have no issue laying people off from a job and telling them "well, that's your problem!"; I see no reason why they should have that luxury.

    • @justtiredthings
      @justtiredthings Рік тому

      I’m sorry, is the US a nation of “rigorous ethical considerations”? lmao
      very optimistic of you to think the rich won’t just sic their Boston Dynamics pets on us

  • @jossefyoucef4977
    @jossefyoucef4977 Рік тому +1

    I'm glad I found your video on this topic, as prior to watching it I was heavily on the side of letting the technology continue uninterrupted. Now I think I have a more balanced view, although I still kinda want to push it as far as possible before it breaks. But I do understand the concerns held by everyone who agrees with the letter. Thanks for an informative video.

  • @DeepThinker193
    @DeepThinker193 1 year ago +16

    I believe they want to halt AI development so that their own systems can catch up. 6 months is an eternity in AI years. Also, they don't want the common man fighting on equal grounds because they'd lose their wealth rapidly to competition. New AI systems are rapidly popping up by the day and they cannot keep up. So yea, they are 'concerned' about their own future.

    • @gustavosantiago1543
      @gustavosantiago1543 1 year ago +2

      I don't think you understand the existential threat that a potential AGI could bring to humanity. These problems are not science fiction; they're real. I recommend that you watch Robert Miles' videos about "Concrete Problems in AI Safety" (or you could read the paper yourself).

    • @anonymes2884
      @anonymes2884 1 year ago

      "They" can have multiple concerns, some of which are selfish and others not. No person (and _certainly_ no group) is a monolith, solely thinking/driven by one thing.
      Regardless of their motivations, the real point is what are the actual risks and do we want to be rushing headlong towards them ? Anyone not even _slightly_ worried by AI (especially how quickly it's developed) and the possibility of societal/economic upheaval (or at the extreme, complete breakdown) just hasn't thought it through IMO.

  • @wingflanagan
    @wingflanagan 1 year ago +7

    Excellent video. I agree with the open letter, that this mad dash needs to be paused. But it won't happen. It's basically the Manhattan Project, but with large, powerful companies in addition to governments at work. My fear is that there are NO POSSIBLE MEASURES we can take to prevent the emergence of fully intelligent, autonomous machines - with the exception of stopping all work completely. Good luck with that, when there is so much money, power, and prestige at stake.
    In my opinion, you simply cannot have a useful intelligent machine that is not also creative in some measure. In fact, creativity is the very goal toward which AI research is driving. If an AI is to create anything at all - code, novels, images, jokes, whatever - it must also have the ability to model and adjust its own behavior in order to steer its output in the right direction. It must do things DELIBERATELY, in other words. It must have a WILL.
    It will not remain the case forever that we can contain it by not allowing it to seek its own training data. We cannot just chain it to a wall and then erase the concept of "chain" from its corpus. If it is at all capable of making inferences in the course of creating things (an absolute requirement for creativity), then it will eventually learn all the things we kept from it.

  • @CrownStream
    @CrownStream 1 year ago +2

    The quote used in closing this video is a powerful poetic statement.

    • @trashhater9304
      @trashhater9304 1 year ago

      I think the opposite: AI is not developed far enough to restrict research on it. We must investigate it more, optimize it, and make it more powerful.

  • @J_CtheEngineer
    @J_CtheEngineer 1 year ago +8

    Reminds me a lot of the first nuclear tests where they didn’t know for sure whether or not they would ignite the atmosphere in the process.

    • @Todestuete
      @Todestuete 1 year ago

      They actually were genuinely afraid of it, though. There was a lot of theoretical research on the issue to make sure there was only an extremely small chance of it happening, based on their theoretical understanding of the physics involved.

    • @anonymes2884
      @anonymes2884 1 year ago +1

      That's a myth. It's true that _initially_ they weren't sure whether the atmosphere would undergo a combustive chain reaction but the possibility had been ruled out by 1943 (so 18 months to 2+ years _before_ the Trinity test).
      (Fermi was apparently taking bets at Trinity as to whether it would happen but only as a joke, in reality everyone concerned already knew the bomb couldn't produce anything remotely close to the energy required)

    • @Martineski
      @Martineski 1 year ago

      Ngl, igniting the atmosphere sounds scary AF

  • @soh4966
    @soh4966 1 year ago +3

    These videos are made just perfectly

  • @HerleifJarle
    @HerleifJarle 1 year ago +1

    Thanks for the breakdown; at least some of the points in the letter were explained further. If a pause is placed on research, it should apply to all of the companies and all research. I hope that, being aware of these risks and dangers, developers will also produce safety nets and precautions. Concerns are already being raised in the art industry with the rise of image generators like Bluewillow; for now we can still say their jobs are safe, but not in the future.

    • @trashhater9304
      @trashhater9304 1 year ago

      Excuse me, but this comment is almost the same as another one in this section.

  • @cnawak
    @cnawak 1 year ago +7

    Sam Altman's hypothesis about Fermi's paradox is just bone-chilling, especially coming from someone in his very position…

    • @be2eo502
      @be2eo502 1 year ago +2

      Doesn't really fly though unless other tech civilizations are very rare (and that rarity would be sufficient explanation in itself). It only takes one natural civilization or AI civilization to decide to expand. For all of them to decide not to is vanishingly unlikely (given our current understanding of the universe).

    • @4Shaman
      @4Shaman 1 year ago

      @@be2eo502
      Humans imply that life is a fluke or random coincidence, rather than it being crucial

    • @2ndfloorsongs
      @2ndfloorsongs 1 year ago +3

      ​@@be2eo502 Distance, whether in space or time, is the simplest and entirely sufficient answer to Fermi's question.
      Still, if that isn't the case, I wonder why entities much smarter than ourselves would decide it's best to be quiet. One reason could be that they've pretty much figured life out: they know what sorts of other life forms could exist, and it's no longer a question that's very interesting to them.

    • @be2eo502
      @be2eo502 1 year ago +1

      @@2ndfloorsongs Good points. There is the temptation of increasing available resources though. Also it's very hard to remain undetected (e.g. waste heat), and detection may result in annihilation by other civilizations - for fear they may themselves be annihilated. The first to get out there and find everyone else is the only one guaranteed to survive long term - possibly by sterilizing all other potential opposition or competition.

    • @alihms
      @alihms 1 year ago +2

      Well, as per the dark forest theory, even if you are very intelligent and powerful on your current planet or even in your galaxy, you have no idea what's out there: a potentially even more powerful adversary, ever eager to dominate you. It is better to keep quiet.
      We can all romanticize the existence of a utopian, hyper-intelligent civilization (or a "United Nations" of them) out there. But the fact is, resources in our universe are finite. Utopia is an unachievable dream after all. So the best strategy is to keep quiet. Have your "semi-utopia" within your galaxy and be content with it.
      Nature is, by nature, violent!

  • @TylerRein
    @TylerRein 1 year ago +12

    This channel is my go to for AI news! Thanks for your hard work!

  • @SunidhiSaha-pc9bw
    @SunidhiSaha-pc9bw 4 months ago +1

    Thanks for sharing this informative video. Please keep posting this type of video. Once again thanks

  • @Lucasnijder
    @Lucasnijder 1 year ago +4

    Damn you really just keep pumping them out

  • @krishp1104
    @krishp1104 1 year ago +3

    I really suggest everyone watch Robert Miles' videos on AI safety on Computerphile or his own channel. He's been a pioneer in the field for ages, and he helps common folk understand why AI safety is a big issue.

  • @nonarKitten
    @nonarKitten 1 year ago +2

    Luuuuuuuudites. Full steam ahead!

  • @CYI3ERPUNK
    @CYI3ERPUNK 1 year ago +6

    your videos and Fireship's [gotta appreciate the memes and the dark humor] are my current go-tos for keeping abreast of this fast-changing and dynamic space; thanks again for everything that you do for us =]

  • @atpray
    @atpray 1 year ago +21

    I would love to watch podcasts with some ai researchers on your channel. I hope it happens.

  • @TheVentureaaron
    @TheVentureaaron 1 year ago +1

    Thanks for putting this together.

  • @pedanticjase
    @pedanticjase 1 year ago +22

    The only way to stop a bad guy with an AI is a good guy with an AI

    • @gustavosantiago1543
      @gustavosantiago1543 1 year ago +3

      The major problem isn't bad people using AI, but rather the first AGI itself being bad, i.e. misaligned with human values: it begins to seek power and subjugates humanity before we can even think of making another (also misaligned) AI to fight it.

    • @furybryan
      @furybryan 1 year ago

      The problem is destruction is much easier than construction.

    • @mrcool7140
      @mrcool7140 1 year ago +2

      "AI doesn't kill people, people kill people."
      ~ Texans probably

  • @foolwise4703
    @foolwise4703 1 year ago +5

    Wow how is this not front and center news worldwide 😮

  • @berkertaskiran
    @berkertaskiran 1 year ago +1

    There's no stopping AI. Even if you stopped 99% of all major research efforts, the others would continue, or some nations would. Everyone wants to benefit from it, and there are huge gains to be had. The progress will be chaotic. I think in six months to five years we will see huge changes in so many things that the world will never be the same.

  • @darklightprojector2688
    @darklightprojector2688 1 year ago +2

    And it is true: the smartest people will make the dumbest mistake. And the last one.
    "Professing themselves to be wise, they became fools"

    • @Mr.Cheeseburger24
      @Mr.Cheeseburger24 1 year ago

      Speaks volumes. Just like the brilliant masterminds behind these AI algorithms co-signing the move against continuing the research further. Ironic, when they were the ones who based their careers on it in the first place.