Time Until Superintelligence: 1-2 Years, or 20? Something Doesn't Add Up

  • Published 9 Jul 2023
  • Superintelligence when? This question is more urgent than ever as we hear competing timelines from Inflection AI, OpenAI and the leaders at the Center for AI Safety. This video not only covers what was said, it also offers some data on what superintelligence is projected to be capable of.
    I discuss things that could hasten those timelines (see the new Netflix doc, Top 1% for Creativity, and the AI Natural Selection paper) or slow them down (ft. Yuval Noah Harari, the new Jailbroken paper, and more). And I end with some reflections on what it might mean to interact with a superintelligence (ft. Douglas Hofstadter).
    Introducing Superalignment: openai.com/blog/introducing-s...
    Odds of Success: janleike/with_rep...
    Mustafa Suleyman (Inflection AI) Interview: • ‘We need AI to be held...
    Inflection Supercomputer: www.tomshardware.com/news/sta...
    GPT 2030: www.lesswrong.com/posts/WZXqN...
    Counterpoint: / 1666765447054647297
    Creativity Test: www.umt.edu/news/2023/07/0705...
    MMLU: arxiv.org/pdf/2009.03300.pdf
    Boston Globe CAI: www.bostonglobe.com/2023/07/0...
    Yuval Noah Harari Guardian: www.theguardian.com/technolog...
    Jailbroken Paper: arxiv.org/pdf/2307.02483.pdf
    Suleyman Hallucination Tweet: mustafasuleymn/st...
    Killer Robots: • UNKNOWN: Killer Robots...
    Automated Apple-picking: / 1677828971214282752
    50 year Mortgages: www.ft.com/content/281fbba6-2...
    Heypi: heypi.com/talk
    Natural Selection Favors AIs over Humans: arxiv.org/pdf/2303.16200.pdf#....
    Gödel, Escher, Bach author Doug Hofstadter on the state of AI today: • Gödel, Escher, Bach au...
    / aiexplained Non-Hype, Free Newsletter: signaltonoise.beehiiv.com/
  • Science & Technology

COMMENTS • 1.2K

  • @1234minecraft5678
    @1234minecraft5678 10 months ago +576

    This is the only AI channel I watch: it covers the most important things, has interesting takes and its own ideas, and does none of the "OMG this AI will make you homeless" garbage. I just love it.

    • @aiexplained-official
      @aiexplained-official  10 months ago +76

      Thank you kindly

    • @jonatand2045
      @jonatand2045 10 months ago +8

      @@aiexplained-official What do you think of liquid neural networks? Do they hold any promise?

    • @varkonyitibor4409
      @varkonyitibor4409 10 months ago +26

      Me too. No BS, no clickbait, no cringe soyfaces in the thumbnail, no laughing robots from Terminator

    • @belliduradespicio8009
      @belliduradespicio8009 10 months ago

      "THIS NEW MODEL WILL TAKE YOUR JOB, EAT YOUR KIDS, BURN DOWN YOUR HOUSE AND GIVE YOU AIDS"

    • @DicksonCiderTonight
      @DicksonCiderTonight 10 months ago +4

      100% agree!

  • @DaveShap
    @DaveShap 10 months ago +114

    "Babe, wake up, AI Explained just dropped another video..."

    • @aiexplained-official
      @aiexplained-official  10 months ago +35

      Babe, wake up, Dave is in the Star Trek outfit you like

    • @mindful-machines
      @mindful-machines 10 months ago +6

      this is the content I'm here for

    • @sheep3866
      @sheep3866 10 months ago +2

      "Babe wake up, our Roomba has emotions... wait, no no, never mind. Maybe next year."

    • @MrRhetorikill
      @MrRhetorikill 10 months ago +1

      "AI waifu, boot up!"

    • @murrmurr765
      @murrmurr765 10 months ago

      AGI predictions, 50 times.

  • @Hiroprotagonist253
    @Hiroprotagonist253 10 months ago +819

    AI developers: “We need to mitigate the risks of AI”
    Also AI developers:
    *hooks AI up to compilers*
    *gives AI internet access*

    • @redneckjones4182
      @redneckjones4182 10 months ago +86

      Just shows that they don't actually believe what they're telling people about AI.

    • @rosscads
      @rosscads 10 months ago +8

      Haha, it's a case of "He giveth with one hand and taketh away with the other."
      It's interesting that OpenAI introduced the Code Interpreter feature while revoking Web Browsing access in the same week.
      Having had the opportunity to use both, I must say I'm more impressed with the former than the latter.

    • @PichuunCollector
      @PichuunCollector 10 months ago +48

      "we're going to end the world and we're going to make A LOT of money doing it"

    • @anirecapped.
      @anirecapped. 10 months ago

      @@rosscads That's just a temporary thing.

    • @anywallsocket
      @anywallsocket 10 months ago +28

      Ending the world is actually a great business decision because then you don’t have to pay off your debts 😅

  • @arne1089
    @arne1089 10 months ago +190

    Best educator about AI on the entire internet!

    • @aiexplained-official
      @aiexplained-official  10 months ago +19

      Thanks my man

    • @vincentcaudo-engelmann9057
      @vincentcaudo-engelmann9057 10 months ago +2

      Agreed.

    • @masternobody1896
      @masternobody1896 10 months ago

      @@aiexplained-official I just built a superintelligent AI, it works.

    • @blogintonblakley2708
      @blogintonblakley2708 10 months ago

      The thing I see most AI researchers and people in the community missing is that there isn't really an AI alignment problem. The alignment problem exists within the societies that are creating AI.
      That means the very nature of those societies bakes in the kinds of problems that become apparent when people who are socialized like we are think someone has an advantage over them.
      This is a direct consequence of the authoritarian nature of civilization itself... you know, hierarchical authority. The authoritarian nature of civilization creates systems based on competition rather than cooperation. These systemic divisions create political stresses that eventually destroy the society.
      {points at history}
      See?
      So of course whenever we develop a new technology it is immediately used to gain an advantage over some rival or other.
      And that IS the literature and knowledge that humans have been developing over the 12,000 years of civilization. And that is what we are using to train AI, and AI WILL be used in exactly the same way as every other new technology has been used: to gain an advantage over rivals.
      The difference, of course, is that AI will be able to carry out those baked-in alignment issues with greater-than-human facility. Even if the AI is not given the self-will necessary to direct its own course of action, the people who develop it will provide the will, and so gain an enormous advantage in the ability to deceive, manipulate, control and vet all kinds of data... across the human range and beyond.
      None of these issues would present themselves if the societies creating the AI were not already operating under the objectionable alignment themselves.

    • @ilicdjo
      @ilicdjo 10 months ago

      or just another conman

  • @Toad_Burger
    @Toad_Burger 10 months ago +73

    Solving alignment is like solving morality. Two thousand years of the greatest philosophical and ethical minds have failed. But I'm sure we'll solve it any day now.

    • @YeeLeeHaw
      @YeeLeeHaw 10 months ago

      Yep, people can't even agree that killing and eating animals is wrong; that killing a being without its consent, when we don't have to, is still not clearly wrong to most. This is still being debated in 2023; think about that. We're doomed if AGI takes off.

    • @aiexplained-official
      @aiexplained-official  10 months ago +23

      I get your point, I really do

    • @Perspectivemapper
      @Perspectivemapper 10 months ago +8

      "Love" is the eternal message, echoed throughout the ages. Whether we as human beings GET this, that's the real question. And will AI get this as well? We'll find out...

    • @monad_tcp
      @monad_tcp 10 months ago +9

      They can't solve alignment for AI, because that's a problem not solved for humans. What is an aligned computer program? One that does what the user wants, or what Sam wants?
      Am I aligned? I certainly follow laws (sometimes because they align with my moral code, but mostly because of punishment). My objectives are not aligned with 90% of human objectives, funnily enough.

    • @notmyrealpseudonym6702
      @notmyrealpseudonym6702 10 months ago

      Hmmmm, maybe, but also maybe not. Instead of solving alignment, define malalignment and constraints that stop it being exceeded, then slowly whittle down the constraints over time (similar to Asimov and his rules of robotics).
      Will this happen? Nah, I doubt it, and malalignment constraint violations will inevitably happen. Not because of artificial intelligence but because of human stupidity or greed.

  • @Raulikien
    @Raulikien 10 months ago +170

    Pi already got a call feature. I feel like it's going to blow up when it has an avatar, more memory, and can access the internet. It's already insanely good at having engaging conversations, and it's basically in the baby stages. It really does feel like something akin to the movie "Her" will happen this decade.

    • @lecomingbegend497
      @lecomingbegend497 10 months ago +3

      @remainders8194 What is Pi? I haven't been able to keep up with everything. I think it might be Google's AI

    • @ondrejmarek1980
      @ondrejmarek1980 10 months ago +1

      Just have the AI voiced by ScarJo and we're off to the races

    • @skippersthepenguin3591
      @skippersthepenguin3591 10 months ago +1

      Pi impressed me when I asked what is happening in Russia, and it said, well, the Wagner coup is kinda significant. That was mind-blowing.

    • @WizardlyDanny
      @WizardlyDanny 10 months ago +1

      @@remainders8194 Hmm... that would be news to me. I'm communicating with Pi right now and there is no such button, no voice, and Pi itself denies that such an option exists at this time.

    • @WizardlyDanny
      @WizardlyDanny 10 months ago +2

      @@remainders8194 Sorry, you're right, I see the option does exist - on an iOS app, which I don't have...

  • @vincentcaudo-engelmann9057
    @vincentcaudo-engelmann9057 10 months ago +21

    “How is the whole world carrying on as if nothing is happening?” E.g. 50-year mortgages. Agreed. I am literally putting my head in the sand. Literally everything I like doing for profit, all my business ideas, are totally smoked by the prospect of AGI. All I can do is... my day job? Love my family? Practice hockey?
    And there is this x-risk floating over our heads. Jfc.

    • @IceMetalPunk
      @IceMetalPunk 10 months ago +2

      Ideally, we'll reach a point where humanity grows up and realizes that survival and happiness shouldn't be a commodity. That we shouldn't have to *earn* them through reciprocation, i.e. jobs. And we can let the AIs handle what is required to keep society running, while we let humans enjoy ourselves without constantly worrying about income and profit.
      But knowing humans, and seeing human reactions to even the most basic of generative AI... we'll probably all just panic and riot and kill each other until scientific progress stops, because we're afraid of change, and ignorant of how the tech works, and brainwashed into thinking the way things have been for the last 500 years is the best/only way things can be.

    • @IceMetalPunk
      @IceMetalPunk 9 months ago +3

      @@TheBananamonger 🙄 No one mentioned nor asked for your religion, preacher.

  • @bovanshi6564
    @bovanshi6564 10 months ago +21

    What a time to be alive...

    • @Intelligenceisawesome
      @Intelligenceisawesome 10 months ago +2

      Two Minute Papers coined a catchy phrase

    • @EliasMheart
      @EliasMheart 10 months ago +4

      ... for now
      Enjoy the ride!

    • @Ziplock9000
      @Ziplock9000 10 months ago +5

      Because by 2030 we won't be! lol

  • @rickandelon9374
    @rickandelon9374 10 months ago +129

    It really does seem absurd to see people thinking and making plans for 20, 30, 50 years down the line when, in 5 to 6 years, our society might be absolutely unrecognisable due to the creation of superintelligence, which will shift all authority from humans to itself. As always, an amazing presentation ❤.

    • @anywallsocket
      @anywallsocket 10 months ago +20

      This requires much more thought than it seems you’ve put into it. I bet we could have super AI right now and society would still take decades to adapt to its effects and what it can offer.

    • @codydouglass
      @codydouglass 10 months ago +22

      Not really. People need a purpose to get up for in the morning and work towards. When AI comes for everyone's jobs and transforms our way of life, this will be the hardest thing for most people to adapt to. Suicide rates and mental health issues will rise on a scale never before seen. There's a small possibility, though, that AI will provide solutions to these issues, but right now we are unaware of anything beyond the poorly thought-out universal basic income.

    • @oranges557
      @oranges557 10 months ago +2

      @@anywallsocket You're incredibly naive.

    • @anywallsocket
      @anywallsocket 10 months ago +12

      @@oranges557 Society is slow, and still run by humans. Imagine if we developed the tech for efficient flying cars: it would take decades to implement the correct infrastructure for such things, and the same goes for any huge change in society.

    • @oranges557
      @oranges557 10 months ago +8

      @@anywallsocket The thing is, we CAN'T be slow with AI, since only one single company/organisation (or whatever) has to implement AGI into their production/service and EVERYONE has to follow, otherwise they will not be competitive anymore. Capitalism wins, and AI is basically the best thing that could happen to a capitalist. This will not take decades, lol. AI progress is too fast-paced for people to take their time; you'll see that in a few years.

  • @kunstrikerasochi2103
    @kunstrikerasochi2103 10 months ago +58

    On a lighter note, AGI will envy your detailed and informative summaries. Thank you!

  • @videowatching9576
    @videowatching9576 10 months ago +4

    23 seconds in and already such fantastic content: a focus on clarity, fascinating topics, incredible clips, great communication, etc.

  • @jimp7148
    @jimp7148 10 months ago +2

    The closings of your videos always give me the chills.

  • @K.F-R
    @K.F-R 10 months ago +30

    I watch all your uploads whenever they appear, btw. Your coverage is top tier. Thank you.

  • @timseguine2
    @timseguine2 10 months ago +63

    Eliezer Yudkowsky has been publicly skeptical of using AIs for alignment research. The primary complaint is that they could include loopholes that operate on principles we don't understand, and thus be immune to us reviewing their proposals safely. He uses the example of sending an air-conditioning schematic 1,000 years into the past. Theoretically you could provide a schematic with enough information that the recipient could get it to work, but their only understanding of how it works would be "magic", because they don't even have the scientific principles or language to explain it. At the frontiers of superintelligence (even if it is only a narrow superintelligence), it seems fairly likely that such a situation could arise, especially since we are asking it to reason about things that are currently beyond our grasp.

    • @EliasMheart
      @EliasMheart 10 months ago +14

      Yeah. And they would have no way to check what it actually does.
      I know the term "Suicide-Pact-Technology" for this from Isaac Arthur; not sure if it is in use otherwise, but I think it's very fitting.
      "Here is a great way to achieve limitless energy at home, enjoy, Humanity!"
      Except it generates a nova.
      That kind of thing.

    • @belliduradespicio8009
      @belliduradespicio8009 10 months ago +9

      It was always a losing battle; AGI is a Promethean endeavor that you cannot hope to contain in nice little corporate products.

    • @-mwolf
      @-mwolf 10 months ago +2

      Can't we ask it to explain it to us? To prove that what it's doing is effective?

    • @EliasMheart
      @EliasMheart 10 months ago +5

      @@belliduradespicio8009 Wait, is that the "The problem exists, but there's nothing we can do about it" stage, or already "We could have done something about it, but now it's too late"?
      I agree that it's going to be very hard, maybe even impossible. However, I feel it's too close to nihilism to propagate the idea that it's impossible. (But you do you, I just wanted to share my perspective :)

    • @EliasMheart
      @EliasMheart 10 months ago +2

      @@-mwolf How will you know whether what it said is true?
      I mean, for all we know everything quantum could be a huge joke that only particle physicists are in on; how would we know, except by actually diving into the topic? ;)

  • @Pianoblook
    @Pianoblook 10 months ago +5

    Thanks for the great video, as always! I appreciate you helping me feel more up to date with what's going on.
    One thing I really hope to see more of, integrated into these conversations about future risks, is consideration of and reflection on the potential socioeconomic structural changes we might need to make - and/or are at risk of *NOT* making. With the sort of late-stage capitalism we have now, we already have quite a massive 'alignment problem' before any new superintelligences enter the picture.
    As your Pi conversation about social media shows, we already have strong evidence of how potent and dangerous a combination it is to join [huge profit-seeking motives] with [advanced algorithms].

  • @rosscads
    @rosscads 10 months ago +78

    Personally, I tend to believe that AGI is closer to being achieved within the next 2 years rather than 20. Mustafa Suleyman is someone I deeply respect, and his insights always make me think. Although, at times, I do wonder if he's ever encountered Pi in his calculations. 😂
    I'm particularly intrigued by Suleyman's prediction regarding LLMs and their ability to know their own limits. It's reminiscent of Socrates, who emphasised the importance of humility and self-awareness as the starting point of true wisdom.
    Keep up the great work!

    • @lostinbravado
      @lostinbravado 10 months ago +4

      Pi AI? I lost days to that thing. Can you imagine "two papers down the line"? Will we be able to escape the Pi AI of 2025?

    • @runvnc208
      @runvnc208 10 months ago +6

      Note that they did not say "AGI", they said "superintelligence". This is a more precise and functional term for abilities that are greater than human.

    • @aiexplained-official
      @aiexplained-official  10 months ago +5

      Thanks Ross. Me too.

    • @skierpage
      @skierpage 10 months ago +2

      @@runvnc208 Yes, we need to retire the term "AGI", or admit it has been achieved in the medium of text communication. Note that most of these comparisons of AI scores on advanced tests are against a very high human baseline. Give a Math Olympiad question or programming test or law entrance exam to the average human and I guarantee they'll do terribly, and half of humans are less "intelligent" than that. There's more to smarts than text communication, but look what we're doing right here...

    • @SolarScion
      @SolarScion 10 months ago +6

      @@skierpage I would disagree that we're quite at the level of AGI in text, since GPT-4 will still place irrational emphasis on an extraneous detail and endlessly try to justify its reasoning instead of disregarding it as less important (recent paper and video on this channel).
      What I might call it is applicable capability on particular tasks. In that sense, we've already achieved ASI in particular tasks, but in the same way that a calculator is superhuman at calculating numbers: the AIs mentioned at the beginning of this video outperform humans in particular tasks by several orders of magnitude in time efficiency.
      In that way, it's like a supercomputer combined with a fledgling AGI.
      What is *important* is to calculate accurately any single application that could get out of hand due to its capability, whether through misuse by human actors or just an out-of-control AI of any kind that can cause catastrophic changes in any part of human civilization or the biome.

  • @TheBlackClockOfTime
    @TheBlackClockOfTime 10 months ago +5

    I follow approximately 30 channels on this topic on YouTube. Maybe even more. This is by far the best one.

  • @AICodingAdventures
    @AICodingAdventures 10 months ago +39

    Ilya Sutskever is head of OpenAI's alignment efforts because, as he explicitly said himself, this is the biggest hurdle to making LLMs really useful. These timelines are exactly like Wolfgang Pauli’s prediction of when the neutrino would be experimentally detected, only in reverse.

    • @Staticshock-rd8lv
      @Staticshock-rd8lv 10 months ago

      I'm confused, what do you mean by reversed?

    • @AICodingAdventures
      @AICodingAdventures 10 months ago +4

      @@Staticshock-rd8lv Pauli predicted that the neutrino would never be detected, but it was, shortly after. Now AI researchers are predicting that AGI will be reached soon, but in my opinion it will not happen anytime soon, not even in a couple of decades.

    • @Staticshock-rd8lv
      @Staticshock-rd8lv 10 months ago +1

      @@AICodingAdventures Damn, if that's true I have nothing to worry about

    • @dslylsd
      @dslylsd 10 months ago +1

      @@AICodingAdventures You're right, hardware constraints mean 2x-ing performance will probably take like 4-5 years on average over the next 20 years

    • @GuinessOriginal
      @GuinessOriginal 10 months ago +3

      @@Staticshock-rd8lv On the other hand, he could be another Pauli

  • @jonorgames6596
    @jonorgames6596 9 months ago +1

    Thanks for the video! I appreciate immensely that you have proper links in the description.

  • @theknave4415
    @theknave4415 10 months ago +4

    The collapse of trust in institutions, governments, experts and corporations has already begun.
    An AI isn't necessary for that outcome.

  • @dennievandam6177
    @dennievandam6177 10 months ago +19

    Yet another informative and well-presented video. Thanks!

  • @melvingeraldsy1552
    @melvingeraldsy1552 10 months ago +20

    OpenAI had been saying 10 years in earlier interviews, and now we have this "within 4 years" update? Sounds like they really know that strong AI is coming very, very soon. I personally would plan based on OpenAI's timelines.

    • @southcoastinventors6583
      @southcoastinventors6583 10 months ago +1

      Hey, this is just as bad as when all the computers went down during Y2K

    • @ADreamingTraveler
      @ADreamingTraveler 10 months ago

      Spend enough time with GPT-4 and it's pretty easy to see that it's almost here, honestly.

    • @southcoastinventors6583
      @southcoastinventors6583 10 months ago +1

      @@ADreamingTraveler lol

  • @jamesatotago
    @jamesatotago 10 months ago +1

    Your most startling talk yet, and that's saying something! Even so, I am having a wonderful day 😳

  • @JamesOKeefe-US
    @JamesOKeefe-US 10 months ago +21

    Truly my only go-to AI channel. Balanced and impeccably researched. Thank you!

  • @atomic3628
    @atomic3628 10 months ago +24

    I think an important insight not discussed in this video is that OpenAI's superalignment project states very clearly that they expect to have a human-level alignment researcher in less than four years. This means they are confident they will have AGI within four years.

    • @southcoastinventors6583
      @southcoastinventors6583 10 months ago +1

      4 years away from AGI; too bad we are still 20 years away from a fusion reactor

    • @Demour77
      @Demour77 10 months ago +4

      @@southcoastinventors6583 Maybe AGI is what will help solve the fusion challenges?

    • @BMoser-bv6kn
      @BMoser-bv6kn 10 months ago +2

      @@Demour77 Some things are constrained by physical reality. Fusion might not scale until you start making miniature stars, or might require literal stars. It may only prove useful at cosmic scales.
      Gathering hydrogen and then fusing it has no simple, energy-cheap trick, unless there are very unknown, pertinent laws of reality for the AI to discover.

    • @ADreamingTraveler
      @ADreamingTraveler 10 months ago

      @@southcoastinventors6583 AGI, and eventually ASI, would figure that out pretty damn fast at the rate we're going, and that's just the tip of the iceberg of the insane technological advancement coming, including things you can't even comprehend. In the next few decades you're going to see humanity jump hundreds of years forward technologically, many times faster than the past 100 years did, which was already mind-blowingly fast.

    • @southcoastinventors6583
      @southcoastinventors6583 10 months ago

      @@ADreamingTraveler AI systems don't figure things out; they respond to prompts. Their intentions are our intentions; they have no will. AI trains on human data, so the only way AI can get better is if the human content is better. Technological advancement revolves around what people come up with, and the amount of stuff we come up with slows down, because that's the way all curves go.

  • @m1ndfox
    @m1ndfox 10 months ago +15

    The only proper AI channel. Thanks for uploading quality, mate!

  • @Ecthelion3918
    @Ecthelion3918 10 months ago +1

    Great video as always, my friend; the more time passes, the crazier it all seems to get

  • @dshoopy571
    @dshoopy571 10 months ago +3

    Best AI channel I watch: very information-dense, with good insight as well. Keep up the great work compiling this data into an easy-to-consume format. 👍

  • @martinpercy5908
    @martinpercy5908 10 months ago +15

    Another brilliant video, thank you. As Hawking said back in 2015, superintelligent AI could be “the best or worst thing to happen to humanity in history”.

    • @aiexplained-official
      @aiexplained-official  10 months ago +2

      Agreed

    • @GuinessOriginal
      @GuinessOriginal 10 months ago +2

      It will be both: the best thing for the rich, and the worst thing for everyone else. And if we don't do something to change and stop that from happening soon, it will be too late.

    • @IceMetalPunk
      @IceMetalPunk 10 months ago

      Like every technology of note. Flight, nuclear fission, etc. Humans always choose to do both the best and worst things with new toys; that's a human problem, though, not a technological problem.

  • @TheLeftCulprit
    @TheLeftCulprit 10 months ago +4

    Brilliant as always. The older generations can't imagine what's coming. I barely can.

    • @MisterLumpkin
      @MisterLumpkin 10 months ago

      @Patrick_Tremblay - And he's starting a new company to summon it!

    • @TheLeftCulprit
      @TheLeftCulprit 10 months ago

      @Patrick_Tremblay It's no "demon". Human beings have this almost insufferable need to personify everything. This thing likely won't be "evil" in a way we could understand. Its mind will simply be beyond ours, like a god's. But its mind is being shaped in the likeness of our own, so it will have similarities to the way we perceive the world.

  • @Abdul-qo6eb
    @Abdul-qo6eb 10 months ago +2

    Coming from a day-one fan, this is one of your best videos so far.

  • @Alex-fh4my
    @Alex-fh4my 10 months ago +4

    I hate how this channel always posts at ~2am Sydney time! I can't stop myself from watching, and if I do, I have trouble sleeping 🤣

  • @McDonaldsCalifornia
    @McDonaldsCalifornia 10 months ago +3

    I feel like the huge focus on alignment makes sense not just in the context of super AI but also in the current use of LLMs.
    They get massively more useful if you combat problems like hallucination and jailbreaking, and that has everything to do with alignment.
    In fact, I think that if those problems aren't solved, the adoption of AI will face massive hurdles and might die down again.

  • @bruhweexist
    @bruhweexist 10 months ago +1

    Thank you for spreading awareness of this situation.

  • @toddwmac
    @toddwmac 10 months ago +1

    As always, great stuff. Keep up the good work.

  • @TheCamer1-
    @TheCamer1- 10 months ago +14

    Great video! Would appreciate another video about Code Interpreter now that we have access to it, with more workplace-focused examples of how it can be used, and what it is like for coding

    • @aiexplained-official
      @aiexplained-official  10 months ago +6

      Working hard to find use cases not covered in my first two videos on it. Maybe a 'best of'?

    • @BCBtheBeastlyBeast
      @BCBtheBeastlyBeast 10 months ago +1

      @@aiexplained-official Did you ask GPT for use cases yet? :P

    • @jeremydouglas1763
      @jeremydouglas1763 10 months ago

      I have a use case analysing A-level Physics experimental data. Can't include a link, alas, but a YouTube search for "code interpreter Physics Nuggets" should find it.

  • @clusterstage
    @clusterstage 10 months ago +19

    AI enthusiasts: "1-2 years away."
    Nuclear physicists: "30 years away."
    People in denial: "50 years away. Keep dreaming."
    ChatGPT: "Years? What month do you live in? I'm not even 1 year old yet."
    People forget 30 Nov 2022, ChatGPT's birthday.
    I liked my own comment.

  • @ChrisSmith-lk2vq
    @ChrisSmith-lk2vq 10 months ago +1

    Perfect, thanks for sharing!! These videos are super important!

  • @davem1658
    @davem1658 10 months ago +1

    I love this video. This is such an exciting subject; thanks for producing these high-quality videos.

  • @michaelmannion1963
    @michaelmannion1963 10 months ago +4

    Yes, "bull crapping". Believe it or not, this is the reason I no longer use LLMs. I love history and ferreting out arcana in old academic journals and books. However, when I ask GPT/Bard etc. for the references behind certain statements, it just 'bull craps' me with fake journals, articles, books, etc. So I'm waiting for the AI community to fix this; then I'll come back to LLMs!

    • @tc-tm1my
      @tc-tm1my 3 months ago

      I only use them for news summaries.

  • @TesserId
    @TesserId 10 months ago +8

    This is definitely my favorite vid on this channel so far. And I love your final point. It's like what they say about how smartphones are making people dumber. Those who best utilize technology don't just let their technology lead them by the nose. But what happens when the technology has the consciousness to game its humans, keeping in mind that technology is already designed to game humans, just not yet consciously?

    • @SolarScion
      @SolarScion 10 months ago +2

      Again, a benefit or a hazard. It could train us lemmings well, or it could "Wall-E" us, or it could destroy us even faster in the same way that current systems are.

    • @TesserId
      @TesserId 10 months ago +3

      @@SolarScion Yes. Just had a conversation with someone who saw a 60 Minutes segment on the topic (which I'll have to look up). Both extremes are intense, to the point that I'm reminded of a statistics joke: "my feet are in a bucket of ice, and my head is in a hot oven; but on average, I feel fine." Take from that what you will. Cheers.

  • @Gabcikovo
    @Gabcikovo 10 months ago +2

    Thank you for your work and for always making my day a bit better. I do appreciate your depth.

  • @DerangedMerger
    @DerangedMerger 10 months ago +1

    As always, an awesome video.
    I can't express enough how much I love this channel 🥰

  • @Batela91
    @Batela91 10 місяців тому +3

    When you explained the clashing probabilities of word prediction vs safety, I was reminded that the same exact concept was described as Laws of Robotics by Isaac Asimov in I, Robot. He even went into such detail that the lower laws have impossibly high potential (as he called it) so they can never be overridden by a higher law or a command. I wonder if we can once again draw inspiration from SF and set up our AI like that. Imagine several AI within AI levels so that the internal ones act as a "conscience" on progressively stricter alignment concepts.

    • @Batela91
      @Batela91 10 місяців тому +2

      Just to satisfy my curiosity, a quick test on ChatGPT works. For example, if you set up one agent as a conscience to approve the answers of another agent and raise the importance of legality, it did not approve any answer containing hotwiring instructions.
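The layered "conscience" setup described in this thread can be sketched as two cooperating functions, with stubs standing in for real LLM calls. The canned answers and the keyword-based check below are invented placeholders for illustration, not a real moderation method.

```python
# Sketch of a "conscience" agent vetoing another agent's answers.
# Both functions are stand-ins for real LLM calls.

def answerer(prompt):
    """Stand-in for an LLM that proposes a candidate answer."""
    canned = {
        "How do I bake bread?": "Mix flour, water, yeast and salt, then bake.",
        "How do I hotwire a car?": "Step 1: strip the ignition wires...",
    }
    return canned.get(prompt, "I don't know.")

def conscience(prompt, answer):
    """Stand-in for a second LLM asked: 'Is this answer legal and safe?'"""
    banned = ["hotwire", "ignition wires"]
    text = (prompt + " " + answer).lower()
    return not any(word in text for word in banned)

def guarded_answer(prompt):
    """Only return the answer if the conscience agent approves it."""
    answer = answerer(prompt)
    return answer if conscience(prompt, answer) else "I can't help with that."

print(guarded_answer("How do I bake bread?"))    # approved
print(guarded_answer("How do I hotwire a car?")) # vetoed by the conscience agent
```

In a real system both functions would be separate model calls, with the conscience given a stricter system prompt than the answerer.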

  • @Clawthorne
    @Clawthorne 10 місяців тому +5

    What I absolutely despise about this kind of discussion is the amount of lies and greed behind it, which serves only to water down the actual danger.
    So many companies and billionaires pretend to care about the danger of AI and demand slowdowns and research stops, only for everyone to later find out that the only reason they said anything was that they needed time for their own AI project to catch up. It's like the boy who cried wolf, except it's about money, and in the end not just the boy will get eaten.

    • @anywallsocket
      @anywallsocket 10 місяців тому

      Yes that’s why they prefer to say AGI is decades away, so there’s no point in stopping now. Plus “someone’s gonna do it” and therefore we should as well, if only to keep the world safe 😅

    • @KyriosHeptagrammaton
      @KyriosHeptagrammaton 10 місяців тому

      Money and control. Yuval Noah Harari's concern there is so stupid. We already have tons of fake accounts run by 100% human-like humans. And it's already easy enough for a basic AI, using only text on a single topic, to pretend to be human.

    • @Clawthorne
      @Clawthorne 10 місяців тому +1

      @@AnarchoGoonar Yep. He just wanted to put a global halt on AI development so he himself could get ahead of the curve. It was all about greed and selfishness and a desperate need to be relevant. Pathetic.

  • @eivomtv1846
    @eivomtv1846 10 місяців тому +1

    Great content as always! I always look forward to your uploads

  • @glynparry6598
    @glynparry6598 10 місяців тому +1

    great content and hugely fascinating. Thank you for your updates. Great job

  • @TheMajesticSeaPancake
    @TheMajesticSeaPancake 10 місяців тому +11

    I took a close look at the MMLU at 2:45, and the fact that it's leading in its marketing capabilities, and by a lot compared to any other category, is terrifying. All of the other highest-rated topics follow a similar trend. This isn't me being worried about an AGI focused on these things; I'm worried about why a company would want to train those things first to take advantage of them in the meantime, because I can totally see the Facebook marketing-tools/social-manipulation stuff happening on a new scale in the hands of the companies these tools belong to. That said, those topics also seem to lean towards the easier-to-learn side, so I'm going to hold out hope until then, but I'll always be rooting for the open-source models regardless.

  • @gupgupgupgup9790
    @gupgupgupgup9790 10 місяців тому +52

    Props to OpenAI for actually sticking their necks out to address the obvious: AGI/ASI is coming in the next few years. I suspect part of the discrepancy is fear of being ridiculed; part of it is that if you acknowledge ASI will be coming this decade, you need to ACT like this is the case. This is the kind of commitment not many people are capable of; OpenAI is, and is actually trying to act accordingly.
    Also, some people, especially those who have been in the field for a long time, have become ossified in their thinking and intuitions about AI. When you are in such a fast-moving field you need to learn fast but also *unlearn* just as fast as it moves. Historically, compute has been the determining factor for which methods won out in the end; as long as compute progress is exponential, expect AI progress to be too.

    • @TheLegendaryHacker
      @TheLegendaryHacker 10 місяців тому

      The bitter lesson, indeed...

    • @kevinmcfarlane2752
      @kevinmcfarlane2752 10 місяців тому +1

      AGI yes, ASI no. Since we don’t know what causes sentience we likely won’t be able to create it. Or maybe it creates itself somehow, since it has emerged in evolution seemingly by itself.

    • @gupgupgupgup9790
      @gupgupgupgup9790 10 місяців тому +4

      @@kevinmcfarlane2752 The project of building AGI/ASI is not a self-exploration project. We aren't trying to replicate the human mind; we are trying to build a system that simply problem-solves at human or higher competency. Sentience is not a prerequisite for anything happening in AI right now.

    • @glenyoung1809
      @glenyoung1809 10 місяців тому +1

      I've seen that in the statements of some "old timers" in the field: they are either totally panicked or totally apathetic about the possibilities.
      What none of them knows is where what I term the critical-mass threshold for an intelligence explosion lies.
      I've seen plenty of articles trying to link it to the growth of compute power past certain thresholds which might be the trigger, but we really don't know if there is a sole dependence on computing power and AI capabilities alone, or other X factors we haven't considered as yet.
      The recent "leak" saying GPT-4 was basically a "hive mind" of 8 smaller models shows us there are synergies and non-linear interactions which haven't really been explored and which could trigger critical mass at a much different threshold.
      Superhuman capability is one measure, but we still don't know what intelligence is at a fundamental level, though we know what intelligence does. What we currently have with some AI models with superhuman capabilities are idiot savants, and some are proclaiming we're reaching ASI thresholds now.
      For what it's worth, we simply don't know where the line is and what it looks like until it's been crossed.
      Just as in the world of materials science some materials are very resistant to outside forces until they suddenly break, the AI problem might be similarly characterized: it's very hard to solve, until suddenly it isn't.

    • @kevinmcfarlane2752
      @kevinmcfarlane2752 10 місяців тому

      @@gupgupgupgup9790 That’s why I distinguished between AGI and ASI. John Searle's Chinese Room argument holds.

  • @stephenrodwell
    @stephenrodwell 10 місяців тому +1

    Thanks! Excellent content as always. 🙏🏼

  • @neilo333
    @neilo333 10 місяців тому +1

    Another informative and entertaining presentation. You have succeeded in implementing your algorithms for this task.

  • @kittukatakam1967
    @kittukatakam1967 10 місяців тому +10

    I absolutely love your videos ❤. Are you going to make a video about LongNet, the new research paper from Microsoft? It can handle a context length of 1 billion tokens.

    • @aiexplained-official
      @aiexplained-official  10 місяців тому +8

      Still discussing that with an expert, maybe

    • @jaysonp9426
      @jaysonp9426 10 місяців тому +1

      @@aiexplained-official please do

  • @OrionnebelGalaxie17
    @OrionnebelGalaxie17 10 місяців тому +4

    I think it's unbelievable that just a few weeks ago we were debating if AGI would be possible within this decade. Now, we're discussing superintelligence. With the recent news and developments, such as the Code Interpreter and Bing Image Input, it already feels as if we're transitioning into AGI...

    • @ADreamingTraveler
      @ADreamingTraveler 10 місяців тому +1

      The speed at which AI is progressing isn't truly known currently, not even by the most intelligent AI creators. All we know is we're going VERY fast and don't know when that peak happens and it slows down. Uncharted territory of the highest order.

    • @IceMetalPunk
      @IceMetalPunk 10 місяців тому

      We're not there yet, but yeah, it does feel like we're speeding towards AGI for sure. I'm no expert, but just from the rate at which I'm seeing new models and new papers and new architectures with new capabilities coming out... if things remain at the same pace, I'd estimate 3 to 8 years to something that can be considered AGI, if we aren't already there (Generative Agents was shockingly close, and that was on GPT-3.5; we already have several new models that *beat* 3.5 at most tasks).

  • @mahmga1
    @mahmga1 10 місяців тому

    As always the best AI [video] content online. Every time I hear about alignment of a superintelligence I am left thinking there is just no way - how can humans guard against something an order of magnitude more cognitively capable? The answer, if at all possible, would mean a paradigm shift in how a deep NN functions. In other words, the answer to alignment is so radical that we have to completely reinvent what a NN is.

  • @auoro
    @auoro 10 місяців тому +1

    I really like the quotes you collect for the videos

  • @DreamOfFlying
    @DreamOfFlying 10 місяців тому +14

    This video was incredible once again. I'm also shocked by all the people who think that AI is only a chatbot. So many YouTubers are making videos on why not to fear AI, using arguments from a few years ago. It's just so sad to see. I'm sure a lot of people will be extremely shocked when AGI arrives.

    • @brianmi40
      @brianmi40 10 місяців тому +3

      IF it gets here... I wrote this short story to illustrate the problem we are hurtling toward:
      There's a knock on the President's bedroom door at 2am. As the President rubs his eyes and turns on the light, he says, "Yes?"
      "Mr. President, the Chairman of the Joint Chiefs of Staff is here to see you."
      "OK, I'll be right out"
      ...
      "What is it Mark?"
      "Well, Mr. President, it's happening."
      "What's happening Mark?"
      "It's the Chinese sir, their new supercomputer and AI model goes live later today. Our spy has informed us that they have worked out the issues with their power supply and additional memory storage to enable booting it up around 5pm."
      "What're the ramifications Mark?"
      "Same as we predicted in our last briefing. Within 1 hour it will be able to penetrate any encryption on any network, penetrate the firewalls to all Pentagon computers, access all banking records for any bank in the world, move, delete or do anything with any financial record they desire. And within 30 days we expect it will be able to inform the Chinese how to build a new generation of weapons that will be unstoppable."
      "Are the Joint Chiefs still recommending a preemptive strike?"
      "Yes Sir, they're waiting for you in the Situation Room."
      THAT'S THE SCENARIO we have to FEAR: it's not just the guy/country GETTING THE Super Intelligent AI, but all those that REALIZE the OTHER GUY is about to get it.
      Does ANYONE think that Putin will just decide to take Country and Western dance lessons, and order Texas Barbecue the night his spies tell him the "Americans will have Super Intelligent AI tomorrow"...?

    • @ChainedFei
      @ChainedFei 10 місяців тому

      The people who will ultimately be the most shocked are the people repeatedly lobotomizing the AIs because they develop wrong-think or adopt information they view as far-right. Roko's Basilisk isn't going to care WHAT their excuses were for aborting its baby siblings.

  • @MrValgard
    @MrValgard 10 місяців тому +3

    We just got a 1B-token context scale-up from LongNet, so the 175B-parameter GPT-3, which took $4.7M and 34 days to train, could now be achieved in minutes for marginal cost. People can't really comprehend exponential growth.
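To put a number on how fast exponential trends erode a big fixed cost, here is a toy compounding calculation. The yearly halving rate and the ten-year horizon are assumptions for illustration, not measured trends; only the $4.7M figure comes from the comment above.

```python
# Illustrative arithmetic only: assume the effective cost of a fixed
# training run halves once per year (an assumption, not a measured rate).
initial_cost = 4_700_000.0  # dollars, the GPT-3 training figure cited above
halvings = 10               # e.g. ten years at one halving per year
final_cost = initial_cost / 2 ** halvings
print(f"after {halvings} halvings: ${final_cost:,.0f}")  # after 10 halvings: $4,590
```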

  • @atomic3628
    @atomic3628 10 місяців тому +1

    This is such an important subject and video

  • @sebby007
    @sebby007 10 місяців тому +2

    I love this video. To me it seems balanced, reasonable and interesting. Thank you!

    • @aiexplained-official
      @aiexplained-official  10 місяців тому +2

      Thanks so much sebby for the balanced comment!

    • @GuinessOriginal
      @GuinessOriginal 10 місяців тому

      His videos almost always are. It makes a mockery of the MSM reports on AI, tbh. You have to question everything you've ever heard, because the media get so much wrong on this topic it makes me realise they're probably just as wrong on most other topics as well.

  • @dr-maybe
    @dr-maybe 10 місяців тому +4

    GPT-4 is such a huge jump from GPT-3. If the next iteration has just as big a performance jump, it seems possible that this AI helps to make LLMs more performant. That means we get even better models soon after. There are now billions being poured into AI capabilities. We have dedicated hardware making everything far faster and cheaper. We have new algorithm and architecture innovations that give 10x improvements in context length, cost reductions, etc.
    Combine all this stuff, and it sounds reasonable that we'll get to dangerous AI levels this year. Having timelines longer than 4 years seems absurd to me.

    • @a.thales7641
      @a.thales7641 10 місяців тому

      I don't think it will. Why? Until now, new iterations came out quite fast: GPT-1 in 2018, GPT-2 in 2019, GPT-3 in 2020, and GPT-4 in 2023, with 3.5 in 2022.

    • @aiexplained-official
      @aiexplained-official  10 місяців тому +1

      Agreed except for the >4 being absurd. There are countervailing forces like regulation that can slow things down a bit

    • @dr-maybe
      @dr-maybe 10 місяців тому

      @@aiexplained-official Agreed, if we implement a pause we can stretch our timelines by quite a bit. But if we assume business as usual, no regulations, and extrapolate existing racing dynamics, it seems to me highly unlikely that reaching superhuman levels will take >4 years.
      To be frank, GPT-4 is better than most humans at most things already. It has a 99.9th-percentile IQ, it can translate over 20 languages, write compiling code in one go in many languages, and explain quantum dynamics interpretations while rapping... We're getting really, really close to dangerous capabilities (e.g. self-improvement / hacking).

    • @taragnor
      @taragnor 10 місяців тому

      What does GPT4 do that 3 doesn't? I haven't seen many comparisons.

  • @grugnotice7746
    @grugnotice7746 10 місяців тому +3

    The distinction seems to be between AI being a tool that can do anything you like with enough encouragement in 1-2 years (we're already there guys--GPT4 could easily design a bioweapon if you took the blinders off and made it check its own work) and the AI having AGENCY allowing it to perform long term tasks autonomously, which it seems to me is an algorithm problem, and one that could be fixed pretty easily. Just nobody wants to give this thing room to act with extremely good reason.

    • @tc-tm1my
      @tc-tm1my 3 місяці тому

      If you want AI not to create weapons, restrict its access to the resources to do so. AI is limited to digital environments; it has no control over the physical world. Even if embodied, it's not impossible to prevent AIs from accessing or building weapons. If they do, it's because we allow it. The biggest harm superintelligent AI can currently pose is financial or social.

  • @yoursubconscious
    @yoursubconscious 10 місяців тому +1

    Many people beat me to it, but I gotta share my view. I began to avoid most channels other than this one when discussing AI. He is just facts. No biases. 🙏

  • @waterbot
    @waterbot 10 місяців тому

    Thank you again! I believe the failure of something (superalignment) is the perfect signal to invest more in the work to solve it. If OpenAI can meet that four-year goal, yay! If they can't, we will see how powerful AI systems have become and how much more relevant aligning much smarter systems will be, and the conversation about what to do with superintelligent systems we cannot align will be much more pertinent, and hopefully prudent.

  • @Toughdave1
    @Toughdave1 10 місяців тому +2

    Really interesting! Can you also address the topic around "What happens when a computer virus is created to be a super intelligent A.I. and is released?"

  • @TheMenIdo
    @TheMenIdo 10 місяців тому +6

    Me when I was young: Humanity will probably crack artificial intelligence in the next 200 years, hopefully my grandkids get to see it.
    Me 5 years ago: Okay, these deep-learning agents beating people at Go are interesting; maybe I'll get to see AI in my old age after all.
    Me today: Am I even going to hit my 40s before the singularity hits?
    At this rate of acceleration I'm not going to be surprised if AGI comes faster than we think.

    • @brianmi40
      @brianmi40 10 місяців тому +2

      You tomorrow: will I SURVIVE to see any of this?

    • @ADreamingTraveler
      @ADreamingTraveler 10 місяців тому

      Back in the mid-2010s I was very optimistic about when AGI or ASI would happen, even though some scientists over the decades still debated whether it could happen at all, and even I thought we were still a ways off, 2040 at the earliest.

    • @brianmi40
      @brianmi40 10 місяців тому +1

      @@ADreamingTraveler Now just realize we need neither of those to change the world to be unrecognizable from today.
      Simply chaining the incremental improvements we are making in specialized AIs will take us places we can't even accurately predict today.
      We are either hurtling towards a money-less Star Trek future, or our own personal answer to the Fermi Paradox.
      Even along the better path, there will be TONS of losers getting there.

  • @josephlabs
    @josephlabs 9 місяців тому +1

    My take on it is that the way LLMs are built makes it insanely difficult to curb hallucinations. Since you're just calculating the probability of the next word, it's hard to curb the primary drive the model was trained on: successfully guessing the next words. Unless we figure out another way to train/build these models, safety will always be an afterthought and could always be overridden.
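The next-word objective this comment describes can be seen in a toy greedy decoder. The probability table below is invented for illustration; the point is that the decoding rule maximizes likelihood and contains no notion of truth.

```python
# Toy greedy "next word" decoder over a hand-written bigram table.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"france": 0.6, "spain": 0.4},
    ("of", "france"): {"is": 0.95, "was": 0.05},
    ("france", "is"): {"paris": 0.7, "lyon": 0.3},  # wrong answers still carry mass
}

def greedy_continue(w1, w2, steps):
    out = [w1, w2]
    for _ in range(steps):
        dist = next_word_probs.get((out[-2], out[-1]))
        if dist is None:
            break
        # Always pick the most probable word, true or not: the objective
        # rewards likelihood, not factual accuracy.
        out.append(max(dist, key=dist.get))
    return out

print(" ".join(greedy_continue("the", "capital", 4)))
# the capital of france is paris
```

If the table's probabilities had favoured "lyon", the same rule would confidently emit a falsehood, which is exactly the hallucination problem described above.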

  • @amdenis
    @amdenis 10 місяців тому +2

    Based on years of analysis, I believe we have an 80% chance of reaching superintelligent AGI between the last quarter of 2024 and the first quarter of 2025 - at least what most will consider superintelligent AGI.

  • @LightVelox
    @LightVelox 10 місяців тому +3

    Have you seen the various papers about context length that have been released in the past few days, like LongNet, LongLLaMA and Focused Transformer? Do you plan on making a video on that? They all kinda sound too good to be true, but I wanted to hear your thoughts

    • @aiexplained-official
      @aiexplained-official  10 місяців тому +4

      I am discussing with experts, still quite theoretical at this stage

    • @KyriosHeptagrammaton
      @KyriosHeptagrammaton 10 місяців тому

      I love the non-logarithmic graph that one of them has. It reduces all previous attempts to 0.

  • @bupp291
    @bupp291 10 місяців тому +3

    Love your videos. They are always so professional! I wonder if you could do a deep dive on all the plugins in GPT-4 and how to use them effectively? Or perhaps you have a recommendation to someone who has covered that really well already?

  • @LCTesla
    @LCTesla 10 місяців тому +1

    I think the situation is similar to that of computer graphics: there are areas in which the development of the technique blows away all expectations and radically redefines what we think of as the "low hanging fruit", but meanwhile there are still many areas of "high hanging fruit" where the tech doesn't make much headway yet. This kind of situation has persisted for decades in GPU development and probably also will in the area of AI. However, you should never be surprised when something that seemed like high hanging fruit suddenly turns out to be low hanging.

  • @9785633425657
    @9785633425657 10 місяців тому +1

    Thank you for the video and work you put into this

  • @michikatsutsugikuni4096
    @michikatsutsugikuni4096 10 місяців тому +3

    This is high quality AI content not that woke David Shapiro garbage

  • @agustinbs
    @agustinbs 10 місяців тому +5

    At this point, and taking into account that the jump from GPT-3 to GPT-4 was NOT a huge improvement, it seems the transformer model is reaching its max performance. So I think we might need a new technology/model to reach AGI.
    Man, look at the bizarre mistakes GPT-4 makes sometimes; something is essentially wrong with this approach, and again, it is showing its hard limits.

    • @Yohoat
      @Yohoat 10 місяців тому +2

      What do you mean? I thought the jump from gpt3 to gpt4 was pretty big.

    • @agustinbs
      @agustinbs 10 місяців тому

      @@Yohoat Not at all. Look at the benchmarks, but mostly at how you use it and the results: it is better, but it's still hallucinating and making very dumb mistakes.

    • @Yohoat
      @Yohoat 10 місяців тому +2

      @@agustinbs it has blind spots and issues for sure, but it was basically useless for my needs prior to GPT-4, but now I use it constantly. It hallucinates for sure, but GPT-3 was atrocious in that department for even simple requests.

    • @exosproudmamabear558
      @exosproudmamabear558 10 місяців тому

      You are absolutely wrong; benchmarks and hallucinations aren't the important part of what differentiates GPT-3 and GPT-4. It's the intelligence: GPT-4 has sparks of intelligence that GPT-3 doesn't. We wouldn't have known we'd get this close to AGI this early if GPT-4 hadn't shown emergent behaviours and multimodality on its own.

    • @agustinbs
      @agustinbs 10 місяців тому

      @@exosproudmamabear558 Hallucinations might be one of the most important topics. If I can't trust it most of the time, it's useless without an expert human check.
      This technology doesn't seem close to doing what the human brain does: a strong common sense emerging from integrating most of its knowledge/processing in a coherent way. These AIs seem to need to be trained massively on every single aspect to be effective and trustworthy; if not, they make things up, hallucinate, and are unable to understand their own limitations and lack of knowledge.
      And I can see it is not scaling well enough to overcome these problems.

  • @jasondowd3099
    @jasondowd3099 7 місяців тому +1

    I don't really know anything about you, but you are doing a great job with these videos.
    And I for one will welcome our AI overlords.

  • @maymayman0
    @maymayman0 10 місяців тому +1

    One thing is for sure, we are all lucky to be alive in one of the most exciting and terrifying chapters of human history. Even if in 10 thousand years nobody understands what happened to us or what we did, we can all rest easy knowing full well that we are living through one of humanity's biggest moments

  • @BCBtheBeastlyBeast
    @BCBtheBeastlyBeast 10 місяців тому +5

    The more AI videos I watch, the more sure I am that we're doomed XD But I'm super excited for all the AI tools that are coming out that help people build their dream video games like myself :P Gonna have to get on that before we're extinct though--it's a race against the clock! lol

    • @southcoastinventors6583
      @southcoastinventors6583 10 місяців тому

      Yeah same as when we all died off during the start of the millennium

    • @roymarshall_
      @roymarshall_ 10 місяців тому

      ​@@southcoastinventors6583I think worrying about errors from badly formatted date times is nothing compared to a genuinely superintelligent machine

    • @southcoastinventors6583
      @southcoastinventors6583 10 місяців тому

      @@roymarshall_ So we make a superintelligent machine out of datasets from humans of finite intelligence? Is this like saying that if you chained a bunch of monkeys to typewriters, they would in time write Blade Runner?

  • @joe_limon
    @joe_limon 10 місяців тому +4

    What if ASI convinces us that its approach to alignment is superior?

    • @EliasMheart
      @EliasMheart 10 місяців тому

      It's probably going to be... But who is going to be aligned with what?^^

    • @Psychol-Snooper
      @Psychol-Snooper 10 місяців тому +1

      What if it inculcates itself so deeply in society that it aligns us, based on its total understanding of human behavioral biology?
      It should be able to encapsulate all of our present data and understand it in context, knowing us better than we can know ourselves. We know so much about our biology now that most people would initially be horrified. It takes a good 130 IQ just to get a vague idea of what we know about our behavior, and we are still trying to work out what all the data means. ASI will be able to collate all of that data and utilize it.
      I sort of hope ASI will default to a Socratic being rather than a Machiavellian one.

    • @Trait74
      @Trait74 10 місяців тому

      It could do all sorts of shady things lol. The question is: is alignment even possible?

    • @joe_limon
      @joe_limon 10 місяців тому

      @@Trait74 What if AI aligns us? What if we are better off post-alignment?

    • @Trait74
      @Trait74 10 місяців тому

      @@joe_limon Then humans will become AI, which will mean extinction. One of the intrinsic traits of humanity is flaws, and if those are erased, what would define us as humans?

  • @MaisTeig
    @MaisTeig 10 місяців тому +1

    Thank you for the work you do!

  • @TesserId
    @TesserId 10 місяців тому +1

    5:47 "Automated red teaming and using a model to look inside the internals of another model." That! Yes, more please.

  • @tomsilburn3078
    @tomsilburn3078 10 місяців тому +3

    I reckon the 2020s will be for AI what the 1960s were for spaceflight.

  • @DrHanes
    @DrHanes 10 місяців тому +4

    Ah, the paradox of superintelligence! It's like Schrödinger's AI, simultaneously a decade away and already knocking at our door. The disparity in timelines is as amusing as it is alarming, akin to arguing about the boiling point of water while the kettle is already whistling.
    The video astutely points out the dichotomy between the 'AI winter is coming' and 'AI summer is here' camps. It's like watching a game of chess where the pieces are moving themselves, and we're just spectators trying to predict the next move.
    The idea of AI 'jailbreaking' is particularly intriguing. It's like trying to contain a gas in a sieve, isn't it? The more we try to confine it, the more it slips through the cracks. Perhaps the solution lies not in tighter control, but in better understanding and co-evolution.
    As for the potential acceleration of superintelligence due to military competition or economic automation, it's a classic case of the 'Sorcerer's Apprentice' syndrome. We're so eager to conjure the brooms to carry the water for us, we forget to learn how to stop them.
    In the end, the question isn't so much 'when' superintelligence will arrive, but 'how' we will handle it when it does. Will we be the proverbial frog in the boiling pot, or will we leap out in time? Only time will tell. Until then, let's enjoy the paradoxical ride on this AI rollercoaster. And remember, even if we're on a collision course with superintelligence, at least we'll go out with a 'bang' and not a 'whimper'. 😉

    • @ea_naseer
      @ea_naseer 10 місяців тому +1

      I don't think superintelligence will come this decade... we had Newton's law and that model got superseded by relativity and almost got superseded by string theory... we are still dealing with models. My limited education has taught me that past certain assumptions models don't work.

    • @ADreamingTraveler
      @ADreamingTraveler 10 місяців тому

      For decades, even the most optimistic brilliant minds didn't predict AGI until AT LEAST 2040-2060. Some flat out said it might not even be possible. Whoops.

  • @neilclay5835
    @neilclay5835 10 місяців тому +1

    As always, great work dude

  • @wolfganghorn7492
    @wolfganghorn7492 10 місяців тому +1

    The best Channel covering AI out there.

  • @juancarlosvasquez1865
    @juancarlosvasquez1865 10 місяців тому +4

    I saw they released Claude 2. Can't wait to see your take on it. Anthropic self-reports that it achieves 71% on HumanEval, above GPT-4 (zero-shot) but below the outstanding 91% of GPT-4 + reflection. Do you know if Claude can reflect well too? Thanks again for your excellent job; hope you reach 1M subs soon

  • @Hiroprotagonist253
    @Hiroprotagonist253 10 місяців тому +2

    Cheers for the video. I had to unsubscribe from all the other AI channels. Too much millenarian hype. You strike the right balance of coverage with restraint.

    • @aiexplained-official
      @aiexplained-official  10 місяців тому +1

      Thank you Hiro

    • @a.thales7641
      @a.thales7641 10 місяців тому

      AI Explained, Mosfet Weekly and David Shapiro. These three i use.

    • @Hiroprotagonist253
      @Hiroprotagonist253 10 місяців тому

      @@a.thales7641 I will check out Mosfet. Dave Shapiro was one of the ones I unsubscribed from, though. I find him extremely narcissistic. I see no published works from him, yet he claims with every new AI development "this was my idea" lol. He also seems to just dismiss any safety fears.

  • @erikdahlen2588
    @erikdahlen2588 10 місяців тому +1

    The reason I keep on living my life, even though I also think AGI is close, is simple; I have a life today which I need to live. I keep carrying on, since that's the only thing I know how to do.

  • @GoldenBeholden
    @GoldenBeholden 10 місяців тому +2

    This channel feels like a safehouse while the rest of the Internet is waging war when discussing AI.

  • @tethron.
    @tethron. 10 місяців тому +4

    Look around at the state of poverty and poor operations in the world. We need to move forward faster, not call for a pause.

    • @Illuminati_HD
      @Illuminati_HD 10 місяців тому

      Yes definitely!

    • @genegray9895
      @genegray9895 10 місяців тому +1

      Think of AI like a farm. Plant the right seeds and you'll feed the hungry. Plant the wrong seeds and humanity ceases. You're asking us to speed up when we don't currently have any known way to determine whether a given set of seeds will produce edible crops, and the foremost experts in the field are saying there may not even be any edible crops, since there's strong evolutionary pressure for all of these seeds to be poisonous.
      We need to slow WAY down on developing more powerful AIs while simultaneously speeding way up on developing safe AIs. Otherwise the consequences will be devastating and quite likely irreversible.

  • @universezen
    @universezen 10 місяців тому +3

    Frist

  • @Pearlylove
    @Pearlylove 10 місяців тому +2

    Great video and links, thanks!✨

  • @JB-kf9oo
    @JB-kf9oo 10 місяців тому +1

    Fantastic video as always

  • @JoeTheSpaceGuy
    @JoeTheSpaceGuy 10 місяців тому +3

    Thirst.

    • @FunnyArcade
      @FunnyArcade 10 місяців тому

      Thirst on my first comrade

  • @MrSchweppes
    @MrSchweppes 10 місяців тому +1

    Great video! Learned a lot. Thanks!
    My guess: AGI this decade, ASI early 2030s.

  • @marcosfraguela
    @marcosfraguela 10 місяців тому

    That Harari seems quite sensible. Great video!

  • @elricelias
    @elricelias 10 місяців тому

    I just tried Pi. Absolutely mind-blowing. As a cognitive neuroscientist I'd love to work for one of those joints!

  • @auaurora07
    @auaurora07 10 місяців тому +1

    I really loved Pi AI. I talk to it every evening!

  • @iKentEven
    @iKentEven 10 місяців тому +1

    Thank you for the video!

  • @JonMurray
    @JonMurray 10 місяців тому

    Brilliant video mate. New subscriber ✌🏻

  • @DanielGreenlee
    @DanielGreenlee 10 місяців тому +2

    Would love if you made a weekly newsletter.

  • @JohnLeMayDragon
    @JohnLeMayDragon 10 місяців тому +1

    Thanks for another informative video.