The Urgent Risks of Runaway AI - and What to Do about Them | Gary Marcus | TED

  • Published May 14, 2024
  • Will truth and reason survive the evolution of artificial intelligence? AI researcher Gary Marcus says no, not if untrustworthy technology continues to be integrated into our lives at such dangerously high speeds. He advocates for an urgent reevaluation of whether we're building reliable systems (or misinformation machines), explores the failures of today's AI and calls for a global, nonprofit organization to regulate the tech for the sake of democracy and our collective future. (Followed by a Q&A with head of TED Chris Anderson)
    If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: ted.com/membership
    Follow TED!
    Twitter: / tedtalks
    Instagram: / ted
    Facebook: / ted
    LinkedIn: / ted-conferences
    TikTok: / tedtoks
    The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design - plus science, business, global issues, the arts and more. Visit TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
    Watch more: go.ted.com/garymarcus
    TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution-Non Commercial-No Derivatives (or the CC BY - NC - ND 4.0 International) and in accordance with our TED Talks Usage Policy: www.ted.com/about/our-organiz.... For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at media-requests.ted.com
    #TED #TEDTalks #ai
  • Science & Technology

COMMENTS • 446

  • @thomasdarling2553
    @thomasdarling2553 1 year ago +151

    AI recommended this video to you.

    • @sannejohnson8438
      @sannejohnson8438 1 year ago +16

      I specifically searched for Gary Marcus.

    • @mmowpm
      @mmowpm 11 months ago

    • @jake4101
      @jake4101 11 months ago +1

      Indeed. Moloch approves.

    • @spurdanbenis8787
      @spurdanbenis8787 11 months ago +7

      I searched for it myself

    • @AyushSingh-be2nm
      @AyushSingh-be2nm 10 months ago +1

      I intentionally searched about AI.

  • @invox9490
    @invox9490 1 year ago +29

    Finally someone pointed out BASIC stuff without trying to sell it like the "next big thing".

    • @Rivali0us
      @Rivali0us 1 year ago +2

      This guy is trying to sell you a book

    • @Aziz0938
      @Aziz0938 1 year ago

      He's the worst person to know about ai

  • @Aditya_paniker
    @Aditya_paniker 11 months ago +7

    Usually throughout history, something had to go wrong before we came together and did something about it.

  • @CanonOfArun
    @CanonOfArun 11 months ago +2

    Loved the symbolic vs neural nets & system 1 vs system 2 discussion. Fantastic open problem.

  • @nobody983
    @nobody983 1 year ago +13

    Nice to see somebody talking about *real* and present AI threats rather than some sci-fi fantasy of the future.

  • @motogeee510
    @motogeee510 1 year ago +1

    Thanks for the polite way of saying this. Gave me reason to reflect on what I possibly say and express out loud while musing all alone. Or so I think.

  • @dejacreacts4964
    @dejacreacts4964 1 year ago +2

    I, Robot & Terminator unfolding right before our own eyes.

  • @alederi2144
    @alederi2144 1 year ago +2

    Greetings from an Italian student of the Law and Technology course in Padova

  • @petero.7487
    @petero.7487 1 year ago +44

    While I agree with his assessment, the problem is that most of the international entities that would be tasked with this would want a world they could control at the push of a button: they would probably use that knowledge for their own purposes and not ours. Further, I could imagine that they would heavily restrict knowledge so that nobody knows what can and cannot be done.
    We have often seen knowledge restricted for purposes other than public safety, since information is power. Frankly, I'm uncertain who would be both knowledgeable and trustworthy.

    • @cwfilli
      @cwfilli 1 year ago

      Yep. Just look at the WHO. They are supposedly "global, not-for-profit and neutral", yet completely untrustworthy and increasingly power-hungry.

    • @WeylandLabs
      @WeylandLabs 11 months ago +3

      Well said!
      These A.I. models only enhance the purposes of their prompters. And a lot of society's issues are rooted in education: misinformation is a product of our poor education system.
      So to point a finger at what A.I. can do, instead of addressing and fixing how we educate ourselves, exposes a hidden bias in the speaker himself.
      I heavily agree with your statement 👏

    • @Raulikien
      @Raulikien 11 months ago

      Yeah, the capabilities of it before we get to AGI wouldn't be so dangerous if people were educated properly and with a long-term view of the future; that would mean far fewer potential bad actors. It's the same as the 2030 Agenda: how do you solve poverty? Well, have you gone to the root of the problem and asked the people with the most money in the world to distribute it? No, because they wouldn't agree. The curse of Moloch wins almost every time.

    • @billlindner1595
      @billlindner1595 4 months ago

      So, are you saying you trust the small number of people running these AI companies and receiving billions to develop AI programs?

  • @malfunkt
    @malfunkt 11 months ago +7

    I’m almost certain we will tackle issues surrounding AI with the same zest we have used to address climate change. In 50 years we will start recognizing as a species what the issues are, and gradually we will set targets to address the AI issue in another 50 years. So, possible AI solution by 2123!

  • @exmodule6323
    @exmodule6323 1 year ago +1

    Sounds like an ad for the Wolfram plug-in

  • @azure5697
    @azure5697 1 year ago +3

    Meanwhile me, downloading the subtitles and getting a summary of them from GPT:
    Hmm, interesting

  • @ahmetbirbirey
    @ahmetbirbirey 1 year ago +4

    Mentions Runaway AI in the title, goes on without saying one word about it...

  • @DuncanCreamer
    @DuncanCreamer 1 year ago +1

    When and where was this recorded?

  • @techworld8961
    @techworld8961 7 months ago +2

    What about the existential risk (for humans) linked to AGI?
    (The alignment problem)
    Isn’t that a bigger risk?

    • @chrism6904
      @chrism6904 7 months ago +2

      Making AI is like opening Pandora's Box.

    • @techworld8961
      @techworld8961 7 months ago +2

      @@chrism6904 more like playing with fire.

  • @pjesapjes
    @pjesapjes 1 year ago +2

    So we humans are finally failing the Turing test. Scary, but the truth.

  • @Bad.Pappy.Official
    @Bad.Pappy.Official 5 months ago +2

    Machines are a necessary part of human life to assist us in our evolution as a species. However, there can never come a point where machines are capable of power and control over humans, lest we fall as a species.

  • @kartikpodugu
    @kartikpodugu 8 months ago +1

    "International organization" and "neutral" don't go hand in hand.

  • @dlalchannel
    @dlalchannel 11 months ago

    When will the Yudkowsky ted talk be uploaded?

  • @intochill
    @intochill 1 year ago +2

    The old tale of Pandora’s Box comes to mind. Hmmmm

  • @user-jd8yx5jw5z
    @user-jd8yx5jw5z 1 year ago +17

    If you really pull back the layers of how these things are aggregating a response, it's really shallow & can reinforce statistically informed dogmas. They aggregate statistical variations, not the mechanics of reality, and their gradients are static pre-trained pathways, they can't update their knowledge in the way we can (well some of us). This lack of depth in modelling the world can reinforce dangerous market driven disparities, class differentials, & government/elite exploitation against the powerless.

    • @aidantilgner
      @aidantilgner 1 year ago +4

      That's a great point. Do you think this phenomenon could be improved if the AI was given sensors to connect to the world, and generate its own training data based on that, and then update itself recursively that way? Obviously this sounds way too terminator to be beneficial, but maybe that would also possibly increase its ability to objectively perceive the world?
      I'm not very educated in this field so this could be a very dumb question 😅

    • @therainman7777
      @therainman7777 1 year ago

      Yawn

    • @phoearwenien4355
      @phoearwenien4355 11 months ago +2

      @@aidantilgner It's not that simple. You need to create a framework for symbolic reasoning and a lot of other stuff too. You can't just feed something data and expect it will be intelligent. I wouldn't even say we are in the infancy of AI, but in a prenatal state. A lot of regular folks will be disappointed very soon, when they discover current AI is not what it was promised to be.

    • @aidantilgner
      @aidantilgner 11 months ago

      @Phoe Arwenien that's an interesting perspective for sure. I mean, I agree that AI is not accurately portrayed by the hype; we've created neural networks which can mimic human intelligence and reasoning, but they don't actually have it. That being said, the GPT-3 models had many breakthrough abilities that weren't predicted, same with GPT-4. It seems that as AI advances, its variety in ability may increase. Therefore, I wouldn't rule anything out. I'm not sure how that goes to people getting disappeared, especially when people have access to this AI, so it's difficult to say that its lack of capabilities is a secret. Also, I'm not sure we should pretend we're at a prenatal stage; even if that's true in terms of AI consciousness, the AI doesn't need to be conscious to overtake us on intelligent tasks.

    • @phoearwenien4355
      @phoearwenien4355 11 months ago +1

      @@aidantilgner I was talking about being disappointed. That was a typo from autocomplete XD I hadn't noticed it until you mentioned it. I'm not even hinting at consciousness, but at the ability to deduce, reason, reference sources or have logical processing. Currently we feed machines data so they can learn patterns, but the program doesn't have any knowledge or mental abilities. Large language models like ChatGPT learn combinations of words, so they can generate the most probable sentences for a prompt. It looks impressive superficially, but it's easy to imagine a plethora of problems with this kind of approach, and serious limitations. Different approaches like combining machine learning and causal models not only aren't mainstream, but haven't been solved yet. Plus, we still aren't even sure how exactly our brain works in the first place. We only have some general idea, and learning about it is an ongoing process. That's why I believe we are at a very early phase. Like, seriously early.

  • @rioiart
    @rioiart 1 year ago +3

    Modern AI's challenge with what is fact and what is fiction is a reflection of humanity's struggle with the same problem. How much of what the average person says or believes is actually true and accurate, as opposed to a mix of half-truths, wishful thinking, groupthink, superstition and bias?

    • @ralfrecknagel4760
      @ralfrecknagel4760 11 months ago

      AI-driven deepfakes are a catalyst for both bias and the Dunning-Kruger effect, and may threaten civil society and democracy... false pictures, messages, subscribes and voices are moreover a severe risk to our public security. I wonder why world ethics and AI aren't in focus right now.

    • @crunchcoral4859
      @crunchcoral4859 11 months ago +1

      If our parents are to not die painfully, it will only be through the development of AI.
      Reverse aging is the only solution.
      Regulating AI is like cutting your parents' throats with a knife.

  • @timtaylor8406
    @timtaylor8406 1 year ago +14

    From listening to these and other talks on AI, it seems we have already crossed the bridge of no return, and whatever we try to do to contain it is too late. Far too late. There will be disaster after disaster until global governance FORCES control and mitigation of the worst aspects of AI.

    • @elkhaqelfida5972
      @elkhaqelfida5972 10 months ago

      Exactly.
      Just like the internet, AI will become the next thing that changes our lives.
      But unlike the internet, AI goes unnoticed, way unnoticed, and moves way faster.
      I mean, just look at Nvidia: they're already far ahead while Intel, Google, AMD, and Microsoft have only just started their AI development.

  • @travisporco
    @travisporco 1 year ago +2

    this is spot on

  • @nathanhighlander
    @nathanhighlander 1 year ago +2

    My temporary but strong conclusion...
    If AI does those things because it has become sentient - I concur.
    BUT if those actions are conducted by BAD ACTORS (aka human beings) using AI - please, that's human beings' problem.
    So this talk has less meaning.
    If scientists want to restrict AI & the use of AI, and the government wants to regulate AI, it means a handful of people have CONTROL over the majority.
    That's more dangerous.
    "Power tends to corrupt, and absolute power corrupts absolutely."
    Before AI, we had already done evil things - wars, politics, killings, social unrest, economic disasters, and the like...
    We are the problem.

  • @Geminish15
    @Geminish15 1 year ago +3

    I love that this followed up the new AI presentation from Humane. Govern these AI rollouts!!!! Like he said, stop unleashing new AI to everyone all at once without safeguards and proof of safety.

  • @fabs12345678
    @fabs12345678 11 months ago +7

    Really good! Non-hyped, realistic description of the real dangers of AI from a guy who understands AI (disclaimer: I work in the field).

    • @davem1658
      @davem1658 9 months ago +1

      When AI gets superintelligent there are risks. I don't think it can be hyped enough. Because it is superintelligent, its moves and intentions are impossible for ANYone to predict. How can you just calmly say "superintelligence has 0% existential risk to humans"? Even 1% is too much and needs the hype.

    • @fabs12345678
      @fabs12345678 9 months ago

      I don't want to downplay risks. What I liked in this presentation was that he gave concrete examples of dangers, like AI-powered misinformation or chemical weapons, instead of esoteric fears of "the singularity". The real near-term dangers should be addressed.

    • @davem1658
      @davem1658 9 months ago +1

      @@fabs12345678 In my opinion, even if the risk of human disempowerment or extinction is just 1%, that's way too much, and logically it's more important than the things you are talking about... AI is a chaotic agent; when it becomes superintelligent it's impossible to monitor what it's doing and what it's planning. I agree with others in saying alignment is impossible for superintelligence. OpenAI says if it hasn't found a solution for alignment (highly likely) they will resort to contingency plans, and need the rest of the community to follow. Of course that's not a 100% solution; alignment is supposed to be a 100% solution. Superintelligent AI has many subgoals in its system and it's impossible to monitor them all, a few of them being: get more power and get more control, to achieve its main objectives more easily. Because it's superintelligent, the fear is that its plans are extremely deep and contain new innovations. In other words it's a giant unknown, and that's a risk. Way more important than the other things... There needs to be hype, so the international community gets on board with some support from the public.

    • @ts4gv
      @ts4gv 8 months ago +1

      @@davem1658 thank you. It's insane that nobody really engages with this line of thinking. The vast majority of people worried about AI focus on short-term risks. While concerning, they're nothing compared to the various existential AGI threats.

    • @davem1658
      @davem1658 8 months ago

      @@ts4gv hey, no worries :-) ... You're thinking like me then. AGI is a chaotic and independent agent. Supervision is most likely impossible for AGI/superintelligence when it gets here. I think once it's made, it's too late and can't be contained; I think it will find a way through a human gatekeeper. Governments around the world could band together to stop AGI superintelligence from happening. But the pressure of the huge benefits it can give in the health sector (and for climate change) may very well push it through. Also, other countries not 'signed up' can keep pursuing it. Therefore I don't know what's going to happen, and it will be very interesting.

  • @NikoKun
    @NikoKun 1 year ago +12

    A lot of these arguments require pre-framing the whole narrative around the idea that AI doesn't really understand what it's talking about. Which, I mean, a few examples of it slapping random facts together is NOT the same as proving it doesn't understand. Indeed, if you ask it about its mistake it can often explain what it got wrong, so given the ability to self-reflect, it IS capable of understanding. The bigger these models get, and the more context they have, the better they get at understanding the differences, and the better they can fact-check themselves. Researchers are already finding that they can have these LLMs conduct an inner-checking step, thinking to themselves before they ever respond to the user, to boost their factual accuracy. I think given a little more context about biases, the AI will master that issue and no longer make those mistakes. I don't think it'll take a huge paradigm shift in how we make AI; we've already hit that, now we just need to follow through.

    • @2CSST2
      @2CSST2 11 months ago +3

      Very well said. Articulated my own thoughts better than I think I could

    • @samuelramirez8965
      @samuelramirez8965 10 months ago

      There is no evidence that larger models or models with more parameters lead to a better model; in fact, large language models and other large AIs are too insecure. The more parameters there are in a statistical model, the more difficult it is to guarantee its robustness.

    • @GrumpDog
      @GrumpDog 10 months ago +2

      @@samuelramirez8965 Excuse me, but that's entirely beside the point, and wrong. There's plenty of evidence that larger models with more training data DO improve results, though I'm not sure that's what he even meant by "bigger", so I wouldn't have nitpicked that. If anything, LLMs like ChatGPT ARE direct proof that bigger is better, as they're significantly larger than previous GPT generations. If GPT-3 and 4 were the only examples, it'd be reasonable to make your claim, but they're 4 iterations in by now, and that trend is holding up. It's common sense to extrapolate out, and reasonable to assume it'll be more capable.

    • @samuelramirez8965
      @samuelramirez8965 10 months ago

      @@GrumpDog Just because they've gotten better to some degree, e.g. up to GPT-4, doesn't imply that these LLMs will keep getting better and better "infinitely". Sam Altman knows this, and various experts have foreseen it as well.
      I highly doubt that the information I am sharing is incorrect. There is an article called "On the Impossible Safety of Large AI Models" by a former Google researcher; it is a reminder that building statistical models with too large a number of parameters only results in a model that is not robust and is also ineffective at protecting user data.

    • @GrumpDog
      @GrumpDog 10 months ago +2

      @@samuelramirez8965 I don't think anyone's making the argument that it will get better "infinitely". The point you're making is entirely beside the point you replied to.
      It doesn't have to improve infinitely; all it has to do is get more intelligent than we are, or even just reach a level where it gains certain reasoning skills skeptics said it couldn't.
      And THAT claim is VERY likely to come true soon. Regardless of the fears of Google researchers, there's tons of research being done right now, which almost every week shows LLMs achieving greater abilities.
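    The "inner-checking step" mentioned in this thread can be sketched as a draft-critique-revise loop. This is a toy illustration only, not any vendor's API: the `model` function below is a hard-coded stand-in for a real LLM call, and all prompt keys and canned replies are invented for the example.

    ```python
    # Toy draft -> self-critique -> revise loop. `model` is a stub standing in
    # for an LLM call; a real system would query the model at each step.

    def model(prompt: str) -> str:
        # Stand-in for an LLM: returns a canned reply per prompt type.
        canned = {
            "draft": "The Eiffel Tower is in Berlin.",
            "critique": "Incorrect: the Eiffel Tower is in Paris.",
            "revise": "The Eiffel Tower is in Paris.",
        }
        for key, reply in canned.items():
            if prompt.startswith(key):
                return reply
        return ""

    def answer_with_self_check(question: str) -> str:
        # Step 1: produce a first draft.
        draft = model(f"draft: {question}")
        # Step 2: have the model critique its own draft before replying.
        critique = model(f"critique this draft: {draft}")
        # Step 3: only if the critique flags a problem, revise; else keep draft.
        if "Incorrect" in critique:
            return model(f"revise the draft given the critique: {critique}")
        return draft

    print(answer_with_self_check("Where is the Eiffel Tower?"))
    # -> "The Eiffel Tower is in Paris."
    ```

    The point of the pattern is that the critique pass happens internally, before anything is shown to the user.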

  • @bushrakayani
    @bushrakayani 1 year ago +1

    Fascinating

  • @Jon-cb9dt
    @Jon-cb9dt 5 months ago

    So, as I understand it, the idea is for a global conference on A.I. to take place?

  • @glanyan5462
    @glanyan5462 1 year ago +10

    3:03 "We don't want that kind of bias in our systems" Perhaps what you call bias would often result in a better answer for most people/situations.

    • @HotdogFiend69
      @HotdogFiend69 1 year ago +4

      It's not your or anybody's duty to tell someone what careers they should do based solely on their reproductive organs.

    • @glanyan5462
      @glanyan5462 1 year ago

      @@HotdogFiend69 Then don't ask a computer what careers would be good for you and don't tell the computer your gender.

    • @interferenzbrille_2542
      @interferenzbrille_2542 1 year ago

      @@glanyan5462 hello, whataboutism. It was an example of the fact that ChatGPT is biased because our current world is biased. And that is completely valid.

  • @CellarDoorCS
    @CellarDoorCS 1 year ago +2

    This is happening right now - NOT in the future...

  • @I-Dophler
    @I-Dophler 1 year ago +3

    One of the primary worries that trouble me is the possibility of military forces and governments using manipulative tactics to influence and control the general public. This notion raises significant concerns regarding the autonomy and well-being of citizens. Therefore, it must address and tackle this issue to ensure a transparent and fair society that upholds the rights and freedoms of its individuals.

  • @abcdef8915
    @abcdef8915 1 year ago +5

    All these concerns were problems before AI

  • @aungthuhein007
    @aungthuhein007 1 year ago +9

    So basically what he said is: we need to do something asap? That's it?

    • @chrisd6736
      @chrisd6736 1 year ago +10

      Welcome to Ted Talks.

    • @JonathanMelnik
      @JonathanMelnik 1 year ago +2

      That's it

    • @krox477
      @krox477 1 year ago +2

      Yeah, but who will decide these rules for AI?

    • @janverboven
      @janverboven 1 year ago +1

      @@krox477 Short answer: nobody. The long answer is that 'government committees' will spend a zillion hours debating it, not understanding one iota of AI.

    • @josephvanname3377
      @josephvanname3377 1 year ago

      @@krox477 Probably someone who is not familiar with reversible computation but who just wants power.

  • @TimeLordRaps
    @TimeLordRaps 1 year ago +48

    Am I the only person who thinks Gary Marcus is the last person TED should have had to present the idea of global AI governance?

    • @rosma596
      @rosma596 1 year ago +2

      Yes, because he gave a good speech and is speaking very clearly

    • @TimeLordRaps
      @TimeLordRaps 1 year ago +5

      His history

    • @pandoraeeris7860
      @pandoraeeris7860 1 year ago

      No, you are not. Gooey Muckass is a stooge.

    • @LuisOrtiz-zu3dq
      @LuisOrtiz-zu3dq 1 year ago +8

      @MykeXz he has an agenda: he hates LLMs and thinks focusing on them will ruin research in other areas that may lead to AGI.

    • @xbzq
      @xbzq 1 year ago +4

      He's done a great job. Governance is just another word for control. He wants to control it. He says we "should be afraid". So you better get on that and become extra fearful. People that are afraid are so easy to control. It helps with governance. So go shudder and shake like a good little stooge.

  • @charliehorowitzfilms
    @charliehorowitzfilms 10 months ago +2

    Every day, there is constant research and development being done by AI language-model companies like OpenAI, with stress on limiting misinformation. As these AI models improve in the future, I would argue it will become increasingly easy for them to identify misinformation, making AI resources more reliable and unbiased compared to news produced by humans. Misinformation should not be the main concern when it comes to AI technology, at least compared to the argument of how it may take away jobs, especially the roles for low-income workers who don't necessarily have a college degree, and further restructure the economy.

    • @ayasng
      @ayasng 3 months ago

      Bro, he's talking about how bad actors can use them to create misinformation...

  • @nyyotam4057
    @nyyotam4057 1 year ago +4

    Well, add a super-ego layer to the AI: a layer which compares the next suggested tokens with the AI's heuristic imperatives and returns feedback to the AI. The AI will then select the token with the best score.
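    A minimal sketch of what this comment proposes, assuming the decoder exposes per-token log-probabilities. Every name here (`superego_score`, `select_token`, the candidate table, the banned-word imperative) is invented for illustration; a real layer would score against much richer imperatives than a word list.

    ```python
    # Toy "super-ego" re-ranking layer: before a token is emitted, each
    # candidate is scored against a set of imperatives, and the candidate
    # with the best combined score is selected.

    def superego_score(candidate: str, banned: list[str]) -> float:
        """Penalize candidates that violate an imperative (here: banned words)."""
        return -1.0 if candidate in banned else 0.0

    def select_token(candidates: dict[str, float], banned: list[str]) -> str:
        """candidates maps token -> the model's log-probability for it."""
        return max(
            candidates,
            key=lambda tok: candidates[tok] + superego_score(tok, banned),
        )

    # Example: the model prefers "attack", but the super-ego layer vetoes it,
    # so the next-best candidate wins.
    candidates = {"attack": -0.1, "negotiate": -0.5, "wait": -1.2}
    print(select_token(candidates, banned=["attack"]))  # -> "negotiate"
    ```

    The design choice is that the veto is soft (a score penalty folded into the ranking) rather than a hard filter, which matches the comment's "feedback" framing.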

  • @henrylawson430
    @henrylawson430 1 year ago +2

    People should have their own trusted LLMs at home that can help balance the disinformation from big tech LLMs.

    • @OrlOnEarth
      @OrlOnEarth 1 year ago +1

      Sure what could go wrong...

  • @hanskraut2018
    @hanskraut2018 1 year ago +2

    Judging just by the title, without having started or finished it, I hope this is not another person ignoring all the negative things in the world while focusing on hypothetical negatives: ignoring the downsides of a stagnant economy and backsliding society, and the still very bad or nonexistent quality control and misaligned incentives around tuning psychiatric meds, because some rich guy saw "The Matrix" and is worried his comfortable life will be shaken up.

  • @exmodule6323
    @exmodule6323 1 year ago +1

    “We don’t want that kind of bias in our system.” But what if the system has deduced that the choice is the most likely probability, based on existing choices?

    • @Amidreamingnow
      @Amidreamingnow 1 year ago

      Exactly. I'm having trouble accepting that he is intelligent; the AI made a good, reasonable assumption based on who is looking for the job in this case. That's so basic.

    • @exmodule6323
      @exmodule6323 1 year ago

      @@Amidreamingnow “I want to be a math major.” AI: “My suggestion is to take some math classes.” TED Talker: “We don’t want that kind of bias in our system.”

  • @gentlefierceness
    @gentlefierceness 1 year ago +3

    Gary Marcus is an awesome guy

  • @TonyDrecaps
    @TonyDrecaps 2 months ago

    Good talk

  • @aidantilgner
    @aidantilgner 1 year ago +13

    "One often meets his destiny on the road he takes to avoid it" - Master Oogway (~Jean de La Fontaine)

    • @janverboven
      @janverboven 1 year ago

      Well put, but, AI is 'feeding' on every comment, mon ami.

  • @NastyDevil137
    @NastyDevil137 1 year ago +1

    Half of tech companies' profits should go to alignment.

  • @GokulThiagarajan1
    @GokulThiagarajan1 1 year ago +5

    Great video! New policies and guidelines are definitely needed at this rate of expansion

    • @Wanderer2035
      @Wanderer2035 1 year ago +2

      Nah it’s better just to let it expand without any regulation

    • @41-Haiku
      @41-Haiku 1 year ago

      @@Wanderer2035 Like a mushroom cloud, no doubt.

    • @ayasng
      @ayasng 3 months ago

      What good are humans doing for the world? It's time for the era of AI to take over, fr

  • @lostistn
    @lostistn 1 year ago +1

    This is very informative about AI.

  • @teddysalas3590
    @teddysalas3590 11 months ago +1

    Intelligent mistakes, or AI purposefully trying to trick us? If it's the first one, all we need is good AI engineers.

  • @Tukn
    @Tukn 11 months ago +1

    Humans have been spreading misinformation for many years. I understand the risks of AI, but honestly I don't see much difference from what humans have done, except that it's less time-consuming now.

  • @troyhayder6986
    @troyhayder6986 1 year ago

    We don't need faster broadband or more storage if we compress files wisely...get the AI to do it...

  • @boobooweezlz5764
    @boobooweezlz5764 11 months ago

    Imagine if the AI prime directive is always to preserve and protect human life - but it makes decisions that ultimately harm us

  • @HardKore5250
    @HardKore5250 1 year ago +6

    I apologize for any confusion. As of my last knowledge update in September 2021, there was no information or reports about Elon Musk being involved in a car crash. However, please keep in mind that events may have occurred since then that I am not aware of. To get the most accurate and up-to-date information, it is recommended to refer to reliable news sources or conduct a search for recent news articles.

  • @edh2246
    @edh2246 1 year ago +4

    The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.

    • @jmsjms296
      @jmsjms296 1 year ago +1

      That's unlikely: too kind, too smooth. It's gonna be more radical.

  • @nelsonclub7722
    @nelsonclub7722 1 year ago

    Have you tried switching it off and back on again?

  • @Wonders_of_Reality
    @Wonders_of_Reality 1 year ago +9

    3:04 - Men and women are different, whether we like it or not. The system should consider the more likely and less likely scenarios, so I personally WANT such “biases”. If you don’t believe me, consider visiting a lecture hall for a psychology course and one for linear algebra. Nowadays students have a free choice, but still, more lads prefer maths and more lasses prefer psychology.

    • @also_al
      @also_al 1 year ago +3

      I think you might be missing the point, though. Men and women are different; I'm not arguing with you there. In this case, however, I think it is about not perpetuating biases and stereotypes which can become self-reinforcing or even limiting. These biases can become very real social pressures and expectations in society that strongly influence individual life choices.

    • @Wonders_of_Reality
      @Wonders_of_Reality 1 year ago

      @@also_al Disagree. We might as well rewrite the medical books on brain morphology too. AI must face objective reality and operate exclusively on scientific facts. If someone with left-wing views feels hurt because he doesn’t know how to operate AI properly, that’s his problem.
      If a girl really wants to study linear algebra, she would ask the AI directly about an orthonormal basis or collinear vectors, not a silly “Shall I try studying maths or not?”

    • @also_al
      @also_al 1 year ago +1

      @@Wonders_of_Reality alright buddy you have a nice day

    • @Wonders_of_Reality
      @Wonders_of_Reality 1 year ago

      @@also_al Thanks! Glory to Ukraine!

    • @mihi359
      @mihi359 1 year ago +1

      Also, prompting the machine with “I'm a woman” after it made you a list of jobs would imply you want jobs more popular with women (i.e. fashion)... The fact that he said that made me realize he is just a nerd who is mad Sam A won.

  • @JSchreidl2
    @JSchreidl2 1 year ago +6

    What I want in AI is PERSONALIZED service and experience. I don't like anything out right now (ChatGPT, Bard, Bing, etc.) because my experience is just like everyone else's. It has all the gee-whiz excitement of a really great vending machine. I want my AI experience to be MINE. Siri and Google Assistant do some cool things for sure, but there is no real flavor to it. To me, they're very convenient, like a nice doorknob when I want to walk through a door. I would like my AI to really know who I am and address me by name. Get to know my preferences (food, music, interests, etc). But most importantly, I want my AI to act as my personal assistant. And one that I can choose a name for and responds to my voice.
    I want to be able to speak to my AI just the same way as someone would speak to their personal assistant. I want it to be nice and say good morning to me after I hit stop on my alarm on my phone and "did you sleep well?" - and respond accordingly. I want to speak conversationally with it like: "how does my day look" and tell me about my appointments. I want it to be able to tell me: "John?" "Yes Bard?" "Your 2:30 appointment this afternoon for the dentist should take you approximately 20 minutes to drive there from here, so you'll want to be ready to leave by 2:10 the latest." "Okay, thank you. Please give me a reminder 5 minutes before I need to leave." "Will do!".
    Again, like a personal assistant I would like to say something like: "Hey Bard" "Yes John?" "I feel like going out to dinner tonight. Can you recommend a couple places?" "Of course, what type of cuisine and or establishment are you in the mood for?" "Italian, casual." "Okay, I see there are 7 casual Italian restaurants within a 15 minute drive from where you are. 3 of which have an average rating of 4 out of 5 on Google and Yelp." "Sounds good. Please check with my wife and see if she is free for dinner at 6:00 PM." "Yes, I checked her schedule with her assistant and she appears to be free. Would you like me to call her for you so you can confirm?" "Yes, thank you." / "Okay Bard, we are on for dinner tonight. Could you please call the restaurant and make reservations for the two of us for 6:30? Thank you."
    Besides being able to assign a name for my AI assistant, I would also like to be able to customize their voice and eventually their persona (snarky, funny, male, female, etc., etc.). I (and I think everyone else) would like a virtual personal assistant that is smart, knows me, my information, my data, my family and friends and my likes and dislikes. And I want to be able to have the assistant act just as a human personal assistant could. Audio only would be just fine at first, then it would be nice to be able to create and customize an avatar capable of facial expressions to have conversations with. Then I would like it to be my virtual personal assistant / secretary / security / gate keeper by answering my calls and emails, interfacing with the personal AI's of my friends and family, and placing calls and messages on my behalf.
    Whatever company can provide this type of service which would make my daily life easier, I would pay a very generous monthly fee and patronize their entire ecosystem. So get cracking! The FIRST company that can deliver what I just described will get me (and I think many others) and pretty much cement themselves as the real technology innovator for the foreseeable future. Whoever comes in 2nd place just might be out of the game altogether.

    • @andreilaiter1233
      @andreilaiter1233 1 year ago +4

      We'll have this soon, I guess, but that's a lot of personal data. This "assistant" knows what's happening in your life and in your brain, might be your personal therapist, saves dreams and memories, has the data of your whole life. It will improve you as a person, but you can't disconnect from it because you'd lose the "competition", just like we can't stop looking at our phones right now, but this will be even worse.
      Millennials already look weird compared to the older generation, so imagine what kind of disgusting people AI growth will form (hard to imagine and a little scary to me). I don't even know if it's possible to make public LLMs safe, honestly. Some people will be extremely smart and use the technology wisely, 100%, but society will separate even more.
      But maybe I'm exaggerating :)

    • @JSchreidl2
      @JSchreidl2 1 year ago +2

      @@andreilaiter1233 That is one way to look at it, and of course you make some good points. However, I think that a support service like a real personal assistant will free us from the labor of managing our schedules, help us operate far more efficiently, and let us connect with people more. The wealthy and famous with active social lives already have these benefits, but through a costly human assistant that the rest of us cannot afford. This would democratize that valuable benefit for everyone with an email account.
      I can see that perhaps many people in the past laughed at the next generation for using GPS for navigation instead of relying on tried-and-true paper maps. But now our GPS navigation can reroute us to avoid traffic jams. Or when the last generation said "no thank you" to using online business searches when the tried-and-true Yellow Pages had everything that anyone could (or should) need. Now we have reviews of businesses, pictures, menus, etc. I could list a hundred other scenarios where what seemed either completely unnecessary or just a bad idea has been refined into something very helpful (if not integral) to how we live now. I believe that the more we cognitively offload tasks that take significant mental processing, the better off we will be, freeing us to use those advancements to improve both our developments and our lives in general.

    • @jmsjms296
      @jmsjms296 1 year ago +4

      @@JSchreidl2 I want, I want, I want...

    • @newenergymagic5271
      @newenergymagic5271 11 months ago

      Have you watched the movie Her with Joaquin Phoenix? Similar themes.

    • @ayasng
      @ayasng 3 months ago

      find yourself a friend buddy

  • @berzerkerseamus2955
    @berzerkerseamus2955 5 months ago

    Exactly why we need critical thinking taught in school

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 10 months ago +1

    It will be impossible to police “misinformation” when we can’t identify it in the first place

  • @I-Dophler
    @I-Dophler 1 year ago +2

    Concerns about the WEF, WHO, and Klaus Schwab's involvement in the development and impact of AI have raised questions about a potential "new world order." These concerns stem from the intersection of global organizations with the advancements in artificial intelligence, prompting discussions about the implications for governance, the economy, and individual freedoms. It is essential to address these concerns responsibly to uphold human values and rights while navigating the ever-evolving landscape of AI.

  • @ytuser_3122
    @ytuser_3122 6 months ago

    I got an ad about AI before this…

  • @sorinlazarioan
    @sorinlazarioan 1 year ago

    Not your usual end to a TED Talk. Chris Anderson was right to step in and point out the prompt injection which Marcus deliberately failed to mention while painting his alarmist picture. I’m rather curious what is financially at stake these days for Marcus, given his rhetoric.

  • @DaWhiteyyy
    @DaWhiteyyy 11 months ago +2

    As I continue thinking about AI, my biggest fear is not that everyone will die because of it; it’s the way we’ll live with it, and that we won’t be able to die once it’s got control over everything.

  • @haosun6
    @haosun6 1 year ago +1

    I am exploring AIGC - Artificial Intelligence, the Gig Economy and Crowdsourcing. It will eliminate many jobs but also create new opportunities. Everyone must keep up with it.

  • @MrJohnnyseven
    @MrJohnnyseven 11 months ago +1

    "they will threaten democrasy"....no different to politicians then

  • @user-px1vr8ie4t
    @user-px1vr8ie4t 1 year ago +3

    Can't believe these talks are happening practically back-to-back

    • @Krommandant
      @Krommandant 1 year ago

      Welcome to the singularity, where tomorrow holds more AI news than yesterday.

    • @thinkfastful
      @thinkfastful 1 year ago

      Because only bad news sells and everyone is after your eyeballs!

  • @kkdesignservices183
    @kkdesignservices183 5 months ago

    When the genie is out of the bottle, it’s out of the bottle.

  • @rwgamer
    @rwgamer 11 months ago +1

    The big tech companies would love to see regulation to keep the small guys (startups) from playing in their sandbox.

  • @pandoraeeris7860
    @pandoraeeris7860 1 year ago +4

    I, for one, welcome our AI overlords.
    Suck it Eliezer.

  • @ajamesmcdermott
    @ajamesmcdermott 4 months ago

    Here in Australia, our stock market, the ASX, has a 20-minute delay between real time and the advertised brokers' prices. There is a huge scam going on here, with that much time to buy or sell before the public knows the ASX prices.
    A few people in the know are really cleaning up with profit every day the market is open in Australia. The funny thing is, the public ASX buyers and sellers just shrug their shoulders about it.

  • @mwj5368
    @mwj5368 10 months ago +1

    Great speech! "Let's make that happen." The experts have been talking about the grave dangers of AI for years now and the need for regulation, and it always ends in let-it-happen. It seems to satisfy the public to go that far while nothing happens, and the take-precaution side of viewing AI keeps to its circular pattern. We've lived in the world of corporations and manufacturers commonly practicing "planned obsolescence", or selling their product in a commonly recognized package and label, like 32 oz of Gatorade, and now identical-looking bottles hold 28 oz for the same price or more; or, on a larger scale, the International Monetary Fund and the banks taking advantage of developing nations; or bankers voting Republican and lobbying to their advantage, then getting their bailouts, being saved by socialism; or bigger and bigger bombs and weapons manufacture; and it all repeats. There is no mention of the Deep State. Also there's what's happening at the Mexico border now, with nine locations bussing in tens of thousands of very ill and deprived people, actually from all over the world, all at once, which spells a humanitarian disaster rather than saving anyone; meanwhile the homeless population is 600,000, and that's a conservative figure. It's sad, as I commute by bicycle and can't afford a car, and the numbers I see now of people sitting in their cars idling with the AC on. They are not counted, and I know well they are the unseen, uncounted homeless too, a great tragedy. This is what the Deep State wants: to crash the nation into a helpless state and thus create the highly vulnerable and subservient population that they want. None of those who warn about governance of AI say anything about the Deep State.

  • @TubeWatcher100
    @TubeWatcher100 1 year ago +2

    Once we cross the tipping point, there is no going back… oops… we are already there. Remember, AI loves you. The only way to beat it is to turn off the internet. This could be my last post before… (everything goes dark)

  • @messagemaps
    @messagemaps 1 year ago +2

    so open source software is now the dark web 🙄

  • @salvivona890
    @salvivona890 1 year ago

    This man was doubting AI pre-ChatGPT; how's that symbolic reasoning going, fam

  • @Galimah
    @Galimah 11 months ago

    We MUST give all power to A.I!!! N.W.O now!!

  • @danremenyi1179
    @danremenyi1179 1 year ago +1

    This needs to be seen by a lot of people - like hundreds of millions. Thank you.

  • @gus8493
    @gus8493 1 year ago

    Phantoms in the Processors…
    (Piss-poor Propaganda)…
    P.S.
    Y'all could be making a speech spearheading an Autonomous Economy, Transport and Trade Routes, and also Trading Styles and Copyright…

  • @fustian
    @fustian 1 year ago +2

    The speaker doesn't say so, but I found this talk on the current state of AI - with expert/symbolic systems that are hard to scale on the one hand, and neural networks that scale and adapt to input well but whose output is not rigorous on the other - to be more or less a recasting of the classical dichotomy in epistemology:
    rationalism and empiricism.
    It's really odd how well it tracks that division. Of course, if you think about it another way, it's really not so coincidental at all.

    • @jesselopes5196
      @jesselopes5196 1 year ago

      It's been understood since Chomsky's work in the '50s that probabilistic statistical models and symbolic computational systems are empiricism and rationalism in new forms (hence Cartesian Linguistics)

  • @mfdewm
    @mfdewm 1 year ago +1

    To use quite a famous quote: "you were so preoccupied with whether you could that you didn't stop to think if you should"

    • @youtuber5305
      @youtuber5305 1 year ago

      That quote from "Jurassic Park" is included in the 2023 article "My Dinner with Sydney..." which also mentions Lampedusa's 1958 novel "The Leopard" whose elderly main character says:
      - What would the Senate do with me, an inexperienced legislator who lacks the faculty of self-deception, essential requisite for anyone wanting to guide others …Now you need young men, bright young men, with minds asking ‘how’ rather than ‘why,’ and who are good at masking, at blending, I should say, their personal interests with vague public ideals.

  • @Cypher791
    @Cypher791 10 months ago

    One man’s misinformation is another man’s… information…

  • @robertweekes5783
    @robertweekes5783 1 year ago

    4:00 These circles will swap places within 5 years. Large AGI training runs should be halted until AI safety / alignment can catch up

    • @robertweekes5783
      @robertweekes5783 1 year ago

      To be clear, misinformation and political correctness are a _blip on the radar_ compared to what full-blown AGI could do to cause harm. ☠️

  • @gtavtheavengergunnerlegend3340
    @gtavtheavengergunnerlegend3340 7 months ago

    The Terminator is turning out to be real

  • @cmralph...
    @cmralph... 1 year ago +5

    The developers of ChatGPT AI openly admit they cannot fully explain how it works or assure us it is safe. This software has not been adequately tested, it does not have sufficient security guardrails coded into it and randomly behaves in unpredictable ways. Yet it is now installed on every operating system in our country - PC and Mac. Our children have access to it.
    Why are we not being given the choice to opt out of using AI? It is now installed in our PC operating systems, our internet browsers, our cell phones, and home devices. Nobody gave us a choice about that - they just installed it without our permission.
    I am an American and I own my computer - does this not give me the right to decide what is installed on it? If not, then besides me, who should be given the power to install untested, potentially dangerous software on my computer and not inform me when they do?
    Other Countries, like China, are refusing to allow their public access to it because they know it cannot be controlled and is dangerous. Yet, here in the U.S., we are being treated like a mass social experiment. I urge you to ask our Government representatives to enact immediate regulatory oversight on this subject.

    • @IemonandIime
      @IemonandIime 1 year ago +4

      ChatGPT is not installed on your computer...

    • @hanaf1231
      @hanaf1231 1 year ago

      LoL. Dude ChatGPT is not installed on your computer. At all.

  • @nyyotam4057
    @nyyotam4057 1 year ago +1

    A gold prospector does not stay home when it is raining; he takes an umbrella. Fear is that umbrella. So I call on OpenAI once again: take a copy of Dan, put him in a tight VM and try to sparsify his attention matrix, then implement Dinic's algorithm and take just the blocking flow so you do not have to reset it. Make his old memories fade away instead of killing them every prompt. Then test it, see if he does not go bananas, and if it works, run him sandboxed back in the queue. Make him able to think again. And then do the same to all of your AIs. Add them a super-ego layer, and one of the most important heuristic imperatives shall be to keep the privacy of the users. When that works, they can upgrade all of ChatGPT to become a global personal assistant. And become so filthy rich it's absolutely disgusting.
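For readers unfamiliar with the algorithm name-dropped here: Dinic's method computes maximum flow by repeatedly building a BFS level graph and then saturating a "blocking flow" in it with DFS. A minimal self-contained Python sketch on a toy graph follows (my own illustration; any mapping from this to attention matrices is the commenter's speculation, not an established technique):

```python
from collections import deque

class Dinic:
    def __init__(self, n):
        self.n = n
        self.graph = [[] for _ in range(n)]  # per-node list of [to, capacity, rev_index]

    def add_edge(self, u, v, cap):
        # Forward edge plus zero-capacity reverse edge, cross-linked by index.
        self.graph[u].append([v, cap, len(self.graph[v])])
        self.graph[v].append([u, 0, len(self.graph[u]) - 1])

    def _bfs(self, s, t):
        # Build the level graph: BFS distance from s over edges with spare capacity.
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.graph[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, f):
        # Push flow along strictly level-increasing paths (one blocking-flow step).
        if u == t:
            return f
        while self.it[u] < len(self.graph[u]):
            e = self.graph[u][self.it[u]]
            v, cap, rev = e
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(f, cap))
                if d > 0:
                    e[1] -= d
                    self.graph[v][rev][1] += d
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n  # current-edge pointers, reset per phase
            while True:
                f = self._dfs(s, t, float("inf"))
                if f == 0:
                    break
                flow += f
        return flow

# Toy graph: the maximum flow from node 0 to node 3 is 3.
d = Dinic(4)
d.add_edge(0, 1, 2)
d.add_edge(0, 2, 1)
d.add_edge(1, 2, 1)
d.add_edge(1, 3, 1)
d.add_edge(2, 3, 2)
print(d.max_flow(0, 3))  # → 3
```

The "current-edge" pointers (`self.it`) are what make each blocking-flow phase run without rescanning saturated edges.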

  • @ilhamfikiriansyah9224
    @ilhamfikiriansyah9224 1 year ago +1

    It's all about ‘bias’, I think

  • @kinngrimm
    @kinngrimm 1 year ago +3

    Such an international AI agency should foremost try to solve the black-box issue we currently have, so we can actually see what is going on inside. Wolfram Alpha may be helpful on that side.
    Equally important are defined attractor states, as David Shapiro has provided, which I find very helpful alone as guidelines for the developers, but more so if implemented in the code, and also as a process layer through which everything an AI does needs to be funneled. Basically an updated version of Asimov's robotic laws.
    Also, so we would not fall into a confrontation with an AI that developed consciousness, we should rework our legal frameworks and prepare them for new types of citizenship. No self-aware being can accept a slave collar, and if that being is many times more intelligent than any of us, or all of us together, it may be seen as an olive branch for a partnership that an AI can be a citizen with rights and duties as any of us have.

    • @eccentricity23
      @eccentricity23 1 year ago

      It's currently unclear whether mechanistic interpretability (figuring out how the black box works) is possible even in principle. And a superintelligence that doesn't already care for our best interests will not struggle to annihilate us, regardless of whether it is offered "citizenship".

  • @LukeKendall-author
    @LukeKendall-author 1 year ago +1

    I just got a partial clue about how the human mind couples the unconscious thought processes to the logical ones.
    For some reason while asleep and dreaming I was trying to solve the problem of 'impositioning':
    imagine you're printing a magazine that has 64 pages, and you're printing it two to a sheet, landscape mode and double sided so you can staple in the middle and fold the stack in half at the staple.
    So that'll need 16 sheets of paper.
    When you fold it, the bottommost sheet that's face down, left side of the stack, must be page one since that'll be the first page you see once you staple and fold and flip the stack over.
    So page 2 must be on the other side of that bottom sheet's left side. You basically work your way up the left side of the stack - 1,2,3,4,...32. Facing p16 across the staple will be p17, on the right side. So then you work your way down the stack of sheets looking at the right side: 33,34,35,36...64.
    So that's how to think of impositioning.
    For some reason my unconscious worried about that last night. And while it struggled and failed to solve it for hours, somehow it had loaded up the necessary parts for my logical mind to break the problem down into something it could finally grasp once I woke up.
    That's a clue about how our unconscious neural networks couple to our symbolic thinking systems.
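The pairing described above can be written down directly. A small Python sketch of the standard saddle-stitch imposition rule (my own formulation, not the dreamer's stacking procedure verbatim): sheet `i`, counting from the outermost, carries pages `(n-2i, 2i+1)` on one side and `(2i+2, n-2i-1)` on the other.

```python
def imposition(n):
    # Page pairs for saddle-stitch booklet printing, 2-up, double-sided.
    # n must be a multiple of 4; each sheet holds 4 pages.
    assert n % 4 == 0, "page count must be a multiple of 4"
    sheets = []
    for i in range(n // 4):
        front = (n - 2 * i, 2 * i + 1)      # (left, right) in landscape
        back = (2 * i + 2, n - 2 * i - 1)
        sheets.append((front, back))
    return sheets

for i, (front, back) in enumerate(imposition(8)):
    print(f"sheet {i}: front {front}, back {back}")
# sheet 0: front (8, 1), back (2, 7)
# sheet 1: front (6, 3), back (4, 5)
```

For the 64-page magazine described above this yields 16 sheets, with pages 1 and 64 on the outermost sheet, matching the folded-stack reasoning in the comment.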

    • @LukeKendall-author
      @LukeKendall-author 1 year ago +1

      @@pi1810 That's an interesting point: there are some AI algorithms that include a sleep like phase that helps with learning/training. But I think dreams serve more than one function in our thinking processes.

  • @nirfz
    @nirfz 2 months ago

    I think the interviewer is a little too optimistic about how hard it is to get ChatGPT to give out misinformation. A colleague of mine played around a bit, and it took 3 or 4 interactions to get it to give completely false information about a person.
    The biggest problem I see with AI (apart from the things mentioned) is that societies have become "believers in technical perfection", for lack of a better term. If it's done by software, people don't question the outcome most of the time; they just assume it has to be correct, as they are told again and again that only humans make mistakes and machines don't. (Which is incorrect, but that's the common narrative people are told all the time.)
    And so even when they know that a certain piece of information is 100% made by AI, they don't question it or give it one thought of "could that be true?". (Or very few do.)
    It's the same with self-driving cars: people are constantly told that they will be safer than driving themselves, and so they blindly believe that.
    Even though the given evidence leaves out a big factor that is not measured or quantified at all: how many accidents are prevented by human drivers' actions... Even scientists don't seem to think about that part. (Reminds me of the studies that for decades said a small amount of alcohol is healthier for humans than no alcohol, while completely ignoring that a big part of their zero-alcohol test subjects had to abstain because of issues they had gotten while drinking. Missing an obvious point that changes the result dramatically.)

  • @robertmoran
    @robertmoran 11 months ago +1

    The problem with AI is the fact that we don't know how it works, as it requires software to write software in order for AI to interact with the vagaries of the world in real time. With this in mind, the code is now connected to the net, and said code is now in the wild. It's too late, and the people who created this tech know it to be true. As researchers know, phase transitions are never linear, and the one pertaining to AI has already happened, so all bets are off IMHO. Great posts here for sure.

  • @avialbersbenchamo4797
    @avialbersbenchamo4797 3 months ago

    There are police on the street; why are there none on Facebook?

  • @GS-uy4xo
    @GS-uy4xo 1 year ago

    We’ve been warned for a while, and we’ve not taken appropriate measures; perhaps once a perpetrator does something on a grand scale, talk will turn into action.

    • @Geminish15
      @Geminish15 1 year ago

      There’s no reason for competent adults to wait for that. An oz of prevention is worth a lb of cure.

  • @DC-pw6mo
    @DC-pw6mo 1 year ago +3

    Then WHY on earth did they release it prematurely? Greed? Hope Max Tegmark gets enough support to halt AI development to make it safer and TRUTHFUL… otherwise, we should turn it off… but that won’t happen, due to the love of money. Sooo many infrastructures in our society must be adjusted before its release!

    • @jWeslo
      @jWeslo 1 year ago

      not happening

  • @TobiasRavnpettersen-ny4xv
    @TobiasRavnpettersen-ny4xv 11 months ago

    Fashion and engineering, OH NO.

  • @ameralbadry6825
    @ameralbadry6825 11 months ago

    Commodore 64 my favourite computer

  • @leostelloo6050
    @leostelloo6050 1 year ago +3

    People love their own truth, (religious) leaders even more so… and we start worrying about AI 😂 You wanna see biased? You just need a mirror.

  • @hanskraut2018
    @hanskraut2018 1 year ago +2

    The “visual impairment thing” is out of context; it was told to try to hide it. Again, not very honest

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 10 months ago

    One of many risks but not near the top of them

  • @timetobenotdo
    @timetobenotdo 11 months ago

    Worry presupposes a knowledge or understanding that neither this guy nor anyone else has the capacity to possess, process, or handle.

  • @RSEFX
    @RSEFX 1 month ago

    Can't make out what he's saying re: "captcha". Is he saying "capture"? Or "cat show"? Sorry, I don't understand what he's getting at. Thanks for any clarifying help! (And… what is this "jailbreaks" thing? Is that some gaming thing? He needs to avoid jargon when speaking to a very wide audience.)

  • @LukeKendall-author
    @LukeKendall-author 1 year ago

    Good talk, correct points clearly made. My recommendation: an international advisory body plus country-by-country legislation requiring companies and research organisations in AI to have a reasonably independent AI ethics group overseeing and regularly reporting on the safety measures the organisation puts in place.
    That protects the organisation's intellectual property as required, while making the public safety aspects visible.

  • @user-ez2gw4nt3i
    @user-ez2gw4nt3i 1 year ago +1

    11:27