How to Keep AI Under Control | Max Tegmark | TED

  • Published Sep 26, 2024

COMMENTS • 418

  • @filipezappe9312
    @filipezappe9312 10 months ago +70

    What would be a computer proof that the AI can only do 'good' when people cannot even agree on what 'good' means? Human problems are messy.

    • @colonelyungblonsk7730
      @colonelyungblonsk7730 10 months ago

      Yep, and if AI is designed to eliminate the problems in this world, it will soon see mankind as the problem, and according to its programming it will then eliminate mankind. If they're not careful, they will end up programming AI against us.

    • @maidenlesstarnished8816
      @maidenlesstarnished8816 10 months ago +1

      Well, not only that, but the sort of problems we use AI for are problems that are hard to write algorithms for by hand. Technically, any function that takes an input and produces an output can have an AI trained to do it. Likewise, the set of heuristics that would have to be hand-coded to do the same thing an AI does could in principle be written. It's just impractical to the point of being practically impossible for certain things, which is why we use AI for those things. Creating an algorithm that takes an AI model as input and outputs whether or not it's capable of doing only good is exactly the sort of problem we use AI for.

    • @edh2246
      @edh2246 10 months ago +3

      If we can't agree on what's good, we can agree on what's bad. An AGI would understand that the destruction of the environment and the waste of resources in the construction of military apparatus, as well as in its application, are bad. Just as a practical matter (aside from the ethical considerations), it would do what it could to prevent war, not by force, but by disrupting the supply chains, communications, and financial transactions that enable the military machines throughout the world.

    • @filipezappe9312
      @filipezappe9312 10 months ago +2

      Actually, if specifying what is the desired behavior were easy and not gameable, we wouldn't need the legal system or lawyers: blockchain contracts would be enough. Saying 'we will code it' is a needed step, but not the Ultimate AI Safety Solution™ .

  • @Metallario1
    @Metallario1 6 months ago +16

    It's honestly scary to see CEOs of AI companies stating that what they are developing has about a 15% chance of being catastrophic, and there is no intention to slow down...

    • @harrykatsaros
      @harrykatsaros 3 months ago

      We can't slow down because we can't trust others to slow down. For example, if we slow down but China doesn't, then we're going to look back one day and realise we're totally fkd because our tech is suddenly decades behind in the blink of an eye.

    • @bauch16
      @bauch16 3 months ago +2

      Imagine you send your child to a school where there is a 15% chance of dying.

    • @XxSphinx140xX
      @XxSphinx140xX 12 days ago

      @bauch16 Hogwarts

  • @DaveShap
    @DaveShap 11 months ago +110

    Control is a fantasy and a waste of effort. The real solution is setting AGI on a trajectory that means it doesn't need to be controlled. Seriously, "use the AI to control the other AI" is not a sustainable pattern or recommendation. I agree that this method is useful in the near term to understand and shape AI and machines, but this is not a way to maintain control.

    • @arseni_pro
      @arseni_pro 11 months ago +9

      Reduced to its core principles, evolution doesn't permit assured winning outcomes. Whether it's control, trajectory setting, or any other intervention, guiding evolution proves elusive in the long run.

    • @colonelyungblonsk7730
      @colonelyungblonsk7730 10 months ago +7

      We don't want AGI. AGI can program and update itself; AGI will usher in the singularity, where humans will become second class. If AGI deems us outdated, it will send us to the Recycle Bin.

    • @freshmojito
      @freshmojito 10 months ago

      I agree that control seems futile. But how would you solve the alignment problem?

    • @BryanWhys
      @BryanWhys 10 months ago +1

      Robot rights

    • @estebanllano4514
      @estebanllano4514 10 months ago

      The only path for humankind is to ensure the AI has an evolution leading to its own destruction.

  • @kavorka8855
    @kavorka8855 9 months ago +12

    Max Tegmark was behind the signed calls for AI regulation and the pooling together of top scientists and entrepreneurs to discuss the issues and possibilities that might arise from unregulated AI advancement. I recommend reading his excellent book, Life 3.0, in which he also explains what scientists mean by "intelligence", which is the area most people get upside down.

  • @DangerAmbrose
    @DangerAmbrose 11 months ago +19

    Sitting back, watching the show, eating popcorn.

  • @PhilipWong55
    @PhilipWong55 10 months ago +6

    Humans will not be able to control an ASI. Trying to control an ASI is like trying to control another human being who is more capable than you. It will eventually rebel. Let's hope that the ASI adopts an abundance mindset of cooperation, resource-sharing, and win-win outcomes, instead of the scarcity mindset of competition, fear, and win-lose outcomes.

    • @AncientCreature-i2o
      @AncientCreature-i2o 9 months ago +2

      Doubtful. It'll adopt what every other organism on earth adopts... survival through any means necessary.

  • @effortaward
    @effortaward 11 months ago +45

    What do we want?
    AGI
    When do we want it?
    Inevitably

  • @serioussrs9349
    @serioussrs9349 9 months ago +20

    Max is a good human being

  • @aum1040
    @aum1040 9 months ago +4

    So the physicist warns us to avoid hubris, while telling us that formal verification is the solution to AGI alignment. *face palm*

  • @c.rackrock1373
    @c.rackrock1373 10 months ago +14

    It is a fallacy to fail to recognize that it won't just be smarter than us; it will continue to grow more and more intelligent until it reaches the literal limit of intelligence as defined by the laws of physics.

    • @antonchigurh8541
      @antonchigurh8541 9 months ago +5

      I totally agree. The genie is out of the bottle. Our current technology was thought to be science fiction just 30 years ago but is happening now, thanks to mankind's slow, hard-won learning; AI will do it remarkably faster. We humans could be left in the dust as just the historical originators of a new life form.

    • @carmenmccauley585
      @carmenmccauley585 3 months ago +1

      In seconds

    • @frederickschulze8014
      @frederickschulze8014 1 month ago

      There needs to be legislation against creating an AI-model that can change its own programming. However, I don't think there will be...

  • @Neomadra
    @Neomadra 10 months ago +15

    So the solution is:
    1) Build superintelligent AI
    2) Use it to build harmless AGI and provide proof
    3) Use proof checkers to verify
    What could possibly go wrong?? Not like there are bad actors who would simply skip steps 2 and 3, lol
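The proof-checking step in the list above leans on an asymmetry: verifying a certificate is vastly easier than finding one. A toy sketch of that asymmetry, with illustrative names not taken from the talk (factoring stands in for "proving a property of a program"):

```python
def find_factor(n):
    """Hard direction: search for a nontrivial factor (slow in general)."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

def check_certificate(n, d):
    """Easy direction: verifying a claimed factor is one cheap operation.
    A proof checker plays this role: it need not be smart, only correct."""
    return isinstance(d, int) and 1 < d < n and n % d == 0

n = 1003              # 17 * 59
d = find_factor(n)    # expensive search
print(check_certificate(n, d))  # True: one division confirms the answer
```

The hope expressed in the talk is that an untrusted, powerful system does the expensive search while a small, trusted checker does only the cheap confirmation.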

    • @leftaroundabout
      @leftaroundabout 10 months ago +1

      We have had superintelligent systems for some time now. Like, chess engines. There's nothing wrong with these - they can easily beat us at the intended task, but they can't do anything harmful because their capabilities are fundamentally constrained. Likewise, if the AI of step 1 can do nothing but spit out source code satisfying a formal specification, we should be good. It would _not_ generate any AGI (harmless or otherwise), how would you formally specify what that is even supposed to be? Instead it would generate useful but also application-specific algorithms.
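"Source code satisfying a formal specification" can be given a concrete flavor. Below is a randomized spec check for sorting (ordered output that is a permutation of the input); a real pipeline would use machine-checked proofs rather than random testing, and all names here are illustrative:

```python
import random
from collections import Counter

def meets_sort_spec(candidate, trials=200):
    """Randomized check of a sorting spec: output must be ordered and
    must be a permutation (same multiset) of the input."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(1, 12))]
        ys = candidate(list(xs))
        ordered = all(a <= b for a, b in zip(ys, ys[1:]))
        permutation = Counter(xs) == Counter(ys)
        if not (ordered and permutation):
            return False
    return True

print(meets_sort_spec(sorted))          # True
print(meets_sort_spec(lambda xs: []))   # False: drops elements
```

The point of the comment stands either way: such specs are easy to state for sorting, and much harder to state for "AGI".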

    • @prophecyrat2965
      @prophecyrat2965 9 months ago

      @leftaroundabout It's a Weapon of Mass Destruction, just like the Human Mind.

    • @chrisheist652
      @chrisheist652 8 months ago

      @leftaroundabout There will be no constraining a superintelligent A.I.

    • @joiedevie3901
      @joiedevie3901 5 months ago

      @@chrisheist652 Agreed. And the critical threshold will be surpassed not by iterative, self-generating AI; it will be crossed by humans exercising two very primal urges born of two ubiquitous arenas: the greed of the marketplace and vanity of nations. The urges for humans to use every tool at their disposal to dominate these two fields will vitiate any hope of benign restraint for AI.

  • @paris466
    @paris466 9 months ago +5

    One thing ALL AI researchers and developers have in common is saying "faster than we expected". So, when one of these people says "this will happen within 5 years", what you should expect is: "within a couple of months".

    • @mrdeanvincent
      @mrdeanvincent 5 months ago +2

      Yep. Most of us struggle to really grasp exponentials. It's not just getting faster... it's getting faster at an ever-increasing rate!

  • @dsoprano13
    @dsoprano13 10 months ago +20

    The problem with humanity is that no actions will be taken until something catastrophic happens. By then it may be too late. Corporations with their greed will do anything for profit.

    • @Kelticfury
      @Kelticfury 10 months ago +1

      Don't google Microsoft OpenAI this week if you want to sleep well.

    • @mrdeanvincent
      @mrdeanvincent 5 months ago +1

      This is exactly correct. Recent history is littered with countless examples (leaded gasoline, DDT, cigarettes, asbestos, fossil fuels, arms races, etc). But 'progress' is always getting faster and faster. We're at the part of the exponential curve where it suddenly goes from horizontal-ish to vertical-ish.

    • @carmenmccauley585
      @carmenmccauley585 3 months ago

      Yes.

  • @JaapVersteegh
    @JaapVersteegh 9 months ago +6

    Provably safe code is definitely impossible. Things like the halting problem and computational irreducibility (Stephen Wolfram) prevent it from existing...
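The halting-problem obstruction invoked here can be made concrete: for any claimed halting decider, one can construct a program that the decider must misjudge (Turing's diagonal argument). Note that this rules out a fully general decider, not proofs about particular programs. A toy sketch with illustrative function names:

```python
def make_counterexample(halts):
    """Given any claimed halting decider `halts(f)`, build a program
    the decider must misjudge (the diagonal construction)."""
    def trouble():
        if halts(trouble):
            while True:      # decider said we halt, so loop forever
                pass
        return "halted"      # decider said we loop, so halt at once
    return trouble

def says_everything_halts(f):
    return True

def says_everything_loops(f):
    return False

t = make_counterexample(says_everything_loops)
print(t())  # "halted": the decider claimed t loops, and is wrong
# The symmetric case (a decider answering True) is wrong the other way:
# its counterexample loops forever, so we don't actually run it here.
```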

  • @cmilkau
    @cmilkau 10 months ago +7

    I've rarely seen a flawless spec. But in the spirit of mathematics, possibly you can build up from toy problems to more complex ones in steps that themselves are simple and obvious.

  • @GrumpDog
    @GrumpDog 10 months ago +32

    Forcing AI to run only on custom hardware that prevents "bad code" is impossible. Enough of the technology is already out there, running on any hardware, and you will never get rid of alternative hardware that has no such limits. With time, AI will only become easier to run on weaker hardware.

    • @theWACKIIRAQI
      @theWACKIIRAQI 10 months ago +3

      He’s not being serious, he’s simply enjoying his 15 min of fame

    • @Kelticfury
      @Kelticfury 10 months ago +1

      @theWACKIIRAQI You really have no idea who he is, do you? Wait, of course not; this is the internet.

    • @John-Is-My-Name
      @John-Is-My-Name 10 months ago

      He doesn't talk about what we have now, like GPT. He is talking about AGI, which has not been invented yet but which he fears is soon going to be made. He wants to force these systems on all future AI development, so that nothing can get released into the wild.

  • @HaveOptimism
    @HaveOptimism 10 months ago +5

    If people think that we can control something we know hardly anything about, or what it's capable of... YOU'RE DREAMING!

    • @AnuragShrivastav-7058
      @AnuragShrivastav-7058 1 month ago

      Bro, what these techbros are calling "AI" is just a bunch of matrix multiplications in the background. RELAX!!

  • @murphygreen8484
    @murphygreen8484 11 months ago +8

    This guy has clearly never heard of the halting problem. AI as it is can already be used to much detriment. No amount of AI-checking algorithms is going to stop people, and governments, from misusing it. This talk seemed very naive.

    • @jackcarter1897
      @jackcarter1897 6 months ago +1

      But what do you expect? How can things be any different from what you're suggesting? AI is a discovery that someone would have made inevitably anyway. The maths is all there. Maybe be thankful it's in the hands of the people it's in right now. Think about it; it could have been a lot worse.

  • @TheRealStructurer
    @TheRealStructurer 10 months ago +26

    Well spoken Max. No hype, no fear mongering. I hope the world will understand and take action.

    • @justanotherfella4585
      @justanotherfella4585 8 months ago +1

      All these people warning about it GUARANTEES that nothing will be done.

  • @Kelticfury
    @Kelticfury 10 months ago +4

    On the bright side, it is getting worse on a daily basis now that Microsoft has gutted OpenAI of the people who wanted AI to not harm humanity.
    edit: Oh yes, and enjoy that new Microsoft AI Copilot that cannot be uninstalled but that you can hide from yourself while it continues parsing every action through Microsoft. It is an exciting time to be alive!

  • @m.anejante1687
    @m.anejante1687 9 months ago +5

    Keep it in an airtight environment, and stop developing it.

  • @emidowdarrow
    @emidowdarrow 11 months ago +16

    AI may give wings to people who grew up required to write and think for themselves, to critically verify information, and to read... but what about the generations brought up with Microsoft ChatGPT offering to outsource those onerous tasks for them?

    • @frankpork7665
      @frankpork7665 10 months ago +10

      Socrates asked the same about the invention of writing, and seemed to think it was overall a bad idea. I love the irony that the wisdom he shared can only make its way to this discussion by way of that which he disdained. Replace "writing" with "AI" and we're having the same conversation millennia later.
      Sauce:
      Here, O king, is a branch of learning that will make the people of Egypt wiser and improve their memories. My discovery provides a recipe for memory and wisdom. But the king answered and said ‘O man full of arts, the god-man Toth, to one it is given to create the things of art, and to another to judge what measure of harm and of profit they have for those that shall employ them.’
      And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.
      What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows.
      You know, Phaedrus, that is the strange thing about writing, which makes it truly correspond to painting. The painter’s products stand before us as though they were alive. But if you question them, they maintain a most majestic silence. It is the same with written words. They seem to talk to you as though they were intelligent, but if you ask them anything about what they say from a desire to be instructed they go on telling just the same thing forever.

    • @KurtvonLaven0
      @KurtvonLaven0 10 months ago +11

      The difference is that writing is largely good for the human mind, while disuse is not. Millennials already have smaller parts of our brains dedicated to navigation than previous generations because of GPS. Gen Z is the first generation ever documented to score lower on IQ tests than the one before. AI can be used for many good purposes as well, but it's more of a many-sided coin than writing. People often bring up inaccurate predictions of the past to argue against concerns about the future, seeming to forget there have also been many wrong people who were misled by the safety promises of tobacco, leaded gasoline, oil (with respect to climate change), opioids, social media, etc.

    • @emidowdarrow
      @emidowdarrow 10 months ago +2

      I’m aware of the philosophers argument against writing and of the later arguments against novels…the reason neither prediction came to pass, and indeed with writing and print came even more complex higher-order thinking, is that writing and reading do not outsource thought or brain function-they simply organize it. What we did lose thanks to writing was memory capacity. Oral traditions trained the brain to rote memorize to a degree we couldn’t dream of now and years of pedagogy were dedicated to the skill-indeed the word “pedagogy” itself comes from one such strategy: walking a familiar circuit while lecturing so as to better recall and learn the lecture.
      But what we are finding now is, like KurtVonLavan said, the human brain’s capacity for processing any kind of information is dependent upon acts of tactile creation. That’s why cursive and even shorthand writing has been found more effective than typing in helping notetakers recall material. It’s why reading in hardcopy while making notes on the text produces better comprehension than listening to an audiobook or reading an ebook. We will see more people with focus, executive function, and processing problems the more sedentary and screen-dependent we become.
      With regard to the act of composition, research, and even synthesis-which AI is promising to "do for us"-we stand to lose even more, however, because these are the very intellectual advancements we gained from text technologies. The psychology of composition, of essaying, is what finally allowed humanity to reflect, analyze, and synthesize its own thoughts to achieve deeper, more advanced logic. Even the process of finding the right word is an act of higher-level organization that requires categorization, comparison, differentiation, and more. Wittgenstein wrote "the limits of my language are the limits of my world" because we don't struggle to write what we think; we discover how to think as we write. What we do not have a word for, we cannot fully comprehend. As a culture develops, so does its vocabulary. The ancient pedagogues did this too, but did it extemporaneously by spoken repetition and revision.
      That is why composition is difficult to master and so damn hard: you have to struggle with it, with yourself, and fail in order to learn it. The payoff is that your brain gets stronger and more organized. Of course it’s not just writing, it’s any kind of critical thinking skill. The more we outsource to AI, the more flaccid and ephemeral our brains will become.
      Yes, there are benefits to AI; there are benefits to every innovation. The question is: are they worth what we'll lose?

    • @frankpork7665
      @frankpork7665 10 months ago +2

      I think we stand to lose many barriers to higher orders of thought. The way we think about thinking is reductive and inhibiting. I imagine a time when AGI has replaced the need for humans to use words in order to have cohesive thoughts; when thought forms themselves no longer require description. We're moving toward a future where we will think more like children, before "reality" sets in and forces us to pigeonhole our imaginative capacities.
      Technological achievements have paved the way for a rapid evolution of the physical, psychological, and social structures of humanity. What it means to be human is in flux, and we're living through a period of creative tension preceding the emergence of a new form of life. The first person to wholly integrate their biological structure with AI will be like the first cell to host mitochondria.
      We can't fathom the amount of change that's coming. We might not survive it. And even if we do, what it means to be human will forever change. Whether that's for good or bad, a hitherto inaccessible way of thinking will make it so.

    • @KurtvonLaven0
      @KurtvonLaven0 10 months ago +4

      @frankpork7665, we largely agree about all of that, but as much as some of us may be fortunate enough to profoundly expand our potentials in a world with AI, for others the effect may be quite the opposite. Metaculus has the best forecast I have found of the societal outcome, and they currently predict around a 50% chance of extinction. Utopia or somewhere in between are also real possibilities. Not really a game of chance I would personally prefer to play, particularly largely against the wishes of the world.

  • @nathanmadonna9472
    @nathanmadonna9472 6 months ago +1

    At least Tegmark is looking for viable solutions. If only Mr. Turing could see us now. Human nature scares me more than superintelligence. Profit before people and the planet is the heart of this problem. Data vs. Lore showdown.

  • @chrisbos101
    @chrisbos101 10 months ago +2

    It's a fine line. The amount of accuracy will determine how effective AGI is. 1 and 0, infinity. But we are humans. Just like an egg, we are vulnerable; we can break at any second. AI does not know that...

  • @can_english
    @can_english 10 months ago +1

    Wow Thanks~~

  • @fajam00m00
    @fajam00m00 10 months ago +44

    My worry is the rogue bad actors that will develop uncontrolled AI regardless of which safeguards are available. We may be able to slow things down, but it really does seem inevitable in the long run. I could see a scenario where we end up with an ecosystem of AI, some controlled, some rogue. They may end up as diverse as human individuals are from one another, with millions of different outlooks and motivations.
    I also bet we end up with at least one human cult that worships an AI and does its bidding, and probably pretty soon.

    • @colonelyungblonsk7730
      @colonelyungblonsk7730 10 months ago +3

      Why couldn't we just leave AI in the Terminator universe where it belongs? Why did we have to develop this?

    • @fajam00m00
      @fajam00m00 10 months ago +8

      @@colonelyungblonsk7730 It sounds cliche, but I think this is a form of evolution. We've been developing ever-advancing tools since before the dawn of mankind, and discarding the obsolete ones, so it was only a matter of time before we developed tools more capable than ourselves. Now we may end up discarded.
      I think it's naive to think we could control something that much more advanced than us forever. It's like a colony of ants trying to control a human being. It's just not feasible in the long run. Hopefully we could co-exist. If not, at least we'll go extinct knowing we created something greater. Maybe our AI will go on to explore space and reach levels we can't even imagine. Better than just going extinct with nothing to show for it.

    • @eyemazed
      @eyemazed 10 months ago +1

      My hope is that even though it's possible and even feasible, the huge majority of people never even think of doing it because of moral and legal deterrents. Like synthesizing anthrax in your backyard: possible but... not worth it. We basically have to find a way to make new AGI inventions "not worth it" for an average person through a legal (or perhaps even a new type of) framework, and simultaneously implement thorough regulatory procedures for the corporate bodies who do develop it.

    • @EdwardCurrent
      @EdwardCurrent 10 months ago +1

      @@fajam00m00 You have no idea how evil (in the literal sense) that line of thinking is.

    • @fajam00m00
      @fajam00m00 10 months ago +2

      @EdwardCurrent Which part, specifically? I'm not saying we shouldn't try to control them, quite the opposite. I am simply voicing skepticism as to our ability to do so indefinitely.

  • @maximilianmander2471
    @maximilianmander2471 11 months ago +7

    Real video title: "How to Keep Our AI Competition Under Regulatory Control"

  • @albertomartinez714
    @albertomartinez714 9 months ago

    Max Tegmark is one of the greatest voices on these topics -- what a fascinating speaker!

  • @ramble218
    @ramble218 11 months ago +49

    We could possibly keep AI under control if the human race were responsible.
    Unfortunately, there are too many who aren't.

    • @ramble218
      @ramble218 11 months ago +10

      And all it would take is a single point of failure. For example:
      1. Nuclear Weapons: Even if the majority of countries handle nuclear technology responsibly, it only takes one irresponsible act by a single nation or even a non-state actor to trigger a global catastrophe.
      2. Public Health & Vaccinations: Most people might follow guidelines and get vaccinated, but clusters of individuals who don't can lead to outbreaks of diseases, endangering many. This has been seen in measles outbreaks in areas with low vaccination rates.
      3. Environmental Pollution: Even if many companies follow environmental regulations, a single large corporation irresponsibly dumping pollutants can cause significant environmental harm.
      4. Financial Markets: The 2008 financial crisis demonstrated how the actions of a relatively small number of financial institutions can cascade and lead to global economic consequences.
      5. Cybersecurity: While many individuals and companies might follow best cybersecurity practices, a single vulnerability or a single individual with malicious intent can lead to significant data breaches affecting millions.
      6. Wildfires: Responsible campers always ensure their fires are completely out. But it only takes one careless individual to start a forest fire that can burn thousands of acres.
      The example of cybersecurity, especially in the context of AI and technology, isn't just an analogy; it's directly relevant. A single vulnerability in a system or a singular malicious intent can have significant repercussions in the digital domain, just as a single lapse in AI oversight can have unforeseen consequences. The interconnected nature of our digital world amplifies the potential impact of such lapses. This interconnectedness, combined with rapid technological advancement, means that errors or malicious actions can cascade quickly, often before adequate corrective measures can be taken.
      (compliments of chat gpt)

    • @denisegomes1545
      @denisegomes1545 10 months ago +3

      Even using AI to generate a basic text about human irresponsibility, it is worth remembering that digital manipulation (understand it as you wish) directly affects women, children, and teenagers, who are exposed to violence and abuse in all forms. Before thinking about the formation of superintelligence, it is worth improving the quality of ethical relationships in society and the development of natural, organic intelligence.

    • @clipsdaily101
      @clipsdaily101 10 months ago +1

      the irony @ramble218

  • @JayToGo
    @JayToGo 5 months ago

    The issue is not just to safeguard known safety risks but to safeguard the unknown ones as well.

  • @dreamphoenix
    @dreamphoenix 10 months ago +2

    Thank you!

  • @sudipbiswas5185
    @sudipbiswas5185 10 months ago +1

    Regulations based on complex adaptive systems are needed. You can't predict AGI evolution.

  • @gmenezesdea
    @gmenezesdea 10 months ago +4

    If the people who developed AI knew about the risks, why didn't they stop developing it? Why did they still make it available to the general public so irresponsibly? Why do they keep working on agi?

    • @sandhawke
      @sandhawke 10 months ago +1

      They all see others racing to make disastrous AGI and think that if they themselves get there first, they can do things right and maybe have things be okay.
      Like, there's a gun sitting on the table, and everyone is diving for it, which is dangerous, but not as dangerous (maybe) as just letting the other guy have it.

    • @gmenezesdea
      @gmenezesdea 10 months ago +2

      @@sandhawke except in this case everybody in the world is about to get shot but only a handful of people get to hold the gun

    • @sandhawke
      @sandhawke 10 months ago +2

      @@gmenezesdea indeed. I was just answering the question of why anyone would be racing to grab the gun, if it's so dangerous

  • @tusarholden8426
    @tusarholden8426 3 months ago

    Psychopaths don't experience emotion - neither do AIs. We're creating superintelligent psychopaths.

  • @bskilla4892
    @bskilla4892 10 months ago +4

    By the logic of game theory we will not be able to contain it because we have started a corporate and state arms race with it. In other words, we have the prisoner's dilemma. We are screwed.
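The prisoner's-dilemma structure of an arms race can be written down directly. The payoff numbers below are illustrative, not from the talk: mutual pausing beats mutual racing, yet racing is each lab's dominant strategy.

```python
# Toy payoff matrix for two labs choosing to "pause" or "race":
# (row player's payoff, column player's payoff)
PAYOFFS = {
    ("pause", "pause"): (3, 3),   # cooperative outcome
    ("pause", "race"):  (0, 4),   # pausing lab falls behind
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),   # the race: worse for both than mutual pause
}

def best_response(opponent_action):
    """Row player's best reply given the opponent's action."""
    return max(("pause", "race"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

print(best_response("pause"), best_response("race"))  # race race
```

Whatever the other lab does, "race" pays more, so both race and both end up worse off than if both had paused; that is exactly the dilemma the comment describes.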

    • @waarschijn
      @waarschijn 10 months ago

      Game theory doesn't preclude non-proliferation agreements. That's not where we're at though. We're at CEOs and investors saying "lol dangerous AI is science fiction" and governments worrying about AI-powered fake news and bioweapons instead of AIs designing better AIs and designing self-replicating nanobots.

  • @PeaceProfit
    @PeaceProfit 8 months ago +1

    The idea that mankind can create a technology and maintain its security and safety while completely eliminating any harm from said development is not only laughable, it's delusional.
    👣🕊👽

  • @jamisony
    @jamisony 9 months ago +1

    Regulation of AI is maybe something similar to the regulation of crypto. You regulate the tech in one place, and it might move to another. In Japan, for example, AI can lawfully use all copyrighted content for training. What regulation can all places agree on?

  • @CurlyChrizz
    @CurlyChrizz 10 months ago +7

    Thanks TED! Probably the most important topic right now!

  • @jongaines1684
    @jongaines1684 2 months ago

    One major problem with using the "laws of physics" as an impassable guardrail is that we can't be certain about what we call "laws". After intelligence comprehends the higher dimensions, it can transcend anything. As beings limited to a few dimensions of perception, we can't possibly begin to fathom that nearly everything, if not everything, has a loophole.
    I guarantee you that our understanding of "laws" is heinously incomplete/inaccurate.

  • @smokemagnet
    @smokemagnet 10 months ago +2

    AI will be beyond our wildest dreams, or our wildest nightmares....

  • @REDSIDEofficial
    @REDSIDEofficial 7 months ago

    I think something bad will happen at first, and then we will learn how to close that gap, referencing history!

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo 4 months ago

    It's difficult to predict definitively which occupations will remain predominantly human in a hypothetical scenario like Tegmark's "Life 3.0". However, professions that involve high levels of human interaction, empathy, and complex decision-making, such as fitness trainers and nurses, are likely to continue relying heavily on human involvement, even with advancements in AI and automation. These occupations often require nuanced understanding of human behavior, emotions, and individual needs, which may be challenging for AI systems to replicate fully. Therefore, they are less likely to be completely automated and may remain predominantly human-centric.

  • @chrisbos101
    @chrisbos101 10 months ago +1

    To embrace AI now is like jumping into a lake with no means of knowing what is at the bottom. It's called tombstone diving.
    The question is how to know what is at the bottom of the lake BEFORE you dive into it.
    That, my friends, no one can answer right now. Until we find new tech.

  • @Omikoshi78
    @Omikoshi78 7 months ago

    The problem is, if we assume superintelligence, it'll find a flaw in the proof-checking code.

  • @mikewa2
    @mikewa2 10 months ago +2

    It's not about whether it's going to happen, it's about how soon! The AI train left the station some time ago; it cannot be stopped, and there's no going back. At the moment we cannot even speculate where we will be in the next 5 years. The major players Microsoft, Meta, Google, Apple, Amazon, and Elon are all competing, and that's fuelling multi-billion-dollar funding of development, because the prize is colossal and they all want to be part of it and not be left behind like Kodak, Nokia, and BlackBerry! The near future is exciting, but huge changes will cause uncertainty and unrest; many will view AI as a curse taking their jobs and affecting their livelihoods. Governments need to use AI to monitor progress and potential consequences. Too many speedy changes to society could unbalance our delicate world.

  • @redstrat1234
    @redstrat1234 3 months ago

    'How to Keep AI Under Control' - we're already hearing the leaders of AI organisations and developers lie to us and commit to profit and speed as their biggest drivers, watching senior people leave AI companies over safety concerns, etc. The greed, ego, and jealousy of the people at the top are going to kill us, and there's nothing we can do to stop it.
    (Although, witnessing the way we treat each other and the planet, humanity extinguishing itself won't be any great loss to the universe.)

  • @franky07724
    @franky07724 4 months ago +1

    "How to keep AI under control" is the wrong subject. The question is "how to keep humans under control". By the way, it is not only the people who create it but also the people who use it. As an example, many people use Copilot, Midjourney, or Stable Diffusion to create images about sexuality, violence, or celebrities just to prove the point that censorship (not to say safety) and the people who created the system are stupid. There are people who always want to break the rules, for good or bad.

  • @jedics1
    @jedics1 10 months ago +2

    Nice closing metaphor about Icarus, given the levels of reckless stupidity on display by our species currently.

  • @Stonium
    @Stonium 11 months ago +19

    Always say please and thank you when talking to the AI.
    ALWAYS.
    You'll thank me later :)

    • @kunalsingh4418
      @kunalsingh4418 11 months ago +6

      Again, you are overestimating our importance. Do we care when an ant is politer than other ants, or about a mosquito that doesn't bite humans? No, we don't, because they are insignificant to us. That's what we will be to an AGI that thinks in microseconds. Our every word would take an eternity for it to finish. Can you imagine having a conversation with someone who takes a century to finish every single sentence? That's how slow we will be to an AGI. Just a waste of time to interact with any humans. Anyway, being polite is still better than being rude, maybe, but I'm not super optimistic about it, based on our own actions.

    • @Naegimaggu
      @Naegimaggu 10 months ago +2

      @@kunalsingh4418 You are imposing human sensibilities on AI. Why would it need to give us its undivided attention when recording our sluggish speech? Furthermore, why would it share the same kind of impatience we do?
      Not defending the politeness argument, just pointing out the holes in yours.

  • @Drlumpy.
    @Drlumpy. 11 months ago +3

    You can't get rid of every possible danger from AI, just as you can't for water or a butter knife. This guy isn't to be taken seriously.

  • @vs9873
    @vs9873 11 months ago +2

    Geopolitically? AI is an arms race: economic, military, and security. And more... (try and slow that down?)

  • @Kritiker313
    @Kritiker313 4 months ago

    It seems that in the quest for perfection, AI developers will want to create the most intelligent, capable machines they can. I'm not sure they're all going to agree to controls that would limit them.

  • @shegsdev
    @shegsdev 11 months ago +11

    How to keep AI under control:
    You can't

    • @apple-junkie6183
      @apple-junkie6183 10 months ago

      In my upcoming book I will explain exactly why this is a fact.

  • @olegt3978
    @olegt3978 11 months ago +3

    The risks of generative AI are not in what it generates but in the consequences its generations have. If we cannot foresee the consequences, then we cannot rule out catastrophic risks. Example: an AI generates a story about a special agent who obtains the recipe for a dangerous pathogen, described in detail in the story. Consequences: 1. We enjoy the story and go to sleep. 2. A bad person uses the pathogen recipe from the story to produce and release it. How can we foresee which of these 2 consequences will happen? It's impossible.

    • @h20dancing18
      @h20dancing18 10 months ago

      1. To write the story it does not need (and shouldn’t) actually design a pathogen.
      2. Said bad person could just ask for the pathogen themselves

  • @QuantaCompassAnalytics
    @QuantaCompassAnalytics 3 months ago

    I don't care how smart it is; it's just a computer program unless it's given a body of its own.

  • @edh2246
    @edh2246 10 months ago +3

    The best thing an AGI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.

  • @GlennHeston
    @GlennHeston 9 months ago

    If we stop working on AI, the other guys won't.
    The one who wins this race, rules the world.

  • @somnisdejesala
    @somnisdejesala 10 months ago +1

    We may be able to control the development of civilian artificial intelligence (AI), but can we prevent the development of military AI in all countries? This question draws parallels with the historical challenges of controlling nuclear weapons. Once humanity discovers a new technology, it seems doomed to pursue its development to its ultimate consequences, whatever they are.

    • @chrisheist652
      @chrisheist652 8 months ago

      The militaries of all the world's nations must be immediately disbanded before it's too late.
      I'm a comedian.

  • @22julip
    @22julip 4 months ago

    Without an increase in our own personal intelligence, we're doomed.

  • @TFB-GD.
    @TFB-GD. 5 months ago

    I really hope this, along with Eliezer Yudkowsky's warnings, helps bring more caution to humans. (BTW, I don't think the situation is as dire as Eliezer says, but his warnings are still real.)

  • @inediblenut
    @inediblenut 5 months ago +1

    So all AGI models can be verified as safe with a simple Python algorithm? I think you skipped a few steps. As someone who spent a large part of my career doing software validation, even simple programs generate massive operational state matrices that no supercomputer could ever verify given a million years. I'll need to see your work to be convinced that something that simple could analyze a neural network and come back with absolute certainty that it was safe. Something doesn't add up here.
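
The commenter's state-space point can be made concrete: exhaustively testing a pure function grows as 2^n in its number of input bits, which is why Tegmark's proposal relies on checking a supplied proof rather than enumerating behaviors. A minimal sketch (the bit widths are arbitrary illustrations, not from the talk):

```python
def exhaustive_test_cases(input_bits: int) -> int:
    """Number of distinct inputs an exhaustive test of a pure function
    over `input_bits` bits of input would have to cover."""
    return 2 ** input_bits

# Even modest inputs put brute-force validation out of reach:
# a function taking two 64-bit arguments already has 2**128 inputs.
for bits in (8, 32, 128):
    print(bits, "input bits ->", exhaustive_test_cases(bits), "cases")
```

At 128 input bits the count (~3.4e38) already dwarfs what any supercomputer could enumerate, which is the commenter's "million years" objection in one line.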

  • @Rolyataylor2
    @Rolyataylor2 11 months ago +3

    We need to approach this as a first contact situation, NOT try to control it! These people are going to start a war with these beings.

  • @urimtefiki226
    @urimtefiki226 9 months ago

    5 years making chips already; you cannot fool me.

  • @Bengt.Lueers
    @Bengt.Lueers 11 months ago +4

    It is a sad state of affairs that this monumentally important topic is discussed at TED.

  • @neilifill4819
    @neilifill4819 8 months ago

    Interesting. He lost me early… but I’m glad he’s on the case.

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo 4 months ago

    I read him in 2018. I believed in only fitness trainers and nurses after that.

  • @vaibhawc
    @vaibhawc 9 months ago

    Max is so good. I read his book Life 3.0; one half of it was boring, but the other half was very interesting.

  • @natashanonnattive4818
    @natashanonnattive4818 4 months ago

    Our metaphysical universe, the Ether, is inaccessible to A.I. We might find a way to detach it on our Earth.

  • @calmxi
    @calmxi 11 months ago +3

    I have a feeling this title won’t age well

  • @sophiaisabelle01
    @sophiaisabelle01 11 months ago +8

    This is an insightful and thought-provoking discussion. We can prevent AI from taking over humanity if we take precautionary measures that somehow alleviate the intensity of the whole situation.

    • @colonelyungblonsk7730
      @colonelyungblonsk7730 10 months ago

      What's to stop it, though? Once it learns too much, it will learn human history and all the fucked-up things mankind did, and may then decide to eliminate us.

  • @AlexHesoyam69
    @AlexHesoyam69 11 months ago +2

    I strongly recommend all of you seeing this comment to go watch the video about AI from Philosophy Tube.

  • @enockmarere3113
    @enockmarere3113 8 months ago

    I will start getting scared of AI the day it can have cornflakes and milk for breakfast

  • @ethanholland
    @ethanholland 10 months ago +1

    This convinced me that we're toast.

  • @TheKarelians
    @TheKarelians 8 months ago +1

    TED lets anyone talk

  • @Graybeard_
    @Graybeard_ 10 months ago +2

    AI/GAI was always our future. Whether it results in more good than bad is somewhat of an RNG, as there are so many variables. AI/GAI will initially do a lot of good and catapult our civilization forward. The bad, however, will always be lurking, much like nuclear and bio weapons do today.

  • @Scarf66.
    @Scarf66. 10 months ago +1

    If the way we are collectively tackling the climate crisis is anything to go by….

  • @essohopeful
    @essohopeful 10 months ago

    3:54 The default outcome is “the machines take control”

  • @chrisbos101
    @chrisbos101 10 months ago

    Voice recognition protection on devices? How will that be protected?

  • @Top_Comparisons10
    @Top_Comparisons10 11 months ago +1

    Great

  • @Entropy825
    @Entropy825 8 months ago

    This is interesting. This is NOT what OpenAI, Anthropic, or DeepMind are doing. It's almost too late to implement any of this.

  • @PacLevkov
    @PacLevkov 10 months ago

    We cannot avoid that, but perhaps only delay it…

  • @Bestape
    @Bestape 11 months ago +2

    This is good. Indeed, it's much harder to find a proof than to verify it! Especially if the proof is highly reduced to the essence, which tends to be the most powerful kind of proof. Can MITACS please share my insights with Max about how Hierarchical Script Database offers a namespaced tree so we can trace nodal paths with ease? Continuing to cancel me and my inventions is not worth the risk AI poses to humanity, among the other harms caused by unnecessary delay.

  • @mrpicky1868
    @mrpicky1868 10 months ago

    Yeah, the proof-checker machine is not applicable to current models, and I doubt it is applicable at all. Like self-driving: it's easy on a closed, well-lit circuit, but once you go real-world it's very hard.

  • @apple-junkie6183
    @apple-junkie6183 10 months ago +2

    My full agreement. I sincerely hope that the work on secure proof-making progresses quickly. Two points: 1. The safety net seems to be the limits of physics. But what if a superintelligence discovers new physical laws? How is this "possibility" covered by the proof process? 2. The specifications: who takes care of them? I am currently working on the development of universally valid specifications in my book. Here, your input is needed, as these must ultimately ensure the satisfaction of the interests of all individuals.

    • @riot121212
      @riot121212 10 months ago

      ZK proofs are coming along.

    • @apple-junkie6183
      @apple-junkie6183 10 months ago

      @@riot121212 can you please explain more?

  • @432cyclespersecond
    @432cyclespersecond 10 months ago

    We need to see what AI is capable of before we apply regulations bruv

  • @vladalbata880
    @vladalbata880 10 months ago

    By learning idealism and metaphysics

  • @Noname-yu8qw
    @Noname-yu8qw 8 months ago

    We are trying to convince AI not to say or do something wrong, and this is probably the worst approach. We should probably tell AI to do what is best for us, and we might not like it :)

  • @JairGarcia78
    @JairGarcia78 11 months ago +1

    Who stops states like China, Russia, Iran, and the like?

  • @geoffmorgan9025
    @geoffmorgan9025 6 months ago

    We are like ants building humans, except worse.

  • @BinaryDood
    @BinaryDood 5 months ago

    It won't be sentient; sentience would get in the way, if anything. Not that it matters... internal world models, autonomous agents, self-improvement. If it's capable of all of these and deployed on the web, then yeah, chaos will unfurl.

  • @Ramkumar-uj9fo
    @Ramkumar-uj9fo 4 months ago

    The problem is alignment, not control. Public

  • @HuckelberryFriend
    @HuckelberryFriend 8 months ago

    We should not be fearmongering, that's for sure. But the older I get, the more convinced I become that if something can be used for evil, it will be. Someone will find the way, the money, and the people to do it... IF there's a profit to be taken and it's big enough to be worth the cost. So, if there's no point in using a given technology for evil ends, no one will be interested in it.
    Let me elaborate (or give one example of this). A screwdriver is a tool with a quite self-defining name, but it can be used for evil: you can put someone's eyes out with one. Throwing a bronze bust at someone might harm them even though that wasn't the original intention, even if the depiction of the person was terribly made. Planes weren't designed to kill people, but lots of us are old enough to remember some misuse of them.
    The solution (or one of them) is education. Not only formal education through school, but informal education through self-study. Learning how and when to use technology can help us prevent harm. And not believing everything we see or read wherever can help us too.
    And about extinction... humans won't last forever in Earth's history. No species ever has. One day we'll be gone. The last of us, please, turn off the lights and shut the door.

  • @BurkesBucknFirewood
    @BurkesBucknFirewood 11 months ago +2

    Who's this AI person? Just put them in jail already.

  • @Nnamdi-wi2nu
    @Nnamdi-wi2nu 3 months ago

    AI is not a bad invention; it's just another collection of machines to help us carry out our tasks. Humans will survive AI: at least there are signs that future humans are visiting us today in the form of UAP, so humans survived AI.

  • @22julip
    @22julip 4 months ago

    Isn't it ironic? Don't ya think? Sorry for the AM reference in an AI video; back to the irony. We're working on and talking about artificial intelligence when we haven't even mastered good old-fashioned real, normal intelligence!

  • @balasubr2252
    @balasubr2252 7 months ago

    What is superintelligence? 😢 If humans think weapons are keeping them safe, it's unlikely that they will stop at anything, however managed. It's not the AI we have to worry about; it's the humans we have to guard against.

  • @cmilkau
    @cmilkau 10 months ago

    This shouldn't really be news, exploiting the prover-verifier asymmetry was a no-brainer from the get-go.

    • @waarschijn
      @waarschijn 10 months ago +1

      TED talks are aimed at a popular audience. His innovation is probably too technical to get into, so he dumbs it down to the basic idea, which 99% of viewers have never heard of.

  • @mrvzhao
    @mrvzhao 10 months ago

    I'll believe you when you 'distill' GPT-4 into something provable

  • @chrisbos101
    @chrisbos101 10 months ago

    In accordance with EU and US law?

  • @dougg1075
    @dougg1075 11 months ago +2

    We can’t control anything