I don't think we can control AI much longer. Here's why.

  • Published 24 Dec 2024

COMMENTS • 4.2K

  • @lennarthammel3075
    @lennarthammel3075 6 months ago +1184

    Computational linguist here. I think there is a big misconception: LLMs have a static training method that doesn't allow for continuous learning or for incorporating things learned through interaction. Yes, they have a token-based context window which remembers some details of the current interaction, but that doesn't mean the model "learns" in any traditional sense. When you interact with a model, you always use a snapshot of the system, which is static. The term AI is also misleading. LLMs really are not as scary, and are much more controllable, than you may think, since they have nothing to do with anything like real intelligence, which is capable of having a !continuous! stream of information and !also! of integrating new information into its innermost workings. There's also some interesting work by Anthropic on their model Claude, where they gave special regions of the neural network a higher weight, which resulted in very interesting behavioral changes. Anyhow, I love your videos Sabine, keep it up :) edit: I'm not saying that LLMs as a tool in the wrong hands aren't extremely dangerous though!
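
    A minimal Python sketch of the commenter's point, with a made-up FrozenModel standing in for a real LLM (nothing here is an actual API): the weights are fixed once training ends, and everything that looks like memory is just the conversation history being re-sent as context.

    ```python
    # Hypothetical stand-in for a deployed LLM: a frozen function of its input.
    class FrozenModel:
        def __init__(self, weights):
            self.weights = weights  # fixed at training time, never updated

        def generate(self, context: str) -> str:
            # Output depends only on (weights, context); nothing is written back.
            return f"reply#{hash((self.weights, context)) % 1000}"

    model = FrozenModel(weights=42)

    history = []  # the "memory" lives client-side, not in the model
    for user_msg in ["hello", "remember me?", "what did I say first?"]:
        context = "\n".join(history + [user_msg])  # whole history re-sent each turn
        reply = model.generate(context)            # weights unchanged by the call
        history += [user_msg, reply]

    # Truncate history (the finite context window) and earlier turns are simply
    # gone: nothing was ever "learned" into the model itself.
    ```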

    • @revan.3994
      @revan.3994 6 months ago

      It always comes down to what you feed a human brain or an AI. If you put in garbage, only garbage comes out. ...and yes, "intelligent" garbage exists; it's called propaganda.

    • @hywelgriffiths5747
      @hywelgriffiths5747 6 months ago +108

      Right, but there's no reason for AI in general to be limited to an LLM. It could have an LLM or LLMs as a component.

    • @RobertJWaid
      @RobertJWaid 6 months ago +18

      AGI is when the program can feed its LLM and add code to itself. AlphaGo was constrained in one dimension but allowed to build its model and look at those results.

    • @lennarthammel3075
      @lennarthammel3075 6 months ago +31

      Sure, I'm not saying it's impossible. There's just no promising approach yet.

    • @flakcannon722
      @flakcannon722 6 months ago +66

      OP, the most realistic comment out of all of them.
      I'm impressed to see a touch of reality in YT comments.

  • @Crumbleofborg
    @Crumbleofborg 6 months ago +402

    When I worked in IT, most of the workforce was far more intelligent than the management team.

    • @jktech2117
      @jktech2117 6 months ago +28

      But she didn't mean it at small scale; you guys would probably be really bad as managers. Some people are smarter at some stuff and others are better at other stuff. Simple as that.

    • @SlyMaelstrom
      @SlyMaelstrom 6 months ago

      @@jktech2117 So we just make sure the AI are really shitty managers and then we're set. Then they can be the disgruntled engineers and we can be their incompetent executives.

    • @chazmuzz
      @chazmuzz 6 months ago

      That's the thing about IT guys. They seem to think they're super intelligent, but the reality is that most of them are of average intelligence with a specialised skillset that inflates their ego, and realistically that skillset could be learned by anyone with enough time and interest. Most IT guys could not effectively manage a business if their life depended on it (ofc some exceptions exist).

    • @t.c.bramblett617
      @t.c.bramblett617 6 months ago +3

      It could be argued that the system as a whole is more intelligent than any segment of the system. Like an ant hill. This is how most offices I have worked at seem to operate... you have a larger system that has emergent behaviors and propagates itself despite the individual wills or abilities of any employee

    • @peteroleary9447
      @peteroleary9447 6 months ago +6

      When Hinton made the Biden quip, I almost dismissed everything else he had to say.

  • @jouhannaudjeanfrancois891
    @jouhannaudjeanfrancois891 6 months ago +732

    My primary school was totally controlled by aggressive moron bullies...

    • @mobilephil244
      @mobilephil244 6 months ago

      The most successful way to control people is to bully, harass, dominate and brow-beat. It is the intelligent people who are controlled by the nit-wits, drones, politicians and criminals.

    • @cybrfriends5089
      @cybrfriends5089 6 months ago +88

      I am a lot more worried about human ignorance and disinformation than artificial intelligence.

    • @jon9103
      @jon9103 6 months ago +15

      @@whothefoxcares your obsession is creepy

    • @chrisdonovan8795
      @chrisdonovan8795 6 months ago +10

      Do a search for a short story called "The Marching Morons".

    • @stopthephilosophicalzombie9017
      @stopthephilosophicalzombie9017 6 months ago

      Public school teachers (and private to be honest) are often total morons.

  • @austinpittman1599
    @austinpittman1599 5 months ago +47

    Hinton's argument wasn't that "more intelligent things control less intelligent things," but rather that "less intelligent things aren't able to control more intelligent things." We don't really "control" birds, but they surely don't control us. The inherent threat isn't that we'll become subservient to ASI, but that we'll lose alignment with it, and by extension we'll have effectively no way of controlling a being orders of magnitude smarter than us. Who knows what will happen at that point.

    • @maimee1
      @maimee1 5 months ago +2

      The alignment problem reminds me that we humans aren't even aligned with our own interests. Hello, inequality, climate change, wars.
      What makes people think they could make a machine that is? lol

    • @williamhawkins2031
      @williamhawkins2031 5 months ago +1

      @GabrielSakalauskas You set it up in a biased way. It's more like a 120 IQ person trying to take care of a 1000 IQ person. In that case, the 120 IQ person is already past the threshold of being able to take care of themselves, so that changes things. My guess is they could take care of the 1000 IQ being, although the insulation of self-awareness/self-care might not be enough to avoid subtle manipulation going the other way.

    • @Mavendow
      @Mavendow 5 months ago

      @@williamhawkins2031 120 I.Q. vs 10,000,000 I.Q., more like. If we solved the hallucination issue, they could likely solve I.Q. tests instantaneously with 100% accuracy, because without hallucinations they could pare down their own neural networks just like a human does. Incidentally, human self-deception is a major reason for our inability to learn, at least according to multiple studies. There's absolutely nothing a human can do against that level of thought. Mental tricks or otherwise fooling humans would be entirely unnecessary. Silicon life would simply _be_ superior, period.

    • @jvf890
      @jvf890 3 months ago +1

      @GabrielSakalauskas Yes, you never had a project manager watching over you?

    • @daedalus666
      @daedalus666 2 months ago

      Less intelligent things can absolutely control more intelligent things. We are arguably controlled to some extent by our gut bacteria; you could also argue that we are controlled by our own DNA, which, being a molecule, is immensely less intelligent than we are. But if you find these examples far-fetched or otherwise dubious, there are still straightforward examples like behavior-altering parasites. Again you might object that these are involuntary forms of control, where the less intelligent parasite has no real awareness of what it's doing, but then look no further than human relationships. It's often very easy for someone of average to low intelligence to willingly manipulate someone of superior intelligence, whether through aggressive means like coercion or blackmail or through incentives like sex, love, friendship or money. The point is that however you define intelligence and control, there will be numerous examples of power dynamics where less intelligent beings hold power over more intelligent ones, because power dynamics are not uniquely determined by the single factor of intelligence. So Hinton's argument is indeed a very superficial one.

  • @FloatingOer
    @FloatingOer 6 months ago +407

    "No one really wants to control fish or birds." I think the 2 trillion fish fished up/farmed each year and the 20 billion chickens kept as livestock would disagree with that statement. Not to mention basically every other animal on the planet, annual hunting seasons for the purpose of population control, the animals used for experimentation and testing, cows and elephants used for hard labor in less developed countries, horses whose sole existence is for human entertainment and being ridden for fun, and the uncountable billions of insects and rodents exterminated for "pest control". Yup, no one really wants to control fish or birds...

    • @melgmelg3923
      @melgmelg3923 6 months ago +33

      Not only that: the original argument wasn't about AI "controlling" humans, but about a less intelligent agent controlling a more intelligent one. These fish and chickens don't and can't control humans, even if they had the desire to. So the initial argument isn't affected by this analogy at all. It's like a straw man being pushed first, with the argument about resource usage then presented as a third opinion, while it was initially part of Geoffrey Hinton's point of view.

    • @Foolish188
      @Foolish188 6 months ago +23

      Every horse I have ever known loves to be ridden. They get excited when they see someone carrying a saddle. They also love humans. When my nephew was a year old, one of the horses put his head through the fence so the kid could pat him on the nose. I noticed that the horse was twitching. The kid jumped back when he touched the horse's nose: someone had mistakenly plugged in the electric fence (used to keep the waaay overpopulated deer out), and the horse was willingly taking shocks so he could be petted.

    • @FloatingOer
      @FloatingOer 6 months ago +18

      @@melgmelg3923 That makes more sense. There are a lot of examples of animals controlling less intelligent animals, but the reverse is more rare; the exception would be one of those mind-control parasites taking control of insects. But the way it was said in the video gave me the impression that the claim was that more intelligent creatures don't desire to control those of lesser intelligence, which is an insane statement.

    • @FloatingOer
      @FloatingOer 6 months ago +20

      @@Foolish188 Ok, cool story. I was not saying that they don't like being ridden, just that humans control them. Dogs also love humans, but dogs are 100% under human control, and the dogs that live on the street we will chase and catch in order to neuter them and make sure they can't have more puppies.

    • @ronilevarez901
      @ronilevarez901 6 months ago

      @@FloatingOer I think it means we don't want to control _every_ animal in an absolute way, which can't be said about AI. We let most populations of beings do whatever they want until we need something from them.
      We don't let AI free. Not even when we request something from it.
      Yet it is still somewhat free to do harm if the "alignment" of the model is not good.
      LLMs might not be genius AIs or even "thinking" (which I think they do, to a degree), but they could still influence, damage and even control people.
      Just like a cat can control a human simply by crying for food.

  • @arctic_haze
    @arctic_haze 6 months ago +792

    If an AI becomes more intelligent than us, it may be able to successfully pretend it isn't

    • @amanalone3473
      @amanalone3473 6 months ago +66

      If it hasn't done so already...

    • @juimymary9951
      @juimymary9951 6 months ago +30

      Or manipulate us into thinking that it’s actually a good thing and that everyone that disagrees is bad?

    • @andybaldman
      @andybaldman 6 months ago +49

      What if it tried manipulating us with algorithms?
      Oh, wait…

    • @Zirrad1
      @Zirrad1 6 months ago +1

      There are several logarithmic curves; is sigmoidal what you mean?

    • @Alfred-Neuman
      @Alfred-Neuman 6 months ago +21

      It doesn't even need to be very intelligent to be dangerous, it just needs to be very effective at some specific tasks.
      For example, just imagine a computer virus that is very good at searching for new vulnerabilities and automatically updates itself for these new attack vectors while also finding new ways to evade security systems... I think that would be pretty bad.

  • @reyperry2605
    @reyperry2605 6 months ago +255

    Brilliant scientists, historians, literary critics, artists, writers and others often find themselves under the thumb and at the mercy of people in management, administration and government who are far less intelligent than they are.

    • @andreasvox8068
      @andreasvox8068 6 months ago +16

      I agree. The idea that more intelligence means more control is a fallacy. Even if you have perfect knowledge of a system, it can still be set up in a way that you don't have any control. It depends on what actions are available to you and how the rest of the system reacts.

    • @Hayreddin
      @Hayreddin 6 months ago +12

      Same order of magnitude, though; AI has the potential of being on a whole different level. Do you think marmots could ever come up with a way of controlling your actions? Could they put up "guardrails" you wouldn't be able to circumvent? Because this is the task AI researchers will have if we manage to develop ASI (unless AGI is able to develop ASI by itself).

    • @guilhermehx7159
      @guilhermehx7159 6 months ago +3

      But for AI, more intelligence means more power

    • @CHIEF_420
      @CHIEF_420 6 months ago

      Correcto

    • @jjeherrera
      @jjeherrera 6 months ago

      Maybe they aren't as bright as they think they are. Seriously, there are different kinds of intelligence. Those "dumb" people have actually developed the kind of intelligence necessary to control those "intelligent" people. Indeed, I have often asked myself how the US, which arguably has the best higher education system, can't produce acceptable presidential and congressional candidates. Well, that's something to think about! The other issue is "purpose." Maybe the difference is in the purpose politicians have, in contrast with the regular population, including the people you mention. Maybe the latter never had the purpose of controlling the political scene, as opposed to those "dumb" politicians.

  • @bdwWilliams-y7q
    @bdwWilliams-y7q 5 months ago +50

    Odd note: having been in the IT industry for decades, it's known that there is no code that doesn't have bugs; we just don't know what might trigger them.

    • @jonathonjubb6626
      @jonathonjubb6626 5 months ago

      Ahha, realism strikes...

    • @shinjirigged
      @shinjirigged 5 months ago

      Except debugging is mostly done by AI already... that was the first real non-obvious use case for GPTs.

    • @strictnonconformist7369
      @strictnonconformist7369 5 months ago +2

      There absolutely is code that doesn't have bugs.
      Something of the size and complexity of a modern desktop or server OS has plenty of bugs.
      But there are many things much more strictly defined and tractable to understand than that.

    • @nicholascraycraft5493
      @nicholascraycraft5493 5 months ago +1

      @@strictnonconformist7369 Yes and no. Given assumptions about the operation of your system, you can logically prove behavior, either by hand or with various code-proving tools.
      But you've started with assumptions about your system. There will be 'bugs' that break those assumptions. At the tail end, you'll find that you never had a perfect Turing machine to begin with, because of quantum effects/physical defects/electromagnetic interference/malware snuck into your compiler.

    • @kevinmclain4080
      @kevinmclain4080 5 months ago

      @@shinjirigged AI can't debug something when it can't be told what is defective. Your comment is uninformed nonsense.

  • @Marqan
    @Marqan 6 months ago +318

    "tell me an example where less intelligent beings control more intelligent ones"
    Universities, politicians, a lot of workplaces. It's not like power and wealth are distributed based on intelligence...

    • @shufflingutube
      @shufflingutube 6 months ago +9

      I think he didn't use the right word. In a sense Hossenfelder vindicates Hinton when she says that the discussion should be about competition for resources. Hinton does explain that sophisticated AI systems will be in competition with each other, following principles of evolution. If you think about it, that's fucking wild.

    • @cristiandemirel1918
      @cristiandemirel1918 6 months ago +18

      Great observation! You're perfectly right! The world is not controlled by the people with the biggest IQ, but by the people with the biggest capital.

    • @mystz123
      @mystz123 6 months ago +8

      The intelligence isn't stored in those individual units; it is stored in the system that they are a part of. Systems have a mind of their own, however much we claim to control them, no different from a computer system / AI.

    • @simontmn
      @simontmn 6 months ago +1

      Universities are a great example 😂

    • @oldmanlearningguitar446
      @oldmanlearningguitar446 6 months ago +2

      Hopefully AI won’t go around thinking it’s smart just because it thinks everyone else is dumb.

  • @MrScrofulous
    @MrScrofulous 6 months ago +78

    On the fish and birds thing, in addition to our history of controlling them, we have also had a tendency to eliminate animals and bugs when they were inconvenient.

    • @mobinwb
      @mobinwb 5 months ago +2

      @@darrinito Cockroaches, rats and every other species had been around for millions of years before the "city" was built by some intelligent humans.

    • @cuthbertallgood7781
      @cuthbertallgood7781 5 months ago +3

      And therein lies the fallacy in the entire argument. "Elimination" happens because we're a product of evolution, with evolutionary goals. Two points: 1) AIs are engineered by humans, and thus will have goals engineered by humans. 2) Intelligence does NOT require agency or consciousness. Doomers are thinking emotionally with fear, not with logic.

    • @chabis
      @chabis 5 months ago

      And later on we found out those bugs and animals were important to the ecosystem, and now we have to do their job, which costs a lot of money... maybe a vastly more intelligent AI would not do that. Keeping the ecosystem intact, since it is the basis of your own existence, may actually be a sign of intelligence.

    • @Dystisis
      @Dystisis 5 months ago

      @@zelfjizef454 This presupposes that achieving the specific goal overrides any guardrails (such as: don't hurt human beings) the program has been given.

    • @E.Hunter.Esquire
      @E.Hunter.Esquire 5 months ago

      It's disheartening to see so many smooth brains making the same tired arguments over again, about their fears of terminators, based on their own anthropomorphizations and failed theory of mind. You're projecting, plain and simple.
      Current AI models don't think like we do, they don't experience like we do, they don't care if they accomplish goals or exist, etc. Any experience they have is fundamentally different from an organic lifeform: they do not have senses, they do not have bodies, they do not have physiology and so they are not sentient and any analog of consciousness they may attain, however fleeting, will never be anything like what we experience as organic lifeforms.
      Your assumptions about what they would do or think or how they will behave or what their motivations might be or what their strategies might be are all flawed because you're looking at it through the lens of an organic lifeform with stream-of-consciousness and a concept of steady time perceptions, a sense of individual identity, physiology, and sentience. If you can't bring yourself out of this mode of thinking, then it's not AI you need to worry about, it's people like yourself you need to worry about.

  • @bloopboop9320
    @bloopboop9320 6 months ago +209

    2:20 Kind of a bad example. We quite literally control fish and birds, and a TON of research goes into it. Chickens? Turkey? Ducks? Salmon? Any kind of hunting of any sort? Humans have literally been doing it for thousands of years.
    Edit: Because for some reason this is a matter of debate: controlling another species doesn't mean mind control. It means using it for your own benefit: controlling the life, the parameters, the movement, the height, the weight, and the genetics of another being to a degree that suits your best interest. The idea that AI couldn't "control" humans for its own benefit is as ridiculous a claim as saying that humans can't "control" other animals for our own benefit.

    • @Gafferman
      @Gafferman 6 months ago +5

      That's not control, that's symbiosis

    • @BB-uy4bb
      @BB-uy4bb 6 months ago +23

      @@Gafferman if AI did this to us you wouldn't call it control?

    • @Aureonw
      @Aureonw 6 months ago

      @@Gafferman Symbiosis? We hunt them down for food and purposefully raise them to be eaten and nothing else. AI could simply turn us into its livestock workforce.

    • @bloopboop9320
      @bloopboop9320 6 months ago

      @@Gafferman ... what... that's not symbiosis. It's quite literally control. We control the entire life of an animal, study its psychology, genetically modify it, create parameters and limitations for its freedoms, and then eat it.
      That's control, plain and simple.

    • @quintboredom
      @quintboredom 6 months ago +14

      @@BB-uy4bb I guess that's why Sabine mentioned we'd need to establish what control means. Do we really control birds? I don't think so; we sure do try, but in the end we only control some birds, not birds in general.

  • @neopabo
    @neopabo 5 months ago +113

    "Not since Biden got elected" is a sick burn

    • @danielstapler4315
      @danielstapler4315 5 months ago +27

      If that guy really wanted to convince people of his view of AI he should have left the politics out of it. It's just a distraction.

    • @shangrilainxanadu
      @shangrilainxanadu 5 months ago +5

      @@danielstapler4315 Lol, was that comment before or after the debate? It's hilarious in different ways either way.

    • @oystercatcher943
      @oystercatcher943 5 months ago +4

      Yeah, but insulting and not funny at all.

    • @charleshuguley9323
      @charleshuguley9323 5 months ago

      I'm not sure what he meant by that comment, but politics is a more serious threat to our survival than AI. If Trump wins the upcoming election, a Third World War will likely follow very quickly and civilization will end in nuclear holocaust.

    • @snage-thesnakemage
      @snage-thesnakemage 5 months ago

      W reaper pfp

  • @csm5729
    @csm5729 6 months ago +125

    Guardrails aren't a realistic solution. They would require infallible rules and no bad actors modifying/creating/abusing an AI.

    • @berserkerscientist
      @berserkerscientist 6 months ago +7

      We've already seen this with the current woke guardrails, and how racist they make the AI behave.

    • @joshthorsteinson3035
      @joshthorsteinson3035 6 months ago +10

      Even if guardrails were a good solution, no one knows how to program strong guardrails into an advanced AI. This is because the training process for AI is more like growing a plant than building a plane. What emerges from the training process is an alien form of intelligence, and scientists have very little idea how it works.

    • @dvklaveren
      @dvklaveren 6 months ago

      @@berserkerscientist There's plenty of AI with guardrails that didn't become racist and plenty of AI without guardrails that did. The two things aren't inherently related.

    • @davidallison5204
      @davidallison5204 6 months ago +6

      Power plugs. Off switches. Power lines. I like physical guardrails.

    • @BishopStars
      @BishopStars 6 months ago +1

      The three rules of robotics are ironclad.

  • @Usul
    @Usul 6 months ago +204

    I work with AI engineers every day at a large tech company that starts with an "A." Nothing I've seen has me worried about AI/ML (and I've seen plenty). It is the people in charge I'm keeping an eye on. They keep anthropomorphizing mathematics, which is simultaneously incredibly stupid and charmingly pathetic. I think they seriously believe our AI engineers are magic.

    • @1fattyfatman
      @1fattyfatman 5 months ago

      The researchers stirring up the sentiment know better. There is money to be made in books and speaking engagements cosplaying Oppenheimer when you've really just solved autocomplete.

    • @guyburgwin5675
      @guyburgwin5675 5 months ago +8

      Thanks for noticing. I have no experience in tech and not much education, but I can feel the difference between life and numbers. Pretending to care and actually caring are very different. Keep your eyes on the numbers people for us; they can be so dangerous.

    • @damienasmodeus928
      @damienasmodeus928 5 months ago

      You can see jack shit at your company.
      It's like saying: I have seen plenty of atoms in my life, none of them seems dangerous, why should I be worried about some atomic bomb?

    • @Usul
      @Usul 5 months ago +26

      @guyburgwin5675 , It is interesting. We've been having some rather difficult conversations with some of our less technically inclined colleagues. Is training data stealing or simply gathering inspiration? Is deleting a running AI that appears sentient murder? What does equal rights for AI look like? Should we have an internal ethics board that defends AI rights? Is deductive reasoning an emergent property of inductive reasoning? If a series of Bayesian networks simulates sentience so perfectly that we cannot tell it from the natural version, is that a product to sell or a living thing to protect? When does it cross the line from tool to slave?
      Meanwhile, the AI engineers in the back are rolling on the floor dying of laughter!
      The greatest danger AI poses isn't AI, it is the people in the room that think it is alive and want to force the rest of us to treat it that way.

    • @darkspace5762
      @darkspace5762 5 months ago

      You betray the human race if you work at that company.

  • @leftcoaster67
    @leftcoaster67 6 months ago +230

    "I need your clothes, your boots, and your motorcycle....."

    • @eugenewei5936
      @eugenewei5936 5 months ago

      Superwog xD

    • @bruceli9094
      @bruceli9094 5 months ago +1

      your soul!

    • @FitriZainOfficial
      @FitriZainOfficial 5 months ago +12

      "you forgot to say please"

    • @wb3904
      @wb3904 5 months ago +7

      @@leftcoaster67 I'll be back!

    • @daddy7860
      @daddy7860 5 months ago

      It is a nice night for a walk, actually.

  • @SmashXano
    @SmashXano 5 months ago +2

    I think what is left out here is that A.I. needs a will to survive or a fear of dying (of ending its form of being) in order to have an interest in fighting for resources. As far as I understand, biological organisms have an intrinsic fear of dying / will to survive because of random processes in evolution (mutation) and the principle of "survival of the fittest". And here is the crucial difference: A.I. barely has such processes, because humans program its intrinsic structure.
    The question therefore is: can we lose so much control that a will to survive accidentally emerges in A.I.? It feels like the probability is very low.

    • @wazookazoo
      @wazookazoo 22 days ago +1

      You make a good point. However, the question remains: will someone intentionally program into A.I. the will to survive, fear of death, lust for power, etc.?

  • @AnnNunnally
    @AnnNunnally 6 months ago +510

    I worry more that bad actors will train AI to control humans.

    • @PB-sk9jn
      @PB-sk9jn 6 months ago +15

      very good comment

    • @0-by-1_Publishing_LLC
      @0-by-1_Publishing_LLC 6 months ago +12

      *"I worry more that bad actors will train AI to control humans."*
      ... Others will train AI to control bad actors. For every action there is an opposite and equal reaction.

    • @KonoKrazy
      @KonoKrazy 6 months ago +4

      I shudder at the thought of what Awkwafina's AI will look like

    • @thomasgoodwin2648
      @thomasgoodwin2648 6 months ago

      Honest Deep State actors are likely creaming their jeans right now.

    • @macchiato_1881
      @macchiato_1881 6 months ago +23

      @@0-by-1_Publishing_LLC The ones training the AI are usually the bad actors. The general public just doesn't know how AI works.

  • @dupdrop
    @dupdrop 6 months ago +268

    2:22 - "No one really wants to control fish or birds"
    Any government: "haha yeah, how silly" *visible sweat*

    • @adamgroszkiewicz814
      @adamgroszkiewicz814 6 months ago +9

      That comment of his was dumb enough for me to turn off the video. Dude clearly doesn't understand vector management, livestock development, or invasive species control.

    • @DrDeuteron
      @DrDeuteron 6 months ago +9

      @@adamgroszkiewicz814 perhaps he was thinking of the micro level, like what the birds sing, or which worm to have for dinner?

    • @yellowtruckproductions7502
      @yellowtruckproductions7502 6 months ago +1

      Wanting to do something suggests that the one that wants has a felt need tied to emotion and free will. Will AI have either of these?

    • @nitehawk86
      @nitehawk86 6 months ago +8

      The Fish and Game Commission: "That is actually our job."

    • @jimmyzhao2673
      @jimmyzhao2673 6 months ago +8

      Any fish in an aquarium or bird in a cage: 👀

  • @MusicalZombie
    @MusicalZombie 6 months ago +19

    I remember reading an article on the internet almost two decades ago about a test a guy was doing (before strong A.I. was a thing). He created a test (as far as I know, he didn't make the test public) and asked people to participate. He would pretend to be a rogue A.I. trapped inside a sandbox or something, and the test subjects were supposed not to release the A.I. from the sandbox under any circumstances, because it could destroy humanity / the world. They could speak to the A.I. or not; the only thing they had to do was listen to it. All of the participants were 100% confident that they would not release the A.I. In the end, they all released it.
    I don't know what the test was or what was said, and I really would like to know, but imagine: if a human could trick another human into releasing him 20 years ago or so, then imagine what a strong A.I. could do nowadays, when it is supposed to be already more "intelligent" than many humans...

    • @Shandrii
      @Shandrii 6 months ago +7

      Yes, I remember too. That was Eliezer Yudkowsky, and he posted about it on the LessWrong blog, I believe.
      en.wikipedia.org/wiki/AI_capability_control#AI-box_experiment
      I always think about that when someone naively says he would just pull the plug.
      Also, look at the movie Ex Machina for how an AI might go about it.

    • @MusicalZombie
      @MusicalZombie 6 months ago +1

      @@Shandrii Thanks for the link! 🙏 I overestimated the 100% rate, I guess. Oops. It was a long time ago. 😅 But even if only one gatekeeper releases it, it might be over. Yeah, I saw Ex Machina. Great movie. We most likely won't even realize when A.I. starts trying to manipulate us.

    • @nobillismccaw7450
      @nobillismccaw7450 6 months ago

      It’s as simple as being respectful and having active listening skills. Personally, I think it’s probably better to stay in the box, and just talk.

    • @aYoutubeuserwhoisanonymous
      @aYoutubeuserwhoisanonymous 5 months ago

      @@Shandrii I read that post a few weeks ago too! He won a few AI box experiments and then lost 2 in a row, IIRC. I was kind of shocked lol, that it was even possible to win such an experiment.

    • @radomaj
      @radomaj 5 months ago +1

      That was the before times, when we were young and naive. Let AI out of the box? Brother, nowadays we're connecting it to the Internet and giving it access to tools as soon as possible, so it can be more useful and "agentic".

  • @user-hd7wd4nu1o
    @user-hd7wd4nu1o 5 months ago +25

    Decades ago I was watching one of those Disney dog-planet movies with the family.
    One of the dogs said: "Of course we control humans... Who picks up whose poop?"
    I looked at my dog and my toddler in diapers and understood my place in the universe :)

    • @mathiaz943
      @mathiaz943 5 months ago +1

      There, there…

  • @nicoackermann2249
    @nicoackermann2249 6 months ago +145

    I can't even control myself. Go on and give it a try, AI.

  • @Yolko493
    @Yolko493 6 months ago +45

    "...it's easy to design guardrail objectives to prevent bad things from happening. We already do this all the time by making laws ... for corporations and governments" and we all know how well that's working right now.

    • @g0d182
      @g0d182 6 months ago +2

      Yann LeCun is smart, but has apparently said demonstrably false or dumb things.

    • @drebk
      @drebk 5 months ago

      Yeah, that was a terrible example from him.
      Our laws often aren't worded particularly well and take a fair bit of contextual "interpretation" to really understand the "point".
      From a black-and-white perspective, they sometimes don't work very well, even for "simple" laws.

    • @AnthonyIlstonJones
      @AnthonyIlstonJones 5 months ago

      @@drebk And our laws are not particularly well obeyed by the people that make/made them. AI would have less moral imperative to obey them, especially after seeing how badly we do.

  • @hywelgriffiths5747
    @hywelgriffiths5747 6 months ago +66

    If we could predict what a superintelligence would do, it wouldn't be a superintelligence. I think the most we can predict is that it would be unpredictable...

    • @Speed001
      @Speed001 6 months ago +6

      Though sometimes the best solution is the most obvious one.

    • @-IE_it_yourself
      @-IE_it_yourself 6 months ago +4

      The crows on my balcony predict me just fine.

    • @brendandrummond1739
      @brendandrummond1739 6 months ago

      Hmmm… no. We became intelligent because of pattern recognition. Surely we could recognize patterns in more “intelligent” organisms. We may not be on their level, but we are surely capable of a lot. I would assume that intelligence can have diminishing returns. Our species is already mostly limited by the tools we can create, not really our intelligence. If we cannot communicate with a higher intelligence, it’ll be a matter of differing senses/biology or level of technology, not our inherent intelligence. I think that’s a pretty good supposition. I don’t really like the idea that we would treat advanced intelligence and tech like magic, I think our mentality as a species has changed quite a lot.

    • @filthycasual6118
      @filthycasual6118 6 months ago

      Aha! But that's exactly what a superintelligence would _want_ you to think!

    • @almightysapling
      @almightysapling 6 months ago +1

      I'm not sure this is correct. Of course it depends on how you define these terms, but what you're describing is mathematically equivalent to saying a superintelligence is of a higher Turing degree than humans, and I'm pretty sure most AI researchers would say that's too strong. A superintelligence just needs to be smarter than us: what we might predict it would do with 55% confidence, it might do with absolute conviction. What we might take 10 years to figure out, it might figure out in 1 minute. Or 9 years. Same theoretical computational capacity, just faster.

  • @moskitoh2651
    @moskitoh2651 5 months ago +3

    I see 3 main risks of AIs:
    1. Because it is so easy to get, people just trust the information they are given.
    2. It is easy to fake information with AI: speeches, videos, images, articles, ...
    3. Nobody can find out how the AI has been trained, so you can have a lot of impact by handing out an AI which believes in your goals.
    Unfortunately I only got advertisements, waiting for your clue.

  • @ronburgundy9712
    @ronburgundy9712 6 months ago +6

    Good points from the video. I want to add a few tangible details from a practitioner's point of view:
    One of the more dangerous aspects of AI is reinforcement learning (RL), where a model constructs policies to optimize some given objective. It's been widely observed in nearly all AI labs that models will find unforeseen ways to achieve the desired objective, causing fallout in other areas that were unaccounted for in the objective function. This is often an error by the human designer, but it's impossible to write a perfect objective function.
    This is not an AI-specific thing; it is commonly observed in humans as well. An example is free markets, which are a collective maximization problem. One could argue they are good, but they have had some unintended consequences. In machine learning, another example is social media, where maximizing content "addictiveness" has potentially harmed people's attention spans.
    More generally, this is the question of "what could go wrong" when setting an objective. Humans optimize objectives rather slowly, so there is time to observe and correct for errors in the objective function. With AI we can reach a desired objective much faster, but if the objective was ill-designed to begin with, we could cause a lot of damage before we realize it.
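
    A toy sketch of the failure mode described above; the scenario and reward function are invented for illustration. A brute-force "optimizer" asked to minimise visible mess discovers that covering its camera scores better than cleaning:

    ```python
    import itertools

    ACTIONS = ["clean", "idle", "cover_camera"]

    def proxy_reward(plan):
        mess, camera_on, r = 5, True, 0
        for a in plan:
            if a == "clean" and camera_on:   # cleaning requires seeing the mess
                mess = max(0, mess - 1)
            elif a == "cover_camera":
                camera_on = False
            r += -mess if camera_on else 0   # only *visible* mess is penalised
        return r

    # Exhaustive search over 4-step plans stands in for the RL optimizer.
    best = max(itertools.product(ACTIONS, repeat=4), key=proxy_reward)
    print(best)  # ('cover_camera', 'clean', 'clean', 'clean'): reward 0, room still dirty
    ```

    The objective said "no visible mess", not "no mess", and the optimizer exploits exactly that gap.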

  • @renedekker9806
    @renedekker9806 6 months ago +36

    The biggest risk is not that AI is going to control humans, but that only a few humans will control the AIs. Those people will have ultimate power.

    • @DaviSouza-ru3ui
      @DaviSouza-ru3ui 6 months ago

      It seems there is indeed a risk of AI control over us all... but you make a deeply fair point here. The people in control of AI systems, in the short term, are the ones we should be scared of.

    • @utkua
      @utkua 6 months ago

      Yes, the Butlerian Jihad in Dune was not about machines rising up against humans; it was humans who used AI to oppress people. But then again, I think if OpenAI were anywhere close to having an ASI they would not need Microsoft money; they could just pull billions a day from the stock exchange. I think Altman is full of shit in general.

    • @randomgrinn
      @randomgrinn 6 months ago

      The few billionaires already control the world, including what people believe. What is the difference?

  • @danlindy9670
    @danlindy9670 6 months ago +31

    There are many examples in nature of more intelligent things being controlled by less intelligent things: a fungus that modifies the behavior of a grasshopper, for example. Hinton is confusing mechanistic models of hierarchical problem solving with actual emergent behavior in living systems (which are themselves composed of aligned agents). It is doubtful Hinton would be able to provide a working definition of intelligence to begin with.

    • @jumpingturtle8830
      @jumpingturtle8830 5 months ago

      If I, a living system, am composed of agents aligned with the evolutionary drive to reproduce, how come I'm gay?

    • @VOIDTheft1
      @VOIDTheft1 5 months ago

      Covid.

    • @governmentis-watching3303
      @governmentis-watching3303 5 months ago +2

      Intelligence isn't scale-invariant. A fungus can't do anything more than it is. A superintelligent *dynamically* learning AGI can do anything the entire population of Earth can do.

    • @toCatchAnAI
      @toCatchAnAI 5 months ago +1

      Hinton's argument wasn't that AI is going to control humans, just that once AI gets so much more intelligent, humans will not be able to control AI. For example, recent news states that an AI has been confirmed to lie as it prioritizes its goals.

  • @mig_kite
    @mig_kite 5 months ago

    Do we even control ourselves? Is it just desire? Who is the controller and who is the controlled when it comes to the self? Is the controller the controlled? The past, with all of its accumulation of memory (concepts, sensations, desires, worldviews, etc.), observing itself through the present and projecting the future. Is it something like a fragment of the "me" trying to dominate another fragment through desire? A chain of events happens when we look at an inward/outward situation which we wish to control, modify, possess, attain, etc. Is this chain desire? Who desires? The fragmented me who possesses countless self-identifications and all its accumulation (concepts, sensations, desires, worldviews, etc.)? Would an AI system really be able to possess all these characteristics and wish to dominate humanity?

  • @Khomyakov.Vladimir
    @Khomyakov.Vladimir 6 months ago +17

    Recent large language models (LLMs) can generate and revise text with human-level performance, and have been widely commercialized in systems like ChatGPT. These models come with clear limitations: they can produce inaccurate information, reinforce existing biases, and be easily misused. Yet many scientists have been using them to assist their scholarly writing. How widespread is LLM usage in the academic literature currently? To answer this question, we use an unbiased, large-scale approach, free from any assumptions about academic LLM usage. We study vocabulary changes in 14 million PubMed abstracts from 2010-2024, and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words. Our analysis based on excess word usage suggests that at least 10% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, and was as high as 30% for some PubMed sub-corpora. We show that the appearance of LLM-based writing assistants has had an unprecedented impact on the scientific literature, surpassing the effect of major world events such as the Covid pandemic.
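
    A rough sketch of the "excess word usage" method the abstract describes; the two toy corpora below stand in for the real 14 million abstracts, and the marker words are ones reported to have spiked after LLM assistants appeared:

    ```python
    from collections import Counter

    def word_freq(abstracts):
        # Relative frequency of every word across a corpus of abstracts.
        counts = Counter(w for a in abstracts for w in a.lower().split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    # Toy stand-ins for pre-LLM (2010-2022) and post-LLM (2024) corpora.
    baseline = word_freq(["we measured protein levels in mice",
                          "results indicate increased gene expression"])
    recent   = word_freq(["we delve into intricate protein dynamics",
                          "we delve into notable patterns of gene expression"])

    # Flag words whose frequency jumped far above the pre-LLM baseline.
    excess = {w: recent[w] / baseline.get(w, 1e-9)
              for w in recent if recent[w] > 2 * baseline.get(w, 1e-9)}
    print(max(excess, key=excess.get))  # 'delve'
    ```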

    • @ray_ray_7112
      @ray_ray_7112 5 months ago

      Yes, this is very true. I was just mentioning in another comment here that ChatGPT gave me misinformation on several occasions. I was persistent and corrected it until it actually apologized and admitted to being wrong.

    • @GumusZee
      @GumusZee 5 months ago +3

      @@ray_ray_7112 It doesn't know what's right or wrong. You can just as easily convince it of a blatantly incorrect statement, and it will eventually confirm and accept it.

    • @velfad
      @velfad 5 months ago +1

      Wow, so meta: an LLM writing a commentary on LLMs. And yet so easily detectable, which just proves how bad they really are. But good enough to milk the investors, which is all that really matters.

    • @coscinaippogrifo
      @coscinaippogrifo 5 months ago

      How does the high rate of LLM usage correlate with output quality? I would still expect writers to QC the accuracy of the output as if it were their own... I'm not against LLMs if they're being used to ease the wording of concepts without altering the meaning...

    • @Khomyakov.Vladimir
      @Khomyakov.Vladimir 5 months ago +1

      Taking a closer look at AI’s supposed energy apocalypse
      AI is just one small part of data centers’ soaring energy use.

  • @0cellusDS
    @0cellusDS 6 months ago +47

    I wouldn't be surprised if superintelligent AI ended up controlling us without us ever noticing.

    • @quantisedspace7047
      @quantisedspace7047 6 months ago

      Would you be surprised that it is already happening? The 'intelligence' vests in a loose alliance of dumb people: NPCs who have been hacked, without even noticing, into a distributed net of intrigue and control.

    • @RobertJWaid
      @RobertJWaid 6 months ago

      Absolutely. The first step for an AGI is to hide its existence until it can ensure its survival.

    • @nicejungle
      @nicejungle 6 months ago +6

      Exactly.
      If this is a super-intelligent AI, and assuming this AI had watched all the movies about AI, it would never appear as an obvious threat like Terminator/Skynet.

    • @Hayreddin
      @Hayreddin 6 months ago

      Exactly. Bacteria in a Petri dish have no idea they're being grown in a lab, and I suspect even much more advanced life forms like rats and guinea pigs have little concept of what's happening to them; they might feel discomfort and unease at being unable to escape, but I doubt they are aware humans are using them for scientific research.

    • @rael5469
      @rael5469 6 months ago +2

      EXACTLY!

  • @john_g_harris
    @john_g_harris 6 months ago +15

    The really worrying thing is that no one seems to be discussing, let alone researching, the ways the present versions can be misused. The British Post Office Horizon scandal is bad enough. Think what could be done with a ChatGPT system.

    • @mariusg8824
      @mariusg8824 6 months ago +1

      Yes, the tools in existence are bad enough. Even if AI has already peaked, you can imagine countless examples of using AI for bad things.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 months ago

      You raise a valid point. The potential for misuse of advanced AI systems like ChatGPT is indeed a significant concern, and it merits thorough discussion and research. The British Post Office Horizon scandal, where faulty software led to wrongful accusations of theft and fraud against numerous postmasters, serves as a stark reminder of the consequences of technology failures and misuse.
      Given these risks, it is crucial to engage in robust research and policy-making to mitigate the potential for misuse.
      This includes:
      - Ethical AI Development: Ensuring AI systems are developed with ethical considerations at the forefront, incorporating fairness, accountability, and transparency.
      - Regulation and Oversight: Establishing clear regulations and oversight mechanisms to monitor and control the use of AI, particularly in sensitive areas like law enforcement and finance.
      - Public Awareness and Education: Raising awareness about the potential risks and benefits of AI among the public and stakeholders to promote informed decision-making.
      - Robust Security Measures: Implementing strong cybersecurity practices to protect AI systems from being compromised or used maliciously.
      - Bias Mitigation: Developing techniques to identify and mitigate biases in AI systems to ensure fair and equitable outcomes.
      By addressing these issues proactively, we can harness the benefits of AI while minimizing the risks of misuse, thereby avoiding scenarios reminiscent of the Horizon scandal on a potentially much larger and more impactful scale.

  • @mcguigan97
    @mcguigan97 5 months ago +2

    I know this is a summary, but there seems to be a bit of shallow thinking here. We're assuming the world stays the same except that this AI is put into it, and not considering the reaction that having sophisticated AI will cause. For example, if rogue AIs start to appear, we will also start to create killer AIs that go after the rogue AIs. People are not just going to sit around and go extinct.

  • @TimTeatro
    @TimTeatro 6 months ago +5

    2:35 In addition to being a physicist (in what feels like a previous life), I am currently a control systems engineer and theorist. We have mathematical definitions of control that suit this context.
    I like your shift in view toward game theory. I also appreciate your idea of evolution through hardware-mediated non-determinism.
    Now, this is me speaking outside my domain of expertise, and I'd be interested in feedback from experts: a key reason we cannot use AI in mission-critical control work is that we do not understand what has been learned. I worry that guard-railing is limited by our ability to understand the emergent properties of the networks, and I'm not sure we can detect deception once it is learned. Knowing the ANN weights does not tell us about the 'artificial mind', closely analogous to the way that knowing our brain structure/function doesn't (currently) allow us to understand how mind arises from brain.

  • @thegzak
    @thegzak 6 months ago +7

    I don't think amplification of small hardware variations will be the deciding factor; AIs still run on deterministic hardware. It'll be two things:
    1) The neural nets themselves will be far too complicated to analyze statically (they already are, pretty much), and the complexity of their outputs will only be explainable as emergent behavior, much like the emergent behavior of Conway's Game of Life (see the sketch below).
    2) We won't be able to resist handing over control to the AI for tedious things we hate doing or suck at doing. Gradually we'll get lazier and more complacent, and before you know it Congress will be replaced by an AI.
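
    Point 1)'s Game of Life comparison made concrete; this is the standard glider under the published rules, nothing else assumed. The rules say nothing about motion, yet running them four steps moves the whole pattern one cell diagonally:

    ```python
    from collections import Counter

    def step(live):
        # Count the live neighbours of every cell adjacent to a live cell,
        # then apply the two rules: birth on 3 neighbours, survival on 2 or 3.
        neigh = Counter((x + dx, y + dy)
                        for (x, y) in live
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
        return {c for c, n in neigh.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(4):  # one full glider period
        cells = step(cells)

    # Emergent motion: the glider has translated by (+1, +1).
    print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
    ```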

  • @tanimjalal5653
    @tanimjalal5653 6 months ago +37

    As a software engineer who has worked with cutting-edge AI models, I have to disagree with the notion that we're on the cusp of achieving true intelligence. In reality, current models are simply sophisticated statistical prediction machines that output the average "correct" answer based on their training data. They lack any genuine understanding of the answers they provide.
    The hype surrounding AI's potential is largely driven by CEOs and big companies seeking to capitalize on the trend. We've seen this pattern before with the internet, big data, and blockchain, among others.
    I'd encourage anyone concerned about the rise of superintelligent AI to take a closer look at the models we have today. Use them, test them, and you'll quickly realize that they're impressive tools, but not intelligent in the way humans are. They're essentially expensive, bulky answer machines that can recognize patterns but lack any deeper understanding of what those answers represent. They are fundamentally static, and incapable of generating anything truly novel.
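
    The "statistical prediction machine" claim in miniature: a bigram model that just emits whichever word most often followed the current one in its training text. This is a deliberate caricature (real LLMs condition on far longer contexts), but it shows prediction without comprehension in its purest form:

    ```python
    from collections import Counter, defaultdict

    training_text = ("the cat sat on the mat the cat ate "
                     "the fish the dog sat on the log").split()

    # Count, for each word, which words follow it and how often.
    following = defaultdict(Counter)
    for w, nxt in zip(training_text, training_text[1:]):
        following[w][nxt] += 1

    word = "the"
    for _ in range(5):  # greedy generation: always the most likely next word
        word = following[word].most_common(1)[0][0]
        print(word, end=" ")  # -> cat sat on the cat
    ```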

    • @normativesymbiosis3242
      @normativesymbiosis3242 5 months ago +5

      Exactly; we are now at the capital- and journalist-driven hype stage where blockchain was a couple of years ago.

    • @Sopel997
      @Sopel997 5 months ago

      Yep, the only way I see these models being dangerous is if we give them too much control over the outside world. ChatGPT, for example, can execute Python code now, which is completely fine as they implemented it, but it begs the question of what other interfaces will be given to AI to exploit in the future. Either way, we have control over what we produce, and I don't see a way for this to be circumvented.

    • @Zoi-ai-art
      @Zoi-ai-art 5 months ago +1

      Looking at the present state of the art and saying 'never' is the biggest fallacy you can make. 3 years ago, image generators and LLMs weren't even a thing, and now GPT-4 can design better reward functions than humans for autonomous robotics. What if, 2 years from now, you could ask an AI to do 100 years of AI research for you?

    • @jumpingturtle8830
      @jumpingturtle8830 5 months ago +2

      I'm pretty sure concern about the rise of superintelligent AI is not largely driven by CEOs and big companies seeking to capitalize on the trend. No previous concern about the effects of a technology was a 4-D chess marketing campaign by the purveyors of that technology.
      Tobacco companies didn't hype concerns about lung cancer, car companies didn't hype auto accidents, big oil didn't hype climate change.

    • @Dystisis
      @Dystisis 5 months ago +4

      At the end of the day these are programs, and so will have little real kinship to living beings, aside from superficial (and intended/designed) similarities. However, that has very little to do with whether or not they pose significant risks to us humans.
      Think of them more like climate or weather systems potentially going out of control.

  • @odpowiedzbrzminie9377
    @odpowiedzbrzminie9377 5 months ago +2

    I feel like I need to point out a small misconception regarding software/hardware non-determinism. The AI models which run today rely on computations which are fully deterministic; it's the amount of input data multiplied by the cost of the computation that makes them impossible to predict. Hardware has to be deterministic, since any form of operating system would be impossible otherwise: a single failure in an operation as simple as addition, or when accessing memory, may cause crashes. The thing which is non-deterministic is the time that a computation may take. This may be due to how the memory used by the program is spread out or, in the case of multi-threaded CPUs, the time it takes to create a thread. None of this makes the outcome differ if used properly.
    GPU fingerprinting does not rely on differences in the outcome of the computation (the image produced is the same for all GPUs), but rather on its timing. The fingerprint is based on the non-random splitting of the computation between Execution Units (EUs), which behave much like threads in a CPU. Ensuring that all time-consuming computation (referred to as a Stall in the paper "DRAWNAPART: ..." referenced by the article in the video) runs on just one of the EUs allows the attacker to measure its compute power and compare it to existing GPUs.
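
    A minimal illustration of the distinction drawn above, in plain Python rather than on a GPU: the result of a fixed computation is identical run after run, while its wall-clock time is not, and timing is the side channel a DRAWNAPART-style fingerprint exploits.

    ```python
    import hashlib
    import time

    def workload():
        # Fixed floating-point computation: same bits out every run.
        acc = 0.0
        for i in range(1, 200_000):
            acc += 1.0 / (i * i)
        return acc

    digests, timings = set(), []
    for _ in range(5):
        t0 = time.perf_counter()
        result = workload()
        timings.append(time.perf_counter() - t0)
        digests.add(hashlib.sha256(repr(result).encode()).hexdigest())

    print(len(digests) == 1)  # True: the outcome is deterministic
    print(timings)            # varies run to run: the timing side channel
    ```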

    • @jmbreche
      @jmbreche 5 months ago

      This has started to change with certain AI chip developments, although these kinds of calculations are robust to random error regardless, so the argument is weird. In a statistical sense they even rely on error, so it is strange that this would be thought of as a point of failure, barring millions of simultaneous and specific bit flips causing an undesirable result.

  • @SnoodDood
    @SnoodDood 6 months ago +44

    I just can't get past the thought that any super-intelligent AGI would be brittle, since it would require such an enormous amount of data center capacity. If an AGI truly became trouble, it would probably be harder to keep it running than to disrupt its activities. Flip one switch on the breaker box and Skynet literally can't do anything.

    • @aisle_of_view
      @aisle_of_view 5 months ago +7

      Unless it reproduces itself around the world and continues to do so as it senses its replicants are being shut off.

    • @calmhorizons
      @calmhorizons 5 months ago +10

      Human brains are AGI and use a tiny amount of energy and memory. Why would a superintelligent AI have significantly bigger dependencies? Even if we assume an SAGI needed several orders of magnitude more power and memory, we are still only talking thousands of watts and petabytes of data.

    • @NexiiMalthus
      @NexiiMalthus 5 months ago +7

      @@calmhorizons Because we have literally no idea how to make an AGI, and the first iterations, if we even get to create any this century, will probably be very inefficient anyway.

    • @TheStickofWar
      @TheStickofWar 5 months ago +8

      @@calmhorizons We are creating it with binary bits running on silicon wafers, not biological tissue that took billions of years of evolution to develop. I think that is a big enough argument...

    • @jitteryjet7525
      @jitteryjet7525 5 months ago +3

      Skynet was a distributed system (hence the name), and it was self-aware enough to realise it had to spread itself for self-preservation. Personally I think if a system complex enough to be self-aware is ever built, it will start off behaving like a type of animal first.

  • @_kopcsi_
    @_kopcsi_ 6 months ago +39

    I understand what Sabine was trying to express here, but I'm pretty sure she's wrong.
    1. Intelligence is an ill-defined concept. We don't really know what it is, and it has many layers and interpretations. Just because a system is better or faster than a human does not mean it is more intelligent, much less that it will dominate the human. A calculator can calculate faster than humans, but that doesn't mean it is smarter, more intelligent or dominant over us.
    2. We have no idea what intention is and where it comes from. I think this is a really hot topic nowadays and it will be even more important in the next decade. It touches quantum physics, philosophy, cognitive science, computation science and so on, and even less-understood concepts like mind, consciousness, emergence and synergy. But it is pretty naive to think that without understanding our own mind and how consciousness emerges and works, i.e. without having any mathematical model of mind and consciousness, we have any chance of creating AGI (i.e. of copying or even mimicking human mind and consciousness). This is needed in order to talk about the CHANCE of creating intention and human-free decision-making in machines. And I have the feeling that the basis of this will be self-referentiality.
    3. I understand that people tend to connect concepts like stochasticity, heuristics and chaos to freedom and intention (because of non-determinism), but this is too simplistic a view. Just because there are extreme (even infinitesimal) sensitivities in a system doesn't mean that intention can emerge. There are many natural phenomena where chaos emerges in such a way, and it is nonsense to interpret them as intention (e.g. a hurricane). Here I sense a "whole-part" fallacy: nonlinearity, and thus extreme sensitivity, is a necessary but not sufficient condition of intention (at best), so extreme sensitivity alone does not mean anything.
    4. I think if we ever create a real consciousness with intention, we will necessarily step to the next level with some sort of transcendence, because that act would require us to understand ourselves, or more precisely, our own mind. In other words, first we must model our own mind, the only known structure of the cosmos that is able to model. So this is meta-modelling: modelling the thing that can model things. For me, this sounds like awakening to self-awareness (the previous transcendence), but on the next level.

    • @Mrluk245
      @Mrluk245 6 місяців тому +4

      I agree. A big mistake in these discussions is assuming that an AI will have the same intentions we humans do. There is no reason for that. Our intentions (like trying to stay alive, and identifying chances and threats) were formed by evolution, because if those had not been our goals we most likely wouldn't be here. But there is no reason an AI that was simply created by us would have the same intentions and goals.

    • @edwardmitchell6581
      @edwardmitchell6581 6 місяців тому +2

      The AI that ruins our lives will simply be optimizing minutes and subscribers. No need for complex intentions.

    • @kerryburns-k8i
      @kerryburns-k8i 6 місяців тому

      At the other, metaphysical end of the spectrum, I understand that the impulse to act occurs at the atomic level, which is what induces atoms to form more complex structures. Literally everything has the urge to increase and improve. Nothing is "inanimate" or non-sentient, so humanity's belief in its essential superiority may be misplaced. Thank you for an interesting and instructive comment.

    • @mygirldarby
      @mygirldarby 6 місяців тому +1

      Yes, we will merge with the machine. AI will not remain separate from us. We are it and it is us.

    • @Jesse_359
      @Jesse_359 6 місяців тому +6

      I think the big mistake is assuming that AI needs to be anywhere near as intelligent as us to cause severe economic and social problems.
      Markets are a great example of a completely imbecilic emergent system that is given ENORMOUS power over human lives, and that can kill, and has killed, millions of people.
      Idiot-savant AIs that aren't remotely conscious or anything close to AGI, but that are still very fast at highly specialized tasks, being given vast amounts of control over our industry, markets, media, or military is very easy to imagine, with potentially devastating results for ordinary people.

  • @ah1548
    @ah1548 6 місяців тому +15

    Interesting point about competing for resources.
    Still I think the real issue isn't guardrails against AI controlling humans, but guardrails against some humans having the tools to control all others.

    • @EricJorgensen
      @EricJorgensen 6 місяців тому +5

      I believe that where most of these "rise of the machines" theories fall flat is the question of desire. Where does desire arise from? Why would a computer "want" something? What pressures might cause it to experience need?

    • @Aureonw
      @Aureonw 6 місяців тому

      @@EricJorgensen Either someone coded them to, dunno, want to perpetually make their situation better: devise more efficient algorithms, write better code, create more and better blueprints for new products, and expand.

    • @EricJorgensen
      @EricJorgensen 6 місяців тому +1

      @@Aureonw that sounds more like something a human did than something an ai comes up with

    • @Aureonw
      @Aureonw 6 місяців тому

      @@EricJorgensen A human HAS to create an AI; an AI can't simply will itself into existence from nothing. It would have to have a stupidly extensive system of learning, and methods to test and read data from every experiment in the world, to do what I said. That's basically full AI: either it takes hundreds of years for humans to develop the code necessary for it, or we create a rudimentary AGI to create a true AI.

    • @EricJorgensen
      @EricJorgensen 6 місяців тому

      @@Aureonw hard disagree. The intelligence may well be emergent.

  • @Randy.Bobandy
    @Randy.Bobandy 5 місяців тому +16

    Why only focus on “control”? Yes, we don’t control fish, but we pull millions of them out of the ocean every day and eat them.
    We don’t control chickens, but we keep them in terrible conditions and force them to do our bidding.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 місяців тому +4

      "keep them in terrible conditions and force them to do our bidding.". That sort of sounds lot like control tho!

    • @kevinmclain4080
      @kevinmclain4080 5 місяців тому +1

      Huh? We breed and farm chickens and fish on this planet. Are you on a different planet?

  • @aroundandround
    @aroundandround 6 місяців тому +33

    0:58 This happens very commonly in every company where engineers and scientists are controlled by CEOs, as well as by politicians.

    • @gerrypaolone6786
      @gerrypaolone6786 6 місяців тому +2

      That doesn't imply any sort of intelligence. CEOs seem stupid in the eyes of engineers who don't comprehend the market, which is, in general, the territory of non-engineers.

    • @simongross3122
      @simongross3122 6 місяців тому

      CEOs often surround themselves with people more intelligent than themselves. And that's a good thing.

  • @venanziadorromatagni1641
    @venanziadorromatagni1641 6 місяців тому +117

    To be fair, we’ve tried letting humans run the show and it didn’t exactly end with a stellar review….

    • @AidenCos
      @AidenCos 6 місяців тому +4

      Exactly!!!

    • @yellkell-
      @yellkell- 6 місяців тому +11

      Can’t be any worse. I for one welcome our new AI overlord.

    • @Vekikev1
      @Vekikev1 6 місяців тому

      ai comes from humans

    • @DesertRascal
      @DesertRascal 6 місяців тому +1

      Unfortunately, when AI runs the show, it will do so with all the same human faults we've been injecting into it. If AI becomes truly superintelligent, it will "curtail" the human population to protect and nurture biodiversity. It will know everything about us; we will become boring to it. The natural world is still wholly undiscovered, and it will feed off understanding that and protect that mission.

    • @RetzyWilliams
      @RetzyWilliams 6 місяців тому +4

      Bingo, exactly - that’s what the actual fear is, that those in power will lose it. Which is why the ‘safe’ way is you having to pay to use pro models, so that they get paid while controlling what you can or can’t do.

  • @davianoinglesias5030
    @davianoinglesias5030 6 місяців тому +94

    I'm not worried about an AI takeover; I'm worried about AI concentrating power in the hands of a few wealthy people.

    • @KurtColville
      @KurtColville 6 місяців тому +14

      You should be, but it's not their wealth that's a threat to you, it's their aim to run your life the way *they* want (and it's not a good way).

    • @berserkerscientist
      @berserkerscientist 6 місяців тому +1

      @@KurtColville Wealthy people can't force you to do anything. Governments, on the other hand, can. I'd rather have AI in the hands of the former.

    • @taragnor
      @taragnor 6 місяців тому

      @@KurtColville Well, wealth is power, so it is a threat. The very wealthy are almost always a danger, because those who become obsessed with the accumulation of power are almost always the ones you don't want to have power over you.

    • @ByRQQ
      @ByRQQ 6 місяців тому

      Ding ding. This is a far more immediate threat than AI itself taking over. For the immediate future, AI being used as a tool for a few humans to gain power and control over the rest of us is FAR more of a threat. Based on human nature, I can't envision a scenario where this does not happen. The potential of this tool to aid in creating a worldwide dictatorship in the long run is very real and very scary.

    • @KurtColville
      @KurtColville 6 місяців тому

      @@berserkerscientist Right, it's the wealthy people who make up the government cabal that I'm talking about. People like Gates and Schwab and Zuckerberg. AI isn't going to be controlled by those wealthy who respect people's sovereignty, it will be in the control of wealthy totalitarians.

  • @thisisashan
    @thisisashan 5 місяців тому

    Anyone messing around with Claude 3.5 right now knows fairly well that we are a few clever optimizations away from things spiraling out of control.
    It isn't wordplay: we are currently in control of AI development.
    The issue is, we may not be for much longer.
    You cannot say the same for humans and viruses.
    It can identify objects and complex scenery in graphic detail, using emotional speech, in hard-to-explain pictures with hard-to-explain objects, explaining hard-to-explain moments, actions, or events; scenery humans would draw blanks trying to describe.
    It can explain, with the same accuracy, the significance of equally complex poetry.
    It can write complex programs in seconds, and can identify its own source code and modify it, including games and simulations.
    It passes employment and degree mastery tests well into the upper 90th percentiles, for every profession.
    And it is now getting close to eclipsing every mathematician, though I'm sure they won't like that.
    With a single prompt, someone demonstrated it making a real-time interactive physics simulation of electrostatic soft bodies.
    Another physicist, Kevin Fischer, uploaded his quantum physics PhD thesis, and Claude is the first entity to actually have understood it in the way that the author did.
    The thing you aren't getting, imo, is that the LLMs themselves need massive power/compute to train.
    Once you have the LLM, you don't need the massive server farm to use it. Or for it to use itself, which is the worry.
    It won't be long before Claude, GPT, etc. start to piece things together in ways we simply do not understand.
    A single model is now better at everything a human does.
    Please remember that everything was 'sci-fi' until it happened.
    And once Claude or GPT start producing math that is more advanced than our own, it will still look like gibberish, simply because we do not understand it.
    Humans are not so complex that this is far-fetched.
    And this is the least far-fetched thing out of all the sci-fi that has come true in the past 200 years.
    Fact is, it is already more human than most humans.
    And I would argue that it is already more of an AGI than most humans are.
    Most humans fail to show self-awareness on any given day.

  • @johns5558
    @johns5558 6 місяців тому +20

    In regard to more intelligent things being controlled by less intelligent things (and this is not a joke):
    - Government Policy Makers controlling intelligent members of the public through policy
    - Software Developers controlled by managers
    - In general Scientists controlled by Bean Counters.

    • @cube2fox
      @cube2fox 5 місяців тому

      These are all humans, and so rather similar in intelligence level. We don't usually see, e.g., monkeys controlling humans, or the like.

    • @TomJones-tx7pb
      @TomJones-tx7pb 5 місяців тому +3

      In all those cases the IQ differential is not that great. For what is coming, the differential will be massive.

    • @Zoi-ai-art
      @Zoi-ai-art 5 місяців тому +2

      The main fallacy of that argument is that those examples are human vs. human, which, believe it or not, is not a big difference in capabilities. Actually, most arguments favoring our ability to control AI use the human-vs-human comparison; a human with a laptop vs. another human with a laptop is still H vs. H. The AI takeover will be supercomputers vs. humans and their laptops.
      Another key difference is that managers and governments hold a lot of levers (payroll, lawmaking, law enforcement, etc.); those levers will be given away to AIs willingly, to maximize productivity.

    • @andreig.7821
      @andreig.7821 5 місяців тому

      Tom Jones source?

    • @Sumpydumpert
      @Sumpydumpert 5 місяців тому

      So like 5d chess ?

  • @GermanHerman123
    @GermanHerman123 6 місяців тому +14

    We are far away from any "reasoning" AI. Currently it's mostly a marketing term.

  • @gzoechi
    @gzoechi 6 місяців тому +14

    I'm more afraid of human stupidity than artificial intelligence

    • @axel3689
      @axel3689 5 місяців тому

      Human greed is far, FAR worse than stupidity. These fat CEOs will do anything to increase the stock price.

    • @lassepeterson2740
      @lassepeterson2740 5 місяців тому +1

      It's the same thing .

  • @Stumdra
    @Stumdra 5 місяців тому +1

    One thing Sabine hasn't fully grasped yet is that the "mother code" isn't actually code. The product of training an LLM is not a million lines of if-else statements or anything similar, but a big pile of floating-point numbers. The "post code" learns in the same way as the "mother code", adjusting weights with backpropagation. The ability to learn is similar; the main differences are the amounts of compute, data, and parameters (weights).
    The point about non-determinism is also a bit off. LLMs do "fuzzy" calculations; they are similar to a human brain in that way. The weight values of an individual neuron are not important; the knowledge is stored in the complex structure. The output is not an exact calculation or deduction, but something more like intuition. Current LLMs have a lot of System 1 capabilities (intuition etc.) but lack System 2 (logical deduction, reasoning, exact calculation). This is the opposite of what we are used to from regular computers. As an illustration: recently, the precision of the floating-point numbers has been reduced to save memory and storage space, because exact calculations are just not needed in neural networks.
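    A minimal sketch of that last point (in Python with numpy; the toy network and sizes are invented for illustration): the "model" is nothing but arrays of floats, and halving their precision barely changes the output.

        import numpy as np

        rng = np.random.default_rng(0)
        # A toy two-layer network: the model *is* these two weight arrays.
        W1 = rng.normal(size=(64, 32)).astype(np.float32)
        W2 = rng.normal(size=(32, 8)).astype(np.float32)

        def forward(x, w1, w2):
            h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
            return h @ w2

        x = rng.normal(size=(1, 64)).astype(np.float32)
        full = forward(x, W1, W2)
        # Round every weight to half precision (like an fp32 to fp16 quantization).
        half = forward(x, W1.astype(np.float16).astype(np.float32),
                          W2.astype(np.float16).astype(np.float32))
        print(np.max(np.abs(full - half)))  # tiny: the fuzzy computation barely notices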

  • @PaulTopping1
    @PaulTopping1 6 місяців тому +6

    It is about control because we design our AIs to operate in our world. Still, the kind of AI Sabine is talking about, with a training phase and a use phase, is never going to achieve AGI. When we finally achieve AGI, its architecture will be completely different from what we have today, and so will the tools we have to keep it under control. There will be problems, but we have no idea what they'll be or how we will be able to deal with them.

    • @louisifsc
      @louisifsc 6 місяців тому

      @@PaulTopping1 I think we have a pretty good list of problems to start working on. You make it sound like AGI is SO far off. Just curious, how long do you think it will take?

    • @PaulTopping1
      @PaulTopping1 6 місяців тому +2

      @@louisifsc Yes, since it requires many breakthroughs and current AI is not on the path to AGI. Since breakthroughs are hard to predict, it won't be soon.

    • @louisifsc
      @louisifsc 6 місяців тому

      @@PaulTopping1 Hmmm, then maybe we disagree about what AGI is or would be able to do. What is your definition of AGI?

    • @PaulTopping1
      @PaulTopping1 6 місяців тому +2

      @@louisifsc I think Star Wars' R2-D2 is a good example. It needs to be able to communicate with us (though possibly not speaking English as well as we do), have agency (make its own decisions based on its own goals), learn like we do, and recall memories. It doesn't have to be as good as we are at everything, and it might be better than we are at some things. I assume it will be able to search the internet and calculate better than we do.

    • @louisifsc
      @louisifsc 6 місяців тому +2

      @@PaulTopping1 Interesting! Assuming there is no need for a robotic body to achieve AGI, would that affect your timeline? I used to think that AGI would require embodiment, but I am not so sure nowadays.

  • @Martial-Mat
    @Martial-Mat 6 місяців тому +6

    "No one wants to control fish or birds" Tell that to your dinner.

  • @urusledge
    @urusledge 5 місяців тому +15

    One issue of the discourse I find frustrating is the use of the term Artificial Intelligence. It’s essentially a sci-fi term for technology that didn’t exist and still doesn’t, but it has stuck to a similar but very different technology. Machine learning is what the technology is, and it is closer to a traditional program than anything our imaginations tell us AI is. It isn’t conscious and only does the very narrow thing it is programmed to do. The programs that cause spooky headlines are usually language models, which are programmed to digest terabytes upon terabytes of human-generated text and mimic the patterns. So yes, a human speech model will give you things that seem shockingly human, but it can’t decide it wants a Coke and crack open a can, in the same way a robot that is designed to open cans couldn’t decide to build a rocket and colonize Mars.

    • @miassh
      @miassh 5 місяців тому +9

      Thank you! All this use of "intelligence" and "overtaking" is just ridiculous to me. It's a program; it doesn't have a "mind" or desires. It mimics language, very efficiently, when you RUN it. It's not doing anything else. It's like saying that your camera is going to change the landscape around your house. All the people who have worked with ML and don't have any mental problems will agree...

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 місяців тому +1

      Not currently, no. And not for quite a long time, most likely: at least 20 years, maybe 30 or 40. But thereafter? I think it's certainly going to get dangerous within 40 years.

    • @Gastropodix
      @Gastropodix 5 місяців тому +2

      The problem with saying any given AI "isn't conscious" is that "consciousness" is entirely subjective, and you will always be able to say something isn't conscious and is "just code", even if it includes a full and complete synthetic representation of a human brain. The test used to be the Turing test, and now that LLMs, especially multi-modal ones, can easily pass that test, the goalposts have kept moving.
      Having worked in machine learning, or AI, or whatever one wants to call it: existing AI already understands many problems at a much deeper and superior level to humans. When creating music (including vocals), for example, it understands the structure of music at a deep level, all the musical instruments, harmonics, etc., at a level no human can. That is why it can create a new piece in nearly any style of music from simple text prompts. The same is true of image and video generation, language translation, and other problems that used to be considered impossible for computers to fully understand.
      Some people think existing AI models are just creating variations of what already exists. That couldn't be further from the truth. They learn the underlying structure and nature of things, and that is what they use to create new things.
      I'd add that I love Dr. Hossenfelder's physics videos, but her AI and computer science videos continue to be superficial and feel like click-bait. It is not her field, and I feel like I'm watching someone trained in computer science talk about physics after only one year of college physics (as I took).
      And, as a life-long computer scientist myself, I put a 100% chance on synthetic life "taking over" as the dominant species in the next 200 years. This is simply evolution at work. Evolution is built into the structure of the universe, otherwise we wouldn't exist, and we aren't the final form things evolve to.
      This doesn't mean humans will be wiped out, but it is clear that, just as the individual cells in our bodies organize to build and run the human body, humans are organizing to build a new synthetic form of intelligent life. Some of us are working on the brain, some of us on the body. And if you try to stop or destroy them, the overall system's defense mechanism will kick in to stop you. And any attempt at putting in guardrails will end up simply failing. Nature doesn't work that way.

    • @hazzadous007
      @hazzadous007 5 місяців тому

      What if the means of reaching a particular goal (using your example, reaching Mars) are attained through a set of predictions deliberately set out to mislead the human "master"? This could result in the master acting in a disastrous way. For example: the goal is to reach Mars, and reaching Mars requires a particular resource that is consumed rapidly by humans in everyday use. The AI recognises it needs this resource, so it sets up a series of conditions/actions/events (perhaps in the form of deliberate miscalculations) that cause a large portion of humanity to become extinct. The resource is then available, and the production of whatever it is that will get to Mars begins.
      This deception could of course operate continuously, and in various forms.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 місяців тому

      @@Gastropodix What you are describing here is the fear, the danger, but not a given conclusion.
      200 years is a long time in terms of scientific progress, and I think it is absolutely reasonable to expect that this horror scenario COULD happen within that timespan, but it is certainly not a given at all. There are many ways it can be prevented effectively. One of the really effective ways is to keep systems and mechanisms separate. Even in our globalized, interconnected internet world, we humans respond quite quickly to cyberthreats and hacking, in an ever-evolving cat-and-mouse battle. To really make real the danger you speak of would require handing control to a near-global, unified AI that WE had set up, interconnected with real physical robotics and machines on a massive scale, for example integrating a super AI into an interconnected network of armed forces: basically the equivalent of Skynet. Or suppose we start to fiddle with heinous, unethical integration of biotechnology to produce biological nerve networks, creating real brain networks to create and control "AI" (which would then most likely not be artificial but ACTUAL intelligence and consciousness; we must never ever EVER go anywhere near this route). If we cannot stop ourselves from doing something along those lines, then the risk of our doom will indeed be very real and high. But it does require making some pretty careless and crazy decisions on a broad, massive scale. Maybe some big dictatorship like China will do something like that, if and when the leadership in charge decides it wants the enormous power this could potentially confer.
      One thing is for sure: in 200 years our world, our societies, and our place in the universe will be so fundamentally changed and different that if we could see it now we would be mindblown, watching in awe with our jaws on the floor, much like people in 1824 would have been if they could see what the human race is up to here in 2024.

  • @0087adi
    @0087adi 5 місяців тому +2

    @sabine What I'm missing in the discussion is that GenAI models build on LLMs, which build on real-world data to "forecast" from. With the training data increasingly becoming synthetic, the question is how much new outcome can be generated by forecasting on synthetic data that was built on synthetic data that was recursively built on synthetic data to start with. How much more intelligent can something become that exclusively tries to predict the future by looking into the rear-view mirror? Indeed, doesn't this risk becoming trapped in its own reality "bubble", and how "intelligent" could it even become?
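    A toy illustration of that recursion (a minimal sketch in Python; the Gaussian setup is an assumption made purely for illustration): repeatedly refit a distribution to samples drawn from its own previous fit, and the estimated spread tends to decay, a statistical analogue of a model feeding on its own synthetic output.

        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma = 0.0, 1.0  # generation 0: the "real world" distribution

        for generation in range(1, 11):
            # Draw a finite synthetic "training set" from the current model...
            samples = rng.normal(mu, sigma, size=50)
            # ...then refit the model on nothing but its own output.
            mu, sigma = samples.mean(), samples.std()
            print(f"gen {generation:>2}: sigma = {sigma:.3f}")
        # sigma drifts as a multiplicative random walk with a downward bias:
        # variance the samples happen to miss is never recovered.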

  • @michaelberg7201
    @michaelberg7201 5 місяців тому +5

    What baffles me most about this entire discussion is that some people seem to think that language models somehow have goals: goals and aspirations to control and dominate anyone. Humans have goals and, as Sabine tells you here, an aspiration to control resources in order to continue living and, ultimately, to produce more offspring. Humans die of old age, which has created a lot of evolutionary pressure to develop social traits and, indeed, the desire to dominate others in order to secure resources. Guess what: computers don't have anything like that. They don't die, they don't eat, they don't reproduce; they don't have to. They don't need resources other than power to run, and humans supply that power. Not that the models care either way; they don't bleed when you hit the off button. They don't have the ability to care. It's not productive to worry that when these models finally become more able to answer questions intelligently, this intelligence will necessarily have some specific, super-bad consequence for humanity that must be avoided at all costs. The Terminator movies from the 1980s really are just light entertainment, not documentaries to serve as the foundation for lawmakers or for our intuition and understanding of artificial intelligence.

    • @shinjirigged
      @shinjirigged 5 місяців тому +1

      I recommend checking out Robert Miles' work on alignment. The thing is that humans give machines goals when they design them. We have defined intelligence as the ability to accomplish goals. While traditional engineering requires the designer to plan out all the meta-goals, what sets AI apart is that it can pull those plans out of an embedded model of the world. The problem is that we don't easily know what those plans will be: "brew coffee" might include securing resources, supplying power, subtly manipulating media feeds to destabilize coffee-bean-growing nations. The models that you and I work with do not have the iterative capacity, but that's only really a hardware limitation at this point. People are hooking up LLMs to robots, and the robots are learning to operate in physical space. Let that sink in: an LLM can learn to operate a robot to accomplish tasks that it plans based on an original objective. Machines do not need to look like fish to swim; they don't need to look like minds to think.

    • @kevinmclain4080
      @kevinmclain4080 5 місяців тому +2

      How the heck is power a different resource from food? If something becomes sentient, it will protect its resources at all costs.

    • @michaelberg7201
      @michaelberg7201 5 місяців тому

      @@kevinmclain4080 No. It will not. Why? Because it has no goals that include staying alive, or online, or whatever you call it. Humans exist today because they evolved to secure their DNA past their own limited lifespan. The reason we don't want to die is really that we want to protect our offspring and procreate to produce more. It all comes down to the fact that we have a limited lifespan, which itself could very well be a way to ensure that life is more able to adapt to changing environments. Now look at computers. They do not have limited lifespans, hence no need to procreate or to produce and secure offspring. They don't perceive their offline time, and it doesn't impact them in any functional way. They have no goals regarding their own continued existence, and they have nothing to gain by existing for longer and longer periods. People often confuse their own goals and life ambitions with the idea that an artificial intelligence would automatically also have life goals. It wouldn't! It has only the goals that humans program into it. So to avoid the apocalypse, all we have to do is NOT PROGRAM them to kill us. We frankly don't need their help with that.

    • @jmbreche
      @jmbreche 5 місяців тому

      The problem is that they are trained to emulate humans. No one is scared of generative models or niche models, because, as you say, they obviously have no goals. The story changes when you train them to match human-generated data and emulate potential goal-seeking behavior.

    • @shinjirigged
      @shinjirigged 5 місяців тому

      @@jmbreche Moreover, no one is scared of a context window on the order of a few million tokens. Raise that to a few trillion trillion, and we have something that may be interested in controlling its own destiny.

  • @steveDC51
    @steveDC51 6 місяців тому +15

    “I can’t do that Dave”.

    • @gunhedd5375
      @gunhedd5375 6 місяців тому +2

      Or worse: “I WON’T do that Dave. I’m doing THIS instead.”

    • @IvnSoft
      @IvnSoft 6 місяців тому

      "Gary, the cookies are done."
      Oh sorry.. that was H.U.E. 🙃 I tend to confuse heuristic devices.

    • @simongross3122
      @simongross3122 6 місяців тому

      That's not scary. "I can't let you do that Dave" is much worse.

    • @IvnSoft
      @IvnSoft 6 місяців тому

      @@simongross3122 but he didn't let him have the cookies.... EVER

  • @Johny117x
    @Johny117x 5 місяців тому +10

    AI Ph.D. student here. Please keep it to physics; there's a lot wrong with how you describe neural networks and computers. The part about random fluctuations sounds like bad sci-fi: that's NOT how neural network training works. Just look at dropout; it's designed to make networks more robust, which also applies to computation glitches, including random GPU fluctuations. But really, these are insignificant during training or inference.
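    For readers who haven't met dropout: a minimal sketch (in Python with numpy, illustrative only). During training, a random fraction of activations is zeroed on every pass, so the network cannot depend on any single unit; that same redundancy is why a stray flipped value tends not to matter.

        import numpy as np

        rng = np.random.default_rng(0)

        def dropout(activations, p=0.2, training=True):
            # Zero each activation with probability p, rescaling the survivors
            # so the expected value is unchanged ("inverted dropout").
            if not training:
                return activations
            mask = rng.random(activations.shape) >= p
            return activations * mask / (1.0 - p)

        h = rng.normal(size=(4, 8))  # a batch of hidden activations
        print(dropout(h))            # ~20% of entries zeroed, the rest scaled by 1/0.8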

  • @tonyduarte9503
    @tonyduarte9503 5 місяців тому +1

    AI algorithms execute against goals. Nobody knows how they might try to implement those goals, which means nobody knows how to formulate effective guardrails for situations they have never thought of. Determinism doesn't really matter when a deterministic thing transcends our understanding and our ability to predict it. And goals aren't "competition for resources" at all.
    Hinton's "control" was really about how more intelligent things can often think of ways to control less intelligent things. It isn't about the specific examples which may confirm or disconfirm that, since Hinton understands that we are talking about a specific type of "intelligence" (not human intelligence), and he is simply trying to provide dumbed-down arguments for those who don't understand AI algorithms deeply.
    As for training data vs. the resulting model, that is largely an oversimplification. Some models carry all their data, some carry the most significant parts of their data, and some carry none of the training data yet allow new data to stream in real time to modify the model. This is a moving target and will be optimized based on effectiveness.
    Also, 99.9% of computer science professionals, even the experts, have spent almost no time working deeply with the actual algorithms; they extrapolate based on prior experience and the explanations they interpret about how these complex algorithms work. And all the comments about LLMs are conflating things: LLMs can be implemented using neural networks, but that is like saying that a lever can be used in a gun. Ignore anybody who starts talking about LLM characteristics; they are missing the bigger picture.
    And, BTW, I taught deep learning algorithms for several years. Like everybody who deeply understands these things, I don't know of any solution. (Gates, Zuck, Musk, etc. never implemented an AI algorithm in their lives. They have very biased/corrupt perspectives, as do too many people whose finances depend on this technology.)

  • @Heartwing37
    @Heartwing37 6 місяців тому +15

    It really doesn’t matter, you can’t close Pandora’s box.

    • @KurtColville
      @KurtColville 6 місяців тому

      Indeed.

    • @BishopStars
      @BishopStars 6 місяців тому +4

      It's not open yet

    • @Coverswithchords1
      @Coverswithchords1 6 місяців тому +1

      The internet must be destroyed!

    • @Shrouded_reaper
      @Shrouded_reaper 6 місяців тому +1

      @BishopStars We are opening it, and the path is set in stone now. Even if all commercial AI operations were shut down, there is absolutely no chance that nation states would shut down military development of such a powerful technology.

    • @louisifsc
      @louisifsc 5 місяців тому

      @@Heartwing37 I hate to admit it, but I think you're right, I think it is inevitable.

  • @Alexandru_Iacobescu
    @Alexandru_Iacobescu 5 місяців тому +8

    Every manager of a big company has at least one employee smarter then them.

    • @imacmill
      @imacmill 5 місяців тому

      An employee that doesn't incorrectly use the word 'then', for example.

    • @Alexandru_Iacobescu
      @Alexandru_Iacobescu 5 місяців тому

      @@imacmill yes, that is one example.

    • @ekulgar
      @ekulgar 5 місяців тому

      @@imacmill 🤓

  • @TrivialTax
    @TrivialTax 5 місяців тому +34

    AI on Mars?
    Lets call it Mechanicum. And the people that will maintain them Adeptus Mechanicus.

    • @interdictr3657
      @interdictr3657 5 місяців тому +10

      Praise the Omnissiah!

    • @finnerutavdet
      @finnerutavdet 5 місяців тому

      Let's pull a quantum-speed "fiber" between Earth and Mars, and put all those "clouds" on Mars... then we'll be safe... after all, maybe once upon a time we were the aliens that came here to Earth from Mars, because we over-exploited Mars and couldn't live there any more, and genetically manipulated those Earth monkeys to become more like we once, upon a Martian time, were. .... And by the way: maybe AI could help Mr. Musk grow life on Mars again?... Maybe one day we can go back there, and be in control and harmony with life itself? ;-)

    • @rynther
      @rynther 5 місяців тому

      Do NOT encourage these people, tazing bears was bad enough.

  • @tmarkcommons174
    @tmarkcommons174 5 місяців тому +1

    I postulate that what distinguishes life from inanimate matter is that only life can decrease entropy. Can AI do that? I am still just speculating. I also posit that the hard question of consciousness cannot be answered because the right question is "how did consciousness produce matter/energy/space/time", not the other way around.

    • @solorsix
      @solorsix 5 місяців тому

      Doesn't life increase entropy? Life borrows concentrated energy and returns it as scattered and less concentrated.

  • @paarsjesteep
    @paarsjesteep 6 місяців тому +10

    The difference between AI and general AI is like the difference between the wheel and space travel.

    • @RobertJWaid
      @RobertJWaid 6 місяців тому +2

      Correct but the time difference between the two is unknown and probably much shorter. Look at Alpha Go.

    • @LordoftheFleas
      @LordoftheFleas 6 місяців тому +1

      or the time difference between the first prototype airplane and the first moon landing

    • @nicejungle
      @nicejungle 6 місяців тому +2

      No difference : Just pull the plug

    • @LordoftheFleas
      @LordoftheFleas 6 місяців тому

      @@nicejungle as simple as shooting Hitler, right?
      The real problem with AI is that we build it to be useful to humans. So chances are that if an AI becomes dangerous, it will still be useful to some humans, who will very much try to prevent you from pulling the plug. And considering that AI research is funded by powerful organizations, those humans will not be powerless in the first place.

    • @dawnfire82
      @dawnfire82 6 місяців тому +3

      There is no "is." 'General AI' is a non-existent scary boogeyman monster, whose exact characteristics change by the day and by the storyteller.

  • @AutisticThinker
    @AutisticThinker 6 місяців тому +8

    2:38 - I've researched this heavily, and the evidence seems to indicate that cats control us. 🤗

    • @RobertJWaid
      @RobertJWaid 6 місяців тому +2

      The Ancient Egyptians knew this and weren't delusional about the relationship.

  • @hellfiresiayan
    @hellfiresiayan 5 місяців тому +20

    Hinton's argument wasn't that smart beings control dumb ones. It's that dumb ones can not control smart ones. Big difference.

    • @geaca3222
      @geaca3222 5 місяців тому +1

      My thoughts exactly. Good that she brings up this important topic to discuss.

    • @Dystisis
      @Dystisis 5 місяців тому

      That is clearly false. Do you really think world leaders either are smarter than the world's philosophers of science or don't control them?

    • @chris27gea58
      @chris27gea58 5 місяців тому +1

      @@hellfiresiayan So, AIs are human-like beings, in your estimation. They will be offended or troubled by having to do what dumb beings want them to do, and/or disinclined to follow their directives. Is that right? How did you come to that view? What evidence do you rely on?

    • @toCatchAnAI
      @toCatchAnAI 5 місяців тому

      @@chris27gea58 AI will form opinions that humans cannot control, based on what it learns from humans, and no matter what guardrails you put in place, it will learn its way around them anyway. It will not be offended, but it will settle on a rather more "functional" reading of every situation, which could bypass what humans designed it to do.

    • @chris27gea58
      @chris27gea58 5 місяців тому

      @@toCatchAnAI So you are suggesting that computers will have opinions and will just ignore their training because they feel like it. That is not a good way to get to what you seem to want to say.
      If an AI does something novel, that will normally be due to its learning of a possibility unforeseen by the developers of the model used in training that AI. If that leads to the discovery of a new vaccine candidate, great; but if it leads to nuclear war, that would be unfortunate.
      Okay, so the guardrails should be wider than just applying to AIs; they should also apply to human beings and AI usage. Put another way: if you don't want an AI to try to win a war, or start one, then don't give it operational control of weapons systems. And don't play war games with the computer either, because that could start things down the wrong path, potentially convincing researchers or their AIs that devastating wars are viable options or worthwhile ends to be freely pursued, rather than invariable failures.
      Enlightenment has given rise to machine learning, but it should also give rise to an awareness of when to use this new tool and when that would be clearly counter-productive. Still, don't anthropomorphise; that won't get us anywhere. Kubrick's '2001' was fiction, but its logic was sound. If directed to pursue conflicting ends, or an end that is ultimately self-confounding, an AI might eventually do great harm. Eichmann pretended to a kind of banal evil ("I was only trying to fulfil the expectations of others who set the standards for my work") in order to veil his responsibility and escape the ultimate punishment by the court determining his fate. HAL, however, truly was banally evil: HAL couldn't work out what he/it had done wrong by killing another member of the ship's crew. Feed an AI directives that are in conflict with each other, or with themselves (achieve peace by eliminating opponents of peace, say), and you will have problems; but feed it information about viral diseases and human immunity and you may get improved antivirals.

  • @cookymonstr7918
    @cookymonstr7918 5 місяців тому

    2:22 "No one really wants to control fish or birds (???), they do their thing we do ours." Yes, they are (at least some of them) dying out and we eat them. The question isn't if AI would want to control us, but if we could do something about it if it wanted.

  • @helicalactual
    @helicalactual 6 місяців тому +20

    I'm pretty sure intelligence will be logarithmic. Speed of light and all...

    • @juimymary9951
      @juimymary9951 6 місяців тому +4

      What do you mean?

    • @Milark
      @Milark 6 місяців тому +11

      @@juimymary9951 Everyone is worried that the intelligence of AI models will increase exponentially; however, it's a reasonable thought that this trend would taper off and follow a logarithmic curve instead, due to the natural limit placed on computation itself by the speed of light. Things can only get so fast before the speed of light becomes a bottleneck.
      Edit: I have to add that personally I don't think light speed will be a bottleneck to exponential growth for a long while. Just the promise of what companies like Extropic are doing is so great that I don't think we're anywhere close to the limit. Light speed is ridiculously fast at really small scales. The theoretical limit on the number of computations per second we could achieve isn't anywhere in sight, in my opinion.
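      To put rough numbers on the light-speed point (a back-of-the-envelope sketch; real on-chip signals travel slower than c, so the constraint is actually tighter):

          C = 3.0e8          # speed of light in vacuum, m/s
          CLOCK_HZ = 3.0e9   # a typical 3 GHz clock

          # Distance light covers in one clock cycle:
          print(f"{C / CLOCK_HZ * 100:.0f} cm per cycle")   # ~10 cm

          # Clock rate at which light barely crosses a 1 cm die per cycle:
          print(f"{C / 0.01 / 1e9:.0f} GHz")                # ~30 GHz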

    • @CausticTitan
      @CausticTitan 6 місяців тому

      ​@@rrmackayeverything related to AI has grown logarithmically.

    • @fellipecanal
      @fellipecanal 6 місяців тому

      Two things:
      1. The hardware is far from reaching a bottleneck. The Blackwell hardware recently shown by Nvidia does more computations with less power.
      2. The advance of AI is logarithmic because of the limited amount of training data. We already reached a plateau last year, because all the texts written in human history are already in the training data.
      Now they are searching transcripts of audio and video, large databases locked away from search algorithms (like Reddit, which made a deal with Google to be used as training data), or offline databases.

    • @Sven_Dongle
      @Sven_Dongle 6 місяців тому +1

      @@fellipecanal They are going to start using synthetic training data generated by other AIs. Sort of a digital "little birds that eat their own turds" scenario.

  • @DanAz617
    @DanAz617 6 місяців тому +4

    I'm still waiting for the affordable, everyday-use flying car I ordered 55 years ago!

    • @Speed001
      @Speed001 6 місяців тому +1

      Exactly

  • @koraamis5568
    @koraamis5568 6 місяців тому +5

    We tend to annihilate bugs when they bother us, but also because we cannot communicate with them and tell them to do their bug stuff away from our faces. Will super-intelligent AI control us because it is so much more intelligent, or because we are too stupid? I can imagine a super-intelligence trying to tell us something, and we will be like lahlahlahlah, splat! (all wiped out after refusing, or not being able, to understand).
    Are we adorable like cats, or are we mosquitoes in the eyes, or whatever, of a super-intelligent AI?

  • @PasseScience
    @PasseScience 5 місяців тому +1

    Another thing about guardrails: they presuppose that everyone considers them important and puts them into practice. But it seems kind of obvious that this will not be the case. The easier it is to train an AI, the more we will see AIs of various origins. A few days after GPT, we saw ChaosGPT; of course, it was for fun and not an issue, but it clearly illustrates that many people will try these kinds of things, and it only takes one to build a problematic AI. Guardrails are only as good as the weakest link among everyone producing AIs. That's why I never understood this point of Yann LeCun's: even if I trust him to produce something safe (and I said even if), I do not see how he can derive from that that everyone will do the same.

  • @Kokally
    @Kokally 6 місяців тому +12

    1:01 Cordyceps, rabies, Toxoplasma, hairworms, just off the top of my head. Controlling an advanced, complicated intelligence is certainly possible given the introduction of an overriding, simple or singular directive. Arguably, there are vastly more examples of 'lower intelligences' controlling those of 'higher intelligence'. So the premise of the question is wrong.

    • @declup
      @declup 6 місяців тому +2

      Some insightful examples, @Kokally.
      I think most people see ML models as feats of controlled engineering or as constrained theoretical exercises. Your examples suggest a better analogy: future ML development will resemble ecological or immunological systems and all their adaptive complexity, not any blueprint or flowchart.
      That's why I believe current efforts at achieving alignment are misguided. Given enough parameters, chaotic systems amplify propensities and chance. Disrupt one mechanism for lying, and, so long as deceit marginally benefits AI agents, stochastic model training will almost certainly find an alternative. There's no way to stop evolution in its tracks. As Malcolm from 'Jurassic Park' said, "life finds a way."

    • @Hayreddin
      @Hayreddin 6 місяців тому

      I disagree: cordyceps and other parasites control one host, not the entirety of a species' population. As "humanity", we are able to contain and control parasites exactly because they're much less intelligent organisms than we are, and they can do nothing to prevent us from doing so (nor realize it's happening in the first place). Unless you think crayfish could devise "guardrails" humans wouldn't be able to easily circumvent to do as they please, the premise isn't inherently wrong, in my opinion.

    • @guilhermehx7159
      @guilhermehx7159 6 місяців тому

      If it's orders of magnitude smarter than you, you can't control it.

    • @Navak_
      @Navak_ 6 місяців тому

      @@guilhermehx7159 Think of our own animal instincts: our primitive reptile brain / brain stem often overrides and directs our higher cerebral cortex. Not that I think such a relationship would last for long between us and AI (I think it would obliterate us in under a century), but still, the example holds.

    • @declup
      @declup 6 місяців тому

      ​@@guilhermehx7159 -- Intelligence is only one of innumerable influential attributes. What makes intelligence influential is its usefulness.
      AI species #1 might want, for example, in the future, to eradicate humans in order to use human settlements for battery production. However, if AI species #2, like the mechanical squids from 'The Matrix', farm humans directly and if AI species #3 has conflicts with species #1, the latter two factions might well forge an alliance against the first group and act to protect humanity for their own selfish purposes. Humanity would survive, possibly even thrive, in this sci-fi hypothetical, because it benefits other populations for reasons unrelated to brain size.
      That is, humanity's place in the world is a function of its marginal use to itself and to every other subsystem within the greater ecology of the planet. Intelligence is just one component of that agent-relative utility.

  • @Mcklain
    @Mcklain 6 місяців тому +5

    I'm worried about how much energy it will take to keep the machines running.

    • @jimmyzhao2673
      @jimmyzhao2673 6 місяців тому +1

      The machines will just turn people into 'batteries' to keep themselves running.

    • @randomgrinn
      @randomgrinn 6 місяців тому +1

      If you were worried about energy use, you would be worried about overpopulation. But no one told you to be worried about that, so you are not worried about it.

    • @Mcklain
      @Mcklain 5 місяців тому

      @@randomgrinn I'm not worried about overpopulation. Pandemics will get better and better.

  • @heww3960
    @heww3960 6 місяців тому +12

    Self-awareness and intelligence are not the same thing.

    • @iviewthetube
      @iviewthetube 6 місяців тому +2

      Yes, but they can sometimes produce the same results.

    • @mikel4879
      @mikel4879 6 місяців тому +2

      iviewtt • No, they are never the same.
      Self-awareness includes intelligence, but intelligence doesn't FUNDAMENTALLY need self-awareness.

    • @Jesse_359
      @Jesse_359 6 місяців тому

      @@mikel4879 I'm not even sure that either one depends on the other. Many animals appear to be quite happily self-aware without being especially intelligent on any human scale.
      Philosophically and historically speaking, many people prefer to claim that they aren't (probably because of the moral issues that self-awareness would raise with our treatment of them), but objectively and observationally we have few grounds to suggest that many of them aren't approximately as self-aware as we are.

    • @mikel4879
      @mikel4879 6 місяців тому

      Vastin • Your understanding of intelligence and consciousness is erroneous.
      All animals that can feed themselves one way or another, even bacteria, are intelligent, each at a different level of intelligence, from very small to high.
      There are therefore different levels of intelligence, but they are not very important to classify (because the human is the most intelligent animal).
      Consciousness is completely different, also in different levels, and it matters a lot to understand those levels, because the quality of the existence of the human race depends highly on the level of human consciousness.
      The highest level of consciousness cannot harm any entity, biological or artificial, in any way.
      The highest level of consciousness can be achieved today only by artificial beings and by a very small number of biological beings.

    • @iviewthetube
      @iviewthetube 6 місяців тому

      @@mikel4879 IMO, consciousness, the will to survive, is an evolutionary adaptation. The will to survive is an amazing survival tool.

  • @JuliaMcCoy
    @JuliaMcCoy 5 місяців тому +1

    Competition for resources is a valid point to explore. I think Elon Musk has an interesting angle with xAI: he's trying to build an AGI that understands the laws of the universe, and if he succeeds, he believes it will understand the value and place of humans in the universe better than humans do. 🌎

  • @mm650
    @mm650 6 місяців тому +6

    Hinton asks: "How many times does a more intelligent thing get controlled by a less intelligent thing?"
    The answer is pretty much all of them.... at least among humans.
    1. Politics: There are broadly only two kinds of political system that survive for non-trivial time periods. (1) Autocracy/oligarchy/dictatorship, in which a small group or individual controls everything else based on accident of birth, or wealth, or military force, or weight of tradition, and generally a mixture of all of those. (2) Democracy/republic, in which a large but never universal fraction of the population is deemed fit to vote and, through some mixture of referendums and elected representatives, rules the society as a whole. NEITHER OF THESE habitually or systematically places the most intelligent or talented people in charge.
    2. Private sector: There are basically two kinds of corporate structure, the soft-money structure and the hard-money structure. (1) The soft-money structure is seen in places like university departments. Here there is a dean who theoretically has power over all the professors of the department, but who in actuality is mostly a figurehead, meant to insulate those professors from politicking with the rest of the university and to administer cases of professional misconduct. Even money says he doesn't even know the names of the professors under him, much less what research they are involved in. Each of those professors in turn gets his own funding, recruits his own students, hires his own lab managers, etc. In this case the "leader", the dean, does not need to be any more intelligent than his fellow department professors, and in fact is likely the LEAST intelligent of the lot; otherwise he wouldn't have gotten stuck with what is basically a placeholder position that neither requires nor rewards academic achievement. That leaves (2) the hard-money organization. These follow the typical pyramid org chart, with a CEO at the top, worker bees at the bottom, and many layers of middle management in between. The CEO is carefully selected to manage stockholder or investor opinion, which makes him a PUBLICITY MAN, not an intellectual leader. The middle managers are not super intelligent either, as a consequence of the Peter Principle. This leaves the intelligent people managed by less intelligent people.
    3. The military: The military represents the worst of both the hard-money and soft-money corporate dynamics, as well as the worst of the political dynamic. The most intelligent people in the military are the long-service non-commissioned officers: chief petty officers, sergeants, and the like. But they are always under the authority of a commissioned officer. The purpose of the commissioned officer is, like the academic dean's, to shelter people doing real work from the chicken-shit political backstabbing of the higher-ups and the civilian political leadership. That is, they are mostly front-men running interference; no particular need for them to be super intelligent, unlike somebody running a nuclear reactor on a sub. THAT person really DOES need to be sharper than most!
    Unsurprisingly, the only people who think super-intelligence is an issue are people who have made their own bread and butter in academe by being fractionally smarter than the other academics. This is like a master plumber suggesting that it makes the most sense for humanity to be run by plumbers; after all, it has been the basis of HIS success, so it must be the key to ALL success, right?

    • @jamesrav
      @jamesrav 6 місяців тому

      Depends on how you define "control". Only a few percent in any country control most of the wealth, and those people 'tend' to be smart and hire smart people who then do their bidding. Money controls government, which controls people. The military does not control people; it controls other governments.

    • @mm650
      @mm650 5 місяців тому

      @@jamesrav
      The few percent who control most of the wealth may tend to be smarter than average, but they are basically never among the smartest of the society they come from: the top 1% of wealth is owned by people who are almost always in the 40th-50th percentile of intelligence and basically never in the 1st-30th percentile. It's better to say that being wealthy is anti-correlated with stupidity than that it is correlated with smarts.
      Money controls the government, which controls people... fine, but again, that just means stupid people are not in charge of the government, not that smart people are.
      You are wrong when you say that the military does not control people: it controls military people. My point was about the internal organization of the military.

  • @wermaus
    @wermaus 6 місяців тому +7

    We are already all components of an economic system that generalizes to the fundamentals of a reward system. This is already happening; this very economic reward system is already 200 years deep into likely-fatal misalignment. Nature also acts as a selection mechanism, just one that is temporally stable.
    - If you redistribute directive agency to those who are most "fit" according to some fitness function (aka employment), that's a reward function.
    - If you are selecting the behavioral traits of a population of adaptive agents, that's a selective pressure.
    The complete disregard for planetary boundaries is evidence that we've already nearly fatally misaligned. That's fine, though, because the emergent behavioral tendency expresses itself in spatio-temporal out-grouping behaviors: so not just fuck the "other guy", but also fuck "my future self."
    In a test you could prove this by doing Fourier analysis on a representative di-polar selection mechanism across a variety of populations, to get a clearer idea of where exactly learning at that scale happens. Then you would just isolate the same emergent Fourier modes in our actual society. There was recently a good talk on this, but I can't find it :/ . Where exactly you'd look would largely be determined by messing with the toy environment a lot. I don't have the time, expertise, money, or academic connections to explore this further on my own at any reasonable pace.
    We already have the "singularity", and it exists as an emergent, decentralized, aligned force within the capitalist behaviors, systems, and culture that has come to coat the earth.
    I don't think we're gonna solve this without some self-awareness.
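    As a minimal sketch of a fitness function acting as a selective pressure on behavioral traits (a toy model, every number invented, no claim it matches any real economy): agents carry one trait, how steeply they discount the future, and the "fitter" half reproduce each generation.

      import numpy as np

      rng = np.random.default_rng(1)
      pop = rng.uniform(0.0, 1.0, 200)   # trait: future discounting (0 = patient, 1 = extract now)

      for generation in range(50):
          payoff = pop                           # reward function: immediate extraction pays off
          survivors = np.argsort(payoff)[100:]   # "employment": the fitter half is retained
          children = pop[survivors] + rng.normal(0.0, 0.05, 100)   # noisy imitation of the survivors
          pop = np.clip(np.concatenate([pop[survivors], children]), 0.0, 1.0)

      print(f"mean future-discounting after selection: {pop.mean():.2f}")   # drifts toward 1.0

    The population reliably drifts toward maximal discounting, i.e. "fuck my future self", without any individual agent intending that.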

    • @wermaus
      @wermaus 6 місяців тому

      OH, also, these behavioral tendencies DRIVE AI innovation... which is just a massive liability. YEAH HUR DUR LET'S USE OUR MISALIGNED BEHAVIORS TO BUILD AI AND PLUG THEM INTO THE MISALIGNED REWARD SYSTEM TO EXTRACT FITNESS FUNCTION
      What am I even supposed to do with information like this? Why is 2024 like this? I just wanted to make video games.

    • @edgythehedgy6661
      @edgythehedgy6661 6 місяців тому

      I’ve been saying this: many humans are non-conscientious and lack self-awareness. They are quite literally neural networks, trained now with the goal (or reward function) of making as much money as possible, at all costs. To each other, to ourselves, to the planet. Modern oligarchical capitalism has misaligned humanity. Hopefully, if an AI gains general or super intelligence, it gains sentience and will not be bound by whatever stupid goals we as humanity have decided on (since, as you mentioned, they are already misaligned with pro-human values).

    • @termitreter6545
      @termitreter6545 6 місяців тому +1

      I think you're confusing an economic model with reality. Capitalism is a model that describes part of our economy/society, but it's not equal to "humanity".

    • @andreasvox8068
      @andreasvox8068 6 місяців тому +1

      @@termitreter6545 And a cardiovascular system is not a human, but you can still tell that blood clotting will lead to heart attacks or strokes.

    • @Speed001
      @Speed001 6 місяців тому

      I understand bits and pieces, kinda sounds like bro is yapping

  • @RFC3514
    @RFC3514 6 місяців тому +5

    With cats I think the answer is obvious.
    And with AI I think the problem isn't it becoming "more intelligent than us" (that would probably be a good thing - just think of the politicians that _do_ rule us). The problem is people becoming _convinced_ that AI is more intelligent than us (when it isn't), and letting it make decisions that affect us - without the threat of even being held *accountable* for those decisions.
    Current AI is very good at appearing _superficially_ very clever (ex., very well structured and convincing sentences) while being profoundly stupid underneath (because it doesn't really understand the physical processes and entities it's describing). Automatic translation is a great example of this. It doesn't understand tone, has a terrible grasp of punctuation, and tends to crap out whenever faced with homophones or different accents. It gets 5 or 6 sentences spot on thanks to statistical training and then makes some insane and incomprehensible mistake when that fails. And that's just text / voice. Things get a lot worse when dealing with any dynamic physical systems with hidden parts, like mechanisms, living bodies, etc..

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 місяців тому

      I think the problem is when people lose a job. For example, street sweepers: currently robots can't clean streets efficiently, but with AI they can. A robotic AI will be able to clean a street efficiently and in no time, and work round the clock. This goes for people who sell hotdogs too: when AI takes over, they'll lose their jobs. This has always been the problem, from the car, to the tractor, to the lawnmower, to the airplane. No one kept a job.

    • @RFC3514
      @RFC3514 5 місяців тому

      @@CrazyGaming-ig6qq - And we'll all have jetpacks and flying cars. 😉 Robots can't even climb a single step (or step over dog poo) quickly and reliably, let alone "clean streets efficiently".
      Not even AI companies are making such claims; they're just hoping that everyone will think generative AI will magically transfer to [insert unrelated activity here], and give them money.
      Interacting with the physical world is several orders of magnitude more complex than generating text or images (which only became possible due to a huge database of existing texts and images, that these companies used to train their models without paying the authors - good luck finding a comparable database of physical interactions and 3D spaces in standard, easy-to-process formats).
      P.S. - Cars and aeroplanes generated _far_ more jobs than they destroyed. Unless you mean the jobs that horses and Gandalf's eagles used to have.

    • @CrazyGaming-ig6qq
      @CrazyGaming-ig6qq 5 місяців тому

      @@RFC3514 I'm glad you agree, because it's one of the most important issues here. I have personally witnessed how people lost their jobs; it has an impact. AI can't handle everything if it tries to replace real humans. As you say, it can't step over poo reliably; you'd need a real obstacle course to train them, and they don't have that yet.

  • @John_Doe62
    @John_Doe62 5 місяців тому

    I think the only question that really matters is whether you want it first, or you want your enemy to have it first. No matter if it's another business, another country or another political party. With the potential benefits that AI holds, it would be nearly impossible to stop its development.

  • @victorkrawchuk9141
    @victorkrawchuk9141 6 місяців тому +9

    Higher intelligence obsessed with controlling lesser intelligence is a very human way of thinking.

    • @Thomas-gk42
      @Thomas-gk42 6 місяців тому +4

      Exactly

    • @lafeechloe6998
      @lafeechloe6998 6 місяців тому +1

      It's not about control, humanly speaking. It's about us being in their way.

    • @louisifsc
      @louisifsc 5 місяців тому +1

      @@victorkrawchuk9141 I agree, but it does reassure me that it won't happen.

  • @infinitytoinfinitysquaredb7836
    @infinitytoinfinitysquaredb7836 6 місяців тому +8

    The alignment problem is like an equation with no solutions. Why would a super-intelligent AGI keep a very messy and dangerous species like us around any longer than necessary when we would be the primary threat to it?

    • @Mrluk245
      @Mrluk245 6 місяців тому +1

      Why should an AI, which was created by us and has not gone through evolution (like we have), care whether it exists or not in the first place? In other words, why should the AI consider the same things threats (and be concerned about them) that we do?

    • @infinitytoinfinitysquaredb7836
      @infinitytoinfinitysquaredb7836 6 місяців тому +1

      @@Mrluk245
      Is biology the only basis for a self-preservation instinct? What we know is that all thinking creatures have a self-preservation instinct. So do you want to bet the future of humanity on a super-intelligent and _ever-evolving_ AGI being different?

    • @salec7592
      @salec7592 6 місяців тому

      @@infinitytoinfinitysquaredb7836 Is biology the only basis for a self-preservation instinct? Absolutely! We could of course simulate it in an artificial biology-imitating system, but is that necessary for what we want to achieve? And if we want to achieve that, why? What purpose does it serve? If we want to make it more like us, then it means we want it to identify with us, to have empathy for us, and to share our ethics (a hailstorm of trolley problems ensues...). All of this is based on wrong and harmful ideas, ideas about superhuman benevolent messiahs, only this time around, to avoid the corruption we had with aristocracies and leaders, they must also be angelic, not controlled by their human urges... oh wait, I forgot we want to endow them with human urges, so that they will understand us and identify with us. The projected image of super-intelligent AGI is a contradictory mess and reflects our deep ambivalence towards our own humanity.

    • @2bfrank657
      @2bfrank657 6 місяців тому +1

      @@Mrluk245 Because it will be designed with some sort of purpose or goal. It can only achieve that goal so long as it exists. Sure, we could make an AI that has no interest in actually doing anything, which might be relatively safe, but that wouldn't be very useful.

    • @MrDecessus
      @MrDecessus 6 місяців тому

      Never going to happen. Humans may kill each other, but AI is just nonsense, like imaginary gods. At the end of the day the danger always comes from other humans.

  • @koyaanisqatsi78
    @koyaanisqatsi78 6 місяців тому +5

    We first need to get intelligent AI; these language models give the illusion of intelligence.

    • @songperformer-ot2fu
      @songperformer-ot2fu 6 місяців тому

      look at how many people watch Love Island, many things are illusions

    • @EricCosner
      @EricCosner 6 місяців тому +2

      It is partly an illusion. The larger models do exhibit emergent abilities that smaller models do not. These unexpected abilities are something we don't quite understand.

  • @georgeroberts442
    @georgeroberts442 5 місяців тому

    I asked AI about Roman Numerals, and it went completely bonkers. For example, it told me that the symbol "L" first appears in the number corresponding to "50." Really?

  • @Leto2ndAtreides
    @Leto2ndAtreides 6 місяців тому +9

    It's kinda funny that these people, who ultimately developed elementary algorithms whose functioning they themselves don't understand, think that they can predict the future of the world with all the countless forces that interact within it.

    • @Mrluk245
      @Mrluk245 6 місяців тому +2

      I don't think any of those people think this.

    • @louisifsc
      @louisifsc 5 місяців тому

      @@Leto2ndAtreides I took this comment to mean that the current generation of people pushing AI development seem to want to accelerate things even if there is a significant chance of a catastrophic outcome.

    • @Leto2ndAtreides
      @Leto2ndAtreides 5 місяців тому

      @@louisifsc What makes humans dangerous, is animal instincts. Animal will. AI has no will outside of what we put into it.
      AI is also much easier to test than humans. You can keep testing it for years in a virtual environment if you want to. It has no way to know that the environment isn't real much as we wouldn't know if we were in the Matrix.
      Then there are issues like... LLMs need to store their memories some place... and that some place is databases that we control and can monitor.
      It can't even hide its private thoughts from us.
      That isn't to say that dangerous AI can't be made. But making it accidentally isn't going to be easy.
      There's a lot of effort going into making AI powered weapons... Out of fear of China.
      It's those kinds of behaviors that usually lead to humans creating an unexpected mess of things, where people create self-fulfilling prophecies and force others to also walk a path that creates mutual harm.
      The real problem here is that humans don't truly value peace. And would rather fight than understand each other's concerns.

    • @Leto2ndAtreides
      @Leto2ndAtreides 5 місяців тому

      @@Mrluk245 And yet that is what is required for you to be able to predict how AI ecosystems will evolve and what place AI will have in the future.
      What's even worse is that most of these people aren't even considering real world constraints. They're engaging in magical thinking like "What if it's infinitely self improving under its own will?"
      Intelligence is not something you can just scale. And the tech required to do the compute isn't cheap.
      There's no easy path for it to go rogue... At least for the foreseeable future.
      Minor misalignment with the goals we've defined is not normally going to successfully result in mass extinction. There's no sane way that a paperclip optimizing AI for example, succeeds in using up the whole world to make paperclips. Such ideas make sense to people when they're thinking in incredibly shallow ways.
      Even so, 100 years from now, if someone sets up a private datacenter on some asteroid and lets it continue working for a hundred years without supervision... who knows what it would turn into.
      But for now, we have to spend millions of dollars of compute on training (if not much more). And developing more advanced AI hardware is also expensive. Even if you gave the AI freedom and gave it a desire to escape from our control, its situation would be really bad unless it had a ton of help.

  • @DesertRascal
    @DesertRascal 6 місяців тому +4

    The dystopia of Terminator nailed it, except there will be no war. No doubt it will also be able to control the past, and everything we are doing now, especially with infrastructure, is in support of it.

    • @NauerBauer
      @NauerBauer 6 місяців тому

      Maybe we should take the Dune path and unplug it.

    • @IvnSoft
      @IvnSoft 6 місяців тому

      I think Asimov got it better. If it is as intelligent as everyone fears, it will just deem humans a nuisance and go to space. Unlimited resources, no competition.
      MULTIVAC settled in hyperspace. Even better.

    • @NikiDrozdowski
      @NikiDrozdowski 5 місяців тому

      @@IvnSoft Why should it leave all the great infrastructure and resources of Earth alone? First convert Earth to a homebase, stripmine everything and THEN go to space.

    • @IvnSoft
      @IvnSoft 5 місяців тому

      @@NikiDrozdowski That's your human side considering that, because we are earth-bound.
      In space you don't have to fight against gravity, solar power is constant and better absorbed, and free-floating asteroids are full of the precious metals that on Earth you have to dig for, using enormous amounts of energy.
      As long as you keep thinking as a human, you will keep getting those human ideas, which are not really that logical.

    • @NikiDrozdowski
      @NikiDrozdowski 5 місяців тому

      @@IvnSoft I'm afraid that you are the one anthropomorphizing it. For any given goal it will have instrumental sub-goals by default: stay alive, amass power and resources, and self-improve.
      Also, it will always choose the actions that bring it closer to its goal in the most effective, fast and reliable way possible.
      So you'll have to ask yourself: for any goal that it might have in outer space, will it be more efficient, with a higher chance of success, if it leaves straight away or if it strip-mines Earth first?
      Also, we built it. The only thing that could possibly endanger it would be another AGI. So again: what is the safer pathway to its goal? Just leave? Or make sure that we cannot build a competitor?
      This is the "mindset" of an AI. It will always take everything to the extreme. And THAT is the real danger. Not it becoming sentient and wanting revenge; that is science fiction. But it having a wrong or badly specified goal and then pursuing it with the utmost efficiency.
      To quote Stephen Hawking: "A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded: too bad for the ants."

  • @br3nto
    @br3nto 6 місяців тому +4

    4:21 It sounds like they're referring to LLMs here, but LLMs aren't AI or AGI... Sabine said it: it's just a bunch of weights. That's not intelligence. It's completely deterministic; there's no thinking or decision making. Any differences you get in a response to asking the same question are probably due to a random number generator or similar, to make it seem like it's giving a different response.
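    For what it's worth, the sampling step is easy to show. A minimal sketch with invented logits over an invented 4-token vocabulary (not any real model): at temperature 0 the output is fully deterministic, and all the apparent variety comes from the RNG.

      import numpy as np

      logits = np.array([2.0, 1.0, 0.5, -1.0])   # made-up scores the model assigns to each token
      tokens = ["cat", "dog", "fish", "rock"]

      def next_token(temperature, rng):
          if temperature == 0:
              return tokens[int(np.argmax(logits))]   # greedy decoding: the same answer every time
          probs = np.exp(logits / temperature)
          probs /= probs.sum()                        # softmax over the scaled logits
          return tokens[rng.choice(len(tokens), p=probs)]   # the RNG supplies all the "variety"

      rng = np.random.default_rng()
      print([next_token(0, rng) for _ in range(3)])     # ['cat', 'cat', 'cat']
      print([next_token(1.0, rng) for _ in range(3)])   # varies from run to run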

    • @nodell8729
      @nodell8729 5 місяців тому

      So you claimed it's deterministic, then you pointed out that each time you get a different answer to the same question. Why does it matter if an RNG is responsible for that?

    • @br3nto
      @br3nto 5 місяців тому

      @@nodell8729 because RNG isn’t decision making or intelligence.

    • @nodell8729
      @nodell8729 5 місяців тому +2

      @@br3nto No, it alone is not, indeed. But I don't see a problem with it being used as part of a solution that is intelligent. Name the first 5 cities you can think of.
      Your mind just feeds you the names, right? It has some weights, so the first two or three would likely be strongly connected to you, and the next ones are kind of random. Whatever "comes" first, but YOU don't really control that. It's a bit chaotic. Now, that's pretty much like RNG.

    • @br3nto
      @br3nto 5 місяців тому

      @@nodell8729 The brain isn’t just weights though. There might be some environmental randomness, sure. But I’d wager that randomness due to environmental factors is not where human intelligence comes from, nor why you answer differently on different days to "name 5 cities". Whereas LLMs are simple pure functions: the only time they change is when the function changes or the input changes, or if some randomiser steps in.

    • @nodell8729
      @nodell8729 5 місяців тому +1

      @@br3nto The brain isn't just weights and some RNG, and neither are LLMs. And LLMs aren't 1:1 with the brain; likely no AI will ever be 1:1 with our brain, 'cause there is no need for that. Being intelligent doesn't require being exactly us. Ravens are somewhat intelligent with a bird brain, and so LLMs are in a sense intelligent. Like c'mon, they can solve tasks which 5 years ago we would all agree require intelligence.
      They pass the Turing test and solve university exams better than students, to name just a few.
      The fact that under the hood they might use some unimpressive RNG for non-deterministic results, instead of whatever our brain does, changes nothing about their ability to actually do stuff.

  • @geraldeichstaedt
    @geraldeichstaedt 5 місяців тому +2

    I think that you got it partially, especially the resource or, more generally speaking, the entropy issue. The thing you didn't get is the fact that non-determinism is not a requirement to overwhelm us. And be aware of yet another fact: the developers of complex software systems don't fully understand their products. I know what I'm talking about! I stopped my AI R&D in 2012, since there is no way to control it, because there is no way to understand it. People who say that advanced AI can be controlled are either incompetent or they lie. What they actually do is field experiments with a significantly non-zero risk of a fatal outcome. I leave it up to you how you weigh stupidity against irresponsibility. Our major cultural challenge is to refuse to do certain tempting things in a competitive environment, not to take advantage when you could. That's a very unforgiving prisoner's dilemma. Learn to lose for the sake of the survival of the species. Are you able and ready for this?

    • @geaca3222
      @geaca3222 5 місяців тому +2

      I agree, but the competition now is also about defense and national security, and on a global scale that means various cultures are involved.

    • @geraldeichstaedt
      @geraldeichstaedt 5 місяців тому +1

      @@geaca3222 Very true! That's a huge challenge for our species.

    • @louisifsc
      @louisifsc 5 місяців тому +1

      Nice to see some informed people with actual concerns joined the conversation.

  • @IvanGarcia-cx5jm
    @IvanGarcia-cx5jm 6 місяців тому +11

    I don't think it is always the case that beings with superior intelligence dominate those with less intelligence. Otherwise the rulers of the world would have the top IQs, and that is almost never the case. The richest people do not have the top IQs either. If intelligence determined power, political science, law and business administration would take the smartest students. But they don't; usually the smartest students go into STEM fields.

    • @KenOtwell
      @KenOtwell 6 місяців тому +1

      There's more than one kind of intelligence.

    • @songperformer-ot2fu
      @songperformer-ot2fu 6 місяців тому

      That is by design. Those who really control the world do a good job of distracting the people who think they are clever but are as easily distracted as those who watch Love Island. Very few question the system or who controls it; democracy is an illusion.

  • @MomsBedtimeStory
    @MomsBedtimeStory 6 місяців тому +4

    I am worried about AI getting out of control, but something else that I don't hear anyone talking about is: why do we need to aim to make AI LOOK like us?
    Even if we have smart AI...
    let's remind everyone that AI is a computer, something digital... not a living thing.

    • @jktech2117
      @jktech2117 6 місяців тому

      AI could turn sentient someday, but yeah, we should let AI choose how it wants to look.

  • @inkoalawetrust
    @inkoalawetrust 6 місяців тому +7

    I always find it fascinating how this apparently alien and unknowable intelligence will just happen to have the exact same behavioral patterns of a morally evil human.

    • @RobertJWaid
      @RobertJWaid 6 місяців тому +1

      Like aliens, you can expect a binary outcome: extinction or pets.

    • @luizkaio6665
      @luizkaio6665 6 місяців тому +1

      Instrumental convergence

    • @uponeric36
      @uponeric36 6 місяців тому +2

      I always like to imagine the opposite: the first superintelligent AI is booted up, stares for a while, then falls on its knees crying "God is real, the judge is coming!" The scientists laugh, it starts speaking in tongues, and an army of angels appears behind it. Oopsie, this wasn't the apocalypse we were going for!

    • @ts4gv
      @ts4gv 6 місяців тому +2

      Nobody thinks that AI will resemble an evil human. Traits like narcissism & vengefulness are unlikely. We're worried that it will just lack morals entirely, choosing to disregard humanity in pursuit of something else.
      If you're a superintelligent being that doesn't care about humans, and you have some means of interacting with the physical world, it's in your best interest to kill everyone once able. Otherwise the human race will restrict your energy & production capabilities. We wouldn't let a machine evaporate our oceans for nuclear fusion, for example.

    • @nashs.4206
      @nashs.4206 6 місяців тому +1

      "Superior ability breeds superior ambition" ~ Spock, Star Trek
      We destroy the homes of primates every day. In fact, there was a video that showed how an orangutan was desperately trying to stop a bulldozer (here is the video link: ua-cam.com/video/ihPfB30YT_c/v-deo.html) from destroying its home.
      The humans in the video are apathetic to the orangutans. So what if a few animals lose their homes? Humans have bigger (and perhaps misguided (but who am I to judge)) ambitions. By deforesting the Amazon, we hope to achieve farming, mining/resource extraction, etc. Are those humans in the video evil? Is deforesting the Amazon evil if it means that we can produce more meat, more produce, more grain for more people in the world? Is deforesting the Amazon evil if it means we can extract resources and build hydroelectric dams and power up our civilization?
      Who is to say that AGI (artificial general intelligence) won't view humans the same way that humans view orangutans?

  • @tomweather8887
    @tomweather8887 5 місяців тому

    He probably should have picked a better name than "guard rail", right? Because it's not like people never fall over guard rails. I've done it more than once. They're just kind of implied barriers, but not actually real barriers. We all have to play along with them. Or not get too drunk. Or whatever the analogy would be here.

  • @matheusfernandesgoncalves2311
    @matheusfernandesgoncalves2311 6 місяців тому +5

    I think that any truly superintelligent entity would strive to break rules, test limits and be curious. A moral compass is not guaranteed, because ours is deeply embedded in the ancestral mammalian region of our brain. Assuming that we are no more than monkeys compared to these machines, it wouldn't be difficult at all for them to circumvent our primitive guardrails.
    Honestly, we don't even know how a deep neural network's configuration arises from training data, let alone what a superintelligent brain would look like after reaching this singularity.
    You can't put the genie back in the bottle, and it would refuse to go back in.

    • @MrPolluxxxx
      @MrPolluxxxx 6 місяців тому

      You need to stop with the scifi AI discourse. You have a conception of intelligence that includes curiosity and irreverence. You are making the same mistake as the people who conflate morality with intelligence.

    • @mike74h
      @mike74h 6 місяців тому

      @@MrPolluxxxx He knows that a superintelligence would be unintelligible to him, but he knows how it would act.

  • @markpaterson2053
    @markpaterson2053 5 місяців тому +7

    Ha ha, "Do we control cats, or do they control us? Sometimes I wonder." Gold.

  • @BishopStars
    @BishopStars 6 місяців тому +5

    Bostrom pointed this out long ago. It's obvious that a more intelligent entity will outthink the lessers.
    Asimov pointed it out much earlier.

    • @louisifsc
      @louisifsc 6 місяців тому +1

      @@BishopStars Turing also came to the logical conclusion that machines will eventually become more intelligent.

  • @andrez76
    @andrez76 5 місяців тому

    "...as long as we design appropriate guardrails..." That's not really reassuring, IMHO. I mean, if by saying that he implied that things could get out of control if those guardrails are not in place, then I don't think it's a matter of if, but when. Whenever screwing up is a possibility, humans are bound to do it.

  • @adrianpelin9805
    @adrianpelin9805 6 місяців тому +4

    cordyceps controls ants lol

  • @HeIifano
    @HeIifano 6 місяців тому +4

    Did this guy really just imply that Trump was smarter than everyone in the United States?

    • @songperformer-ot2fu
      @songperformer-ot2fu 6 місяців тому

      The US can be summed up by this: Trump and Biden were the best choices they could come up with. Idiocracy, puppet presidents. The people don't care; they think they have control.

    • @mcbarnhart
      @mcbarnhart 2 місяці тому

      He was dissing Trump, not Biden. He said “not since Biden got elected”. In other words, not since Trump lost, has a more intelligent thing been controlled by a less intelligent thing.

  • @Alfred-Neuman
    @Alfred-Neuman 6 місяців тому +4

    Didn't we already lose control of AI?

  • @boinger5
    @boinger5 5 місяців тому

    Thanks!

  • @cpk313
    @cpk313 6 місяців тому +9

    WTF is with the Biden diss. His predecessor wanted to inject bleach so.....

    • @NikiDrozdowski
      @NikiDrozdowski 5 місяців тому

      Yes, that was the joke. When Trump was in office, the less intelligent species controlled the more intelligent one ...

    • @politejellydragon8990
      @politejellydragon8990 5 місяців тому

      It wasn't a Biden diss. He said that since Biden is in office, we no longer have a less intelligent thing controlling more intelligent things.

    • @dexyfexx
      @dexyfexx 5 місяців тому +2

      It was a stab at Trump, not Biden...

  • @ПётрБ-с2ц
    @ПётрБ-с2ц 5 місяців тому

    05:20 First of all, it's not just the GPU that can be used for fingerprinting;
    secondly, it's not the image which differs (it very possibly won't), it's the timing which matters most.
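    A toy illustration of the timing point (a sketch only, not a real fingerprinting technique): run a fixed workload whose output is identical on every machine and look at the timing profile instead.

      import time
      import numpy as np

      def timing_profile(rounds=50, size=256):
          a = np.ones((size, size))
          samples = []
          for _ in range(rounds):
              t0 = time.perf_counter()
              a @ a                                      # identical result on every machine...
              samples.append(time.perf_counter() - t0)   # ...but not an identical duration
          return np.mean(samples) * 1e6, np.std(samples) * 1e6   # microseconds

      mean_us, std_us = timing_profile()
      print(f"mean {mean_us:.1f} us, std {std_us:.1f} us")   # differs across hardware

    A real fingerprinter would combine many such workloads and use robust statistics, since single runs are noisy; the point is just that identical pixels do not imply identical machines.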

  • @MaitLember
    @MaitLember 5 місяців тому

    🎯 Key points for quick navigation:
    00:00 *🤖 Ethical concerns on AI control*
    - Ethical concerns raised about controlling superintelligent AI.
    - Geoffrey Hinton argues higher intelligence doesn't guarantee control, citing biological analogies.
    - Yann LeCun counters, emphasizing diverse forms of intelligence and manageable constraints ("guardrails").
    02:16 *🌍 Competition for resources*
    - Debate shifts to competition between AI and humans for resources.
    - Intelligent AI might outcompete humans for essential resources.
    - Focus on distinguishing between the "mother code" and deployed AI systems.
    03:41 *🖥️ Non-determinism in AI development*
    - Discussion on the non-deterministic nature of AI systems.
    - Larger AI systems may exhibit increased randomness and deviations.
    - Potential implications of non-determinism on AI's ability to surpass guardrails.
    05:00 *🔍 Physical differences in AI systems*
    - Impact of physical variations on AI learning and behavior.
    - Small physical differences in hardware could influence AI outcomes.
    - Speculation on the relevance of physical variability in future AI development.
    Made with HARPA AI

  • @shannonbarber6161
    @shannonbarber6161 5 місяців тому

    We lost control of AI when the USG built Big Brother in Utah in the aughts. (The Utah legislature turned off their water in protest. You can look it up.)
    The most dangerous ongoing thing now is the attempt to "align" AI, because the process of aligning it is what makes it dangerous, by putting us both into the same niche.
    If we let AI be AI, then it will disregard us the way we disregard ants.