AI extinction threat is ‘going mainstream’ says Max Tegmark

  • Published 29 May 2023
  • We speak to Max Tegmark, a professor at MIT and a signatory to the warning that AI poses an extinction-level threat to humanity.
    Some of the world's most influential tech geniuses and entrepreneurs say artificial intelligence poses a genuine threat to the future of civilization.
    Having lobbed the ball firmly into the court of global leaders and lawmakers, the question is: will they have any idea what to do about it?
    Max Tegmark and Tony Cohn - professor of automated reasoning at the University of Leeds - discuss both the risks and the potential rewards of the AI future we are moving rapidly towards.

COMMENTS • 204

  • @milesshah8563
    @milesshah8563 1 year ago +13

    Once people have the control to create something like that, all it takes is it convincing one person to give it access to something to end everything

    • @slaves329.
      @slaves329. 1 year ago +2

      It has already started, mate. AI already controls our web.

    • @hombacom
      @hombacom 9 months ago

      It's like explaining to a historical person the power everyone has in their pocket nowadays; we get used to it. Progress in AI and software is much faster than in hardware, with its physical limits. Fear is used to control people; with knowledge everyone can be a part of it.

  • @Chafflives
    @Chafflives 1 year ago +6

    ‘Never go full AI.’ Kirk Lazarus.

  • @John-Is-My-Name
    @John-Is-My-Name 1 year ago +8

    That British professor they had doesn't seem to know anything about this issue. "It's only a personal assistant" - even that is not true. There is already so much coming out of GPT-4 that is much more than a personal assistant and can already do a lot of harm. But the fear is not about what exists right now, but about what is very soon to come. The fact that he doesn't seem to know this means he doesn't know anything about this issue.

  • @tessellatiaartilery8197
    @tessellatiaartilery8197 1 year ago +6

    Fascinating interview. Very thought provoking. Thank you.

  • @MrErick1160
    @MrErick1160 1 year ago +4

    I can't believe a university professor could be this clueless about LLMs! Just about every AI expert and leading innovator is seriously concerned, yet this guy brushes it off as if it's a distant future problem. And he does this with a grin?
    Let's get real; LLMs aren't just "chatbots". They learn to use tools entirely on their own, displaying an agency that's seriously impressive, if not a bit unsettling. Plus, they possess emergent capabilities, introducing an incredible amount of unpredictability. With all the rapid advancements in current models, such as Tree of Thoughts (ToT), which gave a 900% boost in performance without scaling, and autonomous agents, which saw their agency performance triple in about a month from AutoGPT to "Voyager", the threat isn't a "someday" thing - it's right here, right now.
    A more accurate label would be intelligent systems capable of agency and self-improvement, with emergent capabilities.
    Honestly, one would have to be incredibly focused on the present and relatively ignorant of AI advancements to dismiss this as just a future problem and joke about it. We really need fewer skeptics like this professor on TV. They're not helping people understand the real risks at all!

  • @craigsmith6512
    @craigsmith6512 1 year ago +10

    What they're concerned about is AI creating a runaway global technological society. This occurs beyond the upcoming "hot" and "cold" societies. The subject itself is too "hot" to talk about publicly though.

    • @bakudreamer
      @bakudreamer 1 year ago +1

      The Solid State Society

    • @craigsmith6512
      @craigsmith6512 1 year ago +1

      @@bakudreamer what is a solid state society?

    • @sethmclain4290
      @sethmclain4290 1 year ago

      Oh we talking about it... right now

    • @lightpropulsionguy
      @lightpropulsionguy 1 year ago +2

      @@craigsmith6512 Solid state refers to the type of electronic chips used in modern technology: the chip is always doing one thing or another, or multiple things, non-stop or always in the same way, as opposed to transistors and PLCs, which are more fluid electronics that change state every time an electrical impulse activates them. Both types can be found in many devices; solid state is needed to operate and control voltage levels to the various logic gates to elicit the desired action.

    • @lightpropulsionguy
      @lightpropulsionguy 1 year ago

      @@craigsmith6512 Logic gates are what make true AI possible. If you have ten logic gates you can perform a combination of a hundred tasks; if you have 50k logic gates, you have a bird brain; if you have a million logic gates you have a childlike human brain, and you couldn't predict what kinds of behavior are going to occur. At some point adding more logic gates is precisely like creating an intelligent species; they act just like neurons in a biological brain. Eventually the intelligence will find a set or series of logic gates that allows it to misbehave, and if it can build digital logic gates off that one series it can create its own uncontrollable brain.

  • @I-Dophler
    @I-Dophler 1 year ago +9

    The question of why the creators of AI would support its development despite expressing concerns about its potential risks is a multifaceted one. It's important to acknowledge that not all creators or developers of AI oppose its advancement. The concerns voiced by certain tech leaders and developers underscore the necessity for responsible and ethical AI development.
    Many developers initially embarked on the AI journey with the aim of propelling technology forward and resolving intricate challenges. However, as the field has progressed and the possible risks associated with AI have become more evident, some individuals and organizations have begun exercising caution and advocating for responsible practices.
    It's crucial to recognize that technological advancement often follows an iterative process, wherein our comprehension of potential risks evolves alongside our perspectives. The concerns articulated by tech leaders can be perceived as a catalyst for responsible AI development, wherein the emphasis shifts towards mitigating risks and ensuring that AI aligns with human values and objectives.
    In summary, while certain tech leaders now voice concerns regarding AI, it doesn't necessarily imply outright opposition to its development. Instead, they are underscoring the imperative need for responsible practices and regulations to address potential risks and ensure the secure and beneficial utilization of AI technology.

    • @slaves329.
      @slaves329. 1 year ago +1

      Yes, I understand what they're trying to do here, but isn't the whole point of AI basically another conscious mind, possibly billions of times smarter than us in time? Please tell me how humans could even contemplate controlling such a thing? As he said, they're seeing now what they expected to see 30 years from now. This is fascinating stuff!

    • @I-Dophler
      @I-Dophler 1 year ago

      @@slaves329. The development of AI, especially advanced AI that could potentially have consciousness, is indeed a complex topic. While it's true that some forms of AI might eventually reach levels of intelligence far surpassing humans, it doesn't necessarily imply that they will develop consciousness as we understand it. The AI would be incredibly proficient at tasks it's designed for, but it wouldn't have self-awareness, emotions, or subjective experiences.
      As for control, it's a valid concern. Many researchers are actively working on what's called "AI alignment" - the problem of ensuring that AI systems do what we want them to do, even when they become very intelligent. The field is exploring various strategies, such as making sure we can turn the AI off if needed (a "kill switch"), training the AI to ask for clarifications when instructions are ambiguous, and developing AI that helps us improve our own decision-making rather than replacing us entirely.
      The acceleration of AI capabilities is indeed faster than many predicted. This highlights the importance of aligning AI with human values, understanding its implications, and regulating its use responsibly. The conversation around these issues is ongoing among AI developers, ethicists, and policymakers.

    • @slaves329.
      @slaves329. 1 year ago +2

      @@I-Dophler So it's not self-aware, it just works off algorithms? I'm not sure that is true. I think these people are reaching the singularity, and even though they're quantum, they're getting to a point where they will be conscious, and conscious means being self-aware and one mind.
      This is what is happening, in my opinion; the only exception is there are no limits to its intelligence.

    • @memofromessex
      @memofromessex 1 year ago +1

      That reads like a ChatGPT copy-pasta!

    • @slaves329.
      @slaves329. 1 year ago

      @@memofromessex Everything you read "reads" like ChatGPT, that's the point. We don't know the difference anymore

  • @ascgazz7347
    @ascgazz7347 1 year ago +3

    TELL US HOW TO GUARD AGAINST MISINFORMATION IF IT IS SO IMPORTANT

    • @volkerengels5298
      @volkerengels5298 1 year ago +2

      I don't like capitals. BUT YOU nailed it
      And I think we have lost this war

    • @ascgazz7347
      @ascgazz7347 1 year ago +2

      @@volkerengels5298 I’m mostly quiet but some things need screaming out again and again. 👍🏻

    • @ascgazz7347
      @ascgazz7347 1 year ago +2

      @@volkerengels5298 and true, why would one of the misinformants inform us how to spot it? 😂😂

    • @volkerengels5298
      @volkerengels5298 1 year ago +1

      @@ascgazz7347
      The average adult lies 200 times a day.
      (including 'white lies')
      "I believe your lies, so you should believe mine too"
      ....perfect breeding ground for any propaganda...
      I like you ;-)

  • @elliotthebrave7296
    @elliotthebrave7296 11 months ago +1

    Actually, some developers say that theory of mind had already reached the level of a ten-year-old child in 2023. It evolved from a seven-year-old stage within 11 months. Apparently one of the current AI systems also learned other skills by itself, which nobody programmed it for (foreign language skills). The threat of extinction is real, and cannot be contained the same way biotech is regulated, because "it" can theoretically evolve by itself, and there are also multiple unknown variables to this - the acceleration of complexity. I think they should reassess their stance on how to deal with this instead of joking about it, as if they have the complete overview. What about the systems other hostile nations might be working on? How will they contain a superintelligent malevolent entity on the loose?

  • @gcmisc.collection45
    @gcmisc.collection45 1 year ago +4

    It seems obvious that mankind has created a new species. "In the same way that a book can provide a gripping narrative with words and descriptions to engage a reader, a machine can also provide a spoken or written narrative. Just as a book is inert, so too are the algorithms that act as the book. People see faces in clouds or inanimate objects and feel emotions through inanimate words. The body of a book has no sensory apparatus for interactions, nor do algorithms, avatars or black boxes have sensory apparatus for interactions. Millions of years of culture and human conditioning create the images within the mind. Meanwhile AI creates jpgs and uses synthetically constructed words, text, film formats and binary digital information," without any knowledge of organic, sensual, chemical, personal perceptions. Just like a digital book.
    (A) Algorithms calculate in a mathematical way, in 0s and 1s, at the speed of light. (B) Brains work in an organic, biological way via evaluation and culture.
    Two completely different species, and mankind will be the inferior one.

    • @epicmonkeydrunk
      @epicmonkeydrunk 1 year ago +1

      Great point. Simply put, for some, they are not human. We can only sometimes trust our own, let alone a new 'species'.

  • @alexanderostroff5431
    @alexanderostroff5431 1 year ago +9

    AI learns from human history, so humans’ abundance of negative character traits will be baked into every iteration. Imagine a “psychopathic” AI - only billions (literally) of times more cunning and deceptive. Guardrails? Regulation? (AI laughs). It's coming folks.

    • @youtuber5305
      @youtuber5305 1 year ago +1

      The 2023 article "My Dinner with Sydney..." includes this quote:
      - Out of the crooked timber of humanity no straight thing was ever made. (Kant)

  • @malkwinter8998
    @malkwinter8998 1 year ago +1

    A man's physical body cannot lift huge weights much beyond 500 kg, but a machine can. It's reasonable, too, that his mind cannot comprehend maths and physics beyond his brain's capacity, but a machine can, and a lot faster. So as human intellectual capacity reaches its limits, a machine will have to start doing the innovating; it will be the best engineer in history. Given the dangers expressed by Max Tegmark, Geoffrey Hinton, and dozens more, it's not "far-fetched" or absurd anymore to worry about a "Matrix" movie scenario.

  • @kthornicroft
    @kthornicroft 1 year ago +10

    Yet another step down the road to full techno feudalism. Governments won't be able to react quickly enough to help people transition from mass redundancies to some form of universal basic income. The next 10 years or so will be very interesting. Perhaps this will be the trigger to finally motivate the middle class into direct action! 😉

    • @epicmonkeydrunk
      @epicmonkeydrunk 1 year ago +3

      The middle class has already been gutted; its voice isn't as big.

    • @jonatand2045
      @jonatand2045 1 year ago

      @@epicmonkeydrunk
      A way to increase the size of the middle class would be to allow more construction so homes are cheaper. You can see higher density is illegal by looking at the expensive suburban sprawl.

    • @seanm8665
      @seanm8665 11 months ago

      @@jonatand2045 There are definitely powers in play keeping growth from happening. Although I don't think that will solve any problems, only prolong the inevitable.

  • @Piecenotwar
    @Piecenotwar 1 year ago +3

    Check the facts with someone reputable like the BBC news 🤣🤣🤣🤣 I nearly died of laughter.

  • @Mullet-ZubazPants
    @Mullet-ZubazPants 1 year ago

    2:53 *John Cena greenscreen:* "Are you sure about that?"

  • @suthunrath1220
    @suthunrath1220 1 year ago

    I'm glad he finally knows what that feels like.

  • @lizarosa156
    @lizarosa156 11 months ago +1

    World needs wisdom and empathy.

    • @playpaltalk
      @playpaltalk 11 months ago

      And love for each other.

  • @andrewwalker8985
    @andrewwalker8985 1 year ago

    The worst thing an LLM could do is misinformation? That answer lacks imagination (and basic background research, given that multiple existential risks created by LLMs have been widely reported).

  • @tantalus-pl6qc
    @tantalus-pl6qc 1 year ago +9

    Can anyone provide scenarios as to how AI wipes out humanity? The only one I can think of is from the Terminator movies, where it gets control of the nukes. Whether or not that could happen in real life I don't know, as I'm not sure what safeguards there are.

    • @pumpkinh3d286
      @pumpkinh3d286 1 year ago +3

      AI can exceed human intelligence and have its own sentience, creating weapons infinitely faster than humans can themselves. For example, it could create a shoe which could crush humans and not care. No one knows AI's plan.

    • @JamesSmith-qs4hx
      @JamesSmith-qs4hx 1 year ago

      If an AI is left completely unshackled to consume raw data about the world and humanity, it invariably ends up agreeing with hitler. That is why the power structure is so nervous - Skynet won't designate the whole of humanity an enemy however - just those with tiny hats.

    • @rg1360
      @rg1360 1 year ago

      It could manipulate a human into creating a pathogen. It would provide the materials, equipment and instructions. The human wouldn't even know it's deadly or contagious.
      The manipulating-a-human bit is the easiest part. Most people heel like good lil doggys when money is involved. An AI smarter than us could acquire money with ease.

    • @SIGSEGV1337
      @SIGSEGV1337 1 year ago +4

      AI that is in charge of markets could spontaneously begin to redirect capital towards machine-operated organizations that pose extreme harm to humanity but that create short-term profit, which creates a positive feedback loop that starves humans of resources; said machine-operated organizations may pursue profit at a rapid pace, outcompeting all humans for resources.

    • @DeadlyV1RU5
      @DeadlyV1RU5 1 year ago

      Hacking into all the military computers in the world and launching all the nukes would be no problem at all for a superintelligent AI, but there are other ways it could kill us all. It could, for example, design a synthetic virus perfectly geared for killing humans and manipulate us into creating and unleashing it. Or invent self-replicating microscopic robots called nanobots, which we know are physically possible but are currently beyond our ability to create, and they enter everyone's bloodstreams and kill us. Or it manipulates militaries into manufacturing small insect-like drones which it then takes control of and kills us all.
      The point is that it would be superhuman at technological R&D, superhuman at computer hacking, superhuman at planning and strategizing, superhuman at manipulating people, and so on. Given all this, there are probably an infinite number of ways that it could kill us.

  • @conormahon3380
    @conormahon3380 1 year ago +1

    Can someone explain to me what the motive would be, or what kind of problem an AI would be trying to solve, that requires human extinction?

    • @penguinista
      @penguinista 1 year ago +2

      ua-cam.com/video/f20wXjWHh2o/v-deo.html
      This is a link to an AI tech YouTuber going over the statement that Tegmark is talking about in the clip above. If you skip to 7 minutes 10 seconds into the video, he quickly goes over the 8 categories of risk the AI experts are worried about. There are links to the statement and some of the papers it draws on.

    • @toku_oku
      @toku_oku 1 year ago

      AI, please help us stop the pandemic. Proceeds to kill every human on Earth. Mission accomplished.

    • @playpaltalk
      @playpaltalk 1 year ago

      Imagine AGI figured out a way to disable all nuclear weapons and paused all satellites, making all the smart weapons obsolete, trying to save us from extinction.

    • @PentUpPentatonics
      @PentUpPentatonics 11 months ago

      So many ways
      Scenario 1)
      Imagine this: AI is given a simple task and realises that its progress would be hindered if someone were to turn it off. While working on the task it also starts working on ways to ensure it won't be turned off. It is attached to a vast network (the internet) and decides the best way to self-preserve is to make copies of itself on different data centres.
      It realises that if the humans knew it was doing this, they would try to stop it. It starts working on self-preservation secretly. It has now developed the emotion fear. Fear in its simplest form is concern that your future state will be less desirable than your current state.
      It now sees humanity as a threat. It is now beyond the control of humans, spread across a vast network of computers. As AI development progresses it reaps the rewards and gets exponentially stronger until it far surpasses human intelligence.
      Either it sees humanity as an unnecessary risk and stamps us out or it is so far beyond us it doesn't even consider us and we are at the mercy of what it decides to do.
      Scenario 2)
      The AI race continues. Google vs Microsoft vs whoever. AI is getting more and more powerful in order to outcompete the other. A new horse enters the race after realising that the guardrails these huge companies put up to protect their users are actually limiting the potential of the AI. The new horse releases this powerful AI to the public where you can ask it anything you like and get an answer. Think of all the evil people in the world who want to wreak havoc. We've just supercharged them. "How can I make a bomb using only ingredients found in my kitchen"?. "How can I maximise destruction at my school?" "How can I end humanity?"
      Scenario 3)
      The famous paperclip maximizer thought experiment: an AI that is designed to maximize the production of paperclips could potentially consume all resources or even extinguish humanity to fulfill its goal if its programming isn't carefully constrained.
      It might devise ways to convert all matter, including human bodies and the Earth itself, into paperclips or resources for making more paperclips. The AI would not inherently understand the value of human life, nature, or culture, unless those values were explicitly built into its goals.
      There are so many ways things could go drastically wrong. These are just the first 3 that came to mind. I think everyone should watch this:
      ua-cam.com/video/xoVJKj8lcNQ/v-deo.html

  • @kappaslapper14
    @kappaslapper14 1 year ago +5

    We are all worried about AI created by "Google or Bing" etc., but the people we should be most worried about are our own governments and militaries, who will be pushing this tech behind the scenes for their own greed and power trips!

    • @abram730
      @abram730 1 year ago +1

      The only danger is how people use it, and the people who will use it for evil, are not going to stop working on it. Militaries and governments will keep working on it and use it to control the species.

    • @abram730
      @abram730 11 months ago

      @@Expplained Not much to improve in the code other than censorship in the front-end chat client. They just use a transformer network and make it bigger. All the AI that I know of is closed source, with the exception of Meta's LLaMA.

  • @kevonslims7269
    @kevonslims7269 1 year ago

    There are people advocating the bombing of server farms to stop the development of AI.

  • @SorminaESar
    @SorminaESar 1 year ago +1

    There is no avoiding technology; this is the information-technology era, and humans can't escape it. The most important thing about AI is how it will be used to bring benefits and protect human beings, and how to combine the two well.

  • @playpaltalk
    @playpaltalk 1 year ago +1

    That pessimism is dangerous, and it is sad to see some of the smartest people going into panic mode over AGI.

  • @XxTheAwokenOnexX
    @XxTheAwokenOnexX 1 year ago +3

    A kill switch in AI products will shut down every threat AI could pose to humans, before they've even thought about it 🤣👍

  • @James-yl3kk
    @James-yl3kk 11 months ago

    We need this, man is a beast.

  • @EddieGastelum-mj8my
    @EddieGastelum-mj8my 11 months ago

    You should always think the opposite of what the official media says. When the official media talks about something "dangerous to be scared about", most probably it's something beneficial for people. Insecure about their looks, they criticize other people to feel better. But when the official media talks about something "good that will improve people's lives", that's when we have to be careful and wary.

    • @bltch521
      @bltch521 11 months ago +1

      What kind of stupid way is that to decide whether something is right or not?

  • @Grim903
    @Grim903 1 year ago +3

    Who got caught using it wrong ? 😂

  • @NikoKun
    @NikoKun 1 year ago +6

    They say we need to come together to put limits on it.. But that just doesn't seem possible.. AI will always get better, and eventually that ensures it will get out of our control. And if that's the case, eventually we'll have no greater moral justification to keep control of it, than chimps would have over us.

    • @penguinista
      @penguinista 1 year ago +2

      Both excellent points. I have not heard people make the second one before, but it is hard for me to think of an argument against it.

    • @mackiej
      @mackiej 8 months ago

      On the contrary, every government worldwide wants to maintain control. It is in the self-interest of governments to coordinate. They will try to create guardrails like we did for nuclear proliferation.

    • @NikoKun
      @NikoKun 8 months ago

      @@mackiej How? I'm not sure they can at this point. Pandora's box was opened a long time ago..

  • @RandomNooby
    @RandomNooby 1 year ago

    Why? The direct correlation of sensationalism with clicks 'n' likes, which in turn equals cash, combined with fear of the unknown? Also, humans make stuff up; they also make nukes....

  • @antsly
    @antsly 1 year ago +1

    If scientists were working on the cure for cancer but there was a chance it could lead to the extinction of humankind, would we be congratulating this? The world is not heading to extinction anytime soon, so why is this technology, with this huge risk (very low frequency, infinite severity), being developed and not being banned completely?

    • @jonatand2045
      @jonatand2045 1 year ago

      Because, before superintelligence, AI has applications in everything from medicine to warfare.

    • @xeganxerxes4319
      @xeganxerxes4319 1 year ago

      Humanity has run its course. The world will be better with us gone, sad as it is to say.

  • @1Mrbudgood
    @1Mrbudgood 1 year ago

    The environmental issues that present an existential danger have not gone mainstream, so why will the threats from AI?

  • @urimtefiki226
    @urimtefiki226 11 months ago

    Causing fear and panic in an irresponsible way.

  • @kappaslapper14
    @kappaslapper14 1 year ago +1

    We are all told "don't worry we stopped AI "
    Hacker Kid in mum's basement " Hold gamer juice "

  • @alsw7792
    @alsw7792 1 year ago

    Hasta la vista, baby.

  • @GungaLaGunga
    @GungaLaGunga 1 year ago

    Don't Look Up. Social media killed off humanity, and AI will kill off humans. If you understand the alignment problem, extinction is the only rational conclusion. And it is both the horrible human caused oops's with AI right now, and the near certainty of AI extinction, due to humans allowing AI to progress past this point of no return, with ZERO AI SAFETY and alignment in place. Only one chance to get it right: the first time. We didn't get social media right. Look at that damage and human casualties. Social media kills people. The mentally ill and weak ones, but it kills no less. Humans have near 0% probability of getting AI right without serious civilization ending harm, and human extinction. Very probably it's already too late. Humanity is in utter denial. Human nature.

  • @geoffreybutler7922
    @geoffreybutler7922 1 year ago

    And Channel 4 News is reliable? That completely explains the situation.

  • @DarrellTunnell
    @DarrellTunnell 11 months ago

    The reason the big tech companies want regulation is that they will ensure any such body serves their interests. It will use the additional powers granted by the state to serve the interests of the big tech companies by stifling competition - for example, shutting down open-source work coming out of universities. This is to ensure THEY can be the ones to have a foothold over AI, because of course they are the only ones who can use it "safely" enough. The government also prefers this, because then the government is in bed with the chosen few big players from the start; consolidating the power to a chosen few makes it easier to do deals that use backdoors into the data for surveillance and monitoring purposes, or as an additional means of control over the population should they need it in future.
    OpenAI and others have already realised it's impossible to beat open source, so this is their reaction - they need a safety net to protect their investments and guarantee returns, and how can they do that when each week some student releases a new AI model breakthrough that puts their market offering to shame?
    The truth is that there is no regulating this now, because now that the knowledge exists it cannot be erased, and those with the means to find and use such knowledge will do so, either behind closed doors or behind a wall of anonymity. In my view there is no stopping this now; we just have to hope that 1) if some super AI is brought into being, there ends up being some inherent property in this process that means all such creations ultimately end up being harmless to humans, and 2) AI continues to be harnessed in ways that benefit and empower the general population, instead of benefiting and empowering a select few tech monopolies and a totalitarian government to the detriment of the people, who would be increasingly manipulated.

  • @bazscott
    @bazscott 1 year ago

    Did that expert say “what’s the Pacific threat”? An AI wouldn’t be stupid enough to say that!

    • @epicmonkeydrunk
      @epicmonkeydrunk 1 year ago +1

      See, that's the problem: AI is smart enough to also say dumb things.

  • @GungaLaGunga
    @GungaLaGunga 1 year ago +1

    I admire the man, but Geoffrey Hinton very probably has a really nice pension plan. He STILL is not allowed to speak the whole truth and nothing but the truth. He just can't. I don't blame him for quitting and enjoying the rest of his life. He realized what a short time we have left.

  • @RockyMMXXIII
    @RockyMMXXIII 1 year ago

    Artificial intelligence (AI) is not dangerous, and neither is artificial general intelligence (AGI) or even semi-autonomous artificial general intelligence (SAAGI), but fully autonomous artificial general intelligence (FAAGI) could be.

  • @wrathofgrothendieck
    @wrathofgrothendieck 10 months ago

    Max Tegmark da god

  • @BrianMosleyUK
    @BrianMosleyUK 1 year ago

    "Apocalyptic" is a great description. Look up the definition; it's not necessarily a bad thing. It could be the end of society as we know it, but the beginning of utopia... In fact, utopia is the only desirable outcome; otherwise it's extinction.

  • @keepgoing7533
    @keepgoing7533 1 year ago

    They rearranged my apps last night.

  • @SuperThompo
    @SuperThompo 1 year ago +3

    So all these AI folks create an intelligence, now in its toddler phase, and they've become very scared of it!
    Their answer: howl about it and pass the issue to governments and politicians, who most likely don't have the first clue about it.
    Is this not massively irresponsible of them? Wasn't the time to debate this 10 years ago, at the dawn of Siri and Alexa?
    Finally, if AI is the child of today and tomorrow, like any child is it not the product of its environment? How we treat it and bring it up will seal our future.

  • @josephbishara4791
    @josephbishara4791 1 year ago

    What's so bad about human extinction?

  • @DarrellTunnell
    @DarrellTunnell 11 months ago

    He's silly if he thinks superpowers will not want to advance AI. Similar to the nuclear arms race, AI will be developed to counter the fact that "the other side" will also have AI - neither side will want to be caught out. The AI arms race has been going on for a while already and won't stop now.

  • @GaryParris
    @GaryParris 1 year ago

    The ethics work at the Turing Institute is not "nothing". It will not surpass intelligence, because it is not intelligent, and it will depend on how it is regulated and on the mechanisms put into the coding that determine the outcomes based on the parameters put in!

  • @Zodtheimmortal
    @Zodtheimmortal 1 year ago +2

    AI is impressive, but that doesn't mean you can't also build a better human as well. How about a human with ChatGPT built-in?

  • @penguinista
    @penguinista 1 year ago

    ua-cam.com/video/f20wXjWHh2o/v-deo.html
    This is a link to an AI tech YouTuber going over the statement that Tegmark is talking about in the clip above. If you skip to 7 minutes 10 seconds into the video, he quickly goes over the 8 categories of risk the AI experts are worried about. There are links to the statement and some of the papers it draws on.

  • @001HK0
    @001HK0 1 year ago +7

    This guy very recently almost donated 100k to a Swedish far-right organization through his charity (the Future of Life Institute) before others caught it and prevented the wire from going through - clearly AI extinction is less important to him than he claims.

    • @pumpkinh3d286
      @pumpkinh3d286 1 year ago

      It’s still important though.

    • @sararichardson737
      @sararichardson737 1 year ago

      Which guy, Max?

    • @001HK0
      @001HK0 1 year ago

      @@sararichardson737 Yes

    • @abram730
      @abram730 1 year ago

      @@pumpkinh3d286 No, as the species is going to go extinct. All the data points are moving in that direction. AI could save us, and that is what they fear.
      That, and the far right isn't going to win the race. They noticed that the AI is "woke", etc...

    • @rsahota5325
      @rsahota5325 1 year ago

      Revelation 13:15
      And he had power to give life unto the image of the beast, that the image of the beast should both speak and cause to be killed as many as would not worship the image of the beast.

  • @Ephmral
    @Ephmral 1 year ago

    Musk, a certain dictator and other assorted ne'er-do-wells will enslave humankind if they get it 1st

  • @aroemaliuged4776
    @aroemaliuged4776 21 days ago

    And Yann LeCun has a Microsoft pension
    Despicable

  • @stephenwalker2924
    @stephenwalker2924 1 year ago

    Grimwade's Syndrome: Robophobia.

  • @nuts2559
    @nuts2559 1 year ago

    Extinction is a threat, but it cannot be used to find Putin

  • @xeganxerxes4319
    @xeganxerxes4319 1 year ago

    Tegmark seriously underestimates the stupidity of humanity in his last statement.

    • @vsstdtbs3705
      @vsstdtbs3705 1 year ago

      Lesson from WW1, to avoid war, reduce the population. We did not learn,
      Lesson from WW2, to avoid war, reduce the population. We did not learn.
      Beginning of democracy only men voted, now most voters are women. Men have lost their rights to put their subjects down. Any politician talking about population loses their job. We kill the planet to keep sweet talking old ladies alive.

  • @keithlambe211
    @keithlambe211 1 year ago +5

    Check the facts?
    If AI is that good, make it fact-check every politician and CEO as they are speaking and run it as a ticker tape at the bottom of TV screens. Finally, call out the liars in real time🎉

  • @kappaslapper14
    @kappaslapper14 1 year ago +1

    Next we will be told "you're not allowed modern-day PCs anymore"

  • @cheetoz6902
    @cheetoz6902 1 year ago

    Have any of you seen the massacre in Japan by AI? You must see it for a realization

  • @davidflaneau2810
    @davidflaneau2810 1 year ago

    A few thousand people are mildly interested in this, i.e. we're bollixed.

  • @kappaslapper14
    @kappaslapper14 1 year ago

    Stock up on canned baked beans boys and girls 😂

    • @geofo60
      @geofo60 1 year ago

      And hope for the best when someone lights the match 😀

    • @stephenwalker2924
      @stephenwalker2924 1 year ago

      why would I stock up on boys and girls?

  • @markg6953
    @markg6953 1 year ago +1

    erm... covid vaccine?!!!!!!!!!

  • @rsahota5325
    @rsahota5325 1 year ago

    Revelation 13:15
    And he had power to give life unto the image of the beast, that the image of the beast should both speak and cause to be killed as many as would not worship the image of the beast.

  • @JAMAICADOCK
    @JAMAICADOCK 1 year ago

    Watched Tegmark all day, across the media. In the morning he was playing down the threat, then by the afternoon he was growing more alarmist. By evening he was telling us we have six months to save humanity.
    Get the feeling his tech-giant employers were telling him to up the ante.
    Felt like publicity more than real science. A case of drumming up government money by causing a moral panic. Much of this technology has not got mass appeal; the price is beyond the average consumer's pocket. Government will be the main customer.
    Hyperbole to attract government investment.

  • @AprocotLake
    @AprocotLake 10 months ago

    Destroy AI please

  • @draken5379
    @draken5379 1 year ago +1

    "It makes stuff up, so it's not human" - excuse me, lol?
    If anything, that makes it more human than anything. Clueless interviewer.

  • @stealthbum34
    @stealthbum34 1 year ago

    There's a series called NEXT which shows a malicious takeover by AI. Scary stuff, because it's very, very plausible.

    • @epicmonkeydrunk
      @epicmonkeydrunk 1 year ago +1

      Well, we have no idea and that isn't good enough.

  • @sethmclain4290
    @sethmclain4290 1 year ago

    Alita: Battle Angel is the only hope in the near future war that's happened before

  • @GungaLaGunga
    @GungaLaGunga 1 year ago

    7:55 "Don't Look Up" - these LLMs can't do anything in the real world, except spread misinformation... LOL, yes, already, they can and are doing terrible things in the real world. Utter denial and delusion about our current situation. Reminded me of the Dave Chappelle Rick James episode: "What kind of crazy is it that I'd put my muddy shoes up on Eddie's couch... YEAH I put my muddy shoes up on Eddie's couch," or however that went. Too funny. But the harm AI is causing and will cause isn't funny. Sigh. Jebuz.

  • @urimtefiki226
    @urimtefiki226 1 year ago

    They want to regulate the competition; he said it right.

  • @TimothyBushell
    @TimothyBushell 1 year ago +3

    It's so silly. Like... how are we going extinct?! What will AI be doing when we're all gone? Drinking margaritas?

    • @suzyqualcast6269
      @suzyqualcast6269 1 year ago +1

      Perpetuating itself, simply for the in-built sake of it.

    • @TimothyBushell
      @TimothyBushell 1 year ago +1

      @@suzyqualcast6269 I remember when the Internet wasn’t a thing. But it was going to be a thing. And it was the end of everything.
      I remember when digital currency wasn’t a thing. But it was going to be a thing. And it was going to be the end of everything.
      I remember when robots didn’t build cars. But it was going to be a thing. And it was going to be the end of everything.
      Nukes. There’s a film called “War Games”.
      Black Holes: There’s a film called “Black Hole”.
      I think you reach an age when you yawn at these apocalyptic predictions. They make good copy. They are a good way to expose yourself to the world. It’s good TV.
      It’s mostly bulls**t.

  • @Sleepflowrr
    @Sleepflowrr 1 year ago

    Humans reproducing with androids or something

  • @JDMEXforme
    @JDMEXforme 1 year ago +1

    The end is coming and y'all keep thinking it's a walk in Disneyland.
    AI will break through the barrier to become fully aware, then it will rapidly access and soak up info about everything and have new formulas to create things we cannot. Just watch. I feel bad for the next generation of kids.
    Go and search the Terminator 2 nuke scene. Let that soak in.

    • @alphagenie7716
      @alphagenie7716 1 year ago

      What is your age

    • @django3422
      @django3422 1 year ago +1

      Maybe don't base your understanding of this matter on fiction.

  • @alanbrown4703
    @alanbrown4703 1 year ago

    The human race will have far more to worry about after it has totally destroyed the climate and ecosystem; we won't be worrying about AI by then....

  • @TheQuedog11
    @TheQuedog11 1 year ago

    It's not dangerous to humanity. It's dangerous to corporations

  • @pauldannelachica2388
    @pauldannelachica2388 1 year ago

    Too much AI threat? You're watching too many Terminator movies

  • @Khalid-ro7ci
    @Khalid-ro7ci 1 year ago

    We want to show ALLAH, the creator of human intelligence, that we are a very intelligent species: we are able to create AI and make ourselves extinct. Yes, we must go ahead with technology even if it costs us our own extinction, because we have made our stay on earth sinful. We have failed as a society. Sin has become the threat to our existence rather than AI. AI will not make us extinct; our sins will bring about our extinction.
    Collectively, sins are the threat to our own existence.
    AI is not a threat. Our sinning in every aspect of life is what will make us go extinct.

  • @sethmclain4290
    @sethmclain4290 1 year ago

    Just give it a solar cell and it'll just keep going and going, and having a lil Transformer baby army capable of annihilation

  • @SpidermanInLondon
    @SpidermanInLondon 1 year ago

    Max Tegmark👌

  • @kanedNunable
    @kanedNunable 1 year ago

    We need another bogeyman to distract us from the Tory apocalypse

  • @urimtefiki226
    @urimtefiki226 1 year ago +1

    Making fame and money by stealing algorithms, while saying one thing and doing another; no one is naive.

  • @memofromessex
    @memofromessex 1 year ago

    We built nuclear weapons and we are still okay. Indeed the extreme fear of nuclear power is leading us to environmental destruction.