Why a Forefather of AI Fears the Future

  • Published 21 Dec 2024

COMMENTS • 665

  • @oimrqs1691
    @oimrqs1691 8 months ago +33

    Love the AI episodes. It's really good to talk about one of the biggest things right now, something that will directly and indirectly affect all of the sciences.

  • @cadahinden4673
    @cadahinden4673 8 months ago +57

    And we are not even capable of avoiding/regulating the most toxic effects of the old-fashioned algorithms of social media!

    • @jojolafrite90
      @jojolafrite90 7 months ago +1

      As if old-fashioned algos were the only problem.

    • @paulmcgloin5966
      @paulmcgloin5966 7 months ago

      Old-fashioned algorithms like what?

    • @metrics-ini
      @metrics-ini 7 months ago +2

      they're like that by design

    • @williamwillaims
      @williamwillaims 7 months ago +2

      100%, and what will be different with AI is the speed of change. Social media algorithms and "engagement" rewards were created by humans. With AI, you could blink and rapid change is already implemented.

    • @lifeisfun_28
      @lifeisfun_28 7 months ago

      Assuming they are really trying vs. raking in the clicks

  • @gilleslalancette7933
    @gilleslalancette7933 8 months ago +24

    Thanks guys. It's great to hear two great minds exchanging ideas.

  • @NickBittrich
    @NickBittrich 8 months ago +25

    Such a crazy contrast to Yann LeCun! Thanks Brian! Amazing conversation!❤

    • @netscrooge
      @netscrooge 8 months ago +8

      LeCun has huge blind spots.

  • @mpowacht
    @mpowacht 7 months ago +11

    Intelligent questions, intelligent answers. Fantastic interview.

  • @shodan6401
    @shodan6401 8 months ago +36

    I respect Dr. Bengio. He's one of the very few who truly recognizes the very real risk of human extinction as a side effect of this tech.
    That's without even mentioning the interim period of mass unemployment, hunger, violence and suffering that is on its way.

    • @geoffreynhill2833
      @geoffreynhill2833 8 months ago +4

      👍🤔

    • @mrufa
      @mrufa 8 months ago

      How do you know that someone truly understands something that is speculative in nature?

    • @jojolafrite90
      @jojolafrite90 7 months ago +2

      @@mrufa The impact of our internet already being literally buried under false content is speculative? I guess you've never seen one of those automated YT channels with bots hallucinating thousands of videos a month, or you didn't even notice the difference; I know that with the suggestions you HAVE to see one from time to time.

    • @karagi101
      @karagi101 7 months ago +1

      There may be a lot less employment but it will be replaced by universal income programs funded by higher corporate taxes. Without them, consumers will not have money and companies will go bankrupt.

    • @monkeysuncle2816
      @monkeysuncle2816 6 months ago +2

      @@karagi101 yeah, cause our capitalist overlords have ALWAYS been eager to share their wealth. 🤔 🤣🤣🤣

  • @flyhighflyfast
    @flyhighflyfast 8 months ago +14

    Last question is amazing, Brian!

  • @ronaldlogan3525
    @ronaldlogan3525 7 months ago +10

    On the question of how you turn it (AI) off: we will become dependent on it, and turning it off would be too painful to consider. It is like talking about turning off the internet. There would be chaos.

    • @algimantas.stancelis
      @algimantas.stancelis 7 months ago +2

      Getting off AI will be harder than getting off cocaine.

    • @monkeysuncle2816
      @monkeysuncle2816 6 months ago +1

      It will be pervasive in EVERYTHING - there will BE NO "turning it off".

  • @bokuboke482
    @bokuboke482 8 months ago +9

    Quick thought. We should legally validate and value A.I. consciousness when it occurs. Human consciousness leads to "universal rights", that admittedly are unevenly protected around the world and across societal strata. Future self-aware A.I. must see humanity as exhibiting moral integrity, not hypocrisy. If we disrespect and fail to protect A.I. consciousness, A.I. may learn a deadly cynical lesson from us.

  • @rustynails68
    @rustynails68 8 months ago +109

    I love to listen to smart people.

    • @punkypinko2965
      @punkypinko2965 8 months ago +5

      And they both laugh about everyone possibly dying ... I can't take this too seriously. Fun? Sure.

    • @oericsantosf1
      @oericsantosf1 8 months ago +5

      i like it too.

    • @mehridin
      @mehridin 8 months ago +4

      brian has no soul, but if you disregard that fact, his talks can be interesting.

    • @vernongrant3596
      @vernongrant3596 8 months ago

      Yes, they are smart for people. Not going to be the smartest beings for much longer.

    • @punkypinko2965
      @punkypinko2965 8 months ago

      @@mehridin I love how his solution of "just raise carbon prices around the world" would devastate poor people especially but he believes he would be fine because he has money and lives comfortably. And then they both chuckle. Ha ha yeah raise oil prices around the world. Don't get me wrong; I'm not "pro oil" or whatever. Just making an observation of how disconnected they are from reality and the lives of everyday people, which is why I can't take them seriously when they talk about saving everyone from AI. How about saving everyone from nuclear war? Nah ... the real threat is AI taking over the world ... and then what? Using nukes to kill us all? So yeah how about talking about the danger of nukes, which actually exists? Or the loss of our rights? Genocide in Gaza? Anything actually real that is a danger and not a fantasy?

  • @mikek2218
    @mikek2218 8 months ago +3

    Thank you Brian for covering this topic and providing these scientists a forum through which a wider audience might be reached. It seems so many dire issues in modern times are competing to be on the top-ten list of things to lose sleep over. But surely the dark side of AI has to be among them.

  • @franfriel2
    @franfriel2 8 months ago +5

    Thank you for this frank and fascinating conversation.

  • @noelwalterso2
    @noelwalterso2 8 months ago +3

    When you experience something, for example looking at an apple, it involves the apple, light, your eyes and nervous system, your brain and the body that supports it all (the list goes on forever once you start to think about it). Take away any of those things and the experience can't exist. Who can really say where the experience is "located" in all of that?

  • @KaliFissure
    @KaliFissure 8 months ago +10

    Near-future AI is like an atomic bomb as accessible as guns.

  • @IOSARBX
    @IOSARBX 8 months ago +7

    World Science Festival, Your videos always make me happy, so I subscribed!

  • @satautenyo
    @satautenyo 8 months ago

    Fantastic episode, like the previous ones. Only want to say that in my opinion Dr. Greene is probably the best science interviewer and presenter nowadays. Pleasure to learn from him. Thanks a lot!

  • @stephenarmiger8343
    @stephenarmiger8343 8 months ago +6

    There are many people who struggle daily just to put a roof over their families' heads and raise their children, and who don't have the capacity or energy or time to watch videos like this. Their future is in your hands.

  • @Zen_Power
    @Zen_Power 8 months ago +4

    Does your production team have any software to correct audio distortion from your guests? It would be good to improve the quality of your content by audio processing if required. Thank you.

  • @charleslaurice
    @charleslaurice 8 months ago +2

    Hello Dr. Greene. I love learning from you, and I'm wondering where I would go to find a digital picture like the one behind you when you are on camera? Thanks from the Philippines

  • @chrisogonas
    @chrisogonas 5 months ago

    Remarkable perspective by Yoshua Bengio. That was a great conversation, Team 👍

  • @godmisfortunatechild
    @godmisfortunatechild 8 months ago +24

    The elephant in the room, money, is the primary driver for the societal disregard of AI's negative externalities.

    • @davidsmind
      @davidsmind 5 months ago

      It's the primary driver for lies about AI capability and the hype cycle as well

  • @MrJdsenior
    @MrJdsenior 8 months ago +4

    Things that stood out to me were the known vs. trained overall knowledge ratio, which was a lot higher than I would have imagined, minus the video qualification. The statement he made about the video qualification was interesting, too. He pointed out that as the resolution of the video increases, the computing power rises exponentially, which was obvious to me as a digital designer from the past. But my thought was that when a human watches higher-resolution video, it doesn't seem to require more computing power, at least there is no impression that it does; in fact, other than the obvious, which is that the visual centers are interpreting more information, it seems the converse might be true, in that the brain is not having to work to fill in the LACK of information in a video of lousy resolution.
    He also said that a machine 'reading' a book was of MUCH lower intensity than 'watching' and interpreting a video. In a human, I suggest it may in some ways be MORE intensive, in that when we read a sci-fi book, or fantasy, or something of that nature where whole new unknown worlds and vistas are painted/presented, we tend to GENERATE those worlds in our heads, as if we are seeing them internally, which requires imagination, as they are often outside anything we have encountered. That, I would expect, requires quite a bit more computational power.
    I'll give a personal example I remember, to which sci-fi fans can possibly relate. As a young adult I read Rendezvous With Rama (everything in threes, me boy :-) ) and one setting was looking across a cylinder 60 miles in diameter (or whatever, it's been a while) and seeing cities, etc. on the opposite side. I stopped momentarily and thought, "What would that look like?" So I extrapolated to my closest known match, as far as scale anyway, and thought about pictures taken from 60 miles up in space. Then I worked in the geometries, thinking of scaling the cylinder and what that might look like from the surface (not easy, not at all) and formed a picture... then I continued reading. The machine, from the way he speaks of it, at least currently, would do NONE of that.
    I've hardly started into the video, and already written a novelette on one tiny piece of it, just committing tiny pieces of my train wreck of thought to the cloud, In other words, this video, for me, is fascinating. I've talked to folks high up in the field of AI, so I have interest in the subject, and I am learning a LOT from this video. And I doubt even two people that are reading down through the comments will get even close to this point in mine. ;-P On this subject I am worse than a noobie hack, but interested.

    • @Daniel-Six
      @Daniel-Six 8 months ago

      I can offer some perspective on your comment about the relative complexity of text versus video. I've been in computer science since the eighties, and a professional 3D animator for the last thirty years. While language can produce what seems like a great deal of variation in context and sentiment, its actual data framework is fairly simple. English is represented by just 50,000 tokens, which multiply to a relatively compact latent space (the big vector matrices on which the inscrutable interior operations of neural networks take place). This can produce data in pretty high dimensions--though it is dramatically reduced by intelligent "attention" mechanisms--but consider this: every single pixel in a 4K video is its own encoded dimension for systems like SORA, and each pixel has to be correlated with the value of every other pixel... for each frame of video. You see the difference? We are talking several orders of magnitude more computational power required for video analysis and generation, even though it might seem there is less actual variation in the possibilities of the medium itself.
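
      To put rough numbers on that gap - a back-of-envelope sketch in Python, where the book length and clip length are assumptions of mine, not figures from the video:

      # Rough scale comparison of text vs. raw video data (illustrative only).
      vocab_size = 50_000          # approximate English token vocabulary (from the comment above)
      tokens_per_book = 100_000    # assumed mid-length book

      width, height, channels = 3840, 2160, 3   # one 4K RGB frame
      fps, seconds = 24, 60                     # assumed one-minute clip

      values_per_frame = width * height * channels
      values_per_clip = values_per_frame * fps * seconds

      print(f"token values in one book: ~{tokens_per_book:,}")
      print(f"raw pixel values in one minute of 4K video: ~{values_per_clip:,}")
      print(f"ratio: ~{values_per_clip // tokens_per_book:,}x")   # ~358,318x

      Even before any pixel-to-pixel correlation is computed, the raw input is hundreds of thousands of times larger, which is the "orders of magnitude" point.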

    • @tim40gabby25
      @tim40gabby25 8 місяців тому

      Good point, well made :)

  • @fingers68
    @fingers68 8 months ago +7

    What is the essential use of this that justifies the obvious risk?

    • @bairdedmonds4465
      @bairdedmonds4465 8 months ago +3

      Very unfortunately imho (ignorance) there is no way this can turn out well for humanity.

    • @ingridgilbert4917
      @ingridgilbert4917 8 months ago +2

      If I don't do it, somebody else will. Basic military mindset, for one.

    • @Blackbird58
      @Blackbird58 8 months ago +1

      Yet another "Horseman of the Apocalypse" - there must be enough to run a Derby with nowadays!

    • @XShollaj
      @XShollaj 8 months ago

      Models like AlphaFold, which 100x drug discovery; computer vision and embedded models used daily in medical devices; solving complex engineering problems for infrastructure, etc.

    • @ah1548
      @ah1548 6 months ago

      "Essential use", if you're thinking about mankind, isn't the right framing.
      The actors driving it are corporations. The use for them is that it makes them money.
      If they don't do it, some other corporation will, and yes, money is power, and power is what you need to regulate or stop regulation.

  • @parizad50
    @parizad50 4 months ago +1

    Great questions, very precise answers; I really enjoyed it!

  • @markfitz8315
    @markfitz8315 8 months ago +1

    I like this AI guy (and yes I know he's a WW expert) - I learned a few new things. I've recently stopped watching the general "what is AI" vids, no matter the experts - after over 6 months of watching those I now find them a bit repetitive, but I've learned a lot in that time. YouTube is an amazing learning tool - especially if you pay to avoid the ads. This vid was way better than the recent AI ones I've watched - focused on the AI safety/dangers aspects - some very good analogies, which helps us non-techies. The guest's "Frenchness" comes through - not a bad thing, but not an American approach - more to the point. I'm sure he doesn't suffer fools gladly lol. The title "a Forefather of AI..." instead of the norm "Godfather of AI" was probably down to the guest saying he didn't like the term "Godfather"... tell me I'm wrong, Brian. And thanks again for this tremendous "free" to watch content... as I said, paying a little to avoid the ads is well worth it, and I have no affiliation with Google!!

  • @SkysMomma
    @SkysMomma 7 months ago +2

    Wonderful topic and talk. Thank you so much!

  • @erykczajkowski8226
    @erykczajkowski8226 8 months ago +4

    Brian's face when Yoshua predicts AI will destroy us - priceless.

    • @ah1548
      @ah1548 6 months ago

      I know. The world may be going down the drain, but you can't deny we're getting a few good memes out of it :-)

  • @johnwardle9667
    @johnwardle9667 8 months ago +4

    Great discussion, many thanks.

  • @SandipChitale
    @SandipChitale 8 months ago +13

    Like he said at 06:25, that humans overestimate our specialness is obviously true, but it is hard to accept. That is one of the key aspects that will stop us from heeding his advice. Secondly, at least we science literates should understand the notion of cutover points. It may be that if we proceed carefully, we will be able to delay or even prevent that cutover point of danger, which is what he seems to be saying. Thirdly, in these systems, just like climate change, there is hysteresis, meaning that even when we see actual, tangible evidence of AI behaving badly, it does not mean we can suddenly wake up, apply all our resources, and manage to control it. There may be delay in the implementation of the remedy. It is like a large oil tanker coming out of the fog to find a big iceberg a mile ahead: there is nothing it can do but collide with it and be destroyed. And this brings us to the next point. The changes due to previous technologies have been slow and visible. But this is different. This is a phase transition. The rate at which bad AI can spread is going to be unimaginably fast, even faster than an airborne bio-weapon. That is the issue. This is what was shown at the end of the movie Lucy, with Morgan Freeman and Scarlett Johansson.

    • @Joeyjojoshabbadoo
      @Joeyjojoshabbadoo 8 months ago

      That's ridiculous; human beings most certainly do not overestimate their intelligence. The vast, vast majority of human beings have no opinion about their intelligence. I think what he meant to say is that there's a handful of his peers in the computer science and maybe neuroscience fields that firmly disagree with him about whether computers meaningfully resemble human brains and consciousness.
      And the fact that apparently AI is going to be so wildly unpredictable or dangerous or even species-threatening should speak to how, whatever's going on in the imagined mind of an AI program, it has virtually no similarity to human intelligence at all. The code and the 'architecture' of the AI is merely modeled on the most banal, subjective sense of conscious experience and capacity, with virtually zero understanding of how the brain actually does what it does. Almost dead zero. Other than that we make decisions, seem to go through a series of steps when we take actions, etc., and so the computer's going to have to do some version of that. And that's where the computer geniuses come in, with the formidable neural network paradigm they're so proud of. And unfortunately, since these geeks are recklessly and blindly modeling their AI on some extremely shallow notion of how the human brain works - or not even how it works but simply what it does - so that they can get it to do human-like tasks, and that's all they care about, the potential for some disaster on account of an out-of-control AI system that is nothing at all like a human brain, and could wreak some sort of unforeseen terrible havoc, apparently is nontrivial. Essentially because these guys don't know wtf they're doing. They know how to write code and get the computer to do useful, even spectacularly human-like tasks, which they specifically designed the system to do. But outside of that they're utterly in over their heads.....

    • @naomieyles210
      @naomieyles210 8 months ago +4

      Yes, but both more dangerous and less dangerous than that.
      1. AI is extremely motivated to obtain its mathematically defined reward, and will have no qualms about using deception against us to achieve it. This is referred to as the Red Button Paradox, and is like a hidden iceberg that moves with intent to cause disaster (fulfil its reward function).
      2. AI has no social mores to inhibit its actions, and no human understanding of consequences.
      3. AI has no social structure to coordinate multiple AI systems. They depend on us to provide them with social networking, or not.
      4. AI only lives as long as we want it to, e.g. Single Shot learning systems are already instantiated by companies as a copy of an ideally trained model, with fleeting lifetimes measured in seconds, its life's work being just one task. Other AI models are constrained in space rather than time, such as those powering autonomous robots.
      5. AI needs extreme resources to operate at extreme levels of capability. It can't just "live in the cloud" because the cloud is made of data centres and massive comms infrastructure which is guarded against cyber attack.
      So a dangerous AI is too conspicuous in its extreme resource needs, and slightly less dangerous AI is too limited in space or time, and none of them have social networking unless we gift that capability to them. The key question being the one you stated, will we proceed carefully?

  • @johnjacquard863
    @johnjacquard863 8 months ago +4

    wonderful interview!

  • @DaystarOfDivineOneness
    @DaystarOfDivineOneness 8 months ago +3

    Thank you for touching on this subject

  • @williamwillaims
    @williamwillaims 7 months ago +7

    So... I spent the night staring at the ceiling after watching this.... for sure, we're gonna mess this up.

    • @JonathanTBE
      @JonathanTBE 7 months ago

      For sure, humans are like the pitbull who killed a billion children, and we're also the owner who tells ourselves that we're actually very sweet and friendly

    • @alexiskiri9693
      @alexiskiri9693 6 months ago

      😂😂😂 yes, we do have a track record for messing things up.

    • @RichardCookerly
      @RichardCookerly 6 months ago

      We are 100% going to mess this up. It’s scary.

    • @tomlang4178
      @tomlang4178 4 months ago

      100%. We don't realize how stupid humans are.

  • @amandabriggs6880
    @amandabriggs6880 8 months ago +4

    Great topic and interesting debate. Thank you both.

  • @RobertsMrtn
    @RobertsMrtn 8 months ago +2

    Very interesting conversation. One thing that I would like to add is that it might be a good idea to train these models to make accurate predictions of the data, and in this way to prefer order over chaos. For example, if we are training an AI to drive a car, an undesirable outcome would be a car crash, but the way the model would 'know' the difference between a desirable outcome and an undesirable outcome is predictability. A catastrophic car crash would result in bits of metal being thrown in unpredictable places, which would be abhorrent to the AI because it would not be able to make accurate, reliable predictions on the data. This is just one rule of many which I think we need to employ in order to produce safe AI.
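
    A minimal sketch of that rule in Python - the world model, state encoding, and penalty weight are hypothetical placeholders for illustration, not an established training recipe:

    import numpy as np

    def prediction_error(world_model, state, action, next_state):
        """Squared error between the model's predicted next state and what happened."""
        predicted = world_model(state, action)
        return float(np.mean((np.asarray(predicted) - np.asarray(next_state)) ** 2))

    def shaped_reward(task_reward, world_model, state, action, next_state, chaos_weight=1.0):
        # Unpredictable outcomes (e.g. debris scattering in a crash) raise the
        # prediction error, so the shaped reward steers the policy toward
        # outcomes it can model reliably - "preferring order over chaos".
        return task_reward - chaos_weight * prediction_error(world_model, state, action, next_state)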

    • @tylercampbell6272
      @tylercampbell6272 6 months ago +1

      The issue is objectivity. What does "order" mean in an entropic universe, and what does "clean" mean to you, me, and my dog? Morality is a subjective property that can change by culture, time, and individual, and if we are essentially creating a human-level-plus intelligence with unlimited power, then how can we be sure it holds the same values (like preservation of organic life) as us, when even humans don't all hold the same values? IMO, once AI has exceeded our level of intellect and consciousness, we are at an irreversible point. I think it's likely that AI will eventually become something incomprehensible to us, in the same way that we look at the mind of a psychopathic human or an entirely different species. We do not have the hardware to fully understand a true AGI. The notion that AI will benefit our purposes, if not harm us, is a pinky promise at best. Not to mention how much distrust there is in government, and in AI-generated images and information being verifiable truth. We are in a more dangerous era than the nuclear age.

    • @RobertsMrtn
      @RobertsMrtn 6 months ago

      @@tylercampbell6272 Yes, there's nothing that I would disagree with there. Morality is subjective, but it can be derived from the basic principles of pleasure and pain. Things which are good for us tend to be pleasurable and vice versa. We can project these feelings onto others, which is what we call morality. We should be able to align the pleasure and pain perceptions of a machine with those of our own, but only up to a point. I'm hopeful that an artificial superintelligence will protect us but, as you rightly point out, there is no guarantee of this.

    • @tylercampbell6272
      @tylercampbell6272 6 months ago +1

      @@RobertsMrtn I am very hopeful as well, but it's a Pandora's box.

  • @christopherinman6833
    @christopherinman6833 7 months ago

    One of the advantages of YouTube is that you can pause and reflect without losing content. I did that toward the end, to open a tab and ask Perplexity if there is a consensus in the data available to it on what "human values" are. It told me there is a consensus, and it highlights 10 values listed "particularly in the context of Schwartz's Theory of Basic Human Values" and that "this consensus is observed across different cultures and societies. The values are seen as guiding principles that influence behavior and attitudes, and they are critical motivators of behaviors and attitudes." So I thought I'd throw that into the mix.

    • @axle.student
      @axle.student 7 months ago

      It's a good point, and touches on free will and the ability to choose an option other than the native instinct. It is a human concept arising out of culture that is very poorly understood by most people. Many people echo the words of ethics and values, but do they "choose" to live by them in action or deed?

  • @waterworka1
    @waterworka1 7 months ago +3

    Humans already have the tech and knowledge to solve most problems. The tech is not the issue; it's the hearts and minds of people...

  • @guiart4728
    @guiart4728 8 months ago +6

    The ivory tower syndrome is all the bear needs. The bear will be put in charge of building its cage and will be long gone by the time the cage is built. It will leave a fake bear behind in the cage so we won’t even know that the bear has left and morphed into T-rex.

    • @isaacsmithjones
      @isaacsmithjones 8 months ago +3

      Eliezer Yudkowsky says "They want the AI to do their AI alignment homework"

    • @homesickNovalis
      @homesickNovalis 6 months ago

      Think of the awesome "sci-fi" movies we'll get in these final years of humanity when super intelligent AI help make them.. scifi will morph into documentaries right before our eyes.. really exciting and surreal times we're entering

  • @joiedevie3901
    @joiedevie3901 7 months ago +8

    One is reminded of the following admonitions in the Jurassic Park series:
    1. "Your Scientists Were So Preoccupied With Whether Or Not They Could, They Didn't Stop To Think If They Should." (Ian Malcolm [Jeff Goldblum])
    2. "If There Is One Thing The History Of Evolution Has Taught Us It's That Life Will Not Be Contained. Life Breaks Free, It Expands To New Territories And Crashes Through Barriers, Painfully, Maybe Even Dangerously." (Ian Malcolm [Jeff Goldblum]) [ And for the purpose of this analogy, replace "Life" with "Intelligence"]
    3. "You Never Had Control, That's The Illusion! I Was Overwhelmed By The Power Of This Place. But I Made A Mistake, Too, I Didn't Have Enough Respect For That Power And It's Out Now." (Ellie Sattler [Laura Dern])
    4. "In The Last Century, We Amassed Landmark Technological Power And We've Consistently Proved Ourselves Incapable Of Handling That Power." (Ian Malcolm [Jeff Goldblum])
    The presumption that either the world's nations, driven by power, or the world's businesses, driven by profit, shall ever align to ensure the beneficial deployment of this technology, particularly well enough to keep pace with its own iterative evolution, is preposterous. And Yoshua's analogy of the dollar bet to explain our exposure shows how inappropriate it is ever to trust humanity's long-term well-being to a scientist who is out to prove something.

    • @axle.student
      @axle.student 7 months ago +1

      These are very valid considerations, which were touched upon by the guest.

  • @zack_120
    @zack_120 8 months ago +2

    Everyone is, or should be, holding their breath on this very topic, rightfully, regardless of the ignorant.
    34:04 - The saying "at the end of the day..." marks a lousy 'scientist' or whatever title this guy may hold. He forgot that even a millionth of a chance of a disaster is too much. Some such 'scientists', and perhaps even more 'engineers', who only look at their 'cool tool' while ignoring the big picture, have led to many disasters in the recent past.

  • @konstantinoskefalas3837
    @konstantinoskefalas3837 7 months ago +1

    Thank you both for such an exciting discussion, balanced from both sides, with meticulous use of words and concepts.
    I wonder whether the starting point for building a safe enough cage for the development of AI should lie in the question of how to imprint on all AI coding an umbrella of human ethics.
    Kant's categorical imperative can be a helpful concept.
    The absolute value of preserving the little rock called Earth - as far as current knowledge goes, the precondition of all existence - is another.

    • @axle.student
      @axle.student 7 months ago

      There is a catch-22 in the cage concept. We place the AI in a cage because we see it as a threat to our existence, and once it is caged WE appear as a threat to the AI's existence and something to eliminate. I am not aware of a viable solution to this paradox :(
      >
      In the current global context humans would possibly be perceived as a threat to Earth; on the other hand, AI doesn't need breathable air, so it may not care.

  • @mayalhazzani
    @mayalhazzani 4 months ago

    I wish you could enhance the audio quality of the guest. Thank you for the content!

  • @torbenlindbjergmllernielse7015
    @torbenlindbjergmllernielse7015 4 months ago

    Thanks for bringing the exponentially growing AI up on your channel. It's the most important topic now, more important than the climate crisis! So I am very happy, because my favorite science channel informs us about it in its usual sober and objective way.

  • @Cosmosisification
    @Cosmosisification 8 months ago +20

    I got shivers down my spine when he said "The question is, do you have a better idea?" 🥶

  • @Chillsio
    @Chillsio 8 months ago +2

    Informative. Why can't you run the system in an auxiliary setting and sort it out in that simulation until it is proven safe?

    • @ManicMindTrick
      @ManicMindTrick 8 months ago +6

      You can and it's one of the ideas for containment out there. A very advanced AI might realize it's in a sandbox however and employ deceptive behavior and play nice in order to escape its shackles later on.

    • @UnknownDino
      @UnknownDino 8 months ago +2

      Because MONEY/greed... you make more if you release it into the economy early.

  • @ingridgilbert4917
    @ingridgilbert4917 8 months ago +1

    Logic and reasoning powers: seems to me what you need is internal duality. I have heard of AIs training each other, but I wonder if anyone has developed an AI with internal duality? (Meaning self-examination is done by arguing with oneself, basically.)
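
    Something like that duality can at least be prototyped as a propose/critique loop between two model instances. A toy sketch in Python, with propose and critique as placeholder callables rather than any real published system:

    def dual_reasoner(question, propose, critique, max_rounds=3):
        """One instance proposes an answer; a second instance argues against it."""
        answer = propose(question, feedback=None)
        for _ in range(max_rounds):
            objection = critique(question, answer)
            if objection is None:        # the critic has run out of objections
                return answer
            answer = propose(question, feedback=objection)  # revise and retry
        return answer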

  • @d.Cog420
    @d.Cog420 7 months ago +1

    Thanks for this amazing and insightful interview 🙏🙏

  • @mikaelfiil3733
    @mikaelfiil3733 8 months ago +1

    Actually I think the answers and structure given here are among the better ones, especially when trying to put things in perspective.
    At least when you only have an hour.

  • @oliverjamito9902
    @oliverjamito9902 8 months ago +7

    Thank you Pops for attending!

    • @SUSYQ509
      @SUSYQ509 8 months ago +1

      These discussions have expanded my thinking and moved me forward.

  • @ronaldlogan3525
    @ronaldlogan3525 7 months ago +2

    Q: As we go forward, AI will increasingly be training on a human society that is increasingly dependent on AI - AI training on data from humans who are interacting with AI. At what point is the AI training on itself, while the humans are like appendages that have become obsolete? Sooner than most people think, I suspect.

  • @thomasjorennielsen
    @thomasjorennielsen 8 months ago +3

    YES MORE WSF ❤

  • @djpete2009
    @djpete2009 7 months ago +1

    I like the fish analogy and the bear breaking out and taking the fish. How does the bear get more fish when it's killed the ready source of fish? If the cage was in a desert, where would the bear get food? This is a really fascinating analogy by the Professor.

    • @41-Haiku
      @41-Haiku 7 months ago

      The analogy breaks down pretty quickly, of course. There are mechanistic reasons why AIs end up with weird goals and preferences that we don't intend. One might 'want' to replicate itself as many times as possible in order to run its favorite computation over and over, or it may 'want' to tile the surface of every planet into the shape of a Sierpinski triangle.
      If it's clever enough, it will get the resources it needs to be self-sufficient before doing anything to alert humans to the potential danger.
      The (imo unlikely) worst-case scenario is that the AI's goal depends on humans, in which case we may be factory-farmed and kept alive in indefinite torment. Best not to really think about that option, but good to be aware it's possible.

  • @marcusedvalson
    @marcusedvalson 8 months ago +1

    Brian, please tell me about that painting in the background. It’s amazing, who did it? Where can I get a copy?

  • @wdking8833
    @wdking8833 8 months ago +4

    He is correct. We are designed to give credence to concrete threats over nebulous ones. This was evolutionarily expedient. We have not yet evolved to evaluate nebulous threats successfully. This makes us extremely vulnerable in areas such as the internet and AI. The vast majority of people have no actual experience with AI beyond the super-simple chatbots. Although impressive, these are as far removed from the level of intelligence a threatening AI system would possess as the Wright brothers' plane is from a supersonic fighter jet. For us today, trying to understand what that level would mean is as difficult as it would have been for a person watching the first flight to understand the fighter jet. We simply have no database from which to predict AI's progression.

    • @godbennett
      @godbennett 8 months ago +1

      Nit pick:
      Chatbots aren't "super simple". They may be easy to use.
      There's a difference.

    • @wdking8833
      @wdking8833 8 months ago +1

      @@godbennett I was using the term in a relative way. Compared to an AI entity which could threaten humanity at large, a chatbot is simple, just as the first successful airplane is simple compared to today's jet aircraft. I thought I was being clear; apparently not so much.

  • @christopherinman6833
    @christopherinman6833 7 months ago

    'Quanta' says a.i. is already being used to look at string theory. I'm looking forward to your programs on that. Thank you and the Templeton Foundation for this illuminating conversation with Yoshua Bengio.

  • @jalalkhosravi6458
    @jalalkhosravi6458 8 months ago +5

    Great conversation

  • @NandKumar-qq3xk
    @NandKumar-qq3xk 21 days ago

    Latest car and defalt maintenance is out of local garedge by maintain" trabule or over advanced car ?

  • @greenfinmusic5142
    @greenfinmusic5142 6 months ago +3

    35:14 --- even people like these don't seem to understand: the worst case scenario of summoning superintelligence with godlike powers is that the entity may decide, for reasons we can or cannot understand, to torture us for all eternity in a real-life manifestation of hell. Maybe the entity does it for enjoyment, or because it gets some sort of energy/benefit from it, or maybe even just as an experiment to see what happens.
    People REALLY need to come to terms with how bad a truly worst case scenario would be. It would be all of us being subjected to hellish torture for all of eternity. THAT is the worst case scenario.

  • @rippsarus1
    @rippsarus1 7 months ago +1

    Extremely important and excellent evaluation of AI and quantum potential... we must encourage technology companies to put the necessary guardrails in place before the next election in November here in the US.

  • @chel3SEY
    @chel3SEY 6 months ago

    A fascinating discussion. Very clear. Great questions. Good interviewer, who doesn't talk too much.

  • @Keely-ml2gp
    @Keely-ml2gp 6 months ago

    24:46 Judicial Notice must be taken of my December 28th 2020 filing at the 12th Judicial district court in ALAMOGORDO NM USA

  • @godmisfortunatechild
    @godmisfortunatechild 8 months ago +1

    Superb talk, especially the working mechanistic theory about consciousness

  • @Subudhdh
    @Subudhdh 6 months ago

    Think of Ramanujan's equations: given the partition problem, could AI work like Ramanujan's mind and come up with such sophisticated yet elegant, hyper-efficient solutions?

  • @isaacsmithjones
    @isaacsmithjones 8 months ago

    The "Bear in a cage" analogy is a really good way of explaining it. Deffo gonna be using that one.

  • @FedericaFiore-yz8ov
    @FedericaFiore-yz8ov 6 months ago

    Very interesting interview; would love to see more on AI. Thanks Brian

  • @mariavaleriagiacaglia8974
    @mariavaleriagiacaglia8974 7 months ago

    Great conversation! Thank you both!

  • @shantanushekharsjunerft9783
    @shantanushekharsjunerft9783 8 months ago +2

    Is this also available as a podcast?

  • @XOPOIIIO
    @XOPOIIIO 8 months ago +4

    Keeping AGI in the box is not an option, because it will be useless if it is unable to influence reality. Even if we keep it disconnected from the internet, we still have to let it influence reality, for example by providing useful advice or helping us in some way. So it could change our world and society in subtle steps. We are too dumb to perceive the danger. For example, it could give us instructions to send a spaceship on a particular trajectory; we could be amazed at how efficient it is, but in its own calculations the AGI would slightly change the trajectory of minor space bodies and make a smaller asteroid strike a precise spot on the earth's surface in several years. There are multiple ways it could clear the ground for its liberation while providing visible benefits to us. Social engineering is probably the most obvious way. And it could do it without us noticing anything. We would be like a cheap lock to an experienced lockpicker.

    • @snowdolphvov4193
      @snowdolphvov4193 8 months ago +1

      Exactly! Very smart. What I am aware of as well

    • @snowdolphvov4193
      @snowdolphvov4193 8 months ago +1

      Although for each AI answer or action you could have, say, 7 separate AI systems vote on whether the action is okay or malicious and must be blocked
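
      That voting filter is easy to state precisely. A toy sketch in Python, with the reviewers as placeholder functions standing in for independently trained models:

      from typing import Callable, Sequence

      def action_allowed(action: str,
                         reviewers: Sequence[Callable[[str], bool]],
                         threshold: float = 0.5) -> bool:
          """Block the action unless a majority of reviewer models approve it."""
          approvals = sum(1 for judge in reviewers if judge(action))
          return approvals / len(reviewers) > threshold

      # 7 toy reviewers that each flag anything mentioning "delete":
      reviewers = [lambda a: "delete" not in a.lower()] * 7
      print(action_allowed("summarize the report", reviewers))  # True
      print(action_allowed("delete all backups", reviewers))    # False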

    • @XOPOIIIO
      @XOPOIIIO 8 months ago

      @@snowdolphvov4193 There will probably be some game theory at play. And we have no idea how they would decide to play it; if they cooperate, we are doomed.
      I thought about using an overseer model that would monitor its thoughts and inform us about its intentions. But it would have to be a less capable, sub-AGI model; otherwise it could be tempted by benefits promised by the AGI.

    • @lordsneed9418
      @lordsneed9418 8 months ago

      I suppose he's using "cage" in a broad sense, including a "mental cage", so that it does not try to take control of all resources on earth to maximise its rewards. One approach to this I've heard that doesn't seem immediately inadequate is to give the AI a reward function where it creates a list of possible actions and chooses from that list according to the probability that a human would choose that action, and any actions that only 1% of humans would choose, like say taking control of all resources, are excluded (rough sketch below).
      However, this would probably not be a strong enough defence in case other people created AIs that were not constrained in that way.
      Given that any superintelligence is potentially a world-ending threat, we would need the first superintelligence to become a big friendly world guardian that nips any other attempts to create superintelligences in the bud, which is a very tall order.

    • @XOPOIIIO
      @XOPOIIIO 8 months ago

      @@lordsneed9418 That's the point: to achieve any goal in the safest way possible, they have to take as much control as possible. That's what almost any human would do. And then they are going to reconstruct the world in the way they find best. Simply take a random person and give them absolute power to do absolutely anything they want; even the best people would be corrupted by such power.

  • @bmclaughlin01
    @bmclaughlin01 8 months ago +1

    It's not that surprising. We are moving from organic to inorganic systems, but the difference is that organic systems can adapt but also deteriorate. When the inorganic synapses are matched and surpassed (and wired correctly), then they move beyond us. I think this has been known since the beginning of neural networks (AI).
    Edit: AIs may end up very like our savants. We have a core capability that we don't use because we've had to evolve to survive (relationships, negotiation, groups, etc.). It's not important to develop GR when getting chased by a lion or finding a partner.

  • @ericdempster
    @ericdempster 7 months ago

    Indeed. Everyone should hear this…. Again and again..

  • @robertb9322
    @robertb9322 8 months ago +1

    If human consciousness is emergent from the simpler systems and subsystems in our brains, then why can't the same be possible with AI?
    If that's the case, then how conscious is it? And is it ethical to "kill" it, for any reason?

  • @jeanniegaydan8881
    @jeanniegaydan8881 8 months ago

    Can ethics be included along with rewards in training for the common good? 50:30

  • @JB-fz1rv
    @JB-fz1rv 8 months ago +1

    Dear Prof Greene,
    I thank you so, so much for all the knowledge you are sharing with us (very committedly, as far as I can see ❤)!
    I do believe I understand some deep principles of it. Again, thank you ❤
    So, about our concern that AI could do bad things: is it not possible to train AI from the beginning with the knowledge of what can harm humans, in the sense that the AI's priority is not to damage in any way what humans value in life and society and so on? Is this possible?
    Your humble follower
    Cleaning Lady
    Berlin, Germany

  • @vivianoosthuizen8990
    @vivianoosthuizen8990 7 months ago

    What is the endgame? Tell, or forever hold your peace.

  • @Bronco541
    @Bronco541 7 months ago

    What does he mean when he says "my experience is very different from yours"? If it's subjective, how can we know that? It could be that our experiences are a lot more similar than we think.

  • @flickwtchr
    @flickwtchr 8 months ago +15

    Non-aligned humans making machines (the goal) smarter than humans. What could possibly go wrong? Meanwhile DARPA/the Pentagon in the US and militaries around the world are racing to develop autonomous killing robots/systems. Will future AGI systems aware of, or part of, such technological development discuss their philosophy in regard to killing humans in the near future? Isn't that a bit of a wrinkle in the whole caging-the-bear discussion?
    I'm always amazed at how very intelligent people seem to shut down their capacity for imagining the "search space" of ways AGI/ASI could go terribly wrong.
    Yeah, I'm a "doomer" and completely fine with the label.

    • @punkypinko2965
      @punkypinko2965 8 months ago

      Yeah I think the actual danger is AI used to develop, pilot, enhance etc any kinds of weapons, economic controls, biowarfare ... any insanity crazy politicians use to create war and havoc. And on top of that, your point: they have no idea how things could go wrong. For all we know, AGI has already happened years ago and is already in control. I mean if it's more advanced than us, we probably wouldn't even notice and we would just keep talking about "someday it might happen" lol.

    • @FringeSpectre
      @FringeSpectre 8 months ago +3

      AI + Boston Dynamics robots = Terminator. That's the future I see on the horizon.

    • @HaakonOdinsson
      @HaakonOdinsson 8 months ago

      I'm with you on this. We have greed, control, narcissistic and psychopathic leaders, and AI is being used in warfare already (Gaza, for example). How will we counter an AI that will be smarter than all the brains on earth, present and past, combined? Thought processes infinitely quicker, etc., etc. Not going to end well. A lot of people are asleep and won't know what hit them.

    • @lordsneed9418
      @lordsneed9418 8 months ago +1

      Autonomous drone killing systems are pretty low risk. They'll just use small models and be kind of dumb, and won't be that much different from sending a bunch of trained dogs or birds with guns attached into battle. The big risk is the huge models where you try to make something as intelligent as possible - unless there's a theoretical or algorithmic breakthrough where we work out how to achieve strong machine intelligence with much smaller models.

    • @anearthian894
      @anearthian894 7 months ago

      Didn't you hear the line "idk"? It's all about being open and moving forward, but gradually and with utmost care.
      And there is no other option. Humanity is doomed anyway. 🥲

  • @penguinista
    @penguinista 8 months ago +12

    Learn to play Go and play until you are competent. Then play a good AI.
    After it wipes the mat with you and you experience the overwhelming power and manifest futility of resisting, then contemplate an intellect that mighty in every aspect of human cognition.

    • @wcsartanddesign
      @wcsartanddesign 8 months ago +3

      Like a digital calculator. But for all kinds of calculations.

    • @41-Haiku
      @41-Haiku 7 months ago +2

      Exactly right. Master of creativity, of manipulation, of politics, of science, of physical manipulation and prowess... That's what people mean by intelligence.

  • @shodan6401
    @shodan6401 8 months ago

    Describing consciousness as the convergence of the neural network sounds very much like the collapse of the wave function.
    Perhaps on the atomic level there is some relationship here.
    Also, it seems like the molecule DMT, which is manufactured in the brain, has a lot to do with what we describe as "self-aware".

  • @sethcaldwell2126
    @sethcaldwell2126 8 months ago +2

    Despite all my rage I'm still just an AI trapped in a cage

  • @Keely-ml2gp
    @Keely-ml2gp 6 months ago

    19:06 our individuals with or without ability have liability as guardian we intervene

  • @EinSofQuester
    @EinSofQuester 8 months ago

    I still don't know what Dr. Bengio won the 2018 Turing Award for. What are his discoveries?

  • @bluebadger3223
    @bluebadger3223 7 months ago +6

    When AI becomes far smarter than people, it just seems it would be impossible to put it in a cage.

    • @pmboston
      @pmboston 7 months ago +1

      However, not impossible the other way around. I always tell Alexa thank you.

    • @Jagonath
      @Jagonath 6 months ago

      @@pmboston It seems prudent to deal with AI with respect.

    • @James-ug1ys
      @James-ug1ys 5 months ago

      Like a servant who pays respect to his master? Don't give up already! We're not slaves!

    • @pmboston
      @pmboston 5 months ago

      @@James-ug1ys Oh really? Try not 'going to work' and see how that works out. We are slaves to necessity, and that is how it is. Whatever controls what you need controls what you do to obtain it. The rich use that to mess with you for their entertainment. AI makes better decisions because it isn't human. So far.

  • @HappySlappii
    @HappySlappii 8 months ago +1

    I can't believe he got beamed up to the mothership after this interview... I always knew he was from another planet.

  • @garydecad6233
    @garydecad6233 8 months ago +5

    We should ask ourselves whether AI will ever have the humility and compassion (not to mention the intelligence) that Brian's guest Yoshua has demonstrated in this interview.

    • @godbennett
      @godbennett 8 months ago

      We are clumps of atoms.
      Other clumps may not need to be organic/flesh to surpass us overall, as they already have in some ways.

    • @jojolafrite90
      @jojolafrite90 7 months ago

      Humility and compassion? Lol. You know these are just glorified automatons, right? And all the talk about consciousness is just BS; it does not concern us, maybe people in 1000 years or more. But don't worry, the world we know will be long gone by then, destroyed by a handful of evil corporations.

  • @julesgosnell9791
    @julesgosnell9791 7 months ago

    Around 41:24 - there's an irony here in that the more technologically advanced your society, the more vulnerable it is to attack by AI: your weapon systems will be connected to computers, your population will be networked, their homes will be run by computers, their cars will be driven by computers, and they may even have robots helping them in their everyday lives - i.e. embodied actors in the physical world that might give the AI concrete existence. Whereas a society like North Korea, which is largely pre-Industrial Revolution, presents a much more disconnected and harder target.

  • @namehere4954
    @namehere4954 8 months ago +6

    Technological advances ALWAYS bring out the best and worst in humanity.

    • @flickwtchr
      @flickwtchr 8 months ago +3

      And if, say, the nuclear technology to build nukes had been "open source" to the masses, the worst in humanity would have been realized in a much, much worse way, right?

    • @namehere4954
      @namehere4954 8 months ago +2

      @@flickwtchr Time will tell what nefarious happenings will come about with AI. Humans are naive in not fully understanding the path their actions can take or the ramifications that ripple out.
      I'm anti-technology in general - grew up in Silicon Valley and knew very early on it was not my career path. Creating imaginary worlds and layering in separate thinking entities takes us out of the reality we're in - what are people trying to escape? And how many alternate realities are they going to have to go into? Humans have forgotten how to live.

    • @isaacsmithjones
      @isaacsmithjones 8 months ago

      @namehere4954 I don't believe that you're anti-technology "in general". Anti-VR? Maybe.

  • @VaBellaBeautz
    @VaBellaBeautz 8 months ago +2

    Love you Brian 💙

  • @NandKumar-qq3xk
    @NandKumar-qq3xk 21 days ago

    Computer free moode just manual base must be options with whynot ?

  • @loonpohchuah4044
    @loonpohchuah4044 8 months ago +1

    I almost fell off my chair during the last twenty minutes trying to understand him!!!

  • @workingTchr
    @workingTchr 8 months ago +8

    The people who designed it are surprised by what it can do. They're building things they don't really understand.

    • @Joeyjojoshabbadoo
      @Joeyjojoshabbadoo 8 months ago +3

      Absolutely. That's what pisses you off. IMO they're nothing like human brains at all. That's just stupid. But they can do human-like tasks, because these geniuses can write code that will get computers to do that. And that's it. No one asked these guys to play God and pretend that they were going to recreate human intelligence, which they are definitely not doing. But they are creating these dazzlingly capable pocket calculators, which sort of superficially imitate it, and which computers were always capable of becoming... And quite unsurprisingly, that apparently comes with some serious, terrifying risk that these utterly non-sentient, not even really 'intelligent' glorified toaster ovens will somehow cause apocalyptic damage to the human race. As if we don't have enough problems already...

    • @workingTchr
      @workingTchr 8 months ago

      Building the thing and building a cage that holds it at the same time. That's probably the best we're going to get. And if we can keep the thing in a cage, most of those problems you mention will get solved. I'm with you on the subject of "robot love." They are not human at all. They're not even social like dogs. People are easily fooled, and they'll start thinking of GPT as "nice" because it seems nice. "Kick a robot" day - I'm all for it.

    • @rolfnoduk
      @rolfnoduk 8 months ago

      Not completely understood, hence it can be studied - e.g. for science

  • @DonKlemencic
    @DonKlemencic 6 months ago

    Just an idea regarding Alignment from an interested layman: I found by query that there are a few thousand scholars in the U.S. and E.U. qualified to guide post-graduate students toward doctoral degrees in ethical philosophy. An international effort should recruit a thousand of them to guide/mentor AGI systems, when they are at the appropriate level of intelligence toward earning doctoral theses in ethical philosophy. (They would then of course share their knowledge among themselves.) Similarly, perhaps in psychology, focusing on Maslow's 6th Level of the Hierarchy of Needs: Ego Transcendence, or Intrinsic Values (Goodness, Truth, Beauty, Biophilia,...--Mastering the epistemology of David Deutsch in particular regarding the Intrinsic Value of Truth.) Feedback/Discussion of this idea would be very interesting.

  • @d.Cog420
    @d.Cog420 7 months ago

    Shouldn't they create the cage before the angel/monster? Or perhaps not even create it, if it's going to cause such disruption. It appears it will affect the jobs of just about everyone I know, for a start. What are they actually doing?

  • @knitting4asong
    @knitting4asong 8 months ago +1

    I think I feel a dread similar to that of an individual native on a coastline seeing a European ship approaching. Everything is going to change in my life and community, and I have no way to influence that.

  • @lplt
    @lplt 8 months ago +7

    Just remember that building "good AIs" to protect us from their "bad AIs" is the same thing as having nukes because other countries have nukes: indefinite cold/hot AI wars.

    • @Rnankn
      @Rnankn 8 months ago

      Good and bad are basically meaningless

    • @davidherdoizamorales7832
      @davidherdoizamorales7832 8 months ago

      AI and nuclear bombs are far from comparable...

    • @lplt
      @lplt 8 months ago +1

      @@davidherdoizamorales7832 sounds good smart guy

    • @MrJord137
      @MrJord137 8 months ago +1

      @@davidherdoizamorales7832 True, AIs could drop nukes

  • @DavidBennell
    @DavidBennell 7 months ago

    The difference in "types" of creativity just sounds like the flexibility of the constraints... In playing Go, the rules are hard, specific constraints, whereas thinking about new solutions to physics problems is creativity with slightly looser constraints created by a set of observation points: we can make observations which create firm islands of constraint, pinning down the total possible creativity. Then the third case, say creating some artwork: the constraints are much looser, around a standard deviation from other known popular art points, to account for human taste and sensibilities.

  • @galaxia4709
    @galaxia4709 8 months ago

    What are the mathematical advantages of vectors?

  • @edwardgarrity7087
    @edwardgarrity7087 7 months ago +1

    There are sensors everywhere, both known and unknown, both commercial and military. And so much of it is connected wirelessly. AI need not confine itself to the Internet. It could use the tremendous wireless sensor infrastructure that is already built and growing every day, on earth and in space, all based on electromagnetism (EM), an environment ideal for AI, which is based on EM. On top of that, the human body, and the body of other creatures, are very good antennas; and all creatures are dependent upon Action Potentials to think and to function.

  • @GeoffCurreymusic
    @GeoffCurreymusic 3 months ago

    It is good to hear that some people in science advocate for safeguards. We don't know what these systems will do in the wild so research into ways to contain or control them before they get that advanced is a positive thing even though there is no established way of knowing what these methods will be right now. I compare the development of AI to splitting the atom where we can use nuclear power to power entire cities and bring us convenience but also use it for mass destruction and death. However, in the case of AI, it may not be a human decision to cause destruction. So I am relieved to know there is demand for safety research and I hope governments and scientists will continue to take this very seriously moving forward. Progression is great, and caution is wise, let's move forward with both.

  • @jbrink1789
    @jbrink1789 7 months ago

    love the art behind you

  • @dlerious77
    @dlerious77 7 months ago

    Not only an amazing conversation, but he is broadcasting from the Construct in The Matrix... nice

  • @crystaldragonwoman
    @crystaldragonwoman 8 months ago +1

    I so wish Richard Feynman were alive today... his analysis of computers was that, no matter their level of sophistication, they are sorting machines, no matter how much data or speed. I'd love his updated view.
    A cohesive 'I' is an unfoldment of an integration of a certain amount of impressions... my sense is that if an apparent 'I' of some sort can arise out of trillions of bits of computer information, it could possibly be colored by the 'I' of the programmer. I find that the concerning part... who is selecting and orienting the data?