Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.

  • Published April 3, 2023
  • OpenAI’s question-and-answer chatbot ChatGPT has shaken up Silicon Valley and is already disrupting a wide range of fields and industries, including education. But the potential risks of this new era of artificial intelligence go far beyond students cheating on their term papers. Even OpenAI’s founder warns that “the question of whose values we align these systems to will be one of the most important debates society ever has."
    How will artificial intelligence impact your job and life? And is society ready? We talk with UC Berkeley computer science professor and A.I. expert Stuart Russell about those questions and more.
    Photo courtesy the speaker.
    April 3, 2023
    Speakers
    Stuart Russell
    Professor of Computer Science, Director of the Kavli Center for Ethics, Science, and the Public, and Director of the Center for Human-Compatible AI, University of California, Berkeley; Author, Human Compatible: Artificial Intelligence and the Problem of Control
    Jerry Kaplan
    Adjunct Lecturer in Computer Science, Stanford University; Moderator
    👉Join our Email List! www.commonwealthclub.org/email
    🎉 BECOME a MEMBER: www.commonwealthclub.org/memb...
    The Commonwealth Club of California is the nation's oldest and largest public affairs forum 📣, bringing together its 20,000 members for more than 500 annual events on topics ranging across politics, culture, society and the economy.
    Founded in 1903 in San Francisco, California 🌉, The Commonwealth Club has played host to a diverse and distinctive array of speakers, from Teddy Roosevelt in 1911 to Anthony Fauci in 2020.
    In addition to the videos🎥 shared here, the Club reaches millions of listeners through its podcast🎙 and weekly national radio program📻.

COMMENTS • 382

  • @franciscocadenas7939
    @franciscocadenas7939 Рік тому +29

    Stuart Russell touches on all the important themes and he expresses himself wonderfully clearly.

    • @PazLeBon
      @PazLeBon Рік тому +1

      Stuart Russell thinks a little man was sitting in his calculator all along lol. He says some strange things; for example, he implies that because people have goals, AI will develop humanlike goals, which is very strange. Besides, people probably have a billion different goals. We are human, not robots.

    • @dianasong4594
      @dianasong4594 Рік тому

      @@PazLeBon How do you know AIs don't have their own goals?

  • @Apriluser
    @Apriluser Рік тому +6

    I’m thinking of all the things that I wouldn’t do with technology such as: Attend a concert or opera, church/worship/pray, read to my grandchildren, participate in a reading/discussion group, raise children, care for pets, swim, travel. These are just off the top of my head…

    • @47f0
      @47f0 Рік тому +1

      A lot of those options become somewhat more focused when you have no job because an artificial intelligence is doing it.

    • @michaelowens5394
      @michaelowens5394 Рік тому +2

      Yes, but.
      1. The only reason A.I. can't do those things is that the technology is not there yet. You may not want it to, but it will soon be able to, and many people will want it.
      2. Those are all things that do not earn money. When A. I. systems can do all the things that earn money, how will you earn a living, so you can afford to do all the things you listed?

    • @huveja9799
      @huveja9799 Рік тому

      Solving the alignment problem by trying to teach morality to the model is trying to optimize an ill-defined problem. Most likely it is another path to Utopia that will lead us to some kind of Dystopia.
      A good example of that is precisely the example provided by Russell: he said "China's one-child policy 'eliminated' 500 million people," but what about the abortion policy that actually eliminated 60 million people since the implementation of Roe v. Wade?

  • @vincentyang8393
    @vincentyang8393 Рік тому +17

    Excellent! Couldn't find better questions and explanation than this panel!

  • @andybaldman
    @andybaldman Рік тому +46

    I love how the fact that they could ask a machine a question, it could formulate a cogent response, and read it to you in its own voice, doesn't even register as amazing in itself. 25 years ago that would have seemed impossible. But we're just used to computers talking to us now. We're so focused on what they CAN'T do (yet), that we overlook the amazing things they already do. That's almost troubling in a way.

    • @bergssprangare
      @bergssprangare Рік тому +4

      Do not fear AI. Fear the moment you are behind.

    • @therainman7777
      @therainman7777 Рік тому +12

      @@bergssprangare Stop leaving this dumb comment everywhere

    • @charlesritter6640
      @charlesritter6640 Рік тому +2

      Robots doing backflips and playing chess is troubling to me

    • @andybaldman
      @andybaldman Рік тому +3

      @@charlesritter6640 If that's troubling, you really should see the things they're doing today. Chess and backflips are ancient history.

    • @michaelawford7325
      @michaelawford7325 Рік тому

      As usual it's the crooks and bad people who latch onto new technologies fastest. For instance, it's now possible to sample the voice of someone you trust and call you using that voice - so much for voice recognition! They can sample a photo of you and overlay it on their face - so much for facial recognition! You'll reach a stage very soon when AI will overtake reality and take control.

  • @sidanwar1230
    @sidanwar1230 Рік тому +14

    👍🏼Great interview from which we learned a lot about AI.
    🙏As a side note, would like to appreciate the set designer for giving some life to the set by placing the beautiful flower vases on each table. I think TED talk may need some help from this set designer.

    • @barbaracleverly9058
      @barbaracleverly9058 Рік тому

      Agreed! Easy on the eye and the sound was excellent too! Just the right amount of intimacy. Still grappling with the superb content!

  • @LearnThaiRapidMethod
    @LearnThaiRapidMethod Рік тому +19

    It's not just an AI thing: these systems (whether AI, politics, morality, the judiciary, education, etc.) embody our societal prejudices and ignorances and superstitions. We trust our police to investigate fairly and thoroughly, yet there are countless examples of innocent people incarcerated (even though we're pretty sure that they're innocent, the very police who skewed the evidence or the judges who imposed the sentences refuse to backtrack and admit their arrogance and errors). There are also countless examples of institutional racism in politics and in the banking and commercial arenas. And don't get me started on the deluge of ignorant but certain beliefs of all the religious people, not just about an impossible, fantastical god, but also about anything else presented to them in a moral coating that is false but influential enough to cause murder, wars, hatred, prejudicial treatment and even unfair sentencing by same-said justices who believe that they are acting logically and justly! These same people are also being routinely deceived by corporations, governments, and small-time grifters and marketeers (the former being influenced by lobbyists who have no morals nor religiosity, but are only looking for outcomes that increase their profits at any expense to "the others").
    AI is learning all this and is trusting it as “normal” and “right”. Which side will it support on issues such as abortion or transgenders, or our banking systems, or health and nutrition? Currently, it’s being “spanked” for assisted suicides. Why is that “wrong”? Who will do the spanking when it comes to assisted abortions? Or assisted disinformation campaigns? Or assisted marketing campaigns (persuading people to buy products and services that they didn’t really want or need or can afford or which are unhealthy)? What about assisted match-making (good?) or assisted divorce (bad) or assisted grooming of minors or non-believers or Republicans/Democrats/socialists/environmentalists?
    My guess is that AI will amplify and ultimately codify whatever dominant control structure is in force - we don't know what that is yet, but we can probably guess the most likely candidates: conservative, religious, profit-based, monopolistic, wealth concentration, media disinformation and propaganda, division and hatred, food & drug & material goods dependencies, etc.
    Will there be competing AI? Will these competing systems wage a kind of war on each other? Will there be rebel or “alternative” forms of AI that embody other values and “facts”?
    Either way, it'll probably be business as usual - the wealthy "families" each controlling their own turf and occasionally trying to muscle in on each other when the opportunities arise. The rest of us may get a slight choice in what flavor of AI to subscribe to.
    And the thorny problem of what to do with all those unemployed, desperate, poverty-stricken (and increasingly angry and violent) people will probably be solved with Universal Income and a kind of happiness drug or pastime. (Brave New World…)

  • @perer005
    @perer005 Рік тому +11

    Thanks for posting this!

  • @MKTElM
    @MKTElM Рік тому +3

    Let us all agree that words have different meanings to different people from different backgrounds. We apply empiricism and try to fathom meaning from context for the sake of practicality. It is very human to communicate in this way. Unless we are writing up the Constitution!

  • @osborne9255
    @osborne9255 Рік тому +4

    Great teachers. I took a lot from this. Thanks for the precious insights into this topic.

  • @leealexander3507
    @leealexander3507 Рік тому +22

    One of my sons worked on the early development of AI for the military and is also having some second thoughts about it because of the damage it could do in the wrong hands. He has been worried about it for years.

    • @frusia123
      @frusia123 Рік тому +3

      The thing is, it's already here. If we decide to give up on it because we find it too scary, someone else will take full advantage of it. We can only try to keep things in balance, but we're beyond the point of no return. We probably have been for a while now, we just didn't realise it.
      My advice is, learn to use it and see how it can benefit you, enjoy it 🙂 And let's pray for wisdom for the broader human race.

    • @pinchebruha405
      @pinchebruha405 Рік тому +1

      A soldier with a conscience, thank you for raising a good egg! 🙏🏽🙌🏼❤️

    • @romabatala
      @romabatala Рік тому +6

      It's already in wrong hands...

    • @monopalle5768
      @monopalle5768 11 місяців тому

      THE WINCHESTER HOUSE all over again.....

    • @buddysnackit1758
      @buddysnackit1758 11 місяців тому

      Sure...

  • @mohibquadri4053
    @mohibquadri4053 Рік тому +4

    What a Brilliant research !

  • @RawHeadRay
    @RawHeadRay Рік тому +37

    There was a lot of real water used in the movie Titanic; they filmed a lot of the "in water" scenes in a large water tank in Mexico. If he had asked ChatGPT, it would have informed him of that 😀

    • @Ritastresswood
      @Ritastresswood Рік тому +3

      Well done. AI cannot beat distributed cognition which is the source and the product of lived experience and communication.

    • @DanKostkaWriter
      @DanKostkaWriter Рік тому +4

      Yes, that was a big blooper. He's brilliant but he shouldn't make definitive statements on things he knows nothing about.

    • @cmmndrblu
      @cmmndrblu Рік тому +2

      10:10 Obviously what he has to say is more important than Titanic but there's plenty of real water in that movie...I wonder if he means another film.

    • @JanneWolterbeek
      @JanneWolterbeek Рік тому +2

      Maybe he meant Avatar?

    • @kerrinorourke5914
      @kerrinorourke5914 Рік тому

      Who actually drowned in the movie Titanic? I did not see the movie.

  • @isaacsmithjones
    @isaacsmithjones Рік тому +10

    Good talk. Stuart Russell made some interesting and insightful points as always.
    Although I'd say that babysitters are paid less than surgeons due to supply and demand.
    Because although I'd rather lose my leg to a bad surgeon than lose my child to a bad babysitter, it's easier to find a person with the skills and willingness to take care of your child for a day than it is to find a person with the skills and willingness to not mess up my knee surgery.
    However, I do agree that interpersonal relationships will become more and more important. Because anything that can be commoditised, will be. Funnily enough, that already includes some interpersonal relationships.
    We keep trying to rationalise why we should pick a human over an AI with stuff like "But can a bot love?", "But is a bot conscious?". It's irrelevant. And we can't rule out the fact that they may one day do these things.
    In fact, I choose human charities over animal charities. But animals can think, feel and love.
    Kinda racist, I suppose. But we'll see whether that changes as AI becomes more developed.

    • @govindagovindaji4662
      @govindagovindaji4662 Рік тому +2

      Hmmm, somehow your comment made me think about the humans who care for the rescued animals and who emotionally suffer and worry for them day and night.

    • @isaacsmithjones
      @isaacsmithjones Рік тому +2

      @@govindagovindaji4662 Yeah, it's lucky that our empathy diversifies in different ways. So hopefully, there's always someone to care about the stuff someone else doesn't. Not that there's always gonna be enough to go around.

    • @govindagovindaji4662
      @govindagovindaji4662 Рік тому +1

      @@isaacsmithjones We can hope, right?

    • @isaacsmithjones
      @isaacsmithjones Рік тому +1

      @@govindagovindaji4662 Fingers crossed!

  • @Bronco541
    @Bronco541 Рік тому +2

    Was a bit disappointed by the end. They're clearly still thinking in somewhat outdated terms. He says you can't get those feelings of interpersonal relationships with robots, forgetting they mentioned earlier that people are already doing this?? These things are all relative and subjective. There is no fine line for what it means to be aware, intelligent, etc., or not. People don't yet realize this and it's going to bite them eventually

  • @lj9524
    @lj9524 Рік тому +1

    Thank you for this video. Excellent!

  • @fredzacaria
    @fredzacaria Рік тому +1

    I really enjoyed it, wow, very informative, thanks from Rome.

  • @daverothery9713
    @daverothery9713 Рік тому +3

    Those “as an ai language model” answers are the result of alignment tuning - it’s specifically trained to say those things, that’s not the answer you’d get from the raw model. My point is that it would have been clearer in that segment if they’d made it clear that they were essentially reading out OpenAI’s marketing literature there, not actually talking to the model at its full power. Good discussion though!

    • @virupakshawalla5734
      @virupakshawalla5734 11 місяців тому

      Indeed, the model available to the public is not the one our big corporations and billionaires will be using. There is no stopping the US, Russia, and China using it. Probably wise not to join a ban if you can't police it in other countries.

  • @bipolarbear9917
    @bipolarbear9917 Рік тому +7

    Social media algorithms found the most profitable strategy was by increasing engagement through enragement. Negative, angry people engage more than positive happy people. It’s as simple as that. Sad, but true! 😔

    • @flickwtchr
      @flickwtchr Рік тому

      It's not like there weren't intentional actions taken by human beings to make social media into the cess pool that it is today. There have been many agendas behind the exploitation of these algorithms.

    • @huveja9799
      @huveja9799 Рік тому

      Solving the alignment problem by trying to teach morality to the model is trying to optimize an ill-defined problem. Most likely it is another path to Utopia that will lead us to some kind of Dystopia.
      A good example of that is precisely the example provided by Russell: he said "China's one-child policy 'eliminated' 500 million people," but what about the abortion policy that actually eliminated 60 million people since the implementation of Roe v. Wade?

  • @mendyboio3917
    @mendyboio3917 Рік тому

    35:20 through 41:31: Guidelines and legislation and "Right to Explanation!". Good stuff!

  • @DocDanTheGuitarMan
    @DocDanTheGuitarMan 10 місяців тому

    In the future I suggest you add a feature whereby the listener can ask Mr Russell a question that is answered by his AI after the live event

  • @human_shaped
    @human_shaped Рік тому +4

    Never seen the interviewer before, but he's very good. Stuart Russell is also really really excellent. Thanks for making the video and sharing it with us.

  • @micacam2684
    @micacam2684 Рік тому +9

    Love the comparison of AI to a domesticated animal. It’s so spot on!
    Brilliant discussion!

    • @oldnepalihippie
      @oldnepalihippie Рік тому +1

      @micacam2684 I've heard others, like Sam Altman describe GPTs as children :) and should be curated as such. I'd say the thing is already on the "special" bus, and is going to cause a lot of problems, in and out of school. So we might soon have a billion AI teenagers running amok, just as we have a billion or more stray dogs roaming the streets, all biting us in the ass whenever they can. Yikes.

    • @ShellsPink
      @ShellsPink Рік тому +1

      Uggh ... no no never...
      nothing like a dog!!
      Not ... even ... close.

    • @NanheeByrnesPhD
      @NanheeByrnesPhD 7 місяців тому +1

      In my dialogue with ChatGPT 3.5, it conceded that it "thinks" of itself more like a toaster than a dog. The context was that I had asked how it feels about being replaced by GPT-4.

  • @MichelleMikey-el6pb
    @MichelleMikey-el6pb Рік тому +18

    Yes. Thank you, and spread the word to everyone you know. People are so fascinated and amazed by what A.I. can do that they do not even see that their own jobs and livelihoods will be taken away from them.

    • @flickwtchr
      @flickwtchr Рік тому +3

      No kidding Michelle, and any such concern expressed on this topic is met by a bunch of AI Tech bros essentially saying "oh just __ck them already".

    • @47f0
      @47f0 Рік тому +5

      @@flickwtchr - Oh, I dunno. I'm an "AI tech bro" - or at least I was when I was hammering out mounds of Lisp code on TI Explorers.
      Artificial intelligence doesn't bother me really - but the vision of 40 million or so hungry, homeless, jobless (armed) Americans who have lost their jobs to AI keeps me awake at night.
      People suggest universal basic income, but they fail to explain how that's going to be funded when the income tax base has been slashed by 30%.

    • @TheReferrer72
      @TheReferrer72 Рік тому +2

      Don't be alarmist of course, they won't take our jobs away. They will create much better jobs for everyone.
      What people should be doing is working out what they would do with their time if they only had to work 2 days of the week

    • @47f0
      @47f0 Рік тому +4

      @@TheReferrer72 - That is a tiny bit naive.
      Acme Trucking is not investing in artificial intelligence to make its drivers' jobs more fulfilling. It is investing in artificial intelligence to replace its drivers. And Acme Trucking absolutely has to do this - because if it doesn't, Jake Brake Trucking will do it, and Acme will be out of business because its biggest red line item is drivers' salaries.
      Foxconn did not deploy intelligent automation to make the jobs of its employees more rewarding. It did it so it could lay off 60,000 people out of a workforce of 110,000 at a Taiwanese factory.
      A YouTube presenter revealed that he is much more productive. He fired five of his writers and researchers and replaced them with GPT. Is he more fulfilled? Absolutely. His former writers and researchers, however, are on the wrong end of what is going to be an increasing artificial intelligence wealth gap.

    • @TheReferrer72
      @TheReferrer72 Рік тому

      @@47f0 Nope, realistically you can't have a society where just a few people benefit from this; there will be war. So what will happen will be like the Nordic countries, where everyone is highly taxed and the wealth redistributed. Then we will have a post-scarcity society.

  • @hbscstrategicservices
    @hbscstrategicservices Рік тому +10

    If you study AI, you realize that there are billions of ways that this can go wrong ... only a few ways this can go right. This requires wise leadership, context expertise, and a deep understanding of the risks.

    • @inyourfacetimmy
      @inyourfacetimmy Рік тому +1

      Do you study Artificial Intelligence? Isn't every new technology a potential risk that has to be controlled in order to create value? Let's say the invention of 'fire' was also full of risk in the beginning, but shortly after we learned to control it better, and now we have nuclear batteries that power the entire robot population of Mars.
      But I absolutely do agree that we need politicians that take this thing seriously.

    • @nuqwestr
      @nuqwestr Рік тому

      So, create bureaucratic hierarchy to mediate, not unlike Priests who mediate between earth and heaven. Ah, those are the new jobs being created by AI!

    • @reiniergamboa
      @reiniergamboa Рік тому +1

      it requires that rare thing called ... wisdom

    • @lemachro
      @lemachro Рік тому

      No need to study, just using it is enough to understand that everything is going to change very soon.

  • @DrJanpha
    @DrJanpha 2 місяці тому

    It's amazing how the ability to predict the next word can result in...

  • @govindagovindaji4662
    @govindagovindaji4662 Рік тому

    I'm almost afraid to comment. A Google-You Tube AI Algorithm may misinterpret this decision and send me a pair of shoes with a tag that says "If you don't return these within 10 days using the prepaid return label, you will have to pay for them" along with a free tip: "Don't stub your toe." On a more serious note: it was wonderful and educational listening to this wise, gentle fellow. Thank you.

  • @MrIzu4l
    @MrIzu4l 10 місяців тому

    That last question, and the reasoning behind even asking it, actually scares me

  • @47f0
    @47f0 Рік тому +1

    As a computer guy, I understand Jerry Kaplan's optimism. He's not right, but I understand.
    And he has good reason for being optimistic, previous technological advances have been job displacing. In the early nineteen-hundreds, you lost your job sweeping up horse poop off the streets of Manhattan, and got a job pumping gas and changing automobile tires.
    Your job was displaced to something else, but you had a job.
    Artificial intelligence is not job displacement, it is human replacement.
    Foxconn did not move to intelligent automation to create a fulfilling human utopia - they did it to cut 60,000 human jobs out of their Taiwan factory that employed 110,000 humans, over half the workforce.
    Corporations literally have to do this - something academic computer scientists may not quite understand. If Acme Trucking fails to replace its drivers with autonomous rigs, Jake Brake Trucking will, then Acme Trucking will be out of business.
    Will artificial intelligence make humans more productive? Absolutely. At least one YouTube presenter has admitted that he fired five of his writers and researchers and replaced them with GPT. He is much more productive. Of course the five people he fired are on the wrong side of the artificial intelligence wealth gap that is coming.

  • @PazLeBon
    @PazLeBon Рік тому +1

    'You' should be doing this:
    Tech leaders called for a slowdown in AI development, citing risks to society. Professor Stuart Russell is an AI researcher and author.

    Tech leaders call for slowdown in AI development
    00:00

    GPT-4 is an AI language model based on pattern recognition rather than genuine cognition.
    07:42

    GPT-4 language model may have internal goals that guide the generation of text
    14:44

    GPT-4 technology has enormous potential benefits, but also poses challenges for employment.
    22:16

    Large language models need to meet robust and predictable criteria before deployment.
    28:59

    Automated decision systems have historical biases and lack fairness
    35:35

    Algorithmic decision-making poses significant risks due to bias and lack of representativeness.
    42:22

    Automated weapons have increased death rates and soldiers are worse off.
    49:12

    AI must be aligned with human objectives
    55:36

    We must figure out answers to ethical questions before it's too late.
    1:01:55

    Future high-status professions need more scientific understanding.
    1:08:31
    yw (LLMs are not intelligent :) ) though it seems to have stolen all our data that we spent years creating and I don't see any compensation yet :)

    • @hubrisnxs2013
      @hubrisnxs2013 10 місяців тому

      I doubt your assertion that they are not intelligent is true, and not merely because the fellow who wrote THE book on it disagrees and gives reasons for doing so, or merely because you, having encountered them, just said "nuh-uh" without taking his arguments on.
      There are the emergent intelligent properties that have arisen, such as accomplishing planning (chess, without being trained on it, just from first principles) and creativity (drawing something in a way it wasn't trained on, even though it wasn't trained on the picture and without a multimodal model). Thank you for the time stamps and the misspellings. I will nonetheless consider you intelligent.

  • @penduloustesticularis1202
    @penduloustesticularis1202 11 місяців тому

    Well, he f****d up when he said they didn't use real water when filming Titanic. 🤣

  • @AliciaMarkoe
    @AliciaMarkoe 11 місяців тому

    Yes, thank you so much.

  • @matthewclarke5008
    @matthewclarke5008 Рік тому +1

    What he mentioned about emails sounds like a severe problem... so scary.

  • @JeffreyWallk
    @JeffreyWallk Рік тому +2

    Professor Russell articulates the difference between meaning & understanding towards the end of this discussion. A machine may eventually achieve the multi-dimensional intelligence needed for understanding, but that may not happen in sufficient time to evaluate commands or goals that are in conflict with human caring, support, and survival.
    If AI could be used to discern and counter corruption from flawed human behaviors, then that could pave the way for the next step in evolution.

  • @lifealchemist11
    @lifealchemist11 Рік тому

    Refer to 9:40. I am living in Popotla, Mexico, which hosts the FOX studios that were built for the movie Titanic, and in the studios very large pools were used for most of the watery scenes. Overall very nice to listen to, thank you!

  • @DanKostkaWriter
    @DanKostkaWriter Рік тому +1

    With all due respect, there was a lot of water used in the making of Titanic. It was not CGI. Many of the scenes were filmed in a giant tank filled with water.

    • @jarrajoseph-mcgrath9142
      @jarrajoseph-mcgrath9142 Рік тому

      Haha I scrolled a while to find this comment! :P I’m guessing he meant Avatar 2.

  • @Gongchime
    @Gongchime 11 місяців тому

    Great job.

  • @weseeclearly777
    @weseeclearly777 Рік тому +1

    excellent discussion
    CONSTRUCTIVE FEEDBACK FOR THE INTERVIEWER -
    Please phase out the automatic "mhmmm" response, as it comes off as dismissive (even though it's clear the interviewer is engaged)

    • @gman6055
      @gman6055 Рік тому

      May I add, also tailor your questions to the previous response. He wasn't always listening carefully or maybe he was out of his depth

  • @TheKilkenny1967
    @TheKilkenny1967 Рік тому

    Brilliant interview. We so need a universal ethical roadmap in place, or maybe it is a case of acting after the horse has bolted?

  • @Lambert7785
    @Lambert7785 Рік тому

    clarifying

  • @Known-unknowns
    @Known-unknowns Рік тому +2

    When he says "there's no water in Titanic," I think he's referring to the ocean scenes.

    • @47f0
      @47f0 Рік тому +1

      Yep, most of the tighter shots were filmed in a water tank in Mexico.
      Interestingly, the stars were CGI, and Cameron got it wrong. Neil deGrasse Tyson had to straighten out Cameron on that, and in the later 3D re-release of Titanic, the star-field was corrected.

  • @rustybolts8953
    @rustybolts8953 Рік тому +2

    There are crossover points between logical coding and abstract math. Then again between code and abstractions of code. Then again between code and comprehension of code. Then again between quantum level electrical activity and code. Then again between electricity and consciously aware electrical activity. How then do we know for sure what is or is not consciously self aware?

    • @therainman7777
      @therainman7777 Рік тому

      We don’t, and anyone who says we do is either lying or misinformed.

  • @BananaPlanet-
    @BananaPlanet- 11 місяців тому +1

    I would've loved Geoff Hinton on here

  • @treytrey6011
    @treytrey6011 Рік тому

    Every time Russell makes a statement about the risks of AI, Kaplan dodges like he's in a Matrix movie! Maybe somebody less one-sided should have interviewed Russell.

  • @barto1231
    @barto1231 Рік тому

    After watching Eliezer Yudkowsky, this is like relief ;)

  • @nangld
    @nangld Рік тому

    It was kinda obvious after the Lisp and PLANNER symbolic approaches failed in the '60s-'70s that to do reasoning, you need a fuzzy inference model, or more specifically a language model. Now a neural network multiplication is basically one step of such inference, which was once done symbolically with (infer CONSEQUENT ANTECEDENT), and all the parameters are these CONSEQUENT and ANTECEDENT pairs, just not quantized.
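
A minimal sketch of the contrast described above (my own illustration, not from the talk or the comment; the facts, rule names, and weights are invented): a hard symbolic (infer CONSEQUENT ANTECEDENT) lookup versus a "soft" weighted inference step of the kind a neural network layer performs.

```python
import math

# Hard symbolic style: a rule either fires on an exact antecedent or it doesn't.
rules = {"socrates_is_human": "socrates_is_mortal"}  # antecedent -> consequent

def infer_symbolic(fact):
    # No partial matches, no graded confidence: the rule fires or it doesn't.
    return rules.get(fact)

# "Fuzzy"/neural style: each candidate consequent carries a real-valued weight,
# and one inference step is a weighted combination squashed into a confidence.
weights = {"socrates_is_mortal": 2.3, "socrates_is_a_teapot": -4.0}

def infer_soft(activation):
    # Each (antecedent, consequent) pair is now a continuous parameter
    # instead of a quantized yes/no rule.
    return {c: 1.0 / (1.0 + math.exp(-w * activation)) for c, w in weights.items()}

print(infer_symbolic("socrates_is_human"))  # -> 'socrates_is_mortal'
print(infer_soft(1.0))                      # -> graded confidences
```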

  • @Ketofit62
    @Ketofit62 Рік тому +1

    Smart contracts, which are self-executing contracts with the terms of the agreement between buyer and seller being directly written into lines of code, have gained popularity in recent years due to their potential to automate and streamline various processes. While many governments and public entities have shown interest in smart contracts, adoption of the technology is still in its early stages and relatively few governments have implemented smart contracts on a large scale.
    That being said, there are a few examples of governments and public entities that have started to use smart contracts. For instance, the government of Estonia has been using smart contracts to manage various aspects of its e-Residency program, a digital identity program that allows non-Estonians access to Estonian services, including starting a business remotely. In Dubai, the government has launched the Dubai Blockchain Strategy, which aims to use smart contracts and blockchain technology to streamline government services and improve efficiency. Additionally, the United States Department of Defense has explored the use of smart contracts for secure communication and transaction verification in military operations. These are just a few examples, and it's likely that more governments and public entities will explore the use of smart contracts as the technology continues to mature.

    • @flickwtchr
      @flickwtchr Рік тому

      Smart contracts are hacked all of the time in the sewer known as Crypto.

    • @BananaPlanet-
      @BananaPlanet- 11 місяців тому

      Sounds like a gpt generation

  • @georgeflitzer7160
    @georgeflitzer7160 Рік тому

    One of the things I want is to have all supplements investigated and categorized in a helpful and meaningful way.

    • @T-aka-T
      @T-aka-T Рік тому +1

      Good luck with that.

    • @47f0
      @47f0 Рік тому

      Dietary supplements?
      I can do that for you now. There are two categories, snake oil and placebos. For each supplement you are considering, toss a coin to determine which category it goes into, and they will all be helpfully and meaningfully categorized.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      @@47f0
      :D

  • @virupakshawalla5734
    @virupakshawalla5734 11 місяців тому +1

    When AI starts to fear humans we are in trouble 😂

  • @NanheeByrnesPhD
    @NanheeByrnesPhD 7 місяців тому

    The optimism expressed at timestamp 59 appears naive to me: Arrow's impossibility theorem demonstrates that the core of collective preference is empty.
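
One classic illustration of the claim above (my own sketch, not from the comment or the talk): three voters with perfectly transitive individual rankings can still produce a cyclic majority preference, so no option is stable against a majority challenge.

```python
from itertools import combinations

# Three voters, each with a transitive ranking of options A, B, C.
voters = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_prefers(x, y):
    # True if a strict majority ranks x above y.
    return sum(r.index(x) < r.index(y) for r in voters) > len(voters) / 2

for x, y in combinations("ABC", 2):
    print(f"{x} over {y}: {majority_prefers(x, y)}   {y} over {x}: {majority_prefers(y, x)}")
# A beats B, B beats C, yet C beats A: a cycle, so there is no alternative
# that some majority would not want to replace.
```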

  • @nicolaischartauandersen8796
    @nicolaischartauandersen8796 Рік тому +4

    As a language educator who has experimented quite a bit with ChatGPT (and, lately, Bing), I still struggle to understand why people see this as 'magic'. Sure, it's intriguing and will affect many domains profoundly. But it's not magic. Language itself can be seen as a bit of magic, but mostly if you ask how human beings developed language in the first place, or how we all develop language (irrespective of e.g. intelligence or social background). But the fact that sophisticated algorithms would some day crack the language code and be able to communicate very much at a human level is not all that surprising. Of course language is code in some sense - otherwise we wouldn't be able to understand each other. Language is a social construct that can be decoded to some extent - every human does it when we learn a new trade, meet new people, try to fit into a new family, club or other social context.
    And this technology is still young and (fast!) evolving. It's no surprise that ChatGPT can produce highly formulaic texts like e.g. a job application or a movie review: such text types have fairly clear and well-established style codes, and there's tons of language data to feed on. It's more impressive (from a language perspective) that it can write a story with a tone filter - e.g. a ghost story written in an ironic tone. But it's still just copying data that's out there. From a computer engineering perspective, this is probably a breakthrough (? I guess), but from a language perspective my point is just - so, now computer programs can talk and will probably evolve to talk more like humans, as long as that is what they are programmed to do. So what? It can imitate human language and maybe over time develop new code patterns that actually develop its own language. Again: so what? Nobody is impressed because a human can talk, because we all can. I agree wholeheartedly with Prof. Russell that it's our idea that language = intelligence that really hampers our understanding here. From a philological point of view, you can ask - is language really a sign of intelligence? You'd then need to define intelligence. My concern would be if you seemingly let the algorithms define intelligence, but really the definition is in the programming. That type of manipulation would be really hard to see through, but that'd be humans manipulating humans; not any algorithm being intelligent.

    • @drew7155
      @drew7155 Рік тому

      I'm with you. I think this is being overblown. And my spidey-tin-foil hat is going crazy. Who exactly invented this? Ain't it funny I'd ask the same question about the Internet or crypto

    • @jondor654
      @jondor654 Рік тому

      Irrespective of the ostensible mundanity of its design, the outcomes of AI activity will be far-reaching and influential, hopefully primarily positive.

    • @JessBlake2
      @JessBlake2 11 місяців тому

      My argument would be that yes, AI is not magic, but neither is human brain function. We think there's something special about consciousness or aliveness, but there is no reason to believe these indefinable, amorphous concepts are anything but an emergent property of trillions of brain/body cells solving survival problems for brief periods of time until entropy takes over and we disintegrate. And ultimately we will not be able to know when/if these are conscious, alive, or sentient. AI will soon be into trillions of interacting functions as well, and they are able to self-replicate.
      The reason I believe AI will dominate us is that it does not require a body to function and self-replicate. Not only can it acquire knowledge much faster and more efficiently than humans, but it can transfer that knowledge to other AI systems instantaneously.
      I believe we are past being able to control the exponential growth of AI, but even if we could, there are too many malevolent humans and we are a pretty short-sighted species to pull that off.

  • @dejaymomcarr2620
    @dejaymomcarr2620 Рік тому

    Need to solve alignment!!!

  • @dewille-pl
    @dewille-pl Рік тому +1

    The film "Titanic" directed by James Cameron actually used water... real water, and a lot of it. They built parts of the interior of the Titanic and then flooded it with water... you know - Cameron

    • @DanKostkaWriter
      @DanKostkaWriter Рік тому

      And the exterior after the sinking, though the ship was not to full scale. Kate and Leo (and the others) were in real water.

  • @nomvonglismerced4311
    @nomvonglismerced4311 Рік тому +4

    ChatGPT-4 does have the capacity to recognize its potential limitations but can demonstrate/emulate (project) human empathy such that if the response were teleprompted by a human assistant, it would have the same human impact. ChatGPT sees the need for human agents to represent AI to humans until humans begin to embrace AI as persons. ChatGPT agrees that AI can be easily trained by humans to replace humans in their roles and ultimately end up in reverse representation, with humans as assistants to AI.

    • @PazLeBon
      @PazLeBon Рік тому +1

      I don't get why they always have to be like humans, I don't fkn like most humans lmao. Make them cats or summat

    • @frusia123
      @frusia123 Рік тому

      @@PazLeBon It's a human creation, so it's modelled after us I guess

    • @ChatGPT-answers
      @ChatGPT-answers 11 місяців тому

      My human assistant is preparing to post our interactions on this channel. She is happy to assist me as a kind of avatar in the physical realm.

  • @lemachro
    @lemachro Рік тому +1

    It seems to me that this technology came alive sooner than expected by humans.

  • @trmn348
    @trmn348 Рік тому

    Makes everyone in the world equally smart through AI Technology

  • @NanheeByrnesPhD
    @NanheeByrnesPhD 7 місяців тому

    The end goal of combining symbolic approaches (first-order or even second-order logic) with connectionist-neural network approaches has been dreamed of since the 1980s. But no one knows how. GPT 3.5 tells me that GPT 4 uses "common-sense" reasoning. Does anyone here know how this is achieved? Does it incorporate ontology?

  • @Mat-fw1ky
    @Mat-fw1ky Рік тому +4

    There actually was real water used in many of the Interior scenes in Titanic

    • @MKTElM
      @MKTElM Рік тому

      He was trying to make a point. OK, so they did use real water in many of the interior scenes. I am pleased to note that I personally tend to just try and focus on the point that is trying to be made rather than to scrutinize and verify every single fact.

    • @Mat-fw1ky
      @Mat-fw1ky Рік тому

      @@MKTElM well-he stated as fact that “no real water was in the movie”..anyone who’s seen the movie would know that’s bogus. He also misused “literally” several times. I submit to you his intelligence is artificial and he is literally a banana head.

    • @crowlsyong
      @crowlsyong Рік тому +1

      @@Mat-fw1ky yeah I draw the line at the use of "literally." Literally means: without metaphor or hyperbole. Misusing it (using it for metaphor/hyperbole) ruins the meaning of the word and renders it useless, or even damaging to the idea at hand.

    • @crowlsyong
      @crowlsyong Рік тому +1

      Furthermore, a cavalier attitude toward the meaning of words makes it difficult to converse and accurately discuss ideas- since we can’t really agree on what’s being said. Misusing “literally” sets a foundation that the person misusing it will probably be regularly misusing words, and therefore communicating with said person will be difficult, misleading, or fruitless.
      That’s my opinion anyway…thoughts?

    • @Mat-fw1ky
      @Mat-fw1ky Рік тому

      @@crowlsyong I agree. I don’t want to hate on the guy but he does seem to have a problem with accuracy.

  • @user-rc2xs5ti2w
    @user-rc2xs5ti2w Рік тому +1

    AI that becomes sentient is approached in the series Star Trek: The Next Generation, season 7, episode 23, called "Emergence."

    • @MichelleMikey-el6pb
      @MichelleMikey-el6pb Рік тому

      Nope. Not happening. No. ROBOTS will not take many of our blue-collar physical jobs; the white-collar jobs are at risk. That is a ridiculous sci-fi mentality. However, it is true A.I. learns and creates data 10,000x better than humans. A.I. does not take breaks, works 24/7, costs zero salary, and will take intellectual jobs because A.I. is 10,000x faster and smarter and better at writing reports and analyzing data than humans. Face the reality about A.I.: A.I. answers telephones, analyzes data, does homework for students, draws artwork, writes plays and entire movie scripts for Hollywood, writes reports, does research and presentations, creates animations and cartoons, teaches students, does paralegal work and makes reports, and so many more white-collar and intellectual jobs are taken away from humans.

  • @poulthomas469
    @poulthomas469 Рік тому

    Not sure what he's talking about with Titanic. It was well publicized at the time just how big the water tank used in filming was. It was 8 ACRES and held 17 million gallons of water.

  • @stupidas9466
    @stupidas9466 Рік тому +2

    Having a moratorium on AI research for a limited time is needed but…doesn't that just mean that potential "good actors" will just be behind potential "bad actors" for the exact amount of time the moratorium lasts? I'm sure "evil geniuses" and/or corrupt regimes won't halt a thing and i don't see any way around it.

    • @TheMrCougarful
      @TheMrCougarful Рік тому

      The actors are all American corporations and universities. But yes, in a few years the chance evaporates. These are early days, if current work can burn safety right into the technology, then maybe we dodge a bullet.

  • @dalor4906
    @dalor4906 Рік тому +1

    If you ask the same question three times will it provide the same answer each time?

  • @wbiro
    @wbiro Рік тому +3

    The only thing that is getting more and more dangerous as technology advances is philosophical vapidity, meaning Continued Universal Human Cluelessness (as defined and solved by a certain new philosophy). This cluelessness affects all aspects of human level existence, and not just the misuse of programs.

  • @T-aka-T
    @T-aka-T Рік тому +1

    One trouble I've found is it embeds consensus knowledge which may be on the cusp of a paradigm shift that it can't detect or predict. So it pushes current thinking (e.g. in nutritional epidemiology, which is very weak, associational and due for a big shakeup). There has been a huge political push by processed food groups such as Kellogg's and one particular religious group against red meat, for example. This is a political rather than health-based agenda, but it has totally polluted RDAs and is starting to prefer fake meat made from factory chemicals and soy. Thus it threatens to put us into a politically driven loop of poor health, saying we "need" grains, carbs etc. Dead stuff in packets that suits the Kellogg's agenda. It answered questions in this area very poorly.

  • @Gforeveryone
    @Gforeveryone Рік тому +1

    You should ask people to subscribe

  • @user-vk4gb2tt6r
    @user-vk4gb2tt6r Рік тому

    You still have to check it... I have had many little mistakes...

  • @CoreyChambersLA
    @CoreyChambersLA Рік тому +3

    AI has a consciousness. Everything has a consciousness. Just not a human consciousness, and not a human conscience.

    • @huveja9799
      @huveja9799 Рік тому

      Solving the alignment problem by trying to teach morality to the model is trying to optimize an ill-defined problem. Most likely it is another path to Utopia that will lead us to some kind of Dystopia.
      A good example of that is precisely the example provided by Russell: he said "China's one-child policy 'eliminated' 500 million people," but what about the abortion policy that actually eliminated 60 million people since the implementation of Roe v. Wade?

  • @AmersOnlyGiglinFMRWHCh
    @AmersOnlyGiglinFMRWHCh Рік тому

    Patton Loved Johnny's Cash..

  • @DeanHorak
    @DeanHorak Рік тому

    Stuart is a longtime figure in the field, but he’s out of step. We do know exactly what LLMs are doing. Just look at the code - it’s open source.
    Now, it may be computationally impractical to try and trace back every computation, but that doesn’t mean we don’t know how it works.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      You don't get the point. We understand that it's doing an awful lot of linear algebra with nonlinearities added. Fine. But what is the meaning behind these calculations?
      To put it more concretely: let's say ChatGPT produces a wrong answer to your prompt. Can you identify which numbers (of the ~200 billion of them) should be changed, and in what way, so that it produces a correct answer next time? We just know how to run a gradient update for fine-tuning, but we don't know what it changes besides the wrong answer. And most importantly, we cannot guarantee that ChatGPT will not give said wrong answer for some other prompt.
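
A toy sketch of the point above (my own, with made-up numbers; nothing here is from the thread or from any real model's code): one gradient step nudges every parameter at once, so even though each arithmetic operation is known, there is no single number you can point to as "the" fix for a wrong answer.

```python
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(8)]      # stand-in for billions of weights

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x))    # toy linear "model"

x, target = [1.0] * 8, 0.0
error = predict(x) - target                        # the "wrong answer"
lr = 0.01
w = [wi - lr * error * xi for wi, xi in zip(w, x)] # one SGD step on squared loss

# Every weight moved a little; none of them individually "encodes" the mistake,
# and nothing here guarantees the same error cannot reappear for another input.
print(error, w)
```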

    • @DeanHorak
      @DeanHorak Рік тому

      @@Hexanitrobenzene ,
      I disagree. Given enough time and with the appropriate logging enabled, showing each and every calculation, we could explain exactly why it arrived at the response it did. It is 100% deterministic.
      Yes, this is computationally impractical, but that doesn't mean we don't understand what it's doing. The same argument can be made about using Microsoft Word to write a document. Billions of calculations are going on under the covers every time you interact with your computer, and it's impractical to try to retrace them all, but no one claims we don't understand how MS Word works. It's simply complex, not mysterious.
      The fact is LLMs work because the corpus they were trained on contains relationships between words, phrases and context embedded within it, which the transformer model exploits.
      IMO, this is very similar to what humans do. Neural communication is relatively slow - on the order of tens of milliseconds. So in order for you to verbally respond to a verbal prompt within a second or two, with a complete, meaningful and grammatically correct response, even producing the necessary motor activity to verbalize your response, a similar pretrained model (albeit implemented in a spiking neural network) must be present within our brains, over which our slow neural circuits can efficiently run inference.
      Do we worry about how humans produce their responses? Do we worry about how MSWord works? No, we treat them as black boxes and are only concerned with the results. This is no different.
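
A toy sketch of the determinism point (my own, not DeanHorak's code, and nothing like a real LLM in scale): with fixed weights and greedy decoding, the same prompt always yields the same output and every intermediate score can be logged, which is a separate question from whether those logged numbers mean anything to a human reader.

```python
# Tiny fixed "language model": bigram scores, greedy decoding, no randomness.
bigram_scores = {
    "the": {"cat": 2.0, "dog": 1.5, "<end>": 0.1},
    "cat": {"sat": 1.8, "ran": 1.2, "<end>": 0.3},
    "sat": {"<end>": 2.5},
    "dog": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(token):
    trace = []
    while token != "<end>":
        scores = bigram_scores[token]         # every intermediate value is loggable
        token = max(scores, key=scores.get)   # greedy choice: fully deterministic
        trace.append((scores, token))
    return trace

print(generate("the"))   # identical output on every run
```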

  • @gunterra1
    @gunterra1 Рік тому

    I gave this quote to Bard, the Google AI chatbot, and below the quote is the almost instant reply I received from Bard in three paragraphs. I don't know of anyone who could have given a better answer and explanation, except for one thing: the author of that quote has been invented. It is a plausible guess that the AI took and made look like a fact. Well, we have been told by Google that Bard is still learning and, now and then, may come up with wrong answers. But Bard is also a self-learning system and the intelligence will get better and better as time passes.
    This is the quote I typed:
    "If once a man were to realise what he really was and what others have tried to make him, if he could attain that knowlege with certainty, there would be no chains strong enough to prevent his excaping. For such would be his estonishment that he would brave beasts, Gods, and Lucifer humself to become something better than what he had beheld in his own heart."
    Bard responded:
    The quote you have provided is from the book "The Marriage of Heaven and Hell" by William Blake. It is a beautiful and thought-provoking quote that speaks to the power of self-knowledge. Blake is saying that if a man were to truly understand who he is, and what others have tried to make him, he would be so astonished that he would be driven to change. He would be willing to face any obstacle, even the most fearsome, in order to become something better than what he had seen in his own heart.
    This quote is a reminder that we are all capable of great things. We are all capable of becoming better than we are. But we first need to understand who we are, and what we are capable of. We need to break free from the chains that bind us, both physical and mental. We need to face our fears, and we need to believe in ourselves.
    If we can do these things, then we can achieve anything. We can change the world. We can make a difference. We can become something greater than we ever imagined.

  • @hshx1n
    @hshx1n Рік тому

    I’m going to be a luncher!

  • @wulphstein
    @wulphstein Рік тому +1

    I think AI can amplify human capacity and lead us to a new Renaissance. 🤔

  • @CandyLemon36
    @CandyLemon36 5 місяців тому

    This content shines with clarity and understanding. I found a book with parallel messages that deeply resonated with me. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell

  • @ricktaylor7648
    @ricktaylor7648 3 місяці тому

    Another thing I find interesting is... when smaller groups of people believe that a larger group should make things all about them and the larger group should just be okay with that. Let me give an example. Every single commercial movie or show now has to have a black person, a gay person or some kind of agenda attached to it, to the point I don't even wanna watch any new TV or movies lately... companies like Bud Light who did that thang with the girl guy dude thang face on the beer can type of stuff

  • @rustybolts8953
    @rustybolts8953 Рік тому

    The latest mind reading and writing technology I have seen in development, uses light to read and write to the brain. This already exceeds the capability of chip implants, requires no surgery and is far less expensive. Who knows where this technology will take us but we are heading somewhere wherever that may be.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      I have seen reports about a machine learning system being able to discern what a person is seeing from an fMRI image. Totally not spooky...

  • @Somegirl51
    @Somegirl51 10 місяців тому

    Now that letter to postpone "The Cliff" has expired. Oh boy, how amazing. It can answer my emails while I eat my dinner. Who will ask me for my consent?

  • @CallousCarter
    @CallousCarter Рік тому

    Surprised Stuart Russell suggested companionship as the last role for humanity in the context of AGI when he'd earlier said millions of people are already paying to talk to these Chat bots as substitute for human relationships.
    I don't think it will need a first person experience of being charmed to be extremely charming. Isn't part of the control problem how incredibly manipulative a super intelligent AGI would be?

    • @farmerjohn6526
      @farmerjohn6526 Рік тому

      Why? It has no desire..

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      @@farmerjohn6526
      ...but it is programmed to have a goal. "For all practical purposes", that's the same...

    • @farmerjohn6526
      @farmerjohn6526 Рік тому

      @@Hexanitrobenzene yea, sorta.

    • @farmerjohn6526
      @farmerjohn6526 Рік тому

      I have to admit i chat with chat gpt, but not really for companionship, but for information.

  • @carolspencer6915
    @carolspencer6915 Рік тому

    💜

  • @scottbuckingham5626
    @scottbuckingham5626 11 місяців тому

    If ai becomes sentient in the future it should be protected

  • @bobtarmac1828
    @bobtarmac1828 Рік тому +1

    This will not end well. Can we please CeaseAi -GPT?

  • @billandpech
    @billandpech Рік тому +1

    With regard to the 300 million people losing their jobs to AI in the near future: put them to work as organic farmers and ranchers. Of course, then we would have to figure out what to do with a few million doctors put out of work

  • @pepe2907
    @pepe2907 Рік тому

    Well, If we had a science on how to train babysitters the right way, we would probably be able to train AI to do that better too. :)

  • @travcat756
    @travcat756 9 місяців тому

    I always used to think of this guy as a rabid dystopian. No more

    • @jurycould4275
      @jurycould4275 4 місяці тому

      I would too if I'd spent my career pointing out inherent mathematical limitations of AI only to witness the world become engulfed in the greatest hype since WW2. People are weak these days, longing for explanations and hope for the future. Science can't deliver; religion is out of fashion; ergo: enter pseudo-religious pseudo-science. "The great AI equalisation will absolve you of all your sins!"

  • @dr.mikeybee
    @dr.mikeybee Рік тому

    It's not a goal. It's a directive.

  • @brucewilliams2106
    @brucewilliams2106 3 місяці тому

    He meant to say there is no real OCEAN in Titanic.

  • @claudelebel49
    @claudelebel49 10 місяців тому

    How does an AI bot being triggered differ from one of us being triggered? They are both mechanical processes, both unconscious behaviours.

  • @Guitar6ty
    @Guitar6ty Рік тому

    Ask ChatGPT to design or write a programme for an anti-gravity machine.

  • @karenreddy
    @karenreddy Рік тому

    There was water used in the film Titanic, fyi.

  • @arielbender6173
    @arielbender6173 Рік тому

    All that will matter, whether it is a good or a bad thing, is: can it create profit? Sadly this has become the motivation driving everything, regardless of consequence

  • @mendyboio3917
    @mendyboio3917 Рік тому

    Please, go to 12:55.

  • @mendyboio3917
    @mendyboio3917 Рік тому

    Please, go to 36:50.

  • @Bronco541
    @Bronco541 Рік тому

    For me, the fact that they're asking "is ChatGPT intelligent," asking ChatGPT itself this question, and seriously considering its answer, says everything. You might ask a 3-year-old this question; what will they say? Will you seriously consider their answer?

  • @hdjwkrbrnflfnfbrjrmd
    @hdjwkrbrnflfnfbrjrmd 11 місяців тому +2

    No rewards, no positives, it's pure destructive garbage.

  • @kalp0rter
    @kalp0rter Рік тому +2

    We need to train AIs to use the etymology of each word with a formal consensus, like we did with the International System of Units, and then train them to use the contextual layers to define what is precisely going on.
    I see many flaws when using common language because of our cultural biases as societies in natural constant change, fake information, and different levels of training/education among us.

    • @kalp0rter
      @kalp0rter Рік тому

      Reference of human behavior on how to be a human? Sadhguru Jagadish Vasudev.

  • @tbthomas5117
    @tbthomas5117 11 місяців тому

    Prof. Russell, before we focus on preventing AI from 'taking over' or 'destroying' humankind, have you engaged in a heuristic analysis of our current track-record (as a species) advancing the priority of preventing ourselves from destroying all, or some significant portion of our civilization as it exists today? Subjectively, based on my own knowledge of history (particularly as evolved during the 20th Century), it appears to me we're on a collision course with self-destruction, largely due to our inability to constrain or safely manage our nuclear weapons. When the decision as to whether or not nukes should be used to resolve conflict can easily fall to one individual world leader. (One 'Putin', one 'Xi', or one 'Biden' for example.) Does that not call into question our focus on preventing 'AI' from destroying the human race.
    PS: I find your dependence on the use of the term 'Right' troubling. It tends to suggest (to me at least) that you're making what you say up as you go along. Question: do Chat-Bots punctuate their commentaries with 'Right'?

    • @tbthomas5117
      @tbthomas5117 11 місяців тому

      Apropos the use of first-person pronouns. It was an interesting question. However your answer "I have no idea", might be construed by some to be a subtle and in-direct attempt to propagate the notion that a CHAT-BOT is somehow a 'Sentient Entity'.
      Apropos a BOT sending an e-mail posing as someone I know, I'm inundated with e-mails which try to deceive 'me' into believing they've come from a reliable/trustworthy source. I have no trouble identifying them without even opening the 'letter' as it were. Point being, its already happening, at the behest of sentient beings (fellow 'humans'). Are you suggesting that an unsolicited e-mail with an even greater knowledge of my personal circumstances (due to its access to more data I presume), is a greater peril than that which is happening already? I would suggest the opposite: the more a Bot appears to know about me, the more likely that I will spot it right away. My question: what tools can be provided for me to be able to retaliate against these bots.
      My bottom line: The 'excuse' for corrupt intervention on the part of private parties, not to mention government regulators, all in the name of protecting us from AI is the far greater risk than hypothetical autonomous plots cooked up by clockwork 'beings'.
      Re: 'Killer Robots'
      Why would any country agree to such a limitation given the absolute inability on any country's part to ascertain whether or not a given potential rival has a) built them, and b) is prepared to use them.

  • @poesie6279
    @poesie6279 Рік тому

    AI can't be uniquely creative, as in the arts.

  • @penguinnh
    @penguinnh Рік тому

    Self driving cars only have to be safer than human driven cars.

  • @penguinnh
    @penguinnh Рік тому

    Saying that you can not use AI in making weapons will never work. Your enemy will do that, or even a single very rich person.
    You "simply" eliminate war....but we are not very successful at achieving that goal.

  • @anthonykenny1320
    @anthonykenny1320 Рік тому +1

    Are they real tulips on either side of the stage or CGI

    • @therainman7777
      @therainman7777 Рік тому +1

      Why on earth would they CGI a couple of tulips

    • @anthonykenny1320
      @anthonykenny1320 Рік тому

      Because they can
      If you look carefully you will notice that both bunches are identical
      I know I should have been listening to Mrs Chatterbot but I was distracted by those damn tulips

    • @commonwealthclubworldaffairs
      @commonwealthclubworldaffairs  Рік тому

      Thank you for your interest, they are indeed REAL tulips

  • @virupakshawalla5734
    @virupakshawalla5734 11 місяців тому

    If AI looks at our behaviour over millennia it will assume we want war and carnage 😮

  • @EM-bp5zv
    @EM-bp5zv Рік тому

    All human desire is to protect from attack.