Conversations on Artificial Intelligence: Should It Be Trusted? | Public Lecture

  • Published 17 Jan 2024
  • Artificial Intelligence and big data are dramatically transforming the way we work, live, and connect. Innovators have begun designing AI solutions to advance society at a rapid pace, but often new technologies bring both promise and risk. How can we trust AI and safeguard society from unintended consequences to ensure a safe and human-centred digital future?
    Join the University of Waterloo in partnership with the Perimeter Institute for the TRuST Scholarly Network’s Conversations on… lecture series, where technology leaders from UWaterloo, Google, and NASA discuss how AI is transforming society and whether we should trust these technologies.
    Learn more about the event here: insidetheperimeter.ca/trust-s...
    Perimeter Institute (charitable registration number 88981 4323 RR0001) is the world’s largest independent research hub devoted to theoretical physics, created to foster breakthroughs in the fundamental understanding of our universe, from the smallest particles to the entire cosmos. Perimeter public events are made possible in part by the support of donors like you. Be part of the equation: perimeterinstitute.ca/inspiri...
    Subscribe for updates on future webcasts, events, free posters, and more: insidetheperimeter.ca/newslet...
    Donate: perimeterinstitute.ca/give-today
  • Science & Technology

COMMENTS • 10

  • @SurfCatten
    1 month ago +1

    Wow that was an excellent discussion. Surprised there aren't more comments. Thanks for this!

  • @garystevason1658
    3 months ago +2

    I am thinking that we should perhaps ask AI itself for help with this solution. I'm an old-school AI guy (chess, backgammon, poker, pinball, etc.), and yes, those early deductive methodologies are likely too innocent compared to the new Armageddon-style inductive concepts; that is, the new AI beats us simply through speed and the number and accuracy of the considerations it can make.
    I am hoping that it may be possible to have a universal auditing function running simultaneously that ensures each AI plays nice. I just wouldn't, couldn't trust mere humans to police our proposed limitations, including, as I mentioned earlier, any limitations our friend AI recommends for itself. The machine isn't bad; it is the greedy, malevolent users who need to be bridled by the auditing code.

    • @eddieheron1939
      3 months ago

      A desire to monitor & police is obvious and natural, but along with many other concerns over the medium to longer term, it takes just one bad egg!

  • @eddieheron1939
    3 months ago +4

    Whether it’s solely human, or AI in need of relevant directives, your expensive microphones need their gain and sensitivity adjusted so as not to pick up every intake of breath, while the amp output should be nulled when a speaker goes silent for more than a fraction of a second.
    Basically, the whole conversation goes null-gasp-talk, repeat, every phrase.
    The technical content is interesting and on topic, though public speaking is not a regular challenge for some, it seems.

    • @tbird81
      3 months ago +1

      Thank you.

  • @Kounomura
    3 months ago +2

    And what happens when everyone uses AI to develop their own strategy? AI can only mechanize knowledge, not "real" intelligence, because real intelligence requires its own experiences, its own trial-and-error eye-openers. AI, however, cannot collect its own experiences, because it does not live, has no emotions, and does not participate in life's struggles, joys, and pains. Humans have both knowledge and intelligence. The two are not the same.
    The biggest risk of AI is that it separates knowledge from intelligence. Intelligence can give correct answers to unexpected situations, but knowledge is not necessarily capable of doing so. And that can lead to huge problems. In humans, the two form an organic unity; they complement and support each other harmoniously and in a coordinated manner. Overall, it is possible that reckless "overdevelopment" of AI will do more harm than good. Only one thing is certain: with the help of AI, even better weapons of mass destruction can be produced and used.