AI Governance & Risk Management | Kartik Hosanagar | Talks at Google

  • Published Jul 28, 2019
  • Join Talks at Google for a conversation with Kartik Hosanagar, John C. Hower Professor of Technology and Digital Business at Wharton, about his new book A Human’s Guide to Machine Intelligence. The book is the result of years of Professor Hosanagar’s research, and explores the impact of algorithmic decisions on our personal and professional lives, and their unanticipated consequences. Kartik will explore how firms can make use of the tremendous opportunities and potential offered by machine learning and automated decision-making, while also doing their part to ensure algorithms are responsibly deployed.
    About Kartik:
    Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business at the University of Pennsylvania’s Wharton School of Business. Professor Hosanagar’s research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing and e-commerce. Kartik has been recognized as one of the world’s top 40 business professors under 40.
    Link to book here: goo.gle/2YT9MY3

COMMENTS • 15

  • @syarpieko1031
    @syarpieko1031 3 years ago +3

    Are there any best practices for setting up AI governance in an enterprise company?

  • @ASTVHOTSPOT
    @ASTVHOTSPOT 4 years ago

    👍

  • @tdreamgmail
    @tdreamgmail 4 years ago +2

    Do the Chinese followers know they're talking to a chatbot?

  • @Autists-Guide
    @Autists-Guide 4 years ago +1

    Good talk. A larger framework for Information Governance already exists, of course... COBIT2019.
    (I'm biased, of course, as I'm a COBIT trainer :)

  • @nomdeplume69
    @nomdeplume69 1 month ago

    So Microsoft's American avatar was really an American.

  • @Dr.Kananga
    @Dr.Kananga 4 years ago

    My fear is that humans will become too dependent on AI and lose the ability to trust their own expertise.

  • @akivaprivate595
    @akivaprivate595 4 years ago

    5:44 How does the computer know that the woman is African American?

  • @heetendrarathor3126
    @heetendrarathor3126 4 years ago +1

    What I think is that these biases actually reflect reality, and in the process of making AI unbiased you are actually making it idealized, unrealistic, and less useful for catching the real threat or the real candidate.

    • @remoneilwemogatosi544
      @remoneilwemogatosi544 4 years ago +1

      Hi,
      This is a bit circular. Don't we design objective tools precisely to reduce bias, and thereby make them more useful? With your statement, you are actually saying that our (subjective) biases are sufficient, in which case the only benefit is automating our biases. By making AI ideal (objective), we are doing the same thing that people in the scientific community do when they use statistical models with confidence intervals (a sketch of that idea follows this thread).
      What do you think?

    • @themeek351
      @themeek351 4 years ago

      @@remoneilwemogatosi544 Is this what we do? Reduce our bias in order to find truth or, at least, a useful result? What is needed here is truth about what is useful bias and what is not. Maybe Google should start looking into the liberal progressive bias of its board of directors! The last I heard, from a recent congressional hearing, is that it is at 100%! Now that's not useful bias! There should always be a truthful and useful bias toward your goals, while maintaining a balanced load of results across the human spectrum. God bless!

    • @remoneilwemogatosi544
      @remoneilwemogatosi544 4 years ago +1

      @@themeek351 Hi,
      I think you and I might be making different points. Care to give examples? I think you might be misunderstanding my reply to Heetendra.
      To your first question, yes. That is what we aim to do with science. The tools used in pursuit of, say, the factors and drivers of obesity are meant to be reliable. This, in turn, helps us take action (depending on your philosophy of governing). But that is beside the point; I need to emphasize that yes, we are trying to eliminate human bias in decisions and, not least, to avoid reading causality into cross-sectional data. But, again, I will wait for your examples and guidance on your point.
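
The exchange above turns on what an "objective" tool would look like in practice. As a minimal sketch, assuming a Python environment and entirely hypothetical model outputs, the snippet below computes one common fairness check, the demographic parity gap (the difference in positive-prediction rates between two groups), together with the kind of confidence interval the reply mentions. Nothing here is from the talk; all names and data are made up for illustration.

```python
# Sketch of a demographic parity check -- hypothetical data, illustration only.
from statistics import NormalDist

def positive_rate(predictions):
    """Share of positive (1) decisions in a list of 0/1 model outputs."""
    return sum(predictions) / len(predictions)

def parity_gap_with_ci(group_a, group_b, confidence=0.95):
    """Gap in positive rates between two groups, with a normal-approximation
    confidence interval for the difference of two proportions."""
    p_a, p_b = positive_rate(group_a), positive_rate(group_b)
    gap = p_a - p_b
    # Standard error of a difference of two proportions.
    se = (p_a * (1 - p_a) / len(group_a) + p_b * (1 - p_b) / len(group_b)) ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. ~1.96 for 95%
    return gap, (gap - z * se, gap + z * se)

# Hypothetical 0/1 decisions for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% positive rate
group_b = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 30% positive rate

gap, (low, high) = parity_gap_with_ci(group_a, group_b)
print(f"parity gap: {gap:+.2f}, 95% CI: ({low:+.2f}, {high:+.2f})")
```

If the interval excludes zero, the model's decision rates differ between the groups; whether that gap reflects "reality" or an unwanted bias is exactly the judgment the commenters are debating.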