NYC AI Bias Law: One Year In and What to Consider | Lunchtime BABLing 38

  • Published 28 Sep 2024
  • 👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
    📚 Sign up for our courses today: babl.ai/courses/
    🔗 Follow us for more: linktr.ee/babl.ai
    Join us for an insightful episode of "Lunchtime BABLing" as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace.
    Episode Highlights:
    Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.
    Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.
    Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance.
    Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits.
    Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.
    This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.
    🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.

COMMENTS • 1

  • @kevinferguson1999 2 months ago +1

    Good insight here, Shea. This makes me wonder about the self-identification questions often posed at the end of online job applications. Does Human Resources filter out, or remove, candidates at the top of the funnel based on the percentage of people who answer the "voluntary self-identification" questions a certain way: 1) gender, 2) Hispanic/Latino, 3) veteran status, and 4) disability? In other words, once they get, let's say, 100 male candidates, does that bucket close, and so forth?