Understanding Fundamental Rights Impact Assessments in the EU AI Act | Lunchtime BABLing 27

COMMENTS • 3

  • @lhiwaya • 6 months ago

    Thank you for sharing your knowledge! What methodologies can you use for doing fundamental rights or human rights impact assessments? Is it the framework described in one of your publications, "A Framework for Assurance Audits of Algorithmic Systems"? I'm also aware of others, such as the HRIA methodology for digital activities from the Danish Institute of Human Rights. Are they comparable?

  • @reinouttuytten • 8 months ago +1

    Could you elaborate on who is obligated to carry out a FRIA?
    Chapter 3, Section 3, Article 27 says "deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5(b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights", but Section 3 itself is titled "Obligations of providers and deployers of high-risk AI systems and other parties", which also mentions providers.
    In another episode, you mentioned that carrying out a FRIA shows an enterprise's commitment to its customers, building mutual trust and differentiating it early from competitors that choose to postpone the process. I quote: "Now's the time to do that because pretty soon everybody's going to have to do this and you're just going to be one among a sea of people who are only meeting the floor of that regulation". Do you imply that in the future a FRIA will be mandatory for every business that implements AI systems, or only for businesses that implement high-risk AI systems?
    Thanks for the great content by the way!

    • @bablai • 7 months ago

      Most businesses will not need to conduct a FRIA as it is written in the law; it applies only to public bodies, certain private companies providing public services, and deployers of the credit-scoring and insurance systems referred to in Annex III. However, the risk assessment process outlined in Article 9 is not dissimilar from a FRIA, so providers of high-risk AI systems will effectively be completing assessments that have to consider fundamental rights.