The first law on AI regulation | The EU AI Act

  • Published Nov 28, 2024

COMMENTS • 86

  • @theosalmon
    @theosalmon 1 year ago +4

    Thanks for looking at this. It seems a token effort, moving at glacial speed, but it has to be a good thing that we're trying to think about this.

  • @bdennyw1
    @bdennyw1 1 year ago +22

    Countries are getting together to have a party. China says “I’ll bring the hardware!” The US says “I’ll bring the software!” Europe says “I’ll bring the regulation!”

    • @conan_der_barbar
      @conan_der_barbar 1 year ago +13

      And the world is often better for it. Often, new regulations come from the EU, then California follows (or is the first), then, step by step, large portions of the world.

    • @RegularRegs
      @RegularRegs 1 year ago +2

      Yeah, the EU AI regulations could set a good framework for my country, the good ole US: a terrible government of old people who think regulation is a bad word, even when all the companies are publicly telling Congress to regulate them.

    • @MilwaukeeF40C
      @MilwaukeeF40C 11 months ago +2

      RegularRegs
      "all the companies are telling Congress to regulate them"
      That's called regulatory capture. It makes things easy and profitable for them, not new competitors.

    • @pircalabustefan9364
      @pircalabustefan9364 8 months ago +1

      And bureaucracy.

  • @harumambaru
    @harumambaru 1 year ago +4

    I think as long as policy makers have open conversations with researchers, there is hope.

  • @GodsOwn4142
    @GodsOwn4142 1 year ago +8

    Well, this surely is a good start. Thank you for the timely videos!

  • @jeremyvictor8266
    @jeremyvictor8266 1 year ago +3

    Came across this channel by accident when I was trying to learn about LLMs. Thanks for the information! Keep up the good content :)

  • @TheRyulord
    @TheRyulord 1 year ago +8

    Very much with you on the idea that regulation should be focused on use-case, not the technology that enables it. I couldn't help but notice social scoring is classified as "unacceptable risk" but apparently only if you use ML/statistical methods/expert systems. I find this pretty funny because I don't think China's social scoring systems used these so Chinese style social scoring would be perfectly legal under these regulations. Why not just say that's not okay regardless of the technology used?

    • @DerPylz
      @DerPylz 1 year ago +4

      I think it makes sense if you consider that this is specifically a product safety regulation for AI products. Banning social scoring outright is not really in the scope of this act, but banning the use of AI systems for that purpose is. I'm no expert on EU law at all, but I can imagine that there might already be a law against social scoring in general.

    • @zakuro8532
      @zakuro8532 1 year ago +2

      There is a social scoring system in Germany for taking loans (Schufa), and there isn't the political will to outlaw it.

    • @DerPylz
      @DerPylz 1 year ago +5

      @@zakuro8532 I agree that Schufa is very scary, but I don't think it's technically social scoring, but rather credit scoring. The types of data that are collected and the influence they have on a person's life are less than with China's social scoring system. But it does seem like a gradient...
      Thankfully Schufa had to get a lot more transparent, thanks to the GDPR.

    • @DefinitelyNotAMachineCultist
      @DefinitelyNotAMachineCultist 1 year ago

      But see... The problem is, if they do that, it's a huge opportunity cost for the regulators.
      Criminalizing the means instead of the act is pretty common.
      The broader and vaguer you make laws, the more discretionary power you give to those who get to interpret the law later.
      I'm pretty sure most of these recent tech-related laws have more to do with petty protectionism and the EU trying to keep US corps out more than anything else.
      This happens with tools of all kinds, especially with anything even remotely related to self-defense.
      _Can't have grandma injuring some poor defenseless burglar with her pepper spray!_
      Pretty sure some legislators would ban cars if they could.
      Think of the pedestrians you could save! _You value human life, right? Therefore, you must hate cars unless you're the lowest form of scum._

  • @doubtif
    @doubtif 1 year ago +4

    That "mhm" at 5:18 was *pointed*

  • @harumambaru
    @harumambaru 1 year ago +4

    Easy now: just categorise your content as military defensive learning material to get a waiver from regulation and use all the benefits.

  • @DerPylz
    @DerPylz 1 year ago +5

    Thank you for this great summary!

  • @RobertAlexanderRM
    @RobertAlexanderRM 1 year ago +2

    It's a pleasure listening to your informed and lucid reasoning. Thank you. A Boomer :)

  • @dameanvil
    @dameanvil 1 year ago +2

    02:10 🇪🇺 The EU proposed the AI Act to regulate AI for societal benefit and to prevent potential harm, striking a balance between innovation and safety.
    05:00 🤖 The AI Act primarily applies to providers of AI systems in the EU or third countries placing AI systems on the EU market, as well as users of AI systems located in the EU.
    07:35 📝 AI systems are categorized into unacceptable risk, high risk, limited risk, and low or minimal risk, each with specific requirements and regulations.
    09:17 🔍 High-risk AI systems must undergo CE registration, meet safety standards, and comply with various requirements including risk management, transparency, and cybersecurity.
    10:38 👁 Limited-risk AI systems face transparency obligations, including disclosing data sources and benchmark scores, which may pose challenges for existing AI models.
    12:57 🌍 The AI Act's impact extends beyond the EU, as global companies often align with its standards to access the European market, a phenomenon known as the "Brussels effect."
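    The four-tier risk pyramid from the summary above can be sketched as a toy lookup. The tier names follow the summary; the example use-cases and the mapping are illustrative assumptions, not the Act's actual annex definitions:

    ```python
    from enum import Enum

    class RiskTier(Enum):
        """The four risk tiers named in the summary above."""
        UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
        HIGH = "high"                  # CE registration, risk management, etc.
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # no specific obligations

    # Hypothetical use-case-to-tier mapping, loosely following examples
    # mentioned in the video; the real Act defines these in its annexes.
    USE_CASE_TIERS = {
        "social_scoring": RiskTier.UNACCEPTABLE,
        "biometric_identification": RiskTier.HIGH,
        "chatbot": RiskTier.LIMITED,
        "spam_filter": RiskTier.MINIMAL,
    }

    def classify(use_case: str) -> RiskTier:
        """Return the risk tier for a use-case, defaulting to minimal risk."""
        return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    ```

    The point of the pyramid is that obligations attach to the use-case, not to the underlying model, which is why the same model can land in different tiers depending on deployment.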

  • @CodexPermutatio
    @CodexPermutatio 1 year ago +3

    Thanks for this nice summary.
    I can't help but wonder if some of the military applications that fall outside the scope of this regulation could be categorized as "unacceptable risk."

    • @zakuro8532
      @zakuro8532 1 year ago +1

      The paperclip optimiser

    • @AICoffeeBreak
      @AICoffeeBreak  1 year ago +2

      Certainly so. But it seems like they do not want to even try to regulate the military.

  • @harumambaru
    @harumambaru 1 year ago +4

    Thanks for teaching me a new word: subliminal -- (of a stimulus or mental process) below the threshold of sensation or consciousness; perceived by or affecting someone's mind without their being aware of it.
    If I understand correctly, every social media algorithm does this today for Twitter, Insta, TikTok and many more. I wonder how they can regulate it, but it could be a really nice hatred-reduction mechanism.
    Totally agree about the task definitions.

    • @AICoffeeBreak
      @AICoffeeBreak  1 year ago +4

      Thanks for the clarification. Exactly because it could even include ad/content placement algorithms, I am really confused about how this can be regulated if on the prohibited list.

    • @harumambaru
      @harumambaru 1 year ago +2

      @@AICoffeeBreak Maybe the mental health of billions can be more valuable than the profits of 5 companies. Let's see how it unfolds.

  • @governanceriskcompliancegr9963

    The AI Act contains various important points that must be known by individuals, in addition to AI technology producers. These days, personal data protection is the top topic. Cybersecurity and compliance professionals need to perform effective and relevant AI risk assessments. The AI Act is about safety, including data safety, so regulatory compliance and risk assessments are now needs of institutions. There are different risk categories: UNACCEPTABLE RISK, HIGH RISK, and LIMITED RISK. Deeply understanding these "risk categories" in the AI Act may help in reducing the risk of reputational and financial losses that may be caused by the misuse of AI technology.
    The AI Act should be read for more details, to understand the roles and expectations of AI technology producers and users.

  • @lisa-kh9td
    @lisa-kh9td 1 year ago

    Hello, I am currently working on a paper that needs to distinguish AI regulation in the EU and in the US. This video really helped me understand the EU's risk-based approach. I am struggling to find anything about the US; does someone know where I could find videos/articles about US AI regulation, please?

  • @marklopez4354
    @marklopez4354 1 year ago +2

    Great video as always. Was Ms. Coffee Bean sleeping in for this one?

    • @DerPylz
      @DerPylz 1 year ago +1

      Maybe she was too exhausted from reading the 90 pages of the AI act

    • @AICoffeeBreak
      @AICoffeeBreak  1 year ago +1

      😂

    • @AICoffeeBreak
      @AICoffeeBreak  1 year ago +1

      She never told me what she did that day. 🤔

  • @BrianPeiris
    @BrianPeiris 1 year ago +2

    Thanks!

  • @Jan-fw3mi
    @Jan-fw3mi 6 months ago

    It didn't take long. I am a victim of a crime committed using new technology.
    The AI has prompts and doesn't want to stop, even though it knows it's committing a crime.
    And the best thing is that no one knows how to help me and stop the criminal(s).
    It's horrifying.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 11 months ago

    It creates more jobs for officials, so why wouldn't they want to regulate more than necessary? Has an EU official ever said no to more regulation?

  • @johnm.sr.7646
    @johnm.sr.7646 11 months ago

    AI-created risk, human-created risk, nature-created risk. Which risk will be most likely to bring about our ultimate end?

  • @heramb575
    @heramb575 1 year ago

    Video starts at 1:57

  • @timeTegus
    @timeTegus 1 year ago

    Stability would be OK, they do all that stuff already.

    • @timeTegus
      @timeTegus 1 year ago

      Not using copyrighted data is not a condition in this law.

  • @billienomates1606
    @billienomates1606 8 months ago

    'WOULD YOU LIKE TO PLAY A GAME?'

  • @NeuroScientician
    @NeuroScientician 1 year ago +4

    This is completely unenforceable.

    • @DerPylz
      @DerPylz 1 year ago +4

      What part?

    • @NeuroScientician
      @NeuroScientician 1 year ago +2

      @@DerPylz It's like trying to stop piracy, but with a lot less effort. How would you actually audit companies you aren't aware of, or that use a data centre that is physically outside of the EU?

    • @DerPylz
      @DerPylz 1 year ago +5

      Well, if they want to sell their products on the EU market, they'll have to comply with the rules. It's the same as with data privacy laws...

    • @NeuroScientician
      @NeuroScientician 1 year ago +2

      @@DerPylz There is no functional way of auditing it. It's all self-reports.

    • @DerPylz
      @DerPylz 1 year ago +1

      @@NeuroScientician But aren't rules that are in part hard to enforce still better than no rules at all?

  • @connectedonline1060
    @connectedonline1060 8 months ago

    This law/act is a ban on privacy and a way of bypassing laws that protect privacy!

  • @johnsavage6628
    @johnsavage6628 9 months ago

    Now how do you enforce it? Lots of luck. People will tell you to go get stuffed.

  • @Ben_D.
    @Ben_D. 8 months ago

    Jesus. That lipstick is distracting. 😍 I think I need to watch this a few times. I won’t retain any of the content for at least the first three runthroughs.

  • @ew3995
    @ew3995 1 year ago +4

    It's a race to the bottom at this point: if the EU places these restrictions and others don't, they will stop being economically competitive.

    • @DerPylz
      @DerPylz 1 year ago +6

      Maybe... But that's not how it went with other restrictions set by the EU in the past, see the section on the Brussels effect in the video (12:56).

    • @dtibor5903
      @dtibor5903 1 year ago +7

      Hope you enjoy the completely insecure and rogue home surveillance offered by US tech companies.

    • @ptrckqnln
      @ptrckqnln 1 year ago +2

      @@DerPylz While it's not trivial to do, I can imagine US tech firms training "neutered" models for the EU market, while offering other models globally in order to ensure that they remain competitive with China. The US is highly motivated to keep pace with them, and I can't see China reining in its companies to comply with EU regs (except in the limited sense which I described).

    • @DerPylz
      @DerPylz 1 year ago +3

      @@ptrckqnln I can of course be wrong this time, but what you're saying has always been used as an argument against regulations in the EU, and so far it has never happened. Google, Microsoft and Meta now have GDPR compliance globally, and Apple will add USB-C to their iPhones. It's just not worth it to develop and support two separate products, and for now, the EU is too important a market to just ignore.
      Additionally, I don't see the major US tech companies developing anything other than AI systems that fall under the "limited risk" category of these regulations. The obligations for that category seem quite attainable, e.g. Google's model already complies with many of them. And in my opinion, some transparency on the models would be beneficial for all.

    • @ptrckqnln
      @ptrckqnln 1 year ago +3

      @@DerPylz In general, I agree with you. But it seems that the competition between the US and China to develop ever more powerful AI technologies is becoming more of an arms race by the day, and I think that will weigh on both countries' willingness to comply with these and other regulations.
      Furthermore, it is easier to serve different AI models to different regions than to develop different iPhone models for different markets, for instance.

  • @Vaikilli
    @Vaikilli 1 year ago +2

    Sadly Open AI's goons and lobbyists got their grimy hands on this law. Would have wished for an actually effective legislation against these automated racism systems.

    • @DerPylz
      @DerPylz 1 year ago +4

      What part of the regulation is too lax in your opinion?

    • @sillyUnawarewolf
      @sillyUnawarewolf 1 year ago +4

      I just skimmed through it: it does require elimination of bias in datasets, it also explicitly puts systems where bias could be a major issue into the high-risk category, and lastly it does mention that high-risk systems should have bias monitoring systems. Honestly, I don't think they did that bad of a job.

    • @ptrckqnln
      @ptrckqnln 1 year ago +5

      @@sillyUnawarewolf "elimination of bias in datasets" This is fundamentally impossible - the datasets will always reflect *someone's* biases.

    • @sillyUnawarewolf
      @sillyUnawarewolf 1 year ago +1

      @@ptrckqnln I was mostly just summarizing what it said with that. It does list several specific biases that should be eliminated; for example, if your country has 10% Arab people, your datasets should also involve 10% Arab people where applicable. It also said a lot of other things, so if y'all wanna get mad at something, please actually read the law and get mad at that.
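
      The proportionality idea in the comment above (dataset shares roughly matching population shares) can be sketched as a toy check. The function name, group labels, and numbers here are all illustrative assumptions, not anything defined in the Act:

      ```python
      def representation_gap(dataset_counts: dict[str, int],
                             population_share: dict[str, float]) -> dict[str, float]:
          """Per group: the dataset's share minus the population's share.

          A positive value means the group is over-represented in the
          dataset; a negative value means it is under-represented.
          """
          total = sum(dataset_counts.values())
          return {group: dataset_counts.get(group, 0) / total - share
                  for group, share in population_share.items()}

      # Toy numbers: 100 samples, against a 90%/10% population split.
      gaps = representation_gap({"group_a": 85, "group_b": 15},
                                {"group_a": 0.90, "group_b": 0.10})
      # gaps["group_b"] == 0.05: group_b is over-represented by 5 points.
      ```

      Whether matching population proportions is the right target at all is exactly what the reply below disputes.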

    • @lomiification
      @lomiification 1 year ago

      @@sillyUnawarewolf Which doesn't seem great. If you've got 10% Arabs, you should probably have more like 50% Arabs in the dataset, so the training doesn't learn that Arabs are unimportant.

  • @_bustion_1928
    @_bustion_1928 1 year ago

    AI can theoretically create the most powerful propaganda program.

    • @MilwaukeeF40C
      @MilwaukeeF40C 11 months ago

      ChatGPT is a propaganda program. Not real AI either.

  • @urimtefiki226
    @urimtefiki226 11 months ago

    Not interested in regulation
    Stop producing chips with my algorithm

  • @__--JY-Moe--__
    @__--JY-Moe--__ 1 year ago +1

    Help! Are they coming for my Matlab & C+ ^3!!!! It will be nice to know if Google likes our left or right foot, right? No!
    Very helpful vid!! Good luck! Now back to our caves!!