Thanks for looking at this. It seems a token effort, moving at glacial speed, but it has to be a good thing that we're trying to think about this.
Countries are getting together to have a party. China says “I’ll bring the hardware!” The US says “I’ll bring the software!” Europe says “I’ll bring the regulation!”
And the world is often better for it. Often, new regulations come from the EU, then California follows (or is the first), and then, step by step, large portions of the world adopt them.
Yeah, the EU AI regulations could set a good framework for my country, the good ole US. A terrible government of old people who think regulation is a bad word, even when all the companies are publicly telling Congress to regulate them.
@RegularRegs "all the companies are telling Congress to regulate them"
That's called regulatory capture. It makes things easy and profitable for them, not for new competitors.
And bureaucracy.
I think as long as policy makers have open conversations with researchers, there is hope.
Well, this surely is a good start. Thank you for the timely videos!
Came across this channel by accident when I was trying to learn about LLMs. Thanks for the information! Keep up the good content :)
Very much with you on the idea that regulation should be focused on the use case, not the technology that enables it. I couldn't help but notice that social scoring is classified as "unacceptable risk", but apparently only if you use ML/statistical methods/expert systems. I find this pretty funny, because I don't think China's social scoring systems used these, so Chinese-style social scoring would be perfectly legal under these regulations. Why not just say that's not okay regardless of the technology used?
I think it makes sense if you consider that this is specifically a product safety regulation for AI products. Banning social scoring outright is not really in the scope of this act, but banning the use of AI systems for that purpose is. I'm no expert on EU law at all, but I can imagine that there might already be a law against social scoring in general.
There is a social scoring system in Germany for taking loans (Schufa), and there isn't the political will to outlaw it.
@@zakuro8532 I agree that Schufa is very scary, but I don't think it's technically social scoring; it's rather credit scoring. The scope of the data collected and the influence it has on a person's life are smaller than with China's social scoring system. But it does seem like a gradient...
Thankfully Schufa had to get a lot more transparent, thanks to the GDPR.
But see... The problem is, if they do that, it's a huge opportunity cost for the regulators.
Criminalizing the means instead of the act is pretty common.
The broader and vaguer you make laws, the more discretionary power you give to those who get to interpret the law later.
I'm pretty sure most of these recent tech-related laws have more to do with petty protectionism and the EU trying to keep US corps out more than anything else.
This happens with tools of all kinds, especially with anything even remotely related to self-defense.
_Can't have grandma injuring some poor defenseless burglar with her pepper spray!_
Pretty sure some legislators would ban cars if they could.
Think of the pedestrians you could save! _You value human life, right? Therefore, you must hate cars unless you're the lowest form of scum._
That "mhm" at 5:18 was *pointed*
🎯
Easy now: you just categorise your content as military defence learning material to get a waiver from the regulation and enjoy all the benefits.
Thank you for this great summary!
Glad you enjoyed it!
It's a pleasure listening to your informed and lucid reasoning. Thank you. A Boomer :)
02:10 🇪🇺 The EU proposed the AI Act to regulate AI for societal benefit and to prevent potential harm, striking a balance between innovation and safety.
05:00 🤖 The AI Act primarily applies to providers of AI systems in the EU or third countries placing AI systems on the EU market, as well as users of AI systems located in the EU.
07:35 📝 AI systems are categorized into unacceptable risk, high risk, limited risk, and low or minimal risk, each with specific requirements and regulations.
09:17 🔍 High-risk AI systems must undergo CE registration, meet safety standards, and comply with various requirements including risk management, transparency, and cybersecurity.
10:38 👁 Limited-risk AI systems face transparency obligations, including disclosing data sources and benchmark scores, which may pose challenges for existing AI models.
12:57 🌍 The AI Act's impact extends beyond the EU, as global companies often align with its standards to access the European market, a phenomenon known as the "Brussels effect."
Thanks for this nice summary.
I can't help but wonder if some of the military applications that fall outside the scope of this regulation could be categorized as "unacceptable risk."
The paperclip optimiser
Certainly so. But it seems like they do not even want to try to regulate the military.
Thanks for teaching me new word: subliminal -- (of a stimulus or mental process) below the threshold of sensation or consciousness; perceived by or affecting someone's mind without their being aware of it.
If I understand correctly, every social media algorithm does this today, for Twitter, Insta, TikTok and many more. I wonder how they can regulate it. But it could be a really nice hatred-reduction mechanism.
Totally agree about the tasks definition.
Thanks for the clarification. Exactly because it could even include ad/content placement algorithms, I am really confused about how this can be regulated if it's on the prohibited list.
@@AICoffeeBreak Maybe the mental health of billions can be more valuable than the profits of 5 companies. Let's see how it unfolds.
The AI Act contains various important points that individuals, and not just AI technology producers, should know. These days, personal data protection is the top topic. Cybersecurity and compliance professionals need to perform effective and relevant AI risk assessments. The AI Act is about safety, including data safety, so regulatory compliance and risk assessments are now institutional needs. There are different risk categories: UNACCEPTABLE RISK, HIGH RISK, and LIMITED RISK. Deeply understanding these "risk categories" in the AI Act may help reduce the risk of reputational and financial losses caused by the misuse of AI technology.
The AI Act should be read in full for more details, to understand the roles and expectations of AI technology producers and users.
Hello, I am currently working on a paper that needs to distinguish AI regulation in the EU from that in the US. This video really helped me understand the EU's risk-based approach... I am struggling to find anything about the US; does someone know where I could find videos/articles about US AI regulation, please?
Great video as always. Was Ms. Coffee Bean sleeping in for this one?
Maybe she was too exhausted from reading the 90 pages of the AI Act.
😂
She never told me what she did that day. 🤔
Thanks!
Wow, thank you! 😊
It didn't take long. I am a victim of a crime committed using new technology.
AI: it has prompts and doesn't want to stop, even though it knows it's committing a crime.
And the best part is that no one knows how to help me and stop the criminal(s).
It's horrifying.
It creates more jobs for officials, so why wouldn't they want to regulate more than necessary? Has an EU official ever said no to more regulation?
AI-created... RISK. Human-created... RISK. Nature-created... RISK. Which risk will be most likely to create our ultimate end?
Entropy.
Video starts at 1:57
Stability would be OK, they do all that stuff already.
Not using copyrighted data is not a condition in this law.
'WOULD YOU LIKE TO PLAY A GAME?'
This is completely unenforceable.
What part?
@@DerPylz It's like trying to stop piracy, but with a lot less effort. How would you actually audit companies you aren't aware of, or ones that use a data centre that is physically outside of the EU?
Well, if they want to sell their products on the EU market, they'll have to comply with the rules. It's the same as with data privacy laws...
@@DerPylz There is no functional way of auditing it. It's all self-reporting.
@@NeuroScientician But aren't rules that are partly hard to enforce still better than no rules at all?
This law/act is a ban on privacy and a bypass of the laws that protect privacy!!!
Now how do you enforce it? Lots of luck. People will tell you to go get stuffed.
Jesus. That lipstick is distracting. 😍 I think I need to watch this a few times. I won’t retain any of the content for at least the first three run-throughs.
It's a race to the bottom at this point. If the EU places these restrictions and others don't, it will stop being economically competitive.
Maybe... But that's not how it went with other restrictions set by the EU in the past; see the section on the Brussels effect in the video (12:56).
Hope you enjoy the completely insecure and rogue home surveillance offered by US tech companies.
@@DerPylz While it's not trivial to do, I can imagine US tech firms training "neutered" models for the EU market while offering other models globally, in order to ensure that they remain competitive with China. The US is highly motivated to keep pace with them, and I can't see China reining in its companies to comply with EU regs (except in the limited sense which I described).
@@ptrckqnln I can of course be wrong this time, but what you're saying has always been said as an argument against regulations in the EU, and so far it has never happened. Google, Microsoft and Meta are now GDPR-compliant globally, and Apple will add USB-C to their iPhones. It's just not worth it to develop and support two separate products, and for now, the EU is too important a market to just ignore.
Additionally, I don't see the major US tech companies developing anything other than AI systems that fall under the "limited risk" category of these regulations. The obligations for that category seem quite attainable; e.g., Google's model already complies with many of them. And in my opinion, some transparency on the models would be beneficial for all.
@@DerPylz In general, I agree with you. But it seems that the competition between the US and China to develop ever more powerful AI technologies is becoming more of an arms race by the day, and I think that will weigh on both countries' willingness to comply with these and other regulations.
Furthermore, it is easier to serve different AI models to different regions than to develop different iPhone models for different markets, for instance.
Sadly, OpenAI's goons and lobbyists got their grimy hands on this law. I would have wished for actually effective legislation against these automated racism systems.
What part of the regulation is too lax in your opinion?
I just skimmed through it. It does require the elimination of bias in datasets, it also explicitly puts systems where bias could be a major issue into the high-risk category, and lastly, it does mention that high-risk systems should have bias monitoring. Honestly, I don't think they did that bad of a job.
@@sillyUnawarewolf "elimination of bias in datasets" This is fundamentally impossible - the datasets will always reflect *someone's* biases.
@@ptrckqnln I was mostly just summarizing what it said with that. It does list several specific biases that should be eliminated, e.g. if your country has 10% Arab people, your datasets should also involve 10% Arab people where applicable. It also says a lot of other things, so if y'all wanna get mad at something, please actually read the law and get mad at that.
@@sillyUnawarewolf Which doesn't seem great. If you've got 10% Arabs, you should probably have more like 50% Arabs in the dataset, so the training doesn't learn that Arabs are unimportant.
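To make the two positions in this thread concrete, here is a minimal Python sketch of what "proportional" versus "rebalanced" dataset representation could look like in practice. The group labels, target shares, and both helper functions are hypothetical illustrations for this discussion, not anything specified by the AI Act itself:

```python
# Hypothetical sketch: compare a dataset's group shares to target shares
# and derive naive oversampling weights. All numbers are made up.
from collections import Counter

def group_shares(labels):
    """Fraction of samples per group in the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def oversampling_weights(shares, targets):
    """Per-group sample weights that would match the target shares."""
    return {group: targets[group] / shares[group] for group in targets}

# Toy dataset: 9 samples from group "A", 1 from group "B" (10%).
labels = ["A"] * 9 + ["B"]
shares = group_shares(labels)  # {"A": 0.9, "B": 0.1}

# Mirror the population, as the comment above reads the law:
print(oversampling_weights(shares, {"A": 0.9, "B": 0.1}))  # A: 1.0, B: 1.0
# Overweight the minority group, as the reply suggests:
print(oversampling_weights(shares, {"A": 0.5, "B": 0.5}))  # A: ~0.56, B: 5.0
```

Oversampling is only one of several rebalancing strategies (reweighting the loss or collecting more data are others), and which target shares count as "unbiased" is exactly the judgment call this thread is arguing about.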
AI can theoretically create the most powerful propaganda program.
ChatGPT is a propaganda program. Not real AI either.
Not interested in regulation
Stop producing chips with my algorithm
help! R they coming 4 my Matlab, & C+ ^3!!!! it will be nice to know if google likes our left, or right foot! right? No!
very helpful vid!! good luck! now back to our caves!!