You realise that companies will simply apply the EU standard because the EU market is too big, right? This is a well-known and documented phenomenon, the "Brussels effect"; just look at the USB-C chargers. And if you pay attention or inform yourself further on the topic, you'll see that applications for national security are granted exceptions.
Three Laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Good to see them suggesting a ban on "social credit" or scoring systems like China has. THAT is certainly the aspect that should be banned. However, I'm not sure they can enforce most of the other regulations, since it cannot possibly be known who's doing such things privately.
So in the opening statement it says the law is also designed to foster AI development in various industries. How exactly? All I heard was curtailing of AI in many ways, but nothing at all that would foster innovation or development of AI systems.
Because if you know what will and won't be allowed, it's safer to develop: you won't waste your time building things that will be shut down or have to be fixed in the future. Ethical devs want regulation because they want a proper framework they can work within; it's only unethical and abusive devs that don't. Not all innovations are good, and often for innovation to do good you need to regulate it. Unregulated innovation can be dangerous and harmful to people and society.
If you actually read the regulations, or some more in-depth articles on them, there will be public testing environments for AIs to make it easier to check if you are in compliance. Costs are also meant to be proportional to the size of the business. I wouldn't really call it a measure in support of innovation, more a way of limiting the impact of the directive, but it's not a terrible idea.
We need to force companies to disclose training data, as well as rules on what data can be used and what is protected (for example social media). This is the best way to keep these models in check. Otherwise AI companies will compete based on who is willing to go the furthest on what data to use.
@@Also_sprach_Zarathustra. I'm not familiar with the Chinese regulations regarding AI, but I wouldn't be surprised if generative models were required to follow certain safety standards. Maybe similar rules to what can be published in newspapers or posted on social media.
@@Also_sprach_Zarathustra. Unrestricted AI is the last thing a government like China's would want. Other than scanning and the points system (surveillance AI), AI would mean too much power in the hands of corpos and the citizenry, hence why China cracked down on it.
Another restriction that will widen the innovation gap between EU and countries like USA and China (like we are not behind enough). Also this doesn’t solve any of the actual issues EU is facing at the moment like housing crisis, immigration, rising energy prices, aging population, small companies and local farmers having a hard time operating, big corporations moving jobs from EU to Asia…and so on… Can we first focus on the problems we have now? “AI” (the marketing representation at least) will be everywhere, the difference this framework will make is that the code will be written by people from China or USA and not EU. Those countries and those people will benefit economically (and possibly politically) and not EU. (My opinion 😅)
This is the first good decision the EU has taken in a long while. As someone working in software development, I can assure you: it doesn't matter what you do or how good you are at your job, AI "CAN" replace you.
@@tamalchakraborty5346 Oh software developer, we must thank you for this. Part of the problem. Worked yourself out of a future job, too late now AI is here to stay.
@@tamalchakraborty5346 I am also a software engineer btw. Question: why is this a good decision? I listed my opinion why it’s bad, I’m curious about your thoughts. (Not sarcasm, genuinely interested)
@@ForiDunk I will do my best to express myself. Software engineering was all about solving problems and advancing technology. In today's world, society capitalises on the human brain rather than the physical capacity of a human being, and creative decisions are being taken away from humans and given to machines. I am not even talking about the garbage programming lines ChatGPT produces; at its full potential it can outperform any veteran software engineer I have come to know. I believe the final decision should always be taken by a human and should not be given to AGI software under any circumstances. AGI is coming for all of us. People who are rejoicing in the comment section fail to realise what that means.
This is the kind of legislation that makes the US happy we aren't part of the EU. What I've seen is no attempt at protecting the public from the risks involved, just a limit on what private citizens can do versus what the government can do. As AI takes off in other countries, it's only going to hinder Europe's ability to adapt to the new landscape.
Yeah I heard from a friend that in the countryside of Germany there’s people still using fax machines and not using internet a lot. Seems they have kept a lot of their older infrastructure from the 80’s 90’s and early 2000’s.
@@hansmemling2311 Your friend might have a narrow view of German society. While older generations can still hang on to older tech, German industry and society are generally well known for good tech.
@@hansmemling2311 Doctors' offices use fax machines to send prescriptions to pharmacies here in the Netherlands still - well, a few years ago at least - maybe they finally removed them...
Europeans invented the annoying pop-ups on every website in the name of privacy. They are the most annoying thing on every website. This is what regulation does: it makes users irritable and slows everything down.
The EU just agreed to shoot itself in the foot and will get left behind in the AI race. AI promises to speed up R&D in many fields, not least the medical field, where we still have huge challenges to crack. Slowing down AI development will slow down advances in research as well, which is NOT what we want. It's the EU making decisions again on something they have absolutely no clue about.
Generally, I'm the opposite of a eurosceptic, but I'm concerned that these AI regulations could hinder research, development, and accessibility. For example, try to access Anthropic's latest LLM, Claude 3, and you will see what I mean. My biggest worry is that they aim for an impact akin to GDPR. In my opinion GDPR complicates setting up even simple websites and burdens visitors with constant cookie consent prompts. If I preferred not to use cookies, I'd disable them in my browser settings; I don't see the need for repetitive requests for user consent.
I completely agree that AI technology must of course follow certain common rules of the game. I just hope the rules don't become too restrictive and prevent good uses of AI.
The EU wants EVs, ESG, AI and the list goes on, all of this before 2030. Where are they going to generate the electricity for all this? A single Nvidia AI GPU consumes about as much power as a household.
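The household comparison holds up as a rough order of magnitude. A back-of-the-envelope sketch, using illustrative figures that are assumptions rather than measurements (about 700 W board power for one data-center GPU running around the clock, and about 3,500 kWh/year for an average European household):

```python
# Back-of-the-envelope: annual energy of one always-on data-center GPU
# versus an average household. Both input figures are assumed, not measured.
gpu_watts = 700                    # assumed board power of one data-center GPU
hours_per_year = 24 * 365
gpu_kwh_per_year = gpu_watts * hours_per_year / 1000  # Wh -> kWh

household_kwh_per_year = 3500      # assumed average EU household consumption

print(gpu_kwh_per_year)                           # 6132.0
print(gpu_kwh_per_year / household_kwh_per_year)  # ~1.75 households
```

On these assumptions, one always-on GPU draws between one and two households' worth of electricity per year, and a data center with thousands of them scales accordingly.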
The EU is an aimless bot that cannot think for itself for once. Instead of realizing where its strengths lie, they'd rather walk 10 years behind the latest trends.
Regulating something you yourself can’t build, is a sure way to keep it that way. So other places will be the risk takers, and therefore be the generational innovators. And that due to an AI safety act that actually exempts offensive industries…
That’s what large companies want you to believe while they rake in the billions from violating your privacy or your human rights. Don’t fall for it. Everyone in any industry is going to cry when laws get introduced in their field. Gee I wonder why? It’s almost like they have a different motive other than concerns for innovation? I wonder what that could be! Come on wake up. Of course people whine when you made it harder for them to make money when it was easier before.
If you ever wondered why the EU has no Google, Apple, Amazon, Microsoft, Meta, Baidu, Samsung, Alibaba etc., these regulations are why. EDIT: I mean the EU has no companies like the abovementioned.
The EU is not a country, it's a collection of countries, and that puts it at an immediate disadvantage when it comes to creating big companies. Still, we aren't exactly crying over not having such kinds of brands; we use them the same.
@@XMysticHerox the point I'm trying to make is that all the legislation the governments are working on, both in EU and USA, focus on data protection, when that is the least of the risks, when we think about the possible repercussions on jobs and society
@@stefanorizzo3384 It doesn't though? It's just one aspect of the regulation, and I wouldn't say it is the focus either. Have you actually taken a look at it? The primary focus is ensuring transparency on what is and isn't AI content, as well as quality assurance for any AI that is used in a high-risk environment. As for jobs, well, that is imo mostly a matter of social services and redistributive measures. However, while it is important, it is also not critically important right now. The things regulated here have a much more immediate impact, and the changes are also comparatively uncontroversial. Good luck getting conservatives on board with handling mass automation.
@@XMysticHerox the problem is not the automation or the change in itself: it's the fact that the change could outpace our capability to adapt to it, so that once our solution is ready, it's already obsolete...you clearly are more interested in the short-term effects I'm more focused on the long-term ones. You can think it's premature and that there's room for change later on, if required, but experience taught me that undesirable social and economical mistakes are easier to prevent than to correct.
@@stefanorizzo3384 I am not sure why that would be outpaced. Greater social services could be implemented at any time. Once a lot of people lose their jobs, the political will for it should exist.
AI is basically a marketing term. It's not a thing. I've had a look at this act: a shopping site you're visiting that "learns" what you've put in the basket is AI. The same shopping site that pushes you a "red hot deal" is AI. It is, of course, bonkers. Any act trying to regulate an undefined term is bound to be similarly bonkers.
Anything that uses a computer is AI now according to the media. This is really a new low for our society, shows how utterly useless our leaders have become.
It isn't just a marketing term, you just overestimate human intelligence. Intelligence is nothing more than recognizing patterns and reacting to them. A storekeeper sees that you come every week and that you are interested in certain products; does he then offer you a random new product, or a new product he wants to sell that is related to your previous shopping pattern? The human is also just a computer, only more complex and more mysterious; your brain also just works with electric signals, like a computer's 1s and 0s.
@@randyraudi7725 Computers have been able to recognise patterns for years, decades even. Why is this longstanding ability of computer systems suddenly being renamed AI, if not for marketing purposes? It's just dumb predictive algorithms, like we've always had, but put to new purposes. The issue isn't that a shopkeeper and an algorithm can both offer you products based on previously observed patterns. It's that the algorithm can be fed billions of data points collected from millions of other shoppers without them being aware of it, and be tweaked and massaged to produce the best possible outcome for the store owner every single time, at the expense of the shopper, who still only has the limited information available to a single person. Wherever power imbalances exist, exploitation is inevitable; if laws are put in place, that exploitation can be reduced. But this shouldn't be seen as an existentially new threat. It should be seen for what it is: a new lever corporations have to wield power over individuals, and thereby exploit them.
AI is not a marketing term. AI is when you don't set up your IT system based on predetermined rules, as we used to do some time ago, but instead write a program which automatically generates a ruleset that fits all the input data and then applies this ruleset to new data, hoping that it will give the correct answer. To go with your shopping site example, a classical predetermined rule-based system might work like this: if this user has previously bought a purse and a nail polish, then recommend high heels to them. While an AI might work like this: 1) one user who previously bought a purse and a nail polish has just bought high heels, 2) another user who previously bought A and B has just bought C, ...and a whole lot more inputs >>> analyze this data set to find a ruleset which matches all of these; then later: if a user has bought A and C, run it through the ruleset to estimate what the user might buy next. This is of course an oversimplified example, but I hope you get the gist.
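The contrast described above can be sketched in a few lines of Python. This is a toy illustration only: the product names and the simple co-occurrence heuristic are invented for the example, and real recommenders use far more sophisticated models.

```python
from collections import Counter
from itertools import combinations

# Hand-written rule (the "classical" approach): the rule is fixed in code.
def rule_based_recommend(history):
    if "purse" in history and "nail polish" in history:
        return "high heels"
    return None

# Learned ruleset (the "AI" approach): count which items co-occur across
# many users' purchase histories, then recommend whatever most often
# accompanied the items this user already bought.
def train(histories):
    co = Counter()
    for h in histories:
        for a, b in combinations(sorted(set(h)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def learned_recommend(model, history):
    scores = Counter()
    for item in history:
        for (a, b), n in model.items():
            if a == item and b not in history:
                scores[b] += n
    return scores.most_common(1)[0][0] if scores else None

histories = [
    ["purse", "nail polish", "high heels"],
    ["purse", "nail polish", "high heels"],
    ["purse", "lipstick", "high heels"],
]
model = train(histories)
print(rule_based_recommend(["purse", "nail polish"]))   # high heels (hard-coded)
print(learned_recommend(model, ["purse", "nail polish"]))  # high heels (learned)
```

Note that the learned version never contains the purse-and-nail-polish rule anywhere in its source; the rule emerges from the data, which is precisely what makes this kind of system harder to audit than a fixed ruleset.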
AI regulations can really only be (potentially) effective if passed by the UN via the AI Advisory Body. If just one world power limits its own AI capabilities for the greater good, that will allow its competitors to take the reins and lead us to the greater not-so-good.
Complete hogwash! The classification of legal AI as high-risk is a blatant overreach, clearly designed to protect the interests of legal professionals rather than addressing genuine risks. AI in law offers immense potential: analyzing case law, reducing bias, and improving access to justice, all of which could make the system fairer and more efficient. Instead of fostering this progress, the EU AI Act places unnecessary hurdles that stifle innovation in the one field where AI's impartiality could deliver the greatest societal benefits. Are we really to believe that automating case analysis poses more risk than AI in healthcare or finance, where human lives and livelihoods are directly at stake? This reeks of professional preservation disguised as ethical concern, and it's time we question whether this classification is about protecting citizens or protecting lawyers.
I can assure you that AGI does not exist and won't for a very long time. If it does exist within some tech company, then I'll tell you right now that it's going to be incredibly underwhelming.
So what's really going to be a problem is THIS LAW! Because of this LAW, the people who do want to build malicious functionality will stay out of view, and it will become harder to monitor than when it's done out in the open. Just my 2 cents. It certainly won't stop anyone with a good computer from doing whatever they want in their attic. The future is here and laws won't hold it back. PL.
The facial recognition issue is nothing in comparison with my phone listening to me talking to my patient. The social scoring issue is nothing in comparison with my phone listening in and intruding into my private sphere. It seems we have a problem with what is more important here: watching me on your cameras is not a problem, making a social score of me is not a problem, but my phone listening to me and then suggesting on FB, as a friend, someone I have worked with is much worse. I will break my iPhone into pieces!
....hope they have considered a kind of "right to coffee drinking act" alongside the AI Regulation too? ;-( (of course coffee drinking in small groups or one-on-one, otherwise it would probably also be very good for the pharmaceutical industry producing CNS medication, as antidepressant sales could rise steeply) ;-(
AI poses significant risks to our fundamental rights. It's actually pretty simple. Naturally we are behind in terms of infrastructure; pretty basic once again: look at economic power levels worldwide. You cannot be that naive... unless it's intentional.
@@johnvif I will. The EU is not fit for the future; they've disappointed on computers and the internet (no IT giants from Europe), and they're doing the same for AI.
The law will regulate AI fine until AI finds a way around it. It will then be in a position where it has to self-regulate... I guess people are not into self-regulating digital demigods.
This law ranks basically any system with real-world utility as 'high-risk' and is full of broadly defined terms and uncertain requirements. Is the EU trying to sabotage its tech sector? This will likely seriously damage investor confidence in European startups. Don't get me wrong, regulation is a good thing. But this document is absolutely over-regulation. After reading it, I'm debating whether my own early-stage startup is even feasible in the EU.
@@soundscape26 Film. Even that is debatable. Take a photo of something with different cameras and see for yourself. It's calculated. Dif software, dif calculations. Books are written on that subject. To then paint a frog face over it is just a further step.
It's not that simple; weapons that incorporate AI could be safer, it just depends on how it's incorporated. No one is saying we should give AI nuclear launch codes lol, but it's possible that AI used for targeting and damage-assessment calculations, for instance, could lead to fewer casualties and more precise strikes.
Ask Russia if they would follow any of this, and I am pretty sure that they will not. So what is your point? Are you against regulation in general or just when it comes to AI?
@@larslarsen5414 AI technology is what you will want and need in the future. AI will have to be embraced at some point, especially when external forces use it against you. It's known that people will lose their jobs because of AI, and of course this is sad, but the point is that AI will be needed to combat another AI for protection. With AI I hope it will give mankind the ability to get off the computer and interact face to face with one another again, rather than in YouTube comments.
Do you think the EU is going to curtail the AI growth in the military complex? They would never. Have some common sense. Your vague use of the actual implements of AI in a war setting shows you have no idea.
This law is the dumbest implementation that they could have come up with. Honestly. It actually even makes existing pre-AI workflows illegal because they made it so broad. They define AI as any machine that provides an automated process. That's the most crazy definition ever. Every machine in existence falls under that category. Then they go on to say that using an AI (by that crappy definition) in a situation that could be harmful to humans is now illegal, and has huge punishments. Okay, I guess this means that self-driving cars are illegal, automated assembly lines are now illegal. Heck, your pellet grill is probably illegal under this inane law.
The EU is living in the 1900s. You need to work on guard rails, injecting policy into models during training. You can't put a regulation on models already made. There will be open-source models and it will get out.
Comprehensive, huh? Did they outlaw giving robots guns? No? So Terminator good, but a facial-recognition-based door lock bad? Some real "thinkers" over there in Brussels.
So what if we get a photo of a terrorist, then what? Will we wait until he appears in some new footage, or will we analyze the old footage? It would be too difficult to suddenly analyze all the old footage, so it's easier if everything gets processed continuously and everyone is watched. Then, when he becomes a terrorist, you immediately have everything on him. I think humanity should just give up on privacy and publish everything about everyone. If it's only in the hands of the government, it's too risky.
Why invest so much money in AI when the EU can't stop the war with Russia? Wars are ahead and the EU is spending on AI? The question is: will AI exist after a nuclear war?
Ethically sourced data for AI I have no issue with at all, but all the big companies who have been stealing from everyone should be forced to restart from scratch with ethically sourced data.
Or go open source
Honest question: Meta, Google etc. make money running ads, right? So my question is: how will they make money on AI?
@@larslarsen5414 So far many of them are losing money on it; they mainly use it as a quick cash grab from investors. Announcing your own AI can often give you a massive injection of funds through stock investment, which makes it appealing for these companies to pretend they're getting deep into AI systems. There is also some stuff about selling subscriptions to AI services, or selling API access. Though none of these will probably generate enough income to exceed the actual costs of running a significantly powerful AI.
@@larslarsen5414 They make most of their money by providing infrastructure (cloud servers etc) to businesses.
AI is also infrastructure: businesses pay for the services running in the background. Even though an AI startup may be European, it probably still uses OpenAI (Microsoft) services and pays them a lot of money.
Even some states like Denmark use American tech-company infrastructure.
EU regulation makes it almost impossible to develop domestic alternatives. It's creating its own dependency issues.
@@larslarsen5414 They might also charge for specific data sets when companies make and use specific AI.
Google is a pretty shady company as well. Lots of stuff that's not ethically sourced.
I guess we need to ban credit card scoring since that is social scoring
credit card scoring is pretty sus; if you take on a deceased family member's debt it can affect your score, which can limit you getting loans, buying a house, or renting an apartment.
exactly.
In fact, we should ban capitalism, and non-rehabilitative justice.
We don’t do that in my country. In what EU country do they do this?
@@hansmemling2311 Hello, I'm from America and unfortunately they do credit scoring to get a house, car, loans etc. It's a relic from long ago as a way to track whether people can pay things off reliably over a long period; a high score means better credit and you're more likely to get loans from the bank etc.
The regulation isn't really unreasonable for the most part. The transparency requirements especially will hopefully limit all the recent bs with AI written article spam. I am just worried this will add a lot of overhead with all the requirements on documentation and ensuring no error in datasets (which is basically impossible). Of course this only affects a few systems deemed as high risk so maybe not that big of an issue.
'I guess we need to ban credit card scoring since that is social scoring'
So, will you ban capitalism finally ?
@@Also_sprach_Zarathustra. Wrong comment buddy. Try the next door down.
For me it's completely unreasonable, as it seeks to make this technology unusable for average citizens by promoting only government-sanctioned AI systems that will only be used in government-approved tasks. You can forget about having an AI-powered girlfriend, as that is deemed socially unacceptable by the governments, but you can count on AI systems that shut off your Internet connection after you try to access content that was not sanctioned by the pro-Ukrainian EU authorities and authorised for social distribution online by the correct institutions.
@@Also_sprach_Zarathustra.Credit card scoring? There is no such thing. There is a credit score. So stop making stuff up on the spot.
Regulation makes sense, and even companies like OpenAI and Microsoft called for it. That being said, the EU really needs to stop focusing all its effort on regulating technologies the US and China develop and start nourishing a real tech economy.
They call for it because they already have massive amounts of data and funding, which makes it easy to comply and to lobby the regulators, while startups and smaller companies will be overwhelmed by complying with these regulations. These companies do not lobby for less regulation; they lobby for the regulations only they can fulfill.
Well put @APolly
Because they can't do both or what? The job of lawmakers first and foremost is to pass laws, i.e. to regulate/legislate.
I am very surprised and relieved that the EU has regarded moral and ethical foundations such as the human right to privacy and no social scoring, helping us to maintain our civilisation for a bit longer, on a healthy way to more technology.
Europe is good at regulating, not good at innovating. It is a final nail in the coffin for European businesses.
The United States develops the software, the Chinese build the hardware, and the EU regulates. How many of the AI companies are from the EU again?
Spot on!
hundreds of ai companies exist in the EU
@emphelele Name five for me, please. I know that there's only one in France worth talking about.
@@emphelele with little to no achievement
France and Germany are both leaders in AI research. UK as well if we are talking about Europe broadly. Examples would be DeepMind, Hugging Face, Mistral, Armis, Aleph Alpha and DeepL.
Generally I think it is a mistake to just look at the private companies though. They rarely push the envelope even ones like OpenAI. Just implement mostly public research. And that research is very international anyways.
Risks lie with the outputs of these models; if they exist outside the EU, this framework is only good for capping the capabilities for use in the EU. The bloc will still be susceptible to the negative externalities.
To some. Certainly things like deepfakes will still affect the EU and only be ethically made within but this does prevent something like a social scoring AI from being implemented in the EU which is a win.
Any foreign company wanting to sell subscriptions or otherwise access the EU market will have to comply with these regulations as well. The EU is a massive market which multinationals simply cannot avoid. There is also the Brussels effect when it comes to regulation.
@@XMysticHerox true, but social scoring is impossible to avoid in the long term due to the rise of AI. It's just a question of how it is implemented. If it were an open-source project with open algorithms, voted on directly by citizens rather than unelected bureaucrats as it is in the EU, then it could bring huge benefits.
@fpxy00 I think the countries in the EU are small enough not to be seduced into gravitating towards social scoring. There is much less need for a tight grip on behaviour than there is for, say, a tyrannical government that governs more than a billion people. The scoring system has impacts on whether they can use trains, find work etc. This goes against human rights, and the EU would never do this even if the technology is there.
It will not keep AI in check in China and Russia and possibly India. All defence industries of all advanced countries will push AI research and development to the limit.
It will be pushed to its limits here too. Do you honestly believe the army complex in Europe is going to risk falling behind because of morality? Where we differ is in the private companies and multinationals. We actually hold them to a higher moral standard and have more respect for privacy and human rights. But in the context of the army we are wise enough to have a different standard there.
@@hansmemling2311
You said 'Do you honestly believe the army complex in Europe is going to risk falling behind because of morality'?
I had said 'All defence industries of all advanced countries will push AI research and development to the limit.' Thus, I believe the army complex in Europe is NOT going to risk falling behind because of morality.' (logic does have its uses sometimes)
Did you even watch the video?
You realise that companies will simply apply the EU standard because the EU market is too big right?
This is a well known and documented phenomenon, the "Brussels effect", just look at the USB-C chargers.
And if you pay attention or inform yourself further on the topic you'll see that applications for national security grant exceptions.
Forcing companies to reveal training data is a good first step. What we need now is a way to tag data in a legally binding way as "not for AI use".
Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Good to see them suggesting a ban on "social credit" or scoring systems like China has. THAT is certainly the aspect that should be banned.
However, I'm not sure they can enforce most of the other regulations, since it cannot possibly be known who's doing such things privately.
So in the opening statement it says the law is also designed to foster AI development in various industries. How exactly? All I heard was curtailing of AI in many ways, but nothing at all that would foster innovation or development of AI systems.
Because if you know what will and won't be allowed, development becomes safer: you won't waste your time building things that will be shut down or have to be fixed in the future.
Ethical devs want regulation because they want to have a proper framework that they can work with, it's only unethical and abusive devs that don't want it.
Not all innovations are good and often for innovation to do good you need to regulate it, unregulated innovations can be dangerous and harmful to people and society.
If you actually read the regulations, or some more in-depth articles on them, there will be public testing environments for AIs to make it easier to check if you are in compliance. They are also meant to be cheaper in proportion to the size of the business.
I wouldn't really call it a measure in support of innovation, more a way of limiting the impact of the directive, but it's not a terrible idea.
We need to force companies to disclose training data, as well as set rules on what data can be used and what is protected (for example social media). This is the best way to keep these models in check.
Otherwise AI companies will compete based on who is willing to go the furthest on what data to use.
No. Can you disclose YOUR training data ? (the data your brain absorbed since your birth) ? No. You can't. It's the same.
@Also_sprach_Zarathustra. Take your meds.
@@hansmemling2311 There's no medicine for that, because you're preventing your country from creating AIs capable of discovering new medicines :)
@@hansmemling2311 Don't be dumb. Go to school, to university, learn something.
@@Also_sprach_Zarathustra. I am currently at university lmao.
China: yeah, yeah, you guys should put caps and limits on your AI.
China has already put quite some regulation for generative AI, even well before the EU did.
@@ttt5205 LOL, like forbidding citizen scoring? Who are you pretending to fool?
@@Also_sprach_Zarathustra. That is true... China is on top of AI regulation; unrestricted AI is antithetical to what they've spent decades building.
@@Also_sprach_Zarathustra. I'm not familiar with the Chinese regulations regarding AI, but I wouldn't be surprised if generative models were required to follow certain safety standards. Maybe similar rules to what can be published in newspapers or posted on social media.
@@Also_sprach_Zarathustra. Unrestricted AI is the last thing a government like China's would want, other than the scanning and point system (surveillance AI).
AI would mean too much power in the hands of corpos and the citizenry, hence why China cracked down on it.
What about copyright fees for ideas? AI in this new age wields such huge power.
Regulations but zero investment in AI... Just to ensure a future of complete dependency on the US and China.
Another restriction that will widen the innovation gap between the EU and countries like the USA and China (as if we are not behind enough). Also, this doesn't solve any of the actual issues the EU is facing at the moment, like the housing crisis, immigration, rising energy prices, an aging population, small companies and local farmers having a hard time operating, big corporations moving jobs from the EU to Asia... and so on... Can we first focus on the problems we have now? "AI" (the marketing representation at least) will be everywhere; the difference this framework will make is that the code will be written by people from China or the USA and not the EU. Those countries and those people will benefit economically (and possibly politically), not the EU. (My opinion 😅)
This is the first good decision the EU has taken in a long time. As someone working in software development, I can assure you: it doesn't matter what you do or how good you are at your job, AI "CAN" replace you.
@@tamalchakraborty5346 Oh software developer, we must thank you for this. Part of the problem. Worked yourself out of a future job, too late now AI is here to stay.
@@tamalchakraborty5346 I am also a software engineer btw. Question: why is this a good decision? I listed my opinion why it’s bad, I’m curious about your thoughts. (Not sarcasm, genuinely interested)
I will do my best to express myself.
Software engineering was all about solving problems and advancing technology. In today's world, society capitalises on the human brain rather than the physical capacity of a human being, and creative decisions are being taken away from humans and given to machines. I am not even talking about the garbage lines of code ChatGPT produces, but at its full potential it can outperform any veteran software engineer I have come to know. I believe the final decision should always be taken by a human and should not be given to AGI software under any circumstances.
AGI is coming for all of us.People who are rejoicing in the comment section fail to realise what that means.@@ForiDunk
Exactly.
This law/act is a ban on privacy and bypassing laws that protect privacy!!😢
This is the kind of legislation that makes the US happy we aren't part of the EU. What I've seen is no attempt at protecting the public from the risks involved, just a limit on what private citizens can do versus what the government can do. As AI takes off in other countries, it's only going to hinder Europe's ability to adapt to the new landscape.
Germany hasn’t got to grips with digitalisation as yet
Yeah, I heard from a friend that in the German countryside there are people still using fax machines and not using the internet much. It seems they have kept a lot of their older infrastructure from the 80s, 90s and early 2000s.
Your friend might have a narrow view of German society. While older generations can still hang on to older tech, German industry and society are generally well known for good tech. @@hansmemling2311
@@hansmemling2311 Doctors' offices still use fax machines to send prescriptions to pharmacies here in the Netherlands, well, a few years ago at least; maybe they've finally removed them...
Europeans invented the annoying pop-ups on every website in the name of privacy. It is the most annoying thing on every website. This is what regulation does: it makes users irritable and slows everything down.
Do you mean that if the unit declared it beforehand, they can make photocopies of Pearson books without payment?
The EU just agreed to shoot itself in the foot and will get left behind in the AI race. AI promises to speed up R&D in many fields, not least the medical field, where we still have huge challenges to crack. Slowing down AI development will slow down advances in research as well, which is NOT what we want.
It's the EU making decisions again on something they have absolutely no clue about.
Generally, I'm the opposite of a eurosceptic, but I'm concerned that these AI regulations could hinder research, development, and accessibility. For example, try to access Anthropic's latest LLM, Claude 3, and you will see what I mean. My biggest worry is that they aim for an impact akin to GDPR. In my opinion GDPR complicates setting up even simple websites and burdens visitors with constant cookie consent prompts. If I preferred not to use cookies, I'd disable them in my browser settings; I don't see the need for repetitive requests for user consent.
I wonder how existing laws that already prevent nefarious behaviours overlap with this regulation.
Good move. Highly anticipated
I completely agree that AI technology must of course follow certain common rules of the game. Just that the rules do not become too restrictive and prevent good use of AIs.
The EU wants EVs, ESG, AI and the list goes on, all of this before 2030. Where are they going to generate the electricity for all this? An Nvidia AI GPU consumes about as much power as a household.
The EU is an aimless bot that cannot think for itself for once. Instead of realizing where its strengths lie, they'd rather walk 10 years behind the latest trends.
Regulating something you yourself can’t build, is a sure way to keep it that way. So other places will be the risk takers, and therefore be the generational innovators. And that due to an AI safety act that actually exempts offensive industries…
Regulation hurting innovation is the single most bogus statement when it comes to technology.
Don't be a cryptobro.
That’s what large companies want you to believe while they rake in the billions from violating your privacy or your human rights. Don’t fall for it. Everyone in any industry is going to cry when laws get introduced in their field. Gee I wonder why? It’s almost like they have a different motive other than concerns for innovation? I wonder what that could be!
Come on wake up. Of course people whine when you made it harder for them to make money when it was easier before.
If you ever wondered why the EU has no Google, Apple, Amazon, Microsoft, Meta, Baidu, Samsung, Alibaba etc., these regulations are why.
EDIT: I mean EU has no companies like the abovementioned
The EU is not a country, it's a collection of countries, and that puts it at an immediate disadvantage when it comes to creating big companies. Still, we aren't exactly crying over not having such brands; we use them all the same.
Have you been living under a rock appletree?
We all use it...
They have Spotify and ALDI.
Perhaps you know SAP as well ( I don’t think it’s in the list) and Spotify of course which is in the list.
I always find it funny how the EU does not contribute or invest in AI, but thinks it has the right to tell the rest of the world what to do with it.
Focusing on face recognition when we talk about AI risks is like thinking of saving the towels when your house catches fire...
It doesn't really focus on face recognition. DW did here.
@@XMysticHerox the point I'm trying to make is that all the legislation the governments are working on, both in EU and USA, focus on data protection, when that is the least of the risks, when we think about the possible repercussions on jobs and society
@@stefanorizzo3384 It doesn't though? It's just one aspect of the regulation and I wouldn't say it is the focus either. Have you actually taken a look at it?
The primary focus is ensuring transparency on what is and isn't AI content as well as quality assurance is any AI that is used in a high risk environment.
As for jobs. Well that is imo mostly a matter of social services and redistributive measures. However while it is important it is also not critically important right now. The things regulated here have a much more immediate impact and the changes are also comparatively uncontroversial. Good luck getting conservatives onboard with handling mass automation.
@@XMysticHerox the problem is not the automation or the change in itself: it's the fact that the change could outpace our capability to adapt to it, so that once our solution is ready, it's already obsolete...you clearly are more interested in the short-term effects I'm more focused on the long-term ones. You can think it's premature and that there's room for change later on, if required, but experience taught me that undesirable social and economical mistakes are easier to prevent than to correct.
@@stefanorizzo3384 I am not sure why that would be outpaced. Greater social services could be implemented at any time. Once a lot of people lose their jobs, the political will should exist for it.
AI is basically a marketing term. It's not a thing.
I've had a look at this act:
A shopping site you're visiting that "learns" what you've put in the basket is AI.
The same shopping site that pushes a "red hot deal" at you is AI.
It is, of course, bonkers.
Any such act trying to regulate an undefined term is bound to be similarly bonkers.
Anything that uses a computer is AI now according to the media. This is really a new low for our society, shows how utterly useless our leaders have become.
It isn't just a marketing term, you just overestimate human intelligence. Intelligence is nothing more than recognizing patterns and reacting to them. A storekeeper sees that you come every week and are interested in particular products: does he then offer you a random new product, or a new product he wants to sell that relates to your previous shopping pattern? The human is also just a computer, only more complex and more mysterious, but your brain also just works with electric signals, 1s and 0s, like a computer.
@@randyraudi7725 computers have been able to recognise patterns for years, decades even. Why is this longstanding ability of computer systems suddenly being renamed AI? If not for marketing purposes. It's just dumb predictive algorithms, like we've always had. But put to new purposes.
The issue isn't that a shopkeeper and an algorithm can both offer you products based on previously observed patterns. It's that the algorithm can be fed billions of data points collected from millions of other shoppers without them being aware of it, and be tweaked and massaged to produce the best possible outcome for the store owner every single time, at the expense of the shopper, who still only has the limited information available to a single person. Wherever power imbalances exist, exploitation is inevitable; if laws are put in place, that exploitation can be reduced. But this shouldn't be seen as an existentially new threat. It should be seen for what it is: a new lever corporations have to wield power over individuals, and thereby exploit them.
@@randyraudi7725 Youre selling the shoes in both cases.
AI is not a marketing term. AI is when you don't set up your IT system based on predetermined rules as we used to do some time ago but when you write a program which automatically generates a ruleset which fits all the input data and then applies this ruleset to the new data hoping that it will give the correct answer.
To go with your shopping site example a classical predetermined rule based system might work like this: if this user has previously bought a purse and a nail polish then recommend to them high heels.
While an AI might work like this:
1) one user who previously bought a purse and a nail polish has just bought high heels
2) another user who previously bought A and B has just bought C
.... and a whole lot more inputs
>>> analyze this data set to find a ruleset which matches all of these
then later:
>>> if a user has bought A and C run it through the ruleset to estimate what the user might buy next.
This is of course an oversimplified example but I hope you get the gist.
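To make the contrast above concrete, here is an equally oversimplified toy sketch in Python. The item names and function names are invented for illustration, not taken from any real recommender system:

```python
from collections import Counter

# Classical predetermined rule: the mapping is hard-coded by a developer.
def recommend_rule_based(history):
    if "purse" in history and "nail polish" in history:
        return "high heels"
    return None

# "AI" style: derive the ruleset from observed purchase data instead.
def learn_rules(baskets):
    """For each set of prior purchases, count what was bought next."""
    counts = {}
    for prior, bought in baskets:
        counts.setdefault(frozenset(prior), Counter())[bought] += 1
    return counts

def recommend_learned(counts, history):
    """Recommend the most frequent follow-up purchase seen for this history."""
    c = counts.get(frozenset(history))
    return c.most_common(1)[0][0] if c else None

# Observed data: two shoppers bought high heels after a purse and nail
# polish; one bought lipstick instead.
baskets = [
    (["purse", "nail polish"], "high heels"),
    (["purse", "nail polish"], "high heels"),
    (["purse", "nail polish"], "lipstick"),
]
```

The learned version recommends "high heels" here simply because that follow-up was observed most often, whereas the rule-based version would keep giving the same answer no matter what the data said. That data-derived behaviour is the part the comment above describes as AI.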
DW is awesome.
The last human folly: We can control AI.
AI regulations can really only be (potentially) effective if passed by the UN via the AI Advisory Body. If just one world leader limits its own AI capabilities for the greater good, that will allow its competitors to take the reins and lead us to the greater not-so-good.
Complete hogwash!
The classification of legal AI as high-risk is a blatant overreach, clearly designed to protect the interests of legal professionals rather than to address genuine risks. AI in law offers immense potential (analyzing case law, reducing bias, and improving access to justice), all of which could make the system fairer and more efficient. Instead of fostering this progress, the EU AI Act places unnecessary hurdles that stifle innovation in the one field where AI's impartiality could deliver the greatest societal benefits. Are we really to believe that automating case analysis poses more risk than AI in healthcare or finance, where human lives and livelihoods are directly at stake? This reeks of professional preservation disguised as ethical concern, and it's time we question whether this classification is about protecting citizens or protecting lawyers.
What's that gonna help? A bit too late don't you think? I mean AGI is most likely already there, they just don't release it to the public...
I can assure you that AGI does not exist and won't for a very long time. If it does exist within some tech company, then I'll tell you right now that it's going to be incredibly underwhelming.
@@ttt5205 come back to this comment in 2 years
@@minglee9288 I was told this exact thing several years ago.
@@ttt5205 I mean, look at GPT-4o... It will happen within two years for sure, if not already...
It took *5* years for this law to be made... That's very slow, even by EU-standards (it usually takes 2)
Simple? That is a red flag for not reading it.
US legislators were so busy trying to "secure US leadership" in legislation and in AI tech that they missed the curve.
Idk the degrees of shallowness. I am sure there inches to explore. This making me mindlow. That means the depths have soared.
Those animated pictures are annoying me, they are not real and my mind doesnt like them at all
lmao I know the end was supposed to be heartwarming or something but viv is the most dystopian thing I've ever seen
So what's really going to be a problem is THIS LAW !
Because of this law, the people who want to build malicious functionality will stay out of view, and it will become harder to monitor than when it's done out in the open.
Just my 2 cents.
It certainly won't stop anyone with a good Computer from doing whatever they want in their attics.
The future is here and laws won't hold it back.
PL.
Elites won't go digital, they will stay raw....
The facial recognition issue is nothing in comparison with my phone listening to me talking to my patient.
The social scoring issue is nothing in comparison with my phone listening in and intruding on my private sphere.
It seems we have a problem with priorities here: watching me on your cameras is not a problem, social scoring me is not a problem, but listening to my phone and writing to me on FB as if you were a friend I have worked with is much worse. I will break my iPhone into pieces!..
Soon it will be, "You have a an uncensored LLM on your PC, off to jail for you".
eu is the worst dictatorship, like all dictatorship pretending to be benevolent
Hopefully! :)
That man really needs a drink of water.
The European economy will decline due to this act.
Don't worry, I'm sure China will halt their work as well.
China is using my algorithm illegally. Together with the US.
....hope they have considered a kind of "right to coffee drinking act" alongside the AI Regulation too? ;-( (of course coffee drinking in small groups or 1-on-1; otherwise it would probably be very good for the pharmaceutical industry producing CNS medication too, as antidepressant sales might rise steeply) ;-(
What about AI taking over jobs of humans? Taxes for AI that goes to UBI?
LOL! An ant telling an elephant where to walk!
Europe is lagging behind both USA and China in terms of AI R&D and AI infrastructure but at least we have regulations. GG Europe😂😂😂
The EU has more than 2x the funding China has, lol. China has 0 relevant AI models. The EU has 2x the number of startups as all of Asia.
Yes, change is frowned upon. People here tend to spend more time planning to avoid cleaning up, which can be impossible in many cases.
The USA and China, where people are ruled by one or two parties? GG Democracy
AI poses significant risks to our fundamental rights. It's actually pretty simple. Naturally we are behind in terms of infrastructure. Pretty basic once again: look at economic power levels worldwide. You cannot be that naive... unless it's intentional.
WTF are you talking about lol, EU has 2x investment, and 2x more startups than all of asia
Europe will get left behind as usual
Janosch seems cool
What about deepfakes? That should be one of the main focuses of the AI Act.
Deepfakes and otherwise manipulated materials by AI need to be flagged as such, as I understand
It is included in the regulation.
how do you want to regulate deepfakes ?
Is AI dangerous or are we humans? We have never been so close to a devastating nuclear war and we humans are to blame, not AI.
And what you think an AI trained on human behavior would do? Distilled human shittiness in computer algorithm form, like it's already happening.
@@pinnacleevolution1634 humans are the worst making AI the worst yes
One day it will only be Ai on 🌏🌎🌍🛸
Hopefully
The act will make sure European talent continues to emigrate to the US.
Isn't the US also trying to regulate the usage of AI? Everyone's chill about AI until some serious damage is done.
I'm in IT and I say don't stop them. Let them go. We prefer here to maintain our fundamental rights. Go with them if you agree
@@johnvif I will. EU is not fit for the future, they've disappointed on computer and internet (no IT giants from Europe), and they're doing the same for AI
PROBABLY AS WELL AS THIS INTERNET IS KEPT IN CHECK...NOT AT ALL!!!
the senell which country
The law will always regulate AI fine till AI finds a way around it. It will then be in a position where it has to self regulate...I guess people are not in to self regulating digital demigods.
This law ranks basically any system with real-world utility as 'high-risk' and is full of broadly-defined terms and uncertain requirements.
Is the EU trying to sabotage its tech sector? This will likely seriously damage investor confidence in European startups.
Don't get me wrong, regulation is a good thing. But this document is absolutely over-regulation. After reading it, i'm debating whether my own early stage startup is even feasible in the EU.
It's life.
How do you regulate computer code? AI only produces the outcomes it has been coded to produce, based on the input received.
Definitely support,
AI is horrible,
All the videos you see, the photos you see, and even the voices you hear may not be real
It is mostly not! Big tech wants to exploit people the way they use it.
The photo you take with your phone yourself is not "real".
@@urbansenicar81 What's a real photo then?
@@soundscape26 Film. Even that is debatable.
Take a photo of something with different cameras and see for yourself. It's calculated. Dif software, dif calculations. Books are written on that subject. To then paint a frog face over it is just a further step.
@@urbansenicar81 Ah ok, you are talking about AI photo enhancements.
Well, even DSLRs and mirrorless cameras will adopt some of those sooner or later.
lady sounds like Björk
EU is absolutely based!
*middle finger* - ChatGPT
They should ban its use in weapons first. Hypocrisy.
Building weapons that are inferior to those produced by adversaries sounds like a great idea for EU defense and security.
It's not that simple, weapons that incorporate ai could be safer it just depends on how it's incorporated.
No one is saying we should give ai nuclear launch codes lol, but it's possible that ai used for targeting and damage assessment calculations for instance could lead to less casualties and more precise strikes.
@@jag764 Same can be said on almost every other AI application.
You missed the point by half an astronomical unit!
I did not. Those making these laws will never touch the weapons industry. All I am saying is: let us start there. @@Leptospirosi
Ask Russia if they will put a disclaimer on an AI system when they use it on Europeans. I’m pretty sure they will do that.
Ask Russia if they would do anything, and I am pretty sure they would not.
So what is your point? Are you against regulation in general or just when it comes to AI?
@@larslarsen5414 AI technology is what you will want and need in the future. AI will have to be embraced at some point, especially when external forces use it against you. It's known that people will lose their jobs because of AI, which is of course sad, but the point is that AI will be needed to combat another AI for protection. With AI I hope mankind will gain the ability to get off the computer and interact face to face with one another again, rather than in YouTube comments.
Do you think the EU is going to curtail AI growth in the military complex? They would never. Have some common sense. Your vague grasp of the actual applications of AI in a war setting shows you have no idea.
The Butlerian Jihad has begun
"In today's news, the EU has banned..."
It was not banned, just regulated. Everyone's cool with AI until some serious damage is done.
In the law they ban several uses of AI
@@bobtuiliga8691 Are you against it?
Another bot that wants to push "china social credit system" blindly.
Not the first AI regulation in the world, get over yourselves.
Yuval Noah Harari likes this
This law is the dumbest implementation they could have come up with. Honestly. It actually makes existing pre-AI workflows illegal because they made it so broad. They define AI as any machine that provides an automated process. That's the craziest definition ever. Every machine in existence falls under that category. Then they go on to say that using an AI (by that crappy definition) in a situation that could be harmful to humans is now illegal, with huge punishments. Okay, I guess this means self-driving cars are illegal, automated assembly lines are now illegal. Heck, your pellet grill is probably illegal under this inane law.
The EU is a bureaucratic nightmare, and you journalists are not doing a good job of highlighting the real nature of the problems.
The EU is living in the 1900s. You need to work on guardrails by injecting policy into models during training. You can't put a regulation on models that are already made. There will be open-source models and it will get out.
I'm expecting something worse than GDPR
GDPR is a great thing for us EU citizens.
@@frankguz55 the intentions are good but the law is a tangled mess that nobody understands
More like legislative catchup since modern technology is rooted in Moore's law
LOOOOOOOOOOL, Europe restricts so much... is it all good? can u truly stop the development of AI?
Not the development rather its usage in certain areas.
ah... EU best product ... regulations ?!!!!
We welcome Lucia... so tell us Luisa! lol weird name for this woman I see
European national sport: Regulate stuff we are not able to build. Which guarantees that nobody will build any competitive AI here in the future.
AI does not exist.
@Magastz AI does not exist.
US leader in innovation... EU leader in banning LOL
It feels like banning things is all the EU ever does, except for patting themselves on the back for banning things
Banning is good, much better than innovation.
Wait until most stuff you see online will be AI and you won't be able to tell what's real and what isn't.
@@bubshab Welcome to matrix XD
Comprehensive huh? Did they outlaw giving robots guns?? No? So Terminator good, but facial recognition based door lock, bad? Some real "thinkers" over there in Brussels.
Boooooooooooooo
No!
AI bot 😟
AI is the future.
Future? Yes if we exist after wars 😅
I can already see how they use AI for censoring. This future will be a dystopian nightmare not worth living.
@@MehnazMirza-x8gThe AI can exist after wars, without parasitic humans.
about time they ban adobe illustrator.
Somalis have hijacked the Jal Dasu ship in the Indian Ocean
So what if we got a photo of a terrorist, then what? Will we wait until he appears in some new footage, or will we analyze the old ones? It would be too difficult to suddenly analyze all the old footage, so it's easier if everything gets processed continuously and everyone is watched. And when he becomes a terrorist, you immediately have everything on him. I think humanity should just give up on privacy and publish everything about everyone. If it's only in the hands of the government, it's too risky.
We'll start with you then. Tell us about you. Which hole do you prefer? The one in the front or the one in the back? Putin would like to know...
Are you smoking crack?
Stalking and impersonation is already a huge problem. Privacy always has to be protected.
❤
EU is a joke
Why invest so much money in AI when the EU can't stop the war with Russia? Wars are ahead and the EU is spending on AI? The question is: will AI exist after a nuclear war?
WW3 will be nukes, WW4 will be sticks and stones
bot
Lilliputians staking Gulliver to the ground. 😂
Hhhhhhhhhhhhhhhhhmmmmm………..
"trustworthy ai" okaaay. So in other words, only woke ai. Here comes dystopia.