Governments worldwide are racing to establish artificial intelligence (AI) regulations, with Europe taking the lead. A European Parliament committee voted to strengthen a flagship legislative proposal to create AI guardrails. The urgency to regulate AI has increased as technological advancements like ChatGPT demonstrate this emerging field’s benefits and potential risks.
Let’s take a closer look at the EU’s Artificial Intelligence Act:
How Do the Rules Work?
Initially proposed in 2021, the AI Act will govern any product or service that utilizes an AI system. The act classifies AI systems into four risk levels, ranging from minimal to unacceptable. Higher-risk applications will face stricter requirements, including transparency and the use of accurate data. Essentially, it establishes a risk management system for AI, as described by Johann Laux, an expert at the Oxford Internet Institute.
What Are the Risks?
One of the main goals of the EU is to mitigate AI’s threats to health, safety, and fundamental rights and values. This means that specific uses of AI are strictly prohibited. For example, systems that implement “social scoring” to judge individuals based on their behaviour are prohibited. AI that exploits vulnerable populations, such as children, or utilizes subliminal manipulation that can cause harm, like an interactive toy encouraging dangerous behaviour, is also forbidden.
Lawmakers further strengthened the proposal by voting to ban predictive policing tools, which analyze data to forecast crimes and potential offenders. They also expanded the ban on remote facial recognition, a technology that scans people’s faces in public and uses AI to match them against a database, leaving only a few exceptions for law enforcement purposes, such as preventing specific terrorist threats.
The objective is to prevent the emergence of a surveillance society based on AI, as stated by Brando Benifei, an Italian lawmaker leading the European Parliament’s AI efforts. The risks associated with these technologies are considered too high.
AI systems used in high-risk domains like employment and education, which can significantly impact individuals’ lives, will be subject to stringent requirements. These requirements include transparency with users and the implementation of risk assessment and mitigation measures.
At the other end of the scale, the EU’s executive branch considers video games and spam filters to be low- or no-risk AI systems.
What About ChatGPT?
The original proposal, spanning 108 pages, barely mentioned chatbots, requiring only that they be labelled as such so users know they are interacting with a machine. Later negotiations added provisions covering general-purpose AI, such as ChatGPT, subjecting it to some of the same requirements as high-risk systems.
One notable addition is the requirement to comprehensively document any copyrighted material used to train AI systems in generating text, images, video, or music resembling human work. This provision enables content creators to determine if their work has been used to train algorithms powering systems like ChatGPT, allowing them to seek redress for potential copyright infringement.
Why Are the EU Rules Significant?
While Europe may not be at the forefront of cutting-edge AI development, the European Union often sets trends with regulations that become de facto global standards. As Johann Laux explains, Europe’s sizeable population and relative wealth make it attractive for companies and organizations to comply with EU regulations rather than develop different products for various regions.
The EU’s aim goes beyond regulation; it also seeks to foster user confidence by establishing standard rules for AI. By instilling trust in AI and its applications, the European Union hopes to encourage its widespread use, unlocking AI’s economic and social potential.
What Are the Consequences for Rule Violations?
A company that violates the AI regulations could face a fine of up to 30 million euros ($33 million) or 6% of its annual global revenue, whichever is higher; for tech giants like Google and Microsoft, that could amount to billions of euros.
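As a back-of-the-envelope illustration of how the penalty scales with company size, here is a minimal sketch. The "higher of the two thresholds" reading follows the Commission's 2021 draft; the function name and figures are illustrative, not a statement of the final law.

```python
def max_ai_act_fine(annual_global_revenue_eur: float) -> float:
    """Illustrative maximum penalty under the draft AI Act:
    30 million euros or 6% of annual global revenue,
    whichever is higher (per the 2021 Commission proposal)."""
    FLAT_CAP_EUR = 30_000_000
    REVENUE_SHARE = 0.06
    return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)

# A firm with 100M EUR revenue: 6% is only 6M, so the 30M flat cap dominates.
print(max_ai_act_fine(100_000_000))      # 30000000
# A tech giant with 250B EUR revenue: 6% works out to 15 billion euros.
print(max_ai_act_fine(250_000_000_000))  # 15000000000.0
```

The crossover point sits at 500 million euros of revenue (30M / 0.06); above that, the percentage-based fine is what binds.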
What’s Next?
The rules may take several years to come fully into force. EU lawmakers are scheduled to vote on the draft legislation at a plenary session in mid-June.
After that, the 27 member states, the European Parliament, and the European Commission will negotiate the final text in three-way talks, during which further adjustments and changes may be made as they deliberate over the details.
Lawmakers hope the legislation will be approved by the end of the year or, at the latest, in early 2024. Companies and organizations will then have a grace period to adapt, typically lasting around two years.
The European Union’s proactive stance in regulating AI is significant not only for Europe but also on a global scale. By establishing comprehensive guidelines and regulations, Europe aims to strike a balance between harnessing the potential of AI and safeguarding individuals’ rights, safety, and well-being. These regulations can serve as a model for other regions grappling with AI governance, making Europe a trendsetter in the field.
As AI advances rapidly, Europe’s efforts to build guardrails around this technology serve as an essential foundation for responsible and ethical AI development. By instilling trust and confidence among users, these regulations aim to create an environment where AI can thrive while protecting society from potential risks and ensuring the respect of fundamental rights and values.