European Union (EU) negotiators have made history by reaching a landmark deal on comprehensive regulations for artificial intelligence (AI), making the EU the first major jurisdiction to establish clear rules in this field. The deal was the result of intense closed-door negotiations, which focused on contentious issues such as generative AI and the use of facial recognition surveillance by police.
However, civil society groups have expressed concerns that the deal does not offer adequate protection to individuals from the potential harm caused by AI systems. While the EU was initially at the forefront of developing AI regulations, recent advancements in generative AI necessitated updates to the existing laws.
Before the regulations can be officially implemented, they need to be approved by the European Parliament, which is expected to be a formality. It is projected that the law will not come into full effect until at least 2025, and it includes substantial financial penalties for violations.
One of the critical issues leading to the need for comprehensive regulations is the emergence of generative AI systems, such as OpenAI’s ChatGPT. These systems have highlighted the risks associated with rapidly advancing AI technology, including threats to jobs, privacy, and copyright protection.
While countries like the US, UK, and China have proposed their own AI regulations, they are still trying to catch up to Europe’s progress. The EU’s strong and comprehensive rules could serve as a model for other countries considering AI regulation.
Furthermore, AI companies that fall under the EU's jurisdiction will likely apply their compliance obligations internationally to avoid maintaining separate models for different markets.
The AI Act now includes regulations for foundation models, which serve as the backbone for general-purpose AI services like ChatGPT and Google’s Bard chatbot. These models will be subject to technical documentation requirements, compliance with EU copyright law, and detailed information about the training data used.
Advanced foundation models that pose “systemic risks” will face additional scrutiny and requirements, including mitigating those risks, reporting serious incidents, implementing cybersecurity measures, and reporting on their energy efficiency.
However, concerns have been raised about the lack of transparency regarding the data used to train these models and the potential risks they may pose to everyday life.
Facial recognition surveillance systems were a contentious topic during negotiations. Ultimately, a compromise was reached, allowing exemptions for law enforcement to use the technology in cases involving serious crimes.
Rights groups have expressed concerns about exemptions, potential loopholes in the AI Act, and the lack of safeguards for AI systems used in migration and border control.
Despite the EU reaching a deal on AI regulations, digital rights group Access Now suggests that significant flaws still exist in the final text. The group argues that the regulations may not go far enough in protecting individuals and addressing potential risks adequately.