European Union policymakers have reached a groundbreaking decision to regulate artificial intelligence (AI), establishing comprehensive standards to oversee the influential technology’s use.
The “AI Act” agreement emerged after nearly 38 hours of intensive negotiations between lawmakers and policymakers.
The agreement marks a significant step toward safeguarding the safety and fundamental rights of people and businesses, fulfilling the commitment outlined in the political guidelines, and has been widely welcomed as a milestone achievement.
Efforts to pass the “AI Act” gained momentum after OpenAI’s ChatGPT garnered attention last year, thrusting the field of AI into the public eye. This legislation is poised to become a global standard for governments seeking to leverage AI’s potential benefits while mitigating risks like disinformation, job displacement, and copyright violations.
Despite previous delays caused by disagreements over regulating language models and AI use by law enforcement and intelligence services, the legislation is now set for review by member states and the EU parliament.
The law requires tech companies operating within the EU to disclose the data used to train their AI systems and to conduct thorough testing, particularly for high-risk applications such as self-driving cars and healthcare.
The legislation prohibits the indiscriminate scraping of images from the internet or security footage to create facial recognition databases. However, exceptions permit law enforcement to use “real-time” facial recognition to combat terrorism and serious crime.
Consequences for tech firms that violate the law are severe: fines of up to 7% of global revenue, depending on the nature of the violation and the size of the company.