The world’s largest technology companies are intensifying their lobbying of the European Union (EU) for a more lenient approach to regulating artificial intelligence (AI), seeking to avoid the risk of substantial fines under the forthcoming AI Act.
This comes after EU lawmakers agreed on the first comprehensive set of AI rules in May, marking a milestone in global efforts to regulate the technology.
The AI Act establishes a legal framework for “general purpose” AI systems, such as OpenAI’s ChatGPT, but key details about enforcement and potential liabilities remain unclear. The law’s accompanying codes of practice, which are still being drafted, will determine how strictly these regulations are applied and whether companies will face challenges such as copyright lawsuits and multi-billion-dollar penalties.
The EU has invited stakeholders, including companies, academics, and other interested parties, to help develop these codes. According to an insider, nearly 1,000 applications to participate have been submitted, underscoring the high level of interest in, and potential impact of, the regulations.
“The code of practice is crucial. If we get it right, we can continue innovating,” said Boniface de Champris, a senior policy manager at CCIA Europe, a trade organization representing major tech firms like Amazon, Google, and Meta. He warned that overly restrictive or narrowly defined rules could stifle innovation.
Data usage is a major sticking point in the ongoing debate over AI regulation, particularly concerning the legality of using copyrighted material for training AI models without explicit permission. Companies such as OpenAI and Stability AI have faced scrutiny over whether their use of content, such as books or photographs, violates copyright law.
The AI Act will require companies to provide detailed summaries of the data used to train their AI models. This transparency could enable creators whose content was used without consent to seek compensation, though this aspect of the law is still being tested in the courts.
Some business leaders argue that these summaries should only include limited details to protect trade secrets. At the same time, copyright holders advocate for more transparency, insisting they have the right to know if their work has been exploited without authorization.
OpenAI, criticized for its lack of transparency about the data used to train its models, is among the companies seeking to join the working groups developing the code of practice. Google has also applied, with a company spokesperson confirming its involvement, while Amazon has expressed a desire to “contribute our expertise and ensure the code of practice succeeds.”