As the development of artificial intelligence (AI) services accelerates, regulators are struggling to control a technology that could disrupt the way societies and businesses operate. The European Union is leading the way in drafting new AI rules to address privacy and safety concerns that have emerged with the rapid advances in generative AI technology. However, it will take several years for the legislation to be enforced, leaving regulators to rely on existing laws to control the technology.
The Challenge of Regulating AI
“In absence of regulations, the only thing governments can do is to apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP. “If it’s about protecting personal data, they apply data protection laws, if it’s a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”
Generative AI models have become well known for making mistakes, or “hallucinations,” presenting misinformation with uncanny certainty. If a bank or government department used AI to speed up decision-making, individuals could be unfairly rejected for loans or benefit payments. Big tech companies, including Alphabet’s Google and Microsoft, have stopped offering AI products deemed ethically dicey, such as those used in financial products.
Regulators aim to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, according to six regulators and experts in the United States and Europe.
Regulatory Responses
The European Union is at the forefront of this effort: its draft AI rules could set the global benchmark for addressing the privacy and safety concerns raised by the generative AI technology behind OpenAI’s ChatGPT. In April, Europe’s national privacy watchdogs set up a task force to address issues with ChatGPT after the Italian regulator Garante had the service taken offline, accusing OpenAI of violating the EU’s GDPR, the wide-ranging privacy regime that took effect in 2018.
Data protection authorities in France and Spain also launched probes in April into OpenAI’s compliance with privacy laws. Garante, meanwhile, will begin examining other generative AI tools more broadly, a source close to the regulator told Reuters.
In the EU, proposals for the bloc’s AI Act will force companies like OpenAI to disclose any copyrighted material used to train their models, leaving them vulnerable to legal challenges.
In Britain, the Financial Conduct Authority is one of several state regulators that has been tasked with drawing up new guidelines covering AI. It is consulting with the Alan Turing Institute in London, alongside other legal and academic institutions, to improve its understanding of the technology, a spokesperson told Reuters.
French data regulator CNIL has started “thinking creatively” about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead. For example, in France discrimination claims are usually handled by the Defenseur des Droits (Defender of Rights). However, its lack of expertise in AI bias has prompted CNIL to take a lead on the issue.
As regulators adapt to the pace of technological advances, some industry insiders have called for greater engagement with corporate leaders. Harry Borovick, general counsel at Luminance, a startup that uses AI to process legal documents, told Reuters that dialogue between regulators and companies had been “limited” so far.
As the development of AI services continues, regulators are struggling to keep up with the pace of technological change. With the EU at the forefront of drafting new AI rules, regulators elsewhere are being encouraged to “interpret and reinterpret their mandates” to address privacy and safety concerns. Striking a balance between consumer protection and business growth is likely to remain the central challenge as the technology evolves.