Sundar Pichai, CEO of Alphabet, the parent company of Google, has committed to an “AI Pact” during a meeting with Thierry Breton, the European Commissioner for Internal Market. Pichai confirmed that Google would work with other companies to ensure that AI products and services are developed responsibly. The “AI Pact” is voluntary and is intended to take shape ahead of the legal deadline set by the EU’s AI regulation. Breton tweeted about the agreement, stating that the EU expects technology companies to respect all of its rules on data protection, online safety, and artificial intelligence. The European Parliament recently passed a new set of rules for AI, including provisions to ensure that the training data for tools such as ChatGPT does not violate copyright laws. The rules take a risk-based approach to regulating AI: applications of the technology deemed “high risk” would be banned, while applications with limited risk would face tough transparency requirements.

Concerns around advanced AI and the risk of disinformation

The rapid advance of AI has raised concerns among regulators, and industry leaders, politicians, and academics have sounded the alarm about new forms of the technology, such as generative AI and the large language models that power it. These tools allow users to generate new content, such as poems or essays, simply by giving them prompts. There are concerns both that these technologies will disrupt the labor market and that they could be used to produce disinformation. OpenAI’s ChatGPT, the most popular generative AI tool, has attracted more than 100 million users since its launch in November 2022. Google released its own alternative to ChatGPT, called Bard, in March, and unveiled an advanced new language model known as PaLM 2 earlier this month.

Commitment to safety in AI development

During a separate meeting with Vera Jourova, a vice president of the European Commission, Pichai committed to ensuring that Google’s AI products are developed with safety in mind. Both Pichai and Jourova agreed that AI could reshape the disinformation landscape and that everyone should be prepared for a new wave of AI-generated threats. Jourova also raised concerns about the spread of pro-Kremlin war propaganda and disinformation, including on Google’s products and services, and asked Pichai to take swift action on the issues faced by Russian independent media that cannot monetize their content on YouTube in Russia. She also highlighted the risks disinformation poses to electoral processes in the EU and its Member States.

Engagement with the bloc’s Code of Practice on Disinformation

Jourova praised Google’s “engagement” with the bloc’s Code of Practice on Disinformation, a self-regulatory framework released in 2018 and since revised, aimed at spurring online platforms to tackle false information. However, she said that “more work is needed to improve reporting” under the framework, which requires signatories to report how they have implemented measures to tackle disinformation.

Sundar Pichai’s meetings with EU officials demonstrate Alphabet’s commitment to responsible AI development and to collaborating with other companies to self-regulate AI products and services. The EU’s new rules aim to regulate AI on a risk-based approach and place restrictions on high-risk applications. The concerns surrounding advanced AI and the risk of disinformation highlight the need for responsible development in this area. Google’s engagement with the bloc’s Code of Practice on Disinformation shows the company’s willingness to tackle false information online, though more work is needed to improve reporting under the framework.
