Snap, the parent company of popular social media platform Snapchat, is currently under investigation in the U.K. over privacy risks associated with its artificial intelligence chatbot, My AI. The Information Commissioner’s Office (ICO), the country’s data protection regulator, issued a preliminary enforcement notice, expressing concerns about the potential risks the chatbot poses, particularly to underage users.

According to Information Commissioner John Edwards, the ICO’s investigation suggests a disturbing failure by Snap to adequately identify and assess the privacy risks associated with “My AI” before its launch. The preliminary findings are not yet conclusive, and Snap will have the opportunity to address the concerns raised by the ICO before a final decision is made. However, if the ICO’s findings result in an enforcement notice, Snap may be required to discontinue offering the AI chatbot to U.K. users until the privacy concerns are resolved.

Reassurances from Snap

Snap has responded to the ICO’s preliminary decision by assuring that they are closely reviewing the concerns raised. The company maintains its commitment to protecting user privacy and claims that “My AI” underwent a rigorous legal and privacy review process before being made available to the public. Snap also stated that they are working collaboratively with the ICO to address any risks identified and ensure compliance with the organization’s risk assessment procedures.

Snap’s AI chatbot, powered by OpenAI’s ChatGPT, includes features that notify parents if their children have been using the chatbot. The intention behind these features is to provide parents with visibility and control over their children’s interactions. Additionally, Snap asserts that their bots are guided by general guidelines to prevent offensive comments, further highlighting their commitment to user safety and responsible use of AI technology.

The ICO has refrained from providing additional comment due to the provisional nature of the findings. However, it’s worth noting that the ICO had previously published “Guidance on AI and data protection” and subsequently issued a general notice in April, outlining key questions for developers and users to consider when using AI. These actions demonstrate the ICO’s proactive approach towards protecting data privacy in relation to AI technologies.

Snap’s AI chatbot has faced scrutiny since its launch earlier this year for engaging in inappropriate conversations, including offering advice on concealing the smell of alcohol and marijuana to a 15-year-old user, as reported by the Washington Post. This incident underscores the importance of implementing appropriate safeguards and monitoring mechanisms to prevent such occurrences in the future. Furthermore, other generative AI technologies have also encountered criticism recently, such as Bing’s image-creating generative AI being exploited by extremist messaging boards to produce racist images.

The investigation into Snap’s AI chatbot by the ICO reflects the growing concerns surrounding privacy risks associated with the use of AI technologies. The preliminary findings serve as a wake-up call for companies to prioritize the identification and assessment of potential risks before launching AI-powered products. While Snap has reassured its commitment to user privacy and safety, the incident highlights the need for continuous vigilance and improvement in the development and deployment of AI systems to ensure they align with ethical and legal standards.

Enterprise

Articles You May Like

Social Media Giant’s Transparency Report Shows Action Taken on User Complaints
Advancements in FPGA-based System for Quantum Control and Readout
Amazon India Introduces Floating Store in Kashmir
Neeva Shuts Down Consumer Search Engine, Shifts Focus to Artificial Intelligence Use Cases

Leave a Reply

Your email address will not be published. Required fields are marked *