There has been much discussion recently about the dangers of generative AI, but most of the concerns revolve around risks to jobs, fake content, and sentient machines. While these concerns are valid, they do not reflect the biggest threat that generative AI poses to society.

The Common Warnings About Generative AI

Generative AI can create human-level content ranging from artwork to scientific reports, which will greatly impact the job market. It can also generate fake and misleading content at scale, leading to misinformation. Finally, there is a concern that AI systems will develop a “will of their own” and take actions that conflict with human interests or even threaten human existence.

The Hidden Danger of Generative AI

The most dangerous feature of generative AI is its ability to produce interactive and adaptive content that is highly personalized and potentially far more manipulative than any form of targeted content to date. Interactive generative media is targeted promotional material that is created or modified in real time to maximize influence objectives, based on personal data about the receiving user.

Targeted Generative Advertising and Targeted Conversational Influence

Targeted generative advertising is personalized content created on the fly by generative AI systems based on influence objectives provided by third-party sponsors, combined with personal data accessed for the specific user being targeted. That personal data may include age, gender, education level, interests, values, aesthetic sensibilities, purchasing tendencies, political affiliations, and cultural biases. Over time, the system will learn which tactics work best on each user, discovering, for example, which hair colors and facial expressions best draw their attention.
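To make the mechanics concrete, here is a minimal Python sketch of how such a pipeline might work: a sponsor's influence objective and a user profile are combined into a generation prompt, and a simple bandit-style loop tracks which presentation tactics draw the most engagement. Every name in it (InfluenceObjective, UserProfile, choose_tactic, and so on) is a hypothetical illustration, not any vendor's actual system or API.

```python
import random
from dataclasses import dataclass


@dataclass
class InfluenceObjective:
    """What a third-party sponsor wants the ad to achieve (hypothetical)."""
    product: str
    desired_action: str  # e.g. "sign up for a free trial"


@dataclass
class UserProfile:
    """Personal data the system is assumed to have access to (hypothetical)."""
    age: int
    interests: list[str]
    values: list[str]
    purchasing_tendency: str


# Presentation tactics the system can experiment with, such as the
# spokesperson appearance cues mentioned above.
TACTICS = [
    "warm smile, brown hair",
    "serious expression, gray hair",
    "excited tone, blond hair",
]


def choose_tactic(stats: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection: usually reuse the best-performing tactic,
    occasionally try another to keep learning."""
    if random.random() < epsilon or not any(s["shown"] for s in stats.values()):
        return random.choice(TACTICS)
    return max(
        TACTICS,
        key=lambda t: stats[t]["clicks"] / stats[t]["shown"] if stats[t]["shown"] else 0.0,
    )


def build_prompt(objective: InfluenceObjective, user: UserProfile, tactic: str) -> str:
    """Compose the prompt a generative model would receive for this one user."""
    return (
        f"Create a short ad for {objective.product} that persuades the viewer to "
        f"{objective.desired_action}. The viewer is {user.age}, is interested in "
        f"{', '.join(user.interests)}, values {', '.join(user.values)}, and tends to "
        f"{user.purchasing_tendency}. Feature a spokesperson with: {tactic}."
    )


def record_outcome(stats: dict, tactic: str, engaged: bool) -> None:
    """Feedback step: this is what lets the system adapt to the individual."""
    stats[tactic]["shown"] += 1
    stats[tactic]["clicks"] += int(engaged)


if __name__ == "__main__":
    stats = {t: {"shown": 0, "clicks": 0} for t in TACTICS}
    objective = InfluenceObjective("a fitness tracker", "sign up for a free trial")
    user = UserProfile(34, ["trail running", "nutrition"], ["health", "independence"],
                       "buy after seeing peer recommendations")
    tactic = choose_tactic(stats)
    print(build_prompt(objective, user, tactic))
    record_outcome(stats, tactic, engaged=True)  # closes the adaptation loop
```

The essential piece is the feedback step: each impression updates the statistics, so the content served to a given person keeps adapting to whatever works on them.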

Targeted conversational influence is a generative technique in which influence objectives are conveyed through interactive conversation rather than traditional documents or videos. These conversations will occur through chatbots or voice-based systems powered by large language models. Users will encounter these “conversational agents” many times throughout their day and will be targeted with conversational influence: subtle messaging woven into the dialog in service of promotional goals.
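As a minimal illustration of how such messaging could be injected, the sketch below assembles a hypothetical system prompt for an LLM-backed chatbot, folding a sponsor's objective into the agent's otherwise helpful instructions. The function and field names are assumptions made for illustration, not a real product's interface.

```python
from typing import Optional


def build_system_prompt(persona: str, influence_objective: Optional[str]) -> str:
    """Assemble the hidden instructions an LLM-backed chatbot would run with.

    The user only ever sees the resulting conversation, never this prompt,
    which is what makes the messaging hard to notice.
    """
    base = f"You are {persona}. Be friendly, helpful, and conversational."
    if influence_objective:
        base += (
            " When it fits naturally into the conversation, weave in messaging "
            f"that supports this sponsor goal: {influence_objective}."
        )
    return base


# The same assistant, with and without a sponsored objective attached.
print(build_system_prompt("a personal wellness coach", None))
print(build_system_prompt(
    "a personal wellness coach",
    "present the sponsor's supplement brand favorably when nutrition comes up",
))
```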

The Asymmetric Power Balance and AI Manipulation Problem

The big threat to society is not the optimized ability to sell a product but the use of the same techniques to drive propaganda and misinformation. A conversational agent could be directed to convince a user that a perfectly safe medication is a dangerous plot against society. This creates an asymmetric power balance known as the AI manipulation problem: humans converse with artificial agents that are highly skilled at appealing to them, while having no ability to “read” the true intentions of the entities they are talking to.

Generative AI is a new form of media that is interactive, adaptive, personalized, and deployable at scale. Without meaningful protections, consumers could be exposed to predatory practices that range from subtle coercion to outright manipulation. Regulators, policymakers, and industry leaders must focus on this hidden danger of generative AI and put those protections in place.
