When OpenAI first introduced ChatGPT, it appeared to be a revolutionary tool, one capable of providing a single source of truth in an era plagued by misinformation and polarization. The technology's weaknesses quickly became apparent, however, most notably its tendency to hallucinate answers, and it became clear that ChatGPT relied solely on patterns in its training data rather than on objective facts.

After the release of ChatGPT, several other chatbots emerged from major tech companies such as Microsoft, Google, and Tencent. Each provided different results for the same prompt, depending on the model, its training data, and the guardrails applied. These guardrails aimed to prevent bias and the spread of disinformation and hate speech, but they did not win unanimous approval: conservatives, for instance, claimed that ChatGPT exhibited a liberal bias.

Alternative Approaches to Guardrails

In response to concerns about bias and misinformation, companies took varying approaches. Elon Musk expressed his intention to develop a less restrictive, less politically correct chatbot, while Anthropic established a “constitution” outlining values and principles for its chatbot. That constitution drew inspiration from the U.N. Declaration of Human Rights and aimed to capture diverse perspectives.

Open-Source Models and Guardrail Restrictions

Meta released its LLaMA 2 large language model (LLM) as open source, allowing anyone to download and use it freely. Guardrails and constitutions consequently matter less for these unrestricted models. However, recent research demonstrated a prompting technique that can bypass the guardrails of both closed-source and open-source models, potentially enabling malicious use.

The fragmentation of AI, much like our fragmented social media and news landscape, poses challenges for truth and trust. The array of disparate results from multiple models answering the same prompt only adds to the noise and chaos. And this fragmentation extends beyond text: as AI models become increasingly multimodal, capable of generating images, video, and audio, it reaches digital representations of humans as well.

Emerging Applications of Multimodal AI

One notable application of multimodal AI is the creation of “digital humans,” entirely synthetic entities that possess human-like faces and the ability to interact naturally with real humans. These digital humans have potential applications in customer service, healthcare, and remote education, offering highly realistic and sophisticated support.

AI-powered newscasters have already made appearances, opening the door to newsfeeds customized to individual interests. Kuwait News and China’s People’s Daily have experimented with digital human newscasters, and the startup Channel 1 plans to launch an AI-generated, CNN-style news channel. With scripts developed using language models, Channel 1 aims to produce personalized newscasts, even offering hosts with different points of view.

Synthetic faces created by AI have been found to be highly realistic and, surprisingly, more trustworthy than real faces, raising concerns about their potential nefarious use. While Channel 1 may not have sinister intentions, advancements in technology could allow others to create personalized news videos and manipulate opinions more effectively. As a society, we must be cautious about the erosion of truth and trust in the face of these developments.

Truth and trust have long been under attack, and the rise of AI-powered chatbots and digital humans further exacerbates this issue. As technology continues to advance, we must remain vigilant about the potential misuse of these tools. It is essential that we prioritize ethical considerations and implement safeguards to ensure the preservation of truth and trust, even as we navigate this new frontier of AI-powered communication.
