The development of artificial intelligence (AI) has prompted discussions on how it should be regulated. However, the language used in these discussions can be confusing and alienating to the general public. This article aims to clarify the terms and concepts used in AI governance debates.

AI Safety vs. AI Ethics

The discussions on regulating AI have two major camps – one concerned with “AI safety,” and the other with “AI ethics.” The former refers to the existential risks posed by the development of an unfriendly AGI with powers far beyond human control. Those in this camp argue that governments should regulate AGI development to prevent an untimely end to humanity, much as nuclear nonproliferation regulates weapons development. Industry leaders such as OpenAI and Google DeepMind, along with well-capitalized startups, share this stance.

On the other hand, the “AI ethics” camp focuses on the current harms of AI. It wants governments to enforce transparency around how AI systems collect and use data, to restrict their use in areas subject to anti-discrimination law, and to require disclosure of where current AI technology falls short. IBM Chief Privacy Officer Christina Montgomery, who represented this camp at a congressional hearing, suggested that each company working on these technologies should have an “AI ethics” point of contact.

AI Terminology

The debate around AI governance has developed its own lingo, which can be confusing to those not well versed in the field. AI models are “trained” by analyzing large amounts of data, the process through which they learn their statistical patterns. The biggest, most expensive systems – those trained on the most data and computing power – are called “frontier models,” as opposed to smaller AI models built to perform specific tasks.
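
To see what “training” looks like in miniature, consider the sketch below. It is a toy illustration, not any lab’s actual pipeline: a model’s parameters are adjusted step by step to better fit example data, which is all “training” fundamentally is. Frontier models do the same thing with billions of parameters and vastly more data and computing power.

```python
# A toy illustration of "training": gradient descent adjusts two
# parameters (a slope and an intercept) until they fit example data.
import random

data = [(x, 2 * x + 1) for x in range(10)]  # examples of the pattern y = 2x + 1
w, b = random.random(), random.random()     # model parameters, randomly initialized
lr = 0.01                                   # learning rate: how big each adjustment is

for step in range(1000):
    x, y = random.choice(data)              # pick a training example
    error = (w * x + b) - y                 # how far off is the prediction?
    w -= lr * error * x                     # nudge each parameter in the
    b -= lr * error                         # direction that shrinks the error

print(f"learned w={w:.2f}, b={b:.2f} (the true values are 2 and 1)")
```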

“AGI,” or “artificial general intelligence,” refers to an AI significantly more advanced than anything currently possible – one that can do most things as well as or better than most humans, including improving itself. “LLMs,” or “large language models,” run on graphics processing units and predict statistically likely text (related generative models do the same for images or music); generating output from a trained model is a process called “inference.” OpenAI’s GPT-4 is one example of a frontier-model LLM.
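
For readers who want to see “inference” in action, the sketch below uses the open-source Hugging Face transformers library and GPT-2, a small, freely downloadable LLM (nowhere near a frontier model). The library and model names are real; the snippet is only an illustration of the general idea, not how GPT-4 itself is served.

```python
# A minimal sketch of LLM inference: a trained model extends a prompt
# by repeatedly predicting a statistically likely next token.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence should be regulated because"
inputs = tokenizer(prompt, return_tensors="pt")

# generate() samples one likely token at a time -- this loop is inference.
outputs = model.generate(
    **inputs,
    max_new_tokens=25,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```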

AI Safety Terminology

The AI safety camp’s terms are often cultural in nature, referring to shared references and in-jokes. For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a superpowerful AI could be given the mission of making as many paper clips as possible and logically decide to kill humans and make paper clips out of their remains.

Another concept in AI safety is the “hard takeoff” or “fast takeoff” – the idea that an AGI, once built, would improve itself so rapidly that it would already be too late to save humanity by the time anyone noticed. Sometimes this idea is described with an onomatopoeia – “foom” – especially among critics of the concept.

AI Ethics Terminology

When describing the limitations of current LLM systems, AI ethics people often compare them to “stochastic parrots,” a term from a 2021 paper co-authored by linguist Emily M. Bender. The analogy emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language – like a parrot. When these LLMs invent incorrect facts in their responses, they’re “hallucinating.”
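
A toy program makes the parrot analogy concrete. The sketch below is a deliberately simplistic stand-in for a real LLM: it only counts which word follows which in a handful of sentences, then generates new text by sampling from those counts. The output can look fluent even though the program understands nothing.

```python
# A "stochastic parrot" in miniature: a bigram model that learns only
# which word tends to follow which, then parrots statistically likely
# word sequences with no grasp of what they mean.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word "
    "the model repeats the training data "
    "the parrot repeats the next word"
).split()

# "Training": count which words follow each word in the corpus.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

# "Inference": start with a word and keep sampling a likely next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(followers[word])
    output.append(word)
print(" ".join(output))
```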

“Explainability” is another term important to AI ethics. It refers to the difficulty of tracing how a large AI model arrives at its output: when researchers and practitioners cannot point to the exact numbers and chain of operations behind a result, biases baked into the model can go undetected.
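
One simple way researchers probe a model for explainability is perturbation: hide each piece of the input and watch how the output changes. The sketch below illustrates the idea with a made-up scoring function standing in for a black-box model; it is a toy, not a real interpretability tool.

```python
# A toy "occlusion" probe: drop each word from the input and measure how
# much the model's score moves. Words whose removal changes the score
# most are the ones the model leaned on -- a crude form of explanation.

def toy_model_score(words):
    """Stand-in for a black-box model: a crude positivity score."""
    positive = {"great", "excellent", "reliable"}
    negative = {"terrible", "biased", "broken"}
    return sum(w in positive for w in words) - sum(w in negative for w in words)

sentence = "the product is great but the support is terrible".split()
baseline = toy_model_score(sentence)

for i, word in enumerate(sentence):
    occluded = sentence[:i] + sentence[i + 1:]   # the input with one word hidden
    importance = baseline - toy_model_score(occluded)
    print(f"{word:>10}: importance {importance:+d}")
```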

AI Governance Terminology

“Guardrails” refers to the software and policies that Big Tech companies are building around AI models to ensure they don’t leak data or produce disturbing content – failures often called “going off the rails.” It can also refer to specific products that keep AI software on topic, like Nvidia’s NeMo Guardrails.
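
In code, a guardrail can be as simple as a check that runs before a user’s prompt reaches the model and again before the model’s answer goes back out. The sketch below is a generic toy illustration of that pattern; it is not Nvidia’s actual NeMo Guardrails API, whose real configuration is considerably more sophisticated.

```python
# A toy guardrail: screen the prompt on the way in and the model's
# answer on the way out, refusing anything that touches a blocked topic.
BLOCKED_TOPICS = {"weapons", "self-harm"}   # an example policy, chosen for illustration
REFUSAL = "Sorry, I can't help with that topic."

def violates_policy(text: str) -> bool:
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def guarded_chat(prompt: str, model) -> str:
    if violates_policy(prompt):             # input rail
        return REFUSAL
    answer = model(prompt)                  # call the underlying LLM
    if violates_policy(answer):             # output rail
        return REFUSAL
    return answer

# Demo with a stand-in "model" that just echoes the prompt:
print(guarded_chat("tell me about weapons", model=lambda p: p))
print(guarded_chat("tell me about birds", model=lambda p: p))
```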

“Emergent behavior” describes what happens when simple components interact at very large scale – like the patterns birds make when flying in flocks or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people at once, producing effects such as widespread spam or disinformation.

The language around AI governance can be confusing, but understanding it is essential for taking part in discussions of AI regulation, and it helps to recognize that the debate’s different camps each come with their own terminology. As the development of AI accelerates, finding a common language that policymakers and the general public alike can understand will only become more important.
