A group of industry leaders and experts has issued a warning that global leaders should act to reduce the risks posed by artificial intelligence (AI). The statement, signed by dozens of specialists including Sam Altman, whose firm OpenAI created the ChatGPT bot, said that tackling the risks from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. The statement, published on the website of the US-based non-profit Center for AI Safety, gave no detail of the potential existential threat posed by AI but was intended to open a discussion about the dangers of the technology.

The fear is that humans would no longer have control over superintelligent machines, which experts have warned could have disastrous consequences for the species and the planet. Several of the signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is known as one of the godfathers of the industry, have made similar warnings in the past. Dozens of academics and specialists from companies including Google and Microsoft, both leaders in the AI field, signed the statement.

The Risks of Biased Algorithms and AI-Powered Automation

The success of ChatGPT, which demonstrated an ability to generate essays, poems and conversations from the briefest of prompts, sparked a gold rush that drew billions of dollars of investment into the field. However, critics and insiders have raised the alarm. Common worries include the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.

Critics have slammed AI firms for refusing to publish the sources of their data or reveal how it is processed, an opacity known as the “black box” problem. Among the criticisms is that the algorithms could be trained on racist, sexist or politically biased material. US academic Emily Bender, who co-wrote an influential paper criticising AI, said that the March letter, signed by hundreds of notable figures, was “dripping with AI hype”.

Despite the concerns, Altman has defended his firm’s refusal to publish the source data, saying that what critics really want to know is whether the models are biased. He added that the latest model was “surprisingly non-biased”. Even so, the risks of biased algorithms and AI-powered automation remain a concern for many experts in the field.

Experts argue that the risks posed by AI should be treated as a global priority alongside other societal-scale threats such as pandemics and nuclear war. Chief among the fears is that humans could lose control of superintelligent machines, with potentially disastrous consequences for the species and the planet. While AI’s success has drawn billions of dollars of investment, biased algorithms and AI-powered automation carry their own dangers, and the “black box” opacity of firms that will not disclose their training data or how it is processed only deepens the concern. Addressing and mitigating these risks is essential to the safe and responsible development of AI.
