Geoffrey Hinton, known as the “Godfather of AI,” has recently resigned from Google, citing a desire to speak freely about the risks of artificial intelligence (AI). The move is surprising given his lifelong dedication to advancing AI, yet unsurprising in light of the concerns he has voiced in recent interviews. That Hinton announced his resignation on May 1, May Day, adds a layer of symbolism. May Day celebrates workers and the blooming of spring; ironically, AI, and particularly generative AI based on deep learning neural networks, may displace a large portion of the workforce. The World Economic Forum predicts that AI may disrupt 25% of jobs over the next five years. On the other hand, generative AI could mark a new beginning of symbiotic intelligence between humans and machines. Alternatively, AI could advance toward superintelligence, posing an existential threat.

Hinton’s Concerns

Hinton’s immediate concern is the ability of AI to produce human-quality content in text, video, and images. He believes bad actors can use this capability to spread misinformation and disinformation to the point where the average person can no longer tell what is true and what is not. He also now believes that machines will become more intelligent than the smartest people much sooner than expected; while most AI experts long viewed this as a far-off scenario, Hinton thinks it could happen in the near future. He wanted to speak about these concerns freely, which he felt he could not do while working for Google or any other corporation pursuing commercial AI development.

AGI and Emergent Behaviors

A related topic is artificial general intelligence (AGI). AI systems in use today excel at specific, narrow tasks, such as reading radiology images or playing games. In contrast, AGI would possess human-like cognitive abilities and perform a wide range of tasks at or above the human level across different domains. Estimates of when AI will outperform humans have varied widely, but new generative AI applications such as ChatGPT, built on Transformer neural networks, are recalibrating timelines for advanced AI. These models exhibit remarkable emergent behaviors that are novel, intricate, and unexpected. For example, GPT-3 and GPT-4 can generate code, a capability that was not part of the design specification but emerged as a byproduct of the models’ training. Hinton now believes AGI could be achieved in 20 years or less and that computers may come up with ideas to improve themselves, a prospect he finds troubling.
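
For readers curious about what the Transformer neural networks behind models like ChatGPT actually compute, the short Python sketch below implements scaled dot-product self-attention, the core operation of the architecture. It is a minimal illustration only: the array sizes, random weights, and function names are assumptions for the example, and a real model adds multiple attention heads, positional information, and feed-forward layers on top of this step.

```python
# Minimal sketch of scaled dot-product self-attention, the core
# operation inside Transformer models such as GPT-3 and GPT-4.
# Shapes and random weights here are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # pairwise token affinities
    weights = softmax(scores, axis=-1)           # each row sums to 1
    return weights @ v                           # weighted mix of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one context-mixed vector per token
```

Each output row blends information from every other token according to learned affinities; stacking many such layers and training them on vast text corpora is what gives rise to the emergent capabilities, such as code generation, described above.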

Final Thoughts

While it is not a given that generative AI specifically, or AI development more broadly, will lead to bad outcomes, the acceleration of timelines for advanced AI brought about by generative models has created a strong sense of urgency for Hinton and others. Hinton’s May Day resignation announcement may have been a play on words, but it can also be read as a distress signal, one that highlights the immediate and grave danger that he and other AI experts see in the unchecked development of AI.
