In 2016, AlphaGo became the first machine to defeat a world-class professional player at the board game Go, and in 2023 ChatGPT, an AI chatbot, passed legal and medical licensing exams and solved riddles. These machines can do things that their creators cannot fully explain. The technology is exciting, but it also provokes anxiety, and its risks, while not yet fully understood, stem directly from the particular recipe followed to create it.

The Recipe for Machine Intelligence

Machines can now learn from experience, which allows them to become intelligent without thinking in a human way. Intelligence, in this sense, is the ability to do the right thing in an unfamiliar situation, an ability found even in a machine that recommends a new book to a user. It is important to recognize that intelligence is not an exclusively human ability, and that our brand of intelligence is neither its pinnacle nor its destination.
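To see how little human-style thinking that requires, consider a minimal sketch of a book recommender. The ratings, book titles, and `recommend` function below are invented for illustration; the point is that the machine suggests an unread book purely from co-rating patterns, with no notion of plot, genre, or meaning.

```python
from math import sqrt

# Toy ratings, user -> {book: rating}. Purely illustrative data.
ratings = {
    "alice": {"Dune": 5, "Neuromancer": 4, "Foundation": 5},
    "bob":   {"Dune": 4, "Neuromancer": 5, "Snow Crash": 4},
    "carol": {"Foundation": 5, "Snow Crash": 3, "Neuromancer": 4},
}

def cosine(a, b):
    """Cosine similarity between two {user: rating} vectors."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[u] * b[u] for u in common)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm

def recommend(user):
    """Suggest the unread book whose rating pattern best matches the user's reads."""
    # Invert the table to book -> {user: rating} vectors.
    by_book = {}
    for u, books in ratings.items():
        for b, r in books.items():
            by_book.setdefault(b, {})[u] = r
    read = ratings[user]
    candidates = set(by_book) - set(read)
    # Score each unread book by its similarity to the books the user liked.
    scores = {
        c: sum(cosine(by_book[c], by_book[b]) * r for b, r in read.items())
        for c in candidates
    }
    return max(scores, key=scores.get)

print(recommend("alice"))  # "Snow Crash", inferred from co-rating patterns alone
```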

Shortcuts in Machine Intelligence

The crisis that hit the AI industry in the late 1980s led researchers to take shortcuts in building machines that could mimic intelligent behavior. The first shortcut was to base decisions on statistical patterns found in data, removing the need to model complex phenomena such as language. The second was to harvest data from the web rather than create it specifically for training. The third was to observe users' behavior continuously and infer from it what they might click on next. This recipe now underpins virtually every online translation, recommendation, and question-answering tool; the sketch below shows the first two shortcuts in miniature.
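The following is a hypothetical illustration, not any production system: a bigram model that "completes" text using nothing but word-pair counts gathered from a corpus. The tiny corpus stands in for text harvested from the web; the model has no grammar and no notion of meaning, only statistics.

```python
from collections import Counter, defaultdict
import random

# A stand-in for text "harvested from the web" (shortcut two); any corpus works.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Shortcut one: reduce language to statistical patterns -- here, bigram counts.
follows = defaultdict(Counter)
words = corpus.split()
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

def complete(word, length=8, seed=0):
    """Extend a prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        # Sample in proportion to observed counts; meaning plays no role.
        out.append(rng.choices(list(nxt), weights=nxt.values())[0])
    return " ".join(out)

print(complete("the"))  # fluent-looking output from pure pattern-matching
```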

Problems with Machine Intelligence

While this method has been successful, it also creates problems. It is difficult to be sure that important decisions are made fairly when we cannot inspect a machine's inner workings. It is hard to stop machines from amassing the personal data that fuels their operation. And because these machines are designed to learn what makes people click, it is difficult to stop harmful content from reaching users, as the sketch below suggests.
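To see why this last problem is structural rather than accidental, here is a deliberately simplified ranking sketch. The item names and numbers are invented; the point is that ordering content purely by observed click-through rate surfaces whatever attracts clicks, harmful or not.

```python
# Hypothetical engagement log, item -> (clicks, impressions). Invented numbers.
log = {
    "cooking tips":      (120, 4000),
    "local news":        (200, 5000),
    "outrage headline":  (900, 6000),  # harmful but highly clickable
    "science explainer": (80,  3000),
}

def rank_by_engagement(log):
    """Order items by empirical click-through rate -- the only signal used."""
    ctr = {item: clicks / views for item, (clicks, views) in log.items()}
    return sorted(ctr, key=ctr.get, reverse=True)

# The objective never asks whether content is true or safe, only whether it is clicked.
print(rank_by_engagement(log))
# ['outrage headline', 'local news', 'cooking tips', 'science explainer']
```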

The Italian privacy authority's initial decision to block ChatGPT caused alarm, raising the issues of personal data being gathered from the web without a legal basis and of the chatbot producing factually incorrect information. That the dispute was settled by adding legal disclaimers and changing the terms and conditions may be a preview of future regulatory struggles. We need good laws to govern AI. An important conversation has begun about what we should want from AI, and it will require the involvement of many kinds of scholars. Hopefully it will be grounded in the technical reality of what we have built, and why, rather than in sci-fi fantasies or doomsday scenarios.
