As governments around the world grapple with whether to regulate artificial intelligence (AI), Singapore is taking a more cautious approach. The country is not currently looking at regulating AI, according to Lee Wan Sie, director for trusted AI and data at Singapore’s Infocomm Media Development Authority (IMDA). Instead, the Singapore government is promoting the responsible use of AI by calling on companies to collaborate on the world’s first AI testing toolkit, AI Verify, which lets users run technical tests on their AI models and record process checks. IBM and Singapore Airlines have already begun pilot testing as part of the program.
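To make the pairing of "technical tests" and "process checks" concrete, here is a minimal sketch of the kind of evaluation such a governance toolkit covers: a performance test, a simple fairness test, and a structured process-check record. It uses scikit-learn and a synthetic dataset purely for illustration; the report fields and group attribute are assumptions for this sketch, not AI Verify's actual API or output format.

```python
# Illustrative sketch only: NOT AI Verify's API. It shows the kind of
# technical test (performance and fairness) and process-check record
# an AI governance toolkit might produce.
import json
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset with a synthetic binary "group" attribute for the fairness check.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))
X_train, X_test, y_train, y_test, _, group_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Technical test 1: overall performance.
accuracy = accuracy_score(y_test, pred)

# Technical test 2: demographic parity gap, i.e. the difference in
# positive-prediction rates between the two groups.
rate_a = pred[group_test == 0].mean()
rate_b = pred[group_test == 1].mean()
parity_gap = abs(rate_a - rate_b)

# Process check: record what was tested and which governance steps were done.
report = {
    "model": "LogisticRegression (demo)",
    "technical_tests": {
        "accuracy": round(float(accuracy), 3),
        "demographic_parity_gap": round(float(parity_gap), 3),
    },
    "process_checks": {
        "documentation_reviewed": True,
        "human_oversight_defined": True,
    },
}
print(json.dumps(report, indent=2))
```

The point of the sketch is the shape of the output: quantitative test results sit alongside yes/no process checks, which is the combination the toolkit is described as recording.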

Learning from Industry

Singapore’s approach is to learn how AI is being used before deciding whether more needs to be done on the regulatory front. According to Lee, “We recognize that as a small country, as the government, we may not have all the answers to this. So it’s very important that we work closely with the industry, research organizations and other governments.” Haniyeh Mahmoudian, an AI ethicist at DataRobot, believes this type of collaboration benefits both businesses and policymakers. “Sometimes when it comes to regulations, you see the gap between what the policymakers are thinking about AI versus what’s actually happening in the business,” she said. “So having this type of collaboration specifically creating these types of toolkits has the input from the industry. It really benefits both sides.”

Microsoft Applauds Singapore’s Leadership

Several tech giants, including Google, Microsoft, and IBM, have already joined the AI Verify Foundation, a global open-source community set up to discuss AI standards and best practices and to collaborate on AI governance. “We at Microsoft applaud the Singapore government’s leadership in this area,” said Brad Smith, president and vice chair at Microsoft. “By creating practical resources like the AI governance testing framework and toolkit, Singapore is helping organizations build robust governance and testing processes.”

Singapore as a Steward of Responsible and Trustworthy Use of AI

While some jurisdictions, such as the European Union and China, are moving quickly to crack down on AI, Singapore could act as a “steward” in the region by enabling innovation within a safe environment, said Stella Cramer, APAC head of international law firm Clifford Chance’s tech group. Clifford Chance works with regulators on guidelines and frameworks across a range of markets. Singapore has launched several pilot projects that allow industry players to test their products in a live environment before going to market. “These structured frameworks and testing toolkits will help guide AI governance policies to promote safe and trustworthy AI for businesses,” said Cramer.

Singapore’s approach to AI regulation is to collaborate with industry players, learn how AI is being used, and promote responsible and trustworthy use of the technology through practical resources and testing frameworks. The country recognizes the potential risks of AI but is taking a wait-and-see approach before deciding whether more needs to be done on the regulatory front. By doing so, Singapore could become a steward of responsible and trustworthy AI in the region.
