Microsoft has recently launched Azure OpenAI Service for government and announced new commitments to customers looking to integrate generative AI into their organizations in a safe, responsible, and secure manner. The new service represents Microsoft’s continued push to mainstream AI and to ensure that its AI solutions and approaches are trustworthy. Government agencies and civil services at the local, state, and federal levels are often overwhelmed with data, including data on constituents, contractors, and initiatives. Generative AI can help government workers sift through vast amounts of data more rapidly using natural-language queries and commands instead of older, clunkier methods of data retrieval and information lookup.
Government agencies, however, usually have strict requirements for the technology they can apply to their data and tasks. Microsoft Azure Government already works with the U.S. Defense Department, Energy Department, and NASA. Azure OpenAI Service for government allows agencies to securely access the large language models available in the commercial environment from within Azure Government, so users can meet the stringent security requirements of government cloud operations.
Microsoft unveiled Azure OpenAI Service REST APIs, which allow government customers to build new applications or connect existing ones to OpenAI’s GPT-4, GPT-3, and embeddings models. Crucially, this traffic travels over Microsoft’s Azure backbone, encrypted with Transport Layer Security (TLS), rather than over the public internet. The data stays entirely within the Microsoft global network backbone and is never used to train the OpenAI models.
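As a rough illustration, the sketch below shows what a chat-completions request against such an endpoint might look like. This is a minimal example, not Microsoft’s reference code: the resource name, deployment name, and the *.azure.us government domain are placeholder assumptions, and the API version shown is simply one of the publicly documented ones.

```python
import os
import requests

# Minimal sketch of a chat-completions call against an Azure OpenAI REST endpoint.
# The resource name, deployment name, and the *.azure.us government domain below
# are hypothetical placeholders -- substitute values from your own Azure resource.
ENDPOINT = "https://my-agency-resource.openai.azure.us"  # assumed Azure Government resource
DEPLOYMENT = "gpt-4"                                     # name of your model deployment
API_VERSION = "2023-05-15"                               # a publicly documented API version

url = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions"
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # key issued for your Azure OpenAI resource
    "Content-Type": "application/json",
}
payload = {
    "messages": [
        {"role": "system", "content": "You summarize agency records."},
        {"role": "user", "content": "Summarize open contracts from FY2023."},
    ],
    "temperature": 0.2,
}

# Requests are sent over HTTPS (TLS); within Azure the traffic stays on
# Microsoft's backbone rather than transiting the public internet.
resp = requests.post(url, headers=headers,
                     params={"api-version": API_VERSION},
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the interface is a plain REST API, existing applications can be pointed at a government deployment largely by swapping the endpoint, deployment name, and credentials.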
Microsoft’s Three AI Commitments
On Thursday, Microsoft announced three commitments for all of its customers regarding the development of generative AI products and services. Firstly, Microsoft will share its learnings about developing and deploying AI responsibly, including publishing key documents such as the Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on responsible AI implementation. Microsoft also plans to share the curriculum used to train its own employees on responsible AI practices.
Secondly, Microsoft will create an AI Assurance Program to help customers ensure that the AI applications they deploy on Microsoft’s platforms comply with legal and regulatory requirements for responsible AI. Elements of the program will include regulator engagement support, implementation of the AI Risk Management Framework published by the U.S. National Institute of Standards and Technology (NIST), customer councils for feedback, and regulatory advocacy.
Lastly, Microsoft will provide support for customers as they implement their own AI systems responsibly. The company plans to establish a dedicated team of AI legal and regulatory experts in different regions around the world to assist businesses in implementing responsible AI governance systems. Microsoft will also collaborate with partners, such as PwC and EY, to leverage their expertise and support customers in deploying their own responsible AI systems.
Microsoft’s Response to AI Misuse
Microsoft’s move comes amid concerns about the potential misuse of AI and growing calls for responsible AI practices. U.S. lawmakers recently sent letters questioning Meta Platforms founder and CEO Mark Zuckerberg over the company’s release of its LLaMA large language model, scrutiny that experts say could have a chilling effect on the development of open-source AI. These commitments mark the beginning of Microsoft’s efforts to promote responsible AI use, and the company acknowledges that ongoing adaptation and improvement will be necessary as the technology and regulatory landscapes evolve.
At its recent annual Build conference for software developers, Microsoft also unveiled Fabric, its new data analytics platform for cloud users. The platform aims to put Microsoft ahead of Google’s and Amazon’s cloud analytics offerings.