
Microsoft makes a push for AI responsibility and safety through Azure

Azure AI Content Safety is now available in preview, and new media provenance features are also coming to Designer and Bing.
Written by Maria Diaz, Staff Writer
Image: Microsoft Build 2023 graphic (Microsoft/Maria Diaz/ZDNET)

As experts, ethicists, and even the government push for increased safety in the development and use of artificial intelligence tools, Microsoft took to the Build stage to announce new measures for AI content safety.

As part of a series of updates aimed at responsible AI, Microsoft launched Azure AI Content Safety, currently in preview, along with other measures such as media provenance capabilities for its AI tools, Designer and Bing.

Also: Today's AI boom will amplify social problems if we don't act now, says AI ethicist

Through upcoming media provenance updates, Microsoft Designer and Bing Image Creator will both give users the ability to determine whether an image or video was AI-generated. This will be achieved through cryptographic methods that flag AI-generated media with metadata about its creation.

During the Microsoft Build developer conference, the company also announced easier, more streamlined ways to develop copilots and plugins on its platform. Sarah Bird, a partner group product manager at Microsoft who leads responsible AI for foundational technologies, spoke about the safety system behind these tools.

Also: Bing Chat gets a new wave of updates, including (finally) chat history

Bird explained that developers carry the responsibility of ensuring these tools render accurate, intended results, not biased, sexist, racist, hateful, or violent output.

"It's the safety system powering GitHub Copilot, it's part of the safety system that's powering the new Bing. We're now launching it as a product that third-party customers can use," said Bird.

Also: Google's Bard AI says urgent action should be taken to limit (*checks notes*) Google's power

The responsible development of artificial intelligence is a big topic for tech companies. Dr. Vishal Sikka, CEO of Vianai Systems, a high-performance machine learning company, says there is an urgent need for AI systems that are safe, reliable, and amplify our humanity.

"Ensuring humans are centered in the development, implementation, and use of AI tools paired with a robust framework for monitoring, diagnosing, improving and validating AI models will help to mitigate the risks and dangers inherent in these types of systems," Sikka added.

Also: 6 harmful ways ChatGPT can be used

Microsoft's new Azure AI Content Safety service will be integrated into the Azure OpenAI Service and will help programmers develop safer online platforms and communities by employing models specifically created to recognize inappropriate content within images and text.

The models flag inappropriate content and assign it a severity score, guiding human moderators to the content that requires urgent intervention.
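That request-and-score flow can be sketched in a few lines of Python. The example below is an illustration only, using the azure-ai-contentsafety SDK; the client and field names reflect the SDK as it later shipped, and the preview API surface announced at Build may differ:

# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholders: substitute the endpoint and key from your own
# Content Safety resource in the Azure portal.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Screen a piece of user-generated text against the service's harm categories.
result = client.analyze_text(AnalyzeTextOptions(text="text to screen"))

# Each category (hate, sexual, violence, self-harm) comes back with a
# severity score; higher scores indicate more severe content.
for analysis in result.categories_analysis:
    print(f"{analysis.category}: severity {analysis.severity}")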

Also: ChatGPT and the new AI are wreaking havoc on cybersecurity

Furthermore, Bird explained that Azure AI Content Safety's filters can be adjusted for context, since the system can also be used in non-AI systems, such as gaming platforms, where the same content may need to be judged differently depending on the situation.
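To make "adjusted for context" concrete, a consuming application could apply its own per-category severity thresholds before escalating content to human moderators. The policy table below is purely hypothetical and not part of the Azure API:

# Hypothetical per-category severity thresholds for a gaming chat platform:
# in-game combat banter is tolerated up to a higher "Violence" severity,
# while hate speech is escalated at any severity. Illustrative values only.
THRESHOLDS = {"Hate": 0, "Violence": 4, "Sexual": 2, "SelfHarm": 0}

def needs_moderator(category: str, severity: int) -> bool:
    """Escalate to a human moderator when severity exceeds the platform's threshold."""
    return severity > THRESHOLDS.get(category, 0)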

The responsible development of AI has even reached the Biden administration. The CEOs of Microsoft, Google, OpenAI, and other AI companies held a two-hour meeting with Vice President Kamala Harris at the beginning of this month to discuss AI regulation, and the White House has published a blueprint for an AI Bill of Rights.
