Businesses in Singapore will now be able to tap a governance testing framework and toolkit to demonstrate their "objective and verifiable" use of artificial intelligence (AI). The move is part of the government's efforts to drive transparency in AI deployments through technical and process checks.
Named A.I. Verify, the new toolkit was developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), which administers the country's Personal Data Protection Act.
The government agencies underscored the need for consumers to know AI systems were "fair, explainable, and safe", as more products and services were embedded with AI to deliver more personalised user experiences or make decisions without human intervention. Consumers also needed to be assured that organisations deploying such offerings were accountable and transparent.
Singapore already has published voluntary AI governance frameworks and guidelines, with its Model AI Governance Framework currently in its second iteration.
A.I. Verify will now allow market players to demonstrate to relevant stakeholders their deployment of responsible AI through standardised tests. The toolkit is currently available as a minimum viable product, offering "just enough" features for early adopters to test and provide feedback for further product development.
Specifically, it delivers technical testing against three principles of "fairness, explainability, and robustness", packaging commonly used open-source libraries into a single toolkit for self-assessment. These include SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for robustness testing, and AIF360 and Fairlearn for fairness testing.
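The libraries above ship ready-made fairness metrics, but the core idea behind such a check is simple. As a minimal sketch (not drawn from A.I. Verify itself, and using made-up predictions and group labels), demographic parity difference, the gap in positive-prediction rates between demographic groups, can be computed in a few lines:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest positive-prediction rates
    across groups. A value near 0 suggests the model flags members of
    each group at similar rates; Fairlearn and AIF360 provide
    production-grade implementations of this and many related metrics."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical model outputs for two demographic groups "A" and "B"
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Here group "A" receives a positive prediction 75% of the time against 25% for group "B", a gap a fairness report would flag for review.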
The pilot toolkit also generates reports for developers, management, and business partners, covering key areas that affect AI performance, testing the AI model against what it claims to do.
For example, the AI-powered product would be tested on how the model reached a decision and whether the predicted decision carried unintended bias. The AI system also could be assessed for its security and resilience.
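To illustrate the first kind of check, one common way to probe how a model reaches its decisions is permutation importance, which scikit-learn provides out of the box. The sketch below (synthetic data, not part of A.I. Verify) ranks features by how much shuffling each one degrades accuracy, a simpler cousin of the SHAP values the toolkit bundles:

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a real product's inputs (hypothetical)
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy;
# features whose shuffling hurts most drive the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

A report built on such output lets a reviewer see which inputs dominate a decision, and whether any of them is a proxy for a sensitive attribute.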
The toolkit currently works with common AI model types, such as binary classification and regression algorithms built with frameworks including scikit-learn, TensorFlow, and XGBoost.
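For context, the two supported model categories are the staples of those frameworks. The hedged sketch below trains one scikit-learn model of each kind on synthetic data; these are the sort of artefacts such a toolkit would take as input, though the datasets and metrics here are illustrative only:

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

# Binary classification: one of the supported model types
Xc, yc = make_classification(n_samples=400, n_features=5, random_state=1)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=1)
clf = LogisticRegression().fit(Xc_tr, yc_tr)
print("classification accuracy:", accuracy_score(yc_te, clf.predict(Xc_te)))

# Regression: the other supported category
Xr, yr = make_regression(n_samples=400, n_features=5, noise=5.0,
                         random_state=1)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=1)
reg = LinearRegression().fit(Xr_tr, yr_tr)
print("regression R^2:", r2_score(yr_te, reg.predict(Xr_te)))
```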
IMDA added that the test framework and toolkit would enable AI system developers not only to conduct self-testing to maintain the product's commercial requirements, but also to use a common platform to showcase the test results.
Rather than define ethical standards, A.I. Verify aimed to validate claims made by AI system developers about their AI use as well as the performance of their AI products.
However, the toolkit would not guarantee that the AI system tested was free from bias or security risks, IMDA stressed.
It could, though, facilitate interoperability of AI governance frameworks and could help organisations plug gaps between such frameworks and regulations, the Singapore government agency said.
It added that it was working with regulatory and standards organisations to map A.I. Verify to established AI frameworks, so businesses could offer AI-powered products and services in different global markets. The US Department of Commerce was amongst the agencies Singapore was working with to ensure interoperability between their AI governance frameworks.
According to IMDA, 10 organisations had already tested and offered feedback on the new toolkit, including Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank.
IMDA added that A.I. Verify was aligned with globally accepted principles and guidelines on AI ethics, including those from Europe and the OECD, which encompassed key areas such as repeatability, robustness, fairness, and societal and environmental wellbeing. The framework also leveraged testing and certification regimes that comprised components such as cybersecurity and data governance.
Singapore would look to continue developing A.I. Verify to incorporate international AI governance standards and industry benchmarks, IMDA said. More functionality would also be added gradually with industry contributions and feedback.
In February, the Asian country also released a software toolkit to help financial institutions ensure they were using AI responsibly, as well as five whitepapers to guide these companies on assessing their deployment based on predefined principles. Industry regulator Monetary Authority of Singapore (MAS) said the documents detailed methodologies for incorporating the FEAT principles (Fairness, Ethics, Accountability, and Transparency) into the use of AI within the financial services sector.