IBM Research is proposing that artificial intelligence should come with a transparent document that outlines lineage, specifications and directions.
Under its Trusted AI effort, IBM Research published a paper that calls for a supplier's declaration of conformity (SDoC) for AI services. This declaration would include information on performance, safety, and security.
Such documents already exist in other industries and, although often voluntary, they frequently become de facto standards. Think of Energy Star labels, U.S. Consumer Product Safety Commission certifications, or bond ratings in the financial industry. An SDoC would outline the safety and product testing performed on an AI service, along with information about its underlying models.
A team of IBM researchers wrote in a paper:
An SDoC for AI services will contain sections on performance, safety, and security. Performance will include appropriate accuracy or risk measures. Safety, discussed as the minimization of both risk and epistemic uncertainty, will include explainability, algorithmic fairness, and robustness to concept drift. Security will include robustness to adversarial attacks. Moreover, it will list how the service was created, trained, and deployed along with what scenarios it was tested on, how it will respond to non-tested scenarios, and guidelines that specify what tasks it should and should not be used for.
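To make the proposed sections concrete, here is a minimal sketch of what an SDoC might look like as a structured document. The field names and example values are illustrative assumptions, not a published schema from the IBM paper:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SDoC:
    """Hypothetical supplier's declaration of conformity for an AI
    service. Fields mirror the sections the paper describes:
    performance, safety, security, lineage, and usage guidelines."""
    service_name: str
    performance: dict    # appropriate accuracy or risk measures
    safety: dict         # explainability, fairness, concept-drift robustness
    security: dict       # robustness to adversarial attacks
    lineage: dict        # how the service was created, trained, deployed
    intended_uses: list  # tasks the service should be used for
    prohibited_uses: list  # tasks it should not be used for

    def to_json(self) -> str:
        # Serialize so the declaration can be published and compared
        return json.dumps(asdict(self), indent=2)

# Example declaration with placeholder values
doc = SDoC(
    service_name="example-classifier",
    performance={"metric": "top-1 accuracy", "value": 0.93},
    safety={"fairness_audit": "demographic parity checked",
            "concept_drift": "monitored monthly"},
    security={"adversarial_testing": "gradient-based perturbations evaluated"},
    lineage={"training_data": "internal corpus v2", "deployed": "2018-08"},
    intended_uses=["document triage"],
    prohibited_uses=["medical diagnosis"],
)
print(doc.to_json())
```

A machine-readable format like this would also support the marketplace comparisons the researchers envision, since declarations from different suppliers could be diffed field by field.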
In theory, these documents would also enable a more liquid AI service marketplace and bridge information gaps between consumers and suppliers. IBM Research said that the SDoCs should be voluntary.
Another outcome from SDoCs would be more trust in AI. A consumer trusts that the brakes will work on a car and that autopilot will operate well in an airplane. That trust is built on standardization, transparency and testing. AI services lack that trust today, and IBM Research noted that "consumers do not yet trust AI like they trust other technologies."
IBM Research added:
Making technical progress on safety and security is necessary but not sufficient to achieve trust in AI, however; the progress must be accompanied by the ability to measure and communicate the performance levels of the service on these dimensions in a standardized and transparent manner. One way to accomplish this is to provide such information via SDoCs for AI services.
An SDoC for AI services would address questions like the following:
There is still a lot of discussion to be had on the SDoC concept, but such a movement would add more transparency to the AI market. After all, business leaders will increasingly have to manage models that they must trust yet don't fully understand.