IBM has released to the open-source community a security library designed to help protect artificial intelligence (AI) systems.
On Tuesday at the RSA Conference in San Francisco, IBM announced the launch of the Adversarial Robustness Toolbox, which supports developers and users whose AI systems, including deep neural networks (DNNs), may come under attack.
According to the tech giant, threat actors may be able to exploit weaknesses in AI systems through very subtle means. Small, often imperceptible alterations to images, video, and audio recordings can be crafted to confuse AI systems, even without deep knowledge of the AI or DNN a cyberattack is targeting.
These small changes can create serious security problems for users, degrade the performance of AI systems, or even prompt them to make choices we would deem malicious.
For example, if AI were used to control traffic systems, tricking the artificial controllers could cause stop signs to be read as 70 mph speed-limit signs, either in map applications or, one day, even physically.
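A minimal sketch of how such a perturbation works, assuming a toy logistic-regression classifier with made-up weights (real attacks target deep networks, but the principle is the same). The attack shown is the Fast Gradient Sign Method, one of the best-known adversarial techniques:

```python
import numpy as np

# Hypothetical weights and bias for a toy classifier (illustration only).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """The model's confidence that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(x, y, eps):
    """Fast Gradient Sign Method: shift every feature by eps in the
    direction that increases the model's loss for the true label y."""
    p = predict(x)
    grad = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.5, 1.0])    # a correctly classified class-1 input
x_adv = fgsm_perturb(x, y=1, eps=0.3)

print(predict(x))      # high confidence on the clean input
print(predict(x_adv))  # confidence drops after the small perturbation
```

The perturbation budget `eps` caps how far each feature may move, which is why such changes can stay imperceptible to humans while still degrading the model.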
However, the toolbox, released to the open-source community, aims to become a repository and source of information on threats to our current and future AI solutions.
The Adversarial Robustness Toolbox aims to combat so-called "Adversarial AI" by recording threat data and by assisting developers in creating, benchmarking, and deploying practical defense systems for real-world artificial intelligence.
"This emerging area of research looks at the best ways to attack and defend the AI systems we have come to rely upon before the bad guys do," IBM says.
The toolbox also includes a library, interfaces, and metrics which will help developers begin to create cybersecurity solutions for this emerging field.
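To illustrate the kind of metric such a benchmark might report (a self-contained sketch, not the toolbox's actual API; the model, data, and attack strength are all made up), the snippet below trains a toy linear classifier and compares its accuracy on clean inputs against inputs perturbed by the Fast Gradient Sign Method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable two-class data (illustration only).
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent training of a logistic-regression model.
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def accuracy(X_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == (y == 1))

# FGSM attack against the trained model: the "robust accuracy" under
# this attack is the benchmark number a defender would try to raise.
p = sigmoid(X @ w + b)
X_adv = X + 1.0 * np.sign((p - y)[:, None] * w)

print("clean accuracy: ", accuracy(X))      # near-perfect on this toy data
print("robust accuracy:", accuracy(X_adv))  # noticeably lower under attack
```

Comparing clean against under-attack accuracy in this way is the basic shape of the robustness benchmarks the toolbox is meant to standardize.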
By introducing the toolkit to the open-source community, IBM hopes others will be inspired to create their own solutions before Adversarial AI becomes a true threat.
"This is the first and only AI library that contains attacks, defenses, and benchmarks to implement improved security," the company says. "The IBM Researchers actually were inspired to pursue this development when they discovered existing tools didn't provide the defenses needed to protect AI systems."
This week, IBM also announced the introduction of AI and machine learning (ML) orchestration capabilities to its Resilient incident response platform, alongside the launch of IBM X-Force Threat Management Services, which harnesses the same technologies to analyze and detect cybersecurity threats to enterprise networks.