The Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61 has announced that its researchers have developed a set of techniques that give algorithms a form of immunity against adversarial attacks.
Describing the approach as effectively a "vaccination", Data61 said the techniques' ability to protect artificial intelligence (AI) and machine learning algorithms from adversarial attacks is a significant advancement in machine learning research.
According to the scientific organisation's digital arm, algorithms "learn" from the data they are trained on to create a machine learning model that can perform a given task effectively without needing specific instructions. Examples include making predictions and accurately classifying images and emails; such techniques are currently used to identify spam emails, diagnose diseases from X-rays, and predict crop yields.
Data61 said such techniques will soon be driving cars as well.
Data61 machine learning group leader Dr Richard Nock said that by adding a layer of noise (known as an adversary) over an image, attackers can deceive machine learning models into misclassifying the image.
"Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as speed sign, which could have disastrous effects in the real world," he said.
"We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more 'difficult' training data set. When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks."
As the vaccination techniques are built from the worst possible adversarial examples, Data61 said they should theoretically be able to withstand very strong attacks.
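Training against the worst-case distortion is often written in the research literature as a min-max (robust optimisation) objective; the formulation below is the standard one and is shown only as context, not as the exact objective used in Data61's techniques.

```latex
% Standard robust-optimisation ("min-max") objective from the adversarial
% machine learning literature; shown as context only, not as Data61's exact
% formulation. The inner max finds the worst-case distortion delta within a
% budget epsilon; the outer min trains the model parameters theta against it.
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|_{\infty} \le \epsilon}
        \mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \Big]
```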
"Artificial intelligence and machine learning can help solve some of the world's greatest social, economic and environmental challenges, but that can't happen without focused research into these technologies," Data61 CEO Adrian Turner said, calling the research a significant contribution to the growing field of adversarial machine learning.
"The new techniques against adversarial attacks developed at Data61 will spark a new line of machine learning research and ensure the positive use of transformative AI technologies."
The announcement follows Data61's investment in November of AU$19 million in an Artificial Intelligence and Machine Learning Future Science Platform, aimed at delivering targeted AI-driven solutions in areas including food security and quality, health and wellbeing, sustainable energy and resources, resilient and valuable environments, and Australian and regional security.