
Australian and Korean researchers warn of loopholes in AI security systems

New research shows how certain triggers could help people digitally disappear from AI-powered security cameras.
Written by Aimee Chanthadavong, Contributor
[Image: A surveillance camera with unidentified elderly people walking in the background. Getty Images/iStockphoto]

Research from the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, the Australian Cyber Security Cooperative Research Centre (CSCRC), and South Korea's Sungkyunkwan University has highlighted how certain triggers could act as loopholes in smart security cameras.

The researchers tested how a simple object, such as a piece of clothing in a particular colour, could be used to easily exploit and bypass YOLO, a popular object detection model used in smart cameras.

For the first round of testing, the researchers used a red beanie to illustrate how it could act as a "trigger" allowing a subject to digitally disappear. They demonstrated that a YOLO camera detected the subject initially, but once the subject put on the red beanie, they went undetected.

A similar demo involving two people wearing the same t-shirt in different colours produced the same outcome.
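In practice, the demonstrations above amount to comparing a detector's output on the same scene with and without the suspected trigger. Below is a minimal sketch of that kind of check, not the researchers' code: it assumes a publicly available pretrained YOLOv5 model loaded through torch.hub, and the image file names are hypothetical placeholders.

```python
# Minimal illustrative sketch (not the researchers' code): compare a
# detector's output on the same subject with and without a suspected
# trigger garment. File names are hypothetical placeholders.
import torch

# Pretrained YOLOv5 model from the public ultralytics hub (assumption:
# any YOLO-family detector under test could be substituted here).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def person_detected(image_path: str, threshold: float = 0.5) -> bool:
    """Return True if the model reports at least one 'person' above threshold."""
    results = model(image_path)
    detections = results.pandas().xyxy[0]          # one row per detected box
    people = detections[(detections["name"] == "person") &
                        (detections["confidence"] >= threshold)]
    return len(people) > 0

# Same scene, with and without the suspected trigger (e.g. a red beanie).
print("without trigger:", person_detected("subject_plain.jpg"))
print("with trigger:   ", person_detected("subject_red_beanie.jpg"))
# A trustworthy model should flag the person in both frames; a model with a
# planted loophole may silently drop the detection when the trigger is worn.
```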


Data61 cybersecurity research scientist Sharif Abuadbba explained that the research aimed to understand the potential shortcomings of artificial intelligence algorithms.

"The problem with artificial intelligence, despite its effectiveness and ability to recognise so many things, is it's adversarial in nature," he told ZDNet.

"If you're writing a simple computer program and you pass it along to someone else next to you, they can run many functional testing and integration testing against that code, and see exactly how that code behaves.

"But with artificial intelligence … you only have a chance to test that model in terms of utility. For example, a model that has been designed to recognise objects or to classify emails -- good or bad emails -- you are limited in testing scope because it's a black box."

He said that if an AI model has not been trained on all the scenarios it may encounter, it poses a security risk.
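As a rough illustration of that black-box limitation, about all an end user can measure is utility on whatever labelled samples they have, as in the hedged sketch below; the prediction function and dataset are hypothetical, and a high score says nothing about behaviour on inputs containing a rare trigger.

```python
# Illustrative sketch of black-box testing: with only query access to a
# model, utility on a held-out set is about all you can measure.
# predict_fn and labelled_samples are hypothetical placeholders.
from typing import Any, Callable, Iterable, Tuple

def utility_score(predict_fn: Callable[[Any], int],
                  labelled_samples: Iterable[Tuple[Any, int]]) -> float:
    """Accuracy over labelled samples -- it cannot reveal hidden triggers."""
    correct = 0
    total = 0
    for x, y in labelled_samples:
        correct += int(predict_fn(x) == y)
        total += 1
    return correct / max(total, 1)

# Even a near-perfect score only covers the scenarios present in the test
# data; a trigger that never appears there leaves no trace in this number.
```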

"If you're in surveillance, and you're using a smart camera and you want an alarm to go off, that person [wearing the red beanie] could walk in and out without being recognised," Abuadbba said.

He added that acknowledging such loopholes may exist should serve as a warning for users to consider the data that has been used to train smart cameras.

"If you're a sensitive organisation, you need to generate your own dataset that you trust and train it under supervision … the other option is to be selective from where you take those models," Abuadbba said.


Similar algorithmic flaws were recently highlighted by Twitter users after they discovered the social media platform's image preview cropping tool was automatically favouring white faces over Black faces. One user, Colin Madland, who is white, discovered this after he took to Twitter to highlight racial bias in the video conferencing software Zoom.

Madland had posted an image of himself and a Black colleague whose head was erased when using a virtual background on a Zoom call, because the algorithm failed to recognise his face. Twitter then automatically cropped the image to show only Madland.

In response, Twitter pledged to continually test its algorithms for bias.

"While our analyses to date haven't shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm," Twitter CTO Parag Agrawal and CDO Dantley Davis wrote in a blog post.

"We should've done a better job of anticipating this possibility when we were first designing and building this product.

"We are currently conducting additional analysis to add further rigor to our testing, are committed to sharing our findings, and are exploring ways to open-source our analysis so that others can help keep us accountable."

Related Coverage

Artificial intelligence will be used to power cyberattacks, warn security experts

Intelligence agencies need to use artificial intelligence to help deal with threats from criminals and hostile states who will try to use AI to strengthen their own attacks.

Controversial facial recognition tech firm Clearview AI inks deal with ICE

US Immigration and Customs Enforcement (ICE) has spent $224,000 on Clearview licenses.

Microsoft: Our AI can spot security flaws from just the titles of developers' bug reports

Microsoft's machine-learning model can speed up the triage process when handling bug reports.

'Booyaaa': Australian Federal Police use of Clearview AI detailed

One staff member used the application on her personal phone, while another touted the success of the Clearview AI tool for matching a mug shot.
