The Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, together with Samsung Research and South Korea's Sungkyunkwan University, has developed a solution to protect consumers from voice spoofing attacks.
Voice liveness detection (Void) is designed to be embedded in a smartphone or voice assistant software to distinguish a live human voice from a voice replayed through a speaker, detecting when hackers are attempting to spoof a system.
According to Data61, unlike other voice spoofing detection techniques that use deep learning models, Void relies on insights from spectrograms, visual representations of a signal's frequency spectrum over time, to detect the "liveness" of a voice.
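To illustrate the general idea, the sketch below computes a spectrogram and derives two simple spectral-power summary features. This is only a hedged illustration of spectrogram-based feature extraction, not CSIRO's actual method: the feature names, window size, and 1 kHz threshold here are assumptions for demonstration, and Void's real features and classifier are described in the research paper.

```python
# Illustrative sketch only -- NOT Void's implementation. It shows the broad
# idea of deriving liveness features from a spectrogram rather than feeding
# raw audio to a deep learning model.
import numpy as np
from scipy.signal import spectrogram

def spectral_power_features(audio, sample_rate=16000):
    """Summarise how signal power is distributed across frequencies.

    Audio replayed through a loudspeaker tends to have a different
    low/high-frequency power balance than live human speech, which is the
    kind of cue a spectrogram-based detector can exploit.
    """
    freqs, _times, sxx = spectrogram(audio, fs=sample_rate, nperseg=512)
    power_per_band = sxx.sum(axis=1)        # total power in each frequency bin
    power_per_band /= power_per_band.sum()  # normalise to a distribution
    # Hypothetical example features: share of power below 1 kHz, and the
    # spectral centroid (power-weighted mean frequency).
    low_freq_power = float(power_per_band[freqs < 1000].sum())
    centroid = float((freqs * power_per_band).sum())
    return low_freq_power, centroid

# A lightweight classifier (e.g. an SVM) would then be trained on such
# features to label a recording as "live" or "replayed".
rng = np.random.default_rng(0)
stand_in_audio = rng.normal(size=16000)  # 1 second of stand-in audio
print(spectral_power_features(stand_in_audio))
```

Because the features are a handful of numbers per recording rather than a learned deep representation, classification over them can run far faster than inference with a deep network, which is consistent with the speed advantage Data61 reports.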
CSIRO's Data61 cybersecurity research scientist Muhammad Ejaz Ahmed explained how spoofing attacks are becoming increasingly common as voice technologies are being used to shop online, make phone calls, send messages, control smart home appliances, and access banking services.
"Although voice spoofing is known as one of the easiest attacks to perform as it simply involves a recording of the victim's voice, it is incredibly difficult to detect because the recorded voice has similar characteristics to the victim's live voice," he said.
"Void is game-changing technology that allows for more efficient and accurate detection helping to prevent people's voice commands from being misused."
As part of developing Void, the technique was tested using de-identified datasets provided by volunteers from Samsung and by the Automatic Speaker Verification Spoofing and Countermeasures challenges. Data61 touted accuracies of 99% and 94% on the two datasets respectively, and said Void was able to detect attacks eight times faster than deep learning methods.
The results from the testing have been published in a research paper titled "Void: A fast and light voice liveness detection system", which will be presented at a security conference in August.