Intelligence and espionage services need to embrace artificial intelligence (AI) in order to protect national security as cyber criminals and hostile nation states increasingly look to use the technology to launch attacks.
The UK's intelligence and security agency GCHQ commissioned a study into the use of AI for national security purposes. It warns that while the emergence of AI creates new opportunities for boosting national security and keeping members of the public safe, it also presents potential new challenges, including the risk of the same technology being deployed by attackers.
"Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities," says the report from the Royal United Services Institute for Defence and Security Studies (RUSI).
"In time, other threat actors, including cyber-criminal groups, will also be able to take advantage of these same AI innovations."
The paper also warns that the use of AI by the intelligence services could "give rise to additional privacy and human rights considerations" when it comes to collecting, processing and using personal data to help prevent security incidents ranging from cyberattacks to terrorism.
The research outlines three key areas where intelligence agencies could benefit from deploying AI to collect and use data more efficiently. These are the automation of organisational processes, including data management; the use of AI for cybersecurity, identifying abnormal network behaviour and malware; and responding to suspected incidents in real time.
The paper also suggests that AI can aid intelligence analysis and that, through augmented intelligence, algorithms could support a range of human analysis processes.
However, RUSI also points out that artificial intelligence is never going to be a replacement for agents and other personnel.
"None of the AI use cases identified in the research could replace human judgement. Systems that attempt to 'predict' human behaviour at the individual level are likely to be of limited value for threat assessment purposes," says the paper.
The report does note that deploying AI to boost the capabilities of spy agencies could lead to new privacy concerns, such as how much information is collected about individuals, at what point cases of suspect behaviour become active investigations, and where the line between the two should be drawn.
Ongoing legal cases against bulk surveillance indicate the kind of challenges that the use of AI could face, and existing procedural guidance may need to change to address the questions raised by using AI in intelligence work.
Nonetheless, the report argues that despite some potential challenges, AI has the potential to "enhance many aspects of intelligence work".