In 2020 and beyond, security and risk professionals will discover that cybersecurity decisions have broader societal implications than ever before. We increasingly depend on technology to work, learn, and socialize. And that dependence also makes technology a target.
The growing reliance on data when making key decisions will give malicious actors greater incentive to restrict access to large pools of data through the use of ransomware. The weaponization of data gathered on populations will give authoritarian governments and shell organizations greater capacity to manipulate geopolitics and grow their influence outside their borders. And improvements made to AI and machine learning (ML) in the past several years will result in use cases that improve cybersecurity but will also help attackers.
The intermingling of these trends creates the foundation for three of our 2020 cybersecurity predictions:
- Companies will collect and weaponize data through M&A activity. While the unraveling of the Cambridge Analytica scandal gave rise to mainstream anxiety over data collection, the ever-growing value of data remains too enticing a resource for companies -- and governments -- to ignore. Laws aimed at limiting how organizations can share large swaths of data will proliferate globally, but these measures will do little to stop the growing M&A market behind data consolidation. The collection of preference data, user location, or medical information may seem innocuous at first, but should the companies behind today's leading apps be acquired by a government-owned entity, that data could be wielded by an adversary. When Beijing-based engineers legally gained access to sensitive health information through the acquisition of Grindr, they illustrated how current legislation fails to mitigate the risks of data falling into the wrong hands -- and why companies must form their own consumer data governance strategies.
- Costs associated with deepfake scams will exceed $250 million in 2020. In what may be the first attack of its kind, social engineers used natural language generation technology earlier this year to defraud a German energy company of $243,000. Now that a precedent exists showing economic gains from AI-backed deepfake technology, expect more to follow: deepfake-based attacks that fabricate convincing audio and video at a fraction of today's cost. To mitigate risk, IT departments need to invest further in training and awareness programs. Without savvy employees who understand how deepfake-based attacks resemble -- and differ from -- legacy phishing schemes, the costs associated with the former will continue to rise.
- Data privacy concerns will lead one in five enterprise customers to safeguard their data from AI. Despite the growing value of AI and ML solutions, companies that rely on enterprise customer data to improve their B2B product offerings will struggle to find customers willing to opt in to data-sharing agreements. As legislation like GDPR and CCPA -- along with consumer backlash -- makes privacy slip-ups and accidental disclosures catastrophic to the short-term bottom line and the long-term image of the brand, enterprise customers will refuse to hand their data over to third parties. This shortage of data will likely make AI and ML solutions less effective, which could, in turn, create a negative feedback cycle: Companies that don't feel the gains from AI offset the increased risk of privacy-related expenses will further restrict the use of their data in the coming years.
To learn more predictions from Forrester's cybersecurity team, download Forrester's Predictions 2020 guide to understanding the major dynamics that will impact firms next year.
This post was written by Jeff Pollard, VP and Principal Analyst, and originally appeared here.