In cybersecurity, threats change quickly. Add rapidly evolving generative AI to the mix, and security concerns shift by the minute. As one of the biggest players in artificial intelligence, Google recognizes the need to adapt to these threats.
Google is expanding its existing Vulnerability Rewards Program (VRP) to include vulnerabilities specific to generative AI, accounting for the unique challenges the technology poses, such as unfair bias, model manipulation, data misinterpretation, and other adversarial attacks.
The VRP is a bug bounty program that rewards external security researchers for finding and reporting vulnerabilities in Google's products and services. It will now cover generative AI products as well, including Bard, Lens, and the AI features integrated into Search, Gmail, Docs, and more.
As generative AI becomes more deeply integrated into Google's tools and services, the potential risks increase, and Google already has internal Trust and Safety teams working to anticipate them. By expanding the bug bounty program to cover generative AI, Google aims to encourage research in AI safety and make responsible AI the norm.
External security researchers hunt for these vulnerabilities in exchange for monetary rewards, which gives Google, the company behind the bug bounty program, the chance to fix flaws before bad actors can exploit them, resulting in more secure products for users.
Aside from bringing generative AI into its VRP, Google introduced the Secure AI Framework to support the creation of responsible and safe AI applications. It also announced a collaboration with the Open Source Security Foundation to protect the integrity of AI supply chains.
Researchers who want to participate in Google's bug bounty program can submit a bug or security vulnerability report directly to the company. In 2022, Google issued over $12 million in rewards to security researchers through the program.