Like Microsoft, Google wants your help to fix AI and make it more secure
It is only a couple of weeks since the debut of the Microsoft AI Bounty Program, and now Google has launched its own bug bounty program specific to generative AI.
Google has announced an expansion of its existing Vulnerability Rewards Program to cover attack scenarios that relate to generative AI. The company says it wants to incentivize research into AI safety and security, highlight potential issues, and make artificial intelligence safer for everyone.
As part of the expansion, Google says that it is re-evaluating how bugs should be reported and categorized. Explaining just what this means, the company's vice president of Trust and Safety, Laurie Richardson, and vice president of Privacy, Safety and Security Engineering, Royal Hansen, say: "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)".
They go on to add:
As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks. But we understand that outside security researchers can help us find, and address, novel vulnerabilities that will in turn make our generative AI products even safer and more secure.
Google has published new guidelines for security researchers so they know precisely what is covered by the program.
The company has also announced plans that it hopes will protect against machine learning supply chain attacks:
We're expanding our open source security work and building upon our prior collaboration with the Open Source Security Foundation. The Google Open Source Security Team (GOSST) is leveraging SLSA and Sigstore to protect the overall integrity of AI supply chains. SLSA involves a set of standards and controls to improve resiliency in supply chains, while Sigstore helps verify that software in the supply chain is what it claims to be. To get started, today we announced the availability of the first prototypes for model signing with Sigstore and attestation verification with SLSA.
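To make the Sigstore side of this concrete, here is a minimal sketch of how a model artifact might be signed and later verified with the sigstore command-line client from the sigstore-python project. This is an illustration of the general workflow rather than Google's actual model-signing prototype: the file name model.bin, the signer identity and the OIDC issuer used below are placeholders, and exact flags can vary between sigstore releases.

```python
# Sketch: signing and verifying an ML model artifact with the sigstore CLI.
# Assumes the sigstore-python client is installed (`pip install sigstore`) and
# an OIDC identity is available for keyless signing. model.bin, the identity
# and the issuer are placeholders, not Google's actual configuration.
import subprocess

MODEL = "model.bin"                     # model artifact to protect
IDENTITY = "release-bot@example.com"    # expected signing identity (placeholder)
ISSUER = "https://accounts.google.com"  # OIDC issuer that vouches for that identity

# Sign the artifact; sigstore writes a bundle alongside it containing the
# signature and the short-lived signing certificate.
subprocess.run(["sigstore", "sign", MODEL], check=True)

# Later, a consumer checks that the artifact really was signed by the expected
# identity before loading it -- the "is what it claims to be" check.
subprocess.run(
    [
        "sigstore", "verify", "identity", MODEL,
        "--cert-identity", IDENTITY,
        "--cert-oidc-issuer", ISSUER,
    ],
    check=True,
)
```

SLSA complements this by attaching provenance to the artifact (who built it, from what source, with what process), which a consumer can check with an attestation verifier before trusting the model.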
More details are available in Google's announcement.