Google launches new AI risk assessment tool
Last year Google launched its Secure AI Framework (SAIF) to help people safely and responsibly deploy AI models.
Today it's adding to that with a new tool that can help others assess their security posture, apply SAIF best practices, and put those principles into action.
The SAIF Risk Assessment, available from today, is a questionnaire-based tool that generates an instant, tailored checklist to guide practitioners in securing their AI systems.
Once the questions have been answered, the tool immediately provides a report highlighting specific risks to the submitter's AI systems, along with suggested mitigations, based on the responses provided. These risks include Data Poisoning, Prompt Injection, and Model Source Tampering, among others.
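To illustrate the general idea, here is a minimal sketch of how a questionnaire-driven risk report of this kind could be assembled. The question keys, risk names, mitigations, and function names below are hypothetical illustrations only; Google has not published the SAIF Risk Assessment's internal logic, and this is not its actual implementation.

```python
# Hypothetical sketch: mapping questionnaire answers to risks and mitigations.
# All question keys, risks, and mitigations here are illustrative placeholders,
# not Google's actual SAIF Risk Assessment rules.

RISK_RULES = {
    "trains_on_external_data": (
        "Data Poisoning",
        "Validate and provenance-check all training data sources.",
    ),
    "accepts_user_prompts": (
        "Prompt Injection",
        "Sanitize inputs and constrain model instructions.",
    ),
    "uses_third_party_models": (
        "Model Source Tampering",
        "Verify model checksums and signatures before deployment.",
    ),
}

def build_report(answers: dict[str, bool]) -> list[dict[str, str]]:
    """Return the risks and suggested mitigations triggered by the answers."""
    return [
        {"risk": risk, "mitigation": fix, "reason": f"answered yes to '{q}'"}
        for q, (risk, fix) in RISK_RULES.items()
        if answers.get(q)
    ]

if __name__ == "__main__":
    responses = {"trains_on_external_data": True, "accepts_user_prompts": True}
    for entry in build_report(responses):
        print(f"{entry['risk']}: {entry['mitigation']} ({entry['reason']})")
```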
For each risk identified by the assessment, Google explains why it was assigned, provides additional details, and describes the technical risk and the controls that can mitigate it. To learn more, visitors can explore an interactive SAIF Risk Map that shows how different security risks are introduced, exploited, and mitigated throughout the AI development process.
Google has also been making progress with its Coalition for Secure AI (CoSAI). With 35 industry partners, it recently launched three technical workstreams: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape, and AI Risk Governance. CoSAI working groups will create AI security solutions based on these initial focus areas. The SAIF Risk Assessment Report capability ties in with CoSAI's AI Risk Governance workstream, helping to build a more secure AI ecosystem across the industry.
You can find out more on the Google site.
Image credit: Pixinooo/depositphotos.com