Google's new Content Safety API will fight child abuse images with AI
Google has announced a new Content Safety API designed to help combat child sexual abuse material (CSAM) online.
The announcement comes as UK home secretary Sajid Javid has called on technology companies to do more to tackle child abuse imagery on the internet. The API has two main aims: to make it quicker and easier to detect and remove CSAM, and to do so while exposing human content reviewers to less of the abusive material itself.
See also:
- Unless you upgrade to Android Pie, a vulnerability leaves your phone trackable -- and Google won't fix it
- Experts voice concern after discovering Google's Titan Security Key is made in China
- Google is able to track your offline purchases thanks to a secret deal with Mastercard
Speaking today, Javid said: "Getting this sort of horrific content off your platforms is the right thing for your business and the right thing for society, and all eyes are now on you to do the right thing. The reality is that the threat has evolved quicker than industry's response, and there are some companies that refuse to take it seriously".
Google is determined to show that it is taking things seriously, and the API is a demonstration of this. The company says that the Content Safety API "significantly advances our existing technologies to dramatically improve how service providers, NGOs, and other technology companies review this content at scale". Google is making it available to governments.
In a blog post about the launch, Google's Nikola Todorovic and Abhi Chaudhuri explain how this system works:
By using deep neural networks for image processing, we can now assist reviewers sorting through many images by prioritizing the most likely CSAM content for review. While historical approaches to finding this content have relied exclusively on matching against hashes of known CSAM, the classifier keeps up with offenders by also targeting content that has not been previously confirmed as CSAM. Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse.
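Google has not published the API's interface (access is by application only), but the triage idea the post describes -- matching against hashes of known material, then using a classifier score to rank previously unseen images for human review -- can be roughly sketched as follows. Every name in this Python snippet is hypothetical and stands in for whatever the real pipeline uses.

```python
# Hypothetical sketch of the triage approach described in Google's post:
# skip images already matched against hashes of known material, then rank
# everything else by a deep-network classifier score so human reviewers
# see the most likely new material first. None of these names belong to
# the actual Content Safety API.

from typing import Callable, List, Set, Tuple


def triage_for_review(image_ids: List[str],
                      image_hash: Callable[[str], str],
                      known_hashes: Set[str],
                      classifier_score: Callable[[str], float]) -> List[Tuple[float, str]]:
    """Queue previously unseen images for review, highest score first.

    Images whose hash matches previously confirmed material are skipped
    here, since the existing hash-matching pipeline already handles them;
    the rest are sorted by the classifier's confidence (assumed to be a
    value in [0, 1]) so reviewers can prioritise the likeliest matches.
    """
    queue = [
        (classifier_score(image_id), image_id)
        for image_id in image_ids
        if image_hash(image_id) not in known_hashes
    ]
    return sorted(queue, reverse=True)
```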
The National Crime Agency (NCA) has released figures showing that around 80,000 people in the UK pose some form of sexual threat to children online. The agency also revealed a 700 percent increase in referrals for child sexual abuse images since 2012, with 82,109 referrals recorded in 2017, and the Home Office says the violations being reported are becoming more serious.
Google says: "This initiative will allow greatly improved speed in review processes of potential CSAM. We've seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period".
Google continues:

Identifying and fighting the spread of CSAM is an ongoing challenge, and governments, law enforcement, NGOs and industry all have a critically important role in protecting children from this horrific crime. While technology alone is not a panacea for this societal challenge, this work marks a big step forward in helping more organizations do this challenging work at scale. We will continue to invest in technology and organizations to help fight the perpetrators of CSAM and to keep our platforms and our users safe from this type of abhorrent content. We look forward to working alongside even more partners in the industry to help them do the same.
Any organization interested in accessing the Content Safety API is invited to apply using an online form.
Image credit: Octavio Hoyos / Shutterstock