Google email scanning technology catches pedophile sharing abuse photos
The scanning of personal emails is almost universally regarded as a terrible thing. Much like the activities of the NSA, when email providers start rifling through private information, people tend to get upset. The justification for governmental mass surveillance has always been that it helps to combat crime -- and of course we never have to wait long before the words "terrorists", "extremists", and "attack" are used. Google has just demonstrated how email scanning can be used to catch criminals. In this case, Google's image recognition software was used to identify images of child abuse sent via email by a Texan man.
A 41-year-old man was arrested after the system detected suspicious material. The convicted sex offender's account triggered an alert when automatic, proactive scans detected illegal pictures, and Google reported the findings to the National Center for Missing and Exploited Children. Child protection services were automatically notified, and the police were then alerted and requested the user's details from Google. Google is understandably tight-lipped about how its technology works, but as the Telegraph points out, we do already know a little about the methods used.
Google maintains a database of child abuse images -- a bleak state of affairs, it's hard to deny -- but a hashing system is used so that there is no need to store actual pictures. The search giant works with, and helps fund, the Internet Watch Foundation, which aims to stem the flow of child pornography, child abuse images, and other criminally obscene content in the UK. The image hashes are shared by the IWF with search engines and websites. What is interesting about the Texan case is that it makes clear that, as well as cataloging the web in general, Google is also actively engaged in trawling private data.
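Google has not published the details of its system, but the basic idea of hash matching is simple enough to sketch. The snippet below is illustrative only: the hash value, function names, and the choice of SHA-256 are assumptions made for the example, not details of Google's or the IWF's actual technology.

```python
# Minimal sketch of hash-based matching against a pre-shared set of
# hex-encoded hashes (the real format and algorithm are not public).
import hashlib

# Hypothetical set of hashes of known illegal images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(path: str) -> str:
    """Return the SHA-256 digest of an image file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_known_image(path: str) -> bool:
    """Check an attachment against the shared hash database."""
    return image_hash(path) in KNOWN_HASHES
```

The appeal of this approach is that the database never needs to contain the pictures themselves -- only fingerprints of them -- and checking an attachment is a single lookup.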
What price are we willing to pay for crime detection and law enforcement? Very few people would argue against the use of image recognition software in the way Google has been using it -- child abuse and pedophilia are, without wanting to sound glib, almost universally recognized as bad things. Here the suggestion that "if you have nothing to hide, you have nothing to fear" should stand up. But that's not to say there aren't still concerns. There is always room for error, and for the system to be effective, everyone's image attachments have to be scanned.
Email a picture of your cat to your mum, and the scanner used to pick out images shared by pedophiles will also be used to check your file. As the technology works by comparing images against a database of known images, false positives should be rare, but they are possible. And if a possible match is found, the only real way to be sure of the true nature of the image is for a real person to check it. So if your entirely innocent picture sets alarm bells ringing for some reason, it's going to have to be looked at by a pair of human eyes to determine whether it needs to be followed up. There is also plenty of room for images to slip through the cracks. There is a limit to how up to date an image database can be, so there will always be shared images that are not known about.
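If the matching relies on a perceptual hash rather than an exact cryptographic one -- an assumption, since Google doesn't say -- it is easier to see where false positives could creep in: visually similar images produce nearby hashes, and anything within the match threshold would need a human reviewer to resolve. A rough sketch using the open-source ImageHash library, with a made-up database and threshold:

```python
# Hedged sketch of perceptual-hash matching: near-matches, not exact ones,
# get flagged, which is why human review is needed before any follow-up.
import imagehash               # pip install ImageHash
from PIL import Image          # pip install Pillow

# Hypothetical database of perceptual hashes of known images.
KNOWN_PHASHES = [imagehash.hex_to_hash("c3969d0e3c6c3e1e")]

MATCH_THRESHOLD = 5  # max Hamming distance treated as a possible match

def possible_match(path: str) -> bool:
    """Flag an attachment for human review if it is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_PHASHES)
```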
Photo credit: Robert Lucian Crusitu / Shutterstock