Facebook launches AI-powered suicide detection tool
Facebook has long offered a range of support options to help those considering suicide. Today the social network launches an automated AI-powered tool that aims to spot suicidal posts before they are reported.
Relying on reports from people who spot worrying posts is just one of the ways Facebook has tried to provide a safety net for those who are struggling. By adding artificial intelligence to the mix, the company hopes to speed up detection and offer help more quickly when it matters most.
Facebook's vice president of product management, Guy Rosen, explains: "Over the last month, we've worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community. We also use pattern recognition to help accelerate the most concerning reports. We've found these accelerated reports -- that we have signaled require immediate attention -- are escalated to local authorities twice as quickly as other reports. We are committed to continuing to invest in pattern recognition technology to better serve our community."
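To picture what that acceleration means in practice, the sketch below shows one way a review queue might prioritize reports by a model-assigned concern score, so that the most worrying posts reach human reviewers first. The class names, scores and queue logic are illustrative assumptions; Facebook has not described how its own triage system works.

```python
# Rough sketch of the triage idea Rosen describes: reports that a pattern
# recognition model flags as most concerning jump the review queue so they
# reach human reviewers (and, if needed, first responders) sooner.
# The Report class, scores and thresholds here are hypothetical.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    # heapq pops the smallest item first, so store the negated model score
    # to get a max-priority queue over "concern".
    priority: float
    post_id: str = field(compare=False)

class ReviewQueue:
    def __init__(self) -> None:
        self._heap: list[Report] = []

    def add(self, post_id: str, concern_score: float) -> None:
        """Queue a report; a higher concern_score means it is reviewed sooner."""
        heapq.heappush(self._heap, Report(-concern_score, post_id))

    def next_report(self) -> str:
        """Return the ID of the most concerning unreviewed post."""
        return heapq.heappop(self._heap).post_id

queue = ReviewQueue()
queue.add("post-123", concern_score=0.91)  # flagged by the model
queue.add("post-456", concern_score=0.12)  # routine user report
print(queue.next_report())  # -> "post-123", escalated first
```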
The new detection tools are rolling out to numerous countries after a successful period of testing in the US. Ultimately, the AI suicide watch will be available around the world -- apart from the EU, where privacy laws are more complex.
Rosen gives a little detail about how the feature works:
- We are starting to roll out artificial intelligence outside the US to help identify when someone might be expressing thoughts of suicide, including on Facebook Live. This will eventually be available worldwide, except the EU.
- This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide. We continue to work on this technology to increase accuracy and avoid false positives before our team reviews.
- We use signals like the text used in the post and comments (for example, comments like "Are you ok?" and "Can I help?" can be strong indicators). In some instances, we have found that the technology has identified videos that may have gone unreported.
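As a rough illustration of the kind of text signal Rosen describes, the sketch below trains a toy classifier over a post's text combined with its comments, so that phrases like "Are you ok?" left by friends can contribute to the score. The keyword examples, training data and scikit-learn pipeline are hypothetical stand-ins; Facebook has not disclosed how its pattern recognition technology actually works.

```python
# Minimal sketch of a text-signal classifier for flagging posts that may need
# review. Illustration only: the training set and model choice are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: post text plus concatenated comments,
# labelled 1 if the post was escalated for review, 0 otherwise.
train_texts = [
    "I can't do this anymore. || are you ok? can I help?",
    "Great day at the beach with friends || looks amazing!",
    "Nobody would miss me || please talk to me, I'm here for you",
    "New job starts Monday || congrats!",
]
train_labels = [1, 0, 1, 0]

# TF-IDF over single words and word pairs lets comment phrases such as
# "are you ok" act as features, echoing the indicators mentioned above.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

def concern_score(post_text: str, comments: list[str]) -> float:
    """Return a probability-like score that the post needs human review."""
    combined = post_text + " || " + " ".join(comments)
    return float(model.predict_proba([combined])[0][1])

print(concern_score("I just want it all to stop", ["are you ok?", "can I help?"]))
```

Combining the post with its comments is a deliberate choice in this sketch: Rosen's examples suggest that how friends respond can be as telling as the post itself.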
Concerned Facebook users will still be able to report posts they find worrying, but the hope is that the introduction of AI will save precious time in getting help to the people who need it.
Image credit: AlesiaKan / Shutterstock