Facebook announced that it is testing an artificial intelligence system in the US whose main purpose is suicide prevention.
In a blog post, Facebook representatives – Vanessa Callison-Burch, Product Manager; Jennifer Guadagno, Researcher; and Antigone Davis, Head of Global Safety – wrote that Facebook is testing in the US an artificial intelligence approach that will streamline the “reporting process using pattern recognition in posts previously reported for suicide.”
This AI, according to the Facebook representatives, “will make the option to post a report about suicide or self injury.” They added that this pattern recognition AI will identify posts that are very likely to include thoughts of suicide.
Once the AI identifies a post that is likely to include thoughts of suicide, Facebook’s Community Operations team reviews it and, if appropriate, offers resources to the person who posted the content. This means the AI can act even when no one on Facebook has reported the post.
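Facebook has not published how its pattern recognition works, but the workflow described above – automatically flag likely at-risk posts, then route them to a human review team – can be sketched in a few lines. The phrase list, scoring threshold, and function names below are purely hypothetical illustrations, not Facebook's actual signals or API.

```python
import re

# Hypothetical risk phrases -- Facebook's real signals are not public.
RISK_PATTERNS = [
    r"\bwant to die\b",
    r"\bend it all\b",
    r"\bkill myself\b",
    r"\bno reason to live\b",
]

def flag_post(text: str, threshold: int = 1) -> bool:
    """Return True if the post matches enough risk patterns to be flagged."""
    hits = sum(1 for pattern in RISK_PATTERNS if re.search(pattern, text.lower()))
    return hits >= threshold

def review_queue(posts: list[str]) -> list[str]:
    """Collect flagged posts for a human review team, mirroring the
    automatic-flag-then-human-review workflow described above."""
    return [post for post in posts if flag_post(post)]
```

In practice a production system would use a trained classifier rather than a fixed phrase list, but the key design point survives even in this sketch: the automated step only queues posts for humans; it never decides the outcome on its own.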
“Suicide prevention is one way we’re working to build a safer community on Facebook. With the help of our partners and people’s friends and family members on Facebook, we’re hopeful we can support more people over time,” Callison-Burch, Guadagno and Davis said.
According to the World Health Organization (WHO), 800,000 people die every year due to suicide. In 2012, WHO reported that suicide was the second leading cause of death among 15- to 29-year-olds worldwide.