Facebook is Using AI for Suicide Prevention
The social media company has changed its system for spotting users with potentially suicidal thoughts.
Facebook has upgraded its arguably most important artificial intelligence program, the tech company announced Monday.
This AI doesn't help populate your News Feed or find you new friends. Instead, it attempts to identify Facebook users at risk of suicide. The AI relies on pattern recognition to spot when someone is expressing thoughts of suicide or self-harm in posts or live videos.
The program was first introduced in March 2017, but it required the user or a friend to file a report before help could be offered.
The artificial intelligence now works proactively and doesn't require a human to make a report first. If the AI identifies a post that is "likely to include thoughts of suicide," it flags the post for one of Facebook's specially trained reviewers, who can then contact emergency services.
"The whole point of this is that our proactive detection can kick in even before something has been reported," said Facebook Vice President of Product Management Guy Rosen.
Early testing of the AI resulted in more than 100 "wellness checks"—cases in which a first responder was sent to help—in just one month.
Facebook is planning to expand the system globally.