
Thread: Facebook rolls out AI to detect suicidal posts before they're reported

  1. #1
    Administrator Olivia
    Join Date: Jun 2006
    Location: CA
    Posts: 26,644
    Rep Power: 21474880

    Facebook rolls out AI to detect suicidal posts before they're reported

    This is software to save lives. Facebook's new "proactive detection" artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

    Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this tech.

    Facebook will also use AI to prioritize particularly risky or urgent user reports so they're addressed more quickly by moderators, and will roll out tools that instantly surface local-language resources and first-responder contact info. It's also dedicating more moderators to suicide prevention, training them to handle cases 24/7, and it now has 80 local partners like Save.org, the National Suicide Prevention Lifeline and Forefront from which to provide resources to at-risk users and their networks.
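
    That report triage can be pictured as a priority queue: each flagged post or user report carries a model-estimated risk score, and moderators always pull the most urgent case first. Here is a minimal sketch of that idea only; this is not Facebook's actual system, and the post IDs and scores are invented:

    ```python
    import heapq

    queue = []  # heapq is a min-heap, so we store the negated score

    def enqueue(post_id: str, risk_score: float) -> None:
        """Queue a flagged post; risk_score in [0, 1] comes from an upstream classifier."""
        heapq.heappush(queue, (-risk_score, post_id))

    def next_case() -> tuple[str, float]:
        """Return the highest-risk post still waiting for a moderator."""
        neg_score, post_id = heapq.heappop(queue)
        return post_id, -neg_score

    enqueue("post-123", 0.91)  # proactively flagged by the AI
    enqueue("post-456", 0.35)  # an ordinary user report
    print(next_case())         # -> ('post-123', 0.91): the riskier case is handled first
    ```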

    "This is about shaving off minutes at every single step of the process, especially in Facebook Live," says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 "wellness checks" with first-responders visiting affected users. "There have been cases where the first-responder has arrived and the person is still broadcasting."

    The idea of Facebook proactively scanning the content of people's posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn't have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying "we have an opportunity to help here so we're going to invest in that." There are certainly massive beneficial aspects to the technology, but it's another space where we have little choice but to hope Facebook doesn't go too far.

    Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that "In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate."

    Unfortunately, after TechCrunch asked if there was a way for users to opt out of having their posts scanned, a Facebook spokesperson responded that users cannot opt out. They noted that the feature is designed to enhance user safety, and that support resources offered by Facebook can be quickly dismissed if a user doesn't want to see them.

    Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like "Are you OK?" and "Do you need help?"
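
    In practice that amounts to a supervised text classifier: past posts serve as training examples, "was manually reported" serves as the label, and new posts are scored against the learned patterns. Below is a toy sketch of that idea assuming a generic bag-of-words model; it is not Facebook's published system, and all example data and library choices are purely illustrative (concerned comments such as "Are you OK?" could similarly be folded in as extra features):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training examples: the label marks whether a post was reported.
    posts = [
        "i don't see the point in going on anymore",
        "had a great weekend hiking with friends",
        "nobody would even notice if i was gone",
        "new job starts monday, so excited",
    ]
    reported = [1, 0, 1, 0]  # 1 = the post was manually reported for suicide risk

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(posts, reported)

    # At scoring time every new post gets a risk probability; posts above some
    # threshold would be routed to prevention-trained human moderators.
    risk = model.predict_proba(["i can't do this anymore"])[0][1]
    print(f"estimated risk: {risk:.2f}")
    ```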

    "We've talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them," Rosen says. "This puts Facebook in a really unique position. We can help connect people who are in distress to friends and to organizations that can help them."

    How suicide reporting works on Facebook now

    Through the combination of AI, human moderators and crowdsourced reports, Facebook could try to prevent tragedies like when a father killed himself on Facebook Live last month. Live broadcasts in particular have the power to wrongly glorify suicide, hence the necessary new precautions, and also to reach a large audience, since everyone sees the content simultaneously, unlike recorded Facebook videos that can be flagged and taken down before many people view them.

    Now, if someone is expressing thoughts of suicide in any type of Facebook post, Facebook's AI will both proactively detect it and flag it to prevention-trained human moderators, and make reporting options for viewers more accessible.

    https://techcrunch.com/2017/11/27/fa...ecirc_featured

  2. #2
    Most awesome Member emmy_dreamy
    Join Date: Jul 2007
    Location: Maine
    Posts: 3,276
    Rep Power: 15303301
    I can see the pros of this, but I can also see the cons. Not everyone is happy all of the time, and I'm just afraid I'll post something and get a visit from the local PD. Then again, I guess that could be seen as a good thing.
