Within the last decade, technology has advanced to provide services and capabilities previously unimaginable, including drones that deliver packages, self-driving cars and virtual assistants like Alexa or Google Home. Because the technology is evolving so quickly, there may be little, if any, legal precedent or statutory guidance for companies that deploy emerging tools such as artificial intelligence (AI).
For example, Facebook recently announced the implementation of AI that detects suicidal posts, with the goal of preventing suicides or other harm. According to Facebook, the AI proactively scans users’ posts for patterns of suicidal thoughts. Human moderators review flagged content and can dispatch first responders to the user who published the posts.
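To make that workflow concrete, the sketch below shows a scan-flag-review pipeline of the general kind described above. It is purely illustrative: the scoring heuristic, threshold and function names are assumptions chosen for the example and do not reflect Facebook's actual system.

```python
# Illustrative sketch only: scan posts -> flag high-risk ones -> queue for human review.
# The scoring model, threshold, and names are hypothetical, not Facebook's system.
from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # assumed cutoff above which a post is queued for review

@dataclass
class Post:
    user_id: str
    text: str

def risk_score(post: Post) -> float:
    """Stand-in for a trained classifier; here, a trivial keyword heuristic."""
    signals = ("can't go on", "end it all", "goodbye forever")
    hits = sum(phrase in post.text.lower() for phrase in signals)
    return min(1.0, hits / len(signals) + 0.5 * bool(hits))

def scan(posts: list[Post]) -> list[Post]:
    """Flag posts whose score meets or exceeds the threshold."""
    return [p for p in posts if risk_score(p) >= RISK_THRESHOLD]

def human_review(flagged: list[Post]) -> None:
    """A moderator decides, post by post, whether to escalate to first responders."""
    for post in flagged:
        print(f"REVIEW NEEDED: user={post.user_id} text={post.text!r}")

if __name__ == "__main__":
    posts = [
        Post("u1", "Great hike today!"),
        Post("u2", "I just want to end it all, goodbye forever"),
    ]
    human_review(scan(posts))
```

The key design point for the legal questions that follow is the last step: the software only flags; a person decides whether anyone is sent to a user's home.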
While existing laws and precedent impose no such duty on Facebook, it has chosen to be proactive in its approach to self-harm and suicide prevention. This may be a response to criticism Facebook has faced in recent years for users livestreaming suicides, murders and other violent crimes on its platform.
Although Facebook works to remove those videos once it learns of them, the nature of the social media platform, with millions of users drawn from the general public, makes that task difficult if not impossible without the use of AI. AI enables Facebook to sift through billions of posts and videos to detect patterns that could indicate suicidal thoughts or actions, rather than waiting for user reports.
By doing so, however, Facebook may be taking on additional, or at least different, legal obligations. As a general rule, absent a special relationship, people do not have a duty to prevent suicide. By implementing its AI with the stated goal of predicting and preventing self-harm, Facebook may be assuming an obligation to do so.
Regardless, Facebook will face questions as to how well its AI system works. The system is unlikely to be able to predict suicides with 100 percent accuracy; any predictive system will have false alarms. Will Facebook be responsible for the use of first responder resources when they’re called to the home of someone who doesn’t need help?
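A back-of-the-envelope calculation shows why false alarms are essentially guaranteed. All figures below are assumptions chosen only to illustrate the base-rate problem; they are not measurements of Facebook's system. Even a classifier that rarely errs on any single post will, because genuinely at-risk posts are rare, generate many more false flags than true ones.

```python
# Hypothetical numbers, for illustration only: why most flags can be false alarms
# even when the per-post error rate looks small.
base_rate = 0.001            # assumed share of posts that genuinely signal risk
sensitivity = 0.95           # assumed chance a genuinely at-risk post is flagged
false_positive_rate = 0.01   # assumed chance a non-risk post is flagged anyway

posts = 1_000_000
true_flags = posts * base_rate * sensitivity                  # about 950
false_flags = posts * (1 - base_rate) * false_positive_rate   # about 9,990

precision = true_flags / (true_flags + false_flags)
print(f"True flags: {true_flags:.0f}, false flags: {false_flags:.0f}")
print(f"Share of flags that are genuine: {precision:.1%}")    # roughly 9 percent
```

Under these assumed numbers, fewer than one in ten flagged posts would come from a user who actually needs help, which is precisely the scenario in which first responder resources could be dispatched unnecessarily.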
Additionally, once the AI system identifies an at-risk post, it’s up to a human moderator to assess the post and decide whether to send resources. If the human moderator takes no action and the poster harms himself or herself, can Facebook be held responsible?
Moreover, could Facebook potentially be responsible for preventing violent crimes against others? The scenario is reminiscent of the Tom Cruise movie “Minority Report,” which 15 years ago imagined a future where technology is used to predict crimes before they happen. If Facebook’s AI works as designed, it is not a large conceptual leap to using similar AI to predict, and supposedly prevent, criminal acts. That slippery slope is something Facebook probably would like to avoid at present.
Facebook undoubtedly consulted with its lawyers before implementing its AI systems. With many of the legal ramifications and outcomes uncertain, Facebook likely weighed general business and ethical considerations when making its decision. Looking to the future in an area of undeveloped law, that is perhaps the best it could do.
———
William Harstad is a partner in the litigation and alternative dispute resolution practice group at Carlsmith Ball LLP. He can be reached at wharstad@carlsmith.com.