NEW YORK--(BUSINESS WIRE)--To mark Suicide Prevention Awareness Month, Talkspace announced results from three years of using its AI algorithm to identify individuals at risk of self-harm or suicide. Using machine learning, the Talkspace platform detects language patterns consistent with risk of self-harm. The analysis runs in real time on messages patients send in their secure, encrypted virtual therapy room and triggers an urgent alert to the therapist. While Talkspace is not a crisis response service, an alert that an individual is displaying signs of suicidal ideation allows the provider to respond with appropriate care. An evaluation on a subset of anonymized messages from consenting clients suggests the model is 83% accurate.
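Talkspace has not published the implementation details of this screening pipeline. Purely for illustration, the Python sketch below shows the general shape of a real-time message-screening loop of this kind; every name, the alert threshold, and the stand-in scorer are hypothetical assumptions, not Talkspace's actual code.

    # Illustrative sketch only: Talkspace has not disclosed its implementation.
    # All names and the alert threshold here are hypothetical.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Message:
        client_id: str
        text: str

    ALERT_THRESHOLD = 0.8  # assumed cutoff; the real value is not public

    def screen_message(msg: Message,
                       risk_score: Callable[[str], float],
                       alert: Callable[[str, float], None]) -> None:
        """Score an incoming therapy-room message with an NLP risk model
        and, if the score crosses the threshold, send an urgent alert."""
        score = risk_score(msg.text)
        if score >= ALERT_THRESHOLD:
            alert(msg.client_id, score)

    if __name__ == "__main__":
        # Stand-in scorer for demonstration; a real system would call a
        # trained classifier (e.g., predict_proba) on the message text.
        fake_score = lambda text: 0.9
        notify = lambda cid, s: print(f"URGENT: client {cid} flagged, score {s:.2f}")
        screen_message(Message("c-123", "example message"), fake_score, notify)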
“Technology will never replace that uniquely human interaction that occurs between provider and patient. However, we will prioritize machine learning capabilities that offer clinical assistance to improve the ability of our therapists to deliver the highest quality of care,” said Jon Cohen, MD, CEO of Talkspace. “In light of escalating suicide rates in the midst of a growing mental health crisis, developing and scaling technological aids for early intervention is mission critical for Talkspace.”
The natural language processing (NLP) model was developed by Talkspace in partnership with researchers at NYU Grossman School of Medicine and trained on anonymized, client-consented therapy transcripts to distinguish messages displaying suicidal risk from those without. Research published in Psychotherapy Research, “Just-in-time crisis response: Suicide alert system for telemedicine psychotherapy settings” (Bantilan, N., Malgaroli, M., Ray, B., & Hull, T. D., 2020), presents evidence that the risk detection algorithm distinguished risk from non-risk content with 83% accuracy when compared with a human expert evaluating the same material.
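To illustrate what an accuracy figure of this kind measures, the short Python sketch below compares model predictions against expert labels on the same set of messages. The toy labels are invented for demonstration and are not Talkspace or study data.

    # Toy illustration of accuracy against an expert rater.
    # The labels below are invented; they are not Talkspace or study data.
    expert = [1, 0, 0, 1, 0, 1]  # 1 = expert judged the message high-risk
    model  = [1, 0, 1, 1, 0, 1]  # model predictions on the same messages

    agreement = sum(e == m for e, m in zip(expert, model)) / len(expert)
    print(f"accuracy vs. expert: {agreement:.0%}")  # 5 of 6 agree -> 83%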
Since its introduction on the platform in 2019, Talkspace’s proprietary NLP model has flagged approximately 32,000 Talkspace members whose written messages to their therapists showed signs of suicidality or risk of self-harm. Of the flagged individuals who continued to receive care through Talkspace, more than 50% demonstrated improved outcomes. In an internal feedback survey, 83% of Talkspace mental health providers said the feature is useful for providing clinical care and mitigating clinical risk. Talkspace will continue to develop AI technology with the goal of supporting mental health providers, enhancing quality of care, and improving outcomes for patients.
About Talkspace
Talkspace (Nasdaq: TALK) is a leading virtual behavioral healthcare company committed to helping people lead healthier, happier lives through access to high-quality mental healthcare. At Talkspace, we believe that mental healthcare is core to overall healthcare and should be available to everyone. Talkspace pioneered the ability to text with a licensed therapist from anywhere and now offers a comprehensive suite of mental health services, from self-guided products to individual and couples therapy, in addition to psychiatric treatment and medication management. With Talkspace’s core psychotherapy offering, members are matched with one of thousands of licensed providers across all 50 states and can choose from a variety of subscription plans, including live video, text, or audio chat sessions and/or asynchronous text messaging.
All care offered at Talkspace is delivered through an easy-to-use, fully encrypted web and mobile platform that meets HIPAA, federal, and state regulatory requirements. Talkspace covers approximately 110 million lives as of June 30, 2023, through our partnerships with employers, health plans, and paid benefits programs.
For more information, visit www.talkspace.com.