AI can help detect suicide risk earlier, FIU Business research finds.


As mental health challenges continue to rise, particularly among teens and young adults, new research from FIU Business explores how artificial intelligence (AI) can be leveraged to detect suicide risk earlier and more effectively than traditional methods.

A new study by Pouyan Esmaeil Zadeh, associate professor of information systems and business analytics, investigated the potential of AI-powered tools like large language models (LLMs) to flag early warning signs of suicidal ideation or suicidal thoughts. The research emphasized the role of generative AI in detecting subtle behavioral cues, especially in individuals who may not be willing or able to express distress directly.

“Traditional tools and techniques often rely on direct questioning, which can be ineffective, particularly with teenagers who are reluctant to share openly,” said Esmaeil Zadeh. “AI offers the potential to passively detect concerning patterns from conversations and social media without requiring a person to say outright that they are struggling.”

Esmaeil Zadeh’s study, published in AI and Ethics in June 2025, presents a comprehensive review of AI approaches used for suicide risk detection, based on an analysis of 163 articles on the subject. The analysis evaluated tools such as chatbots, deep learning models and LLMs like ChatGPT and introduced a framework to assess their effectiveness in suicide prevention.

The study proposes that AI tools can be used to monitor mental health by analyzing what people write on social media, in medical notes, or during conversations. These tools can be trained to look for signs such as feelings of hopelessness, words related to self-harm, negative emotions, and sudden changes in language or tone. Based on what it finds, the system can rate the risk and either suggest a safety plan or alert a professional. According to the findings, even older AI methods can catch about 85% of posts that show suicidal thoughts. Newer models, like those based on GPT-3, can be even more accurate, reaching nearly 95%. Some combined approaches perform better still, especially at telling apart suicidal and non-suicidal content.
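The study does not publish code, but the workflow it describes, scanning text for warning signs, rating the overall risk, and escalating when needed, can be sketched briefly. The sketch below is a hypothetical, simplified illustration using keyword rules; the models reviewed in the study rely on deep learning and LLMs rather than keyword matching, and the phrase lists and thresholds here are invented for illustration only.

```python
# Hypothetical sketch of the screening workflow described in the article:
# scan text for risk signals, rate the risk, and decide whether to suggest
# a safety plan or alert a professional. Real systems would use trained
# deep-learning or LLM classifiers, not simple keyword rules.

RISK_SIGNALS = {
    "hopelessness": ["hopeless", "no way out", "nothing matters", "pointless"],
    "self_harm": ["hurt myself", "end it all", "self-harm", "don't want to be here"],
    "negative_affect": ["worthless", "alone", "empty", "can't go on"],
}

def score_text(text: str) -> dict:
    """Count which categories of warning signs appear in a piece of text."""
    lowered = text.lower()
    return {name: sum(phrase in lowered for phrase in phrases)
            for name, phrases in RISK_SIGNALS.items()}

def rate_risk(hits: dict) -> str:
    """Map signal counts to a coarse risk level (thresholds are illustrative)."""
    total = sum(hits.values())
    if hits["self_harm"] > 0 or total >= 3:
        return "high"      # alert a trained professional
    if total >= 1:
        return "moderate"  # suggest a safety plan and keep monitoring
    return "low"

if __name__ == "__main__":
    post = "Lately everything feels pointless and I feel so alone."
    hits = score_text(post)
    print(hits, "->", rate_risk(hits))  # two signals detected -> "moderate"
```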

These results suggest that advanced AI, when used in chatbots, can be as sensitive as human experts at spotting warning signs. However, Esmaeil Zadeh stresses that these tools must be used with human oversight, clear steps for action, and careful monitoring to avoid mistakes or unfair treatment.

The study shows that AI-powered chatbots for effective suicide risk detection need two key elements: comprehensive clinical knowledge drawn from extensive medical literature, and advanced conversational skills for non-judgmental, empathetic interactions. The chatbot's knowledge helps it recognize warning signs and risks, while its personality creates a safe space for users to share without stigma or pressure. Balancing expertise (knowledge gained through training) with compassion (personality) is crucial for early detection, enabling the identification of at-risk individuals and building the trust needed for effective intervention.
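As a rough illustration of how those two elements might be combined in practice, the hypothetical configuration below pairs a "knowledge" side (clinical sources and warning signs) with a "personality" side (an empathetic conversational instruction), alongside the safeguards the study calls for. The field names and prompt text are assumptions for illustration, not taken from the paper.

```python
# Hypothetical configuration sketch for a domain-specific mental health chatbot,
# pairing clinical knowledge (what to recognize) with conversational personality
# (how to respond), plus oversight and escalation safeguards.

CHATBOT_CONFIG = {
    # Knowledge: grounding in clinical literature and recognized warning signs.
    "knowledge_sources": [
        "clinical screening guidelines",
        "peer-reviewed suicide-risk literature",
    ],
    "warning_signs": [
        "hopelessness", "self-harm language", "withdrawal", "abrupt tone changes",
    ],

    # Personality: non-judgmental, empathetic conversational style.
    "system_prompt": (
        "You are a supportive, non-judgmental listener. Respond with empathy, "
        "never with blame or pressure. If the conversation shows warning signs "
        "of self-harm, follow the escalation protocol and connect the user "
        "with a trained professional."
    ),

    # Safeguards the study calls for: human oversight and clear escalation steps.
    "escalation_protocol": "notify on-call counselor",
    "human_review_required": True,
}
```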

Esmaeil Zadeh draws a sharp distinction between detection and treatment. “AI is not ready to replace mental health professionals,” Esmaeil Zadeh said. “But it can serve as a supplement for early detection, a crucial window when timely intervention can save lives.”

While some general-purpose AI tools have come under scrutiny for generating inaccurate or harmful mental health advice, Esmaeil Zadeh’s research focused on responsible use. He advocated for the development of domain-specific mental health chatbots with ethical guardrails, privacy protection and escalation protocols that connect at-risk users to trained professionals.

His analysis compares several long-standing mental health chatbots and proposes standardized evaluation criteria such as accuracy in risk detection, user engagement and safety monitoring.
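As a sketch of how the risk-detection accuracy criterion could be made measurable, the snippet below computes sensitivity and specificity over labeled test conversations. The metric choices and example values are illustrative assumptions, not the study's benchmark.

```python
# Illustrative sketch: scoring a chatbot's risk-detection accuracy against
# labeled test cases. Values and labels here are invented for illustration.

def detection_metrics(predictions: list[bool], labels: list[bool]) -> dict:
    """Sensitivity and specificity of flagged-at-risk predictions vs. ground truth."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    tn = sum(not p and not l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,  # at-risk cases caught
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,  # non-risk cases passed
    }

# Example: four labeled conversations, one false alarm.
print(detection_metrics([True, True, False, True], [True, True, False, False]))
# {'sensitivity': 1.0, 'specificity': 0.5}
```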

As generative AI technologies evolve, Esmaeil Zadeh says that with proper training and oversight, these tools can be integrated into clinical workflows, college counseling centers and even school systems to enhance early intervention strategies.

“Our goal isn’t to replace therapy, but to identify risk earlier, before a crisis occurs,” said Esmaeil Zadeh. “AI can’t heal, but it can help start the conversation that leads to healing.”