Ever since I was a little kid, I remember watching the Terminator movies and fantasizing about the possibility of the “machines” acquiring consciousness and realizing humans are no longer needed. Every time a significant advance in artificial intelligence occurs, I hear my friends whispering, “Skynet.” Computing is advancing at levels we could only dream of when we were playing with Atari and Nintendo. Let’s face it: artificial intelligence is here to stay and will most likely take the world by storm. Part of my world is healthcare and improving mental health around me and within me. That purpose led me to the inevitable question, “Can Robots (AI) help us improve mental health?” Here is what I found in my preliminary research.
Artificial intelligence (AI) is increasingly used across healthcare to improve patient outcomes. One area where AI has shown tremendous promise is mental health. AI-powered tools can aid in the early detection, diagnosis, and treatment of mental illnesses, providing an opportunity for early intervention and improved outcomes for patients. The question still remains: “Can we trust Robots to care for our Mental Health?” Let us explore the potential benefits and risks of using AI in mental health and discuss real-life scenarios of how AI has made an impact.
CASE ONE
In 2018, a 42-year-old man named Andrew Leahey was diagnosed with a brain tumor. Following surgery and radiation, he experienced depression and anxiety, which he found challenging to manage. Andrew’s doctor recommended he try Woebot, an AI-powered chatbot that uses cognitive-behavioral therapy (CBT) techniques to help people manage symptoms of depression and anxiety. It engages users in conversation, providing evidence-based coping strategies and self-help resources.
Andrew was initially skeptical of using a chatbot for mental health support but decided to give Woebot a try. He found the chatbot to be non-judgmental and easy to talk to, and it provided him with helpful tools to manage his symptoms. Over time, Andrew’s mental health improved, and he felt more confident in managing his symptoms. He continued using Woebot as a supplement to his traditional therapy, finding it a valuable resource for ongoing support.
Andrew’s experience with Woebot demonstrates the potential benefits of AI-powered tools in mental health. They can provide personalized support and resources that complement traditional therapy, making mental health care more accessible and convenient.

CASE TWO
In 2018, a team of researchers at Stanford University conducted a study examining the use of AI to predict suicide risk. They analyzed over 800,000 messages from Reddit, a social media platform, posted by users who had discussed suicide or depression.
The researchers used machine learning algorithms to identify patterns in the language used in the posts and to predict the likelihood of a suicide attempt. However, the study found that the AI algorithms were only slightly better than chance at predicting suicide risk.
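To make the idea of “identifying patterns in language” concrete, here is a minimal sketch of a naive Bayes text classifier, one common approach to this kind of prediction. This is an illustration only, not the Stanford team’s actual model, and the training phrases and labels below are invented for the example.

```python
# Toy sketch: a tiny naive Bayes text classifier showing how word
# patterns in posts can be turned into a risk label. The model, the
# example phrases, and the labels are all illustrative assumptions.
from collections import Counter
import math

def train(labeled_posts):
    """Count word frequencies and post totals per label."""
    counts = {"higher_risk": Counter(), "lower_risk": Counter()}
    totals = Counter()
    for text, label in labeled_posts:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, counts, totals):
    """Return the label with the higher log-probability under naive Bayes."""
    vocab = set(counts["higher_risk"]) | set(counts["lower_risk"])
    best_label, best_logp = None, float("-inf")
    for label in counts:
        # Prior: fraction of training posts carrying this label.
        logp = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            logp += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

posts = [
    ("i feel hopeless and alone", "higher_risk"),
    ("nothing matters anymore", "higher_risk"),
    ("had a great day hiking", "lower_risk"),
    ("excited about the new job", "lower_risk"),
]
counts, totals = train(posts)
print(score("i feel so alone", counts, totals))  # prints "higher_risk"
```

Real systems use far larger datasets and more sophisticated models, but the underlying principle is the same: word statistics learned from labeled examples drive the prediction, which is why biased or sparse training data directly limits accuracy.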
While the study was intended to identify new ways to use AI to prevent suicide, it also raised concerns about the potential risks of relying on AI for mental health care. One problem is that AI algorithms may not be accurate enough to reliably predict suicide risk, leading to false positives or false negatives.
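A bit of arithmetic shows why false positives are such a stubborn problem here. Suicide attempts are rare in any screened population, so even a classifier with seemingly good accuracy flags far more non-cases than cases. The numbers below are illustrative assumptions, not figures from the study.

```python
# Illustrative base-rate arithmetic (all numbers are assumptions):
# a rare outcome means most "flagged" people are false positives.
population = 100_000      # people screened
base_rate = 0.001         # assume 0.1% will actually attempt suicide
sensitivity = 0.80        # assume 80% of true cases are flagged
specificity = 0.90        # assume 90% of non-cases are correctly cleared

true_cases = population * base_rate                              # 100
true_positives = true_cases * sensitivity                        # 80
false_positives = (population - true_cases) * (1 - specificity)  # 9,990
ppv = true_positives / (true_positives + false_positives)

print(f"people flagged: {true_positives + false_positives:.0f}")
print(f"chance a flagged person is a true case: {ppv:.1%}")  # under 1%
```

Under these assumed numbers, roughly 10,000 people are flagged to find 80 true cases, so fewer than 1 in 100 flags is correct. That is the statistical reality behind the concern that such tools can generate overwhelming numbers of false positives.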
Another concern is that relying too heavily on AI may reduce the quality of care. If mental health professionals begin to rely on AI algorithms to identify high-risk patients, they may be less likely to engage in meaningful conversations with their patients and may miss important information that could affect a patient’s risk level.
The study highlights the need for caution when using AI in mental health care and the importance of continued research to ensure that AI tools are accurate, reliable, and effective.
Let’s summarize the current state of AI in Mental Health by weighing its benefits against its risks:

Benefits of AI in Mental Health
- Early Detection and Diagnosis: AI can assist in identifying patterns in data, such as social media activity, that may indicate a person is struggling with mental health issues. Early mental illness detection and diagnosis can lead to earlier intervention and improved outcomes.
- Personalized Treatment Plans: AI-powered tools can analyze a patient’s mental health data, such as their symptoms, medical history, and social determinants of health, to develop personalized treatment plans. These treatment plans can include recommendations for therapy, medication, self-help resources, and lifestyle changes.
- Increased Access to Mental Health Care: Many people with mental illness do not have access to mental health care due to a shortage of mental health professionals, high costs, or social stigma. AI-powered tools, such as chatbots and teletherapy, can increase access to mental health care by providing low-cost, convenient, and anonymous services.
- Improved Patient Outcomes: AI can improve patient outcomes by providing earlier intervention, personalized treatment plans, and ongoing monitoring and support. Patients who receive AI-assisted mental health care may experience reduced symptoms, improved quality of life, and decreased hospitalization rates.

Risks of AI in Mental Health
- Lack of Human Interaction: AI-powered tools may provide personalized treatment plans and support but cannot replace the human connection essential for mental health treatment. Patients may feel isolated or disconnected from their care providers, reducing engagement and adherence to treatment plans.
- Accuracy and Bias: AI algorithms are only as accurate as the data they are trained on. The results may be limited or inaccurate if the data used to train the AI algorithms is biased or incomplete. This can lead to misdiagnosis or inappropriate treatment recommendations.
- Privacy and Security: AI-powered tools collect and store sensitive patient data, including personal health information and mental health status. If this data is not adequately protected, it could be hacked, leaked, or misused, leading to patient privacy and security breaches.
- Ethical Concerns: There are ethical concerns related to the use of AI in mental health, such as the potential for AI-powered tools to be used for harmful purposes or the possibility of bias.

In conclusion, AI-powered tools have the potential to revolutionize mental health care by providing accessible and personalized support to those in need. Chatbots like Woebot demonstrate the benefits of AI-powered tools, offering users a non-judgmental and convenient way to manage symptoms. However, as the Stanford University study shows, caution is needed when relying on AI for mental health care, and continued research is required to ensure that AI tools are accurate, reliable, and effective.

Ricardo Irizarry MD
Rick is a Board-Certified Psychiatrist with a subspecialty in Brain Injury Medicine and Addiction Medicine. He proudly serves as a Lieutenant Commander and Medical Corps Officer with the United States Navy. He is currently Medical Director of the ACT program at Tropical Texas Behavioral Health. Dr. Irizarry holds a position as Clinical Assistant Professor of Psychiatry at UT – Rio Grande Valley’s School of Medicine. He’s also the proud host of the Shrink Box podcast.