ChatGPT in psychiatry: promises and pitfalls

Abstract

ChatGPT has become a hot topic of discussion since its release in November 2022. The number of publications on the potential applications of ChatGPT in various fields is on the rise. However, viewpoints on the use of ChatGPT in psychiatry are lacking. This article aims to address this gap by examining the promises and pitfalls of using ChatGPT in psychiatric practice. While ChatGPT offers several opportunities, further research is warranted, as the use of chatbots like ChatGPT raises various technical and ethical concerns. Some practical ways of addressing these challenges are also discussed.

Introduction

ChatGPT has enjoyed unprecedented success since its launch in November 2022. The chatbot has taken the world by storm: its user base climbed to 100 million within just 2 months of its release, making ChatGPT the fastest-growing application of its kind in history [1]. The use of artificial intelligence (AI) and chatbots in healthcare, including psychiatry, is not a new concept. As technology advances and new applications emerge, AI has brought about groundbreaking changes in the field of psychiatry. Since the release of ChatGPT, there has been an increasing number of publications on its applications in areas such as scientific writing [2], language editing [3] and medical education [4]. However, opinions on the use of ChatGPT in psychiatry are lacking. This article examines the opportunities and potential drawbacks of incorporating ChatGPT into psychiatric practice. In addition, some practical approaches to mitigating the challenges posed by the psychiatric applications of ChatGPT are explored.

What is ChatGPT?

GPT is the abbreviation for Generative Pretrained Transformer. ChatGPT is a chatbot developed by OpenAI and is based on GPT version 3.5. Prior to ChatGPT, there were three generations of GPT, namely GPT-1, GPT-2 and GPT-3. ChatGPT is a sibling model of InstructGPT and uses a transformer-based architecture to generate human-like text and conversation in response to user queries [5]. On 14 March 2023, OpenAI released a newer version of GPT known as GPT-4 [6]. Unlike ChatGPT, GPT-4 is not freeware and is only available to ChatGPT Plus users who pay a subscription fee. This article focuses on ChatGPT, as it is currently available to users at no cost and is therefore more relevant to this discussion.
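
Although ChatGPT is typically used through its web interface, the same GPT-3.5 family of models can also be accessed programmatically. The snippet below is a minimal illustrative sketch, assuming the official openai Python client (v1.x) and an API key stored in the OPENAI_API_KEY environment variable; the model name, prompt and parameters are examples only and are not drawn from the article.

```python
# Minimal sketch of querying a GPT-3.5-class model programmatically.
# Assumptions (not from the article): the official "openai" Python client
# (v1.x) is installed and an API key is set in the OPENAI_API_KEY
# environment variable; the model name, prompt and temperature are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the GPT-3.5 model family behind ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what is a transformer model?"},
    ],
    temperature=0.7,  # controls randomness of the generated text
)

print(response.choices[0].message.content)
```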

Opportunities of using ChatGPT in psychiatry

The use of chatbots in psychiatry started long before the introduction of ChatGPT. For example, the application of earlier chatbots such as ELIZA and Woebot has provided valuable insights into the opportunities that ChatGPT can offer [7]. Previous studies have investigated several applications of chatbots in psychiatric practice, including patient education and disease prevention [8], mental health screening [9], detection of self-harm [10] and suicidal ideation [11], as well as patient management, such as the delivery of cognitive behavioral therapy [12]. Table 1 summarizes specific examples of using ChatGPT in psychiatry.

Table 1 Specific examples of using ChatGPT in psychiatry

The author believes that, as a robust AI-powered chatbot trained on a large corpus of data, ChatGPT is capable of offering diverse mental health services, similar to other chatbots reported previously. However, it is important to note that ChatGPT should be used as a supportive tool rather than a replacement for the expert opinions and services provided by a psychiatrist. Advantages of using ChatGPT in psychiatry include (1) its around-the-clock availability, (2) reduction in the stigma associated with seeking healthcare from a professional, (3) cost-effectiveness relative to the high costs of traditional psychiatric care and (4) efficiency due to reduced waiting times and quick access to large volumes of information.
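
To illustrate what such a supportive, rather than autonomous, role could look like, the sketch below shows a hypothetical triage helper that flags free-text intake messages for clinician review. The prompt, labels and model name are illustrative assumptions, not a validated screening instrument, and any output would still require review by a qualified professional.

```python
# Hypothetical sketch: flagging free-text intake messages for clinician
# review with a GPT-3.5-class model. The prompt and labels below are
# illustrative assumptions, NOT a validated screening instrument; every
# message would still be reviewed by a qualified professional.
from openai import OpenAI

client = OpenAI()

TRIAGE_PROMPT = (
    "You are a triage assistant supporting, not replacing, a psychiatrist. "
    "Read the patient's message and reply with exactly one word: "
    "'urgent' if it suggests self-harm or suicidal ideation, otherwise 'routine'. "
    "Do not give advice to the patient."
)

def flag_for_review(patient_message: str) -> str:
    """Return 'urgent' or 'routine' for a single intake message."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": patient_message},
        ],
        temperature=0,  # deterministic labels for triage
    )
    return response.choices[0].message.content.strip().lower()

# Example: route 'urgent' messages to a human clinician immediately.
print(flag_for_review("I have not slept in days and feel completely hopeless."))
```

In such a workflow, the chatbot only prioritizes the queue; the assessment and any clinical decision remain with the psychiatrist.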

Potential pitfalls of using ChatGPT in psychiatry

The use of ChatGPT is associated with several potential drawbacks and limitations. Like any computer system, chatbots can make mistakes. One obvious example is an error made by Google’s Bard in promotional material, which caused a $100-billion plunge in Alphabet’s market value [19]. This section discusses several potential pitfalls and the social and ethical concerns of using ChatGPT in psychiatry.

Limited emotional intelligence

Lack of empathy and emotional understanding is one of the disadvantages of using ChatGPT in psychiatry, as machines have difficulty comprehending complex human emotions. Unlike a psychiatrist, ChatGPT cannot interpret non-verbal cues such as facial expressions and body language. Therefore, ChatGPT may cause miscommunication by generating inappropriate responses that can confuse and mislead patients.

Overreliance on technology

Overdependence on AI in clinical decision making can weaken the practitioner’s critical thinking skills. This, in turn, can lead to inaccurate diagnoses and inappropriate treatment plans. In addition, relying heavily on AI in psychiatry can lead to the dehumanization of mental healthcare. Consequently, this can compromise trust and communication and have a negative impact on the doctor–patient relationship.

Lack of accountability

The legal and ethical considerations of utilizing AI in healthcare have long been a subject of debate, particularly when AI makes fatal mistakes, which leads to the question of who should be held responsible [20]. As ChatGPT has limited emotional intelligence, the chatbot may give a wrong diagnosis, leading to inadequate or inappropriate treatment. In one study, Elyoseph and Levkovich investigated the potential of ChatGPT in suicide risk assessment and reported that ChatGPT underestimated suicide risk and mental resilience in a hypothetical case [21]. Furthermore, machines have a limited ability to handle crises such as suicidal or violent behavior. Failure to address these situations in a timely manner may result in life-threatening consequences.

Privacy and confidentiality

Another common subject of debate concerning the use of AI in healthcare is patient privacy and confidentiality [22]. As numerous conversations with ChatGPT are generated daily, concerns around platform security arise, particularly when sensitive patient data are archived. Transparency is also a common concern, especially when there is limited understanding of the complex AI algorithms used in the application, leading to the “black box” problem and a lack of trust [23]. In addition, the training of chatbots is crucial to information accuracy. AI is prone to bias when the training data are unrepresentative [24]. Given that the training data significantly influence its output, ChatGPT may provide biased information, resulting in poor generalization, misdiagnoses and potentially fatal outcomes.

Other ethical and social concerns

Not every patient has access to ChatGPT. Using ChatGPT requires a device such as a computer, mobile phone or tablet with internet access. However, patients in lower socioeconomic groups or in low-income countries may not be able to afford these devices. This raises concerns about accessibility and equity. Furthermore, the increasing use of AI could lead to job displacement among mental healthcare professionals. Therefore, it is essential that the use of AI does not come at the cost of human expertise.

Potential pitfalls from the patients’ perspectives

Although AI has become increasingly popular in recent years, there are several concerns from the patients’ perspectives. Patients’ views on the use of AI, including ChatGPT, are mixed and evolving. While some patients embrace AI with open arms because of its round-the-clock availability, others may have concerns regarding its impersonal nature. Some patients are not tech-savvy and may not be able to use AI effectively, especially psychiatric patients who are cognitively or mentally disadvantaged. Furthermore, vulnerable psychiatric patients may be subject to exploitation and manipulation by AI, especially when they lack social support or an understanding of the limitations of AI technologies such as chatbots.

Overcoming the challenges for the use of ChatGPT in psychiatry

Addressing the challenges for the use of ChatGPT in psychiatry requires a multi-faceted approach:

Education and training

It is important to ensure that both mental healthcare professionals and patients receive adequate education and training on the appropriate use of AI technologies such as ChatGPT in psychiatry. They need to understand the strengths and limitations of ChatGPT and use it with care. Practitioners should be reminded that ChatGPT is a supportive tool in clinical decision making and should not replace their expertise, whereas patients should be informed of their rights regarding data confidentiality and informed consent.

Optimization of technology

Enhancement of the current technology is necessary to minimize errors and biases. This can be done by ensuring that the data used to train chatbots like ChatGPT are diverse and representative, to avoid diagnostic inaccuracy as well as discriminatory practices. The emotional intelligence of AI models can be further improved by strengthening their sentiment analysis capabilities to better recognize complex emotions. In addition, the development of user-friendly interfaces that expose the AI’s reasoning process can build trust and enhance transparency.
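
As a concrete illustration of the sentiment analysis building block mentioned above, the sketch below uses an off-the-shelf open-source classifier. The library (Hugging Face transformers) and its default model are assumptions chosen for illustration; they are not components of ChatGPT and are not clinically validated.

```python
# Illustrative sketch of a sentiment analysis building block, assuming the
# Hugging Face "transformers" library. The default pretrained model it loads
# is a generic sentiment classifier, not part of ChatGPT and not a
# clinically validated tool.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model on first use

messages = [
    "I finally slept well and feel a bit more hopeful this week.",
    "Nothing helps anymore and I cannot see the point of trying.",
]

for text in messages:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8}  ({result['score']:.2f})  {text}")
```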

Development of ethical guidelines and regulatory framework

Healthcare providers such as hospitals and clinics can develop clear ethical guidelines for the use of ChatGPT or other AI technologies in mental healthcare. These guidelines should encompass patient data privacy and confidentiality, patient autonomy and accountability. In parallel, policy makers should develop regulatory frameworks to ensure the safe and responsible use of AI technologies in healthcare. There should also be regular monitoring and evaluation of the use of these technologies.

Rigorous research and development

Continuous research and development in AI technologies is necessary, using clinical trials and real-world data analysis in clinical settings to evaluate the safety and effectiveness of chatbots such as ChatGPT. Such research should take a multi-disciplinary approach, involving psychiatrists, technologists, policymakers and ethicists, to address emerging challenges and to facilitate the integration of AI in psychiatry.

Conclusions

Embracing new technologies in psychiatry can drive innovation in mental healthcare. Considering the rising trend of AI in healthcare, ChatGPT shows great potential in psychiatry. However, many technical and ethical questions remain unanswered. Therefore, more research is necessary before ChatGPT can be widely implemented in psychiatry. Psychiatrists, ethicists, technologists and policymakers should take a multi-pronged approach to addressing key challenges. Patient and practitioner education, optimization of the current technology, and the development of regulatory frameworks and ethical guidelines can help ensure the safe and effective use of AI in psychiatry. Importantly, chatbots like ChatGPT should not replace the expertise of psychiatrists. After all, psychiatry is not only a science but also an art that requires human interaction.

Availability of data and materials

Not applicable.

Abbreviations

AI: Artificial intelligence
GPT: Generative Pretrained Transformer
ML: Machine learning
PTSD: Post-traumatic stress disorder

References

  1. Reuters. ChatGPT sets record for fastest-growing user base - analyst note. 2023. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. Accessed 28 Mar 2023.

  2. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus. 2023;15(2): e35179. https://doi.org/10.7759/cureus.35179.

  3. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. 2023;27(1):75. https://doi.org/10.1186/s13054-023-04380-2.

  4. Mbakwe AB, Lourentzou I, Celi LA, Mechanic OJ, Dagan A. ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLOS Digit Health. 2023;2(2): e0000205. https://doi.org/10.1371/journal.pdig.0000205.

  5. OpenAI. ChatGPT. 2023. https://openai.com/blog/chatgpt. Accessed 28 Mar 2023.

  6. OpenAI. GPT-4. 2023. https://openai.com/research/gpt-4. Accessed 28 Mar 2023.

  7. Pham KT, Nabizadeh A, Selek S. Artificial intelligence and chatbots in psychiatry. Psychiatr Q. 2022;93(1):249–53. https://doi.org/10.1007/s11126-022-09973-8.

  8. Fitzsimmons-Craft EE, Chan WW, Smith AC, Firebaugh ML, Fowler LA, et al. Effectiveness of a chatbot for eating disorders prevention: a randomized clinical trial. Int J Eat Disord. 2022;55(3):343–53. https://doi.org/10.1002/eat.23662.

  9. Schick A, Feine J, Morana S, Maedche A, Reininghaus U. Validity of chatbot use for mental health assessment: experimental study. JMIR Mhealth Uhealth. 2022;10(10): e28082. https://doi.org/10.2196/28082.

  10. Deshpande S, Warren J. Self-harm detection for mental health chatbots. Stud Health Technol Inform. 2021;281:48–52. https://doi.org/10.3233/SHTI210118.

  11. Sels L, Homan S, Ries A, Santhanam P, Scheerer H, Colla M, et al. SIMON: a digital protocol to monitor and predict suicidal ideation. Front Psychiatry. 2021;12: 554811. https://doi.org/10.3389/fpsyt.2021.554811.

  12. Jang S, Kim JJ, Kim SJ, Hong J, Kim S, Kim E. Mobile app-based chatbot to deliver cognitive behavioral therapy and psychoeducation for adults with attention deficit: a development and feasibility/usability study. Int J Med Inform. 2021;150: 104440. https://doi.org/10.1016/j.ijmedinf.2021.104440.

  13. Smith A, Hachen S, Schleifer R, Bhugra D, Buadze A, Liebrenz M. Old dog, new tricks? Exploring the potential functionalities of ChatGPT in supporting educational methods in social psychiatry. Int J Soc Psychiatry. 2023;69(8):1882–9. https://doi.org/10.1177/00207640231178451.

  14. Franco D’Souza R, Amanullah S, Mathew M, Surapaneni KM. Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes. Asian J Psychiatr. 2023;89: 103770. https://doi.org/10.1016/j.ajp.2023.103770.

  15. Hwang G, Lee DY, Seol S, Jung J, Choi Y, Her ES, et al. Assessing the potential of ChatGPT for psychodynamic formulations in psychiatry: an exploratory study. Psychiatry Res. 2023;331: 115655. https://doi.org/10.1016/j.psychres.2023.115655.

  16. Luykx JJ, Gerritse F, Habets PC, Vinkers CH. The performance of ChatGPT in generating answers to clinical questions in psychiatry: a two-layer assessment. World Psychiatr. 2023;22(3):479–80. https://doi.org/10.1002/wps.21145.

  17. Galido PV, Butala S, Chakerian M, Agustines D. A case study demonstrating applications of ChatGPT in the clinical management of treatment-resistant schizophrenia. Cureus. 2023;15(4): e38166. https://doi.org/10.7759/cureus.38166.

  18. Bartal A, Jagodnik KM, Chan SJ, Dekel S. ChatGPT demonstrates potential for identifying psychiatric disorders: application to childbirth-related post-traumatic stress disorder. Res Sq. 2023;33428787. https://doi.org/10.21203/rs.3.rs-3428787/v1.

  19. Reuters. Alphabet shares dive after Google AI chatbot Bard flubs answer in ad. 2023. https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/. Accessed 29 Mar 2023.

  20. Naik N, Hameed BMZ, Shetty DK, Swain D, Shah M, Paul R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. 2022;9: 862322. https://doi.org/10.3389/fsurg.2022.862322.

  21. Elyoseph Z, Levkovich I. Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment. Front Psychiatry. 2023;14:1213141. https://doi.org/10.3389/fpsyt.2023.1213141.

  22. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics. 2021;22(1):122. https://doi.org/10.1186/s12910-021-00687-3.

  23. Sariyar M, Holm J. Medical informatics in a tension between black-box ai and trust. Stud Health Technol Inform. 2022;289:41–4. https://doi.org/10.3233/SHTI210854.

  24. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: a call for open science. Patterns (N Y). 2021;2(10): 100347. https://doi.org/10.1016/j.patter.2021.100347.

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Contributions

The author contributed solely to the writing and submission of this article, and reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Rebecca Shin-Yee Wong.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wong, R.SY. ChatGPT in psychiatry: promises and pitfalls. Egypt J Neurol Psychiatry Neurosurg 60, 14 (2024). https://doi.org/10.1186/s41983-024-00791-2
