Generative artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT, has the potential to help clinicians manage the increasing volume of patient messages. With a 57% increase in patient messages since the pandemic, the healthcare workforce is under significant strain, often leading to burnout. The article, drawing on medical, operational, computer science, and business expertise, explores how LLMs could be integrated into healthcare to address this issue.
Key points from the article include:
Workflow Integration: LLMs can be integrated at each step of the patient messaging workflow. This includes assisting patients in composing messages, routing messages to appropriate healthcare workers, summarizing messages and relevant patient information, and drafting responses.
Improving Message Quality: LLMs can help by providing self-service options, instructions for urgent care, and requests for additional information from patients. This can reduce the need for multiple back-and-forth messages.
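One way to implement the clarifying-request step is a composition-time prompt that asks an LLM to check a draft patient message for missing details before it is sent. The sketch below is illustrative only: the prompt wording and the checklist of required details are assumptions, not taken from the article.

```python
# Hedged sketch: build a prompt that asks an LLM to check a patient's
# draft message for completeness before sending. The checklist items
# and prompt wording are hypothetical examples.

REQUIRED_DETAILS = ("symptom onset", "current medications", "severity")

def completeness_prompt(draft_message: str) -> str:
    """Return a prompt instructing an LLM to request missing details."""
    checklist = "\n".join(f"- {d}" for d in REQUIRED_DETAILS)
    return (
        "Before this patient message is sent, check whether it includes:\n"
        f"{checklist}\n"
        "If anything is missing, reply with one clarifying question for the "
        "patient; otherwise reply READY.\n\n"
        f"Patient draft:\n{draft_message}"
    )
```

Asking for missing details once, at composition time, is what reduces the back-and-forth: the clinician receives a message that already contains what they need to respond.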
Efficient Message Routing and Response: LLMs can filter out messages that don’t require a response and label the rest by priority and type. They can also draft responses at an appropriate reading level for patients, link to medical records and literature, and suggest related orders for healthcare workers to review.
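The filtering-and-labeling step can be pictured as a small triage function. The version below uses a keyword stub purely as a deterministic stand-in; in practice an LLM classifier would produce these labels. All names and rules here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Triage:
    needs_response: bool
    priority: str   # "urgent" | "routine"
    category: str   # "clinical" | "administrative"

def triage_message(text: str) -> Triage:
    """Label a patient message. Keyword rules are a stand-in for an
    LLM classifier; they are illustrative, not clinically validated."""
    lowered = text.lower().strip()
    # Filter out messages that need no reply (e.g., acknowledgments).
    if lowered in {"thanks", "thank you", "ok, thanks"}:
        return Triage(False, "routine", "administrative")
    urgent = any(w in lowered for w in ("chest pain", "shortness of breath"))
    clinical = any(w in lowered for w in ("pain", "medication", "symptom"))
    return Triage(
        needs_response=True,
        priority="urgent" if urgent else "routine",
        category="clinical" if clinical else "administrative",
    )
```

The labels then drive routing: urgent clinical messages go to a clinician's queue, routine administrative ones to front-desk staff, and no-reply messages are closed automatically.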
Clinical Validation and Integration: To effectively use LLMs in patient messaging, healthcare organizations need to develop, validate, and integrate clinical LLMs into electronic health record (EHR)-based workflows. This involves fine-tuning existing LLMs on historical patient messages and responses, validating their performance, and integrating them into the EHR using APIs.
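Fine-tuning on historical messages typically starts with converting de-identified message/response pairs into a training file. The sketch below uses the chat-style JSONL format common to current fine-tuning APIs (e.g., OpenAI's); the system prompt and example data are assumptions for illustration.

```python
import json

# Hedged sketch: serialize de-identified historical patient exchanges
# into chat-format JSONL records for LLM fine-tuning. The system prompt
# and the sample pair below are hypothetical.

SYSTEM_PROMPT = "Draft a reply to the patient message for clinician review."

def to_training_record(patient_message: str, clinician_reply: str) -> str:
    """Serialize one historical exchange as a JSONL training line."""
    record = {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_message},
            {"role": "assistant", "content": clinician_reply},
        ]
    }
    return json.dumps(record)

# Build training lines from paired historical data (illustrative pair).
pairs = [
    ("Can I take ibuprofen with my current prescription?",
     "Ibuprofen is generally fine with your current medication, but ..."),
]
lines = [to_training_record(msg, reply) for msg, reply in pairs]
```

Validation would then compare the fine-tuned model's drafts against held-out clinician responses before any EHR integration.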
Challenges and Opportunities: Despite the potential benefits, there are challenges, including patient and clinician readiness to accept LLM-assisted messaging and the need for human-centered design methods to ensure solutions benefit both patients and clinicians. Regulatory systems must also evolve to enable safe and beneficial innovation.
Responsibility and Oversight: Clinicians remain responsible for meeting clinical standards and must review, verify, and edit LLM-generated messages. The article suggests that while LLMs can make messaging more efficient and improve quality, they are imperfect and require careful oversight.