🧪 Study Purpose
This study evaluated how well ChatGPT-4 responds to both clinical and patient questions about Barrett's Esophagus (BE). The researchers assessed its accuracy, empathy, readability, and overall usefulness as a patient education tool.
🧭 Methods
- 29 prompts were used:
• 14 clinical vignettes (realistic patient scenarios)
• 15 FAQ-style patient questions
- ChatGPT-4 responses were compared to physician-generated responses.
- A panel of Barrett’s patients evaluated which answers they found more useful, empathetic, and easier to understand.
📊 Key Findings
| Dimension | ChatGPT-4 | Notes |
| --- | --- | --- |
| ✅ Accuracy | 79.3% of responses rated "completely" or "mostly" accurate | Acceptable, but not flawless. |
| 📚 Completeness | High completeness score (>4/5) | Responses were comprehensive. |
| ❤️ Empathy | Moderate (mean ~3/5) | Slightly lower than physicians' responses. |
| 🧠 Clarity/Understandability | Often clearer than physicians' responses | Patients found ChatGPT easier to follow. |
| 🗳 Patient Preference | 84.4% preferred ChatGPT's responses | Patients liked the tone, format, and ease of understanding. |
💬 Conclusion
Compared with physician answers, ChatGPT-4 delivered accurate, comprehensible, and generally preferred responses about Barrett's Esophagus, especially for non-complex questions. While not meant to replace doctors, it shows strong potential as an educational and support tool in clinical settings.