A groundbreaking study from the University of Maine has revealed how artificial intelligence stacks up against human clinicians when handling complex and emotionally sensitive medical questions. While AI impressed in delivering accurate, fact-based answers, it faltered when empathy, nuance, and context were needed most.
Published in the Journal of Health Organization and Management, the study analyzed over 7,000 real-world medical queries from the U.S. and Australia. AI models performed well on procedural and diagnostic questions but struggled with emotionally charged "why" and "how" questions, and they gave inconsistent answers when the same queries were repeated later.