A new study suggests that ChatGPT can produce medical responses that appear empathetic, yet still fall short of the emotional depth and situational awareness shown by physicians. Researchers compared AI-generated replies with doctors' responses on an online medical forum and found that while ChatGPT often used supportive language and an appropriate tone, it struggled with subtle emotional cues such as distress, anger, or uncertainty. The authors caution that AI may help handle routine patient questions and reduce clinician workload, but it cannot replace the trust, judgment, and relational nuance that underpin the doctor–patient relationship, a gap that raises important ethical questions as AI tools become more embedded in healthcare.
AI Empathy: ChatGPT vs. Physicians in Study (Bioengineer)
