The Medical Futurist analyzes the first studies on ChatGPT’s application in healthcare, covering its use in medical education, clinical decision-making, and scientific research. Key findings include:
- Clinical Uses: ChatGPT shows potential in clinical data management, clinical trial recruitment, and clinical decision-making. Studies report moderate accuracy on tasks such as radiologic decision-making, but flag problems including data bias and fabricated, nonsensical outputs (“hallucinations”).
- Medical Education Aptitude: ChatGPT could be a useful tool for medical students and educators, with performance on medical exams comparable to that of a third-year medical student. It can assist in drafting course content and assessments, but expert oversight is advised because it occasionally produces incorrect information.
- Scientific Research and Writing: ChatGPT can aid in scientific writing, from summarizing data to drafting papers. However, human authors still outperform it in completeness, scientific content, and credibility. Journals are updating their policies to require disclosure of AI-assisted tools and manual verification of AI-generated content.
The article emphasizes that these studies were conducted on the GPT-3.5 model; with the launch of GPT-4, which is reportedly 40% more likely to produce factual responses, further improvements and new considerations are expected.