AI’s so-called black box refers to the fact that much of the underlying technology behind machine learning-enhanced algorithms rests on probability and statistics that come with no human-readable explanation. Often that’s because the advanced math or data science behind the algorithms is too complex for the average user to understand without additional training. Several stakeholders in digital health maintain, however, that this lack of understanding isn’t that important. They argue that as long as an algorithm generates actionable insights, most clinicians don’t really care what’s “under the hood.” Is that reasoning sound?
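To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the toy dataset and the clinical-sounding feature names are hypothetical, for illustration only. A logistic regression exposes a named coefficient per feature that a reader can inspect, while a gradient-boosted ensemble of hundreds of trees returns a prediction with no comparable human-readable rationale.

```python
# Minimal sketch (assumptions: scikit-learn installed; data and feature
# names are hypothetical stand-ins for a clinical dataset).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

# Toy binary-outcome dataset with 4 numeric features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "bmi", "lab_value_a", "lab_value_b"]  # hypothetical

# An interpretable model: each coefficient maps to a named feature, so a
# clinician can see which inputs push the prediction up or down.
interpretable = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# A "black box" model: hundreds of stacked decision trees. It may predict
# well, but it emits a label with no single human-readable explanation.
black_box = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
print("black-box prediction:", black_box.predict(X[:1]))
```

The trade-off the sketch illustrates is the one at issue above: the second model’s output may be just as actionable, but only the first lets a user trace why the prediction came out the way it did.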
Trending
- GLP-1s linked to better IBD outcomes, hinting at role as ‘valuable adjunctive therapies’ (Healio)
- The cyber siege of private practices: Are you at risk? (Medical Economics)
- Is The Galleri Cancer Test Worth It? A Doctor Explains (Forbes)
- With a little help from my friends: What gastrointestinal pathologists need to know from gastroenterologists (GI & Hepatology News)
- The GI innovations transforming outcomes (Becker’s GI & Endoscopy)
- The Smart Toilet: Turning Waste Into Health Insights – Dr. Sonia Grego, Ph.D. (Progress, Potential, and Possibilities | YouTube)
- Peripheral gaze guidance improves adenoma detection rate (GI & Hepatology News)
- Artificial Intelligence for Gastroenterology Practice: A Modified Delphi Consensus (American Journal of Gastroenterology)
