Robots aren’t taking doctors’ jobs and it’s okay to be worried they might

3 minute read

Man, clinicians are really on a roll with their curmudgeonly, take-me-back-to-before-computers editorials.

The JAMA viewpoint “Unintended Consequences of Machine Learning in Medicine” and its associated response on the Cross Invalidation blog are really important reads. Both, however, leave me fairly unsatisfied with how little effort each makes to acknowledge the validity of the other’s viewpoint.

The JAMA article is impressively negative and static. Nearly every paragraph in the piece could be rewritten to present a possible consequence of machine learning decision support systems (as it currently does), recommend research paths forward to mitigate that consequence, and highlight ongoing work in the field that either directly attempts to address it or could be adapted for medicine-specific applications. The article also fails to discuss the “support” part of “machine learning decision support systems”: I don’t think anyone anywhere is saying that machine learning should replace doctors, but many people do think that computers could help doctors be better at their (very difficult) jobs.

The response blog post does a slightly better job of recognizing what’s going on on the other side of the aisle, but it is also fairly static and defensive. If I were already a machine learning skeptic (because my thoughts on ML were driven mostly by fear and/or a lack of understanding of this complicated-sounding thing that seems to be taking over the world), nothing in this response would convince me to open my mind to an alternative viewpoint, mostly because nothing in it makes the cause of my skepticism (fear, helplessness) feel heard.

For example, the rebuttal of the “uncertainty in medicine” point is really unempathetic. The JAMA author does have an important point: training labels in medicine are often very squishy, because even doctors disagree on what the correct diagnosis is, and many times a diagnosis can’t really be pinpointed at all [see: all seasons of House]. But rather than recognizing the validity of the concern, reassuring readers that machine learning researchers are actively working on ways to handle noisy training data, musing about how those techniques could carry over to medicine, and discussing the unique challenges we’ll still have to figure out there, the blog post just dismisses the concern outright. Not great for cross-field communication here…
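(As a concrete aside, and purely my own illustration rather than anything from either piece: one simple noise-robustness trick from the ML literature is label smoothing, which softens hard one-hot labels so a model isn’t forced to be fully confident in diagnoses that clinicians themselves might dispute. A minimal sketch, assuming NumPy and a toy three-class diagnosis problem:)

```python
import numpy as np

# Label smoothing: blend a hard one-hot label with a uniform
# distribution over classes, so the training target admits some
# uncertainty instead of asserting 100% confidence in the annotation.
def smooth_labels(one_hot: np.ndarray, epsilon: float = 0.1) -> np.ndarray:
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / n_classes

# Hypothetical three-way diagnosis, annotated (perhaps shakily) as class 0:
hard_label = np.array([1.0, 0.0, 0.0])
print(smooth_labels(hard_label))  # -> [0.9333... 0.0333... 0.0333...]
```

Tricks like this don’t make squishy labels go away, but they’re exactly the kind of ongoing work a more empathetic rebuttal could have pointed to.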

Anyway, it’s really frustrating to see clinician-focused journals continue to put out articles with no forward-looking vision; these will only serve to drive wedges between scientific researchers and clinicians. I’m glad the response to this article recognizes the need for these discussions, though I wish it had approached the rebuttal more empathetically. Scientists need to do a better job of recognizing (and articulating that recognition) that humans and biology are messy affairs, and that progress will take a lot of hard work from many different types of players. Technology won’t solve everything, and we need to stop sending non-technologists the message that we think it will.

I’ve worked closely with a few clinicians during my PhD, and it’s amazing what a completely different world medicine is from academic research. I wish we were all better at recognizing that miscommunications will happen because we literally speak different languages, that there is so much we don’t know about the other side, and that at the end of the day, we’re all actually on the same side.