The use of artificial intelligence-based outcome prediction models (OPMs) in medicine is on the rise, but a new paper published in Patterns warns that their widespread use could lead to unintended patient harm.
Specifically, it suggests that OPMs – statistical models that predict a health outcome from a patient's characteristics, and which may be used to guide decisions in challenging treatment cases – can be vulnerable to "harmful self-fulfilling prophecies" even when they are very effective at predicting outcomes.
A prophecy becomes self-fulfilling when the prediction itself changes care: if a model flags a patient as likely to do poorly and treatment is scaled back in response, the poor outcome becomes more likely, and the model appears accurate even as it causes harm. Likewise, if models are trained on data that reflect existing disparities in treatment or demographics, the AI could perpetuate these inequalities, leading to poorer patient outcomes. To guard against this, the authors argue, an element of "human reasoning" must be incorporated into the modelling process.
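The feedback loop is easy to reproduce in a toy simulation. The sketch below is our illustration, not code from the paper: it trains a logistic-regression OPM on historical patients who were all treated, then uses the model's risk estimates to withhold treatment from the highest-risk patients. The outcome function, the 0.5 risk threshold, and the sample sizes are all assumptions chosen for illustration.

    # A minimal sketch of a harmful self-fulfilling prophecy. Our
    # illustration, not code from the paper: the outcome function,
    # the 0.5 risk threshold, and sample sizes are all assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def outcomes(severity, treated):
        # Assumed ground truth: treatment lowers the risk driven by severity.
        p_death = 1 / (1 + np.exp(-(4 * severity - 2 - 1.5 * treated)))
        return rng.random(len(severity)) < p_death

    # Historical data: under the old policy, every patient was treated.
    sev_hist = rng.random(10_000)
    died_hist = outcomes(sev_hist, treated=np.ones(10_000))

    # Train an OPM that is accurate on the historical data:
    # predicted risk of death given severity.
    opm = LogisticRegression().fit(sev_hist.reshape(-1, 1), died_hist)

    # Deployment: withhold treatment when predicted risk exceeds 0.5,
    # e.g. because intensive care is judged futile for those patients.
    sev_new = rng.random(10_000)
    risk = opm.predict_proba(sev_new.reshape(-1, 1))[:, 1]
    treated_new = risk <= 0.5
    died_new = outcomes(sev_new, treated_new)

    print(f"mortality under treat-all policy:    {died_hist.mean():.3f}")
    print(f"mortality under model-guided policy: {died_new.mean():.3f}")
    # The withheld patients now die at an even higher rate than the
    # model predicted, so its high-risk calls are confirmed: the
    # prophecy fulfils itself while overall mortality goes up.
    print(f"death rate among withheld patients:  {died_new[~treated_new].mean():.3f}")

In runs like this, overall mortality rises by several percentage points under the model-guided policy, while the patients flagged as high risk die even more often than predicted, so the model's grim forecasts appear better confirmed than ever: exactly the trap the paper describes.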
The authors of the paper suggest that the current approach to prediction model development, deployment, and monitoring "needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcomes."
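What that shift could look like, in the simplest case, is comparing patient outcomes rather than accuracy metrics. The sketch below is a hypothetical randomized evaluation, not a protocol from the paper, and the mortality rates in it (27% under model-guided care versus 24% under usual care) are assumed purely for illustration.

    # A minimal sketch of outcome-focused evaluation: a hypothetical
    # randomized comparison, not the authors' protocol. The mortality
    # rates below are assumed purely for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def trial_arm(n, p_death):
        # Simulate n patients, each dying with probability p_death.
        return rng.random(n) < p_death

    deaths_guided = trial_arm(5_000, p_death=0.27)  # model-guided care
    deaths_usual = trial_arm(5_000, p_death=0.24)   # usual care

    # The question that matters for patients is not the model's
    # predictive performance but whether deploying it changed outcomes.
    result = stats.ttest_ind(deaths_guided.astype(float),
                             deaths_usual.astype(float))
    print(f"mortality, model-guided care: {deaths_guided.mean():.3f}")
    print(f"mortality, usual care:        {deaths_usual.mean():.3f}")
    print(f"p-value for the difference:   {result.pvalue:.4f}")

A design like this measures what deployment actually did to patients, which a high accuracy score on retrospective data cannot reveal.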