The medical algorithmic audit


Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. 

The responsibility for, and benefits of, investigating and improving the safety of artificial intelligence systems are shared among developers, health-care decision makers, and users, and this work should form part of a broader oversight framework of algorithmovigilance to ensure the continued efficacy and safety of artificial intelligence systems.
