As AI gains a stronger presence in biomedical research and the practice of medicine, healthcare institutions will have to assess its impact on patient and professional populations.
While no single ethical framework, guidance document, or implementation strategy will fully mitigate the risks accompanying healthcare AI, bridging the gap between principles and practice will help support the safe and responsible implementation of AI.
In this article, we highlight several potential harms that may arise in the adoption of healthcare AI, such as model inaccuracies leading to adverse patient outcomes, overreliance on AI technologies by clinicians resulting in loss of clinical expertise, and erosion of the human element of medicine in clinician-patient interactions.
Additionally, we describe several strategies that academic medical centers (AMCs) might adopt now to address concerns about the safety and ethical use of healthcare AI, e.g., developing a course on AI ethics and safety for healthcare professionals, establishing a patient and community advisory board, and creating a governance structure charged with evaluating institutional risks associated with healthcare AI.
Our analysis aims to support AMCs as they seek to balance AI innovation with proactive oversight.