Responsible Artificial Intelligence governance in oncology

This paper offers one of the first operational, real-world frameworks for Responsible AI (RAI) governance in oncology, detailing the one-year experience of Memorial Sloan Kettering Cancer Center (MSK) in managing AI models across clinical, operational, and research programs.

We developed an AI lifecycle management framework called iLEAP (Legal, Ethics, Adoption, Performance), incorporating multi-stakeholder governance, model risk assessments, and clinician trust tools.

Using these tools in the implementation phase of the study, we registered, evaluated, and monitored 26 AI models (including large language models) and 2 ambient AI pilots, and retrospectively reviewed 33 live nomograms.

We introduced novel tools like the Model Information Sheet (MIS), risk assessment matrices, and a dynamic AI Model Registry to oversee AI projects' progression from research to production.
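To make the registry concept concrete, the sketch below pairs a minimal registry entry with a toy risk matrix. This is purely illustrative: the field names, `RiskTier` levels, and the `assess_risk` rule are assumptions for exposition, not the actual MIS questions, risk matrices, or registry schema described in the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class RegistryEntry:
    """One model's record in a hypothetical AI model registry."""
    model_name: str
    owner: str
    lifecycle_stage: str          # e.g. "research", "pilot", "production"
    intended_use: str
    risk_tier: RiskTier
    monitoring_metrics: list = field(default_factory=list)

def assess_risk(patient_facing: bool, autonomous: bool) -> RiskTier:
    """Toy risk matrix: risk rises with patient impact and autonomy."""
    if patient_facing and autonomous:
        return RiskTier.HIGH
    if patient_facing or autonomous:
        return RiskTier.MODERATE
    return RiskTier.LOW

# Hypothetical entry: a patient-facing model kept under clinician review.
entry = RegistryEntry(
    model_name="example-triage-model",
    owner="Clinical AI Team",
    lifecycle_stage="pilot",
    intended_use="Flag charts for clinician review",
    risk_tier=assess_risk(patient_facing=True, autonomous=False),
    monitoring_metrics=["alert precision", "override rate"],
)
print(entry.risk_tier.value)  # moderate
```

A dynamic registry would update `lifecycle_stage` and re-run the risk assessment as a model moves from research toward production.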

Our Model Information Sheet includes screening questions for FDA Software as a Medical Device (SaMD) compliance as a key consideration.

Our RAI approach was also designed more broadly, to cover the many types of AI models that fall outside the SaMD designation.

Two AI model case studies illustrate lessons learned and our "Express Pass" methodology for select models.
