The integration of AI into medicine has advanced markedly. Large language models (LLMs) in particular show promise for improving clinical decision-making, facilitating patient triage, and providing therapeutic recommendations. However, like other healthcare technologies, LLMs warrant scrutiny, safety monitoring, and validation.
In the present study, researchers at Harvard Medical School and the Mass General Brigham AI Governance Committee developed comprehensive guidelines for integrating AI into healthcare effectively and responsibly. To identify critical themes, the team performed an extensive peer-reviewed and gray literature search on topics such as AI governance and implementation.
The researchers identified several components critical for the responsible implementation of AI in healthcare. Diverse and demographically representative training datasets should be mandated to reduce bias. Outcomes should be evaluated through an equity lens, and regular equity audits should include model reengineering where needed to ensure fair benefits across patient populations.
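To make the idea of an equity audit concrete, the sketch below compares a model's true-positive rate across demographic groups and flags the model for review when the gap exceeds a threshold. This is a minimal illustration, not the study's methodology: the group labels, sample records, and the 0.05 threshold are all hypothetical.

```python
# Hypothetical equity audit: compare per-group true-positive rates (TPR)
# and flag the model when the largest gap exceeds a review threshold.
from collections import defaultdict

def subgroup_tpr(records):
    """Compute the true-positive rate for each demographic group.

    records: iterable of (group, y_true, y_pred) tuples with binary labels.
    """
    hits = defaultdict(int)       # correctly predicted positives per group
    positives = defaultdict(int)  # actual positives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            if y_pred == 1:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives}

def equity_gap(tprs):
    """Largest pairwise TPR difference; a wide gap signals potential bias."""
    rates = list(tprs.values())
    return max(rates) - min(rates)

# Illustrative records only: (group, true outcome, model prediction).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 1),
]
tprs = subgroup_tpr(records)
needs_review = equity_gap(tprs) > 0.05  # hypothetical review threshold
print(tprs, needs_review)
```

In practice, an audit like this would run on held-out clinical data at regular intervals, and a flagged disparity would trigger the kind of model reengineering the guidelines describe.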
This multidisciplinary approach provided a blueprint for non-profit organizations, healthcare institutions, and government bodies aiming to implement and monitor AI responsibly. The case study highlighted challenges such as balancing ethical considerations with clinical utility and underscored the importance of ongoing collaboration with vendors to refine AI systems.