A new tool developed by researchers at Mount Sinai's Icahn School of Medicine is designed to find and reduce biases in datasets used to train machine learning models – helping boost the accuracy and equity of AI-enabled decision-making.
Researchers applied the AEquity tool to several types of health data – images, patient records, public health surveys and more – using different machine learning models. They found that the tool was able to identify biases across these datasets, some of which were familiar and expected, while others were previously unknown.
The AEquity tool could help developers and health systems identify whether bias exists in their data – and then take steps to mitigate it. It can help ensure AI tools work well for everyone, not just the groups most represented in the data.
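The article does not describe how AEquity works internally, but one common way a bias audit of this kind can be framed is to compare a model's performance metric across demographic groups and flag large gaps. The sketch below is a minimal, generic illustration of that idea only – it is not AEquity's algorithm – and all column names (group, label, pred) are hypothetical.

```python
# Generic subgroup bias audit sketch (illustrative only; NOT the AEquity method).
# Compares recall (true positive rate) across demographic groups and reports the gap.
import pandas as pd

def subgroup_recall(df: pd.DataFrame, group_col: str, label_col: str, pred_col: str) -> pd.Series:
    """Recall per group: among true positives, the fraction the model predicted as positive."""
    positives = df[df[label_col] == 1]
    return positives.groupby(group_col)[pred_col].mean()

if __name__ == "__main__":
    # Hypothetical model outputs on a toy dataset with two demographic groups.
    data = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "label": [1, 1, 0, 1, 1, 0],
        "pred":  [1, 1, 0, 1, 0, 0],
    })
    recall_by_group = subgroup_recall(data, "group", "label", "pred")
    print(recall_by_group)                                   # A: 1.0, B: 0.5
    print("recall gap:", recall_by_group.max() - recall_by_group.min())
```

A gap like the one above would suggest the model underperforms for one group, prompting a closer look at how that group is represented in the training data.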