
Mount Sinai AI tool can detect bias in datasets

AEquity aims to improve training for healthcare machine-learning algorithms by assessing the accuracy and fairness of the data they're fed.
By Mike Miliard, Executive Editor

Photo: Mount Sinai

As artificial intelligence proliferates across healthcare, with AI now used for clinical and financial tasks ranging from disease diagnosis to cost and capacity predictions, there's big potential for efficiency and efficacy – as long as those algorithms are trained on accurate and representative data.

A new tool developed by researchers at Mount Sinai's Icahn School of Medicine is designed to find and reduce biases in datasets used to train machine learning models – helping boost the accuracy and equity of AI-enabled decision-making. 

WHY IT MATTERS
As explained in the newest issue of the Journal of Medical Internet Research, the new tool, known as AEquity, can help spot – and correct – bias in healthcare datasets before they're used to train AI and ML models.

Researchers applied the AEquity tool to several types of health data – images, patient records, public health surveys and more – using different machine-learning models. 

They found that the tool was able to identify biases across these datasets – some of which were familiar and expected, and some that were unknown.

The challenge is that some demographic groups may not be proportionately represented in a given dataset. Moreover, many conditions can present differently – or be overdiagnosed – across groups.

When AI and machine learning models are trained on that data, they can perpetuate and even amplify those biases, leading to inaccurate diagnoses, unintended outcomes and other care deficiencies.

Mount Sinai researchers say the AEquity tool is adaptable to a wide range of machine-learning models of varying power and complexity, and it can be applied to datasets of differing sizes – assessing not just the input data, but also the outputs, such as predicted diagnoses and risk scores.
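The study's title points to "subgroup learnability" as the underlying approach. The sketch below is not AEquity itself – just a minimal, hypothetical illustration of the general idea of a dataset-level bias audit: train a simple model on pooled data, then compare how well it performs for each demographic subgroup against that subgroup's share of the training data. All names here (audit_subgroups, the column names, the choice of logistic regression and AUROC) are illustrative assumptions, not details from the paper.

```python
# Illustrative only: a minimal subgroup-performance audit, not the AEquity algorithm.
# Assumes a tabular dataset with a binary outcome column and a demographic "group" column.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def audit_subgroups(df: pd.DataFrame, features: list[str],
                    label: str = "label", group: str = "group") -> pd.DataFrame:
    """Train one model on the pooled data, then report per-subgroup AUROC
    alongside each subgroup's share of the training set."""
    train, test = train_test_split(df, test_size=0.3,
                                   stratify=df[group], random_state=0)
    model = LogisticRegression(max_iter=1000).fit(train[features], train[label])

    rows = []
    for g, subset in test.groupby(group):
        if subset[label].nunique() < 2:
            continue  # AUROC is undefined if only one outcome class is present
        auc = roc_auc_score(subset[label],
                            model.predict_proba(subset[features])[:, 1])
        rows.append({"group": g,
                     "train_share": (train[group] == g).mean(),
                     "auroc": auc})
    return pd.DataFrame(rows).sort_values("auroc")
```

In an audit like this, a large gap in per-group performance – especially for groups with a small share of the training data – is the kind of signal a tool such as AEquity is meant to surface before the dataset is used to train a production model.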

"Our goal was to create a practical tool that could help developers and health systems identify whether bias exists in their data – and then take steps to mitigate it," said Mount Sinai researcher Dr. Faris Gulamali, in a statement. "We want to help ensure these tools work well for everyone, not just the groups most represented in the data."

The new study suggests that AEquity could be useful for AI developers, researchers and regulators, and that using it during algorithm development and in audits before deployment could help ensure that models are more fair and accurate.

The research paper, "Detecting, Characterizing, and Mitigating Implicit and Explicit Racial Biases in Health Care Datasets With Subgroup Learnability: Algorithm Development and Validation Study," was authored by Gulamali, alongside Ashwin Shreekant Sawant, Lora Liharska, Carol Horowitz, Lili Chan, Patricia Kovatch, Ira Hofer, Karandeep Singh, Lynne Richardson, Emmanuel Mensah, Alexander Charney, Dr. David L. Reich, Jianying Hu and Dr. Girish N. Nadkarni. 

It was funded by the National Center for Advancing Translational Sciences and the National Institutes of Health. 

THE LARGER TREND
Mount Sinai, of course, has been at the forefront of artificial intelligence innovation for years. Recent news has ranged from the 2024 launch of its Center for AI and Human Health to a similar effort this past spring focused on AI-enabled pediatric care.

As the New York City-based health system continues its digital transformation efforts, its projects include GPT for education, new algorithms to detect sleep disorders, closing gaps in care and combating AI hallucinations.

ON THE RECORD
"Tools like AEquity are an important step toward building more equitable AI systems, but they're only part of the solution," says Nadkarni, senior corresponding author and chief AI officer of the Mount Sinai Health System. "If we want these technologies to truly serve all patients, we need to pair technical advances with broader changes in how data is collected, interpreted, and applied in health care. The foundation matters, and it starts with the data." 

"This research reflects a vital evolution in how we think about AI in health care – not just as a decision-making tool, but as an engine that improves health across the many communities we serve," added Reich, chief clinical officer at Mount Sinai. 

"By identifying and correcting inherent bias at the dataset level, we're addressing the root of the problem before it impacts patient care," he said. "This is how we build broader community trust in AI and ensure that resulting innovations improve outcomes for all patients, not just those best represented in the data. It's a critical step in becoming a learning health system that continuously refines and adapts to improve health for all." 
 


Mike Miliard is executive editor of Healthcare IT News
Email the writer: mmiliard@himss.org
Healthcare IT News is a HIMSS publication.