Omitting race and ethnicity from colorectal cancer (CRC) recurrence risk prediction models could decrease their accuracy and fairness, particularly for minority groups, potentially leading to inappropriate care advice and contributing to existing health disparities, new research suggests.
“Our study has important implications for developing clinical algorithms that are both accurate and fair,” write first author Sara Khor, MASc, with the University of Washington, Seattle, and colleagues.
“Many groups have called for the removal of race in clinical algorithms,” Khor told Medscape Medical News. “We wanted to better understand, using CRC recurrence as a case study, what some of the implications might be if we simply remove race as a predictor in a risk prediction algorithm.”
Their findings suggest that doing so could increase racial bias in model accuracy and produce less accurate risk estimates for racial and ethnic minority groups, which in turn could result in inadequate or inappropriate surveillance and follow-up care for patients of minoritized racial and ethnic groups.
The study was published online June 15 in JAMA Network Open.
Lack of Data and Consensus
There is currently a lack of consensus on whether and how race and ethnicity should be included in clinical risk prediction models used to guide healthcare decisions, the authors note.
The inclusion of race and ethnicity in clinical risk prediction algorithms has come under increased scrutiny due to concerns over the potential for racial profiling and biased treatment. On the other hand, some argue that excluding race and ethnicity could harm all groups by reducing predictive accuracy and would especially disadvantage minority groups.
Yet, it remains unclear whether simply omitting race and ethnicity from algorithms will ultimately improve care decisions for patients of minoritized racial and ethnic groups.
Khor and colleagues investigated the performance of four risk prediction models for CRC recurrence using data from 4230 patients with CRC (53% non-Hispanic white; 22% Hispanic; 13% Black or African American; and 12% Asian, Hawaiian, or Pacific Islander).
The four models were: (1) a race-neutral model that explicitly excluded race and ethnicity as a predictor; (2) a race-sensitive model that included race and ethnicity; (3) a model with two-way interactions between clinical predictors and race and ethnicity; and (4) separate models stratified by race and ethnicity.
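For readers who want a more concrete picture of how these four specifications differ, here is a minimal sketch in Python. The synthetic data, the variable names (recurrence, age, stage, race_ethnicity), and the logistic-regression form are all illustrative assumptions, not the study's actual data or models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic dataset so the sketch runs end to end; a real analysis
# would use actual registry or EHR data with many more predictors.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(62, 10, n),
    "stage": rng.integers(1, 4, n),
    "race_ethnicity": rng.choice(
        ["NH White", "Hispanic", "Black", "Asian/PI"], n),
})
df["recurrence"] = rng.binomial(1, 0.2, n)  # binary outcome (illustrative)

# (1) Race-neutral: race and ethnicity explicitly excluded as predictors
m1 = smf.logit("recurrence ~ age + stage", data=df).fit()

# (2) Race-sensitive: race and ethnicity included as a main effect
m2 = smf.logit("recurrence ~ age + stage + race_ethnicity", data=df).fit()

# (3) Two-way interactions between clinical predictors and race/ethnicity
m3 = smf.logit("recurrence ~ (age + stage) * race_ethnicity", data=df).fit()

# (4) Separate models stratified by race and ethnicity
m4 = {
    group: smf.logit("recurrence ~ age + stage", data=sub).fit()
    for group, sub in df.groupby("race_ethnicity")
}
```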
They found that the race-neutral model performed worse among racial and ethnic minority subgroups than among non-Hispanic white patients, with poorer calibration, lower negative predictive value, and higher false-negative rates. The false-negative rate for Hispanic patients was 12% vs 3% for non-Hispanic white patients.
Conversely, including race and ethnicity as a predictor of postoperative cancer recurrence improved the model’s accuracy and increased “algorithmic fairness” in terms of calibration slope, discriminative ability, positive predictive value, and false-negative rates: the gap in false-negative rates narrowed to 9% for Hispanic patients vs 8% for non-Hispanic white patients.
Including race interaction terms or using race-stratified models did not improve model fairness, likely because of small subgroup sample sizes, the authors add.
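As a rough illustration of the kind of subgroup comparison reported above, the sketch below (continuing from the model-fitting sketch, and again purely illustrative) computes a false-negative rate per racial and ethnic group for the race-neutral and race-sensitive models. The 0.5 decision threshold is an assumption for demonstration, not the study's evaluation protocol.

```python
def false_negative_rate(y_true, y_prob, threshold=0.5):
    """Share of true recurrences the model misses at a given threshold."""
    y_pred = (y_prob >= threshold).astype(int)
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float(((y_pred == 0) & positives).sum() / positives.sum())

# Compare subgroup false-negative rates for models m1 and m2 defined above.
for model_name, model in [("race-neutral", m1), ("race-sensitive", m2)]:
    probs = model.predict(df)  # predicted recurrence probabilities
    for group, sub in df.groupby("race_ethnicity"):
        fnr = false_negative_rate(sub["recurrence"], probs.loc[sub.index])
        print(f"{model_name:14s} {group:10s} FNR = {fnr:.2f}")
```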
‘No One-Size-Fits-All Answer’
“There is no one-size-fits-all answer to whether race/ethnicity should be included, because the health disparity consequences that can result from each clinical decision are different,” Khor told Medscape Medical News.
“The downstream harms and benefits of including or excluding race will need to be carefully considered in each case,” Khor said.
“When developing a clinical risk prediction algorithm, one should consider the potential racial/ethnic biases present in clinical practice, which translate to bias in the data,” Khor added. “Care must be taken to think through the implications of such biases during the algorithm development and evaluation process in order to avoid further propagating those biases.”
The co-authors of a linked commentary say this study “highlights current challenges in measuring and addressing algorithmic bias, with implications for both patient care and health policy decision-making.”
Ankur Pandya, PhD, with Harvard T.H. Chan School of Public Health, Boston, Massachusetts, and Jinyi Zhu, PhD, with Vanderbilt University School of Medicine, Nashville, Tennessee, agree that there is no “one-size-fits-all solution” — such as always excluding race and ethnicity from risk models — to confronting algorithmic bias.
“When possible, approaches for identifying and responding to algorithmic bias should focus on the decisions made by patients and policymakers as they relate to the ultimate outcomes of interest (such as length of life, quality of life, and costs) and the distribution of these outcomes across the subgroups that define important health disparities,” Pandya and Zhu suggest.
“What is most promising,” they write, is the high level of engagement with this cause in recent years from researchers, philosophers, policymakers, physicians and other healthcare professionals, caregivers, and patients, “suggesting that algorithmic bias will not be left unchecked as access to unprecedented amounts of data and methods continues to increase moving forward.”
This research was supported by a grant from the National Cancer Institute of the National Institutes of Health. The authors and editorial writers have disclosed no relevant financial relationships.
JAMA Netw Open. 2023;6(6):e2318495, e2318501.