Dissecting racial bias in an algorithm used to manage the health of populations
eLetters
RE: tautology in AI predictor
The inattentive reader will miss a crucial acknowledgement by Obermeyer et al. (1) of a serious tautology inherent in the described analytic enterprise. It goes like this:
If you build a model to distribute future health care resources based upon billing for present consumption,
and if African Americans have less access to resources, whether through functional denial (inadequate resources in their communities) or self-denial (a lifetime of trained hopelessness about accessing such services in a meaningful way),
then future distributions directed by that model will give more to Whites, independent of need.
To those to whom much has been given, more will be given.
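The tautology above can be made concrete with a small simulation. This is a minimal sketch with hypothetical numbers, not anything from the paper: two groups are given the same distribution of true need, but one group's billed cost understates its need because of reduced access. Ranking patients by past cost, as the letter describes, then allocates the program almost entirely to the fully served group, even though ranking by true need would split it evenly.

```python
# Hypothetical illustration of the tautology: billed cost = need * access,
# so a model that ranks by cost reproduces the access disparity.
import random

random.seed(0)

def simulate(n=10_000, access_b=0.7):
    """Each patient has a true need; observed cost = need * access.
    Group A has full access; group B's access is reduced (assumed 0.7)."""
    patients = []
    for i in range(n):
        group = "A" if i % 2 == 0 else "B"
        need = random.gauss(100, 15)           # same need distribution
        access = 1.0 if group == "A" else access_b
        cost = need * access                   # the proxy label the model sees
        patients.append((group, need, cost))
    return patients

def top_decile_share(patients, key_index):
    """Fraction of group B among the top 10% ranked by the given column."""
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)
    top = ranked[: len(ranked) // 10]
    return sum(1 for p in top if p[0] == "B") / len(top)

patients = simulate()
share_by_need = top_decile_share(patients, 1)  # ranking by true need
share_by_cost = top_decile_share(patients, 2)  # ranking by billed cost

print(f"Group B share of top decile by need: {share_by_need:.2f}")
print(f"Group B share of top decile by cost: {share_by_cost:.2f}")
```

Ranking by need gives group B roughly half of the program slots; ranking by cost gives it almost none, which is the "more will be given" dynamic the letter describes.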
Obermeyer et al. correctly identify that the programs enlisting these models "primarily work to prevent acute health decompensations that lead to catastrophic health care utilization". They are not focused on long-term population health, which would more properly be addressed 15 years before the "catastrophic sentinel event" and directed against prevalent risk factors, such as hypertension, diabetes, and elevated lipids, that are poorly controlled in African American populations. The unfairness hinted at in the article is the focus on the short-term benefit of cost reduction in the thirty days after acute decompensation while the much broader health care needs of the population are ignored. Unfortunately, it is the goal that deviates from health care justice. It is lifelong missing management of health care (2) that is the true problem, not a single poorly focused AI-directed predictor.
Artificial intelligence is not the problem. The problem is the master who has directed the AI to distribute resources using the wrong metric for the wrong goal.
Sincerely,
Eran Bellin, M.D.
Professor of Epidemiology and Population Health and Medicine
Albert Einstein College of Medicine Bronx, N.Y.
Vice President Clinical IT Research and Development
Montefiore Information Technology
1. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453.
2. Bellin E. Missing Management: Health-Care Analytic Discovery in a Learning Health System. South Carolina: Kindle Direct Publishing; 2019.
RE: Dissecting racial bias in an algorithm used to manage the health of populations
Recent news that an algorithm guiding care for 200 million patients was racially biased is highly concerning. This cautionary tale offers valuable lessons, but a hasty rejection of artificial intelligence (AI) would be wrong: AI has immense potential to improve health outcomes across society.
AI is already transforming health for the better. British researchers have built systems that can accurately diagnose breast cancer and detect eye conditions faster than ever before. In hospitals, we are using AI to proactively identify sepsis cases, thereby saving lives. Reducing readmissions and creating more accurate staffing forecasts also saves money, which is essential for cash-strapped health services.
This story is a reminder for companies to follow AI best practices. Set the right goals, because the AI will follow your lead. Train the AI on your own data and watch for bias. Avoid opaque third-party "black-box" tools. Make sure decisions are easy to explain and justify. In sensitive areas, keep people in the loop. Despite the recent news, AI is ethical and trustworthy when used properly.
James Lawson, AI Evangelist, DataRobot