Impact of Artificial Intelligence in Contemporary Medicine
When people go to a medical facility for help, they expect their doctor to make sound decisions that lead to the best possible health outcome.
Doctors and other healthcare providers increasingly use healthcare algorithms (computations, often based on statistical or mathematical models, that help practitioners make diagnoses and treatment decisions) and artificial intelligence (AI) to diagnose illnesses, suggest treatments, predict health risks, and more. In many cases, these tools work well. Sometimes, however, healthcare algorithms and AI can make things worse for people from certain racial or ethnic groups. This is because the algorithms and AI are built on data from one segment of the population that may not represent others well.
Awareness of Bias
Bias in healthcare algorithms and AI can contribute to existing health disparities among populations defined by race, ethnicity, gender, age, or other demographic factors.
One source of healthcare algorithm and AI bias is a lack of diversity in the data used to train computer programs. Using data from patients with diverse demographic characteristics when creating AI programs helps ensure the algorithm works well for everyone.
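One practical step is simply to measure how each demographic group is represented in a training dataset before a model is built. The sketch below is a minimal illustration using invented records (the `group` and `label` fields are hypothetical, not real patient data); real audits would compare group shares against actual population benchmarks.

```python
from collections import Counter

# Hypothetical training records for a diagnostic model.
# The "group" field is illustrative only, not real patient data.
training_records = [
    {"group": "White", "label": 1},
    {"group": "White", "label": 0},
    {"group": "White", "label": 1},
    {"group": "White", "label": 0},
    {"group": "Black", "label": 1},
    {"group": "Hispanic", "label": 0},
]

def representation_report(records):
    """Return the share of training examples contributed by each group."""
    counts = Counter(r["group"] for r in records)
    total = len(records)
    return {group: n / total for group, n in counts.items()}

report = representation_report(training_records)

# A group whose share falls far below its real-world population share
# is a warning sign: the trained model may generalize poorly for it.
# The 0.25 threshold here is an arbitrary placeholder for illustration.
underrepresented = [g for g, share in report.items() if share < 0.25]
print(report)
print(underrepresented)
```

In this toy dataset, two of the three groups fall below the placeholder threshold, flagging them for attention before any model training begins.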
Another way bias can enter healthcare algorithms and AI is through the assumptions made by the people who create them. For example, if developers assume that certain symptoms are more common in non-Hispanic White women than in Black/African American women, the resulting algorithms may produce unfair or inaccurate results for Black/African American women with those symptoms.
A Case Study
If a woman has had a cesarean delivery, also known as a C-section, there is a chance that a subsequent delivery can be attempted vaginally, known as vaginal birth after cesarean delivery, or VBAC. However, attempting VBAC carries known risks, such as uterine rupture and other complications.

In 2007, the VBAC algorithm was designed to help healthcare providers assess the likelihood of safely giving birth through vaginal delivery. The algorithm considers many factors, such as the woman's age, the reason for the previous C-section, and how long ago it happened. However, in 2019, a study by Vyas et al. found that the original algorithm was flawed: it predicted that Black/African American and Hispanic/Latino women were less likely than non-Hispanic White women to have a successful vaginal birth after a C-section. As a result, doctors performed more C-sections on Black/African American and Hispanic/Latino women than on White women.
After years of work by researchers, advocates, and clinicians, the algorithm was revised. The new version no longer considers race or ethnicity when predicting the risk of complications from VBAC. This means doctors can make decisions based on more accurate, impartial information that works for all women, providing more equitable care regardless of race or ethnicity. For more information about this case study, see: Challenging the Use of Race in the Vaginal Birth after Cesarean Section Calculator.
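The structural change described above can be sketched in a few lines. The numbers below are entirely invented and are NOT the actual VBAC calculator's coefficients; the point is only to show that when a race-based adjustment is removed from a model's inputs, the model can no longer lower its prediction for any group on that basis.

```python
import math

def logistic(x):
    """Standard logistic function, mapping a score to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical patient record (illustrative values only).
patient = {"age": 32, "prior_vaginal_birth": 1, "race": "Black"}

# Invented weights, NOT the real calculator's. The old version includes
# a race-based adjustment; the revised version simply omits it.
OLD_WEIGHTS = {"intercept": 1.0, "age": -0.02,
               "prior_vaginal_birth": 0.8, "race_penalty": -0.5}
NEW_WEIGHTS = {"intercept": 1.0, "age": -0.02,
               "prior_vaginal_birth": 0.8}

def predict(weights, patient):
    score = weights["intercept"]
    score += weights["age"] * patient["age"]
    score += weights["prior_vaginal_birth"] * patient["prior_vaginal_birth"]
    # The biased adjustment: only applied when the weight exists.
    if "race_penalty" in weights and patient["race"] != "White":
        score += weights["race_penalty"]
    return logistic(score)

p_old = predict(OLD_WEIGHTS, patient)  # race-adjusted prediction
p_new = predict(NEW_WEIGHTS, patient)  # race-free prediction
```

For the same hypothetical patient, the race-free version produces a higher predicted probability of success, because no penalty is attached to her demographic group.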
The Treatment Plan for Bias
There are best practices that healthcare data scientists and developers can incorporate to address the challenges of using algorithms and AI. These include:
- Have a more diverse body of people review and supervise the algorithms and AI.
- Use techniques, such as synthetic data, to manage situations where not enough real data is available.
- Work with diverse communities to ensure the algorithms are helpful and don't cause harm.
- Introduce the algorithms gradually and carefully instead of all at once.
- Create ways for people to provide feedback and improve the algorithms over time.
- Involve diverse members of your workforce in developing the algorithms and validating patient data from various racial and ethnic backgrounds.
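One of the practices above, using synthetic data when real data is scarce, can be sketched naively as oversampling an underrepresented group with small random perturbations. This is a deliberately simple illustration with made-up records; real projects would use validated methods (for example, SMOTE-style resampling or generative models) together with clinical review of every synthetic record.

```python
import random

random.seed(0)  # reproducible output for this sketch

def synthesize(records, n_new, noise=0.05):
    """Create n_new synthetic records by jittering numeric fields of
    randomly chosen existing records. Outcome labels are left untouched."""
    synthetic = []
    for _ in range(n_new):
        base = random.choice(records)
        new = dict(base)
        for key, value in base.items():
            if isinstance(value, (int, float)) and key != "label":
                # Perturb each numeric feature by up to +/- noise.
                new[key] = value * (1 + random.uniform(-noise, noise))
        new["synthetic"] = True  # mark generated records for transparency
        synthetic.append(new)
    return synthetic

# Hypothetical records from an underrepresented group.
minority_records = [{"age": 45, "bp": 130, "label": 1},
                    {"age": 52, "bp": 142, "label": 0}]

augmented = minority_records + synthesize(minority_records, 4)
```

Marking generated records with a `synthetic` flag keeps them distinguishable from real data downstream, which matters for the feedback and validation practices listed above.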
The Office of Minority Health (OMH) is focused on helping to reduce differences in health outcomes, known as health disparities, for racial and ethnic minority populations and American Indian and Alaska Native communities. By encouraging equity in the lifecycle of algorithms and AI, OMH and other federal agencies aim to lower the risk of bias and improve healthcare outcomes for everyone.
References
The Center for Open Data Enterprise (CODE). (2019). Sharing And Utilizing Health Data for A.I. Applications: Roundtable Report. U.S. Department of Health and Human Services. https://www.hhs.gov/sites/default/files/sharing-and-utilizing-health-data-for-ai-applications.pdf
U.S. Government Accountability Office & The National Academy of Medicine. (2020). Artificial Intelligence in Health Care Benefits and Challenges of Technologies to Augment Patient Care. U.S. Government Accountability Office, Science, Technology Assessment, and Analytics. https://www.gao.gov/assets/gao-21-7sp.pdf
United States Department of Health and Human Services (HHS) (2022). Artificial Intelligence (AI) at HHS. Retrieved from: https://www.hhs.gov/about/agencies/asa/ocio/ai/index.html
Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Free PMC article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/
Bohr, A., & Memarzadeh, K. (2020). The rise of artificial intelligence in healthcare applications. Free PMC article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7325854/
Norori, N., et al. (2021). Addressing bias in big data and AI for health care: A call for open science. Free PMC article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8515002/
National Institute for Health Care Management (NIHCM) Foundation (2021). Racial Bias in Health Care Artificial Intelligence. Free article: https://nihcm.org/publications/artificial-intelligences-racial-bias-in-health-care
Jackson, M. C. (2021). Artificial Intelligence & Algorithmic Bias: The Issues with Technology Reflecting History & Humans. Journal of Business, 19. Free article: https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?article=1335&context=jbtl
Harris, L. A. (2021). Artificial Intelligence: Background, Selected Issues, and Policy Considerations. Congressional Research Service. https://crsreports.congress.gov/product/pdf/R/R46795
Huang, J., Galal, G., Etemadi, M., & Vaidyanathan, M. (2022). Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review. JMIR Medical Informatics, 10(5), e36388. Free PMC article: http://www.ncbi.nlm.nih.gov/pmc/articles/pmc9198828/
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. U.S. Department of Commerce, National Institute of Standards and Technology. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
Bernstam, E. V., Shireman, P. K., Meric-Bernstam, F., N. Zozus, M., Jiang, X., et al. (2022). Artificial Intelligence in Clinical and Translational Science: Successes, Challenges, and Opportunities. Clinical and Translational Science, 15(2), 309–321. Free PMC article: http://www.ncbi.nlm.nih.gov/pmc/articles/pmc8841416/
Marcus, J. L., Sewell, W. C., Balzer, L. B., & Krakower, D. S. (2020). Artificial Intelligence and Machine Learning for HIV Prevention: Emerging Approaches to Ending the Epidemic. Current HIV/AIDS Reports, 17(3), 171–179. Free PMC article: http://www.ncbi.nlm.nih.gov/pmc/articles/pmc7260108/
Solomonides, A. E., Koski, E., Atabaki, S. M., Weinberg, S., Mcgreevey, J. D., et al. (2022). Defining AMIA’s Artificial Intelligence Principles. Journal of the American Medical Informatics Association (JAMIA), 29(4), 585–591.
Lee, E. W. J., & Viswanath, K. (2020). Big Data in Context: Addressing the Twin Perils of Data Absenteeism and Chauvinism in the Context of Health Disparities Research. Journal of Medical Internet Research, 22(1), e16377. Free PMC article: http://www.ncbi.nlm.nih.gov/pmc/articles/pmc6996749/
Lin, S. (2022). A Clinician’s Guide to Artificial Intelligence (AI): Why and How Primary Care Should Lead the Health Care AI Revolution. Journal of the American Board of Family Medicine, 35(1), 175. Free article: https://doi.org/10.3122/jabfm.2022.01.210226
Nadkarni, P. M., Ohno-Machado, L., & Chapman, W. W. (2011). Natural Language Processing: An Introduction. Journal of the American Medical Informatics Association (JAMIA), 18(5), 544–551. Free PMC article: http://www.ncbi.nlm.nih.gov/pmc/articles/pmc3168328/
Vyas, D. A., Jones, D. S., Meadows, A. R., et al. (2019). Challenging the Use of Race in the Vaginal Birth after Cesarean Section Calculator. Free article: https://pubmed.ncbi.nlm.nih.gov/31072754/