Abstract

The rapid integration of artificial intelligence (AI) into healthcare offers promising advancements but also presents significant challenges, particularly concerning bias. This paper critically examines how AI systems, especially those utilising Body Mass Index (BMI) as a health metric, can perpetuate and exacerbate existing racial and gender biases. By analysing the intersection of these biases within AI-driven healthcare applications, this study highlights the risk of worsening health disparities among marginalised groups. Additionally, the paper explores the legal, ethical, and economic feasibility of implementing AI-based breast cancer screening services. It concludes that for AI to be a force for good in healthcare, it must be developed and deployed with a commitment to inclusivity, transparency, and accountability. This research underscores the need for a sociotechnical approach to AI development, ensuring that these technologies promote rather than undermine social justice. Future directions include exploring more personalised debiasing techniques that account for intersectionality and developing frameworks for inclusive AI training datasets. Further research should also focus on legal and regulatory structures to ensure equitable AI deployment in diverse healthcare settings.

Key Words: Artificial Intelligence, Breast Cancer Screening, Bias, Machine Learning, Healthcare Ethics.

Acknowledgments: Special thanks to Merissa Hickman for her invaluable mentorship during the study.

Introduction

Artificial intelligence (AI) is increasingly recognised as a transformative force in healthcare, offering the potential for more accurate diagnostics, personalised treatment plans, and efficient resource allocation (Davenport and Kalakota, 2019). However, the application of AI in clinical settings is fraught with challenges, particularly regarding bias. Two critical areas of concern are the use of Body Mass Index (BMI) as a health metric and the broader perpetuation of racial and gender inequalities within AI systems. BMI, originally developed as a population-level measure, is widely used in healthcare despite its well-documented limitations (Carter et al., 2020). These limitations become even more problematic when integrated into AI systems that may perpetuate and even magnify existing biases.

The current discourse on AI in healthcare has largely overlooked how these biases intersect, particularly in sensitive areas such as breast cancer screening. This paper aims to fill this gap by examining the implications of bias in AI-driven healthcare and exploring the feasibility of implementing AI-based screening services. By doing so, it calls for a critical reassessment of the principles guiding AI development, advocating for a shift towards a more inclusive and equitable approach. Through a detailed analysis of the intersection between BMI and racial bias, this study contributes to the ongoing debate on the ethical deployment of AI in healthcare, offering recommendations for mitigating these biases to ensure AI technologies benefit all populations equitably.

Diversity and Bias in AI

At a time when technology is advancing rapidly, artificial intelligence plays a significant role in many people’s lives, from healthcare to criminal justice. However, the increasing reliance on AI systems has exposed serious drawbacks, one of which is that biases embedded in these technologies can deepen existing social inequities. Two significant areas where AI bias manifests are the use of body mass index (BMI) as a measure of health and the perpetuation of racial and gender inequalities. The combination of these biases highlights the need for a critical reassessment of how AI is developed and deployed if it is to be integrated successfully into healthcare.

The Problem of BMI in Healthcare

BMI, a widely used metric in healthcare, is often cited as a tool for assessing an individual’s health status based on their weight relative to their height. However, BMI has long been criticised for its simplicity and inability to account for the complexities of human health (Katella, 2023). Originally developed by Belgian mathematician Adolphe Quetelet in the 19th century, BMI was designed as a population-level metric, not an individual health measure (Pray and Riskin, 2023). Despite this, it has been repurposed as a clinical tool, leading to widespread misapplications in healthcare.

One of the most glaring issues with BMI is its inability to differentiate between muscle and fat. This flaw often leads to the misclassification of individuals with higher muscle mass as overweight or obese, while underestimating health risks in individuals with normal BMI but high body fat. Furthermore, BMI does not consider other important factors such as age, sex, or ethnicity. Research has shown that different ethnic groups have varying body compositions, with some populations being more prone to visceral fat accumulation despite having a lower BMI. For instance, studies indicate that Asian individuals may face higher health risks at lower BMIs compared to their European counterparts (Hauk, Hollingsworth and Morgan, 2011). This misalignment between BMI and actual health outcomes raises questions about the metric’s validity, especially in a diverse society.
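
To make the arithmetic concrete, the short sketch below computes BMI from weight and height and classifies the same individual under two sets of cut-points: the standard WHO thresholds and the lower public-health action points the WHO has suggested for Asian populations. The code is a minimal illustration rather than clinical guidance, and the function names and example values are our own.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2


def classify(bmi_value: float, cutoffs: dict[str, float]) -> str:
    """Return the highest category whose cut-point the BMI value reaches."""
    category = "underweight/normal"
    for label, threshold in sorted(cutoffs.items(), key=lambda kv: kv[1]):
        if bmi_value >= threshold:
            category = label
    return category


# Commonly cited cut-points (illustrative only, not clinical guidance):
# the standard WHO thresholds versus the lower action points suggested
# for Asian populations.
WHO_STANDARD = {"overweight": 25.0, "obese": 30.0}
ASIAN_SPECIFIC = {"overweight": 23.0, "obese": 27.5}

value = bmi(weight_kg=72, height_m=1.70)      # roughly 24.9
print(classify(value, WHO_STANDARD))          # underweight/normal
print(classify(value, ASIAN_SPECIFIC))        # overweight
```

The same person is therefore labelled differently depending on which thresholds are applied, which is precisely why a single universal cut-point sits uneasily with the evidence on varying body compositions.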

AI and the Entrenchment of BMI Bias

The integration of AI into healthcare has only magnified the problems associated with BMI. AI algorithms that rely on BMI as a key variable can perpetuate the same biases that have plagued the metric since its inception. For example, AI systems used to predict health outcomes or allocate medical resources often use BMI as a proxy for health status. Researchers also note that many such models are trained on limited real-world data and that reliance on synthetic data can reduce their reliability (An, Shen and Xiao, 2022). This reliance on BMI as a proxy can lead to skewed predictions that disproportionately affect individuals from certain ethnic backgrounds or those with atypical body compositions (Siddiqui et al., 2022).

Moreover, the datasets used to train these AI systems are often biased. If an AI system is trained on data that overrepresents certain populations – such as those of European descent – it may not accurately reflect the health risks faced by other groups. This can result in the underdiagnosis or overtreatment of individuals from minority populations, exacerbating existing health disparities. 

For instance, an AI model might recommend weight loss interventions for a person with a high BMI without considering that their health risks are more closely linked to other factors, such as genetics or environmental influences (Siddiqui et al., 2022). 
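
One practical, if partial, response to this problem is to audit the composition of the training data before any model is built. The sketch below, written in Python with hypothetical column names and figures, compares each group’s share of a training set against its share of the population the model is intended to serve and flags groups that fall well below parity. It is a minimal illustration of a representation check, not a complete fairness assessment.

```python
import pandas as pd


def representation_audit(df: pd.DataFrame, group_col: str,
                         population_share: dict[str, float],
                         tolerance: float = 0.5) -> pd.DataFrame:
    """Compare each group's share of the training data with its share of the
    population the model will serve, flagging groups whose data share falls
    below `tolerance` times their population share."""
    data_share = df[group_col].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_share.items():
        share = float(data_share.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_data": round(share, 3),
            "share_in_population": pop_share,
            "under_represented": share < tolerance * pop_share,
        })
    return pd.DataFrame(rows)


# Hypothetical example: a training set dominated by one group.
train = pd.DataFrame({"ethnicity": ["A"] * 850 + ["B"] * 100 + ["C"] * 50})
print(representation_audit(train, "ethnicity",
                           {"A": 0.60, "B": 0.25, "C": 0.15}))
```

Such a check cannot say how the model will behave, but it makes visible, before deployment, whose data the system has actually learned from.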

Racial and Gender Bias in AI: A Broader Perspective

The issue of bias in AI extends far beyond BMI. Dr. Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR), has extensively studied how AI systems can perpetuate racial, gender, and class inequalities (Lazaro, 2022). According to Dr. Hanna, bias in AI is not just a matter of flawed data but also of who is involved in the development of these systems (Lazaro, 2022). The tech industry is predominantly White, male, and affluent, leading to AI systems that reflect these narrow perspectives (Lazaro, 2022).

Facial recognition technology is a prime example of how AI can reinforce racial and gender biases. Research by Dr. Joy Buolamwini and Dr. Timnit Gebru, known as the “Gender Shades study”, revealed that facial recognition algorithms are significantly less accurate in identifying darker-skinned individuals, particularly women. This discrepancy is not merely a technical flaw; it is a consequence of the underrepresentation of diverse faces in training datasets and the lack of diversity among AI developers. More troubling is the fact that facial recognition is often deployed in ways that disproportionately impact communities of colour, such as in policing and surveillance. Even if the technology were improved to reduce bias, its use would remain problematic due to the contexts in which it is applied (Lazaro, 2022).

MIT researcher Joy Buolamwini experienced facial recognition bias firsthand when a system failed to identify her until she had “covered her face with a white mask” (Johnson, 2023). A recent study found that facial recognition systems perform worst for Black women between the ages of 18 and 30 (Johnson, 2023). This suggests that Black women in this age group are less likely than Caucasian women of the same age to receive accurate, AI-assisted healthcare.

The Intersection of Racial and BMI Bias

The intersection of racial and BMI bias in AI systems is particularly concerning in healthcare, where both are frequently used to inform clinical decisions. For example, Black individuals are often subjected to harmful stereotypes surrounding race and health (Tong and Artiga, 2021). When combined with the flawed use of BMI, these biases can lead to worse health outcomes for Black patients. One study showed how an AI algorithm used to allocate healthcare resources systematically disadvantaged Black patients because it relied on healthcare spending as a proxy for health needs (Obermeyer et al., 2019). Since Black patients often have lower healthcare expenditures due to systemic barriers, the algorithm concluded they required fewer resources, perpetuating a cycle of underfunding and neglect (Lazaro, 2022).
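
The mechanism described by Obermeyer and colleagues can be illustrated with synthetic data. In the hedged sketch below, two groups have identical underlying need, but one faces access barriers that suppress its healthcare spending; a model trained to predict spending therefore enrols fewer members of that group into a hypothetical care-management programme than a model trained to predict need directly. The data, variable names, and models are invented for illustration and do not reproduce the original study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                        # 0 = group A, 1 = group B
need = rng.poisson(3.0, n).astype(float)             # e.g. number of chronic conditions
access = np.where(group == 1, 0.6, 1.0)              # barriers suppress spending for B
prior_spend = need * access * 1000 + rng.normal(0, 300, n)
recorded_conditions = need + rng.normal(0, 0.5, n)   # noisy clinical record of need

X = np.column_stack([prior_spend, recorded_conditions])


def enrolment_rates(label, top_frac=0.10):
    """Train on `label`, enrol the top `top_frac` by predicted score,
    and report the share of each group that is enrolled."""
    scores = LinearRegression().fit(X, label).predict(X)
    cutoff = np.quantile(scores, 1 - top_frac)
    enrolled = scores >= cutoff
    return enrolled[group == 0].mean(), enrolled[group == 1].mean()


future_spend = need * access * 1000 + rng.normal(0, 300, n)
print("label = future spending:", enrolment_rates(future_spend))  # group B under-enrolled
print("label = health need:    ", enrolment_rates(need))          # roughly equal enrolment
```

The two models see exactly the same patients; only the choice of label differs, yet that choice alone determines which group the programme reaches.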

The field of breast cancer diagnostics is increasingly reliant on AI for early detection, risk assessment, and treatment planning. While AI offers the potential for more accurate and efficient diagnostics, it also introduces the risk of bias, particularly when the underlying data reflects existing disparities. For instance, if AI models are trained predominantly on data from Caucasian women, they may be less effective in identifying breast cancer in women of colour, resulting in delayed diagnosis and treatment (Lutton, 2024). This intersectionality of bias highlights the critical need to consider multiple dimensions of identity in the development of AI systems. A narrow focus on a single aspect, such as race or BMI, inadequately addresses the broader disparities that exist. Therefore, it is essential that AI systems are designed with a comprehensive understanding of how various forms of bias interact and exacerbate one another, particularly in the context of healthcare.
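
One way to act on this point in practice is to evaluate model performance across intersections of attributes rather than along a single axis at a time. The sketch below, using hypothetical column names and invented figures, computes a screening model’s sensitivity (true-positive rate) for every combination of ethnicity and age band; in the toy data, each single-axis view looks equal while the intersectional view reveals subgroups with markedly lower sensitivity.

```python
import pandas as pd


def sensitivity_by_subgroup(df: pd.DataFrame, axes: list[str]) -> pd.DataFrame:
    """True-positive rate for each combination of the given attributes,
    computed over confirmed cancer cases only."""
    cases = df[df["cancer"] == 1]
    grouped = cases.groupby(axes)["flagged_by_ai"]
    return grouped.agg(sensitivity="mean", n_cases="size").reset_index()


# Hypothetical evaluation table: one row per screened woman, with the model's
# output (`flagged_by_ai`) and the confirmed outcome (`cancer`).
results = pd.DataFrame({
    "ethnicity":     ["White", "White", "Black", "Black"] * 50,
    "age_band":      ["40-54", "55-70", "40-54", "55-70"] * 50,
    "cancer":        [1, 1, 1, 1] * 50,
    "flagged_by_ai": ([1, 0, 0, 1] * 25) + ([1, 1, 1, 1] * 25),
})

print(sensitivity_by_subgroup(results, ["ethnicity"]))              # 0.75 vs 0.75
print(sensitivity_by_subgroup(results, ["age_band"]))               # 0.75 vs 0.75
print(sensitivity_by_subgroup(results, ["ethnicity", "age_band"]))  # 0.5 to 1.0
```

An audit along either axis alone would report parity; only the combined view exposes the subgroups the model serves worst.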

The Limitations of Debiasing AI

Efforts to mitigate bias in AI systems have shown promise but remain limited. As Dr. Hanna points out, there is no such thing as a “bias-free dataset” (Lazaro, 2022). Even the most sophisticated debiasing techniques cannot fully eliminate the underlying prejudices embedded in the data. For example, attempts to debias language models by removing gendered or racialised language often overlook the nuances of intersectionality, such as how race and gender intersect to produce unique forms of discrimination. Moreover, debiasing efforts typically focus on binary categories, such as male versus female or Black versus White, which erases the experiences of nonbinary individuals and people of mixed race (Lazaro, 2022).

While technical debiasing methods for AI data and algorithms are important, they are insufficient for addressing the broader discriminatory impact of AI systems. For debiasing to be effective, responses to AI discrimination must be tailored to each situation (EDRi, 2021). Given these limitations, a more comprehensive approach to AI development is needed – one that explicitly acknowledges the values and assumptions guiding the process. This requires a shift away from the technocentric mindset that dominates the AI field and towards a more sociotechnical approach that considers the broader social and historical context.

For instance, addressing BMI bias in AI would involve not only improving the accuracy of health predictions but also rethinking the very use of BMI as a health metric. Similarly, tackling racial bias in AI would require reexamining the contexts in which AI systems are deployed, particularly in areas like law enforcement, where their use may be inherently harmful (Siddiqui et al., 2022). Organisations implementing AI should also consult data scientists to ensure it is integrated appropriately into their workflows (Brobeil, 2024).
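
To make the limits of technical debiasing concrete, the sketch below implements one widely cited approach, reweighing in the spirit of Kamiran and Calders, which assigns each (group, label) combination a sample weight so that, in the weighted training data, the outcome is statistically independent of group membership. The column names and figures are hypothetical.

```python
import pandas as pd


def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Kamiran & Calders-style reweighing: weight each (group, label) cell by
    expected over observed frequency, so the weighted label distribution is
    the same in every group."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    joint = df.groupby([group_col, label_col]).size() / n   # observed P(group, label)

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / joint[(g, y)]

    return df.apply(weight, axis=1)


# Hypothetical training table: the positive label is rarer for group B, so B's
# positive examples receive a weight above 1 (here 2.2); the weights can be
# passed as sample weights to most scikit-learn estimators.
train = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,
    "label": [1] * 40 + [0] * 40 + [1] * 4 + [0] * 16,
})
train["w"] = reweighing_weights(train, "group", "label")
print(train.groupby(["group", "label"])["w"].first().round(2))
```

Notably, such a scheme says nothing about whether the label or the metric behind it (BMI, for instance) should be used at all, nor about the context in which the model is deployed, which is precisely the broader sociotechnical question raised above.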

Towards a More Equitable AI

The potential for AI to be a force for good remains, but it requires a concerted effort to align AI development with principles of social justice. This means involving a more diverse range of voices in the AI development process, particularly those from marginalised communities who are most affected by these technologies. It also means holding businesses and policymakers accountable for the impact of AI systems on vulnerable populations.

Dr. Hanna’s work at DAIR provides a model for what ethical AI research can look like. By prioritising community needs, focusing on non-exploitation, and rejecting the profit-driven imperatives of big tech, DAIR represents a new paradigm for AI development – one that recognises the potential harms of AI and actively works to prevent them. For instance, DAIR’s project on identifying hate speech and disinformation in the Ethiopian Civil War demonstrates how AI can be used to protect marginalised groups rather than surveil or control them (Lazaro, 2022).

However, the responsibility for ensuring ethical AI extends beyond research institutes. Businesses, data scientists, engineers, and regulators all have roles to play in creating AI systems that are fair and equitable. Businesses must commit to ethical practices, even when they conflict with profitability, and data scientists must be protected when they raise concerns about the social impact of their work. Regulators, too, must step up to provide the oversight necessary to prevent AI from causing harm. The European Union’s AI Act and the Algorithmic Accountability Act proposed in the U.S. are steps in the right direction, but more comprehensive national and international regulations are needed.

Feasibility of Implementing AI-Based Breast Cancer Screening Services

For a screening service to function and be a valuable investment, it must be accessible, usable, affordable, and acceptable to both the provider and recipient. This raises questions about whether an AI-based breast cancer screening service could ever be fully utilised. Accessibility is a significant issue, particularly due to the age of the at-risk group and the wealth disparities that might prevent access. This section focuses on the legal complications that could arise with an AI-based screening service, whether concerns over data privacy would discourage public use, and the economic feasibility of such an investment.

I. Accountability

Trust is essential in healthcare, and patient compliance often relies on it. For patients to trust artificial intelligence, there would need to be strict regulations to protect their rights and privacy. A key issue is that technological advancement outpaces the introduction of new legislation (Rivett and Simpson, 2024). This could ultimately result in new technology being used without adequate regulation behind it. If an error occurred, questions could then arise about patient data and about accountability for any harm caused by the AI.

Within the scientific community, concerns regarding the accountability of AI are common. Although automated systems have been used in medicine for decades, there have been cases of serious failure. For example, between 1985 and 1987 the Therac-25 radiation therapy machine delivered massive overdoses of radiation in six known incidents, several of them fatal, because of faults in its software (Gluyas & Day, 2018). This case is still debated, with questions over liability remaining prominent. It raises the question of whether an AI error constitutes a breach in the duty of care, and whether the doctor supervising the case or the AI developers should be held accountable. Current legislation states that liability depends on the error and the system (Gluyas & Day, 2018). For example, if the AI is utilised incorrectly, then the user or owner is at fault; but if the error occurred while the AI was still learning, then responsibility lies with the developer or data programmer (Gluyas & Day, 2018).

However, a key reason AI is so useful in medicine is that the system continuously learns from past cases. It is therefore constantly in a state of development, and the question of liability is reopened each time its behaviour changes. Debate on this issue is ongoing: some argue that liability should remain with humans, since the original programming determines the machine’s behaviour, yet AI can often learn and change its own behaviour in ways the developer may not be able to control or predict. All of this uncertainty could result in patient mistrust and a lack of compliance with the software. Further ethical issues could also arise if patients were not fully aware of the role of AI in their treatment. If uptake is insufficient, resources could be wasted, and AI implementation may therefore not be a suitable investment.

II. Physician Concerns

According to a study performed in Sweden (Högberg, Larsson and Lång, 2023), one in five radiologists is hesitant about the use of AI. Many expressed concern over false positives and the possibility of increased workload if AI readings had to be reviewed to prevent them. The study also found that while most radiologists felt positive about the integrated use of AI, they had some concerns regarding liability, privacy, and upholding the strict medical ethical standards outlined in the four pillars (Högberg et al., 2023). A further problem is that if some doctors are unwilling to use AI, inequality can arise. Areas that do not use AI may attract doctors who are less comfortable with technology, perhaps from an older generation, and this could result in differences in care and a potential bias against certain patients. Because the principle of justice requires the equitable distribution of care and resources, such uneven adoption would be ethically problematic.

In the same study, 51.1% of radiologists were concerned that the AI might be trained using datasets that were not relevant to the local population, raising the possibility of bias (Högberg et al., 2023). This further shows how the feasibility of an AI screening service is limited by issues such as bias and access. The most difficult issue the radiologists raised was accountability, with some concerned that any shift in accountability away from radiologists could result in complacency. However, it is important to mention that 41.6% of radiologists surveyed believed that the use of AI would have no impact on breast cancer screening and the role of a radiologist, calling into question whether the impact is worth the financial investment (Högberg et al., 2023). We believe that for the benefits to outweigh the cost, there would need to be high levels of engagement, which may not be possible due to limits on access.

III. Patient Protection and Privacy

Carter and colleagues suggest that all further developments in the use of AI in screening must be slow and carefully considered to reduce their social, ethical, and legal impacts (Carter et al., 2020). Governments, companies, and service providers have been engaging with AI for many years, and some legislation is already in place; however, further research and safeguards are needed to protect patients and to ensure they are comfortable using an AI-based service.

AI systems require large amounts of past data to detect cancers from scans, which raises issues of data privacy and confidentiality. The right to privacy is central to medicine, and the autonomy pillar holds that all patients with capacity have the right to decide who may access their data. In Italy, the health data of the entire population was released to IBM’s Watson Health, a major commercial AI venture, without individual consent (Carter et al., 2020). Many questioned whether this was ethical and respected patients’ rights. As long as such concerns over data privacy persist, patients may be unwilling to use an AI service, reducing its feasibility from an economic perspective.

Additionally, when large amounts of data are shared, the potential for leakage is high: a major breach could expose private medical data to many people, a prospect that alarms many patients. Furthermore, if algorithms are trained only on the data of those who agreed to share it, people who refused access may be disadvantaged because they are not represented in the resulting models. This group could include those least familiar with AI and technology, linking back to the issue of wealth and age disparities.

Access-Related Concerns in AI-Based Healthcare

I. Age-Related Concerns

Screening services must be accessible and understandable to their users. The most at-risk age group for breast cancer is women over 40, with only 9% of breast cancers appearing in people under the age of 45 (OncoLink, 2024). The age profile of this group points to a potential issue around patients’ understanding of how AI is used in their care. A study found that 27% of adults over the age of 50 had “only a little” knowledge of AI, with 9% stating that they had heard or read “nothing at all” about AI (NORC at the University of Chicago, 2023). It is therefore reasonable to assume that a large proportion of the at-risk group for breast cancer has little to no knowledge of AI, and their trust in it could be limited. This reinforces the ethical concerns with the use of AI in breast cancer screening: if patients feel uninformed about the process, any consent given is not fully informed, directly opposing the ethical pillar of autonomy. In the same study, 49% of the adults over 50 surveyed stated that they were ‘very uncomfortable’ with AI diagnosing medical issues (NORC at the University of Chicago, 2023). This evidence shows a lack of trust among the intended users, and it is easy to see how such discomfort could translate into low usage of an AI system, with patients actively avoiding an AI screening service. This could lead to an increase in undetected breast cancers and ultimately decrease survival rates rather than increase them as intended.

Digital ageism is also a key concern, especially as the sources of age-related bias in AI are not well understood (Chu et al., 2023). As with the other forms of bias already discussed, this could significantly harm patients if a bias develops in which the AI under- or over-diagnoses cancer in older tissue. This further underscores the potential for AI to be biased and ethically problematic, bringing its financial viability into question.

II. Geographical Issues

Furthermore, the justice pillar of medicine states that all individuals should have access to the same level of care and resources. The introduction of an AI screening service in some areas and not others would therefore widen healthcare disparities between wealthier countries and less developed ones. The issue of age also becomes relevant again when discussing location. For isolated communities or individuals, especially in large countries such as the USA, there may be limited choice of healthcare providers because of the distances between them. For rural Americans, it takes an average of 34 minutes to reach the nearest hospital (Galvin, 2018), and the second nearest could be much further. If an older individual does not trust the use of AI in their breast cancer screening and the nearest hospital has implemented it, a clear problem arises: the introduction of AI may have prevented that individual from receiving sufficient care and could reduce their chances of full recovery, which is difficult to reconcile with the justice principle.

III. Wealth Disparities and Access to AI in Healthcare

Wealth disparities in the United States significantly affect access to advanced technologies, including artificial intelligence (AI) in medicine. AI’s potential to revolutionise healthcare is immense, promising improved diagnostics, personalised treatment plans, and more efficient healthcare delivery. However, the unequal distribution of wealth across the US influences who can benefit from these advancements. The implementation of AI in healthcare is largely occurring in more affluent areas where hospitals can afford the substantial costs associated with these technologies. Conversely, underfunded hospitals and clinics, often located in low-income or rural areas, struggle to adopt these technological advancements (Topol, 2019).

AI in medicine is often associated with high costs, including the need for advanced infrastructure, ongoing maintenance, and specialised personnel, which can be prohibitive for less wealthy institutions (Jiang et al., 2017). This financial burden is particularly challenging for healthcare providers serving low-income communities, where financial constraints already limit the availability of basic healthcare services.

Moreover, wealth disparities affect the data used to train AI models in healthcare. AI systems rely heavily on large datasets, but these often lack sufficient representation of minority and low-income populations. This can lead to biased algorithms that do not perform as well for these groups, perpetuating existing healthcare disparities. Obermeyer et al. found that racial bias in healthcare algorithms can result in Black patients receiving less intensive care than White patients, even when they are sicker (Obermeyer et al., 2019). This bias can result in misdiagnosis or less effective treatment recommendations for low-income or minority patients, further widening the health disparities that already exist.

Additionally, the recurring costs associated with AI, such as software updates, data storage, and continuous training for healthcare professionals, create ongoing financial challenges. Rajkomar, Dean, and Kohane (2019) note that maintaining AI systems, including updating algorithms and storing large amounts of data, poses significant financial challenges for hospitals, especially those in lower-income areas. The result is that patients in wealthier communities benefit from the latest AI technologies while those in poorer areas do not.

The integration of AI in medicine also raises ethical concerns about exacerbating inequalities. If wealthier patients and healthcare providers have more access to AI-driven care, there is a risk of creating a two-tiered healthcare system. In this system, AI-powered interventions are more likely to be available in affluent areas, while underserved communities continue to rely on less effective, outdated methods (Stanley, 2020). This could lead to worse health outcomes for low-income patients, further entrenching the cycle of poverty and poor health.

Conclusion

To conclude, any health system wishing to implement an entirely or partially AI-dependent screening service would need to: establish guidelines and laws regarding liability and data sharing; educate the population, especially at-risk groups, on the use of AI, providing full transparency about some of the issues; and ensure that the use of public funds was justified. Without effective guidelines and especially transparency, individuals may be wary and not fully utilise the system, which ultimately could lead to more cases going undetected, or certain groups receiving inadequate care. 

The intersection of BMI and racial bias in AI underscores the critical need for a reassessment of how AI technologies are developed and implemented in healthcare. While AI holds immense potential for improving health outcomes, its current applications risk exacerbating existing disparities unless carefully managed. This paper has demonstrated that biases in AI systems, particularly those relying on flawed metrics like BMI, can lead to unequal treatment and reinforce systemic inequities, especially among marginalised groups.

To address these challenges, AI development must prioritise inclusivity, transparency, and accountability. This involves not only improving the accuracy of AI models but also critically evaluating the metrics and data on which they rely. Furthermore, the feasibility of AI-based healthcare interventions, such as breast cancer screening, must be evaluated within a broader sociotechnical context, taking into account legal, ethical, and economic considerations. Future research should focus on developing and testing debiasing techniques that account for the complexities of intersectionality, ensuring that AI systems are fair and equitable for all.

Ultimately, the responsible deployment of AI in healthcare will require collaboration across disciplines, involving not only technologists and healthcare professionals but also ethicists, sociologists, and representatives from affected communities. By addressing these foundational issues, we can harness the power of AI to promote social justice and improve health outcomes for everyone, particularly those most at risk of being left behind.

Bibliography

An, R., Shen, J., & Xiao, Y. (2022). Applications of artificial intelligence to obesity research: Scoping review of methodologies. Journal of Medical Internet Research, 24(12), e40589. https://doi.org/10.2196/40589.

Brobeil, C. (2024). Battling bias in AI. Rutgers.edu. stories.camden.rutgers.edu/battling-bias-in-ai/index.html#article (accessed 1 September 2024).

Carter, S.M., et al. (2020). The ethical, legal, and social implications of using artificial intelligence systems in breast cancer care. The Breast, 49, 25–32. https://doi.org/10.1016/j.breast.2019.10.001.

Chu, C.H., et al. (2023). Age-related bias and artificial intelligence: a scoping review. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-01999-y.

Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98. https://doi.org/10.7861/futurehosp.6-2-94.

Galvin, G. (2018). Where it takes the longest to get to a hospital in the U.S. U.S. News & World Report. https://www.usnews.com/news/healthiest-communities/articles/2018-12-13/states-where-it-takes-the-longest-to-get-to-a-hospital-in-the-us

Gluyas, L., & Day, S. (2018). Artificial Intelligence – Who is Liable When AI Fails to Perform? CMS. https://cms.law/en/media/local/cms-cmno/files/publications/publications/artificial-intelligence-who-is-liable-when-ai-fails-to-perform?v=1&pk_vid=1724061338518b68.

Högberg, C., Larsson, S., & Lång, K. (2023). Anticipating artificial intelligence in mammography screening: views of Swedish breast radiologists. BMJ Health & Care Informatics, 30(1), e100712. https://doi.org/10.1136/bmjhci-2022-100712.

European Digital Rights (EDRi). (2021). If AI is the problem, is debiasing the solution? https://edri.org/our-work/if-ai-is-the-problem-is-debiasing-the-solution/

Jiang, F., et al. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2(4), 230-243.

Johnson, A. (2023). Racism and AI: Here’s how it’s been criticized for amplifying bias. Forbes. https://www.forbes.com.

Katella, K. (2023). Why you shouldn’t rely on BMI alone. Yale Medicine. https://www.yalemedicine.org/news/why-you-shouldnt-rely-on-bmi-alone

Lazaro, J. (2022). Understanding Gender and Racial Bias in AI. AI Ethics Journal, 14(3), 122-135.

Lutton, L. (2024). FDA-approved AI algorithm more likely to detect false positive breast cancer cases in Black women. Managed Healthcare Executive. https://www.managedhealthcareexecutive.com/view/fda-approved-ai-algorithm-more-likely-to-detect-false-positive-breast-cancer-cases-in-black-women (accessed 1 September 2024).

Nashwan, A. J., Abdi Hassan, M., & AlBarakat, M. M. (2024). Rethinking BMI and obesity management: The transformative role of artificial intelligence. Cureus, 16(2), e54995. https://doi.org/10.7759/cureus.54995.

NORC at the University of Chicago. (2023). Older adults express mixed views on artificial intelligence. NORC at the University of Chicago. https://www.norc.org/research/library/older-adults-express-mixed-views-artificial-intelligence.html.

Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. New England Journal of Medicine, 380(14), 1347–1358.

Santhanam, P., et al. (2023). Artificial intelligence and body composition. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 17(3), 102732. https://doi.org/10.1016/j.dsx.2023.102732.

Stanley, T. (2020). The promise and peril of AI in healthcare. New England Journal of Medicine, 382(23), 2262-2265.

Tong, M., & Artiga, S. (2021). Use of race in clinical diagnosis and decision making: Overview and implications. KFF. https://www.kff.org/racial-equity-and-health-policy/issue-brief/use-of-race-in-clinical-diagnosis-and-decision-making-overview-and-implications

Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.

Tufford, A. R., et al. (2022). Toward systems models for obesity prevention: A big role for big data. Current Developments in Nutrition, 6(9), nzac123. https://doi.org/10.1093/cdn/nzac123.