Abstract
This research explores the transformative potential of Artificial Intelligence (AI) in breast cancer prevention, diagnosis, and treatment. AI’s ability to analyse vast datasets and medical images aids in early detection, personalised treatment planning, and risk prediction, offering the potential for improved patient outcomes. AI can also predict a patient’s risk of developing cancer based on their genetics and familial history. Individualised treatment plans are crucial to optimising cancer care; historically, clinicians applied generalised treatment plans to patients with similar symptoms, but this one-size-fits-all approach is becoming less relevant. It is important to take a cancer patient’s stage and background into consideration to provide the most suitable treatments, and AI risk prediction models can help with this. However, challenges remain, including the “black box” nature of AI models, concerns about data quality and biases, and the need to understand the relationship between AI and ageing populations. While explainable AI techniques can enhance transparency and trust, addressing ethical implications is vital. The future lies in developing interpretable models, fostering human-AI collaboration, and promoting equitable AI solutions for breast cancer prevention and care.
Introduction
Approximately one in every eight women in the United States will develop breast cancer in her lifetime, and approximately one in every 40 will die from the disease (American Cancer Society, 2024). Despite advancements in medical science, early detection and effective treatment planning remain critical to improving patient outcomes. In recent years, the integration of AI with breast cancer research has opened new avenues for enhancing diagnostic accuracy and personalising treatment strategies. AI enables machines to perform tasks that require human intelligence; its capability to process vast amounts of data and identify patterns invisible to the human eye offers more accurate and efficient care for cancer patients. However, as with any new technology, the adoption of AI in breast cancer prevention and diagnosis presents both opportunities and challenges. This article explores three key aspects of AI’s role in breast cancer: AI-powered risk prediction for personalised preventive care; the need for explainable and causal AI models to foster clinical trust; and the biases that must be addressed for equitable healthcare. AI’s ability to analyse vast data and discern subtle patterns in medical images, such as in mammography screening, offers a powerful tool for early detection and personalised interventions, potentially revolutionising breast cancer prevention and contributing to a future where its burden is significantly reduced. However, this necessitates overcoming challenges related to explainability, causality, and bias, calling for ongoing research and collaboration. Furthermore, to achieve truly holistic and effective preventative care, AI must evolve to incorporate a multi-faceted understanding of individuals, encompassing not only biological and genetic factors but also lifestyle, environmental, and socioeconomic influences.
AI in Screening Analysis
Early detection of breast cancer is crucial to maximising survival rates: when breast cancer is caught at a localised stage, the five-year survival rate is 99% (OASH, 2022). Recent advances in technology provide new tools to achieve earlier detection: implementing AI in mammography screening has the potential to significantly lower mortality rates through its ability to increase the rate of early detection. AI systems would help improve preventative care by offering more regular and accurate mammogram screenings. Implementing AI would also improve the efficiency of reading mammograms, allowing more time to screen a larger number of patients, with the aim of detecting additional cases of breast cancer. AI’s role in detection would greatly help radiologists by reducing false negatives, minimising unnecessary biopsies, and providing improved attention to detail. This section covers AI’s role in detection, compares AI-driven methods with traditional screening techniques, and discusses the challenges of integrating AI into the screening process and how medical professionals should address them.
Computer Assisted Detection (CAD) has been used in mammography screening since 1998 as a “second look.” CAD analyses mammograms and highlights potential areas of concern: masses, asymmetries, or microcalcifications. Microcalcifications are tiny calcium deposits in the breast that appear as white specks on a mammogram. Typically, they are not cancerous, though a buildup of them can be a sign of early breast cancer. Though some studies show that CAD increases the accuracy of mammogram readings, others have found that it increases the rate of false positives (Engelken, 2012). More false positives mean more unnecessary biopsies and unnecessary stress for patients (Girometti, 2010). To combat the rising number of false positives, AI-enhanced CAD systems have been designed. For instance, NHS Grampian, a health and social care service in North East Scotland, conducted the first formal evaluation of the AI-enhanced CAD system “Mia.” The evaluation indicated that “Mia” helped radiologists detect more cancers (Davidson, 2024). It also revealed no increase in the number of women needing unnecessary biopsies for false positives, alongside a workload reduction of around 30% (Davidson, 2024). Such workload reductions matter because the long hours, high patient volumes, and mental strain of interpreting complex mammograms have led to burnout among many radiologists.
Similarly, Hologic’s Genius AI Detection 2.0 is also an AI tool used in mammography, and it is renowned for its high sensitivity and specificity. This system prioritises high-risk cases for immediate attention, allowing radiologists to prioritise these cases over non-essential cases. It also reduces the number of false positives by over 70% (Stamatopoulos, 2024). Other AI tools, like GE Healthcare’s MyBreastAI Suite and Fujifilm’s Transpara AI, streamline radiologists’ work by prioritising high-risk patients and providing critical perspectives into the health of patients (Stamatopoulos, 2024). Traditionally, hospitals/clinics require two radiologists to look over the mammograms. AI can serve as a reliable second opinion, thus reducing the mental strain for radiologists and allowing them to maintain high standards of care (Stamatopoulos, 2024).
Along with AI-enhanced systems, it is important to examine how AI compares directly to human radiologists in diagnostic performance. Numerous studies have compared the accuracy, specificity, and sensitivity of AI systems to those of radiologists (Najjar, 2023). These studies highlight the potential of AI systems to match or exceed the capabilities of radiologists (Najjar, 2023). According to a German study (“Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis”), AI systems hold considerable promise for screening analysis on their own, but perform best in combination with radiologists: the study showed a 2.6% increase in breast cancer detection when the AI system worked alongside the radiologist, compared with the radiologist working alone (Leibig, 2022).
McKinney et al. (2020) present an international evaluation of an AI system for breast cancer screening, comparing its performance to that of human radiologists. In this evaluation, the AI system outperformed the human radiologists in both the speed and accuracy of mammogram reading. The area under the receiver operating characteristic curve (AUC-ROC) is an important performance metric used to assess the effectiveness of diagnostic tests in breast cancer screening; the AUC-ROC of the AI system exceeded that of the average radiologist by an absolute margin of 11.5%. The study also showed that when the AI system participated in the double-reading process, it reduced the workload of the second reader by 88% (McKinney et al., 2020). This reduced workload means the second reader can prioritise more complex cases or analyse a greater volume of mammograms.
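As a concrete illustration of the metric itself, the AUC-ROC can be read as the probability that a randomly chosen cancer case receives a higher model score than a randomly chosen normal case. The following minimal sketch uses hypothetical scores, not any data from the study:

```python
# Illustrative sketch: computing AUC-ROC for a hypothetical screening model.
# AUC equals the probability that a randomly chosen positive (cancer) case
# is scored higher than a randomly chosen negative (normal) case.

def auc_roc(scores, labels):
    """Rank-based AUC: labels are 1 (cancer) or 0 (normal); ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for six mammograms (1 = confirmed cancer).
scores = [0.92, 0.81, 0.75, 0.40, 0.30, 0.15]
labels = [1,    1,    0,    1,    0,    0]
print(auc_roc(scores, labels))  # 0.8888888888888888 (8 of 9 pairs ranked correctly)
```

A perfect diagnostic test scores 1.0, and random guessing scores 0.5, which is why an 11.5% absolute margin between the AI system and the average radiologist is substantial.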
Another 2020 study evaluated the performance of a deep learning model in detecting abnormalities in mammography screenings (Hall, 2022). The authors found a 0.5% difference in specificity and a 1.1% difference in sensitivity with versus without AI involvement. The Breast Imaging Reporting and Data System (BI-RADS) is a standardised system used by radiologists to categorise mammogram findings. The researchers used the AI system Transpara, noting that AI screening sensitivity decreased with increasing BI-RADS density while specificity increased across all BI-RADS categories. The study also noted a 25% reduction in false positives with AI assessment and reported that AI screening reduced the workload of radiologists by over 62%. Martin Lillholm, Ph.D., a professor in the Department of Computer Science at the University of Copenhagen, wrote, “…[The] incorporation of an artificial intelligence (AI) system in population-based breast cancer screening programs could potentially improve screening outcomes and may considerably reduce the workload of radiologists” (Hall, 2022). Lillholm’s comment reinforces the idea that AI integration will lead to substantial improvements; the reduced workload may allow radiologists to detect more cases of breast cancer by easing cognitive overload.
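For readers less familiar with these two metrics, the sketch below shows how sensitivity and specificity are computed from a screening confusion matrix. The counts are hypothetical and unrelated to the study's data:

```python
# Illustrative sketch (hypothetical counts): sensitivity and specificity,
# the two metrics compared with and without AI involvement.

def sensitivity(tp, fn):
    # Fraction of actual cancers that the reader flags (true-positive rate).
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of healthy screens correctly cleared (true-negative rate).
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts from 1,000 screens.
tp, fn, tn, fp = 45, 5, 900, 50
print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # sensitivity = 90.0%
print(f"specificity = {specificity(tn, fp):.2%}")   # specificity = 94.74%
```

The BI-RADS density finding above can then be read precisely: as breast density rises, the true-positive rate (sensitivity) falls, while the true-negative rate (specificity) improves across categories.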
While integrating AI systems into mammography screening is promising, several challenges must be considered to ensure the integration is effective. The major concerns are the variability in AI performance and the large datasets needed to train AI models. This variability can be influenced by factors including the size and quality of the dataset, the diversity of the patient population within it, and the specific AI model used. Acquiring and curating large datasets is itself an obstacle: gaining access to large, diverse patient data can be very difficult, and the volume of data needed for AI systems to read mammograms accurately may take years to collect. Additionally, there are ethical considerations, such as maintaining patient privacy and addressing biases that may arise in certain AI systems (Lamb, 2022). Patient confidentiality is a major ethical pillar of the healthcare industry, and the implementation of AI may put patients at risk.
The integration of AI systems into mammography screening can significantly increase early detection of breast cancer. AI systems show the potential to reduce mortality rates and improve patient outcomes by enhancing the accuracy and efficiency of mammogram readings. As AI technology continues to evolve, it is crucial to address these challenges by building multidisciplinary teams of radiologists, data scientists, engineers, and AI specialists to collect and curate the datasets used to train the AI and to ensure the effective use of these systems. Despite the challenges of implementing AI, its benefits as a diagnostic aid to radiologists would improve the standard of care and ultimately save more lives.
AI and Risk Prediction
This section will cover AI’s ability to predict the risk patients have of developing cancer in the future, predicting patients’ responses to specific cancer treatments, and analysing how lifestyle choices may directly affect the chances of developing breast cancer.
AI-powered tools can improve risk prediction by analysing a patient’s genetic data to predict their likelihood of developing cancer in the future or the risk of an existing cancer metastasising or recurring. Two primary examples are the MyRisk and OncotypeDX tests (MyRisk, 2018). MyRisk collects genetic samples (either salivary or blood) and sends them to a Myriad laboratory, where AI-supported DNA sequencing technology analyses 48 specific genes associated with increased chances of hereditary cancer (Myriad Genetics, 2018). Myriad’s technology uses AI to help automate data processing, reduce manual intervention in clinical workflows, and identify new risk markers in genetic sequences (DNANexus, 2020). Myriad partnered with DNANexus to develop a program known as Smart Reuse, an AI tool that speeds up bioinformatics pipeline development by up to 100 times. Bioinformatics pipeline development refers to the process of analysing large sets of raw genetic data to identify specific genetic mutations. Smart Reuse was fed large sets of raw data until its performance was refined, saving laboratory geneticists countless hours. Myriad uses rule-based decision mapping and machine learning to predict how much a patient will need to pay out of pocket (for instance, in the US) and to provide tools for understanding hereditary cancer risk based on family history, so that patients know their eligibility for genetic testing and the appropriate frequency of mammograms. Myriad also uses AI to find additional genetic markers, including polygenic risk indicators (small variations across multiple genes rather than one singular mutation), that improve cancer risk predictions, especially for genes carrying a less severe risk; these markers may not be obvious for humans to pick up on, which is where AI can be truly revolutionary.
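To make the polygenic-risk idea concrete: a polygenic risk score is typically a weighted sum of risk-allele counts across many variants. The sketch below is purely illustrative; the variant names and weights are invented and are not Myriad's actual panel or coefficients:

```python
# Illustrative sketch of a polygenic risk score (PRS): the "small variations
# across multiple genes" signal described above, aggregated into one number.
# Variant names and weights here are hypothetical.

def polygenic_risk_score(dosages, weights):
    """Weighted sum of risk-allele dosages (0, 1, or 2 copies per variant)."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

# Hypothetical per-variant log-odds weights from a reference study.
weights = {"rs_hypo_1": 0.12, "rs_hypo_2": 0.08, "rs_hypo_3": -0.05}

# One patient's genotype: copies of each risk allele carried.
patient = {"rs_hypo_1": 2, "rs_hypo_2": 1, "rs_hypo_3": 0}
print(polygenic_risk_score(patient, weights))  # ≈ 0.32
```

A higher score places the patient further into the tail of the population risk distribution; no single variant is decisive, which is exactly why these signals are easier for an algorithm to aggregate than for a human reader.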
OncotypeDX is also a genetic test, but it is usually performed after a patient has been diagnosed with cancer and targets a sample of the tumour or affected tissue. The test scans genes to predict the likelihood of the cancer recurring (either in the other breast or in a different part of the body). In studies such as one published via the National Library of Medicine, a logistic regression classifier was used to analyse MRI scans and non-invasively predict cancer recurrence risk with 63% accuracy (NIH, 2023).
Digital models such as the Tyrer-Cuzick model and the Gail model are widely used internationally as early risk prediction tools for breast cancer. The Tyrer-Cuzick model predicts a woman’s risk of developing breast cancer over the next ten years and over her lifetime. These predictions are based on various factors, including age, reproductive history, and lifestyle choices (such as alcohol consumption, body mass index, hormone replacement therapy, and familial history of breast cancer) (Magview, 2022). The Gail model, named after Dr. Mitchell Gail, estimates the likelihood of developing invasive breast cancer over a specific period, incorporating factors such as personal medical history, age at first menstruation, age at first live birth, and family history, including BRCA1 and BRCA2 mutations (Magview, 2022).
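Models in this family combine a baseline population risk with relative risks attached to each reported factor. The sketch below illustrates the general multiplicative structure only; the numbers are hypothetical and are not the actual Gail or Tyrer-Cuzick coefficients:

```python
# Illustrative sketch (hypothetical numbers, not the published coefficients):
# risk models of this family scale a baseline 10-year risk by the relative
# risk attached to each answered questionnaire factor.

def combined_risk(baseline_risk, relative_risks):
    """Multiply a baseline risk by each factor's relative risk."""
    risk = baseline_risk
    for rr in relative_risks.values():
        risk *= rr
    return risk

# Hypothetical inputs for one patient.
baseline = 0.015  # 1.5% baseline 10-year risk for her age group
factors = {
    "first_degree_relative_with_bc": 1.8,
    "age_at_first_birth_over_30": 1.2,
    "hormone_replacement_therapy": 1.1,
}
print(f"{combined_risk(baseline, factors):.2%}")  # 3.56%
```

This structure also makes the self-reported-data concern below tangible: omitting one factor (say, an unknown family history) silently drops its multiplier and understates the estimate.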
A significant concern with these models is their reliance on self-reported data and online tools, which may not always accurately capture a patient’s familial history or lifestyle factors. For example, the accuracy of predictions could be compromised if patients are unaware of certain familial cancer histories or if the data entered is incomplete or inaccurate. These models also focus on a set number of risk factors, and there may be underlying factors they are not programmed to ask about. Additionally, there are ongoing debates about their ability to accurately assess risk across different populations, particularly among women from diverse ethnic backgrounds, as most models are based on data from predominantly Caucasian populations rather than, for example, Pacific Islander or Black populations (Magview, 2022). This raises questions about the models’ generalisability and the potential for disparities in risk assessment.
As AI advances, it increasingly enables the prediction of personalised responses to specific cancer treatments, allowing for individualised treatment plans that optimise outcomes. A notable example is a study conducted by researchers at the National Institutes of Health (NIH), who developed an AI tool called LORIS (NIH, 2024). This tool predicts how well a patient’s cancer will respond to immune checkpoint inhibitors, a type of immunotherapy drug that helps immune cells target and kill cancer cells. LORIS uses routine clinical data, such as a patient’s age, cancer type, and blood test results, to make these predictions. Remarkably, the tool accurately predicted treatment responses and survival outcomes even in patients with low tumour mutational burden (cancers with fewer mutations), who are generally less responsive to immunotherapy. One crucial limitation is that the accuracy of such models depends entirely on the quality of the data entered into the system: if the data is biased or incomplete, it can produce inaccurate predictions, which can be detrimental to patients. This is why it is important for oncologists to work hand-in-hand with such AI models until there is more trust in these technologies. It is also worth considering how such models can improve hospitals’ efficiency in terms of cost and time, which directly addresses the issue of understaffed hospitals globally.
To achieve precise oncology treatment, personalised targeted therapy is essential for each patient. A recent review investigated the impact of AI on personalised breast cancer treatment. The review included 46 studies, with AI models demonstrating an impressive accuracy range of 90 to 96% in terms of sensitivity, specificity, and precision (Sohrabei et al., 2024). These findings underscore AI’s ability to uncover specific genetic and omics patterns that may not be discernible through traditional methods. AI creates personalised cancer treatment plans by analysing genetic and omics data to identify patterns and predict how a patient will respond to specific therapies. By uncovering hidden insights from large datasets, AI can tailor treatments to target the unique molecular characteristics of a patient’s cancer. This approach optimises treatment selection, supports clinicians in making more informed decisions, and ultimately improves patient survival outcomes (BMC, 2023).
The Black Box of AI: Understanding Causality in AI’s Processes to Prevent Risk
AI, particularly Deep Learning (DL), has emerged as a transformative force in breast cancer prevention and diagnosis. The ability of AI to analyse vast datasets and distinguish subtle patterns within medical images, such as mammograms and digital pathology slides, has significantly improved diagnostic accuracy and efficiency (Ahn et al., 2023; Marcus & Teuwen, 2024). However, the complexity of these AI models often renders them opaque “black boxes,” whose process of “thought” remains concerningly unknown, even to their creators (Plass et al., 2023; Wu et al., 2022).
This lack of transparency poses a primary challenge to the clinical adoption of AI. When AI is employed in critical decision-making processes, such as breast cancer screening and diagnosis, understanding the rationale behind AI-generated outputs is of the utmost importance for building trust and ensuring accountability. The inability to explain how an AI model derives results more accurate than a radiologist’s from the same images can breed scepticism and reluctance among clinicians, limiting both the development of better treatments in oncology and what could otherwise be a seamless integration of AI into the medical field.
The black-box nature of AI creates a disconnect between the AI’s output and the physician’s ability to provide meaningful explanations for it. When an AI model generates a diagnosis or treatment recommendation without revealing its underlying rationale, physicians are left unable to satisfactorily communicate this information to their patients. This compromises patient autonomy, as patients are deprived of the opportunity to make truly informed decisions about their healthcare (Chan, 2024). Moreover, it can undermine patient dignity: a patient’s distance from the reasons behind their own diagnosis or treatment plan can lead to feelings of disempowerment and a lack of control over their own health.
Given the complexity of medical decision-making, the importance of explainability goes beyond mere trust in the accuracy of AI models. It is deeply intertwined with physicians’ ethical responsibility to provide effective and respectful care (Chan, 2024). Good physician care involves more than communicating a diagnosis or treatment recommendation; it requires the ability to explain the reasoning behind these decisions, fostering understanding, acceptance, and informed decision-making among patients (Peteet et al., 2023; Campbell, 2019).
Explainable AI (XAI) has emerged as a critical field of research dedicated to developing techniques for making AI models more transparent and their decisions more interpretable. Post hoc XAI methods, applied after model training, play a vital role in simplifying the process of generating explanations for AI predictions (Marcus & Teuwen, 2024). Visualisation techniques, such as saliency maps and GradCAM, are commonly employed in digital pathology and radiology to highlight the image regions and features that contribute most significantly to the AI’s decision-making (Plass et al., 2023; Marcus & Teuwen, 2024). These visual explanations offer clinicians a glimpse into the AI’s “thought process,” facilitating the validation and interpretation of AI outputs.
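A simple relative of these visualisation techniques is occlusion-based saliency: mask each region of the image, re-score it, and see where the model's output drops the most. The toy sketch below uses an invented stand-in "model" (mean pixel intensity), not a real mammography network, purely to show the mechanism:

```python
# Illustrative sketch of occlusion-based saliency. The "model" is a toy
# scoring function standing in for a trained classifier.

def toy_model(image):
    # Toy stand-in for a classifier: score = mean intensity of the image.
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def occlusion_saliency(image, model):
    """Score drop when each pixel is zeroed out; a bigger drop = more salient."""
    base = model(image)
    saliency = [[0.0] * len(image[0]) for _ in image]
    for i in range(len(image)):
        for j in range(len(image[0])):
            occluded = [row[:] for row in image]   # copy, then mask one pixel
            occluded[i][j] = 0
            saliency[i][j] = base - model(occluded)
    return saliency

# A tiny 2x2 "image" with one bright region.
image = [[0.1, 0.9],
         [0.2, 0.1]]
for row in occlusion_saliency(image, toy_model):
    print(row)  # the bright 0.9 pixel produces the largest score drop
```

Gradient-based methods such as GradCAM arrive at a similar heat map far more efficiently, by backpropagating through the network rather than re-scoring many masked copies; the output in both cases is a per-region importance map that clinicians can overlay on the original image.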
Other XAI methods, such as prototypes and counterfactuals, provide additional layers of understanding (Plass et al., 2023). Prototypes offer representative examples of images or features that the AI model associates with specific diagnoses or risk categories, allowing clinicians to grasp the model’s internal representations. Counterfactuals demonstrate how minimal changes to an image can lead to different AI predictions, revealing the model’s decision boundaries and potential sensitivities. While XAI techniques are invaluable for enhancing explainability, they primarily identify correlations between image features and AI predictions; establishing true causal relationships between these features and breast cancer risk remains a fundamental challenge. Most AI models are trained on observational data and excel at pattern recognition rather than causal inference (Cirillo et al., 2020). This distinction is particularly important in breast cancer prevention, where understanding the causal mechanisms underlying AI predictions is essential for developing targeted interventions and effective prevention strategies.
Successfully integrating AI into breast cancer prevention necessitates addressing various challenges. Clinical validation, ensuring the generalisability of AI models across diverse datasets and patient populations, and navigating the “black box” problem are critical considerations (Ahn et al., 2023).
Moreover, ethical implications, such as potential biases and impacts on patient autonomy and dignity, must be carefully addressed. Biases can be introduced inadvertently into AI models through various factors, including the underrepresentation of certain populations (for instance, across genders and sexes) in training datasets or the use of biased data-collection methods (Cirillo et al., 2020). These biases can lead to disparities in AI performance and may entrench existing inequalities in healthcare access and outcomes.
The increasing use of AI in healthcare, particularly in the context of ageing populations, shines a light on the need for a deeper understanding of the complex relationship between ageing and AI. The concept of the co-constitution of ageing and technology emphasises that AI not only impacts the lives of older adults but is also shaped by the social and cultural understandings of ageing (Gallistl et al., 2024).
This perspective calls for a critical examination of how AI is imagined, developed, and evaluated for older adults. It also emphasises the importance of considering the ethical implications of AI in the context of ageing, including issues of autonomy, privacy, and social justice. By recognising the association between ageing and AI, we can develop more inclusive and equitable AI technologies that cater to the diverse needs and experiences of older adults.
The path forward involves developing AI models that are interpretable and provide explanations alongside their predictions (Gastounioti et al., 2020). Integrating input from clinicians and radiologists is crucial in refining AI models and ensuring their clinical relevance. Additionally, incorporating causal inference techniques within AI frameworks is essential for differentiating true causal relationships from mere correlations, facilitating the development of targeted preventive interventions (Dibaeinia et al., 2023).
The future of XAI also lies in the development of interactive human-AI interfaces that enable a collaborative partnership between clinicians and AI systems. Such interfaces would empower clinicians to interrogate AI predictions, explore hypothetical scenarios, and provide feedback to refine AI algorithms, ultimately leading to more accurate, transparent, and clinically relevant AI tools for breast cancer prevention.
Conclusion
AI has the potential to play a critical role in the prevention and early detection of breast cancer. Its ability to analyse vast datasets and distinguish subtle patterns within medical images directly supports radiologists, significantly reducing their workload while helping them read mammograms more accurately. Integrating AI alongside radiologists could lower breast cancer mortality, as earlier detection enables patients to take measures that prevent the cancer from advancing further.
AI is revolutionising cancer care through enhanced risk prediction, treatment response forecasting, and personalised treatment plans. AI-powered genetic tests and digital models identify individuals at high risk, enabling early interventions and prevention. Furthermore, AI tools like LORIS predict treatment responses, guiding oncologists toward optimal therapies. The ability to analyse genetic and omics data allows AI to personalise treatment plans, targeting the unique molecular characteristics of each patient’s cancer, thereby improving treatment selection and patient outcomes. Although challenges remain, such as data quality and potential biases, AI’s impact on cancer care is undeniable, offering a brighter future for patients worldwide.
The lack of transparency in AI models hinders physicians’ ability to explain AI-generated results to patients, impacting patient autonomy and trust. XAI techniques, such as visualisation and prototypes, are crucial for making AI decisions more understandable. Additionally, addressing ethical implications, including potential biases, is essential for the successful integration of AI into breast cancer prevention. The future of AI in this field lies in the development of interpretable models, collaborative human-AI interfaces, and a deeper understanding of the relationship between AI and ageing populations.
Bibliography
Ahn, J. S., Shin, S., Yang, S.-A., Park, E. K., Kim, K. H., Cho, S. I., … & Kim, S. (2023). Artificial Intelligence in Breast Cancer Diagnosis and Personalized Medicine. Journal of Breast Cancer, 26(5), 405-435.
Campbell, C. S. (2019). Mortal Responsibilities: Bioethics and Medical-Assisted Dying. Yale Journal of Biology and Medicine, 92(4), 733-739. [PMCID: PMC6913808] [PMID: 31866788].
Cereser, L., et al. (2010). CAD comes under scrutiny in breast screening debate. Diagnostic Imaging, [online] 26.
Chan, B. (2024). Black-box assisted medical decisions: AI power vs ethical physician care. Ethics and Information Technology, 26, 153-161.
Cirillo, D., Catuara-Solarz, S., Morey, C., Guney, E., Subirats, L., Mellino, S., … & Mavridis, N. (2020). Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ digital medicine, 3(1), 1-12.
Comparison of the Tyrer-Cuzick vs Gail risk assessment (2024) MagView. Available at: https://magview.com/womens-health/tyrer-cuzick-vs-gail-risk-assessmenttools/#:~:text=Differences%20in%20the%20Models,with%20familial%20history%20of%20cancer (Accessed 31st August 2024).
Davidson, W. (2024). More breast cancers detected in first evaluation of breast screening AI. [online] Medicalxpress.com.
Dibaeinia, P., & Sinha, S. (2023). CIMLA: Interpretable AI for inference of differential causal networks. ArXiv [Preprint], arXiv:2304.12523v1. [PMCID: PMC10168428] [PMID: 37163135]
Elhakim, M.T., et al. (2023). Breast cancer detection accuracy of AI in an entire screening population: a retrospective, multicentre study. Cancer imaging.
Gallistl, V., Banday, M. U. L., Berridge, C., Grigorovich, A., Jarke, J., Mannheim, I., … & Peine, A. (2024). Addressing the Black Box of AI—A Model and Research Agenda on the Co-constitution of Aging and Artificial Intelligence. The Gerontologist, 64(6), gnae039.
HealthManagement.org (2024). AI in Mammography: Earlier Detection and Decreased Radiologist Burnout. [online].
Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H., & Aerts, H. J. W. L. (2018). Artificial intelligence in radiology. Nature Reviews Cancer, 18(8), 500-510.
How Myriad Genetics is powering AI & machine learning advancements in precision medicine – Inside DNAnexus (no date). Available at: https://blog.dnanexus.com/2020-08-13-myriad-genetics-ai-machine-learning-precision-medicine (Accessed 31st August 2024).
Lamb, L.R., et al. (2022). Artificial Intelligence (AI) for Screening Mammography, From the AJR Special Series on AI Applications. AJR. American journal of roentgenology, [online] 219(3), pp.1–12.
Marcus, E., & Teuwen, J. (2024). Artificial intelligence and explanation: How, why, and when to explain black boxes. European Journal of Radiology, 111393.
McKinney, S.M., et al. (2020). International evaluation of an AI system for breast cancer screening. Nature, [online].
MyRisk® hereditary cancer test (2024) Myriad Genetics. Available at: https://myriad.com/genetic-tests/myrisk-hereditary-cancer-risk-test/ (Accessed 31st August 2024).
Najjar, R. (2023). Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics, [online] 13(17), p.2760.
National Breast Cancer Foundation. (2019). National Breast Cancer Foundation. [online].
NIH scientists develop AI tool to predict how cancer patients will respond to immunotherapy (2024) National Institutes of Health. Available at: https://www.nih.gov/news-events/news-releases/nih-scientists-develop-ai-tool-predict-how-cancer-patients-will-respond-immunotherapy (Accessed 31st August 2024).
Peteet, J. R., Witvliet, C. V. O., Glas, G., & Frush, B. W. (2023). Accountability as a virtue in medicine: from theory to practice. Philos Ethics Humanit Med., 18, 1. https://doi.org/10.1186/s13010-023-00129-5.
Plass, M., Kargl, M., Kiehl, T.-R., Regitnig, P., Geißler, C., Evans, T., … & Müller, H. (2023). Explainability and causability in digital pathology. The Journal of Pathology: Clinical Research, e322.
Romeo, V. et al. (2023) MRI radiomics and machine learning for the prediction of oncotype dx recurrence score in invasive breast cancer, Cancers. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047199/ (Accessed 31st August 2024).
Rsna.org. (2017). Responsible Steps to Implementing AI in Breast Screening.
Sohrabei, S. et al. (2024) Investigating the effects of artificial intelligence on the personalization of breast cancer management: A systematic study – BMC cancer, BioMed Central. Available at: https://bmccancer.biomedcentral.com/articles/10.1186/s12885-024-12575-1#:~:text=Artificial%20intelligence%20has%20proven%20to,complex%20omics%20and%20genetic%20data (Accessed 31st August 2024).
Womenshealth.gov. (2022). 99 Percent Survival Rate for Breast Cancer Caught Early | Office on Women’s Health. [online].
Wu, H., Ye, X., Jiang, Y., Tian, H., Yang, K., Cui, C., … & Dong, F. (2022). A comparative study of multiple deep learning models based on multi-input resolution for breast ultrasound images. Frontiers in Oncology, 12, 869421.
www.breastcancer.org. (n.d.). Using AI (Artificial Intelligence) to Detect Breast Cancer. [online].