IARC 60th Anniversary - 19-21 May 2026

Session: 19/05/26 - Posters

Ethical Implications of AI in Cancer Screening and Diagnosis: A Systematic Review

EZZEMNI S. 1, DOWNHAM L. 1, TAGHAVI K. 1, KATARIA I. 1, ROL M. 1, LUCAS E. 1, MUWONGE R. 1, BASU P. 1

1 IARC, LYON, France

Background: 
Artificial intelligence (AI) has strong potential to transform cancer screening and diagnostic pathways, ranging from automated image analysis and decision support to population-facing tools such as chatbots, tailored risk communication, and participation support. By enabling faster and more consistent evaluation of large and complex datasets, AI may improve early detection, triage, and diagnostic workflows, and help address resource constraints across diverse healthcare settings. However, the rapid development and deployment of AI also raise important ethical, legal, and societal concerns, including transparency, bias and fairness, equity, accountability, privacy and data governance, safety and reliability, and implications for patients, clinicians, and health systems. Despite growing interest, a comprehensive synthesis of how such implications are handled in cancer screening and diagnosis is still lacking. This systematic review addresses this gap by mapping ethical considerations across global contexts to inform future research, evaluation, and responsible implementation.

Objective: 
This systematic review aims to identify, categorize, and synthesize ethical implications associated with the use of AI in cancer screening and diagnosis at the patient, healthcare system, and global governance levels. 

Methods: 
This review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and uses a structured Population-Intervention-Comparator-Outcome (PICO) framework. Eligible populations include: (P1) patients undergoing cancer screening, triage, or diagnostic testing; (P2) healthcare professionals involved in screening, interpretation, or diagnostic decision-making; and (P3) global governance actors, including policymakers, ministries of health, the World Health Organization (WHO), and United Nations (UN) agencies.

We include studies that use AI-enabled analysis of screening, triage, or diagnostic tests, compared with no AI use. Three main categories of outcomes were identified, relating to i) trust and acceptability, ii) feasibility and implementation, and iii) equity and decision-making.

We searched PubMed, Embase, Web of Science, the Cochrane Library, and grey literature sources (e.g. theses repositories, online libraries), using four key concepts: AI, ethics, cancer, and screening/triage/diagnosis. The quality of the included studies will be appraised using assessment tools tailored to each study design. We will use descriptive analyses to report how frequently various ethical outcomes are considered and handled, and to identify key areas where gaps remain.

Preliminary results: 
We screened 5,147 records after removing duplicates. Two reviewers independently screened titles, abstracts, and full texts using Rayyan in a blinded approach, with discrepancies resolved through unblinding and consultation with senior reviewers. Included studies so far comprise observational or interventional studies, commentary pieces, brief communications, and case reports discussing AI use in cancer screening and diagnosis and related ethical issues.

Conclusion: 
This review will provide comprehensive evidence on ethical considerations related to AI-supported cancer screening and diagnosis. Findings will highlight important gaps in this fast-moving field and may inform policy guidance. It is essential to promote responsible and equity-centered AI implementation, consistent with IARC’s mission to strengthen global cancer control and protect populations through trustworthy technological innovation.