This article is 752 words and a three-minute read.
“We need to be transparent. We need to talk about the limitations and risks of our AI tools in detail.” (Prof. Karim Lekadir)
The potential of artificial intelligence (AI) to improve cancer diagnostics, treatment plans and patient outcomes is immense, particularly when it comes to medical imaging. But what does it mean for AI to be truly trustworthy in practice? What do experts and patients need to develop trust in such potentially revolutionary new tools, and how can we incorporate human needs early on to ensure AI is trustworthy by design?
In the latest ELSI Dialogues session in October, two invited experts, Maciej Bobowicz, MD, PhD (Medical University of Gdańsk) and Prof. Karim Lekadir (University of Barcelona), shared their thoughts and expertise on this timely topic.
The integration of AI tools into medical practice is not just an incremental step forward, not merely a new generation of software that works in broadly familiar ways. It is a leap into a new era that brings great hope, but also uncertainty. AI systems can already outperform highly trained medical experts in the analysis of imaging data for cancer diagnosis, and this is only the beginning.
While researchers can analyse and measure how well a given AI tool detects cancer growth in imaging data, they often cannot fully explain how the tool reaches its conclusions. Fully explaining this “how” may remain elusive in the near future, as Prof. Lekadir explains in the webinar.
Dealing with such an unknown is an additional source of fear and scepticism, but it can be met with transparency and open communication. We need open discussions about the models being developed: their strengths and limitations, their risks and mitigations, in light of their undeniable potential for doctors and patients.
“Those who don’t learn history are doomed to repeat it.” This line, attributed to George Santayana, has a special implication for new AI tools. They are trained on the history of information our societies have gathered, and their results can only be as good as the data they are fed. If those data are biased and do not adequately reflect the diversity of our societies, these tools will reproduce and perpetuate existing inequalities.
An example from the medical field is the detection of skin cancer in patients with different skin tones. People with darker skin are underrepresented in the medical literature and in the available imaging data. As a consequence, both human experts and AI tools have lower success rates in detecting their developing cancers, as this article and the underlying study published in Nature Medicine highlight. Without extra attention to this disparity, the use of AI tools can even worsen the problem instead of being part of the solution.
A similar pattern can be seen when it comes to gender bias. In an interview with UN Women, Zinnya del Villar, an expert in responsible AI, pinpoints the underlying problem, which touches far more areas than medical diagnosis: “AI systems, learning from data filled with stereotypes, often reflect and reinforce gender biases. These biases can limit opportunities and diversity, especially in areas like decision-making, hiring, loan approvals, and legal judgments.”
Trust in new AI tools can be built when patients, citizens and clinicians are integral to their development from the beginning. This helps to properly address diverse human needs, which vary strongly with regional, cultural and religious backgrounds.
How can this be communicated? What is the right amount of information and detail to give to people who aren’t experts in the field? How do we inform without overwhelming? A recent study of how the Financial Times approaches transparency about AI use in news examines these questions, and its conclusion has broader relevance: “it is best understood as a spectrum, evolving with tech advancements, commercial, professional and ethical considerations and shifting audience attitudes.”
There will be no one-size-fits-all approach to integrating AI tools, but the expertise now emerging cuts across domains; this is uncharted territory, yet not a lonely path. Exchanging experiences in training artificial intelligence leads to better accuracy and to fairer, more ethical outputs. Paired with open and inclusive communication, this will help maximise the societal benefit this new technological era can bring.
Be part of the discussion. Enjoy the recording of the ELSI Dialogues session here on YouTube or on the BBMRI-ERIC podcast. Follow us on social media (LinkedIn and Bluesky) so you don’t miss any upcoming news and events.
Racial bias exists in photo-based medical diagnosis despite AI help
By Shanice Harris, Northwestern Now, February 05, 2024
Deep learning-aided decision support for diagnosis of skin disease across skin tones.
Groh, M., Badri, O., Daneshjou, R. et al. Nat Med 30, 573–583 (2024).
Full text: https://doi.org/10.1038/s41591-023-02728-3
How AI reinforces gender bias—and what we can do about it: Interview with Zinnya del Villar on AI gender bias and creating inclusive technology
UN Women, February 05, 2025
Tricky Trade-Offs on a Transparency Spectrum: How the Financial Times Approaches Transparency about AI Use in News
Liz Lohn, Felix M. Simon, Preprint, November 05, 2025
The Speakers of the ELSI Dialogues session: “Trustworthy by Design: Ethics, Practice and Impact of AI in Cancer Research and Care”
Dr. Maciej Bobowicz (Medical University of Gdańsk)
Dr. Maciej Bobowicz is an Assistant Professor at the 2nd Department of Radiology, Medical University of Gdańsk, Poland, and a surgical oncologist specialising in breast cancer diagnostics and treatment. His research focuses on the ethical and clinically responsible use of artificial intelligence in oncology, radiomics, and precision medicine.
He leads or contributes to several EU-funded projects, including EuCanImage, RadioVal, MAYA, Cinderella, TRACE and CAREWAY. He is also part of the AI4HI network, which supports the EUCAIM initiative under Europe’s Beating Cancer Plan. Author of over 80 scientific papers, Dr. Bobowicz is committed to advancing patient-centred, trustworthy, and equitable AI applications in cancer imaging and care.
Prof. Karim Lekadir (University of Barcelona)
Karim Lekadir is an ICREA Research Professor in the Department of Mathematics and Computer Science at the University of Barcelona. He obtained his PhD from Imperial College London and was a postdoctoral researcher at Stanford University. He investigates new data science techniques for trustworthy and ethical artificial intelligence in medicine. He has been a PI in 15 EU-funded projects, has coordinated 6 Horizon projects, and was awarded an ERC Consolidator grant to investigate new AI techniques tailored to resource-limited settings.
Moderator: Melanie Goisauf (BBMRI-ERIC)
Dr. Melanie Goisauf is an accomplished social scientist with a PhD in Sociology (University of Vienna). She also studied at Royal Holloway, University of London, and completed the postgraduate programme “Sociology of Social Practices” at the Institute for Advanced Studies (IHS) Vienna. As a senior scientist at BBMRI-ERIC, she is involved in several research projects and serves on ethical advisory boards. Dr. Goisauf also leads the Ethics of AI Lab, which focuses on the ethical and social implications of artificial intelligence, and is in charge of the scientific coordination of the Horizon Europe project PERIFORMANCE.