Event date:
Event location:
The integration of artificial intelligence (AI) into healthcare promises to revolutionize diagnostics, treatment, and patient management. However, the "black box" nature of many AI models poses significant challenges to trust, safety, and equitable deployment. This talk will underscore the critical importance of transparency, reproducibility, and open science across the entire lifecycle of AI-driven healthcare solutions, from model development and validation to evaluation in clinical trials. I will explore how a lack of transparency can undermine critical evaluation, obscure biases, hinder reproducibility, and impede clinical adoption. Specifically, we will discuss the challenges of replicating AI model performance and the necessity of openly sharing code, data, and model parameters. Furthermore, I will introduce key reporting guidelines, specifically TRIPOD+AI and CONSORT-AI, which are designed to enhance the clarity and completeness of AI-related research. By advocating for rigorous reporting, open science practices, and reproducible model evaluation, we can foster a culture of transparency and collaboration, ensuring that AI in healthcare delivers on its potential while safeguarding patient well-being and promoting ethical innovation.
Keywords: Prediction models; artificial intelligence; transparency; reporting guidelines; open science.