Mohammad Haeri, Mohammad Hossein Jarrahi
Artificial intelligence in medical diagnosis, including pathology, offers unprecedented opportunities. However, the lack of explainability in these systems raises concerns about adoption, accountability, and compliance. This article examines the problem of opacity in end-to-end AI systems, in which pathologists may serve only as trainers of the algorithm. As a solution, it proposes a "pathologists-in-the-loop" approach: continuous collaboration between pathologists and AI systems through the concepts of parameterization and implicitization. This human-centered workflow enhances the pathologist's role in the diagnostic process, creating an explainable system rather than automating the pathologist out of it.