Explainable AI: New framework increases transparency in decision-making systems
1 min read
Image caption: In high-stakes situations like medical diagnostics, understanding why an AI model made a decision is as important as the decision itself. […]
Image credit: ChatGPT image prompted by Salar Fattahi.
Article by Patricia DeLacey, University of Michigan College of Engineering. Tech Xplore – June 13, 2025.
Research article: arXiv:2502.06775.
A new explainable AI technique classifies images transparently without compromising accuracy. Developed at the University of Michigan, the method opens AI up to situations where understanding why a decision was made is just as important as the decision itself, such as medical diagnostics. If an AI model flags a tumor as malignant without specifying what prompted the result—such as its size, its shape or a shadow in the image—doctors cannot verify the result or explain it to the patient. Worse, […]