A study relating to the PETRAS ICE-AI Project has been published by authors Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans, and Rhianne Jones. The paper, titled “Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable”, maps expert decision-making strategies that can potentially serve as explainability design guidelines.
The study argues that existing explainability approaches lack usability and are largely ineffective when applied in a decision-making context. In some cases, this can result in automation bias and a loss of human expertise. For this reason, the authors propose a novel framework aimed at supporting the naturalistic decision-making strategies employed by both domain experts and novices.
Read the paper here.