

Workshop

Explainable AI for Computer Vision: Where Are We and Where Are We Going?

Robin Hesse

Sun 29 Sep, 5 a.m. PDT

Keywords:  RAI  

Deep neural networks (DNNs) are an essential component in the field of computer vision and achieve state-of-the-art results in almost all of its sub-disciplines. While DNNs excel at predictive performance, they are often too complex for humans to understand, which is why they are commonly referred to as “black-box models”. This is of particular concern when DNNs are applied in safety-critical domains such as autonomous driving or medicine. With this problem in mind, explainable artificial intelligence (XAI) aims to gain a better understanding of DNNs, ultimately leading to more robust, fair, and interpretable models. To this end, a variety of approaches have been developed, such as attribution maps, intrinsically explainable models, and mechanistic interpretability methods. While this important field of research is gaining more and more traction, there is also justified criticism of the way in which the research is conducted. For example, the term “explainability” itself is not properly defined and depends heavily on the end user and the task, leading to ill-defined research questions and a lack of standardized evaluation practices. The goals of this workshop are thus twofold:

1. Discussion and dissemination of ideas at the cutting-edge of XAI research (“Where are we?”)
2. A critical introspection on the challenges faced by the community and the way to go forward (“Where are we going?”)
