
Structured Representations for Explainable Deep Learning

Time: Mon 2023-06-12, 14:00

Location: F3, Lindstedtsvägen 26 & 28, Stockholm

Video link: https://kth-se.zoom.us/j/66725845533

Language: English

Subject area: Computer Science

Doctoral student: Federico Baldassarre, Robotics, Perception and Learning (RPL)

Opponent: Associate Professor Hamed Pirsiavash, University of California, Davis, USA

Supervisor: Associate Professor Hossein Azizpour, Robotics, Perception and Learning (RPL); Professor Josephine Sullivan, Robotics, Perception and Learning (RPL); Associate Professor Kevin Smith, Computational Science and Technology (CST)



Abstract

Deep learning has revolutionized scientific research and is being used to make decisions in increasingly complex scenarios. With growing power comes a growing demand for transparency and interpretability. The field of Explainable AI aims to provide explanations for the predictions of AI systems. The state of the art of AI explainability, however, is far from satisfactory. For example, in Computer Vision, the most prominent post-hoc explanation methods produce pixel-wise heatmaps over the input domain, which are meant to visualize the importance of individual pixels of an image or video. We argue that such dense attribution maps are poorly interpretable to non-expert users because of the domain in which explanations are formed: we may recognize shapes in a heatmap, but they are just blobs of pixels. In fact, the input domain is closer to the raw data of digital cameras than to the interpretable structures that humans use to communicate, e.g. objects or concepts.

In this thesis, we propose to move beyond dense feature attributions by adopting structured internal representations as a more interpretable explanation domain. Conceptually, our approach splits a Deep Learning model into two parts: a perception step that takes dense representations as input, and a reasoning step that learns to perform the task at hand. At the interface between the two are structured representations that correspond to well-defined objects, entities, and concepts. These representations serve as the interpretable domain for explaining the predictions of the model, allowing us to move towards more meaningful and informative explanations.

The proposed approach introduces several challenges, such as how to obtain structured representations, how to use them for downstream tasks, and how to evaluate the resulting explanations. The works included in this thesis address these questions, validating the approach and providing concrete contributions to the field. For the perception step, we investigate how to obtain structured representations from dense representations, whether by designing them manually using domain knowledge or by learning them from data without supervision. For the reasoning step, we investigate how to use structured representations for downstream tasks, from Biology to Computer Vision, and how to evaluate the learned representations. For the explanation step, we investigate how to explain the predictions of models that operate in a structured domain, and how to evaluate the resulting explanations.

Overall, we hope that this work inspires further research in Explainable AI and helps bridge the gap between high-performing Deep Learning models and the need for transparency and interpretability in real-world applications.
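To make the perception/reasoning split concrete, the sketch below is a minimal, hypothetical PyTorch example. It is illustrative only and not the implementation from the thesis: the class names (Perception, Reasoning), the attention-based pooling into slots, and all dimensions are assumptions chosen to show how structured, object-level representations can sit at the interface between a dense encoder and a task head.

    # Hypothetical sketch of the perception/reasoning split described above.
    # All names and dimensions are illustrative, not the thesis implementation.
    import torch
    import torch.nn as nn

    class Perception(nn.Module):
        """Maps a dense input (e.g. an image) to K structured slots,
        each intended to correspond to an object, entity, or concept."""
        def __init__(self, num_slots=8, slot_dim=64):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, slot_dim, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.slot_queries = nn.Parameter(torch.randn(num_slots, slot_dim))

        def forward(self, images):                      # images: (B, 3, H, W)
            feats = self.backbone(images)               # (B, D, H', W')
            feats = feats.flatten(2).transpose(1, 2)    # (B, H'*W', D)
            # Each learned query attends over the dense feature map and
            # pools it into one structured slot.
            attn = torch.softmax(feats @ self.slot_queries.T, dim=1)  # (B, N, K)
            slots = attn.transpose(1, 2) @ feats        # (B, K, D)
            return slots

    class Reasoning(nn.Module):
        """Predicts the task output from the structured slots; explanations
        are formed over these slots rather than over individual pixels."""
        def __init__(self, slot_dim=64, num_classes=10):
            super().__init__()
            self.head = nn.Linear(slot_dim, num_classes)

        def forward(self, slots):                       # slots: (B, K, D)
            return self.head(slots.mean(dim=1))         # pool slots, classify

    perception, reasoning = Perception(), Reasoning()
    logits = reasoning(perception(torch.randn(2, 3, 64, 64)))
    print(logits.shape)  # torch.Size([2, 10])

In such a setup, an explanation would attribute the prediction to individual slots (objects or concepts) rather than to individual pixels, which is the shift in explanation domain that the abstract argues for.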

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-326958