Machine Learning Seminar

Using ontologies to enhance human understandability of global post-hoc explanations of black-box models


Photo: Roberto Confalonieri


Speaker: Roberto Confalonieri

Abstract: Interest in explainable artificial intelligence has grown strongly in recent years, driven by the need to convey safety and trust in the ‘how’ and ‘why’ of automated decision-making to users. While a plethora of approaches has been developed, only a few focus on how to use domain knowledge and on how it influences users’ understanding of explanations. In this talk, we show that ontologies can improve the human understandability of global post-hoc explanations presented in the form of decision trees. In particular, we introduce Trepan Reloaded, which builds on Trepan, an algorithm that extracts surrogate decision trees from black-box models. Trepan Reloaded incorporates ontologies, which model domain knowledge, into the extraction process in order to improve the understandability of the resulting explanations. We tested the understandability of the extracted explanations in a user study with four different tasks, evaluating the results in terms of response times and correctness, subjective ease of understanding and confidence, and similarity of free-text responses. The results show that decision trees generated with Trepan Reloaded, which takes domain knowledge into account, are consistently and significantly more understandable than those generated by standard Trepan. This enhanced understandability of post-hoc explanations is achieved with little compromise on the accuracy with which the surrogate decision trees replicate the behaviour of the original neural network models.
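For readers unfamiliar with surrogate-tree extraction, the sketch below illustrates the generic idea in Python with scikit-learn. Everything in it is an assumption made for illustration (the dataset, the model architecture, the fidelity check); it implements only the plain Trepan-style surrogate step, while the ontology-driven split selection that distinguishes Trepan Reloaded is indicated in a comment rather than implemented, since it is defined in the paper underlying the talk.

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Illustrative data and black box; both are placeholders, not from the talk.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the black-box model to be explained.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
black_box.fit(X_train, y_train)

# 2. Relabel the training data with the black box's own predictions,
#    so the surrogate mimics the model rather than the ground truth.
y_bb = black_box.predict(X_train)

# 3. Fit a shallow, global surrogate decision tree. Trepan Reloaded
#    additionally biases split selection using ontological domain
#    knowledge; plain CART is used here as a stand-in.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, y_bb)

# 4. Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")

The design point to notice is step 2: the tree is fitted to the black box’s predictions rather than to the ground-truth labels, so the reported fidelity measures how faithfully the surrogate replicates the model, which is exactly the trade-off the last sentence of the abstract refers to.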

Short bio: Roberto Confalonieri received his Ph.D. in Artificial Intelligence (with distinction) from the Polytechnic University of Catalonia in 2011. He has been an Assistant Professor at the Faculty of Computer Science of UniBZ since 2020. From 2018 to 2020 he led the eXplainable AI team at Alpha, the first European moonshot projects company, funded by Telefonica Research in Barcelona. From 2017 to 2018 he was project manager and researcher at the Smart Data Factory, the technology transfer centre of the Faculty of Computer Science of UniBZ, where he acquired and directed several research projects and collaborations with industry (raising a total of €1,732,353.26 in two years). From 2011 to 2016 he was a post-doctoral researcher at several research institutions across Europe (UPC BarcelonaTech, IRIT, Goldsmiths College, IIIA-CSIC, University of Barcelona). He is the PI of two research projects (one European), has participated in four European projects, and has taken part in a number of Italian national projects and collaborations with industry. He has published around 50 peer-reviewed articles in top AI conferences (IJCAI, AAAI, ECAI) and journals (AIJ, EAAI, AMAI). He co-edited the book ‘Concept Invention’, published by Springer in 2018. He is an associate editor of the Cognitive Systems Research journal published by Elsevier, and is co-editing the special issue ‘The Role of Ontologies and Knowledge in Explainable AI’ to be published in the Semantic Web Journal by IOS Press. He regularly organises scientific events: he was co-chair of an invited symposium at CogSci 2019, of the international workshop series Methods for Interpretation of Industrial Event Logs (MIEL @IDEAL 2018, MIEL @BPM 2019) and Data meets Applied Ontologies (DAO @JOWO 2017, DAO-SI @JOWO 2019, DAO-XAI @BASK 2021), and of C3GI 2018. He serves as Senior PC and PC member at top-tier AI conferences such as IJCAI, AAAI, and ECAI. In 2020 he received the Italian ASN as ‘Professore II Fascia’ in the scientific sectors 01/B1 (Computer Science) and 09/H1 (Information Processing Systems), as well as the Catalan habilitation as ‘Agregat’ (equivalent to Professore II Fascia); he has also held the Catalan habilitation as ‘Lector’ since 2013. His main research topics are in AI, particularly Trustworthy and Explainable AI, Knowledge Representation, and Applied Ontologies. A major focus of his work is human-centred AI, specifically the role played by explicit knowledge in providing explanations of black-box models that are human-understandable, reusable in different contexts, and adaptable to stakeholders with different backgrounds.

Web page: https://www.inf.unibz.it/~rconfalonieri/


Topic: ML Seminar

Time: Aug 25, 2021 09:00 AM

Join Zoom Meeting

https://uib.zoom.us/j/61389767568?pwd=cWRlMkJKSWxlL3VLT3NJNnFBWVJvZz09

Meeting ID: 613 8976 7568

Password: NXxQ8u8L