Tarek R. Besold: Symbols, Networks, Explanations: A Complicated Ménage à Trois
Discussions of interpretability (or even explainability) in AI and ML are gaining popularity in both academia and industry. I will briefly characterize four notions of explainable AI/ML that cut across research fields: 1) opaque systems that offer no insight whatsoever, 2) interpretable systems whose mechanisms users can analyse mathematically, 3) comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached, and 4) explainable systems, where automated reasoning is central to crafting the explanations that accompany the output.
Against that backdrop, we will then look into different lines of work relating to questions of comprehensibility and explainability of (types of) AI formalisms and approaches. This includes the interpretability of learned logic programs in an ILP setting (as a worked example for symbolic representations more generally) and ways of extracting decision trees from (certain types of) trained neural networks (giving symbolic insight into the global rules learned by the respective connectionist model).
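The decision-tree extraction mentioned above can be illustrated with a minimal global-surrogate sketch: train a small neural network, then fit a shallow decision tree to the network's own predictions so that the tree's splits approximate the rules the network has learned. This is a generic illustration of the surrogate-tree idea, not the specific method discussed in the talk; the dataset, model sizes, and hyperparameters are arbitrary choices for demonstration.

```python
# Global surrogate sketch: distil a trained neural network into a
# shallow, human-readable decision tree. (Illustrative only; all
# hyperparameters below are arbitrary assumptions.)
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy binary classification data.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# The opaque "teacher": a small multilayer perceptron.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)

# The comprehensible "student": a shallow tree trained on the
# network's predicted labels rather than the ground truth, so its
# splits mimic the network's global decision behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

# Fidelity: how often the tree agrees with the network it explains.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The extracted symbolic rules, as nested threshold tests.
print(export_text(surrogate, feature_names=["x0", "x1"]))
```

The fidelity score (agreement between tree and network) rather than raw accuracy is the relevant quality measure here, since the tree is meant to explain the network, not the data.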
Tarek R. Besold, PhD, is the AI Lead and a Senior Research Scientist at the Alpha Health AI Lab in Barcelona. Before joining Telefonica Innovation Alpha, he was a Lecturer/Assistant Professor in Data Science at City, University of London, conducting research at the intersection of artificial intelligence, computational creativity, and cognitive systems.
Among other roles, Tarek was the General Chair of the HLAI 2016 and 2018 Joint Multi-Conferences on Human-Level Artificial Intelligence, and founder and/or organizer of several international workshop series bridging AI and cognitive science. He was co-editor of the books Computational Creativity Research: Towards Creative Machines and Concept Invention: Foundations, Implementations, Social Aspects and Applications. In addition, Tarek holds editorial positions with several scientific journals in AI and neighboring fields (including Cognitive Systems Research and the Journal of Artificial General Intelligence).
Tarek also serves as chairman of the National Working Group on Standards and Norms for AI at the German Institute for Standardization (DIN) (part of ISO/IEC JTC-1 SC 42 “Artificial Intelligence”), and as an expert member of the Digital Future Society’s Think Tank Working Group on “Data Ethics and the Challenge of Digital Privacy”.