Half-day workshop on Explainable Artificial Intelligence (XAI)

November 12, 2024, 1:30 pm – 6:00 pm | Onsite at Blücherstraße 17, Karlsruhe, Germany, and available online
Workshop Objective:

This hybrid half-day workshop will explore key advancements in explainable AI (XAI) at the intersection of AI, Mathematical Sciences, Engineering, and Economics. Experts will discuss both theoretical foundations and practical applications, focusing on making AI models more transparent and interpretable, which is essential for building trust in complex AI systems.

Preliminary list of speakers
Prof. Dr. Gitta Kutyniok

Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians-Universität München

Title: Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement

Abstract: Deep learning still has drawbacks in terms of trustworthiness, which encompasses comprehensibility, fairness, safety, and reliability. To mitigate the potential risks of AI, clear obligations associated with trustworthiness have been proposed via regulatory guidelines, e.g., in the European AI Act. A central question is therefore to what extent trustworthy deep learning can be realized. Establishing the properties constituting trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. Motivated by the observation that the current evolution of deep learning models necessitates a change in computing technology, we derive a mathematical framework which enables us to analyze whether a transparent implementation in a computing model is feasible. As an example, we apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital and analog computing models, represented by Turing and Blum-Shub-Smale machines, respectively. Based on previous results, we find that Blum-Shub-Smale machines have the potential to establish trustworthy solvers for inverse problems under fairly general conditions, whereas Turing machines cannot guarantee trustworthiness to the same degree.

Maximilian Fleissner

PhD candidate at TUM, School of Computation, Information and Technology

Title: Explaining (Kernel) Clustering via Decision Trees

Abstract: Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly little work on inherently interpretable clustering methods. Recently, there has been a surge of interest in explaining the classic k-means algorithm using axis-aligned decision trees. However, interpretable variants of k-means have limited applicability in practice, where more flexible clustering methods are often needed to obtain useful partitions of the data. We investigate interpretable kernel clustering and propose algorithms that construct decision trees to approximate the partitions induced by kernel k-means, a nonlinear extension of k-means. Our method attains worst-case bounds on the clustering cost induced by the tree. In addition, we introduce the notion of an explainability-to-noise ratio for mixture models. Assuming sub-Gaussianity of the mixture components, we derive upper and lower bounds on the error rate of a suitably constructed decision tree, capturing the intuition that well-clustered data can indeed be explained well with a decision tree.
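To make the idea behind the abstract concrete, here is a minimal pure-Python sketch (not the speaker's algorithm): a k=2 k-means clustering is computed with Lloyd's algorithm, and then explained by a single axis-aligned threshold, i.e., a depth-1 decision tree. The helper names `kmeans_2` and `best_stump` are illustrative, and a depth-1 stump stands in for a full decision tree.

```python
def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(ps):
    """Coordinate-wise mean of a non-empty list of points."""
    n = len(ps)
    return tuple(sum(c) / n for c in zip(*ps))

def kmeans_2(points, iters=20):
    """Lloyd's algorithm for k=2; returns labels and the two centroids."""
    c0, c1 = points[0], points[-1]  # naive initialisation for the sketch
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, c0) <= dist2(p, c1) else 1 for p in points]
        c0 = mean([p for p, l in zip(points, labels) if l == 0])
        c1 = mean([p for p, l in zip(points, labels) if l == 1])
    return labels, (c0, c1)

def best_stump(points, labels):
    """Axis-aligned split (feature, threshold) that misclassifies the
    fewest points w.r.t. the given cluster labels."""
    best_errs, best_split = len(points) + 1, None
    for f in range(len(points[0])):          # try every feature
        for t in {p[f] for p in points}:     # and every observed threshold
            pred = [0 if p[f] <= t else 1 for p in points]
            for q in (pred, [1 - x for x in pred]):  # both label orientations
                errs = sum(a != b for a, b in zip(q, labels))
                if errs < best_errs:
                    best_errs, best_split = errs, (f, t)
    return best_errs, best_split

# Two well-separated blobs: one threshold explains the clustering exactly.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3),
          (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
labels, _ = kmeans_2(points)
errs, (feat, thr) = best_stump(points, labels)
print(errs, feat)
```

On such well-clustered data the stump reproduces the k-means partition with zero misclassifications; the talk's results quantify how well this works in the worst case, and for kernel k-means rather than plain k-means.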

Dr. Vikram Sunkara

Head of Explainable A.I. for Biology, Zuse Institute Berlin

Title: A “Deep” Dive into how Neural Networks see Data

Abstract: Artificial neural networks have become a ubiquitous tool in society. They permeate nearly all facets of our modern lives, from leisure recommendations to more critical medical diagnoses. In this talk, we will take a "Deep" dive into how neural networks perceive and process data and what mathematical patterns emerge inside DNNs. We will study a simple artificial neural network and describe its design through the lens of functional analysis; understand its learning mechanism through the lens of stochastics; decode its inner representation through the lens of geometry; and, most importantly, with our new understanding and intuition, we will look at some real-world applications in medicine and examine what the architectures have learnt.


Registration

Participation in the workshop is free of charge. To register, please follow this link: https://sop.ior.kit.edu/english/663.php

Deadline for registration: November 1, 2024

Organizers


Prof. Dr. Gitta Kutyniok
Maximilian Fleissner
Dr. Vikram Sunkara