IJCAI 2020 Tutorial on

Trustworthiness of Interpretable Machine Learning

January 2021 @ Yokohama, Japan


Speakers

Quanshi Zhang and Zhanxing Zhu

Overview

Deep neural networks (DNNs) have brought great success to a wide range of applications in computer vision, computational linguistics, and AI. However, foundational principles explaining the DNNs' success, the trustworthiness of DNNs, and the DNNs' resilience to adversarial attacks are still largely missing. Within the scope of explainable AI, quantifying the trustworthiness of explanations of network predictions and analyzing the trustworthiness of DNN features have become compelling yet controversial topics. Related issues include (1) the quantification of the trustworthiness of network features, (2) the objectiveness, robustness, and semantic strictness of explanations of DNNs, and (3) the semantic strictness of the interpretability of explainable neural networks. Rethinking the trustworthiness and fairness of existing interpretable machine learning methods is of significant value for the further development of interpretable machine learning.

This tutorial aims to bring together researchers, engineers, and industrial practitioners who are concerned with the interpretability, safety, and reliability of artificial intelligence. It introduces a number of new findings on the above issues from recent papers by the speakers, as well as from some classic studies. Critical discussion of the strengths and limitations of current explainable-AI algorithms points to promising research directions. The tutorial is expected to be of value for critical industrial applications such as medical diagnosis, finance, and autonomous driving.


Schedule

Jan 7th, Afternoon Session 1.2, 9:40 a.m. - 11:15 a.m. UTC

Speaker        | Topic                                        | Time                         | Links
Quanshi Zhang  | Efforts in Pushing XAI Towards Science       | 9:40 a.m. - 10:30 a.m. UTC   | YouTube, Zoom, PDF
Zhanxing Zhu   | The Adversarial Example and Interpretability | 10:30 a.m. - 11:15 a.m. UTC  | YouTube, Zoom

Please contact Quanshi Zhang if you have questions.