"False"
Skip to content
printicon
Main menu hidden.

Interpretable Machine Learning

Generative mixture of linear models by DNN co-supervision

A postdoctoral project on interpretable machine learning, funded by the Kempe Foundations.


Head of project

Jun Yu
Professor

Project overview

Project period:

Start date: 2025-03-01

Participating departments and units at Umeå University

Department of Mathematics and Mathematical Statistics

Research area

Mathematics, Statistics

External funding

The Kempe Foundations

Project description

Artificial intelligence (AI) has become ubiquitous in our daily lives, even though we are often unaware that the technology is at work. Machine learning (ML) lies at the technical core of AI. With the enormous success of deep neural networks (DNNs), the research focus of ML has shifted from pursuing high accuracy alone to other qualities of an ML system. One of the most highly valued of these is interpretability. For some critical tasks, a black-box ML classifier is rejected even if it performs best on a test dataset. This caution against black-box ML is well grounded, since a test dataset usually cannot fully represent the phenomenon under study. As a proof of concept, we have developed a prototype approach that approximates the prediction of a DNN model by a piecewise linear function (i.e., linear decision boundaries), called the Mixture of Linear Models (MLM).
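
To make the general idea concrete, the following is a minimal Python sketch of a piecewise-linear surrogate in the spirit of MLM. It is not the project's actual algorithm: the region partition (k-means on the raw inputs), the per-region logistic models, and all names such as mlm_predict and experts are illustrative assumptions. In particular, the real method forms regions and trains the modules under DNN co-supervision, whereas here a small network is simply distilled into region-specific linear classifiers.

    # Illustrative sketch only: a piecewise-linear surrogate for a small DNN.
    # Assumptions (not from the project): k-means regions, logistic experts.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    # A toy dataset and a small neural network acting as the black-box teacher.
    X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
    dnn = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    dnn.fit(X, y)

    # Partition the input space into regions. Plain k-means on X is used here;
    # a DNN-co-supervised method could instead derive regions from the
    # network's internal representation.
    n_regions = 4
    regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X)
    labels = regions.predict(X)

    # Fit one interpretable linear classifier per region, trained to mimic
    # the teacher network's predictions.
    teacher_pred = dnn.predict(X)
    experts = []
    for r in range(n_regions):
        mask = labels == r
        # A region may contain a single teacher class; the "linear model"
        # then degenerates to a constant prediction.
        if len(np.unique(teacher_pred[mask])) == 1:
            experts.append(int(teacher_pred[mask][0]))
        else:
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[mask], teacher_pred[mask])
            experts.append(clf)

    def mlm_predict(X_new):
        """Route each point to its region's linear expert."""
        r_new = regions.predict(X_new)
        out = np.empty(len(X_new), dtype=int)
        for r in range(n_regions):
            m = r_new == r
            if not m.any():
                continue
            e = experts[r]
            out[m] = e if isinstance(e, int) else e.predict(X_new[m])
        return out

    # Fidelity: how often the piecewise-linear surrogate matches the DNN.
    fidelity = (mlm_predict(X) == dnn.predict(X)).mean()
    print(f"surrogate fidelity to DNN on training data: {fidelity:.3f}")

Each expert is an ordinary linear classifier, so within each region the decision boundary can be read off directly from its coefficients; the interpretability question then reduces to how the regions are formed and how faithfully the surrogate tracks the DNN.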

This project aims to develop MLM further by incorporating more sophisticated region-specific models and enhancing the training of its modules. These improvements will expand the method's potential applications and increase its prediction accuracy. The project will contribute novel algorithms for interpretable ML and create a platform for a formal study of the relationship between interpretability and prediction accuracy.


Latest update: 2025-03-26