Model interpretability and understanding for PyTorch
Shapley Interactions and Shapley Values for Machine Learning
Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP.
Collection of NLP model explanations and accompanying analysis tools
An open-source library for the interpretability of time series classifiers
Explainable AI in Julia.
A set of notebooks guiding the process of fine-grained image classification of bird species, using PyTorch-based deep neural networks.
Counterfactual SHAP: a framework for counterfactual feature importance
Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑
Materials for the Lab "Explaining Neural Language Models from Internal Representations to Model Predictions" at AILC LCL 2023 🔍
The official repo for the EACL 2023 paper "Quantifying Context Mixing in Transformers"
Code and data for the ACL 2023 NLReasoning Workshop paper "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods" (Feldhus et al., 2023)
Efficient and accurate explanation estimation with distribution compression (ICLR 2025 Spotlight)
⛈️ Code for the paper "End-to-End Prediction of Lightning Events from Geostationary Satellite Images"
Implementation of the Integrated Directional Gradients method for Deep Neural Network model explanations.
Sum-of-Parts: Self-Attributing Neural Networks with End-to-End Learning of Feature Groups
Reproducible code for our paper "Explainable Learning with Gaussian Processes"
Robustness of Global Feature Effect Explanations (ECML PKDD 2024)
Bachelor's thesis for a degree in Economics at HSE University, Saint Petersburg (2022)
This repository contains the code and material to reproduce the results of the ICML'25 paper "Gradient-based Explanations for Deep Learning Survival Models".
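Most of the repositories above compute feature attributions in one form or another. As a minimal sketch of what that looks like in practice, the snippet below uses Captum (the first entry above) with Integrated Gradients; the two-layer model and the random input are toy placeholders, not taken from any of the listed projects, and the example assumes `torch` and `captum` are installed.

```python
# Minimal sketch: per-feature attributions with Captum's Integrated Gradients.
# The model and input are hypothetical placeholders for illustration only.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy classifier over 10 input features and 3 output classes.
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

# A single example (batch of 1) whose prediction we want to explain.
inputs = torch.randn(1, 10)

ig = IntegratedGradients(model)
# Attribute the score of class 0 to each of the 10 input features.
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)

print(attributions)  # attribution scores, same shape as `inputs`
print(delta)         # convergence error of the integral approximation
```

The same pattern (wrap a model, call an `attribute`-style method, inspect per-feature scores) recurs across the SHAP-, LRP-, and gradient-based tools listed here, though each library exposes its own API.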