GitHub - jphall663/awesome-machine-learning-interpretability: A curated list of awesome responsible machine learning resources.
acd-
"Produces hierarchical interpretations for a single prediction made by a pytorch neural network. Official code for Hierarchical interpretations for neural network predictions.”
"Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive tools.”
"Interpretability and explainability of data and machine learning models.”
"A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.”
"Python Accumulated Local Effects package.”
"A Python package for unwrapping ReLU DNNs.”
See Algorithmic Fairness.
"Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.”
"An open-source NLP research library, built on PyTorch.”
"Code for 'High-Precision Model-Agnostic Explanations' paper.”
"This code implements the Bayesian or-of-and algorithm as described in the BOA paper. We include the tictactoe dataset in the correct formatting to be used by this code.”
Rudin group at Duke Bayesian case model implementation
"Research code for auditing and exploring black box machine-learning models.”
CalculatedContent, WeightWatcher-
"The WeightWatcher tool for predicting the accuracy of Deep Neural Networks."
"Model interpretability and understanding for PyTorch.”
"contains the code originally forked from the ImageNet training in PyTorch that is modified to present the performance of classifier-agnostic saliency map extraction, a practical algorithm to train a classifier-agnostic saliency mapping by simultaneously training a classifier and a saliency mapping.”
"Package for causal inference in graphs and in the pairwise settings. Tools for graph structure recovery and dependencies are included.”
"Uplift modeling and causal inference with machine learning algorithms.”
cdt15, Causal Discovery Lab., Shiga University-
"LiNGAM is a new method for estimating structural equation models or linear causal Bayesian networks. It is based on using the non-Gaussianity of the data."
"Beyond Accuracy: Behavioral Testing of NLP models with CheckList.”
"An adversarial example library for constructing attacks, building defenses, and benchmarking both.”
"Contextual AI adds explainability to different stages of machine learning pipelines."
ContrastiveExplanation - Foil Trees-
"provides an explanation for why an instance had the current outcome (fact) rather than a targeted outcome of interest (foil). These counterfactual explanations limit the explanation to the features relevant in distinguishing fact from foil, thereby disregarding irrelevant features.”
"a CLI that provides a generic automation layer for assessing the security of ML models.”
"moDel Agnostic Language for Exploration and eXplanation.”
"Remove problematic gender bias from word embeddings.”
"provides a unified framework for state-of-the-art gradient and perturbation-based attribution methods. It can be used by researchers and practitioners for better understanding the recommended existing models, as well as for benchmarking other attribution methods."
"This repository implements the methods in 'Learning Important Features Through Propagating Activation Differences' by Shrikumar, Greenside & Kundaje, as well as other commonly-used methods such as gradients, gradient-times-input (equivalent to a version of Layerwise Relevance Propagation for ReLU networks), guided backprop and integrated gradients.”
"the code required to run the Deep Visualization Toolbox, as well as to generate the neuron-by-neuron visualizations using regularized optimization.”
"DIANNA is a Python package that brings explainable AI (XAI) to your research project. It wraps carefully selected XAI methods in a simple, uniform interface. It's built by, with and for (academic) researchers and research software engineers working on machine learning projects.”
DiCE-
"Generate Diverse Counterfactual Explanations for any machine learning model.”
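Counterfactual explanations like the ones DiCE generates answer "what minimal change would have flipped this prediction?" A brute-force sketch of that idea (this is not the DiCE API; the model and feature deltas below are hypothetical toys):

```python
# Brute-force counterfactual search on a toy classifier: find the
# smallest feature change that flips the decision (illustrative only).
from itertools import product

def toy_model(x):
    # Hypothetical scoring rule: approve if income + 2*savings > 10.
    return 1 if x["income"] + 2 * x["savings"] > 10 else 0

def counterfactual(x, deltas=(-2, -1, 0, 1, 2)):
    """Return the lowest-cost variant of x whose prediction flips."""
    original = toy_model(x)
    best, best_cost = None, float("inf")
    for di, ds in product(deltas, repeat=2):
        cand = {"income": x["income"] + di, "savings": x["savings"] + ds}
        cost = abs(di) + abs(ds)  # L1 distance as the change penalty
        if toy_model(cand) != original and cost < best_cost:
            best, best_cost = cand, cost
    return best

cf = counterfactual({"income": 5, "savings": 2})  # a rejected applicant
# -> {"income": 5, "savings": 3}: raising savings by 1 flips the decision
```

Real libraries add diversity constraints and restrict changes to actionable features; the exhaustive search here only works for tiny feature spaces.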
"DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions. DoWhy is based on a unified language for causal inference, combining causal graphical models and potential outcomes frameworks.”
"A python library for decision tree visualization and model interpretation.”
ecco-
"Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTA, T5, and T0).”
"eXplainable AI for Tabular Data"
eli5-
"A library for debugging/inspecting machine learning classifiers and explaining their predictions.”
"aims to support data scientists and machine learning (ML) engineers in explaining, testing and documenting AI/ML models, developed in-house or acquired externally. The explabox turns your ingestibles (AI/ML model and/or dataset) into digestibles (statistics, explanations or sensitivity insights).”
Explainable Boosting Machine EBM/GA2M-
"an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior, or understand the reasons behind individual predictions.”
"a tool that inspects your system outputs, identifies what is working and what is not working, and helps inspire you with ideas of where to go next.”
"Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models.”
"Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code.”
"Python code for training fair logistic regression classifiers.”
"a Python package that empowers developers of artificial intelligence (AI) systems to assess their system's fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as metrics for model assessment. Besides the source code, this repository also contains Jupyter notebooks with examples of Fairlearn usage.”
"a Python toolbox for auditing machine learning models for bias."
"contains implementations of measures used to quantify discrimination.”
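One of the simplest discrimination measures such toolkits quantify is the demographic parity difference: the gap in positive-outcome rates between groups. A minimal hand computation on toy data (not any library's API):

```python
# Demographic parity difference: gap in selection rates between the
# most- and least-favored groups (toy predictions, illustrative only).
def selection_rate(y_pred, groups, group):
    picked = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(picked) / len(picked)

def demographic_parity_difference(y_pred, groups):
    rates = {g: selection_rate(y_pred, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(y_pred, groups)  # 0.75 - 0.25 = 0.5
```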
"meant to facilitate the benchmarking of fairness aware machine learning algorithms.”
Rudin group at Duke falling rule list implementation
"A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX.”
"The testing framework dedicated to ML models, from tabular to LLMs. Scan AI models to detect risks of biases, performance issues and errors. In 4 lines of code.”
"implements Genetic Programming in Python, with a scikit-learn inspired and compatible API.”
Grad-CAM (GitHub topic)-
Grad-CAM is a technique for making convolutional neural networks more transparent by visualizing the regions of input that are important for predictions in computer vision models.
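The core Grad-CAM arithmetic is small: weight each activation channel by its spatially averaged gradient, sum, and apply a ReLU. A pure-Python sketch on hand-written 2x2 maps (real use requires a CNN framework's autograd to supply the activations and gradients):

```python
# Toy Grad-CAM: channel weights are globally average-pooled gradients;
# the class activation map is a ReLU'd weighted sum of channel maps.
def grad_cam(activations, gradients):
    """activations, gradients: per-channel HxW nested lists."""
    h, w = len(activations[0]), len(activations[0][0])
    # Global-average-pool each channel's gradient map.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a_map, wt in zip(activations, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wt * a_map[i][j]
    return [[max(v, 0.0) for v in row] for row in cam]  # ReLU

acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
cam = grad_cam(acts, grads)  # -> [[1.0, 0.0], [0.0, 2.0]]
```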
"Builds gradient boosted classification trees and gradient boosted regression trees on a parsed data set."
H2O-3 Penalized Generalized Linear Models
"Fits a generalized linear model, specified by a response variable, a set of predictors, and a description of the error distribution."
H2O-3 Sparse Principal Components
"Builds a generalized low rank decomposition of an H2O data frame."
"Large-language Model Evaluation framework with Elo Leaderboard and A-B testing."
HateCheck: A dataset and test suite from an ACL 2021 paper, offering functional tests for hate speech detection models, including extensive case annotations and testing functionalities.
"Python package for concise, transparent, and accurate predictive modeling. All sklearn-compatible and easy to use.”
A comprehensive Python library to analyze and interpret neural network behaviors in Keras, featuring a variety of methods like Gradient, LRP, and Deep Taylor.
"a variation on computing the gradient of the prediction output w.r.t. features of the input. It requires no modification to the original network, is simple to implement, and is applicable to a variety of deep models (sparse and dense, text and vision).”
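Integrated gradients accumulate the gradient along a straight path from a baseline to the input, then scale by the input-baseline difference. A minimal sketch for f(x, y) = x*y, whose gradient (y, x) is known analytically, using a midpoint Riemann sum (illustrative, not the library's implementation):

```python
# Integrated gradients via a midpoint Riemann sum along the path from
# a zero baseline to the input, for the toy function f(x, y) = x * y.
def integrated_gradients(x, baseline=(0.0, 0.0), steps=50):
    grad = lambda p: (p[1], p[0])  # analytic gradient of f(x, y) = x*y
    total = [0.0, 0.0]
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of each path segment
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad(point)
        total = [t + gi for t, gi in zip(total, g)]
    return [(xi - b) * t / steps for xi, b, t in zip(x, baseline, total)]

attr = integrated_gradients((2.0, 3.0))
# attributions sum to f(x) - f(baseline) = 6 (the completeness axiom)
```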
"induces rules to explain the predictions of a trained neural network, and optionally also to explain the patterns that the model captures from the training data, and the patterns that are present in the original dataset.”
"an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof.”
"integrates knowledge graphs (KG) with machine learning methods to generate interesting meaningful insights. It helps to generate human- and machine-readable decisions to provide assistance to users and enhance efficiency.”
Keract is a tool for visualizing activations and gradients in Keras models; it's meant to support a wide range of Tensorflow versions and to offer an intuitive API with Python examples.
"a high-level toolkit for visualizing and debugging your trained keras neural net models.”
L2X-
"Code for replicating the experiments in the paper Learning to Explain: An Information-Theoretic Perspective on Model Interpretation at ICML 2018, by Jianbo Chen, Mitchell Stern, Martin J. Wainwright, Michael I. Jordan.”
"LangFair is a Python library for conducting use-case level LLM bias and fairness assessments"
"LangTest: Deliver Safe & Effective Language Models"
learning-fair-representations-
"Python numba implementation of Zemel et al. 2013 http://www.cs.toronto.edu/~toni/Papers/icml-final.pdf"
leeky: Leakage/contamination testing for black box language models-
"leeky - training data contamination techniques for blackbox models"
leondz / garak, LLM vulnerability scanner-
"LLM vulnerability scanner"
LiFT-
"The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness and the mitigation of bias in large-scale machine learning workflows. The measurement module includes measuring biases in training data, evaluating fairness metrics for ML models, and detecting statistically significant differences in their performance across different subgroups.”
"Curate better data for LLMs."
lime-
"explaining what machine learning classifiers (or models) are doing. At the moment, we support explaining individual predictions for text classifiers or classifiers that act on tables (numpy arrays of numerical or categorical data) or images, with a package called lime (short for local interpretable model-agnostic explanations).”
lit-
"The Learning Interpretability Tool (LIT, formerly known as the Language Interpretability Tool) is a visual, interactive ML model-understanding tool that supports text, image, and tabular data. It can be run as a standalone server, or inside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex AI notebooks.”
LLM Dataset Inference: Did you train on my dataset?-
"Official Repository for Dataset Inference for LLMs"
"LOFO (Leave One Feature Out) Importance calculates the importances of a set of features based on a metric of choice, for a model of choice, by iteratively removing each feature from the set, and evaluating the performance of the model, with a validation scheme of choice, based on the chosen metric.”
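The leave-one-feature-out idea reduces to "validation score with the full set minus the score without the feature." A toy sketch, where the hypothetical `score` function stands in for retraining a model under a validation scheme:

```python
# Leave-one-feature-out importance: the score drop when each feature
# is removed from the set (toy scorer, illustrative only).
def score(features):
    # Hypothetical validation score as a function of the feature set.
    contributions = {"age": 0.10, "income": 0.25, "zip": 0.02}
    return 0.5 + sum(contributions[f] for f in features)

def lofo_importance(features):
    base = score(features)
    return {f: base - score([g for g in features if g != f])
            for f in features}

imp = lofo_importance(["age", "income", "zip"])
# -> {"age": 0.10, "income": 0.25, "zip": 0.02}
```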
"The Layer-wise Relevance Propagation (LRP) algorithm explains a classifier's prediction specific to a given data point by attributing relevance scores to important components of the input by using the topology of the learned model itself."
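For a single linear layer, the basic LRP-0 rule redistributes an output's relevance to inputs in proportion to their contributions a_j * w_jk, conserving total relevance. A toy sketch (one layer only; real LRP applies this recursively through the network with stabilized variants):

```python
# LRP-0 on one linear layer: input j receives relevance proportional
# to its share a_j * W[j][k] of each output pre-activation z_k.
def lrp_linear(a, W, relevance_out, eps=1e-9):
    """a: input activations; W[j][k]: weight from input j to output k."""
    z = [sum(a[j] * W[j][k] for j in range(len(a)))
         for k in range(len(W[0]))]
    R = [0.0] * len(a)
    for j in range(len(a)):
        for k in range(len(z)):
            R[j] += a[j] * W[j][k] / (z[k] + eps) * relevance_out[k]
    return R

a = [1.0, 2.0]
W = [[1.0], [0.5]]           # one output neuron: z = 1*1 + 2*0.5 = 2
R = lrp_linear(a, W, [2.0])  # relevance is conserved: sum(R) == 2
```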
"enables developers to build AI tools that need access to real-time data to perform their tasks.”
"an open-source library to audit data privacy in statistical and machine learning algorithms. The tool can help in the data protection impact assessment process by providing a quantitative analysis of the fundamental privacy risks of a (machine learning) model.”
"a set of components for building simple simulations that explore the potential long-run impacts of deploying machine learning-based decision systems in social environments.”
"Mlxtend (machine learning extensions) is a Python library of useful tools for the day-to-day data science tasks.”
mllp-
"This is a PyTorch implementation of Multilayer Logical Perceptrons (MLLP) and Random Binarization (RB) method to learn Concept Rule Sets (CRS) for transparent classification tasks, as described in our paper: Transparent Classification with Multilayer Logical Perceptrons and Random Binarization.”
Guide on implementing and understanding monotonic constraints in XGBoost models to enhance predictive performance with practical Python examples.
"a library written in Python implementing a rigorous and flexible mathematical programming formulation to solve the optimal binning problem for a binary, continuous and multiclass target type, incorporating constraints not previously addressed.”
Optimal Sparse Decision Trees-
"This accompanies the paper, "Optimal Sparse Decision Trees" by Xiyang Hu, Cynthia Rudin, and Margo Seltzer.”
"This repository contains codes that demonstrate the use of fairness metrics, bias mitigations and explainability tool.”
"Python Partial Dependence Plot toolbox. Visualize the influence of certain features on model predictions for supervised machine learning algorithms, utilizing partial dependence plots.”
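A one-dimensional partial dependence curve is just the model's prediction averaged over the data while one feature is swept through a grid. A hand-rolled sketch with a toy stand-in model (not the toolbox's API):

```python
# 1-D partial dependence by hand: sweep feature x1 over a grid and
# average the model's prediction over the observed x2 values.
def model(x1, x2):
    return 2 * x1 + x2 * x2  # toy stand-in for a fitted model

def partial_dependence(data, grid):
    """data: list of (x1, x2) rows; returns one averaged value per grid point."""
    return [sum(model(g, x2) for _, x2 in data) / len(data) for g in grid]

data = [(0, 1), (0, 2), (0, 3)]
pd_curve = partial_dependence(data, grid=[0, 1, 2])
# the curve rises by 2 per grid step, recovering the model's x1 slope
```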
"a new Python toolbox for interpretable machine learning model development and validation. Through low-code interface and high-code APIs, PiML supports a growing list of inherently interpretable ML models.”
"A Python package for fitting Quinlan's Cubist regression model"
"Implementation of privacy-preserving SVM assuming public model private data scenario (data in encrypted but model parameters are unencrypted) using adequate partial homomorphic encryption.”
"This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpretable Image Recognition" (to appear at NeurIPS 2019), by Chaofan Chen (Duke University), Oscar Li
See dalex.
"Python Individual Conditional Expectation Plot Toolbox.”
"Generalized Additive Models in Python.”
"PyMC (formerly PyMC3) is a Python package for Bayesian statistical modeling focusing on advanced Markov chain Monte Carlo (MCMC) and variational inference (VI) algorithms. Its flexibility and extensibility make it applicable to a large suite of problems.”
"The SS3 text classifier is a novel and simple supervised machine learning model for text classification which is interpretable, that is, it has the ability to naturally (self)explain its rationale.”
"a package with state of the art methods for Explainable AI for computer vision. This can be used for diagnosing model predictions, either in production or while developing models. The aim is also to serve as a benchmark of algorithms and metrics for research of new explainability methods.”
"PyTorch implementation of Keras already existing project: https://github.com/albermax/innvestigate/.”
"Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations."
"This directory contains the code and resources of the following paper: "Rationalizing Neural Predictions". Tao Lei, Regina Barzilay and Tommi Jaakkola. EMNLP 2016. PDF Slides. The method learns to provide justifications, i.e. rationales, as supporting evidence of neural networks' prediction.”
"Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems.”
REVISE: REvealing VIsual biaSEs-
"A tool that automatically detects possible forms of bias in a visual dataset along the axes of object-based, attribute-based, and geography-based patterns, and from which next steps for mitigation are suggested.”
RISE-
"contains source code necessary to reproduce some of the main results in the paper: Vitali Petsiuk, Abir Das, Kate Saenko (BMVC, 2018) [and] RISE: Randomized Input Sampling for Explanation of Black-box Models.”
"a machine learning method to fit simple customized risk scores in python.”
"a package we (students in the MadryLab) created to make training, evaluating, and exploring neural networks flexible and easy.”
SAGE-
"SAGE (Shapley Additive Global importancE) is a game-theoretic approach for understanding black-box machine learning models. It quantifies each feature's importance based on how much predictive power it contributes, and it accounts for complex feature interactions using the Shapley value.”
"Python implementations of commonly used sensitivity analysis methods. Useful in systems modeling to calculate the effects of model inputs or exogenous factors on outputs of interest.”
"User-friendly Python module for machine learning explainability," featuring PD and ALE plots, LIME, SHAP, permutation importance and Friedman's H, among other methods.
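Permutation importance, one of the methods listed above, can be computed by hand: shuffle one feature column and measure the score drop. A toy sketch with a fixed seed (illustrative, not the module's API):

```python
# Permutation importance by hand: shuffle one column and measure the
# accuracy drop on toy data with a toy model.
import random

def accuracy(model, X, y):
    return sum(model(row) == yi for row, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, col, seed=0):
    base = accuracy(model, X, y)
    shuffled = [row[col] for row in X]
    random.Random(seed).shuffle(shuffled)
    X_perm = [row[:col] + [v] + row[col + 1:]
              for row, v in zip(X, shuffled)]
    return base - accuracy(model, X_perm, y)

model = lambda row: int(row[0] > 0)   # depends only on feature 0
X = [[-1, 5], [1, 5], [-2, 5], [2, 5]]
y = [0, 1, 0, 1]
drop0 = permutation_importance(model, X, y, col=0)
drop1 = permutation_importance(model, X, y, col=1)  # 0.0: unused feature
```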
Historical link. Merged with fairlearn.
"a non-parametric supervised learning method used for classification and regression.”
Scikit-learn Generalized Linear Models
"a set of methods intended for regression in which the target value is expected to be a linear combination of the features.”
Scikit-learn Sparse Principal Components
"a variant of [principal component analysis, PCA], with the goal of extracting the set of sparse components that best reconstruct the data.”
"a machine learning package for streaming data in Python.”
shap-
"a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions"
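The Shapley values that shap approximates can be computed exactly for a handful of features by averaging each player's marginal contribution over all join orders. A toy sketch with a hypothetical coalition value function (not the shap API, which estimates these efficiently for real models):

```python
# Exact Shapley values by enumerating permutations; feasible only for
# tiny feature sets, since the cost grows as n factorial.
from itertools import permutations
from math import factorial

def shapley(players, value):
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: v / factorial(len(players)) for p, v in phi.items()}

def v(S):
    # Toy value: "a" alone is worth 10; "b" and "c" together add 5.
    return 10 * ("a" in S) + 5 * ("b" in S and "c" in S)

phi = shapley(["a", "b", "c"], v)
# phi["a"] == 10.0; "b" and "c" split their joint 5 as 2.5 each
```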
"a Python library for evaluating binary classifiers in a machine learning ensemble.”
"a scikit-learn compatible wrapper for the Bayesian Rule List classifier developed by Letham et al., 2015, extended by a minimum description length-based discretizer (Fayyad & Irani, 1993) for continuous data, and by an approach to subsample large datasets for better performance.”
"a Python machine learning module built on top of scikit-learn and distributed under the 3-Clause BSD license.”
"a collection of tools that allows modelers, compliance, and business stakeholders to test outcomes for bias or discrimination using widely accepted fairness metrics.”
Super-sparse Linear Integer models - SLIMs-
"a package to learn customized scoring systems for decision-making problems.”
tensorflow/fairness-indicators-
"designed to support teams in evaluating, improving, and comparing models for fairness concerns in partnership with the broader Tensorflow toolkit.”
"a library that implements constrained and interpretable lattice based models. It is an implementation of Monotonic Calibrated Interpolated Look-Up Tables in TensorFlow.”
"a collection of infrastructure and tools for research in neural network interpretability.”
"a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in Jupyter notebooks.”
tensorflow/model-card-toolkit-
"streamlines and automates generation of Model Cards, machine learning documents that provide context and transparency into a model's development and performance. Integrating the MCT into your ML pipeline enables you to share model metadata and metrics with researchers, developers, reporters, and more.”
"a library that provides solutions for machine learning practitioners working to create and train models in a way that reduces or eliminates user harm resulting from underlying performance biases.”
"the source code for TensorFlow Privacy, a Python library that includes implementations of TensorFlow optimizers for training machine learning models with differential privacy. The library comes with tutorials and analysis tools for computing the privacy guarantees provided.”
"Testing with Concept Activation Vectors (TCAV) is a new interpretability method to understand what signals your neural network model uses for prediction."
"a library for performing coverage guided fuzzing of neural networks.”
"a debugging and visualization tool designed for data science, deep learning and reinforcement learning from Microsoft Research. It works in Jupyter Notebook to show real-time visualizations of your machine learning training and perform several other key analysis tasks for your models and data.”
"text_explainability provides a generic architecture from which well-known state-of-the-art explainability approaches for text can be composed.”
"Uses the generic architecture of text_explainability to also include tests of safety (how safe the model is in production, i.e. types of inputs it can handle), robustness (how generalizable the model is in production, e.g. stability when adding typos, or the effect of adding random unrelated data) and fairness (if equal individuals are treated equally by the model, e.g. subgroup fairness on sex and nationality)."
"A Model for Natural Language Attack on Text Classification and Inference"
"Implements interpretability methods as Tensorflow 2.x callbacks to ease neural network's understanding.”
"A Python library built on top of pandas and sklearn that implements fairness-aware machine learning algorithms."
"A testing-based approach for measuring discrimination in a software system.”
"A package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable.”
"Package for interpreting scikit-learn's decision tree and random forest predictions.”
"This repository contains the implementation of TRIAGE, a "Data-Centric AI" framework for data characterization tailored for regression.”
woe-
"Tools for WoE Transformation mostly used in ScoreCard Model for credit rating.”
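The WoE (Weight of Evidence) transformation used in credit scorecards is a per-bin log odds ratio of the two class distributions. A minimal hand computation on toy bin counts (not this package's API):

```python
# Weight of Evidence by hand: WoE_i = ln((good_i/goods) / (bad_i/bads))
# per bin, as used in credit-scoring scorecards (toy counts).
from math import log

def weight_of_evidence(goods, bads):
    """goods, bads: per-bin counts of the two outcome classes."""
    g_tot, b_tot = sum(goods), sum(bads)
    return [log((g / g_tot) / (b / b_tot)) for g, b in zip(goods, bads)]

woe = weight_of_evidence(goods=[80, 20], bads=[20, 80])
# bin 0: ln((80/100)/(20/100)) = ln(4) ~ 1.386; bin 1 is its mirror
```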
xai-
"A Machine Learning library that is designed with AI explainability in its core.”
"An open source Python library for Interpretable Machine Learning.”
"an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable.”
"A Python toolkit dedicated to explainability. The goal of this library is to gather the state of the art of Explainable AI to help you understand your complex neural network models.”
"Provide(s) a one-line Exploratory Data Analysis (EDA) experience in a consistent and fast solution.”
"A suite of visual diagnostic tools called "Visualizers" that extend the scikit-learn API to allow human steering of the model selection process.”