PyCEbox

Python Individual Conditional Expectation Plot Toolbox

Overview

A Python implementation of individual conditional expectation plots, inspired by R's ICEbox. Individual conditional expectation plots were introduced in Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation (arXiv:1309.6392).

Quickstart

pycebox is available on PyPI and can be installed with pip install pycebox.

The tutorial recreates the first example in the above paper using pycebox.
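For orientation, here is a minimal sketch of the core workflow. The data, model, and column names are hypothetical; consult the tutorial and the documentation for authoritative usage.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pycebox.ice import ice, ice_plot
from sklearn.ensemble import RandomForestRegressor

# hypothetical training data with features x1, x2, x3 and target y
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 3)), columns=['x1', 'x2', 'x3'])
df['y'] = df['x1'] + df['x2'] ** 2 + 0.1 * rng.normal(size=500)

features = ['x1', 'x2', 'x3']
model = RandomForestRegressor(n_estimators=50).fit(df[features], df['y'])

# one ICE curve per row of df, each evaluated over a grid of x2 values
ice_df = ice(df[features], 'x2', model.predict)

# plot the curves, colored by the value of x3
fig, ice_ax = plt.subplots()
ice_plot(ice_df, color_by='x3', cmap=plt.cm.PuOr, ax=ice_ax)
ice_ax.set_xlabel('$x_2$')
ice_ax.set_ylabel('predicted $y$')
plt.show()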

Development

For easy development and prototyping using IPython notebooks, a Docker environment is included. To run an IPython notebook with access to your development version of pycebox, run PORT=8889 sh ./start_container.sh. A Jupyter notebook server with access to your development version of pycebox should be available at http://localhost:8889/tree.

To run pycebox's tests in your development container:

  1. Access a bash shell on the container with docker exec -it pycebox bash.
  2. Change to the pycebox directory with cd ../pycebox.
  3. Run the tests with pytest test/test.py.

Documentation

For details of pycebox's API, consult the documentation.

License

This library is distributed under the MIT License.

Comments
  • Typo in ice_plot() regarding _get_quantiles()

    There is a typo in the ice_plot() function when it calls _get_quantiles(). On lines 124 and 137 of ice.py, ice_plot() calls __get_quantiles() (which is undefined) instead of _get_quantiles(), which raises an error when trying to use quantiles or center the ICE curves.

    opened by savvastj 6
  • Using predicted probabilities for binary classification

    Is there any way to pass some form of predict_proba function to ice() in order to see predicted probabilities rather than class predictions?

    Thanks! Nema
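
    One possible approach, sketched here rather than taken from pycebox's documentation: ice() only needs a callable that maps a feature matrix to a vector of predictions, so wrapping predict_proba in a lambda should work. The classifier and data below are hypothetical.

    import numpy as np
    import pandas as pd
    from pycebox.ice import ice
    from sklearn.linear_model import LogisticRegression

    # hypothetical binary-classification setup
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.normal(size=(200, 2)), columns=['x1', 'x2'])
    y = (df['x1'] + df['x2'] > 0).astype(int)
    clf = LogisticRegression().fit(df, y)

    # wrap predict_proba so each ICE curve traces P(y=1), not a class label
    ice_df = ice(df, 'x2', lambda X: clf.predict_proba(X)[:, 1])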

    opened by nemasobhani 1
  • Plot mistake

    There is a problem in the visualization part. When I try to plot the graph from the example, I get the following error:


    TypeError                                 Traceback (most recent call last)
    <ipython-input> in <module>
         12 ice_plot(ice_df, frac_to_plot=0.1,
         13          color_by='x3', cmap=PuOr,
    ---> 14          ax=ice_ax);
         15
         16 ice_ax.set_xlabel('$X_2$');

    C:\ProgramData\Anaconda3\lib\site-packages\pycebox\ice.py in ice_plot(ice_data, frac_to_plot, plot_points, point_kwargs, x_quantile, plot_pdp, centered, centered_quantile, color_by, cmap, ax, pdp_kwargs, **kwargs)
        128     if frac_to_plot < 1.:
        129         n_cols = ice_data.shape[1]
    --> 130         icols = np.random.choice(n_cols, size=frac_to_plot * n_cols, replace=False)
        131         plot_ice_data = ice_data.iloc[:, icols]
        132     else:

    mtrand.pyx in mtrand.RandomState.choice()

    TypeError: 'float' object cannot be interpreted as an integer

    opened by karakol15 4
  • "frac_to_plot" parameter in ice_plot

    Hey Austin,

    This package rocks, thanks for publishing it!

    I have a question and a potential small bug in the ice_plot method, specifically on the "frac_to_plot" parameter.

    It is my understanding that you simply take the fraction, multiply it by the number of columns, and pass the result to the "size" parameter of np.random.choice(). I think we should make sure that the number being passed is an integer, since np.random.choice() will not accept a float for "size".

    Current: icols = np.random.choice(n_cols, size=frac_to_plot * n_cols, replace=False)

    Fix: icols = np.random.choice(n_cols, size=int(frac_to_plot * n_cols), replace=False)

    Best, Andrew

    opened by andrew-cho 1
  • Extended use to classification models, fixed typecast bug

    • Extended use to classification models by allowing predict_proba to be passed to the ice_plot function.
    • Fixed the TypeError raised when a non-integer size is passed to np.random.choice
    opened by sanjifr3 0
  • Averaging ICE plots across multiple runs/folds of a model

    Hi Austin,

    I was wondering if it is possible to average across multiple runs/folds of the same model.

    I am trying at the moment, but the resulting ICE plots do not make sense. The per-run plots make sense, but when I average them across both runs and folds the data gets garbled.

    Cheers,

    Dan
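
    A possible explanation and workaround, sketched below rather than drawn from pycebox itself: ice() derives its grid of feature values from the data it is given, so curves computed on different runs/folds live on different grids, and averaging them point-wise mixes misaligned curves. Evaluating every fold's model on one shared dataset (and hence one shared grid) avoids this. models and data below are hypothetical.

    from pycebox.ice import ice

    def averaged_ice(models, data, column):
        """Average ICE curves from models trained on different runs/folds,
        all evaluated on the same data so the grids line up."""
        ice_dfs = [ice(data, column, m.predict) for m in models]
        # identical index (grid) and columns (rows of data), so this aligns
        return sum(ice_dfs) / len(ice_dfs)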

    opened by danieltudosiu 0
Releases

  • 0.0.1
Owner
Austin Rochford
Chief Data Scientist @ Kibo Commerce, recovering mathematician, enthusiastic Bayesian