Code to reproduce experiments in the paper "Explainability Requires Interactivity".

Overview

Explainability Requires Interactivity

This repository contains the code to train all custom models used in the paper Explainability Requires Interactivity, as well as to create all static explanations (heat maps and generative). For our interactive framework, see the sister repository.

Precomputed generative explanations are located at static_generative_explanations.

Requirements

Install the conda environment via conda env create -f env.yml (depending on your system, you may need to change some versions, e.g. for pytorch, cudatoolkit, and pytorch-lightning).

For some parts you will need the FairFace model, which can be downloaded from the authors' repo. You will only need the res34_fair_align_multi_7_20190809.pt file.
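For reference, the checkpoint is a standard torchvision ResNet-34 with an 18-way output head (7 race, 2 gender, and 9 age classes) and can be loaded roughly as follows; this is a sketch along the lines of the FairFace authors' prediction script, not code from this repo:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# ResNet-34 with an 18-dim head (7 race + 2 gender + 9 age bins), matching
# the res34_fair_align_multi_7_20190809.pt checkpoint.
model = models.resnet34()
model.fc = nn.Linear(model.fc.in_features, 18)
model.load_state_dict(torch.load("res34_fair_align_multi_7_20190809.pt",
                                 map_location="cpu"))
model.eval()
```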

Training classification networks

CelebA dataset

You first need to download and decompress the CelebAMask-HQ dataset. Then run the training with

python train.py --dset celeb --dset_path /PATH/TO/CelebAMask-HQ/ --classes_or_attr Smiling --target_path /PATH/TO/OUTPUT

/PATH/TO/CelebAMask-HQ/ should contain a CelebAMask-HQ-attribute-anno.txt file and a CelebA-HQ-img directory. Any of the columns in CelebAMask-HQ-attribute-anno.txt can be used for --classes_or_attr; in the paper we used Heavy_Makeup, Male, Smiling, and Young.
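A quick sanity check of the layout (a sketch, assuming the standard CelebA annotation format: an image-count line, a header line of attribute names, then one +1/-1 row per image):

```python
import pandas as pd

# Skip the image-count line; the attribute-name line becomes the header and
# the image file names become the index.
attrs = pd.read_csv("/PATH/TO/CelebAMask-HQ/CelebAMask-HQ-attribute-anno.txt",
                    sep=r"\s+", skiprows=1)
print(attrs["Smiling"].value_counts())  # +1 = attribute present, -1 = absent
```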

Flowers102 dataset

You first need to download and decompress the Flowers102 data. Then run the training with

python train.py --dset flowers102 --dset_path /PATH/TO/FLOWERS102/ --classes_or_attr 49-65 --target_path /PATH/TO/OUTPUT/

/PATH/TO/FLOWERS102/ should contain an imagelabels.mat file and an images directory. Classes 49 and 65 correspond to "Oxeye daisy" and "California poppy", while 63 and 54 correspond to "Black-eyed Susan" and "Sunflower", as used in the paper.
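To inspect the class distribution (a sketch, assuming the standard 1-indexed labels array in imagelabels.mat):

```python
from scipy.io import loadmat

# imagelabels.mat stores one class label per image, aligned with the
# sorted files in the images/ directory.
labels = loadmat("/PATH/TO/FLOWERS102/imagelabels.mat")["labels"][0]
for cls in (49, 65):  # Oxeye daisy / California poppy
    print(f"class {cls}: {(labels == cls).sum()} images")
```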

Generating heatmap explanations

Heatmap explanations are generated with the Captum library. After training, generate explanations via

python static_exp.py --model_path /PATH/TO/MODEL.pt --img_path /PATH/TO/IMGS/ --model_name celeb --fig_dir /PATH/TO/OUTPUT/

/PATH/TO/IMGS/ should contain (only) image files; it can be omitted to run on the default images exported by train.py. To run on FairFace, choose --model_name fairface and add --attr age or --attr gender. Other explanation methods can easily be added by modifying the explain_all function in static_exp.py. Explanations are saved to fig_dir. We have only tested this with the networks trained on the facial image data in the previous step, but any resnet18 with a scalar output layer should work just as well.
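Internally this boils down to standard Captum usage; a minimal sketch for a scalar-output resnet18 (names and shapes here are illustrative, not the script's exact code):

```python
import torch
from torchvision import models
from captum.attr import IntegratedGradients

# Scalar-output resnet18, as produced by train.py (illustrative).
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("/PATH/TO/MODEL.pt", map_location="cpu"))
model.eval()

img = torch.rand(1, 3, 224, 224)                # stand-in preprocessed image
ig = IntegratedGradients(model)
heatmap = ig.attribute(img, baselines=img * 0)  # per-pixel attributions
```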

Generating generative explanations

First, clone the original NVIDIA StyleGAN2-ada-pytorch repo. Make sure everything works as expected (e.g. run the getting-started code). If the code gets stuck at loading TODO, ctrl-C will usually let the model fall back to a smaller reference implementation, which is good enough for our use case. Next, add the repo to your PYTHONPATH (e.g. via export PYTHONPATH=$PYTHONPATH:/PATH/TO/stylegan2-ada-pytorch/). To generate explanations, you will need to 0) train an image model (see above, or use the FairFace model); 1) create a dataset of latent codes and labels; 2) train a latent-space logistic regression model; and 3) create the explanations. As each of these steps can be very slow, we split them into separate phases.
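The getting-started usage from the StyleGAN2-ada-pytorch README looks roughly like this (the repo must be on your PYTHONPATH so the pickle can resolve its classes):

```python
import pickle
import torch

# Load the pretrained generator; unpickling needs stylegan2-ada-pytorch
# importable on the PYTHONPATH.
with open("/PATH/TO/STYLEGAN2.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()  # torch.nn.Module

z = torch.randn([1, G.z_dim]).cuda()  # random latent code
img = G(z, None)                      # NCHW float32, values roughly in [-1, 1]
```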

Create labeled latent dataset

First, make sure to train at least one image model as in the first step and/or download the FairFace model.

python generative_exp.py --phase 1 --attrs Smiling,ff-skin-color --base_dir /PATH/TO/BASE/ --generator_path /PATH/TO/STYLEGAN2.pkl --n_train 20000 --n_valid 5000

The base_dir is the directory where all files and sub-directories are stored, and should be the same as the target_path from train.py (e.g., just .). If using --attrs Smiling,ff-skin-color, it should contain e.g. the celeb-Smiling directory and the res34_fair_align_multi_7_20190809.pt file.
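Conceptually, phase 1 does something like the following (a simplified sketch, assuming G and an attribute classifier loaded as in the snippets above; the script's actual batching, preprocessing, and file layout will differ):

```python
import torch

n_train = 20_000          # --n_train
zs, ys = [], []
with torch.no_grad():
    for _ in range(n_train):
        z = torch.randn([1, G.z_dim]).cuda()
        img = G(z, None)                    # synthesize an image
        ys.append(classifier(img).item())   # label it, e.g. the Smiling logit
        zs.append(z.cpu())
torch.save({"z": torch.cat(zs), "y": torch.tensor(ys)}, "latent_dataset.pt")
```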

Train latent space model

After the first step, run

python generative_exp.py --phase 2 --attrs Smiling,ff-skin-color --base_dir /PATH/TO/BASE/ --epochs 50

with the same base_dir and attrs as in phase 1.
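The idea of phase 2 is a plain logistic regression on the latent codes, roughly equivalent to the following (sklearn is used here only to illustrate; the script trains its own latent model over --epochs):

```python
import torch
from sklearn.linear_model import LogisticRegression

data = torch.load("latent_dataset.pt")  # from the phase-1 sketch above
X = data["z"].numpy()
y = (data["y"] > 0).numpy()             # binarize the classifier output
clf = LogisticRegression(max_iter=1000).fit(X, y)
direction = torch.tensor(clf.coef_[0], dtype=torch.float32)  # attribute direction
```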

Create generative explanations

Finally, you can generate generative explanations via

python generative_exp.py --phase 3 --base_dir /PATH/TO/BASE/ --eval_attr Smiling --generator_path /PATH/TO/STYLEGAN2.pkl --attrs Smiling,ff-skin-color --reconstruction_steps 1000 --ampl 0.09 --input_img_dir /PATH/TO/IMAGES/ --output_dir /PATH/TO/OUTPUT/

Here, eval_attr is the class of the final evaluation model that you want to explain; attrs are the same as before and define the directions in latent space; input_img_dir is a directory containing (only) the image files to be explained. Explanations are saved to output_dir.
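Schematically, phase 3 inverts each input image into the latent space by optimization and then walks along the learned attribute direction (a sketch with G and direction as above and target_img the preprocessed input image; the script's actual loss, optimizer, and latent space may differ):

```python
import torch

# Reconstruct the input image's latent code by gradient descent.
z = torch.randn([1, G.z_dim], device="cuda", requires_grad=True)
opt = torch.optim.Adam([z], lr=0.01)
for _ in range(1000):                               # --reconstruction_steps
    loss = ((G(z, None) - target_img) ** 2).mean()  # pixel reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()

# Edit along the (normalized) latent attribute direction.
d = (direction / direction.norm()).cuda()
edited = G(z + 0.09 * d, None)                      # --ampl scales the edit
```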
