Improving Contrastive Learning by Visualizing Feature Transformation, ICCV 2021 Oral

Overview

Improving Contrastive Learning by Visualizing Feature Transformation

This project hosts the codes, models and visualization tools for the paper:

Improving Contrastive Learning by Visualizing Feature Transformation,
Rui Zhu*, Bingchen Zhao*, Jingen Liu, Zhenglong Sun, Chang Wen Chen
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, Oral
arXiv preprint (arXiv 2108.02982)



Highlights

  • Visualization Tools: We provide a visualization tool for the pos/neg score distribution, which enables us to analyze, interpret, and understand the contrastive learning process.
  • Feature Transformation: Inspired by the visualization, we propose a simple yet effective Feature Transformation (FT) that creates both hard positives and diversified negatives to enhance training (a rough sketch is given after this list). FT enables the model to learn more view-invariant and discriminative representations.
  • Less Task-biased: FT makes the model less "task-biased", so we achieve significant performance improvements on various downstream tasks (object detection, instance segmentation, and long-tailed classification).
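
To make the FT idea concrete, here is a minimal PyTorch sketch of the two transformations. The function names and exact mixing formulas are illustrative assumptions rather than the repository's exact implementation; the Beta-distributed mixing weights loosely correspond to the --pos_alpha / --neg_alpha / --mask_distribution beta flags used in the training commands below.

```python
# Minimal sketch (assumed formulas) of positive extrapolation and negative interpolation.
import torch
import torch.nn.functional as F


def extrapolate_positives(q, k, alpha=2.0):
    """Harder positives: push the (q, k) pair apart with lambda in (1, 2)."""
    lam = torch.distributions.Beta(alpha, alpha).sample((q.size(0), 1)).to(q.device) + 1.0
    q_hat = lam * q + (1.0 - lam) * k   # extrapolate q away from k
    k_hat = lam * k + (1.0 - lam) * q   # extrapolate k away from q
    return F.normalize(q_hat, dim=1), F.normalize(k_hat, dim=1)


def interpolate_negatives(queue, alpha=1.6):
    """Diversified negatives: mix random pairs from the memory queue, lambda in (0, 1)."""
    perm = torch.randperm(queue.size(0), device=queue.device)
    lam = torch.distributions.Beta(alpha, alpha).sample((queue.size(0), 1)).to(queue.device)
    mixed = lam * queue + (1.0 - lam) * queue[perm]
    return F.normalize(mixed, dim=1)
```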


Updates

  • Code, pre-trained models and visualization tools are released. (07/08/2021)

Installation

This project is mainly built on the open-source codebase PyContrast.

Please refer to the INSTALL.md and RUN.md for installation and dataset preparation.

Models

For your convenience, we provide the following pre-trained models on ImageNet-1K and ImageNet-100.

| pre-train method | pre-train dataset | backbone | #epochs | linear eval top-1 acc. (%) | VOC det AP50 | COCO det AP | Link |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Supervised | ImageNet-1K | ResNet-50 | - | 76.1 | 81.3 | 38.2 | download |
| MoCo-v1 | ImageNet-1K | ResNet-50 | 200 | 60.6 | 81.5 | 38.5 | download |
| MoCo-v1+FT | ImageNet-1K | ResNet-50 | 200 | 61.9 | 82.0 | 39.0 | download |
| MoCo-v2 | ImageNet-1K | ResNet-50 | 200 | 67.5 | 82.4 | 39.0 | download |
| MoCo-v2+FT | ImageNet-1K | ResNet-50 | 200 | 69.6 | 83.3 | 39.5 | download |
| MoCo-v1+FT | ImageNet-100 | ResNet-50 | 200 | 77.2 (IN-100) | - | - | download |

Note:

  • See our paper for more results on different benchmarks.
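
If you want to reuse a downloaded checkpoint as a plain torchvision ResNet-50 backbone, a rough loading sketch is shown below. The checkpoint key ("model") and the "encoder." / "module." prefixes are assumptions about the saved format and may need adjusting for the actual files.

```python
# Rough sketch (assumed checkpoint layout) for loading a pre-trained model
# into a torchvision ResNet-50 backbone for downstream use.
import torch
import torchvision.models as models


def load_backbone(ckpt_path):
    model = models.resnet50()
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("model", ckpt)      # fall back to the raw dict
    cleaned = {}
    for k, v in state_dict.items():
        if "encoder_k" in k:                  # skip momentum-encoder weights
            continue
        cleaned[k.split("encoder.")[-1].replace("module.", "")] = v
    missing, unexpected = model.load_state_dict(cleaned, strict=False)
    print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")
    return model
```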

Usage

Training on IN-1K

python main_contrast.py --method MoCov2 --data_folder your/path/to/imagenet-1K/dataset  --dataset imagenet  --epochs 200 --input_res 224 --cosine --batch_size 256 --learning_rate 0.03   --mixnorm --mixnorm_target posneg --sep_alpha --pos_alpha 2.0 --neg_alpha 1.6 --mask_distribution beta --expolation_mask --alpha 0.999 --multiprocessing-distributed --world-size 1 --rank 0 --save_score

Linear Evaluation on IN-1K

python main_linear.py --method MoCov2 --data_folder your/path/to/imagenet-1K/dataset --ckpt your/path/to/pretrain_model   --n_class 1000 --multiprocessing-distributed --world-size 1 --rank 0 --epochs 100 --lr_decay_epochs 60,80

Training on IN-100

python main_contrast.py --method MoCov2 --data_folder your/path/to/imagenet-1K/dataset  --dataset imagenet100  --imagenet100path your/path/to/imagenet100.class  --epochs 200 --input_res 224 --cosine --batch_size 256 --learning_rate 0.03   --mixnorm --mixnorm_target posneg --sep_alpha --pos_alpha 2.0 --neg_alpha 1.6 --mask_distribution beta --expolation_mask --alpha 0.999 --multiprocessing-distributed --world-size 1 --rank 0 --save_score

Linear Evaluation on IN-100

python main_linear.py --method MoCov2 --data_folder your/path/to/imagenet-1K/dataset  --dataset imagenet100  --imagenet100path your/path/to/imagenet100.class  --n_class 100  --ckpt your/path/to/pretrain_model  --multiprocessing-distributed --world-size 1 --rank 0 

Transferring to Object Detection

Please refer to DenseCL and MoCo for transferring to object detection.

Visualization Tools

  • Our visualization is performed offline, so it has almost no impact on training speed. Instead of storing all K (65536) pair scores, we save only their mean and variance to represent the score distribution. Please refer to the paper for details.

  • Lines 69-74 of the visualization code store the scores; we then post-process the saved scores in the IPython notebook for plotting (a rough sketch of the logging step follows below).
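
As an illustration of what gets stored (names here are assumptions, not the repository's exact variables), the per-iteration statistics could be collected like this:

```python
# Sketch of the offline score logging described above: rather than dumping all
# K (65536) negative scores each iteration, only the mean/variance of the
# positive and negative logits are saved.
import torch


def log_score_stats(l_pos: torch.Tensor, l_neg: torch.Tensor, storage: list) -> list:
    """l_pos: (N, 1) positive logits; l_neg: (N, K) negative logits."""
    storage.append({
        "pos_mean": l_pos.mean().item(),
        "pos_var": l_pos.var().item(),
        "neg_mean": l_neg.mean().item(),
        "neg_var": l_neg.var().item(),
    })
    return storage
```

The saved statistics can then be loaded in the notebook and plotted (e.g. with matplotlib) to see how the pos/neg score distributions evolve during pre-training.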

Citations

Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows.

@inproceedings{zhu2021Improving,
  title={Improving Contrastive Learning by Visualizing Feature Transformation},
  author={Zhu, Rui and Zhao, Bingchen and Liu, Jingen and Sun, Zhenglong and Chen, Chang Wen},
  booktitle =  {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}