VOneNet: CNNs with a Primary Visual Cortex Front-End


VOneNets are a family of biologically inspired convolutional neural networks (CNNs) with the following features:

  • Fixed-weight neural network model of the primate primary visual cortex (V1) as the front-end
  • Robust to image perturbations
  • Brain-mapped
  • Flexible: can be adapted to different back-end architectures


Available Models

(Click on the model names to download the weights of the ImageNet-trained models. Alternatively, you can use the get_model function in the vonenet package to download the weights; see the example below the list.)

  • VOneResNet50: our best-performing VOneNet, with a ResNet50 back-end
  • VOneCORnet-S: VOneNet with a recurrent neural network back-end based on CORnet-S
  • VOneAlexNet: VOneNet with a back-end based on AlexNet
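
A minimal sketch of loading an ImageNet-trained model with get_model (the model_arch='resnet50', pretrained=True call appears in the issues below; switching to eval mode for inference is standard PyTorch usage, not something the repository mandates):

    import vonenet

    # Downloads the ImageNet-trained weights if needed and builds VOneResNet50
    model = vonenet.get_model(model_arch='resnet50', pretrained=True)
    model.eval()  # evaluation mode for inference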

Quick Start

VOneNets were trained with images normalized with mean=[0.5, 0.5, 0.5] and std=[0.5, 0.5, 0.5].
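
A minimal preprocessing sketch using this normalization; the 256-pixel resize and 224-pixel center crop are assumptions based on standard ImageNet pipelines, not values taken from the repository:

    from torchvision import transforms

    # Normalize inputs as during VOneNet training (mean = std = 0.5 per channel)
    preprocess = transforms.Compose([
        transforms.Resize(256),        # assumed resize, standard for ImageNet models
        transforms.CenterCrop(224),    # assumed crop size
        transforms.ToTensor(),         # scales pixels to [0, 1]
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])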

More information coming soon...

Longer Motivation

Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and struggle to recognize objects in corrupted images that are easily recognized by humans. Recently, we observed that CNN models with a neural hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks. Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed-weight neural network front-end that simulates primate V1, called the VOneBlock, followed by a neural network back-end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1: the linear-nonlinear-Poisson model, consisting of a biologically constrained Gabor filter bank, simple and complex cell nonlinearities, and a V1 neuronal stochasticity generator. After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations comprised of white-box adversarial attacks and common image corruptions. Additionally, all components of the VOneBlock work in synergy to improve robustness. Read more: Dapello*, Marques*, et al. (bioRxiv, 2020)
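
The three VOneBlock stages described above can be summarized in code. The following is a conceptual sketch, not the repository's actual implementation: the channel counts, kernel size, stride, and the Gaussian approximation to Poisson-like noise are illustrative assumptions; only the overall structure (fixed Gabor filter bank, simple/complex cell nonlinearities, stochasticity) follows the description above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VOneBlockSketch(nn.Module):
        """Illustrative V1 front-end: Gabor filter bank -> simple/complex
        cell nonlinearities -> Poisson-like stochasticity."""

        def __init__(self, in_channels=3, n_simple=256, n_complex=256, ksize=25, stride=4):
            super().__init__()
            n_total = n_simple + n_complex
            # Fixed Gabor filter bank; the two convolutions hold the two quadrature phases.
            self.gabor_q0 = nn.Conv2d(in_channels, n_total, ksize, stride, ksize // 2, bias=False)
            self.gabor_q1 = nn.Conv2d(in_channels, n_total, ksize, stride, ksize // 2, bias=False)
            for p in self.parameters():
                p.requires_grad = False  # the V1 front-end stays fixed during training
            self.n_simple = n_simple

        def forward(self, x):
            q0, q1 = self.gabor_q0(x), self.gabor_q1(x)
            # Simple cells: half-wave rectification of one quadrature phase.
            simple = F.relu(q0[:, :self.n_simple])
            # Complex cells: energy model over the quadrature pair.
            complex_cells = torch.sqrt(q0[:, self.n_simple:] ** 2 + q1[:, self.n_simple:] ** 2 + 1e-12)
            out = torch.cat([simple, complex_cells], dim=1)
            # Poisson-like stochasticity: Gaussian noise whose variance tracks the mean response
            # (an assumption standing in for the paper's calibrated V1 noise generator).
            return out + torch.randn_like(out) * torch.sqrt(F.relu(out) + 1e-6)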

Requirements

  • Python 3.6+
  • PyTorch 0.4.1+
  • numpy
  • pandas
  • tqdm
  • scipy

Citation

Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D.D., DiCarlo, J.J. (2020) Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations. bioRxiv. https://doi.org/10.1101/2020.06.16.154542

License

GNU GPL 3+

FAQ

Soon...

Setup and Run

  1. Clone the repository into a local directory: $ git clone https://github.com/dicarlolab/vonenet.git

  2. Setting up the code requires an ImageNet 'val' directory. The dataset preparation steps below follow this guide (in Korean): https://seongkyun.github.io/others/2019/03/06/imagenet_dn/

    Download link: https://academictorrents.com/collection/imagenet-2012

Once you have downloaded the large tar files, extract them as follows. The instructions below are translated from the guide linked above.

Extract the training dataset

$ mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train
$ tar -xvf ILSVRC2012_img_train.tar
$ rm -f ILSVRC2012_img_train.tar   # optional: remove the archive after extraction
$ find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; tar -xvf "${NAME}" -C "${NAME%.tar}"; rm -f "${NAME}"; done
$ cd ..

Extract the validation dataset

$ mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
$ wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash

When this finishes, you will have 'train' and 'val' directories; the 'val' directory is the one needed during setup.

Caution!

After the steps above, remove any file or directory that does not follow the class-folder naming pattern (names starting with n0...); otherwise training will fail. For example, delete a leftover 'ILSVRC2012_img_train' directory inside train or the 'ILSVRC2012_img_val.tar' file inside val.
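
As an optional sanity check (not part of the repository), you can verify that the extracted folders load correctly with torchvision before starting training; the data_root path below is a placeholder:

    from torchvision import datasets, transforms

    data_root = '/path/to/imagenet'  # placeholder: directory containing 'train' and 'val'
    val_set = datasets.ImageFolder(data_root + '/val', transform=transforms.ToTensor())
    print(len(val_set.classes), 'classes,', len(val_set), 'validation images')  # expect 1000 classes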

  3. Once the data is ready, go to the local repository you cloned, open a terminal, and check your Python, PyTorch, and CUDA toolkit versions. If they are fine, run:

$ python3 setup.py install
$ python3 run.py --in_path {directory containing the dataset above; the 'val' directory must be inside it}

If you hit a GPU-related problem, in particular 'GPU is not available' even though you have a GPU, you can run on the CPU instead:

$ python3 run.py --in_path {directory containing the dataset above; the 'val' directory must be inside it} --ngpus 0

ngpus defaults to 1; set --ngpus 0 if you are happy to run on the CPU.

Comments
  • GPU requirements

    Hi! Thank you so much for releasing the code!

    If I wanted to train VOneResNet50 on an NVIDIA GeForce RTX 2070, how long should I expect training to take? I'm new to training neural networks this big and am working on a small project for a course, so it would be good to have an estimate.

    Thank you so much!

    Maria Inês

    opened by mariainescravo 4
  • k_exc parameter

    Hi,

    Thanks for releasing your code! Quick question: what is the significance of the k_exc parameter used in the V1 block?

    https://github.com/dicarlolab/vonenet/blob/master/vonenet/modules.py#L91

    Norman

    opened by normster 4
  • Robust Accuracy results not matching

    Firstly, thank you for open sourcing the code for your paper. It has been really helpful !!

    I had a small query regarding the robust evaluation of the models. I tried to evaluate the pretrained VOneResNet50 model with standard PGD with EOT and got the following results:

    robust accuracy (top1):0.3666
    robust accuracy (top5):0.635
    

    My PGD parameters were as follows :

    iterations : 64
    norm : L infinity
    epsilon: 0.0009803921569 (= 1/1020)
    eot_iterations : 8
    Library: advertorch 
    

    I used the code in this PR and also checked with another library

    It seems like the top-5 accuracy is closer to the accuracy reported in the paper. I'm confused, since the paper mentions that the reported accuracy is always top-1?

    opened by code-Assasin 3
  • Can you provide the trained VOneNet model file onto google drive?

    Can you upload the trained VOneNet model files to Google Drive so that I can download them for my experiments? Do you have trained model files for the CIFAR-10, CIFAR-100, and ImageNet datasets?

    opened by machanic 2
  • Update README.md

    There are problems in lines 17, 18, and 19 of README.md: when I finished downloading, the system told me the file had the wrong extension.

    I have also added setup and run instructions. Please check them and correct any errors.

    opened by comeeasy 1
  • explaining neural variances

    Thank you for the code for the V1Block. Interesting work!

    I was wondering how exactly you compared regular convolutional features with those from VOneNet to explain the neural variances.

    Since the paper stresses that this model is state-of-the-art in explaining these, I would be really glad if you could include the code for that too, or point me to existing repositories that do it (if you are aware of any).

    Thanks again!

    opened by vinbhaskara 1
  • fix: added missing argument for restoring model training

    The code already contained the logic for restoring model training but was missing the corresponding argument in the parser. Now training can be restored by providing the epoch number and the path containing those files.

    opened by ALLIESXO 0
  • How to test the top-scoring Brain Score model - vonenet-resnet50-non-stochastic?

    Hi, I am trying to understand the correct way to test (using the pretrained model trained on ImageNet) the voneresnet-50-non_stochastic model that is currently among the top-scoring models on Brain-Score.

    I want the model to be pretrained on ImageNet. When loading the model through net = vonenet.get_model(model_arch='resnet50', pretrained=True), a state_dict file that already contains the noise_level, noise_scale, and noise_mode parameters gets loaded (in vonenet/__init__.py, line 38). Does the pretrained model's performance depend on these values being fixed at 'neuronal', 0.35, and 0.07? Or can I set one of them to 0 (which one?) and keep using the same pretrained model for testing?

    Thanks, Valerio

    opened by ValerioB88 0
  • Alignment of quadrature pairs (q0 and q1) in terms of input channels?

    Hi Tiago and Joel, this is a very cool project.

    The initialize method of the GFB class doesn't set the random seed of randint:

        def initialize(self, sf, theta, sigx, sigy, phase):
            random_channel = torch.randint(0, self.in_channels, (self.out_channels,))
    

    Doesn't this cause the filters of simple_conv_q0 and simple_conv_q1 to be misaligned in terms of input channels?

    opened by Tal-Golan 1
  • add example of adversarial evaluation

    Check out my attack example and let me know what you think.

    I made it entirely self-contained in adv_evaluate.py, and I added an example to the README.md.

    opened by dapello 0
Owner
The DiCarlo Lab at MIT
Working to discover the neuronal algorithms underlying visual object recognition