[NeurIPS'21 Spotlight] PyTorch code for our paper "Aligned Structured Sparsity Learning for Efficient Image Super-Resolution"

Overview

[Figure: ASSL overview]

This repository provides the code for Aligned Structured Sparsity Learning (ASSL), a new network pruning method for efficient single image super-resolution (SR), introduced in our NeurIPS 2021 Spotlight paper:

Aligned Structured Sparsity Learning for Efficient Image Super-Resolution [Camera Ready]
Yulun Zhang*, Huan Wang*, Can Qin, and Yun Fu (*equal contribution)
Northeastern University, Boston, MA, USA
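
In brief, ASSL prunes whole filters from SR networks in which residual additions force several layers to keep the same surviving channels: one pruned-index set is shared across those "constrained" layers, and a gradually growing regularization drives the corresponding weights toward zero before they are removed. Below is a minimal toy sketch of this idea (illustrative only, not the repository's implementation; the helper names are made up):

```python
import torch
import torch.nn as nn

# Toy sketch of the aligned-sparsity idea (illustrative, not the repo's code).
# Layers joined by a residual addition must keep the same surviving channels,
# so one pruned-index set is shared across all "constrained" convs.

def pick_pruned_filters(conv: nn.Conv2d, pr: float) -> torch.Tensor:
    """Rank filters by L1 norm; return the indices of the fraction `pr`
    with the smallest norms (the candidates to prune)."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one norm per filter
    n_pruned = int(pr * conv.out_channels)
    return torch.argsort(norms)[:n_pruned]

def aligned_penalty(convs, pruned_idx, strength):
    """L2 penalty pushing the SAME filter indices toward zero in every conv;
    the paper grows such a penalty gradually during training."""
    return strength * sum(c.weight[pruned_idx].pow(2).sum() for c in convs)

constrained = [nn.Conv2d(64, 64, 3, padding=1) for _ in range(3)]
pruned_idx = pick_pruned_filters(constrained[0], pr=0.8125)  # shared selection
reg_loss = aligned_penalty(constrained, pruned_idx, strength=1e-4)
# reg_loss would be added to the task loss; once the penalty stabilizes,
# the selected filters are physically removed from all constrained layers.
```

In the released code, the --same_pruned_wg_layers and --reg_* flags in the training commands quoted in the comments below control which layers are constrained and how the penalty grows.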

Stay tuned!


Comments
  • Could you share the code with me?

    @MingSun-Tse Thanks for your excellent work. I read the paper and want to learn the details. Could you share the code with me? Thank you very much!

    opened by ciwei123 3
  • Why simply use the first constrained layer as pruning template for all constrained layers?

    From the training results, the hard masks of the constrained layers are not exactly aligned. https://github.com/MingSun-Tse/ASSL/blob/a564556c8b578c2ee86d135044f088bfeaafc707/src/pruner/utils.py#L71
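
    To make the question concrete, here is a quick sketch (illustrative names only, not the repo's API) that measures how much each constrained layer's own low-L1-norm filters overlap with the first layer's, which the linked code uses as the template:

    ```python
    import torch
    import torch.nn as nn

    # Compare each constrained layer's own smallest-L1-norm filters with the
    # first constrained layer's, which serves as the shared pruning template.
    def low_norm_set(conv: nn.Conv2d, n_pruned: int) -> set:
        norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
        return set(torch.argsort(norms)[:n_pruned].tolist())

    constrained = [nn.Conv2d(64, 64, 3, padding=1) for _ in range(3)]
    template = low_norm_set(constrained[0], n_pruned=52)  # e.g. pr = 0.8125 of 64
    for i, conv in enumerate(constrained[1:], start=1):
        overlap = len(template & low_norm_set(conv, n_pruned=52)) / 52
        print(f"constrained layer {i}: {overlap:.0%} overlap with the template")
    ```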

    opened by yumath 2
  • Questions about implementation detail

    Hello, I have some questions about the implementation details.

    The HR-LR data pairs were generated with the down-sampling code provided in BasicSR. The training data was DF2K (900 DIV2K + 2650 Flickr2K) and the test data was Set5.

    I ran this command to prune the EDSR_16_256 model to EDSR_16_48. Compared with the official command, only the pruning ratio and the save path were modified.

    Prune from 256 to 48, pr=0.8125, x2, ASSL:

    ```
    python main.py --model LEDSR --scale 2 --patch_size 96 --ext sep --dir_data /home/notebook/data/group_cpfs/wurongyuan/data/data \
        --data_train DF2K --data_test DF2K --data_range 1-3550/3551-3555 --chop --save_results --n_resblocks 16 --n_feats 256 \
        --method ASSL --wn --stage_pr [0-1000:0.8125] --skip_layers *mean*,*tail* \
        --same_pruned_wg_layers model.head.0,model.body.16,*body.2 --reg_upper_limit 0.5 --reg_granularity_prune 0.0001 \
        --update_reg_interval 20 --stabilize_reg_interval 43150 --pre_train pretrained_models/LEDSR_F256R16BIX2_DF2K_M311.pt \
        --same_pruned_wg_criterion reg --save main/SR/LEDSR_F256R16BIX2_DF2K_ASSL_0.8125_RGP0.0001_RUL0.5_Pretrain_06011101
    ```

    Results:

    model_just_finished_prune ---> 33.739 dB
    fine-tuning after one epoch ---> 37.781 dB
    fine-tuning after 756 epochs ---> 37.940 dB

    The result I obtained with the official code (37.940 dB) still has a gap from the result in the paper (38.12 dB). I may have overlooked some details.

    I also compared with the L1-norm method provided in the code.

    Prune from 256 to 48, pr=0.8125, x2, L1:

    ```
    python main.py --model LEDSR --scale 2 --patch_size 96 --ext sep --dir_data /home/notebook/data/group_cpfs/wurongyuan/data/data \
        --data_train DF2K --data_test DF2K --data_range 1-3550/3551-3555 --chop --save_results --n_resblocks 16 --n_feats 256 \
        --method L1 --wn --stage_pr [0-1000:0.8125] --skip_layers *mean*,*tail* \
        --same_pruned_wg_layers model.head.0,model.body.16,*body.2 --reg_upper_limit 0.5 --reg_granularity_prune 0.0001 \
        --update_reg_interval 20 --stabilize_reg_interval 43150 --pre_train pretrained_models/LEDSR_F256R16BIX2_DF2K_M311.pt \
        --same_pruned_wg_criterion reg --save main/SR/LEDSR_F256R16BIX2_DF2K_L1_0.8125_06011101
    ```

    Results:

    model_just_finished_prune ---> 13.427 dB
    fine-tuning after one epoch ---> 33.202 dB
    fine-tuning after 756 epochs ---> 37.933 dB

    The difference between the results of the L1-norm method and those of ASSL seems negligible at this pruning ratio (256 -> 48).

    Is there something I missed? Looking forward to your reply! >-<

    opened by wurongyuan 2
  • Questions on Data Preparation

    Hello and thanks for your amazing work! When trying to reproduce the paper results, I ran into some trouble binarizing the DF2K data:

    ```
    data/DF2K/bin/DF2K_train_LR_bicubic/X4/3548x4.pt does not exist. Now making binary...
    Direct pt file without name or image
    data/DF2K/bin/DF2K_train_LR_bicubic/X4/3549x4.pt does not exist. Now making binary...
    Direct pt file without name or image
    data/DF2K/bin/DF2K_train_LR_bicubic/X4/3550x4.pt does not exist. Now making binary...
    Direct pt file without name or image
    data/DF2K/bin/DF2K_train_HR/3551.pt does not exist. Now making binary...
    Traceback (most recent call last):
    ...
    FileNotFoundError: No such file: '/home/nfs_data/shixiangsheng/projects/ModelCompression/Prune/ASSL/src/data/DF2K/DF2K_train_HR/3551.png'
    ```

    I created dirs like this:

    data
    |__ DF2K
        |__ DF2K_train_HR
        |__ DF2K_train_LR_bicubic

    I put '0001.png' - '0900.png' from ./data/DIV2K/DIV2K_train_HR and '000001.png' - '002650.png' (renamed to '0901.png' - '3550.png') from ./data/Flickr2K/Flickr2K_HR into ./DF2K/DF2K_train_HR. As for the downsampled images, I created folders named 'X2', 'X3', and 'X4' under ./DF2K/DF2K_train_LR_bicubic and copied the related images from DIV2K_train_LR_bicubic and Flickr2K_LR_bicubic (renamed to '0001x_.png' - '3550x_.png').

    The first and second stages of binarization (binarizing the HR images and the X4 LR images) seemed OK, but then the above error emerged. It's kind of weird, since there are 900 + 2650 training images in total, and I have no idea why it went back to binarizing HR images after binarizing the X4 LR images.

    I'm new to SR and have tried to look up the DF2K data preparation in other SR repos, but in vain. I wonder how you actually got the DF2K images binarized. Thanks for your help in advance XD
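
    For reference, here is a small script along the lines of the merge described above (a sketch only; the paths are assumptions about the local layout):

    ```python
    import shutil
    from pathlib import Path

    # Assemble DF2K_train_HR as described above: DIV2K keeps its names
    # (0001.png - 0900.png); Flickr2K is renumbered to 0901.png - 3550.png.
    # The paths below are assumptions -- adjust them to your own layout.
    div2k_hr = Path("data/DIV2K/DIV2K_train_HR")
    flickr_hr = Path("data/Flickr2K/Flickr2K_HR")
    df2k_hr = Path("data/DF2K/DF2K_train_HR")
    df2k_hr.mkdir(parents=True, exist_ok=True)

    for p in sorted(div2k_hr.glob("*.png")):                 # 0001.png .. 0900.png
        shutil.copy(p, df2k_hr / p.name)
    for i, p in enumerate(sorted(flickr_hr.glob("*.png")), start=901):
        shutil.copy(p, df2k_hr / f"{i:04d}.png")             # 000001.png -> 0901.png
    ```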

    opened by YouCaiJun98 0
Releases (v0.1)
Owner
Huan Wang
B.E. and M.S. graduate of Zhejiang University, China; now a Ph.D. candidate at Northeastern University, USA. I work on interpretable model compression and daydreaming.