Training RNNs as Fast as CNNs

Overview

News

SRU++, a new SRU variant, is released. [tech report] [blog]

The experimental code and the SRU++ implementation are available on the dev branch, which will be merged into master later.

About

SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, with no loss of accuracy on the many tasks we tested.


Figure: Average processing time of LSTM, conv2d and SRU, tested on a GTX 1070.

For example, the figure above presents the processing time of a single mini-batch of 32 samples. SRU achieves a 10- to 16-fold speed-up over LSTM, and operates as fast as (or faster than) word-level convolution using conv2d.

References:

Simple Recurrent Units for Highly Parallelizable Recurrence [paper]

@inproceedings{lei2018sru,
  title={Simple Recurrent Units for Highly Parallelizable Recurrence},
  author={Tao Lei and Yu Zhang and Sida I. Wang and Hui Dai and Yoav Artzi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2018}
}

When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute [paper]

@article{lei2021srupp,
  title={When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute},
  author={Tao Lei},
  journal={arXiv preprint arXiv:2102.12459},
  year={2021}
}

Requirements

Install requirements via pip install -r requirements.txt.


Installation

From source:

SRU can be installed as a regular package via python setup.py install or pip install . (run from the repository root).

From PyPi:

pip install sru

Directly use the source without installation:

Make sure this repo and the CUDA library can be found by the system, e.g.

export PYTHONPATH=path_to_repo/sru
export LD_LIBRARY_PATH=/usr/local/cuda/lib64

Examples

The usage of SRU is similar to nn.LSTM. SRU typically requires more stacked layers than LSTM; we recommend starting with two layers and using more if necessary (see our report for more experimental details).

import torch
from sru import SRU, SRUCell

# input has length 20, batch size 32 and dimension 128
x = torch.randn(20, 32, 128).cuda()

input_size, hidden_size = 128, 128

rnn = SRU(input_size, hidden_size,
    num_layers = 2,          # number of stacking RNN layers
    dropout = 0.0,           # dropout applied between RNN layers
    bidirectional = False,   # bidirectional RNN
    layer_norm = False,      # apply layer normalization on the output of each layer
    highway_bias = -2,        # initial bias of highway gate (<= 0)
)
rnn.cuda()

output_states, c_states = rnn(x)      # forward pass

# output_states is (length, batch size, number of directions * hidden size)
# c_states is (layers, batch size, number of directions * hidden size)
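
In recent releases the forward pass also accepts an optional initial state, which is useful when processing a long sequence in chunks. A minimal sketch, assuming a forward(input, c0) signature where c0 has shape (layers * directions, batch size, hidden size):

c0 = torch.zeros(2, 32, 128).cuda()          # initial state for the 2 layers above
output_states, c_states = rnn(x, c0)         # forward pass with an explicit initial state
output_states, c_states = rnn(x, c_states)   # carry the returned state into the next chunk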

Contributing

Please read and follow the guidelines.

Other Implementations

@musyoku has a very nice SRU implementation in Chainer.

@adrianbg implemented the first CPU version.


Comments
  • Enable both Pytorch native AMP and Nvidia APEX AMP for SRU

    Hi!

    I was happily using SRUs with PyTorch native AMP, but I started experimenting with training using Microsoft DeepSpeed and bumped into an issue.

    Basically, the issue is that FP16 training using DeepSpeed doesn't work for either GRUs or SRUs. However, DeepSpeed training with GRUs does work when using Nvidia APEX AMP.

    So, based on the tips in one of your issues, I started looking into how I could enable both PyTorch native AMP and Nvidia APEX AMP for SRUs, so that I could train SRU-based models using DeepSpeed.

    That is why I created this pull request. Basically, I found that by making the code simpler, I can make SRUs work with both methods of AMP.

    Now amp_recurrence_fp16 can be used for both types of AMP. When amp_recurrence_fp16=True, the tensors are cast to float16; otherwise nothing special happens. I also removed the torch.cuda.amp.autocast(enabled=False) region; I might be wrong, but it seems we don't need it. (A minimal usage sketch follows at the end of this comment.)

    I did some tests with my own code and it works in the different scenarios of interest:

    • Using PyTorch native AMP, not using DeepSpeed
    • Not using PyTorch native AMP, not using DeepSpeed
    • Using Nvidia APEX AMP, using DeepSpeed
    • Not using Nvidia APEX AMP, using DeepSpeed

    It would be beneficial if we could test this with an official SRU repo test, maybe by repurposing language_model/train_lm.py?

    opened by visionscaper 13
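
    A minimal training-loop sketch of the usage discussed in this thread, assuming the amp_recurrence_fp16 constructor flag together with standard torch.cuda.amp utilities; the shapes, optimizer, and loss below are placeholders, not code from this PR:

    import torch
    from sru import SRU

    rnn = SRU(128, 128, num_layers=2, amp_recurrence_fp16=True).cuda()
    optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()

    x = torch.randn(20, 32, 128, device="cuda")          # (length, batch, input size)
    target = torch.randn(20, 32, 128, device="cuda")     # placeholder regression target

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                      # PyTorch native AMP region
        output, state = rnn(x)
        loss = torch.nn.functional.mse_loss(output, target)
    scaler.scale(loss).backward()                        # scaled backward pass
    scaler.step(optimizer)
    scaler.update()
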
  • float16 handling

    When I convert my model, which uses this SRU unit, into a float16-enabled one, it fails. Is SRU not implemented for use in a float16 environment, or would it be hard to fix?

    bug 
    opened by ywatanabe1989 11
  • support GPU inference in torchscript

    This is on the 3.0.0-dev branch for now.

    A non-trivial PR to support GPU inference in torchscript

    • Load CUDA kernels as non-python modules; this is needed for torchscript compilation
    • Refactored CUDA APIs as functions that return output as tensors, instead of procedures that modify some passed-in tensors.
    • Added a workaround in case TS tries to locate and compile CUDA methods on machines that don't have CUDA / GPUs

    The refactored code has passed the forward() & backward() test. I also checked the outputs are the same for the non-torchscript and torchscript versions of the same model.

    opened by taoleicn 8
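
    For context, a minimal sketch of the compilation path this PR targets, using standard torch.jit scripting; the model below is a placeholder SRU stack rather than code from the PR:

    import torch
    from sru import SRU

    model = SRU(128, 128, num_layers=2).cuda().eval()
    scripted = torch.jit.script(model)              # compile the module to TorchScript

    x = torch.randn(20, 32, 128, device="cuda")     # (length, batch, input size)
    with torch.no_grad():
        output, state = scripted(x)                 # GPU inference through the scripted module
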
  • Error unpacking PackedSequence on latest version

    Hello @taolei87, after updating to the latest version my code broke. It works great on the previous 2.3.5 version and with nn.LSTM.

    File "C:\xxx\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
      result = self.forward(*input, **kwargs)
    File "C:\xxx\lib\site-packages\sru\modules.py", line 576, in forward
      mask_pad = (mask_pad >= batch_sizes.view(length, 1)).contiguous()
    RuntimeError: shape '[393, 1]' is invalid for input of size 384
    

    I can see that in the previous version the unpacking code on forward was different:

            input_packed = isinstance(input, nn.utils.rnn.PackedSequence)
            if input_packed:
                input, lengths = nn.utils.rnn.pad_packed_sequence(input)
                max_length = lengths.max().item()
                mask_pad = torch.ByteTensor([[0] * l + [1] * (max_length - l) for l in lengths.tolist()])
                mask_pad = mask_pad.to(input.device).transpose(0, 1).contiguous()
    

    Now is:

    
            orig_input = input
            if isinstance(orig_input, PackedSequence):
                input, batch_sizes, sorted_indices, unsorted_indices = input
                length = input.size(0)
                batch_size = input.size(1)
                mask_pad = torch.arange(batch_size,
                                        device=batch_sizes.device).expand(length, batch_size)
                mask_pad = (mask_pad >= batch_sizes.view(length, 1)).contiguous()
    
    bug 
    opened by bratao 8
  • Increasing GPU Usage each epoch

    I'm trying to implement a model that includes an SRUCell. These are my specs:

    Tesla M60 GPU; torch.version: 0.4.1.post2; torch.cuda.version: 9.0.176

    Although it's training, every epoch the memory usage on the GPU increases until it fills up. I made a toy example where this error occurs:

    import torch
    from torch.autograd import Variable
    from sru import SRUCell
    
    
    batch_size = 5
    seq_len = 60
    epochs = 1000
    cuda = torch.cuda.is_available()
    
    model = SRUCell(100, 100)
    
    if cuda:
        model.cuda(0)
    
    optimizer = torch.optim.Adam([
            {'params':model.parameters()}], lr=1e-3)
    
    loss_function = torch.nn.MSELoss()
        
    seq = Variable(torch.rand(batch_size,seq_len,100))
    y = Variable(torch.rand(batch_size,100))
    
    
    if cuda:
        seq = seq.cuda(0)
        y = y.cuda(0)
    
    
    model.train()
    
    for e in range(epochs):
        model.zero_grad()
        
        h = Variable(torch.zeros(batch_size, 100))
        c = Variable(torch.zeros(batch_size, 100))
        
        if cuda:
            h = h.cuda(0)
            c = c.cuda(0)
        
        for i in range(seq_len):
            x = seq[:,i,:]
            h, c = model(x, c)
        loss = loss_function(h, y)
        loss.backward()
        optimizer.step()
        print('Epoch: {} - Loss: {}'.format(e, loss))
    
    opened by santiag0m 8
  • Can i put hidden states in sru cell forward like in vanilla pytorch?

    In vanilla PyTorch it works like this:

    rnn = nn.LSTMCell(10, 20)
    input = torch.randn(6, 3, 10)
    hx = torch.randn(3, 20)
    cx = torch.randn(3, 20)
    output = []
    for i in range(6):
        hx, cx = rnn(input[i], (hx, cx))
        output.append(hx)
    

    How can I do the same for an SRU cell?

    opened by hadaev8 7
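
    A hedged sketch of the closest equivalent with the sequence-level SRU module, assuming forward() accepts an optional initial state c0 as in recent releases; SRU consumes the whole sequence at once, so the per-step Python loop is not needed:

    import torch
    from sru import SRU

    rnn = SRU(10, 20, num_layers=1).cuda()
    x = torch.randn(6, 3, 10, device="cuda")     # (length, batch, input size)
    c0 = torch.zeros(1, 3, 20, device="cuda")    # (layers, batch, hidden size)

    output, c = rnn(x, c0)                       # output holds the hidden state at every step
    output, c = rnn(x, c)                        # the returned state can be fed back in
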
  • AttributeError when preprocessing data for DrQA

    First I ran download.sh, and it successfully downloaded GloVe and the train/dev JSONs for SQuAD. However, python prepro.py gave me this:

    Traceback (most recent call last):
      File "prepro.py", line 243, in <module>
        vocab_tag = list(nlp.tagger.tag_names)
    AttributeError: 'Tagger' object has no attribute 'tag_names'
    

    My spaCy version is 2.0.3, and it seems like something broke in the update from the 1.x version pinned in requirements; I didn't succeed in fixing it myself. Any suggestions?

    opened by mojesty 7
  • Calculating Backwards For SRU Results in CUDA error.

    I'm not sure why, but I'm seeing this error when I try to compute the backward pass. Have you come across this during your debugging?

    Traceback (most recent call last):
      File "gan_language.py", line 341, in <module>
        G.backward(one)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
        variables, grad_variables, retain_graph)
      File "/home/nick/wgan-gp/sru/cuda_functional.py", line 417, in backward
        stream=SRU_STREAM
      File "cupy/cuda/function.pyx", line 129, in cupy.cuda.function.Function.__call__ (cupy/cuda/function.cpp:4010)  File "cupy/cuda/function.pyx", line 111, in cupy.cuda.function._launch (cupy/cuda/function.cpp:3647)
      File "cupy/cuda/driver.pyx", line 127, in cupy.cuda.driver.launchKernel (cupy/cuda/driver.cpp:2541)
      File "cupy/cuda/driver.pyx", line 62, in cupy.cuda.driver.check_status (cupy/cuda/driver.cpp:1446)
    cupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_HANDLE: invalid resource handle
    
    opened by NickShahML 7
  • Speed up data loading / batching for ONE BILLION WORD experiment

    The data loading was inefficient and was found to be the bottleneck of the BILLION WORD training. This PR rewrites the sharding (which data goes to which GPU / training process) and improves the training speed significantly.

    The figure compares a previous run and a new test run. We see a 40% reduction in training time.

    This means our reported training efficiency will be much stronger, improving from 59 GPU days to 36 GPU days, and 4x more efficient than the FairSeq Transformer results.

    opened by taoleicn 6
  • Different input dimension compared to output dimension

    Hi, I'm trying to implement a naive version of this paper in Keras, and was wondering how the case n_in != n_out is handled.

    I went through the code a few times and couldn't understand the element-wise multiplication of (1 - r_t) with x_t if x_t has a different shape than r_t.

    question 
    opened by titu1994 6
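
    A hedged reading of the released implementation (an assumption, not a confirmation from the authors): when n_in != n_out, the highway term appears to use an extra learned projection of the input rather than x_t itself, so the two factors always have matching shapes:

    h_t = r_t * g(c_t) + (1 - r_t) * x'_t,   where x'_t = x_t if n_in == n_out, and x'_t = W' x_t otherwise (W' being an additional projection matrix).
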
  • support GPU inference in torchscript model for v2.5 / v2.6

    This PR works for the master branch and the v2.5 / v2.6 releases.

    A non-trivial PR to support GPU inference in torchscript

    • Load CUDA kernels as non-python modules; this is needed for torchscript compilation
    • Refactored CUDA APIs as functions that return output as tensors, instead of procedures that modify some passed-in tensors.
    • Added a workaround in case TS tries to locate and compile CUDA methods on machines that don't have CUDA / GPUs
    • The refactored code has passed the forward() & backward() test.
    • I also checked the outputs are the same for the non-torchscript and torchscript versions of the same model.
    opened by taoleicn 5
  • Mixed Precision Training

    Hi,

    First of all, I want to thank you for your great work. I'm using SRUs for speech enhancement; they do very well at a reasonable computational cost.

    I would like to know whether it is possible to train SRUs in mixed-precision mode. I tried to enable it by setting precision=16 in the PyTorch Lightning trainer, but that didn't do the trick.

    Kind regards, Zadagu

    opened by Zadagu 1
  • Any documentation on using SRU++ ?

    Hello, I've read and really appreciate your team's wonderful work on SRU++. I want to use this architecture in other tasks, but I'm having trouble finding documentation on SRU++, i.e., how I can use SRU++ the same way as SRU (calling it directly from the sru library after installing with pip install sru). I have looked into the dev-3.0.0 branch, which seems to be the latest updated branch, but I still have no clue how to call and integrate SRU++ modules into my custom-defined PyTorch modules. Could you help me?

    opened by thangld201 1
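
    A hypothetical usage sketch, assuming the dev-3.0.0 branch exposes an SRUpp module taking (input_size, hidden_size, proj_size) and an SRU-like forward pass; the names and signature below are assumptions and should be checked against the branch source:

    import torch
    from sru import SRUpp                           # assumption: exported on dev-3.0.0

    layer = SRUpp(512, 512, 128, num_layers=2).cuda()   # proj_size=128 is the attention/projection dim (assumed)

    x = torch.randn(20, 32, 512, device="cuda")     # (length, batch, input size)
    out = layer(x)
    output, state = out[0], out[1]                  # assumption: output and state come first in the return value
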
  • FAILED: sru_cuda_kernel.cuda.o

    When I run the example, I hit this issue: FAILED: sru_cuda_kernel.cuda.o, and in the end it reports ninja: build stopped: subcommand failed. What should I do to solve this problem?

    opened by xianyu-123 0
  • Avoid unintended eager cuda initialization

    We noticed that the package initialization for sru eagerly triggers CUDA initialization because of the following chain of module imports: sru.modules -> sru.ops -> cuda_functional, where this last module executes the load function of torch.utils.cpp_extension.

    This was detected because of issues when running with the server framework in SUBPROCESS_MODE, which forks a new process to run the model. We got an error complaining that CUDA had already been initialized in the parent process, which was unnecessary because the parent process is not meant to run inference on the model.

    This PR makes this loading lazier; more concretely, we changed the code in sru.modules to avoid the eager import of sru.ops and instead postpone it to the instantiation of the first SRUCell.

    The changes in this PR have been tested by checking out this branch on an AWS instance with a GPU and running pytest -sv test, which resulted in 141 passed, 161 warnings, and no failures. So we understand this is working as expected in both CPU and GPU settings.

    opened by dkasapp 0
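
    A generic sketch of the deferral described above (illustrative only, not the actual diff; sru.ops is the module named in this PR):

    class LazySRUCellSketch:
        """Placeholder class illustrating the pattern; the real change lives in sru.modules."""

        def __init__(self):
            # The heavy import is deferred to construction time, so merely importing
            # the sru package no longer loads the CUDA extension in the parent process.
            from sru import ops
            self._ops = ops
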
  • Unknown builtin op: sru_cuda::sru_bi_forward_simple

    When using a bidirectional SRU, regular usage seems to be fine, and compilation to torchscript proceeds without error, but upon trying to infer with the compiled torchscript I get:

    Unknown builtin op: sru_cuda::sru_bi_forward_simple.

    Using PyTorch 1.10, sru 2.6.0, CUDA 11.3.

    opened by ctlaltdefeat 2