Seeing All the Angles: Learning Multiview Manipulation Policies for Contact-Rich Tasks from Demonstrations

Overview

Trevor Ablett, Daniel (Yifan) Zhai, Jonathan Kelly

Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’21)

Paper website: https://papers.starslab.ca/multiview-manipulation/
arXiv paper: https://arxiv.org/abs/2104.13907
DOI: https://doi.org/10.1109/IROS51168.2021.9636440


This work was motivated by a relatively simple question: will increasingly popular end-to-end visuomotor policies work on a mobile manipulator, where the angle of the base will not be repeatable from one execution of a task to the next? We conducted a variety of experiments showing that, naively, policies trained on fixed-base data with imitation learning do not generalize to novel base poses, and we generated multiview datasets and corresponding multiview policies to remedy the problem.

This repository contains the source code for reproducing our results and plots.

Requirements

We have only tested with Python 3.7. Our simulated environments use PyBullet, and our training code uses TensorFlow 2.x, specifically relying on our manipulator-learning package. All requirements (for the simulated environments) are automatically installed by following the Setup section below.

Our policies also use the groups argument of TensorFlow's Conv2D layer, which requires a GPU.
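
As a rough illustration (not code from this repository), a grouped convolution in Keras looks like the snippet below; on a CPU-only install, calling the layer raises an error because grouped convolutions are currently only implemented for GPU:

# minimal sketch of a grouped Conv2D layer (groups > 1 requires a GPU)
import tensorflow as tf

x = tf.random.normal((1, 64, 64, 4))  # dummy image batch with 4 input channels
conv = tf.keras.layers.Conv2D(filters=8, kernel_size=3, groups=2)
y = conv(x)  # raises an error on CPU-only TensorFlow installs
print(y.shape)  # (1, 62, 62, 8)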

Setup

Preliminary note on TensorFlow install

This repository uses TensorFlow with GPU support, which can, of course, be a bit of a pain to install. If you already have it installed, you can skip this section. Otherwise, we have found the following procedure to work:

  1. Install conda.
  2. Create a new conda env to use for this work and activate it.
  3. Run the following to install a version of TensorFlow that may work with conda:
conda install cudatoolkit cudnn
pip install tensorflow==2.6.* tensorflow-probability==0.14
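
To confirm that this TensorFlow install can actually see your GPU (required for the grouped convolutions our policies use), a quick check like the following should work; the exact output will depend on your driver and CUDA setup:

# verify that a GPU is visible to TensorFlow
import tensorflow as tf
print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))  # should list at least one GPU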

Now you can continue with the regular installation.

Regular Installation

Clone this repository and install it in your Python environment with pip:

git clone [email protected]:utiasSTARS/multiview-manipulation.git && cd multiview-manipulation
pip install -e .
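
As an optional sanity check that the editable install worked, the package (named multiview_manipulation, per the folder structures shown later in this README) should now be importable from a Python interpreter:

# should import without errors after `pip install -e .`
import multiview_manipulation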

A Note on Environment Names

The simulated environments that we use are all available in our manipulator-learning package (a minimal usage sketch follows this list) and are called:

  • ThingLiftXYZImage
  • ThingLiftXYZMultiview
  • ThingStackSameImageV2
  • ThingStackSameMultiviewV2
  • ThingPickAndInsertSucDoneImage
  • ThingPickAndInsertSucDoneMultiview
  • ThingDoorImage
  • ThingDoorMultiview
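
If you just want to poke at one of these environments interactively, a minimal sketch along these lines should work; it assumes the environments are exposed under manipulator_learning.sim.envs, so check the manipulator-learning README for the exact import path and constructor arguments:

# minimal sketch: load a simulated environment by name and take one random step
import manipulator_learning.sim.envs as manlearn_envs

env = getattr(manlearn_envs, 'ThingDoorMultiview')()
obs = env.reset()
next_obs, reward, done, info = env.step(env.action_space.sample())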

The real environments we use with our mobile manipulator will, of course, be harder to reproduce, but were generated using our thing-gym-ros repository and are called:

  • ThingRosPickAndInsertCloser6DOFImageMB
  • ThingRosDrawerRanGrip6DOFImageMB
  • ThingRosDoorRanGrip6DOFImage
  • ThingRosDoorRanGrip6DOFImageMB

Running and Training Behavioural Cloning (BC) policies

The script in this repository can train and test (multiple) policies in one shot.

  1. Choose one of:

    i. Train and test policies all at once. Download and uncompress any of the simulated expert data (generated using an HTC Vive hand tracker) from this Google Drive Folder.
    ii. Generate policies using the procedure outlined in the following section.
    iii. Download policies from this Google Drive Folder. We'll assume that you downloaded ThingDoorMultiview_bc_models.zip.

    If you choose i., your folder structure should be:

     .
     └── multiview-manipulation/
         ├── multiview_manipulation/
         └── data/
             ├── bc_models/
             └── demonstrations/
                 ├── ThingDoorMultiview/
                     ├── depth/
                     ├── img/
                     ├── data.npz
                     └── data_swp.npz
    

    If you choose ii. or iii., your folder structure should be:

    .
    └── multiview-manipulation/
        ├── multiview_manipulation/
        └── data/
            └── bc_models/
                ├── ThingDoorMultiview_25_trajs_1/
                ├── ThingDoorMultiview_25_trajs_2/
                ├── ThingDoorMultiview_25_trajs_3/
                ├── ThingDoorMultiview_25_trajs_4/
                ├── ThingDoorMultiview_25_trajs_5/   
                ├── ThingDoorMultiview_50_trajs_1/   
                └── ...   
    
  2. Modify the following options in multiview_manipulation/policies/test_policies.py to match your system and selected data (a hedged sketch of these settings follows this procedure):

    • main_data_dir: top level data directory (default: data)
    • bc_models_dir: top level trained BC models directory (default: bc_models)
    • expert_data_dir: top level expert data directory (default: demonstrations, only required if option i. above was selected).
  3. Change the following options to choose whether to test policies in a different environment from the one they were trained in (e.g., as stated in the paper, you can test a ThingDoorMultiview policy in both ThingDoorMultiview and ThingDoorImage):

    • env_name: environment to test policy in
    • policy_env_name: name of the environment in which the policy's training data was generated.
  4. Modify the options for choosing which policies to train/test:

    • bc_ckpts_num_traj: The different numbers of trajectories to use for training/testing policies (default: range(200, 24, -25))
    • seeds: Which seeds to use (default: [1, 2, 3, 4, 5])
  5. Run the script:

python multiview_manipulation/policies/test_policies.py
  6. Your results will show up in data/bc_results/{env_name}_{env_seed}_{experiment_name}.
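
As a hedged example, the settings described above might end up looking like this in test_policies.py (the variable names are the option names listed in this README; exact placement and defaults in the script may differ):

# test a multiview-trained door policy in the fixed-base door environment
main_data_dir = 'data'
bc_models_dir = 'bc_models'
expert_data_dir = 'demonstrations'       # only needed if you chose option i.
env_name = 'ThingDoorImage'              # environment to test the policy in
policy_env_name = 'ThingDoorMultiview'   # environment the policy was trained for
bc_ckpts_num_traj = range(200, 24, -25)  # 200, 175, ..., 25 trajectories
seeds = [1, 2, 3, 4, 5]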

Training policies with Behavioural Cloning (BC) only

  1. Download and uncompress any of the simulated expert data from this Google Drive Folder. We'll assume that you downloaded ThingDoorMultiview.tar.gz and uncompressed it as ThingDoorMultiview.

  2. Modify the following options in multiview_manipulation/policies/gen_policies.py to match your system and selected data:

    • bc_models_dir: top level directory for trained BC models (default: data/bc_models)
    • expert_data_dir: top level directory for expert data (default: data/demonstrations)
    • dataset_dir: the name of the directory containing depth/, img/, data.npz and data_swp.npz.
    • env_str: The string corresponding to the name of the environment (only used for the saved BC policy name)

    For example, if you're using the default folder structure, your setup should look like this:

    .
    └── multiview-manipulation/
        ├── multiview_manipulation/
        └── data/
            ├── bc_models/
            └── demonstrations/
                ├── ThingDoorMultiview/
                    ├── depth/
                    ├── img/
                    ├── data.npz
                    └── data_swp.npz
    
  3. Modify the options for choosing which policies to train:

    • bc_ckpts_num_traj: The different numbers of trajectories to use for training policies (default: range(25, 201, 25))
    • seeds: Which seeds to train for (default: [1, 2, 3, 4, 5])
  4. Run the file:

python multiview_manipulation/policies/gen_policies.py
  5. Your trained policies will show up in individual folders under the bc_models folder as {env_str}_{num_trajs}_trajs_{seed}/.
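
For reference, the default sweep over trajectory counts and seeds maps to output folders as in this small sketch of the naming pattern described above:

# print the expected output folder for every (num_trajs, seed) combination
env_str = 'ThingDoorMultiview'
for num_trajs in range(25, 201, 25):  # 25, 50, ..., 200
    for seed in [1, 2, 3, 4, 5]:
        print(f"data/bc_models/{env_str}_{num_trajs}_trajs_{seed}/")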

Collecting Demonstrations

All of our demonstrations were collected using the collect_demos.py script from the manipulator-learning package and an HTC Vive hand tracker. To collect demonstrations, you would run, for example:

git clone [email protected]:utiasSTARS/manipulator-learning.git && cd manipulator-learning
pip install -e .
pip install -r device_requirements.txt
python manipulator_learning/learning/imitation/collect_demos.py --device vr --directory demonstrations --demo_name ThingDoorMultiview01 --environment ThingDoorMultiview

You can also try using the keyboard with:

python manipulator_learning/learning/imitation/collect_demos.py --device keyboard --directory demonstrations --demo_name ThingDoorMultiview01 --environment ThingDoorMultiview

More instructions can be found in the manipulator-learning README.

Real Environments

Although it would be nearly impossible to exactly reproduce our results in the real environments, the code we used to generate them can be found in our thing-gym-ros repository.

Citation

If you use this in your work, please cite:

@inproceedings{2021_Ablett_Seeing,
    address = {Prague, Czech Republic},
    author = {Trevor Ablett and Yifan Zhai and Jonathan Kelly},
    booktitle = {Proceedings of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems {(IROS'21)}},
    date = {2021-09-27/2021-10-01},
    month = {Sep. 27--Oct. 1},
    site = {https://papers.starslab.ca/multiview-manipulation/},
    title = {Seeing All the Angles: Learning Multiview Manipulation Policies for Contact-Rich Tasks from Demonstrations},
    url = {http://arxiv.org/abs/2104.13907},
    video1 = {https://youtu.be/oh0JMeyoswg},
    year = {2021}
}