VID-Fusion: Robust Visual-Inertial-Dynamics Odometry for Accurate External Force Estimation

Authors: Ziming Ding, Tiankai Yang, Kunyi Zhang, Chao Xu, and Fei Gao from the ZJU FAST Lab.

0. Overview

VID-Fusion estimates odometry and external force simultaneously with a tightly coupled visual-inertial-dynamics state estimator for multirotors. Like VIMO, we formulate a new factor within the optimization-based visual-inertial odometry system VINS-Mono. Unlike VIMO, we compare the dynamics model with the IMU measurements to observe the external force, and we formulate an external force preintegration analogous to IMU preintegration. The thrust and external force can therefore be added to a classical VIO system such as VINS-Mono as an additional factor.
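For intuition, the relation behind the force observation can be sketched as follows (our own simplified notation, with accelerometer bias and noise omitted; see the paper for the exact formulation). The accelerometer measures the mass-normalized non-gravitational force acting on the body, so with collective rotor thrust T along the body z-axis e_3:

\[
\mathbf{f}_{ext}^{B} \;\approx\; m\,\mathbf{a}_{m} \;-\; T\,\mathbf{e}_{3},
\]

where m is the vehicle mass, \(\mathbf{a}_{m}\) the accelerometer measurement, and \(\mathbf{f}_{ext}^{B}\) the external force expressed in the body frame. Comparing this dynamics-based prediction with the IMU measurements over a window is what allows the external force to be preintegrated analogously to an IMU factor.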

We present:

  • An external force preintegration term for back-end optimization.
  • A complete, robust, tightly-coupled Visual-Inertial-Dynamics state estimator.
  • Demonstration of robust and accurate external force and pose estimation.

Simultaneously estimating the external force and odometry within a sliding window.

Related Paper: VID-Fusion: Robust Visual-Inertial-Dynamics Odometry for Accurate External Force Estimation, Ziming Ding, Tiankai Yang, Kunyi Zhang, Chao Xu, and Fei Gao, ICRA 2021.

Video Links: bilibili or YouTube.

1. Prerequisites

Our software is developed and tested only on Ubuntu 16.04 with ROS Kinetic (ROS Installation) and OpenCV 3.3.1.

Ceres Solver (Ceres Installation) is needed.

2. Build on ROS

cd your_catkin_ws/src
git clone [email protected]:ZJU-FAST-Lab/VID-Fusion.git
cd ..
catkin_make --pkg quadrotor_msgs  # build the message package first
catkin_make

3. Run in vid-dataset

cd your_catkin_ws
source devel/setup.bash
roslaunch vid_estimator vid_realworld.launch
roslaunch benchmark_publisher publish.launch  # optional
rosbag play YOUR_PATH_TO_DATASET

We provide experiment data for testing; the vid-experiment-dataset is given as ROS bag files. The dataset covers two scenarios: tarj8_with_gt and line_with_force_gt.

  • tarj8_with_gt is a dataset with odometry ground truth. The drone flies with a payload.

  • line_with_force_gt is a dataset with external force ground truth. The drone is connected to a force sensor via an elastic rope.

A new visual-inertial-dynamics dataset with richer scenarios is provided in VID-Dataset.

The drone information should be provided in VID-Fusion/config/experiments/drone.yaml. Note that you should set the drone parameters, such as the mass and the thrust_coefficient, to match the corresponding bag file.

For the benchmark comparison, we lightly modify the benchmark_publisher from VINS-Mono to compare the estimated path, and add a visualization of the estimated external force against the ground-truth force. The ground-truth data is in VID-Fusion/benchmark_publisher/data. You can switch between path and force comparison via cur_kind in publish.launch (0 for path comparison, 1 for force comparison).

For model identification, we collect hovering data. For the two data bags, tarj8_with_gt and line_with_force_gt, we also provide hovering data for thrust_coefficient identification. After system identification, copy the thrust_coefficient result into VID-Fusion/config/experiments/drone.yaml:

roslaunch system_identification system_identify.launch 
rosbag play YOUR_PATH_TO_DATASET
#copy the thrust_coefficient result to VID-Fusion/config/experiments/drone.yaml
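For intuition, hovering data suffices because at hover the collective thrust balances gravity. Under a common quadratic thrust model T = k_T \sum_i \omega_i^2 (our assumption for illustration; the exact model and input signal used by the identification tool may differ, e.g. normalized throttle instead of rotor speed):

\[
k_T \;\approx\; \frac{m\,g}{\sum_i \bar{\omega}_i^{\,2}},
\]

where m is the mass, g the gravitational acceleration, and \(\bar{\omega}_i\) the averaged rotor speeds while hovering.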

The estimated external force is the resultant of all forces except rotor thrust and gravity. You can set force_wo_rotor_drag to 1 in the config file to subtract the rotor drag force from the estimated force; the related drag coefficients k_d_x and k_d_y must then be provided.
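For reference, such drag coefficients typically enter a linear rotor-drag model of roughly the following form (a common formulation given here for illustration; the exact convention used in the code may differ):

\[
\mathbf{f}_{drag} \;\approx\; -\,\mathbf{R}\,\mathrm{diag}(k_{d_x},\,k_{d_y},\,0)\,\mathbf{R}^{\top}\mathbf{v},
\]

where \(\mathbf{R}\) is the body-to-world rotation and \(\mathbf{v}\) the velocity in the world frame. With force_wo_rotor_drag set to 1, a term of this kind is subtracted from the estimated external force.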

4. Acknowledgements

We replace the model preintegration and dynamics factor of VIMO, and formulate the proposed dynamics and external force factor on top of the source code of VIMO and VINS-Mono. Ceres Solver is used for back-end non-linear optimization, DBoW2 for loop detection, and a generic camera model for projection. The monocular initialization, online extrinsic calibration, failure detection and recovery, loop detection, global pose graph optimization, map merging, pose graph reuse, online temporal calibration, and rolling-shutter support also come from VINS-Mono.

5. Licence

The source code is released under the GPLv3 license.

6. Maintenance

For any technical issues, please contact Ziming Ding ([email protected]) or Fei GAO ([email protected]).

For commercial inquiries, please contact Fei GAO ([email protected]).
