ReConsider

ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhin et al., 2020).

The technical details are described in:

@inproceedings{iyer2020reconsider,
 title={RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering},
 author={Iyer, Srinivasan and Min, Sewon and Mehdad, Yashar and Yih, Wen-tau},
 booktitle={NAACL},
 year={2021}
}

https://arxiv.org/abs/2010.10757

LICENSE

The majority of ReConsider is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: HuggingFace Transformers and the HotpotQA utilities are licensed under the Apache 2.0 license.

Reproducing results from the paper

The ReConsider models in the paper are trained on the top-100 predictions from the DPR Retriever + Reader model (Karpukhin et al., 2020) on four datasets: NaturalQuestions, TriviaQA, Trec, and WebQ.

We outline all the steps here for NaturalQuestions, but the same steps can be followed for the other datasets.

  1. Environment Setup
pip install -r requirements.txt
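
If you prefer an isolated environment, a minimal sketch (assuming Python 3 with the standard venv module; not required by the repo) is:

# optional: create and activate an isolated environment before installing the pinned dependencies
python3 -m venv reconsider-env
source reconsider-env/bin/activate
pip install -r requirements.txt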
  2. [optional] Get the top-100 retrieved passages for each question using the best DPR retriever model for the NQ train, dev, and test sets. We provide these files, but alternatively, you can obtain them by training the DPR retriever from scratch (see the DPR repository). You can skip this entire step if you are only running ReConsider.
wget http://dl.fbaipublicfiles.com/reconsider/dpr_retriever_outputs/{nq|webq|trec|tqa}-{train|dev|test}-multi.json
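
The {a|b} notation in the URL above stands for one choice per download rather than literal shell syntax; a small sketch that loops over the pattern and fetches all twelve retriever output files (assuming every dataset/split combination in the pattern is hosted) could be:

# download all DPR retriever outputs (4 datasets x 3 splits = 12 files)
for dset in nq webq trec tqa; do
  for split in train dev test; do
    wget http://dl.fbaipublicfiles.com/reconsider/dpr_retriever_outputs/${dset}-${split}-multi.json
  done
done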
  3. [optional] Get the top-100 predictions from the DPR reader (Karpukhin et al., 2020) executed on the output of the DPR retriever, on the NQ train, dev, and test sets. We provide these files, but alternatively, you can obtain them by training the DPR reader from scratch (see the DPR repository). You can skip this entire step if you are only running ReConsider.
wget http://dl.fbaipublicfiles.com/reconsider/dpr_reader_outputs/ttttt_{train|dev|test}.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json
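
As above, each placeholder must be substituted with one concrete value; for example, the NQ training-set predictions from the BERT-base reader would be fetched with:

# NQ train split, BERT-base reader predictions
wget http://dl.fbaipublicfiles.com/reconsider/dpr_reader_outputs/ttttt_train.nq.bbase.output.nopp.title.json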
  4. [optional] Convert the DPR reader predictions to the marked-passage format required by ReConsider.
python prepare_marked_dataset.py --answer_json ttttt_train.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json --orig_json {nq|webq|trec|tqa}-train-multi.json --out_json paraphrase_selection_train.{nq|tqa|trec|webq}.{bbase|blarge}.100.qp_mp.nopp.title.json --train_M 100

python prepare_marked_dataset.py --answer_json ttttt_dev.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json --orig_json {nq|webq|trec|tqa}-dev-multi.json --out_json paraphrase_selection_dev.{nq|tqa|trec|webq}.{bbase|blarge}.5.qp_mp.nopp.title.json --dev --test_M 5

python prepare_marked_dataset.py --answer_json ttttt_test.{nq|tqa|trec|webq}.{bbase|blarge}.output.nopp.title.json --orig_json {nq|webq|trec|tqa}-test-multi.json --out_json paraphrase_selection_test.{nq|tqa|trec|webq}.{bbase|blarge}.5.qp_mp.nopp.title.json --dev --test_M 5
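
For concreteness, the NQ / BERT-base training-split conversion, with the placeholders above filled in, would be:

# convert the NQ BERT-base reader predictions (train split) to the marked-passage format
python prepare_marked_dataset.py --answer_json ttttt_train.nq.bbase.output.nopp.title.json --orig_json nq-train-multi.json --out_json paraphrase_selection_train.nq.bbase.100.qp_mp.nopp.title.json --train_M 100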

We also provide these files, so you don't need to execute these commands. You can directly download the output files using:

wget http://dl.fbaipublicfiles.com/reconsider/reconsider_inputs/paraphrase_selection_{train|dev|test}.{nq|tqa|trec|webq}.{bbase|blarge}.qp_mp.nopp.title.json
  5. Train ReConsider models

For Base models:
dset={nq|tqa|trec|webq}
python main.py --do_train --output_dir ps.$dset.bbase --train_file paraphrase_selection_train.$dset.bbase.qp_mp.nopp.title.json --predict_file paraphrase_selection_dev.$dset.bbase.qp_mp.nopp.title.json --train_batch_size 16 --predict_batch_size 144 --eval_period 500 --threads 80 --pad_question --max_question_length 0 --max_passage_length 240 --train_M 30 --test_M 5

For Large models:

dset={nq|tqa|trec|webq}
python main.py --do_train --output_dir ps.$dset.blarge --train_file paraphrase_selection_train.$dset.blarge.qp_mp.nopp.title.json --predict_file paraphrase_selection_dev.$dset.blarge.qp_mp.nopp.title.json --train_batch_size 16 --predict_batch_size 144 --eval_period 500 --threads 80 --pad_question --max_question_length 0 --max_passage_length 240 --train_M 10 --test_M 5 --bert_name bert-large-uncased

Note: If training on Trec or WebQ, initialize the model with the NQ-trained model of the corresponding size by adding the parameter --checkpoint $model_nq_{bbase|blarge}. You can either train this NQ model using the commands above, or download it directly as described below.

We also provide our pre-trained models for download, using this script:

python download_reconsider_models.py --model {nq|trec|tqa|webq}_{bbase|blarge}
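
As a sketch of the note above, a WebQ base training run warm-started from the NQ base model could look like the following; the checkpoint path assigned to $model_nq_bbase is illustrative and should point to wherever the trained or downloaded NQ base model was saved:

# WebQ base training, warm-started from the NQ base checkpoint (path is illustrative)
dset=webq
model_nq_bbase=/path/to/nq_bbase_checkpoint
python main.py --do_train --output_dir ps.$dset.bbase --train_file paraphrase_selection_train.$dset.bbase.qp_mp.nopp.title.json --predict_file paraphrase_selection_dev.$dset.bbase.qp_mp.nopp.title.json --train_batch_size 16 --predict_batch_size 144 --eval_period 500 --threads 80 --pad_question --max_question_length 0 --max_passage_length 240 --train_M 30 --test_M 5 --checkpoint $model_nq_bbase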
  6. Predict on the test set using ReConsider models
python main.py --do_predict --output_dir /tmp/ --predict_file paraphrase_selection_test.{nq|trec|webq|tqa}.{bbase|blarge}.qp_mp.nopp.title.json --checkpoint {path_to_model} --predict_batch_size 72 --threads 80 --n_paragraphs 100 --verbose --prefix test_ --pad_question --max_question_length 0 --max_passage_length 240 --test_M 5 --bert_name {bert-base-uncased|bert-large-uncased}
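
For concreteness, an NQ / BERT-base instantiation might look like the following; the checkpoint variable is a hypothetical placeholder for wherever your trained or downloaded model lives:

# evaluate the NQ BERT-base ReConsider model on the NQ test set (checkpoint path is illustrative)
model=/path/to/reconsider_nq_bbase_checkpoint
python main.py --do_predict --output_dir /tmp/ --predict_file paraphrase_selection_test.nq.bbase.qp_mp.nopp.title.json --checkpoint $model --predict_batch_size 72 --threads 80 --n_paragraphs 100 --verbose --prefix test_ --pad_question --max_question_length 0 --max_passage_length 240 --test_M 5 --bert_name bert-base-uncased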