i3DMM: Deep Implicit 3D Morphable Model of Human Heads

CVPR 2021 (Oral)

arXiv | Project Page

Teaser

This project is the official implementation of our work, i3DMM. Much of our code is based on DeepSDF's repository. We thank Park et al. for making their code publicly available.

The pretrained model is included in this repository.

Setup

  1. To get started, clone this repository into a local directory.
  2. Install Anaconda, if you don't already have it.
  3. Create a conda environment in the path with the following command:
     conda create -p ./i3dmm_env
  4. Activate the conda environment from the same folder:
     conda activate ./i3dmm_env
  5. Use the following commands to install the required packages:
     conda install pytorch=1.1 cudatoolkit=10.0 -c pytorch
     pip install opencv-python trimesh[all] scikit-learn mesh-to-sdf plyfile

Preparing Data

Rigid Alignment

We assume that all the input data is rigidly aligned. Therefore, we provide reference 3D landmarks to align your test/training data. Please use the centroids.txt file in the model folder to align your data to these landmarks. The landmarks in the file are in the following order:

  1. Right eye left corner
  2. Right eye right corner
  3. Left eye left corner
  4. Left eye right corner
  5. Nose tip
  6. Right lips corner
  7. Left lips corner
  8. Point on the chin

The following image shows these landmarks. The centroids.txt file contains the 3D landmarks, one per line, in the order listed above. Each file consists of 8 lines, and each line contains the x, y, and z coordinates of a landmark separated by spaces.

Please see our paper for more information on rigid alignment.
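
For reference, below is a minimal alignment sketch, assuming numpy and trimesh are available and that the 8 landmarks of your own scan are stored in the same order and format as centroids.txt; the file paths are only illustrative.

import numpy as np
import trimesh

def load_landmarks(path):
    # centroids.txt format: 8 lines, each "x y z", in the landmark order listed above
    return np.loadtxt(path).reshape(8, 3)

def rigid_transform(src, dst):
    # Least-squares rotation + translation that maps src onto dst (Kabsch algorithm)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

reference = load_landmarks("model/centroids.txt")     # reference landmarks from this repo
scan_lms = load_landmarks("my_scan/centroids.txt")    # landmarks annotated on your scan
R, t = rigid_transform(scan_lms, reference)

mesh = trimesh.load("my_scan/scan.obj", process=False)
mesh.vertices = mesh.vertices @ R.T + t               # apply the same transform to the mesh
mesh.export("my_scan/scan_aligned.obj")

If your scans are at a different scale than the reference landmarks, an additional scale factor may be needed; please refer to the paper for details.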

Dataset

We closely follow the ShapeNet dataset's folder structure. Please see the mesh folder in the dataset for an example. The dataset is assumed to be organized as follows:


   
    <dataset name>/<class name>/<model name>/models/<model name>.obj
    <dataset name>/<class name>/<model name>/models/<model name>.mtl
    <dataset name>/<class name>/<model name>/models/<texture name>.jpg
    <dataset name>/<class name>/<model name>/models/centroids.txt
    <dataset name>/<class name>/<model name>/models/centroidsEars.txt
The model name should follow a specific structure, xxxxx_eyy, where xxxxx is a 5-character identifier for an identity and yy is a unique number specifying the expression or hairstyle. We use e01-e10 for different expressions, where e07 is the neutral expression, and e11-e13 for hairstyles in the neutral expression. The remaining expression identifiers are for test expressions.

The centroids.txt file contains landmarks as described in the alignment step. Additionally, to train the model, one can also provide a centroidsEars.txt file, which contains the 3D ear landmarks in the following order:

  1. Left ear top
  2. Left ear left
  3. Left ear bottom
  4. Left ear right
  5. Right ear top
  6. Right ear left
  7. Right ear bottom
  8. Right ear right

These 8 landmarks are shown in the following image. The file is organized similarly to centroids.txt. Please see the mesh folder in the dataset for an example.

Once the dataset is prepared, create the splits as shown in model/headModel/splits/*.json files. These files are similar to the splits files in DeepSDF.
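
As a rough sketch of how such a split file can be generated (the dataset, class, and model names below are placeholders and the nesting mirrors the DeepSDF-style splits referenced above; adjust everything to your data):

import json

# Placeholder names; replace with your <dataset name>, <class name>, and model IDs.
split = {
    "heads_dataset": {
        "heads": [
            "ABCDE_e01",
            "ABCDE_e07",
            "FGHIJ_e11",
        ]
    }
}

with open("model/headModel/splits/my_train_split.json", "w") as f:
    json.dump(split, f, indent=2)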

Preprocessing

The following commands preprocess the meshes from the dataset described above and place them in the data folder. The commands must be run from the "model" folder. To preprocess the training data:

python preprocessData.py --samples_directory ./data --input_meshes_directory <path to the dataset> -e headModel -s Train

To preprocess test data:

python preprocessData.py --samples_directory ./data --input_meshes_directory <path to the dataset> -e headModel -s Test

'headModel' is the folder containing the network settings in 'specs.json'. This json file also contains the split file and preprocessed data paths. The split files are in model/headModel/splits/*.json; they indicate which files are used for testing, training, and reference shape initialisation.

Training the Model

Once the data is preprocessed, one can train the model with the following command.

python train_i3DMM.py -e headModel

When working with a large dataset, please consider using the --batch_split option with a power of 2 (2, 4, 8, 16, etc.). The following command is an example.

python train_i3DMM.py -e headModel --batch_split 2

Additionally, if one wants to use landmark supervision or ear constraints for long hair (see the paper for details), please export the face centroids and ear centroids as dictionaries in .npy files (8 face landmarks: eightCentroids.npy, ear landmarks: gtEarCentroids.npy).

An example entry in the dictionary: {"xxxxx_eyy": 8x3 numpy array}
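
A minimal sketch for exporting such dictionaries (assuming the per-model centroids.txt and centroidsEars.txt files described in the dataset section; the directory layout and output locations are illustrative and may need adjusting):

import glob
import os
import numpy as np

dataset_root = "heads_dataset/heads"   # placeholder for <dataset name>/<class name>
face_lms, ear_lms = {}, {}

for model_dir in sorted(glob.glob(os.path.join(dataset_root, "*", "models"))):
    name = os.path.basename(os.path.dirname(model_dir))   # model name, xxxxx_eyy
    face_lms[name] = np.loadtxt(os.path.join(model_dir, "centroids.txt")).reshape(8, 3)
    ear_file = os.path.join(model_dir, "centroidsEars.txt")
    if os.path.exists(ear_file):
        ear_lms[name] = np.loadtxt(ear_file).reshape(8, 3)

# Save the dictionaries as .npy files (adjust the paths to where your setup expects them).
np.save("eightCentroids.npy", face_lms)
np.save("gtEarCentroids.npy", ear_lms)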

Fitting i3DMM to Preprocessed Data

Please see the preprocessing section for preparing the data. Once the data is ready, please use the following command to fit i3DMM to the data.

To save as image:

python fit_i3DMM_to_mesh.py -e headModel -c latest -d data -s <split file> --imNM True

To save as a mesh:

python fit_i3DMM_to_mesh.py -e headModel -c latest -d data -s <split file> --imNM False

The test dataset can be downloaded with this link. Please extract it and move the 'heads' folder to the dataset folder.

Citation

Please cite our paper if you use any part of this repository.

@inproceedings {yenamandra2020i3dmm,
 author = {T Yenamandra and A Tewari and F Bernard and HP Seidel and M Elgharib and D Cremers and C Theobalt},
 title = {i3DMM: Deep Implicit 3D Morphable Model of Human Heads},
 booktitle = {Proceedings of the IEEE / CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 month = {June},
 year = {2021}
}