Code for PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Relighting and Material Editing


Quick start

  • Create conda environment
conda env create -f environment.yml
conda activate PhySG
  • Download the example data from Google Drive.

  • Optimize for geometry and material given a set of posed images and object segmentation masks

cd code
python training/exp_runner.py --conf confs_sg/default.conf \
                              --data_split_dir ../example_data/kitty/train \
                              --expname kitty \
                              --nepoch 2000 --max_niter 200001 \
                              --gamma 1.0
  • Render novel views, perform relighting, extract a mesh, etc.
cd code
# use same lighting as training
python evaluation/eval.py --conf confs_sg/default.conf \
                              --data_split_dir ../example_data/kitty/test \
                              --expname kitty \
                              --gamma 1.0 --resolution 256 --save_exr
# plug in new lighting                              
python evaluation/eval.py --conf confs_sg/default.conf \
                              --data_split_dir ../example_data/kitty/test \
                              --expname kitty \
                              --gamma 1.0 --resolution 256 --save_exr \
                              --light_sg ./envmaps/envmap3_sg_fit/tmp_lgtSGs_100.npy
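
The .npy file passed via --light_sg is produced by the spherical Gaussian fitting script (see the pointers below). A quick way to inspect it, assuming each row packs one lobe as [lobe axis (3), sharpness (1), RGB amplitude (3)]; please verify this layout against the provided files before relying on it:

import numpy as np

lgt_sgs = np.load("./envmaps/envmap3_sg_fit/tmp_lgtSGs_100.npy")
print(lgt_sgs.shape)                                  # expected (num_lobes, 7)
axes, sharpness, amplitudes = lgt_sgs[:, :3], lgt_sgs[:, 3:4], lgt_sgs[:, 4:]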

Tip: to view EXR images, you can use the tev HDR viewer.
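
If you prefer a quick programmatic preview instead of tev, the snippet below tonemaps an EXR to PNG. It is only a sketch: the file path is a placeholder and it assumes an OpenCV build with OpenEXR support.

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"   # must be set before importing cv2
import cv2
import numpy as np

hdr = cv2.imread("path/to/render.exr", cv2.IMREAD_UNCHANGED)   # placeholder path
ldr = np.clip(hdr, 0.0, 1.0) ** (1.0 / 2.2)                    # simple gamma tonemap
cv2.imwrite("preview.png", (ldr * 255.0).astype(np.uint8))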

Some important pointers

  • code/model/sg_render.py: core of the appearance modelling; it evaluates the rendering equation using spherical Gaussians (see the sketch after this list).
    • code/model/sg_envmap_convention.png: coordinate system convention for the envmap.
  • code/model/sg_envmap_material.py: optimizable parameters for the material part.
  • code/model/implicit_differentiable_renderer.py: optimizable parameters for the geometry part; it also contains our forward rendering code.
  • code/training/idr_train.py: SGD optimization of unknown geometry and material.
  • code/evaluation/eval.py: novel view rendering, relighting, mesh extraction, etc.
  • code/envmaps/fit_envmap_with_sg.py: represents an envmap with a mixture of spherical Gaussians. We provide three envmaps fitted with this script in the code/envmaps folder.
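
For orientation, here is a minimal sketch (simplified notation, not the exact code in this repo) of the two ideas behind sg_render.py and fit_envmap_with_sg.py: evaluating a spherical Gaussian mixture G_k(v) = a_k * exp(lambda_k * (dot(v, xi_k) - 1)), and fitting such a mixture to a target envmap by gradient descent.

import torch
import torch.nn.functional as F

def eval_sg_mixture(dirs, lobe_axes, sharpness, amplitudes):
    # dirs: (N, 3) unit query directions; lobe_axes: (K, 3) lobe directions xi_k;
    # sharpness: (K,) lobe sharpness lambda_k; amplitudes: (K, 3) RGB amplitudes a_k.
    cos = dirs @ F.normalize(lobe_axes, dim=-1).T            # (N, K) dot(v, xi_k)
    weights = torch.exp(sharpness[None, :] * (cos - 1.0))    # (N, K)
    return weights @ amplitudes                              # (N, 3) radiance

def fit_envmap(target_dirs, target_rgb, num_sgs=128, steps=2000, lr=1e-2):
    # target_dirs (N, 3) and target_rgb (N, 3) would come from an unwrapped envmap.
    lobe_axes = torch.randn(num_sgs, 3, requires_grad=True)
    sharpness = torch.full((num_sgs,), 10.0, requires_grad=True)
    amplitudes = torch.rand(num_sgs, 3, requires_grad=True)
    opt = torch.optim.Adam([lobe_axes, sharpness, amplitudes], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = eval_sg_mixture(target_dirs, lobe_axes, sharpness.abs(), amplitudes.abs())
        loss = F.mse_loss(pred, target_rgb)
        loss.backward()
        opt.step()
    return lobe_axes, sharpness, amplitudes

The actual scripts handle additional details; the sketch above only conveys the spherical Gaussian parameterization.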

Prepare your own data

  • Organize the images and masks in the same way as the provided example data.
  • For camera parameters, we follow the same OpenCV convention as NeRF++ (illustrated below).
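
As an illustration only, a camera in the OpenCV convention (x right, y down, z forward) could be serialized as below; copy the exact file names and dictionary keys from the provided example data, since the ones used here are assumptions.

import json
import numpy as np

def make_cam_entry(fx, fy, cx, cy, world_to_cam, width, height):
    # 4x4 intrinsics and 4x4 world-to-camera matrix, OpenCV convention.
    K = np.array([[fx, 0., cx, 0.],
                  [0., fy, cy, 0.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])
    return {"K": K.flatten().tolist(),                                         # assumed key
            "W2C": np.asarray(world_to_cam, dtype=float).flatten().tolist(),   # assumed key
            "img_size": [width, height]}

cam_dict = {"image_000.png": make_cam_entry(800., 800., 256., 256., np.eye(4), 512, 512)}
with open("cam_dict_norm.json", "w") as f:    # file name is an assumption
    json.dump(cam_dict, f, indent=2)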

Acknowledgements: this codebase borrows a lot from the awesome IDR work; we thank the authors for releasing their code.
