Official implementation of particle-based models (GNS and DPI-Net) on the Physion dataset.

Overview

Physion: Evaluating Physical Prediction from Vision in Humans and Machines [paper]

Daniel M. Bear, Elias Wang, Damian Mrowca, Felix J. Binder, Hsiao-Yu Fish Tung, R.T. Pramod, Cameron Holdaway, Sirui Tao, Kevin Smith, Fan-Yun Sun, Li Fei-Fei, Nancy Kanwisher, Joshua B. Tenenbaum, Daniel L.K. Yamins, Judith E. Fan

This is the official implementation of particle-based models (GNS and DPI-Net) on the Physion dataset. The code builds on the original implementation of DPI-Net (https://github.com/YunzhuLi/DPI-Net).

Contact: [email protected] (Fish Tung)

Papers of GNS and DPI-Net:

Learning to Simulate Complex Physics with Graph Networks [paper]

Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, Peter W. Battaglia

Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids [website] [paper]

Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B. Tenenbaum, Antonio Torralba

Demo

Rollout from our learned model (left is ground truth, right is prediction)

Scenarios shown: Dominoes, Roll, Contain, Drape

Installation

Clone this repo:

git clone https://github.com/htung0101/DPI-Net-p.git
cd DPI-Net-p
git submodule update --init --recursive

Install Dependencies if using Conda

For Conda users, we provide an installation script:

bash ./scripts/conda_deps.sh
pip install pyyaml

To use tensorboard for training visualization, install:

pip install tensorboardX
pip install tensorboard

Install binvox

We use binvox to convert object meshes into particles. To use binvox, please download it from https://www.patrickmin.com/binvox/, put it under ./bin, and include it in your path with

export PATH=$PATH:$PWD/bin

You might need to run chmod 777 binvox to make the file executable.
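To sanity-check the installation, you can voxelize any mesh; for example (model.obj is a placeholder for one of your own mesh files, and -d sets the voxel grid resolution):

binvox -d 64 model.obj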

Set up your own data path

Open paths.yaml and write your own paths there. You can set up different paths for different machines under different user names.
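For reference, a minimal sketch of what paths.yaml could look like (the user/machine nesting and exact keys are assumptions based on how dpi_data_dir and out_dir are referenced below; match the template shipped with the repo):

your_username:
    your_machine_name:
        dpi_data_dir: /path/to/particle_data   # where preprocessed particle scenes go
        out_dir: /path/to/outputs              # where model checkpoints and logs go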

Preprocessing the Physion dataset

1) We need to convert the mesh scenes into particle scenes. This command generates a separate folder (dpi_data_dir, specified in paths.yaml) that holds the data for the particle-based models:

bash run_preprocessing_tdw_cheap.sh [SCENARIO_NAME] [MODE]

e.g., bash run_preprocessing_tdw_cheap.sh Dominoes train

SCENARIO_NAME can be one of the following: Dominoes, Collide, Support, Link, Contain, Roll, Drop, or Drape. MODE can be either train or test.

You can visualize the original videos and the generated particle scenes with

python preprocessing_tdw_cheap.py --scenario Dominoes --mode "train" --visualization 1

Videos will be generated under the folder vispy.

2) Then generate the train.txt and valid.txt files that indicate the trials you want to use for training and validation.

python create_train_valid.py

You can also design your own split; just put the trial names into one txt file.
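For example, here is a minimal Python sketch for writing a custom split (assuming the split files simply list one trial name per line; the trial names below are placeholders, not real trials):

# write_split.py -- hypothetical helper, not part of this repo
# Assumes split files list one trial name per line.
trials = ["trial_0000", "trial_0001", "trial_0002"]  # placeholder trial names
with open("my_split.txt", "w") as f:
    f.write("\n".join(trials) + "\n")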

3) For evaluation on the red-hits-yellow prediction task, we can get the binary red-hits-yellow label txt files from the test dataset with

bash run_get_label_txt.sh [SCENARIO_NAME] test

This will generate a folder called labels under your output folder dpi_data_dir. In that folder, each scenario has a corresponding label file called [SCENARIO_NAME].txt.
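For illustration, each line in such a file might pair a trial name with a binary outcome (this format and the trial names are assumptions; inspect the generated files to confirm):

# hypothetical contents of labels/Dominoes.txt
# <trial_name> <red_hits_yellow>
trial_0000 1
trial_0001 0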

Training

OK, now we are ready to train the models. You can use the following commands to train from scratch.

  • Train GNS
    bash scripts/train_gns.sh [SCENARIO_NAME] [GPU_ID]

SCENARIO_NAME can be one of the following: Dominoes, Collide, Support, Link, Contain, Roll, Drop, or Drape.

  • Train DPI
    bash scripts/train_dpi.sh [SCENARIO_NAME] [GPU_ID]

Our implementation differs from the original DPI paper in two ways: (1) our model takes relative positions as inputs, as opposed to absolute positions, and (2) our model is trained with injected noise. Both features are suggested in the GNS paper, and we found them to be critical for the models to generalize well to unseen scenes.
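A minimal numpy sketch of these two ideas (illustrative only; the function, variable names, and noise scale are assumptions, not the repository's actual code):

import numpy as np

def build_edge_features(positions, senders, receivers, noise_std=3e-4, train=True):
    # (2) During training, corrupt absolute particle positions with Gaussian
    # noise so the model learns to correct accumulated rollout error.
    if train:
        positions = positions + np.random.normal(0.0, noise_std, positions.shape)
    # (1) Edge features are relative positions between connected particles,
    # so the model never conditions on absolute coordinates.
    return positions[receivers] - positions[senders]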

  • Train with multiple scenarios

You can also train on more than one scenario by adding the additional scenarios to the dataf argument:

 python train.py  --env TDWdominoes --model_name GNS --log_per_iter 1000 --training_fpt 3 --ckp_per_iter 5000 --floor_cheat 1  --dataf "Dominoes, Collide, Support, Link, Roll, Drop, Contain, Drape" --outf "all_gns"
  • Visualize your training progress

Models and model logs are saved under [out_dir]/dump/dump_TDWdominoes. You can visualize the training progress using tensorboard:

tensorboard --logdir MODEL_NAME/log

Evaluation

  • Evaluate GNS
bash scripts/eval_gns.sh [TRAIN_SCENARIO_NAME] [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]

You can find the prediction txt file under eval/eval_TDWdominoes/[MODEL_NAME], e.g., test-Drape.txt, which contains the results of testing the model on the Drape scenario. You can visualize the results with the additional argument --vis 1. A sketch for scoring such a prediction file against the labels from the preprocessing step is shown below.
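This scoring sketch assumes both the prediction and label files list one trial name and binary label per line (an assumption; check the generated files for the exact format):

# score_predictions.py -- hypothetical snippet; both file formats are assumptions
def read_binary_file(path):
    labels = {}
    with open(path) as f:
        for line in f:
            if line.strip():
                trial, value = line.split()
                labels[trial] = int(value)
    return labels

preds = read_binary_file("eval/eval_TDWdominoes/MODEL_NAME/test-Drape.txt")
gts = read_binary_file("labels/Drape.txt")
accuracy = sum(preds[t] == gts[t] for t in preds) / len(preds)
print(f"red-hits-yellow accuracy: {accuracy:.3f}")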

  • Evaluate GNS-Ransac
bash scripts/eval_gns_ransac.sh [TRAIN_SCENARIO_NAME] [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
  • Evaluate DPI
bash scripts/eval_dpi.sh [TRAIN_SCENARIO_NAME] [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
  • Evaluate models trained on multiple scenarios
    Here we provide examples of evaluating models trained on all scenarios.
bash eval_all_gns.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
bash eval_all_dpi.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
bash eval_all_gns_ransac.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]
  • Visualize trained models
    Here we provide an example of visualizing the rollout results from trained models.
bash vis_gns.sh [EPOCH] [ITER] [Test SCENARIO_NAME] [GPU_ID]

You can find the visualizations under eval/eval_TDWdominoes/[MODEL_NAME]/test-[Scenario]. You should see one gif of the original RGB video and another gif showing a side-by-side comparison of the ground-truth particle scenes and the predicted particle scenes.

Citing Physion

If you find this codebase useful in your research, please consider citing:

@inproceedings{bear2021physion,
    title = {Physion: Evaluating Physical Prediction from Vision in Humans and Machines},
    author = {Daniel M. Bear and
              Elias Wang and
              Damian Mrowca and
              Felix J. Binder and
              Hsiao{-}Yu Fish Tung and
              R. T. Pramod and
              Cameron Holdaway and
              Sirui Tao and
              Kevin A. Smith and
              Fan{-}Yun Sun and
              Li Fei{-}Fei and
              Nancy Kanwisher and
              Joshua B. Tenenbaum and
              Daniel L. K. Yamins and
              Judith E. Fan},
    url = {https://arxiv.org/abs/2106.08261},
    archivePrefix = {arXiv},
    eprint = {2106.08261},
    year = {2021}
}