Dynamical Wasserstein Barycenters for Time Series Modeling

Overview

This repository contains the code for the Dynamical Wasserstein Barycenter model published at NeurIPS 2021.

To run the code and replicate the results reported in our paper, use the following commands.

# usage: DynamicalWassersteinBarycenters.py dataSet dataFile debugFolder interpModel [--ParamTest PARAMTEST] [--lambda LAM] [--s S]

# Sample run on MSR data                                         
>> python DynamicalWassersteinBarycenters.py MSR_Batch ../Data/MSR_Data/subj090_1.mat ../debug/MSR/subj001_1.mat Wass 

# Sample run for parameter test
>> python DynamicalWassersteinBarycenters.py MSR_Batch ../Data/MSR_Data/subj090_1.mat ../debug/ParamTest/subj001_1.mat Wass --ParamTest 1 --lambda 100 --s 1.0

The `interpModel` argument is either `Wass` for the Wasserstein barycentric model or `GMM` for the linear interpolation model.
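
Each run writes its results to the debug file given on the command line as a MATLAB .mat file. The variables stored in that file are not documented here, so the sketch below (not part of the original release) only loads the file with scipy and lists whatever it contains; the path is taken from the sample run above.

# Minimal sketch for inspecting a debug output file (illustrative only)
import scipy.io as sio

out = sio.loadmat("../debug/MSR/subj001_1.mat")   # path from the sample run above
for key, value in out.items():
    if not key.startswith("__"):                  # skip MATLAB header entries
        print(key, getattr(value, "shape", None))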

Simulated Data

The simulated data and experiment included in this supplement can be replicated using the following commands.

# Generate 2 and 3 state simulated data                                         
>> python GenerateOptimizationExperimentData.py
>> python GenerateOptimizationExperimentData_3K.py

# usage: OptimizationExperiment.py FileIn Mode File
# Sample run for optimization experiment
>> python OptimizationExperiment.py ../data/SimulatedOptimizationData_2K/dim_5_5.mat WB ../debug/SimulatedData/dim_5_5_out.mat 

The Mode is either `WB` for the Wasserstein-Bures geometry or `Euc` for the Euclidean geometry using a Cholesky decomposition parameterization of the covariance.
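
For intuition on the two geometries compared in this experiment, the sketch below is illustrative only (it is not code from OptimizationExperiment.py): it contrasts the Bures-Wasserstein distance between two covariance matrices with the Euclidean distance between their Cholesky factors.

# Illustrative comparison of the two geometries (not code from this repository)
import numpy as np
from scipy.linalg import sqrtm, cholesky

def bures_wasserstein(S1, S2):
    # Bures-Wasserstein distance between SPD matrices S1 and S2
    root = sqrtm(S1)
    cross = sqrtm(root @ S2 @ root)
    return np.sqrt(np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)).real

def euclidean_cholesky(S1, S2):
    # Euclidean distance between the lower-triangular Cholesky factors of S1 and S2
    return np.linalg.norm(cholesky(S1, lower=True) - cholesky(S2, lower=True))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); S1 = A @ A.T + 5 * np.eye(5)
B = rng.standard_normal((5, 5)); S2 = B @ B.T + 5 * np.eye(5)
print("Bures-Wasserstein:", bures_wasserstein(S1, S2))
print("Euclidean (Cholesky):", euclidean_cholesky(S1, S2))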

Requirements

_libgcc_mutex=0.1=conda_forge
_openmp_mutex=4.5=1_llvm
_pytorch_select=0.2=gpu_0
blas=2.17=openblas
ca-certificates=2020.12.5=ha878542_0
certifi=2020.12.5=py38h578d9bd_1
cffi=1.14.4=py38h261ae71_0
cudatoolkit=8.0=3
cudnn=7.1.3=cuda8.0_0
cycler=0.10.0=py_2
freetype=2.10.4=h7ca028e_0
future=0.18.2=py38h578d9bd_3
immutables=0.15=py38h497a2fe_0
intel-openmp=2020.2=254
joblib=1.0.0=pyhd8ed1ab_0
jpeg=9d=h36c2ea0_0
kiwisolver=1.3.1=py38h82cb98a_0
lcms2=2.11=hcbb858e_1
ld_impl_linux-64=2.33.1=h53a641e_7
libblas=3.8.0=17_openblas
libcblas=3.8.0=17_openblas
libedit=3.1.20191231=h14c3975_1
libffi=3.3=he6710b0_2
libgcc-ng=9.3.0=h5dbcf3e_17
libgfortran-ng=7.3.0=hdf63c60_0
libgomp=9.3.0=h5dbcf3e_17
liblapack=3.8.0=17_openblas
liblapacke=3.8.0=17_openblas
libopenblas=0.3.10=pthreads_hb3c22a3_4
libpng=1.6.37=h21135ba_2
libstdcxx-ng=9.3.0=h6de172a_18
libtiff=4.1.0=h4f3a223_6
libwebp-base=1.1.0=h36c2ea0_3
llvm-openmp=11.0.0=hfc4b9b4_1
lz4-c=1.9.2=he1b5a44_3
matplotlib-base=3.3.3=py38h5c7f4ab_0
mkl=2020.4=h726a3e6_304
mkl-service=2.3.0=py38he904b0f_0
mkl_fft=1.3.0=py38h5c078b8_1
mkl_random=1.2.0=py38hc5bc63f_1
ncurses=6.2=he6710b0_1
ninja=1.10.2=py38hff7bd54_0
numpy=1.19.5=py38h18fd61f_1
numpy-base=1.18.5=py38h2f8d375_0
olefile=0.46=pyh9f0ad1d_1
openssl=1.1.1k=h7f98852_0
pillow=8.1.0=py38h357d4e7_1
pip=20.3.3=py38h06a4308_0
pot=0.7.0=py38h950e882_0
pycparser=2.20=py_2
pyparsing=2.4.7=pyh9f0ad1d_0
python=3.8.5=h7579374_1
python-dateutil=2.8.1=py_0
python_abi=3.8=1_cp38
pytorch=1.7.1=cpu_py38h36eccb8_1
readline=8.0=h7b6447c_0
scikit-learn=0.24.1=py38h658cfdd_0
scipy=1.5.2=py38h8c5af15_0
setuptools=51.1.2=py38h06a4308_4
six=1.15.0=py38h06a4308_0
sqlite=3.33.0=h62c20be_0
threadpoolctl=2.1.0=pyh5ca1d4c_0
tk=8.6.10=hbc83047_0
tornado=6.1=py38h497a2fe_1
wheel=0.36.2=pyhd3eb1b0_0
xz=5.2.5=h7b6447c_0
zlib=1.2.11=h7b6447c_3
zstd=1.4.5=h6597ccf_2
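
Assuming the package list above is saved to a plain-text file (the file name requirements.txt and the environment name dwb below are placeholders), an equivalent conda environment can be created as follows.

# Recreate the conda environment (save the package list above to a file first)
>> conda create --name dwb --file requirements.txt
>> conda activate dwb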