The code for replicating the experiments from the LFI in SSMs with Unknown Dynamics paper.

Overview

Likelihood-Free Inference in State-Space Models with Unknown Dynamics

This package contains the code required to run the experiments in the paper. The simulators used for the State-Space Models in the experiments are implemented as Engine for Likelihood-free Inference (ELFI) models.

Installation

We recommend using an Anaconda environment. To create and activate the conda environment with all dependencies installed, run:

conda create -c conda-forge --name env --file lfi-requirements.txt
conda activate env
pip install -e .
pip install sbi blitz-bayesian-pytorch stable_baselines3

For the GP-SSM and PR-SSM methods, we recommend creating a separate environment. In that environment, install TensorFlow, then clone the 'custom_multioutput' branch of GPflow from https://github.com/ialong/GPflow and install it. Once GPflow is installed, clone GPt from https://github.com/ialong/GPt and execute 'experiments/run_gpssms.py'; this runs 30 repetitions of the experiments with tractable likelihoods.
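A command sketch of these steps is given below (the environment name, the editable installs, and the TensorFlow version choice are assumptions, not part of the original instructions; adjust them to your setup and to the requirements of the GPflow branch):

# separate environment for the GP-SSM / PR-SSM baselines (name is illustrative)
conda create --name gpssm-env python pip
conda activate gpssm-env
pip install tensorflow
# GPflow, 'custom_multioutput' branch, as referenced above
git clone -b custom_multioutput https://github.com/ialong/GPflow.git
pip install -e GPflow
# GPt, then the tractable-likelihood experiments
git clone https://github.com/ialong/GPt.git
pip install -e GPt
python3 experiments/run_gpssms.py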

Running the experiments

The experiment scripts can be found in the 'experiments/' folder. To run the experiments on one of the considered SSMs, run the 'run_experiment.py' script with the following arguments (options in parentheses): --sim ('lgssm', 'toy', 'sv', 'umap', 'gaze'), --meth ('bnn', 'qehvi', 'blr', 'SNPE', 'SNLE', 'SNRE'), --seed (any seed number), --budget (the simulation budget available for each new state), and --tasks (the number of tasks considered, i.e. the moving-window size, for the LMC-BNN, LMC-qEHVI and LMC-BLR methods). For instance:

python3 experiments/run_experiment.py --sim=lgssm --meth=bolfi --seed=0 --budget=2 --tasks=2

The results will be saved in the corresponding folders 'experiments/[sim]/[meth]-w[tasks]-s[budget]/'. To build the plots and output the results, run the 'collect_results.py' script with the following arguments: --type ('inf' to evaluate state-inference quality, or 'traj' to evaluate the generated trajectories) and --tasks (the number of tasks used by the methods). For example:

python3 experiments/collect_results.py --type=inf --tasks=2

The plots with experiment results will be stored in 'experiments/plots'.

Implementing custom simulators

The simulators for all experiments can be found in 'elfi/examples'. The example implementations used in the paper are 'gaze_selection.py', 'umap_tasks.py', 'LGSSM.py' (LG), 'dynamic_toy_model.py' (NN), and 'stochastic_volatility.py' (SV). To create a new SSM, implement a new class that inherits from elfi.DynamicProcess and provides a custom generating function for the observations together with the create_model() and update_dynamic() methods.
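As a hedged illustration only (elfi.DynamicProcess comes from this package's ELFI fork, and the constructor, attribute names, and method signatures assumed here may differ from the actual interface; see the example files above for the authoritative usage), a new SSM could be sketched along these lines:

import numpy as np
import elfi

class RandomWalkSSM(elfi.DynamicProcess):
    # Illustrative SSM: a latent random walk observed through Gaussian noise.
    # NOTE: the attribute names self.state and self.observed are assumptions
    # about DynamicProcess, used here only to show where the pieces go.

    @staticmethod
    def simulator(noise_std, state, batch_size=1, random_state=None):
        # Custom generating function for the observations.
        random_state = random_state or np.random
        return state + noise_std * random_state.randn(batch_size)

    def create_model(self):
        # Build the ELFI model for the current state.
        model = elfi.ElfiModel()
        noise_std = elfi.Prior('uniform', 0.0, 1.0, model=model, name='noise_std')
        elfi.Simulator(self.simulator, noise_std, self.state,
                       observed=self.observed, model=model, name='sim')
        return model

    def update_dynamic(self):
        # Advance the latent state one step (placeholder random-walk dynamics).
        self.state = self.state + np.random.randn()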

The code for all methods can be found in 'elfi/methods/dynamic_parameter_inference.py' and 'elfi/methods/bo/mogp.py'.

Citation


Owner
Alex Aushev