State-to-Distribution (STD) Model

Overview


In this repository we provide exemplary code on how to construct and evaluate a state-to-distribution (STD) model for a reactive atom-diatom collision system.

Requirements

  • Python 3.7
  • TensorFlow 2.4
  • scikit-learn 0.20

Setting up the environment

We recommend using Miniconda to create a virtual environment.

Once Miniconda is installed, you can create a virtual environment called StD from the provided .yml file with the following command:

conda env create --file StD.yml

The .yml file also specifies the versions of the required packages. Alternatively, a .txt file is included; if you use it instead, create the environment with:

conda create --file StD.txt 

To activate the virtual environment use the command:

conda activate StD

You are ready to run the code.
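
As a quick sanity check that the environment was created with the versions from the Requirements list, you can run the following minimal Python snippet (not part of the repository; the expected versions are taken from the list above):

import tensorflow as tf
import sklearn

print("TensorFlow:", tf.__version__)         # expected 2.4.x
print("scikit-learn:", sklearn.__version__)  # expected 0.20.x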

Predict product state distributions

For specific initial conditions

To predict product state distributions for fixed initial conditions from the test set (77 data sets), go to the evaluation_InitialCondition folder.

Do not remove the external_plotting directory.

python3 evaluate.py 

The evaluate.py file predicts product state distributions for all initial conditions within the test set and compares them with reference data obtained from quasi-classical trajectory (QCT) simulations.

Edit the code evaluation.py in the folder evaluation_InitialCondition to specify whether accuracy measures should be calculated by comparing the NN predictions with the QCT data only at the grid points where the NN places its predictions (flag "NN") or at all points where QCT data is available (flag "QCT"), using linear interpolation. Then run the code to obtain a file containing the desired accuracy measures, as well as a PDF with the corresponding plots. The evaluations are compared with the available QCT data located in QCT_Data/Initial_Condition_Data.
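
For illustration only, the comparison behind the two flags could look like the following sketch (hypothetical array and function names; the actual accuracy measures are computed by the repository code):

import numpy as np

def accuracy_measure(nn_grid, nn_dist, qct_grid, qct_dist, flag="NN"):
    """Sketch: root-mean-square error between NN and QCT product state distributions.

    flag == "NN":  compare only at the grid points where the NN places its predictions
                   (here, QCT data is assumed to be interpolated onto the NN grid).
    flag == "QCT": compare at all points where QCT data is available
                   (NN prediction linearly interpolated onto the QCT grid).
    """
    if flag == "NN":
        diff = nn_dist - np.interp(nn_grid, qct_grid, qct_dist)
    else:
        diff = np.interp(qct_grid, nn_grid, nn_dist) - qct_dist
    return np.sqrt(np.mean(diff ** 2))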

For thermal reactant state distributions

To predict product state distributions from thermal reactant state distributions, go to the evaluation_Temperature folder.

Edit the code evaluation.py in the folder evaluation_Temperature to specify which of the four studied cases you want to analyse (a selection sketch follows the list):

  • Ttrans = Trot = Tvib (indices_set1.txt)
  • Ttrans != Tvib = Trot (indices_set2.txt)
  • Ttrans = Tvib != Trot (indices_set3.txt)
  • Ttrans != Tvib != Trot (indices_set4.txt)

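A minimal sketch of how the case selection might look in your own analysis script (the file names come from the list above; the variable names are illustrative only):

import numpy as np

CASE = 1  # 1-4, corresponding to the four cases listed above
# each indices_set*.txt is assumed to hold the indices of the temperature
# combinations belonging to the chosen case
indices = np.loadtxt(f"indices_set{CASE}.txt", dtype=int)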

Then run the code with the following command to obtain a file containing the desired accuracy measures, as well as a PDF with the corresponding plots for three example temperatures.

Do not remove the external_plotting directory.

python3 evaluate.py

The evaluations are compared with the available QCT data in QCT_Data/Temp_Data.

The complete list of temperatures can be read from the file data_preprocessing/TEMP/tinput.dat.
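
Assuming tinput.dat is a plain-text file with one temperature entry per line (the actual file format should be checked), it can be loaded, for example, with:

import numpy as np

temperatures = np.loadtxt("data_preprocessing/TEMP/tinput.dat")
print(temperatures)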

Cite as:

Julian Arnold, Debasish Koner, Juan Carlos San Vicente, Narendra Singh, Raymond J. Bemish, and Markus Meuwly,

!*Complete name of paper, or do you want to cite the repository? Also, add an email or responsible contact.*