Application of the L2HMC algorithm to simulations in lattice QCD.


Overview

The L2HMC algorithm aims to improve upon HMC by optimizing a carefully chosen loss function designed to minimize autocorrelations within the Markov chain, thereby improving the efficiency of the sampler.
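For intuition, the loss can be thought of as rewarding a large, acceptance-weighted expected squared jump distance between consecutive states. Below is a minimal sketch of such a loss in TensorFlow; the function name and the `scale` parameter (playing the role of the λ factor in the paper) are illustrative and do not correspond to the repository's actual API.

    import tensorflow as tf

    def l2hmc_loss(x_init, x_prop, accept_prob, scale=0.1):
        """Sketch of an L2HMC-style loss (Levy et al., 2017).

        Rewards a large expected squared jump distance (ESJD), which in
        turn suppresses autocorrelations in the chain.
        """
        # acceptance-weighted squared jump distance, per chain
        esjd = accept_prob * tf.reduce_sum((x_prop - x_init) ** 2, axis=-1)
        esjd = tf.maximum(esjd, 1e-4)  # guard against division by zero
        # the reciprocal term dominates when moves are small, driving them larger
        return tf.reduce_mean(scale / esjd - esjd / scale)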

This work is based on the original implementation: brain-research/l2hmc/.

A detailed description of the L2HMC algorithm can be found in the paper:

Generalizing Hamiltonian Monte Carlo with Neural Networks

by Daniel Levy, Matthew D. Hoffman, and Jascha Sohl-Dickstein.

Broadly, given an analytically described target distribution, π(x), L2HMC provides a statistically exact sampler that:

  • Quickly converges to the target distribution (fast burn-in).
  • Quickly produces uncorrelated samples (fast mixing).
  • Is able to efficiently mix between energy levels.
  • Is capable of traversing low-density zones to mix between modes (often difficult for generic HMC).

L2HMC for LatticeQCD

Goal: Use L2HMC to efficiently generate gauge configurations for calculating observables in lattice QCD.

A detailed description of the (ongoing) work to apply this algorithm to simulations in lattice QCD (specifically, a 2D U(1) lattice gauge theory model) can be found in doc/main.pdf.

l2hmc-qcd poster

Organization

Dynamics / Network

The base class for the augmented L2HMC leapfrog integrator is implemented in the BaseDynamics class (a tf.keras.Model subclass).

GaugeDynamics is a subclass of BaseDynamics that adds the modifications needed for the 2D U(1) pure gauge theory.
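As a rough illustration of this subclassing pattern (and not the repository's actual BaseDynamics / GaugeDynamics API), a dynamics object can be written as a tf.keras.Model whose call method performs the leapfrog integration. The sketch below uses a plain HMC leapfrog without the learned scaling and translation functions; ToyDynamics, potential_fn, eps, and num_steps are all hypothetical names.

    import tensorflow as tf

    class ToyDynamics(tf.keras.Model):
        """Hypothetical dynamics object illustrating the subclassing pattern.

        Performs plain HMC leapfrog steps (no learned S/T/Q functions);
        it is NOT the repository's BaseDynamics/GaugeDynamics.
        """

        def __init__(self, potential_fn, eps=0.1, num_steps=10):
            super().__init__()
            self.potential_fn = potential_fn  # U(x); target density ~ exp(-U(x))
            self.eps = eps                    # leapfrog step size
            self.num_steps = num_steps        # leapfrog steps per trajectory

        def call(self, x):
            v = tf.random.normal(tf.shape(x))  # resample the momentum
            for _ in range(self.num_steps):
                v = v - 0.5 * self.eps * self._grad(x)  # half-step momentum
                x = x + self.eps * v                    # full-step position
                v = v - 0.5 * self.eps * self._grad(x)  # half-step momentum
            return x, v

        def _grad(self, x):
            with tf.GradientTape() as tape:
                tape.watch(x)
                u = self.potential_fn(x)
            return tape.gradient(u, x)

    # usage sketch: a standard Gaussian target
    dynamics = ToyDynamics(lambda x: 0.5 * tf.reduce_sum(x ** 2, axis=-1))
    x_out, v_out = dynamics(tf.random.normal((4, 2)))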

The network is defined in l2hmc-qcd/network/functional_net.py.

Network Architecture

An illustration of the leapfrog layer updating (x, v) --> (x', v') can be seen below.

leapfrog layer
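For reference, the generalized momentum half-step from the original paper rescales the momentum and the gradient with learned functions before translating. A minimal sketch of that update, with hypothetical s_fn, q_fn, t_fn standing in for the network outputs S_v, Q_v, T_v, might look like:

    import tensorflow as tf

    def v_half_update(x, v, grad_u, eps, s_fn, q_fn, t_fn):
        """Sketch of the generalized momentum half-step from Levy et al. (2017).

        s_fn, q_fn, t_fn are hypothetical stand-ins for the network outputs
        S_v, Q_v, T_v evaluated on the current state.
        """
        s, q, t = s_fn(x, grad_u), q_fn(x, grad_u), t_fn(x, grad_u)
        # rescale the momentum, then translate it using the rescaled gradient
        v_new = v * tf.exp(0.5 * eps * s) - 0.5 * eps * (grad_u * tf.exp(eps * q) + t)
        # log-determinant of the Jacobian, needed for the acceptance probability
        logdet = 0.5 * eps * tf.reduce_sum(s, axis=-1)
        return v_new, logdet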

Lattice

Lattice code can be found in lattice.py, specifically the GaugeLattice object, which provides the base structure on which our target distribution is defined.

Additionally, the GaugeLattice object implements a variety of methods for calculating physical observables such as the average plaquette, φₚ, and the topological charge, Q.
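As an illustration of how these observables are typically computed for a 2D U(1) theory (a sketch only, not the GaugeLattice implementation; the assumed link-array layout is hypothetical):

    import numpy as np

    def plaquette_angles(links):
        """Plaquette angles for a 2D U(1) lattice.

        Assumes `links` has shape (2, Nt, Nx): link angles in the time
        (mu=0) and space (mu=1) directions at each site. This layout is
        hypothetical and not necessarily the one used by GaugeLattice.
        """
        x0, x1 = links[0], links[1]
        # phi_P(n) = x0(n) + x1(n + e0) - x0(n + e1) - x1(n)
        return (x0 + np.roll(x1, -1, axis=0)
                - np.roll(x0, -1, axis=1) - x1)

    def observables(links):
        phi = plaquette_angles(links)
        avg_plaq = np.mean(np.cos(phi))           # average plaquette
        # project each plaquette angle into (-pi, pi] before summing
        proj = np.arctan2(np.sin(phi), np.cos(phi))
        topo_charge = np.sum(proj) / (2 * np.pi)  # topological charge Q
        return avg_plaq, topo_charge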

Training

The training loop is implemented in l2hmc-qcd/utils/training_utils.py.

To train the sampler on a 2D U(1) gauge model using the parameters specified in bin/train_configs.json:

$ python3 /path/to/l2hmc-qcd/l2hmc-qcd/train.py --json_file=/path/to/l2hmc-qcd/bin/train_configs.json

Alternatively, the sampler can be trained using the bin/train.sh script provided in bin/.

Features

  • Distributed training (via horovod): If horovod is installed, the model can be trained across multiple GPUs (or CPUs) by:

    #!/bin/bash
    
    TRAINER=/path/to/l2hmc-qcd/l2hmc-qcd/train.py
    JSON_FILE=/path/to/l2hmc-qcd/bin/train_configs.json
    
    # PROCS = number of parallel processes (typically one per GPU or CPU rank)
    horovodrun -np ${PROCS} python3 ${TRAINER} --json_file=${JSON_FILE}

Contact


Code author: Sam Foreman

Pull requests and issues should be directed to: saforem2

Citation

If you use this code or find this work interesting, please cite our work along with the original paper:

@misc{foreman2021deep,
      title={Deep Learning Hamiltonian Monte Carlo}, 
      author={Sam Foreman and Xiao-Yong Jin and James C. Osborn},
      year={2021},
      eprint={2105.03418},
      archivePrefix={arXiv},
      primaryClass={hep-lat}
}
@article{levy2017generalizing,
  title={Generalizing Hamiltonian Monte Carlo with Neural Networks},
  author={Levy, Daniel and Hoffman, Matthew D. and Sohl-Dickstein, Jascha},
  journal={arXiv preprint arXiv:1711.09268},
  year={2017}
}

Acknowledgement

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357. This work describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the work do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests - None.


Comments
  • Remove upper bound on python_requires

    (I'm moving between meetings so can iterate on this more later, so excuse the very brief Issue for now).

    At the moment the project has an upper bound on python_requires

    https://github.com/saforem2/l2hmc-qcd/blob/2eb6ee63cc0c53b187e6d716f4c12f418c8b8515/setup.py#L165

    Assuming that you're intending l2hmc to be a library and not an application, then I would highly recommend removing this for the reasons summarized in Henry's detailed blog post on the subject.

    Congrats on getting l2hmc up on PyPI though! :snake: :rocket:

    opened by matthewfeickert 2
  • Alpha

    Pull upstream alpha branch into main

    Major changes

    • new src/ hierarchical module organization
    • Contains skeleton implementation of 4D SU(3) lattice gauge model
    • Framework independent configuration
      • Unified configuration system simplifies logic, same configs used for both tensorflow and pytorch experiments
      • Plan to be able to specify which backend to use through config option
    • Unified (and framework independent) configurations between tensorflow and pytorch implementations

    Note: This is still very much a WIP. Many existing features still need to be re-implemented / updated into new code in src/.

    Todo

    • [ ] Write unit tests
    • [ ] Use simple configs for end-to-end workflow test + integrate into CI
    • [ ] dynamic learning rate scheduling
    • [ ] Test 4D SU(3) numpy code
    • [ ] Write tensorflow and pytorch implementations of LatticeSU3 objects
    • [ ] Improved / simplified ( / trainable?) annealing schedule
    • [ ] Distributed training support
      • [ ] horovod
      • [ ] DDP for pytorch implementation
      • [ ] DeepSpeed from Microsoft??
    • [ ] Testing / inference logic
    • [ ] Automatic checkpointing
    • [ ] Metric logging
      • [ ] Tensorboard?
      • [ ] Sacred?
      • [ ] build custom dashboard? plot.ly?
    • [ ] Setup packaging / distribution through pip
    • [ ] Resolve issue
    opened by saforem2 1
  • Alpha

    opened by saforem2 1
  • Rich

    General improvements, rewrote logging methods to use Rich for better formatting.

    • Adds a dynamic (trainable) step size eps for the x and v updates separately; this seems to increase the total energy towards the middle of the trajectory, but it remains unclear whether this corresponds to an improvement in the tunneling rate
    • Adds methods for calculating autocorrelations of the topological charge, as well as notebooks for generating the plots
    • Updates to the writeup in doc/main.pdf
    • Will likely be last changes to writeup before public release of official draft
    opened by saforem2 1
  • Dev

    • Updates to README

    • Ability to load network with new training instance

    • Updates to doc/, removes old sections related to debugging the bias in the plaquette

    opened by saforem2 1
  • Saveable model

    Complete rewrite of dynamics.xnet and dynamics.vnet models to use tf.keras.functional Models.

    Additional changes include:

    • Non-Compact Projection update for gauge fields
    • Ability to specify convolution structure to be prepended at beginning of gauge network
    opened by saforem2 1
  • Dev

    Removes models/gauge_model.py entirely.

    Instead, a base dynamics class is implemented in dynamics/dynamics.py, and an example subclass is provided in dynamics/gauge_dynamics.py.

    opened by saforem2 1
  • Split networks

    Major rewrite of existing codebase.

    This pull request updates everything to be compatible with tensorflow >= 2.2 and removes a bunch of redundant legacy code.

    opened by saforem2 1
  • Dev

    • Dynamics object is now compatible with tf >= 2.0
    • Running inference on trained model with tensorflow now creates identical graphs and summary files to numpy inference code
    • Inference with numpy now uses object oriented structure
    • Adds LaTeX + PDF documentation in doc/
    opened by saforem2 1
  • Cooley dev

    Adds new GaugeNetwork architecture as the default for training GaugeModel

    Additionally, replaces pickle with joblib for saving data as .z compressed files (as opposed to .pkl files).

    opened by saforem2 1
  • Testing

    Implemented nnehmc_loss calculation for an alternative loss function using the approach suggested in https://infoscience.epfl.ch/record/264887/files/robust_parameter_estimation.pdf.

    This modified loss function can be chosen (instead of the standard loss described in the original paper) by passing --use_nnehmc_loss as a command line argument.

    opened by saforem2 1
  • Packaging and PyPI distribution?

    As you've made a library and are using it as such:

    # snippet from toy_distributions.ipynb
    import os
    import sys
    
    # append parent directory to `sys.path`
    # to load from modules in `../l2hmc-qcd/`
    module_path = os.path.join('..')
    if module_path not in sys.path:
        sys.path.append(module_path)
    
    # Local imports
    from utils.attr_dict import AttrDict
    from utils.training_utils import train_dynamics
    from dynamics.config import DynamicsConfig
    from dynamics.base_dynamics import BaseDynamics
    from dynamics.generic_dynamics import GenericDynamics
    from network.config import LearningRateConfig
    from config import (State, NetWeights, MonteCarloStates,
                        BASE_DIR, BIN_DIR, TF_FLOAT)
    
    from utils.distributions import (plot_samples2D, contour_potential,
                                     two_moons_potential, sin_potential,
                                     sin_potential1, sin_potential2)
    

    do you have any plans and/or interest in packaging it as a Python library so it can either be pip installed from GitHub or be distributed on PyPI?

    opened by matthewfeickert 5