A python package simulating the quasi-2D pseudospin-1/2 Gross-Pitaevskii equation with NVIDIA GPU acceleration.

Overview

Introduction

spinor-gpe is a high-level, object-oriented Python package for numerically solving the quasi-2D, pseudospinor (two-component) Gross-Pitaevskii equation (GPE), both for ground-state solutions and for real-time dynamics. This project grew out of a desire to make high-performance simulations of the GPE more accessible to entering researchers.

While this package is primarily built on NumPy, the main computational heavy lifting is performed using PyTorch, a deep neural network library commonly used in machine learning applications. PyTorch has a NumPy-like interface, but its backend can run either on a conventional processor or on a CUDA-enabled NVIDIA® graphics card. Access to a CUDA device provides significant hardware acceleration of the simulations.
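
As a rough illustration (not spinor-gpe code; the array size is arbitrary), the same PyTorch operation runs on either backend simply by changing the device string:

>>> import torch
>>> device = 'cuda' if torch.cuda.is_available() else 'cpu'
>>> psi = torch.rand(256, 256, device=device)  # toy 2D grid standing in for a wavefunction
>>> spectrum = torch.fft.fft2(psi)             # FFT-heavy steps benefit most from the GPU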

This package has been tested on Windows, Mac, and Linux systems.

View the documentation on ReadTheDocs

Installation

Dependencies

Primary packages:

  1. PyTorch >= 1.8.0
  2. cudatoolkit >= 11.1
  3. NumPy

Other packages:

  1. matplotlib (visualizing results)
  2. tqdm (progress messages)
  3. scikit-image (matrix signal processing)
  4. ffmpeg = 4.3.1 (animation generation)

Installing Dependencies

The dependencies for spinor-gpe can be installed directly into a new conda virtual environment named spinor using the environment.yml file included with the package:

conda env create --file environment.yml

This installation may take a while.

Note

The version of CUDA used in this package does not support macOS. Users on these computers may still install PyTorch and run the examples on their CPU. To install correctly on macOS, remove the - cudatoolkit=11.1 line from the environment.yml file. After installation, you will need to modify the example code to run on the cpu device instead of the cuda device.
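
One simple pattern (a suggestion, not something the package requires) is to choose the device string based on what is available, so the same script runs on both CUDA machines and CPU-only machines such as Macs:

>>> import torch
>>> DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'

The resulting DEVICE string can then be passed to the propagation methods shown in the Basic Operation section below.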

The above dependencies can also be installed manually using conda into a virtual environment:

conda activate <new_virt_env_name>
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
conda install numpy matplotlib tqdm scikit-image ffmpeg spyder

Note

For more information on installing PyTorch, see its installation instructions page.

To verify that PyTorch was installed correctly, you should be able to import it:

>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.2757, 0.3957, 0.9074],
        [0.6304, 0.1279, 0.7565],
        [0.0946, 0.7667, 0.2934],
        [0.9395, 0.4782, 0.9530],
        [0.2400, 0.0020, 0.9569]])

Also, if you have an NVIDIA GPU, you can test that it is available for GPU computing:

>>> torch.cuda.is_available()
True

CUDA Installation

CUDA is the API that interfaces with the computing resources on NVIDIA graphics cards, and it can be accessed through the PyTorch package. If your computer has an NVIDIA graphics card, start by verifying that it is CUDA-compatible. This page lists the compute capability of many NVIDIA devices. (Note: yours may still be CUDA-compatible even if it is not listed here.)

Given that your graphics card can run CUDA, the following are the steps to install CUDA on a Windows computer:

  1. Install the NVIDIA CUDA Toolkit. Go to the CUDA download page for the most recent version. Select the operating system options and installer type. Download the installer and install it via the wizard on the screen. This may take a while. For reference, here is the Windows CUDA Toolkit installation guide.

    To test that CUDA is installed, run which nvcc; if it is installed correctly, this will return the installation path. Also run nvcc --version to verify that the CUDA version matches the PyTorch CUDA toolkit version (>=11.1). A quick check from the Python side is sketched after this list.

  2. Download the correct drivers for your NVIDIA device. Once the driver is installed, you will have the NVIDIA Control Panel installed on your computer.
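
After the driver and toolkit are installed, a quick sanity check from the Python side (assuming PyTorch was installed with CUDA support, as above) confirms the toolkit version PyTorch was built against and that the GPU is visible:

>>> import torch
>>> torch.version.cuda             # CUDA version PyTorch was built with; should be >= '11.1'
>>> torch.cuda.is_available()      # True if the driver and toolkit work together
>>> torch.cuda.get_device_name(0)  # name of the detected graphics card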

Getting Started

  1. Clone the repository.
  2. Navigate to the spinor_gpe/examples/ directory, and start to experiment with the examples there.

Basic Operation

This package has a simple, object-oriented interface for imaginary- and real-time propagations of the pseudospinor-GPE. While there are other operations and features to this package, all simulations will have the following basic structure:

1. Setup: Data path and PSpinor object

>>> import pspinor as spin
>>> DATA_PATH = '<project_name>/Trial_###'
>>> ps = spin.PSpinor(DATA_PATH)

The program will create a new directory DATA_PATH, in which the data and results from this simulation trial will be saved. If DATA_PATH is a relative path, as shown above, then the trial data will be located in the /data/ folder. When working with multiple simulation projects, it can be helpful to specify a <project_name> directory; furthermore, the form Trial_### is convenient, but not strictly required.

2. Run: Begin Propagation

The example below demonstrates imaginary-time propagation. The method PSpinor.imaginary performs the propagation loop and returns a PropResult object. This object contains the results, including the final wavefunctions and populations, as well as analysis and plotting methods (described below).

>>> DT = 1/50
>>> N_STEPS = 1000
>>> DEVICE = 'cuda'
>>> res = ps.imaginary(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)

For real-time propagation, use the method PSpinor.real.
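
For example, assuming PSpinor.real accepts the same arguments as PSpinor.imaginary (an assumption here; consult the documentation for the exact signature), a real-time run might look like:

>>> res = ps.real(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)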

3. Analyze: Plot the results

PropResult provides several methods for viewing and understanding the final results. The code block below demonstrates several of them:

>>> res.plot_spins()  # Plots the spin-dependent densities and phases.
>>> res.plot_total()  # Plots the total densities and phases.
>>> res.plot_pops()   # Plots the spin populations throughout the propagation.
>>> res.make_movie()  # Generates a movie from the sampled wavefunctions.

Note that PSpinor also exposes methods to plot the spin and total densities. These can be used independently of PropResult:

>>> ps.plot_spins()

4. Repeat

You will likely want to repeat or chain together different segments of this structure. Demonstrations of this are shown in the Examples gallery.
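
As a hypothetical sketch of such a chain (assuming PSpinor.real mirrors PSpinor.imaginary and that the PSpinor object carries the propagated wavefunction from one segment into the next), a ground state found in imaginary time could be fed into a real-time evolution:

>>> res_imag = ps.imaginary(1/50, 1000, DEVICE, is_sampling=True, n_samples=50)
>>> res_real = ps.real(1/5000, 2000, DEVICE, is_sampling=True, n_samples=100)
>>> res_real.plot_pops()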
