Pytorch bindings for Fortran

Overview

Pytorch Fortran bindings

The goal of this code is to provide Fortran HPC codes with a simple way to use the Pytorch deep learning framework. We want Fortran developers to take advantage of the rich and optimized Torch ecosystem from within their existing codes. The code is very much a work in progress right now, and any feedback or bug reports are welcome.

Features

  • Define the model conveniently in Python, save it, and open it in Fortran
  • Pass Fortran arrays into the model, run inference and get output as a native Fortran array
  • Train the model from inside Fortran (limited support for now) and save it
  • Run the model on the CPU or the GPU with the data also coming from the CPU or GPU
  • Focus on achieving negligible performance overhead

Building

To assist with the build, we provide the Docker and HPCCM recipe for a container with all the necessary dependencies installed, see container

You'll need to mount a folder with the cloned repository into the container, cd into this folder from the running container, and execute

./make_all.sh

By default, we build the code with the NVIDIA HPC SDK Fortran compiler without GPU support. To enable GPU support, change the OPENACC parameter in make_all.sh to 1. Changing the compiler is possible by modifying the CMAKE_Fortran_COMPILER CMake flag. Note that we are still working on testing different compilers, so issues are possible.

Examples

The examples folder contains two samples: inference with a pre-trained ResNet, and end-to-end training of a simple NN predicting a polynomial. Before running the examples, you'll need to execute the setup-model.py script in the corresponding example folder, which defines the model and stores it to disk. With the saved models, run the following:

cd /path/to/repository/
install/bin/resnet_forward examples/resnet_forward/traced_model.pt
install/bin/polynomial     examples/polynomial/traced_model.pt     examples/polynomial/your_new_trained_model.pt

API

We are working on documenting the API; for now, please refer to the examples.
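
As a quick illustration, inference from Fortran looks roughly like the sketch below. This is a hedged example, not documented API: the torch_ftn module name, the input shape, and the flag value 0 (CPU processing) are assumptions based on the bundled examples.

    program quick_inference
        use torch_ftn                        ! module name assumed from the examples
        use iso_fortran_env, only: real32
        implicit none

        type(torch_module)      :: model
        type(torch_tensor_wrap) :: inputs
        type(torch_tensor)      :: out_tensor
        real(real32)            :: image(224, 224, 3, 1)
        real(real32), pointer   :: probabilities(:, :)

        image = 0.0
        call inputs%add_array(image)              ! wrap the Fortran array, no copy
        call model%load('traced_model.pt', 0)     ! 0 = CPU assumed; device flag for GPU
        call model%forward(inputs, out_tensor, 0)
        call out_tensor%to_array(probabilities)   ! native Fortran view of the output
    end program quick_inference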


Comments
  • Citing pytorch-fortran

    Hi Dmitry,

    I am a Ph.D. student at UC Santa Cruz and Los Alamos National Laboratory. I specialize in ML-based turbulence modeling within stellar explosions. This repo has been incredibly helpful for the last chapter of my thesis, which involved the integration of PyTorch models in a legacy Fortran code for 1D supernovae (pikarpov-lanl/COLLAPSO1D). We are writing a paper for Astrophysical Journal on this subject, and I would like to give you proper credit for the pytorch-fortran repo. Do you have any preferences on how to cite your work?

    In addition, I wrote an interface to integrate your ML wrapper into any legacy F90 code, which is pretty generalizable. As such, I think it would be highly beneficial for the astrophysical community if this pipeline were published separately, e.g., in the Journal of Open Source Software. Please let me know your thoughts and whether you would want to collaborate. Feel free to send me an email ([email protected]).

    opened by pikarpov-LANL 1
  • implicit none missing in example

    https://github.com/alexeedm/pytorch-fortran/blob/cd4334a0f1bfbd87402a2dd1fa43f41c2a1cd150/examples/resnet_forward/resnet_forward.f90#L22

    The example program uses implicit typing, which might lead to surprises if someone tries to extend the example without noticing.

    opened by ivan-pi 1
  • Some questions about the future plans of pytorch-fortran

    Hi @alexeedm, I am LuChen, a postgraduate student majoring in Software Engineering at Tongji University, China, and my current research interests are around Climate AI. Since I can't find your contact information, I am creating an issue here.

    As you may imagine, we also encountered the lack of an AI ecosystem during our research. Therefore, over the past few months I developed a tool, Fortran-Torch-Adapter, by myself from scratch and used it in my research. (Yes, it is based on exactly the same idea as pytorch-fortran: calling a TorchScript model directly from Fortran through interoperability between C++ and Fortran.) I was also working on a paper to introduce this new tool when I found your repo yesterday. It seems that Nvidia has also been working on this, even earlier. What a coincidence! πŸ˜‚πŸ˜‚

    Given that, I want to know what the future plans for pytorch-fortran are. As for the project, Fortran-Torch-Adapter is still in its infancy, and I would love to see a more powerful and well-organized tool like pytorch-fortran take it over; maybe I could also make some small contributions to this wonderful project. As for the paper, I don't know whether Nvidia has any plans to apply for a patent or maybe publish a paper on this? Since I am currently preparing a paper on this, you are very welcome to join by co-authoring or anything else, if you are interested.

    It's all open by now. Just want to hear your thoughts.

    opened by luc99hen 2
Releases (0.3)
  • 0.3 (Nov 8, 2022)

    Pytorch Fortran bindings

    The goal of this code is to provide Fortran HPC codes with a simple way to use the Pytorch deep learning framework. We want Fortran developers to take advantage of the rich and optimized Torch ecosystem from within their existing codes. The code is very much a work in progress right now, and any feedback or bug reports are welcome.

    Features

    • Define the model conveniently in Python, save it and open in Fortran
    • Pass Fortran arrays into the model, run inference and get output as a native Fortran array
    • Train the model from inside Fortran and save it
    • Run the model on the CPU or the GPU with the data also coming from the CPU or GPU
    • Use OpenACC to achieve zero-copy data transfer for the GPU models
    • Focus on achieving negligible performance overhead

    Building

    To assist with the build, we provide the Docker and HPCCM recipe for the container with all the necessary dependencies installed, see container

    You'll need to mount a folder with the cloned repository into the container, cd into this folder from the running container, and execute ./make_nvhpc.sh, ./make_gcc.sh, or ./make_intel.sh, depending on the compiler you want to use.

    To enable GPU support, you'll need the NVIDIA HPC SDK build. The GNU compiler is ramping up its OpenACC implementation and may soon also be supported. Changing the compiler is possible by modifying the CMAKE_Fortran_COMPILER CMake flag. Note that we are still working on testing different compilers, so issues are possible.

    Examples

    The examples folder contains three samples:

    • inference with a pre-trained ResNet;
    • end-to-end training of a simple NN predicting a polynomial;
    • training and inference through directly running Python (as opposed to pre-compiled Torch scripts); this example is work-in-progress.

    The polynomial case will run on the GPU if both the bindings and the example are compiled with OpenACC support. Before running the examples, you'll need to execute the setup-model.py script in the corresponding example folder, which defines the model and stores it on the disk. With the models saved and ready, run the following:

    cd /path/to/repository/
    install/bin/resnet_forward ../examples/resnet_forward/traced_model.pt
    install/bin/polynomial ../examples/polynomial/traced_model.pt ../examples/polynomial/your_new_trained_model.pt
    install/bin/python_training ../examples/python_training/model.py


    API

    We are working on documenting the full API. Please refer to the examples for more details. The bindings are provided through the following Fortran classes:

    Class torch_tensor

    This class is a light-weight Pytorch representation of a Fortran array. It does not own the data and only keeps the respective pointer. Arrays of ranks up to 7 and datatypes real32, real64, int32, and int64 are supported. Members:

    • from_array(Fortran array or pointer :: array) : create the tensor representation of a Fortran array.
    • to_array(pointer :: array) : create a Fortran pointer from the tensor. Use this API to convert the data returned by a Pytorch model into a Fortran array.
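
    For example, a minimal round trip through torch_tensor could look like this sketch (the torch_ftn module name is an assumption taken from the repository examples):

    program tensor_sketch
        use torch_ftn
        use iso_fortran_env, only: real32
        implicit none
        type(torch_tensor)    :: tensor
        real(real32)          :: input(8, 8)
        real(real32), pointer :: view(:, :)

        input = 1.0
        call tensor%from_array(input)  ! no copy: the tensor keeps a pointer to input
        call tensor%to_array(view)     ! view the tensor data as a Fortran pointer
    end program tensor_sketch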

    Class torch_tensor_wrap

    This class wraps a few tensors or scalars that can be passed as input into Pytorch models. Arrays and scalars must be of types real32, real64, int32 or int64. Members:

    • add_scalar(scalar) : add the scalar value into the wrapper.
    • add_tensor(torch_tensor :: tensor) : add the tensor into the wrapper.
    • add_array(Fortran array or pointer :: array) : create the tensor representation of a Fortran array and add it into the wrapper.
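
    As an illustration, assembling mixed inputs for a model might look like the following sketch (the variable names are hypothetical and the torch_ftn module name is assumed):

    program wrap_sketch
        use torch_ftn
        use iso_fortran_env, only: real32
        implicit none
        type(torch_tensor_wrap) :: inputs
        type(torch_tensor)      :: t
        real(real32)            :: field(64, 64)

        field = 0.0
        call t%from_array(field)
        call inputs%add_tensor(t)     ! add a pre-built tensor
        call inputs%add_array(field)  ! or wrap an array in one call
        call inputs%add_scalar(0.5)   ! scalar parameter for the model
    end program wrap_sketch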

    Class torch_module

    This class represents a traced Pytorch model, typically the result of a torch.jit.trace or torch.jit.script call from your Python script. This class is not thread-safe. For multi-threaded inference, either create a threaded Pytorch model or use one torch_module instance per thread (the latter could be less efficient). Members:

    • load(character(*) :: filename, integer :: flags) : load the module from a file. flags can be set to module_use_device to enable GPU processing.
    • forward(torch_tensor_wrap :: inputs, torch_tensor :: output, integer :: flags) : run inference with Pytorch. The tensors and scalars from inputs are passed into Pytorch, and output will contain the result. flags is currently unused.
    • create_optimizer_sgd(real :: learning_rate) : create an SGD optimizer to use in subsequent training.
    • train(torch_tensor_wrap :: inputs, torch_tensor :: target, real :: loss) : perform a single training step, where target is the target result and loss is the squared L2 loss returned by the optimizer.
    • save(character(*) :: filename) : save the trained model.
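
    Put together, a single training step with these members might look like the sketch below (the torch_ftn module name, file names, data, and the flags value 0 for CPU are assumptions for illustration):

    program train_sketch
        use torch_ftn
        use iso_fortran_env, only: real32
        implicit none
        type(torch_module)      :: model
        type(torch_tensor_wrap) :: inputs
        type(torch_tensor)      :: target
        real(real32)            :: x(100), y(100), loss

        x = 1.0; y = 2.0
        call inputs%add_array(x)                 ! model input
        call target%from_array(y)                ! training target
        call model%load('traced_model.pt', 0)    ! or module_use_device for the GPU
        call model%create_optimizer_sgd(1e-3)    ! SGD with learning rate 1e-3
        call model%train(inputs, target, loss)   ! one step; loss is the squared L2 loss
        call model%save('your_new_trained_model.pt')
    end program train_sketch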

    Class torch_pymodule

    This class represents a Pytorch Python script and requires the Python interpreter to be called. Only one torch_pymodule can be opened at a time due to a Python interpreter limitation. The overheads of calling this class are higher than with torch_module, but contrary to torch_module%train, one can train their Pytorch model with any optimizer, dropouts, etc. The intended usage of this class is to run online training with a complex pipeline that cannot be expressed as TorchScript. Members:

    • load(character(*) :: filename) : load the module from a Python script.
    • forward(torch_tensor_wrap :: inputs, torch_tensor :: output) : execute the ftn_pytorch_forward function from the Python script. The function is expected to accept tensors and scalars and return one tensor. The tensors and scalars from inputs are passed as arguments, and output will contain the result.
    • train(torch_tensor_wrap :: inputs, torch_tensor :: target, real :: loss) : execute the ftn_pytorch_train function from the Python script. The function is expected to accept tensors and scalars (with the last argument required to be the target tensor) and return a tuple of a bool is_completed and a float loss. is_completed is returned as the result of the train function, and loss is set according to the Python output. is_completed is meant to signify that training is completed due to some stopping criterion.
    • save(character(*) :: filename) : save the trained model.
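
    On the Fortran side, usage mirrors torch_module; note that, per the description above, train returns is_completed as a function result. A sketch with hypothetical names (inputs and target would be populated as in the earlier sketches; torch_ftn module name assumed):

    program pymodule_sketch
        use torch_ftn
        use iso_fortran_env, only: real32
        implicit none
        type(torch_pymodule)    :: pymodel
        type(torch_tensor_wrap) :: inputs
        type(torch_tensor)      :: output, target
        real(real32)            :: loss
        logical                 :: is_completed

        call pymodel%load('model.py')         ! script defining ftn_pytorch_* functions
        call pymodel%forward(inputs, output)  ! calls ftn_pytorch_forward
        is_completed = pymodel%train(inputs, target, loss)  ! calls ftn_pytorch_train
        if (is_completed) call pymodel%save('trained_from_python.pt')
    end program pymodule_sketch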

    Changelog

    v0.3

    • Changed interface: the forward and train routines now accept torch_tensor_wrap instead of just torch_tensor. This allows a user to pass multiple inputs consisting of tensors of different sizes and scalar values.
    • Fixed possible small memory leaks due to tensor handles
    • Fixed build targets in the scripts, they now properly build Release versions by default
    • Added a short API help
Owner
Dmitry Alexeev
HPC Compute DevTech