Tensors and neural networks in Haskell

Overview

Hasktorch

Hasktorch is a library for tensors and neural networks in Haskell. It is an independent open-source community project that leverages the core C++ libraries shared by PyTorch.
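
To give a flavor of the API, here is a minimal sketch of creating and combining tensors with the untyped interface. It assumes that asTensor, ones', matmul, and add are re-exported by the Torch module (as in recent Hasktorch 0.2 snapshots); treat it as an illustration rather than a definitive reference.

-- Minimal tensor sketch; assumes the hasktorch package is available.
import Torch

main :: IO ()
main = do
  let x = asTensor ([[1, 2], [3, 4]] :: [[Float]])  -- 2x2 tensor built from a nested Haskell list
      y = ones' [2, 2]                              -- 2x2 tensor of ones with default options
  print (matmul x y)                                -- matrix multiplication
  print (add x y)                                   -- elementwise addition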

This project is in active development, so expect changes to the library API as it evolves. We invite new users to join our Hasktorch Slack space for questions and discussions. Contributions/PRs are encouraged.

We are currently developing the second major release of Hasktorch (0.2). Note that the first release, Hasktorch 0.1, on Hackage is outdated and should not be used.

Documentation

The documentation is divided into several sections:

Introductory Videos

Getting Started

The following steps will get you started. They assume that the hasktorch repository has just been cloned. After setup is complete, read the online tutorials and API documentation.

linux+cabal+cpu

Starting from the top-level directory of the project, run:

$ pushd deps       # Change to the deps directory and save the current directory.
$ ./get-deps.sh    # Run the shell script to retrieve the libtorch dependencies.
$ popd             # Go back to the root directory of the project.
$ source setenv    # Set the shell environment to reference the shared library locations.
$ ./setup-cabal.sh # Create a cabal project file.

To build and test the Hasktorch library, run:

$ cabal build hasktorch  # Build the Hasktorch library.
$ cabal test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ cabal build examples  # Build the Hasktorch examples.
$ cabal test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE=cpu             # Set device to CPU for the MNIST CNN example.
$ cabal run static-mnist-cnn    # Run the MNIST CNN example.
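
The DEVICE environment variable controls where the example places its tensors. As a rough, hypothetical sketch (the getDevice helper below is not part of the examples), a program might translate it into a Hasktorch Device like this, assuming Device and DeviceType are exported from the Torch module:

import System.Environment (lookupEnv)
import Torch (Device (..), DeviceType (..))

-- Hypothetical helper: map the DEVICE environment variable to a Torch Device.
-- An unset variable or "cpu" selects the CPU; "cuda:0" selects the first CUDA device.
getDevice :: IO Device
getDevice = do
  mdev <- lookupEnv "DEVICE"
  pure $ case mdev of
    Just "cuda:0" -> Device CUDA 0
    _             -> Device CPU 0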

linux+cabal+cuda11

Starting from the top-level directory of the project, run:

$ pushd deps              # Change to the deps directory and save the current directory.
$ ./get-deps.sh -a cu111  # Run the shell script to retrieve the libtorch dependencies.
$ popd                    # Go back to the root directory of the project.
$ source setenv           # Set the shell environment to reference the shared library locations.
$ ./setup-cabal.sh        # Create a cabal project file.

To build and test the Hasktorch library, run:

$ cabal build hasktorch  # Build the Hasktorch library.
$ cabal test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ cabal build examples  # Build the Hasktorch examples.
$ cabal test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE="cuda:0"        # Set device to CUDA for the MNIST CNN example.
$ cabal run static-mnist-cnn    # Run the MNIST CNN example.
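
Within Haskell code, using the GPU generally comes down to constructing a CUDA device and copying tensors to it. Below is a hedged sketch, assuming toDevice and device from Torch.Tensor are re-exported by the Torch module:

import Torch (Device (..), DeviceType (..), asTensor, device, toDevice)

main :: IO ()
main = do
  let cuda0 = Device CUDA 0                    -- first CUDA device
      x     = asTensor ([1, 2, 3] :: [Float])  -- tensor created on the CPU
      x'    = toDevice cuda0 x                 -- copy the tensor to the GPU
  print (device x')                            -- report the device the tensor lives on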

macos+cabal+cpu

Starting from the top-level directory of the project, run:

$ pushd deps       # Change to the deps directory and save the current directory.
$ ./get-deps.sh    # Run the shell script to retrieve the libtorch dependencies.
$ popd             # Go back to the root directory of the project.
$ source setenv    # Set the shell environment to reference the shared library locations.
$ ./setup-cabal.sh # Create a cabal project file.

To build and test the Hasktorch library, run:

$ cabal build hasktorch  # Build the Hasktorch library.
$ cabal test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ cabal build examples  # Build the Hasktorch examples.
$ cabal test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE=cpu             # Set device to CPU for the MNIST CNN example.
$ cabal run static-mnist-cnn    # Run the MNIST CNN example.

linux+stack+cpu

Install the Haskell Tool Stack if you haven't already, following the instructions here.

Starting from the top-level directory of the project, run:

$ pushd deps     # Change to the deps directory and save the current directory.
$ ./get-deps.sh  # Run the shell script to retrieve the libtorch dependencies.
$ popd           # Go back to the root directory of the project.
$ source setenv  # Set the shell environment to reference the shared library locations.

To build and test the Hasktorch library, run:

$ stack build hasktorch  # Build the Hasktorch library.
$ stack test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ stack build examples  # Build the Hasktorch examples.
$ stack test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE=cpu             # Set device to CPU for the MNIST CNN example.
$ stack run static-mnist-cnn    # Run the MNIST CNN example.

macos+stack+cpu

Install the Haskell Tool Stack if you haven't already, following the instructions here.

Starting from the top-level directory of the project, run:

$ pushd deps     # Change to the deps directory and save the current directory.
$ ./get-deps.sh  # Run the shell script to retrieve the libtorch dependencies.
$ popd           # Go back to the root directory of the project.
$ source setenv  # Set the shell environment to reference the shared library locations.

To build and test the Hasktorch library, run:

$ stack build hasktorch  # Build the Hasktorch library.
$ stack test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ stack build examples  # Build the Hasktorch examples.
$ stack test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE=cpu             # Set device to CPU for the MNIST CNN example.
$ stack run static-mnist-cnn    # Run the MNIST CNN example.

nixos+cabal+cpu

(Optional) Install and set up Cachix:

$ nix-env -iA cachix -f https://cachix.org/api/v1/install  # (Optional) Install Cachix.
$ cachix use iohk                                          # (Optional) Use IOHK's cache.
$ cachix use hasktorch                                     # (Optional) Use hasktorch's cache.

Starting from the top-level directory of the project, run:

$ nix-shell  # Enter the nix shell environment for Hasktorch.

To build and test the Hasktorch library, run:

$ cabal build hasktorch  # Build the Hasktorch library.
$ cabal test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ cabal build examples  # Build the Hasktorch examples.
$ cabal test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE=cpu             # Set device to CPU for the MNIST CNN example.
$ cabal run static-mnist-cnn    # Run the MNIST CNN example.

nixos+cabal+cuda11

(Optional) Install and set up Cachix:

$ nix-env -iA cachix -f https://cachix.org/api/v1/install  # (Optional) Install Cachix.
$ cachix use iohk                                          # (Optional) Use IOHK's cache.
$ cachix use hasktorch                                     # (Optional) Use hasktorch's cache.

Starting from the top-level directory of the project, run:

$ nix-shell --arg cudaSupport true --argstr cudaMajorVersion 11  # Enter the nix shell environment for Hasktorch.

To build and test the Hasktorch library, run:

$ cabal build hasktorch  # Build the Hasktorch library.
$ cabal test hasktorch   # Build and run the Hasktorch library test suite.

To build and test the example executables shipped with hasktorch, run:

$ cabal build examples  # Build the Hasktorch examples.
$ cabal test examples   # Build and run the Hasktorch example test suites.

To run the MNIST CNN example, run:

$ cd examples                   # Change to the examples directory.
$ ./datasets/download-mnist.sh  # Download the MNIST dataset.
$ mv mnist data                 # Move the MNIST dataset to the data directory.
$ export DEVICE="cuda:0"        # Set device to CUDA for the MNIST CNN example.
$ cabal run static-mnist-cnn    # Run the MNIST CNN example.

docker+jupyterlab+cuda11

This Docker Hub repository provides a Docker image with JupyterLab. It comes in cuda11, cuda10, and cpu-only variants. To use JupyterLab with Hasktorch, run one of the following commands, then click the URL shown in the console.

$ docker run --gpus all -it --rm -p 8888:8888 htorch/hasktorch-jupyter
or
$ docker run --gpus all -it --rm -p 8888:8888 htorch/hasktorch-jupyter:latest-cu11

Known Issues

Tensors Cannot Be Moved to CUDA

In rare cases, you may see errors like

cannot move tensor to "CUDA:0"

although you have CUDA-capable hardware in your machine and have followed the getting-started instructions for CUDA support.

If that happens, check if /run/opengl-driver/lib exists. If not, make sure your CUDA drivers are installed correctly.

Weird Behaviour When Switching from CPU-Only to CUDA-Enabled Nix Shell

If you have run cabal in a CPU-only Hasktorch Nix shell before, you may need to:

  • Clean the dist-newstyle folder using cabal clean.
  • Delete the .ghc.environment* file in the Hasktorch root folder.

Otherwise, at best, you will not be able to move tensors to CUDA, and, at worst, you will see weird linker errors like

gcc: error: hasktorch/dist-newstyle/build/x86_64-linux/ghc-8.8.3/libtorch-ffi-1.5.0.0/build/Torch/Internal/Unmanaged/Autograd.dyn_o: No such file or directory
`cc' failed in phase `Linker'. (Exit code: 1)

Contributing

We welcome new contributors.

Contact us for access to the Hasktorch Slack channel. You can send an email to [email protected] or reach out on Twitter: @austinvhuang, @SamStites, @tscholak, or @junjihashimoto3.

Notes for library developers

See the wiki for developer notes.

Project Folder Structure

Basic functionality:

  • deps/ - submodules and downloads for build dependencies (libtorch, mklml, pytorch) -- you can ignore this if you are on Nix
  • examples/ - high-level example models (xor mlp, typed cnn, etc.)
  • experimental/ - experimental projects or tips
  • hasktorch/ - higher-level user-facing library, calls into ffi/, used by examples/

Internals (for contributing developers):

  • codegen/ - code generation, parses Declarations.yaml spec from pytorch and produces ffi/ contents
  • inline-c/ - submodule to inline-cpp fork used for C++ FFI
  • libtorch-ffi/ - low-level FFI bindings to libtorch
  • spec/ - specification files used for codegen/