Myia prototyping


Myia

Myia is a new differentiable programming language. It aims to support large-scale, high-performance computations (e.g. linear algebra) and their gradients. The main application Myia aims to support is research in artificial intelligence, in particular deep learning algorithms.

  • Define a model using a subset of Python, which is compiled to Myia (interfaces in languages other than Python may follow). This subset is general purpose and includes looping constructs and recursion. It excludes side effects and in-place operations.
  • Ask for the derivative of your model (a sketch of this workflow follows the list). Derivatives are fully supported for all control flow and all differentiable primitives.
  • Compile to efficient CPU and GPU code that optimizes use of your resources.
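To make the workflow concrete, here is a minimal sketch of what defining and differentiating a model might look like. The `myia` decorator, the `grad` helper, and the import path are assumptions based on the description above, not a documented API.

```python
# Hypothetical sketch of the workflow described above; the `myia` and `grad`
# names are assumptions, not a documented API.
from myia import myia, grad  # assumed import path

def cost(w, b, x, y):
    # A tiny model written in the supported subset of Python:
    # no side effects, no in-place operations.
    prediction = w * x + b
    return (prediction - y) ** 2

# Ask for the derivative of the cost with respect to its first argument,
# then compile the result for the chosen backend (CPU or GPU).
dcost_dw = myia(grad(cost))

print(dcost_dw(2.0, 0.5, 3.0, 10.0))
```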

If you want to play with the current implementation, you can check out ALPHA.md.

A short document explaining some of Myia's inner workings is available here.

Status

Myia is currently under development and is not yet ready for use. We are optimistic about having an alpha version to play with around the start of 2020.

See Roadmap.

Motivation

Development in artificial intelligence has been undergoing a boom in the past decade, chiefly due to the success of deep neural networks. The training of a neural network is a sort of differentiable program: one writes a program to compute the output and a cost, and then one computes the derivative of that cost with respect to the model's parameters to determine how they should be updated.
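In miniature, the pattern looks like the plain-Python snippet below, with the derivative written out by hand; automatic differentiation exists to remove exactly that manual step (the function and names are illustrative only).

```python
# A differentiable program in miniature: compute a cost from a parameter,
# then use the derivative of that cost to update the parameter.
def cost(w, x, y):
    return (w * x - y) ** 2          # squared error of a one-weight "model"

def dcost_dw(w, x, y):
    return 2 * (w * x - y) * x       # derivative written out by hand

w, lr = 0.0, 0.1
for _ in range(20):
    w -= lr * dcost_dw(w, x=1.5, y=3.0)   # gradient-descent update

print(w)  # converges toward 2.0; autodiff removes the need to derive dcost_dw
```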

Differentiation can be automated, but mainstream programming languages offer no support for this, hence the need for libraries or programming languages that can reliably support these applications.

The current leading solutions for deep learning fall in two camps:

Computation graph-based solutions such as TensorFlow, Theano and MXNet support automatic differentiation and are very well optimized, but they are not fully general, with only limited support for loops and none for general recursion. Thus models like recursive neural networks are tricky and awkward to write.

Operator overloading solutions such as PyTorch or Autograd use a dynamic approach to automatic differentiation which makes them much more general, but they are tightly coupled to the Python language and cannot reap the benefits of an optimizing compiler. They also incur a certain amount of per-operation overhead, which discourages composing small, cheap operations.
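As an illustration of the operator-overloading style (not code from the Myia repository), PyTorch differentiates through ordinary Python control flow because its tape records whichever operations actually ran:

```python
import torch

def model(x, w):
    # The number of recurrent steps depends on the data itself, a pattern
    # that is awkward to express in static computation graphs.
    steps = int(x.abs().sum().item()) % 8 + 1
    h = x
    for _ in range(steps):
        h = torch.tanh(w @ h)
    return (h ** 2).sum()

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(4)
loss = model(x, w)
loss.backward()          # the tape recorded exactly the ops that executed
print(w.grad.shape)      # torch.Size([4, 4])
```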

Myia's solution is to define a strongly-typed, general-purpose intermediate representation with an IR-level automatic differentiation transformation, which can then be compiled and optimized for various targets, thereby getting the best of both leading approaches.

Roadmap

Current

  • Parser: Supports def, if, for, while, operators, function calls, and classes and methods (limited support).
  • Intermediate representation: Implemented, with an array of utilities.
  • Debug VM: Faithfully runs the IR.
  • VM: Works on the simplified/optimized IR.
  • Primitives: Scalar primitives work, as well as map, reduce, broadcasting, 2D convolutions, concat/split, and many other operations.
  • Type system: Types are inferred without the need for annotations. Shapes can also be inferred. Myia supports recursive ADTs (e.g. tree data structures); see the sketch after this list.
  • Optimization: Pattern-based optimizations, inlining, constant propagation, common subexpression elimination, closure conversion.
  • Automatic differentiation: Works for first-order derivatives; second-order differentiation is not yet in working order.
  • GPU support: Using Relay or PyTorch.
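The recursive ADTs mentioned above can be pictured as follows. This is an illustrative sketch in plain Python dataclasses; the exact class syntax Myia accepts may differ.

```python
from dataclasses import dataclass
from typing import Union

# A recursive ADT: a binary tree of numbers, the kind of structure the
# type and shape inference is meant to handle.

@dataclass
class Leaf:
    value: float

@dataclass
class Node:
    left: 'Tree'
    right: 'Tree'

Tree = Union[Leaf, Node]

def tree_sum(t: Tree) -> float:
    # General recursion over the ADT: no loop bound is known ahead of time,
    # and the gradient of tree_sum with respect to the leaf values is well
    # defined.
    if isinstance(t, Leaf):
        return t.value
    return tree_sum(t.left) + tree_sum(t.right)

print(tree_sum(Node(Leaf(1.0), Node(Leaf(2.0), Leaf(3.0)))))  # 6.0
```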

In development

  • Compiler optimization: The compiler currently needs to be optimized to reduce compile times.
  • Auto-monadization: We are working to support print statements and random number generation through an auto-monadization system that can automatically keep track of the IO or RNG state; see the sketch after this list.
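Roughly, auto-monadization automates the purely functional pattern below, where the RNG state is passed in and returned explicitly instead of being mutated in place (an illustrative sketch, not Myia code):

```python
from typing import Tuple

def next_random(state: int) -> Tuple[int, float]:
    # A toy linear congruential generator: returns a new state together
    # with a pseudo-random number in [0, 1).
    new_state = (1103515245 * state + 12345) % (2 ** 31)
    return new_state, new_state / (2 ** 31)

def noisy_scale(x: float, state: int) -> Tuple[float, int]:
    # A pure function: the RNG state goes in and comes back out explicitly,
    # so the program stays free of hidden side effects. Auto-monadization
    # would thread this state through the program automatically.
    state, r = next_random(state)
    return x * r, state

y, state = noisy_scale(3.0, state=42)
print(y, state)
```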

Next steps

  • Error messages: We need to make sure that every likely mistake leads to an understandable and traceable error diagnosis.

Near future

  • Serialization: Serializing optimized graphs will allow for greater performance across runs and greater portability across systems.
  • Debugger: The intent is to have a step debugger for Myia. A working one existed for a previous version of the IR, so this should not pose a problem.
  • More Python syntax: break/continue.

After Beta

  • Even more Python syntax: Support for these features is not certain.
    • Augmented assignment (under restrictions)
    • yield and await
  • Support other languages: Which ones will depend on demand. A new language is also a possibility.

Publications

Citation

If you use Myia for a scientific paper, please cite the above paper or mention Myia in the acknowledgements. It would be great if you could also let us know about it.

Owner

Mila (Quebec Artificial Intelligence Institute)