
UniNAS

A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS).

under development

(development happens mostly on our internal GitLab; we only push to GitHub every once in a while)

  • APIs may change
  • argparse arguments may be moved to more fitting classes
  • there may be incomplete or not-yet-working pieces of code
  • ...

Features

  • modular and therefore reusable
    • data set loading
    • network building code and topologies
    • methods to train architecture weights
    • sets of operations (primitives)
    • weight initializers
    • metrics
    • ... and more
  • everything is configurable from the command line and/or config files
    • improved reproducibility, since detailed run configurations are saved and logged
    • powerful search network descriptions enable e.g. highly customizable weight sharing settings
    • the underlying argparse mechanism enables using a GUI for configurations
  • compare results of different methods in the same environment
  • import and export detailed network descriptions
  • integrate new methods and more with fairly little effort
  • NAS-Benchmark integration
    • NAS-Bench-201
  • ... and more

Where is this code from?

Except for a few pieces, the code is entirely self-written. However, the official code of published methods is sometimes useful to learn from or to clear up details, and other frameworks can be used for their nice features.

Other meta-NAS frameworks

  • Deep Architect
    • highly customizable search spaces, hyperparameters, ...
    • the searchers (SMBO, MCTS, ...) focus on fully training (many) models and are not differentiable
  • D-X-Y NAS-Projects
  • Auto-PyTorch
    • a stronger focus on model selection than on optimizing a single architecture
  • Vega
  • NNI

Repository notes

Dynamic argparse tree

Everything is an argument. Learning rate? Argument. Scheduler? Argument. The exact topology of a network, including how many cells of each type there are and whether they share their architecture weights? Also arguments.

This is enabled by the idea that every class in use (method, network, cells, regularizers, ...) can add its own arguments to argparse, including which further classes it requires (e.g. a method needs a network, which needs a stem).

It starts with the Main class adding a Task (cls_task), which itself adds all required components (cls_*).

To see all available (meta) arguments, run Main.list_all_arguments() in uninas/main.py.
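
A minimal sketch of the idea (the class names and argument signatures here are simplified illustrations, not the actual UniNAS code): each component class contributes its own name-spaced arguments to a shared parser, while meta arguments such as cls_trainer name which further classes to query.

import argparse

class SimpleTrainer:
    # hypothetical component, for illustration only
    @classmethod
    def add_arguments(cls, parser: argparse.ArgumentParser):
        # every class adds its own, name-spaced arguments
        parser.add_argument('--%s.max_epochs' % cls.__name__, type=int, default=100)
        parser.add_argument('--%s.test_last' % cls.__name__, type=int, default=10)

class DemoTask:
    # a task states which further component classes it requires
    @classmethod
    def add_arguments(cls, parser: argparse.ArgumentParser):
        parser.add_argument('--cls_trainer', type=str, default='SimpleTrainer')
        # in the real tree, the class named by cls_trainer would be looked up
        # (via the register) and asked to add its own arguments in turn
        SimpleTrainer.add_arguments(parser)

parser = argparse.ArgumentParser()
DemoTask.add_arguments(parser)
args = parser.parse_args(['--SimpleTrainer.max_epochs', '50'])
print(vars(args))  # {'cls_trainer': 'SimpleTrainer', 'SimpleTrainer.max_epochs': 50, 'SimpleTrainer.test_last': 10}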

Graphical user interface

Since putting the arguments together correctly is not trivial (and requires some familiarity with the code base), an easier approach is to use a GUI.

Have a look at uninas/gui/tk_gui/main.py, a tkinter GUI frontend.

The GUI can automatically filter usable classes, display available arguments, and show tooltips, based only on the argparse (meta) arguments implemented in the respective classes.

Some meta arguments take a single class name:

e.g. cls_task, cls_trainer, cls_data, cls_criterion, cls_method

The chosen classes define their own arguments, e.g.:

  • cls_trainer="SimpleTrainer"
  • SimpleTrainer.max_epochs=100
  • SimpleTrainer.test_last=10

These meta arguments are also available as wildcards, which automatically resolve to the class name that was set:

  • cls_trainer="SimpleTrainer"
  • {cls_trainer}.max_epochs --> SimpleTrainer.max_epochs
  • {cls_trainer}.test_last --> SimpleTrainer.test_last

Some meta arguments take a comma-separated list of class names:

e.g. cls_metrics, cls_initializers, cls_regularizers, cls_optimizers, cls_schedulers

The chosen classes again define their own arguments, but their names always include an index, e.g.:

  • cls_regularizers="DropOutRegularizer, DropPathRegularizer"
  • DropOutRegularizer#0.prob=0.5
  • DropPathRegularizer#1.max_prob=0.3
  • DropPathRegularizer#1.drop_id_paths=false

And they are also available as indexed wildcards:

  • cls_regularizers="DropOutRegularizer, DropPathRegularizer"
  • {cls_regularizers#0}.prob --> DropOutRegularizer#0.prob
  • {cls_regularizers#1}.max_prob --> DropPathRegularizer#1.max_prob
  • {cls_regularizers#1}.drop_id_paths --> DropPathRegularizer#1.drop_id_paths
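
A small sketch of how such wildcard keys could be resolved (a hypothetical helper, not the actual UniNAS implementation):

def resolve_wildcards(args: dict) -> dict:
    # replaces '{cls_x}' / '{cls_x#i}' prefixes with the class name(s) set for cls_x
    resolved = {}
    for key, value in args.items():
        if key.startswith('{'):
            meta, suffix = key[1:].split('}', 1)   # e.g. 'cls_regularizers#1', '.max_prob'
            if '#' in meta:
                meta, idx = meta.split('#')
                names = [n.strip() for n in args[meta].split(',')]
                key = '%s#%s%s' % (names[int(idx)], idx, suffix)
            else:
                key = args[meta] + suffix
        resolved[key] = value
    return resolved

args = {
    'cls_trainer': 'SimpleTrainer',
    '{cls_trainer}.max_epochs': 100,
    'cls_regularizers': 'DropOutRegularizer, DropPathRegularizer',
    '{cls_regularizers#1}.max_prob': 0.3,
}
print(resolve_wildcards(args))
# {'cls_trainer': 'SimpleTrainer', 'SimpleTrainer.max_epochs': 100,
#  'cls_regularizers': 'DropOutRegularizer, DropPathRegularizer',
#  'DropPathRegularizer#1.max_prob': 0.3}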

Register

UniNAS makes heavy use of a registering mechanism (via decorators in uninas/register.py). Classes of the same type (e.g. optimizers, networks, ...) will register in one RegisterDict.

Registered classes can be accessed via their name in the Register, regardless of their actual location in the code. This enables e.g. saving network topologies as nested dictionaries, no matter how complicated they are, since the class names suffice to find the classes in the code. (It also grants a certain amount of refactoring freedom.)
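
The pattern itself is simple; a generic sketch follows (the actual decorators in uninas/register.py keep one register per class type and store more information):

_register = {}  # one such dict per class type (networks, optimizers, ...) in UniNAS

def register(cls):
    # class decorator: makes the class retrievable by name
    _register[cls.__name__] = cls
    return cls

@register
class DropPathRegularizer:
    def __init__(self, max_prob=0.3):
        self.max_prob = max_prob

# a saved config only needs the class name to rebuild the object,
# regardless of where the class lives in the code base
cfg = {'name': 'DropPathRegularizer', 'kwargs': {'max_prob': 0.3}}
obj = _register[cfg['name']](**cfg['kwargs'])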

Exporting networks

(Trained) networks can easily be used by other PyTorch frameworks/scripts; see verify.py for a simple example.
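
A minimal sketch of what this looks like from the consumer side, assuming the export is a regular serialized nn.Module (the file path is a placeholder; verify.py shows the actual procedure):

import torch

net = torch.load('exported_net.pt', map_location='cpu')  # placeholder path
net.eval()

with torch.no_grad():
    out = net(torch.randn(1, 3, 224, 224))  # e.g. one ImageNet-sized input
print(out.shape)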

Citation

The framework

We will possibly create a whitepaper at some point.

@misc{kl2020uninas,
  author = {Kevin Alexander Laube},
  title = {UniNAS},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cogsys-tuebingen/uninas}}
}

Inter-choice dependent super-network weights

  1. Train super-networks, e.g. via experiments/demo/inter_choice_weights/icw1_train_supernet_nats.py
    • you will need CIFAR-10, but you can also easily use fake data or download it
    • to generate SubImageNet, see uninas/utils/generate/data/subImageNet
  2. Evaluate the super-network, e.g. via experiments/demo/inter_choice_weights/icw2_eval_supernet.py
  3. View the evaluation results in the save dir, in TensorBoard or plotted directly

@article{laube2021interchoice,
  title={Inter-choice dependent super-network weights},
  author={Laube, Kevin Alexander and Zell, Andreas},
  journal={arXiv preprint arXiv:2104.11522},
  year={2021}
}