This is a collection of our NAS and Vision Transformer work.

Overview

AutoML - Neural Architecture Search

This is a collection of our AutoML-NAS work.

iRPE (NEW): Rethinking and Improving Relative Position Encoding for Vision Transformer

AutoFormer (NEW): Searching Transformers for Visual Recognition

Cream (NeurIPS'20): Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search

We also implemented our NAS algorithms on Microsoft NNI (Neural Network Intelligence).

News

  • ☀️ Hiring research interns for neural architecture search, tiny transformer design, and model compression projects: [email protected]
  • 💥 Oct, 2021: AutoFormerV2 has been accepted by NeurIPS'21 and will be released soon.
  • 💥 Aug, 2021: Code for AutoFormer is now released.
  • 💥 July, 2021: iRPE code (with CUDA Acceleration) is now released. Paper is here.
  • 💥 July, 2021: iRPE has been accepted by ICCV'21.
  • 💥 July, 2021: AutoFormer has been accepted by ICCV'21.
  • 💥 July, 2021: AutoFormer is now available on arXiv.
  • 💥 Oct, 2020: Code for Cream is now released.
  • 💥 Oct, 2020: Cream was accepted by NeurIPS'20.

Works

AutoFormer

AutoFormer is a new one-shot architecture search framework dedicated to vision transformer search. It entangles the weights of different vision transformer blocks in the same layers during supernet training. Benefiting from this strategy, the trained supernet allows thousands of subnets to be very well trained: the performance of these subnets with weights inherited from the supernet is comparable to that of the same subnets retrained from scratch.
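
As a rough illustration of weight entanglement, here is a minimal sketch under assumed shapes (not the official AutoFormer code): candidate widths in the same layer share slices of a single super weight, so training any sampled subnet updates the shared parameters.

    import torch
    import torch.nn as nn

    class EntangledLinear(nn.Module):
        """A linear layer whose sampled sub-widths are slices of one super weight."""
        def __init__(self, super_in, super_out):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(super_out, super_in) * 0.02)
            self.bias = nn.Parameter(torch.zeros(super_out))

        def forward(self, x, sample_in, sample_out):
            # Every sampled (sample_in, sample_out) choice reuses the same
            # top-left slice of the super weight, entangling the candidates.
            w = self.weight[:sample_out, :sample_in]
            b = self.bias[:sample_out]
            return nn.functional.linear(x, w, b)

    layer = EntangledLinear(super_in=64, super_out=256)
    x = torch.randn(8, 48)                      # a subnet with input dim 48
    y = layer(x, sample_in=48, sample_out=192)  # and output dim 192
    print(y.shape)                              # torch.Size([8, 192])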

AutoFormer overview

iRPE

Image RPE (iRPE for short) methods are new relative position encoding methods dedicated to 2D images, considering directional relative distance modeling as well as the interactions between queries and relative position embeddings in the self-attention mechanism. The proposed iRPE methods are simple and lightweight, and can be easily plugged into transformer blocks. Experiments demonstrate that, solely due to the proposed encoding methods, DeiT and DETR obtain up to 1.5% (top-1 accuracy) and 1.3% (mAP) stable improvements over their original versions on ImageNet and COCO respectively, without tuning any extra hyperparameters such as learning rate and weight decay. Our ablations and analysis also yield interesting findings, some of which run counter to previous understanding.
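
For intuition, here is a minimal sketch of a bias-mode 2D relative position encoding (a deliberate simplification, not the released iRPE code): each query-key pair is assigned a bucket by its 2D offset, and a learned per-bucket, per-head bias is added to the attention logits.

    import torch
    import torch.nn as nn

    class RelPosBias2D(nn.Module):
        """Adds a learned bias indexed by 2D relative offset to attention logits."""
        def __init__(self, height, width, num_heads):
            super().__init__()
            num_buckets = (2 * height - 1) * (2 * width - 1)
            self.bias = nn.Parameter(torch.zeros(num_heads, num_buckets))
            ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width),
                                    indexing="ij")
            coords = torch.stack([ys.flatten(), xs.flatten()])  # (2, N)
            rel = coords[:, :, None] - coords[:, None, :]       # (2, N, N)
            idx = (rel[0] + height - 1) * (2 * width - 1) + (rel[1] + width - 1)
            self.register_buffer("idx", idx)                    # (N, N)

        def forward(self, attn_logits):
            # attn_logits: (batch, heads, N, N) with N == height * width
            return attn_logits + self.bias[:, self.idx]

    rpb = RelPosBias2D(height=14, width=14, num_heads=3)
    logits = torch.randn(2, 3, 14 * 14, 14 * 14)
    out = rpb(logits)  # same shape, with relative position bias added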

iRPE overview

Cream

[Paper] [Models-Google Drive] [Models-Baidu Disk (password: wqw6)] [Slides] [BibTex]

In this work, we present a simple yet effective architecture distillation method. The central idea is that subnetworks can learn collaboratively and teach each other throughout the training process, boosting the convergence of individual models. We introduce the concept of the prioritized path, which refers to the architecture candidates exhibiting superior performance during training. Distilling knowledge from the prioritized paths can boost the training of subnetworks. Since the prioritized paths change on the fly depending on their performance and complexity, the final obtained paths are the cream of the crop.
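
A minimal sketch of the idea (assuming a hypothetical supernet API with sample_path(); this is not the released Cream training loop): keep a small board of the best-performing sampled paths, and distill from the best board member into each newly sampled subnet.

    import torch
    import torch.nn.functional as F

    class PathBoard:
        """Keeps the top-scoring sampled paths seen so far."""
        def __init__(self, capacity=5):
            self.capacity = capacity
            self.board = []  # list of (score, path)

        def update(self, path, score):
            self.board.append((score, path))
            self.board.sort(key=lambda t: t[0], reverse=True)
            self.board = self.board[:self.capacity]  # keep only the cream

        def best(self):
            return self.board[0][1] if self.board else None

    def train_step(supernet, images, labels, board, alpha=0.5):
        path = supernet.sample_path()            # hypothetical API
        logits = supernet(images, path)
        loss = F.cross_entropy(logits, labels)
        teacher_path = board.best()
        if teacher_path is not None:
            with torch.no_grad():
                t_logits = supernet(images, teacher_path)
            kd = F.kl_div(F.log_softmax(logits, -1), F.softmax(t_logits, -1),
                          reduction="batchmean")
            loss = loss + alpha * kd
        # Proxy score for the board; the paper ranks paths by performance
        # and complexity rather than this simple negative loss.
        board.update(path, score=-loss.item())
        return loss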

Bibtex

@InProceedings{iRPE,
    author    = {Wu, Kan and Peng, Houwen and Chen, Minghao and Fu, Jianlong and Chao, Hongyang},
    title     = {Rethinking and Improving Relative Position Encoding for Vision Transformer},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10033-10041}
}

@InProceedings{AutoFormer,
  title     = {AutoFormer: Searching Transformers for Visual Recognition},
  author    = {Chen, Minghao and Peng, Houwen and Fu, Jianlong and Ling, Haibin},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2021}
}

@article{Cream,
  title={Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search},
  author={Peng, Houwen and Du, Hao and Yu, Hongyuan and Li, Qi and Liao, Jing and Fu, Jianlong},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

License

Licensed under the MIT License.

Comments
  • Please open source the teacher logits

    Dear Authors,

    Very impressive work. For reproducibility purposes, could you please share the teacher logits files for all the teachers shown in the paper?

    TinyViT 
    opened by sanyalsunny111 15
  • RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:

    I encountered a runtime error when I tried to search for an architecture based on your code.

    /opt/conda/conda-bld/pytorch_1565272279342/work/torch/csrc/autograd/python_anomaly_mode.cpp:57: UserWarning: Traceback of forward call that caused the error:
      File "tools/train.py", line 300, in <module>
        main()
      File "tools/train.py", line 259, in main
        est=model_est, local_rank=args.local_rank)
      File "/opt/tiger/cream/lib/core/train.py", line 55, in train_epoch
        output = model(input, random_cand)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 442, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/opt/tiger/cream/lib/models/structures/supernet.py", line 121, in forward
        x = self.forward_features(x, architecture)
      File "/opt/tiger/cream/lib/models/structures/supernet.py", line 113, in forward_features
        x = blocks[arch](x)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/timm/models/efficientnet_blocks.py", line 133, in forward
        x = self.bn1(x)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
        exponential_average_factor, self.eps)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/functional.py", line 1656, in batch_norm
        training, momentum, eps, torch.backends.cudnn.enabled
    
    Traceback (most recent call last):
      File "tools/train.py", line 300, in <module>
        main()
      File "tools/train.py", line 259, in main
        est=model_est, local_rank=args.local_rank)
      File "/opt/tiger/cream/lib/core/train.py", line 67, in train_epoch
        loss.backward()
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [320]] is at version 2507; expected version 2506 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
    

    I tried to locate the source of the error, and I found that the error appears whenever the code updates the meta network or adds the kd_loss to the final loss. How can I fix this problem?
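
    For context, here is a minimal, self-contained repro of this class of autograd error (illustrative only, unrelated to the Cream code paths above): a tensor saved for backward is mutated in place, which bumps its version counter.

      import torch

      x = torch.randn(3, requires_grad=True)
      y = x * 2
      z = (y ** 2).sum()  # pow saves y for its backward pass
      y.add_(1)           # in-place op bumps y's version counter
      z.backward()        # RuntimeError: ... modified by an inplace operation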

    opened by Ema1997 11
  • MiniViT: Some NCCL operations have failed or timed out

    When I try to run Mini-DeiT with 6 GPUs on the same node, training stops within the first several epochs, with error info like:

    [E ProcessGroupNCCL.cpp:587] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808699 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808705 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808703 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808731 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808750 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808749 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.

    Can you tell me what might cause this problem? Thank you a lot!
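
    (For readers hitting the same error: one commonly suggested mitigation, not an official fix, is to raise the collective timeout when initializing the process group, so slow steps do not trip the 30-minute NCCL watchdog seen in the log above.)

      from datetime import timedelta
      import torch.distributed as dist

      # Raise the watchdog timeout from the default 30 min (1800000 ms above).
      dist.init_process_group(backend="nccl", timeout=timedelta(hours=2))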

    MiniViT 
    opened by Ga-Lee 7
  • Using TinyVit_5m_224 as the backbone to train a segmentation task

    Hi, thanks for sharing your excellent work. I want to use TinyVit_5m_224 as the backbone to train a segmentation task whose input size is 512x512. Do I need to change the original weights because of the different size? How can I do it?

    TinyViT 
    opened by haoxurt 6
  • RuntimeError: CUDA error: invalid device function?

    We have compiled the CUDA version of the iRPE module with the setup.py file in DETR-with-iRPE. When we start to train the model, this issue appears:

      File "***/rpe_attention/rpe_attention_function.py", line 330, in rpe_multi_head_attention_forward
    attn_output_weights_view.add_(rpe_k(q_view, height=hw[0], width=hw[1]))
    

    RuntimeError: CUDA error: invalid device function

    The environment of our project is :

    pytorch:1.9.1
    python:3.8
    torchvision: 0.10.1
    cudatoolkit: 10.2.89
    

    I debugged the training process; the main reason is that the outputs of the functions rpe_k(*) and rpe_q(*) cannot perform the add operation with attn_output_weights_view. Could you give a suggestion?

    iRPE 
    opened by chenfsjz 6
  • How to design the flops range FLOPS_MINIMUM and FLOPS_MAXIMUM to specify the desired model Flops?

    Hi, thanks for your excellent work. As the title says, how should the FLOPs range FLOPS_MINIMUM and FLOPS_MAXIMUM be designed to specify the desired model FLOPs? Since flops_minimum and flops_maximum influence subnet and teacher network sampling, would target models of 500M and 50M need different choices?

    opened by sunnyxiaohu 6
  • FLOPs in the paper

    Hi, I have a question about the FLOPs reported in the paper. In Table 5, Cream-S has 287M FLOPs, but based on the architecture in the appendix I found it should be 318M FLOPs.

    opened by GG-Bonds 6
  • Accuracy of the network on the 50000 test images: 0.1%

    Hello authors, this is very meaningful work. I encountered an accuracy problem when running the following code: python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/22k_distill/tiny_vit_5m_22k_distill.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_22k_distill.pth --opts DATA.DATASET imagenet. Did I do something wrong? The dataset I use is ILSVRC2012; is this what the project calls ImageNet? On the other hand, how can I evaluate it on a machine with only a CPU? Looking forward to your reply, thank you!

    TinyViT 
    opened by DCBXZ66 4
  • About the teacher logits of the TinyViT

    Hi, thanks for sharing your excellent work. I am trying to use the script save_logits.py to generate the soft labels for knowledge distillation. During generation I found that the binary file for the same epoch differs when different starting-epoch options are used, although running save_logits.py with the flag --check-saved-logits reports no difference or error. I wonder where these differences might come from.

    To reproduce the issue, generate the logits of a specific epoch with different start-epoch settings.

    Thanks!

    TinyViT 
    opened by shadowpa0327 4
  • Questions about search space of Cream

    Hi, thank you for your great work! I am interested in Cream, but I ran into some problems when reading both the paper and the source code.

    Question 1:

    In supernet.py:

      arch_def = [
          # stage 0, 112x112 in
          ['ds_r1_k3_s1_e1_c16_se0.25'],
          # stage 1, 112x112 in
          ['ir_r1_k3_s2_e4_c24_se0.25', 'ir_r1_k3_s1_e4_c24_se0.25', 'ir_r1_k3_s1_e4_c24_se0.25',
           'ir_r1_k3_s1_e4_c24_se0.25'],
          # stage 2, 56x56 in
          ['ir_r1_k5_s2_e4_c40_se0.25', 'ir_r1_k5_s1_e4_c40_se0.25', 'ir_r1_k5_s2_e4_c40_se0.25',
           'ir_r1_k5_s2_e4_c40_se0.25'],
          # stage 3, 28x28 in
          ['ir_r1_k3_s2_e6_c80_se0.25', 'ir_r1_k3_s1_e4_c80_se0.25', 'ir_r1_k3_s1_e4_c80_se0.25',
           'ir_r2_k3_s1_e4_c80_se0.25'],
          # stage 4, 14x14in
          ['ir_r1_k3_s1_e6_c96_se0.25', 'ir_r1_k3_s1_e6_c96_se0.25', 'ir_r1_k3_s1_e6_c96_se0.25',
           'ir_r1_k3_s1_e6_c96_se0.25'],
          # stage 5, 14x14in
          ['ir_r1_k5_s2_e6_c192_se0.25', 'ir_r1_k5_s1_e6_c192_se0.25', 'ir_r1_k5_s2_e6_c192_se0.25',
           'ir_r1_k5_s2_e6_c192_se0.25'],
          # stage 6, 7x7 in
          ['cn_r1_k1_s1_c320_se0.25'],
      ]
    

    There are specific numbers of blocks for each stage; in this case, there are 4, 4, 5, 4, 4 blocks.

    However, in your paper the repeat number ranges from 4 to 6, so the code doesn't match the description in the paper.


    Question 2:

    Another question is about the skip connection operation, which your paper describes as part of the search space.

    But I cannot find the skip connection in the search space in your code:

    https://github.com/microsoft/Cream/blob/bd49e0b933eeb60147f2b3bcecf241f06d92b435/Cream/lib/models/builders/build_supernet.py#L34

    There are only 6 operations in your search space.

    cream 
    opened by pprp 4
  • Problem with nvcc fatal: Unsupported gpu architecture 'compute_86'

    Hi, thanks for your great work, but I have a problem when I run the following commands: cd /rpe_ops; python setup.py install --user

    FAILED: /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/build/temp.linux-x86_64-3.8/rpe_index_cuda.o
    /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/build/temp.linux-x86_64-3.8/rpe_index_cuda.o.d -DWITH_CUDA -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include/TH -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include/THC -I/home/UserDirectory/hongshengz/anaconda3/include/python3.8 -c -c /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/rpe_index_cuda.cu -o /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/build/temp.linux-x86_64-3.8/rpe_index_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=rpe_index_cpp -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
    nvcc fatal : Unsupported gpu architecture 'compute_86'

    My environment is: RTX3090 CUDA:11.4
    torch_version: 1.8.1
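
    (A commonly suggested workaround, offered as an assumption rather than an official fix: compute_86 requires nvcc from CUDA >= 11.1, and the log shows the build invoking /usr/bin/nvcc, likely an older system toolkit. Either make the CUDA 11.4 nvcc visible to the build, or target an architecture the old nvcc supports before running setup.py:)

      import os

      # Hypothetical workaround: build sm_75 binaries with embedded PTX so an
      # RTX 3090 can JIT-compile them; set this before invoking setup.py.
      os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5+PTX"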

    iRPE 
    opened by hongsheng-Z 4
  • Loss NaN for AutoFormer Base Model

    Thanks for your work! I tried to reproduce the AutoFormer base model, but the loss may become NaN during epochs 200 to 300. Do you have any idea how to solve this problem?

    AutoFormer 
    opened by rehulisw 1
  • Rethinking and Improving Relative Position Encoding for Vision Transformer with memory-optimized attention

    Hello, I was wondering whether your relative position encoding schemes would work with approximate attention mechanisms, for example FlashAttention (https://arxiv.org/abs/2205.14135).

    iRPE 
    opened by jakubMitura14 1
  • Model architecture search in TinyViT framework

    I have tried to find the search algorithm that finds tinier versions of the parent model using the "constrained local search" mentioned in the paper, in order to reproduce your work.

    Could you release the search algorithm where you have used the progressive model contraction approach to find better architectures with good performance?

    TinyViT 
    opened by NKSagarReddy 3
  • Possible bug in AutoFormer

    https://github.com/microsoft/Cream/blob/a857830192d472e6776e9af4bbd988f35ebf1f4d/AutoFormer/model/module/qkv_super.py#L72-L83

    In qkv_super, the weight and bias sharing strategies are different. I think the selection of the bias is unreasonable and should be modified in the following way:

     def sample_bias(bias, sample_out_dim):
         # bias is 1-D (length 3 * super_embed_dim), so sample it the same
         # interleaved way as the weight rows; the original snippet sliced the
         # undefined 2-D `sample_bias` instead of `bias`.
         sample_bias = torch.cat([bias[i:sample_out_dim:3] for i in range(3)], dim=0)

         return sample_bias
    
    
    opened by crj1998 0