This is a collection of our NAS and Vision Transformer work.

Overview

AutoML - Neural Architecture Search

This is a collection of our AutoML-NAS work.

iRPE (NEW): Rethinking and Improving Relative Position Encoding for Vision Transformer

AutoFormer (NEW): Searching Transformers for Visual Recognition

Cream (@NeurIPS'20): Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search

We also implemented our NAS algorithms on Microsoft NNI (Neural Network Intelligence).

News

  • ☀️ Hiring research interns for neural architecture search, tiny transformer design, and model compression projects: [email protected]
  • 💥 Oct, 2021: AutoFormerV2 has been accepted by NeurIPS'21 and will be released soon.
  • 💥 Aug, 2021: Code for AutoFormer is now released.
  • 💥 July, 2021: iRPE code (with CUDA Acceleration) is now released. Paper is here.
  • 💥 July, 2021: iRPE has been accepted by ICCV'21.
  • 💥 July, 2021: AutoFormer has been accepted by ICCV'21.
  • 💥 July, 2021: AutoFormer is now available on arXiv.
  • 💥 Oct, 2020: Code for Cream is now released.
  • 💥 Oct, 2020: Cream was accepted by NeurIPS'20.

Works

AutoFormer

AutoFormer is a new one-shot architecture search framework dedicated to vision transformer search. It entangles the weights of different vision transformer blocks in the same layers during supernet training. Benefiting from this strategy, the trained supernet allows thousands of subnets to be very well trained: the performance of these subnets with weights inherited from the supernet is comparable to that of the same subnets retrained from scratch.

AutoFormer overview
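
To make the entanglement idea concrete, here is a minimal PyTorch sketch of weight entanglement for a single super linear layer. `SuperLinear` and its `forward` signature are illustrative names, not the repository's actual API: every candidate width reuses a slice of one shared weight tensor, so overlapping subnets train the same parameters jointly.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SuperLinear(nn.Module):
        """A linear layer whose sub-widths share (entangle) one weight tensor."""
        def __init__(self, max_in_dim, max_out_dim):
            super().__init__()
            # One shared parameter sized for the largest choice; subnets slice it.
            self.weight = nn.Parameter(torch.randn(max_out_dim, max_in_dim) * 0.02)
            self.bias = nn.Parameter(torch.zeros(max_out_dim))

        def forward(self, x, in_dim, out_dim):
            # Slicing picks the sampled subnet's weights; gradients flow back
            # into the shared tensor, so all overlapping subnets are trained.
            w = self.weight[:out_dim, :in_dim]
            b = self.bias[:out_dim]
            return F.linear(x, w, b)

    layer = SuperLinear(max_in_dim=64, max_out_dim=256)
    x = torch.randn(8, 48)
    y = layer(x, in_dim=48, out_dim=192)  # one sampled subnet width: (8, 192)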

iRPE

Image RPE (iRPE for short) is a family of new relative position encoding methods dedicated to 2D images, considering directional relative distance modeling as well as the interactions between queries and relative position embeddings in the self-attention mechanism. The proposed iRPE methods are simple and lightweight and can be easily plugged into transformer blocks. Experiments demonstrate that, solely due to the proposed encoding methods, DeiT and DETR obtain up to 1.5% (top-1 Acc) and 1.3% (mAP) stable improvements over their original versions on ImageNet and COCO respectively, without tuning any extra hyperparameters such as learning rate and weight decay. Our ablation and analysis also yield interesting findings, some of which run counter to previous understanding.

iRPE overview
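
As a rough illustration of the idea (not the paper's exact piecewise bucketing, and `RelPosBias2D` is a hypothetical name), a bias-mode 2D relative position encoding adds a learnable, direction-aware term to the attention logits:

    import torch
    import torch.nn as nn

    class RelPosBias2D(nn.Module):
        """Directional 2D relative position bias added to attention logits."""
        def __init__(self, num_heads, height, width):
            super().__init__()
            # One learnable bias per head per (dy, dx) offset.
            self.table = nn.Parameter(torch.zeros(num_heads, 2 * height - 1, 2 * width - 1))
            ys, xs = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
            pos = torch.stack([ys.flatten(), xs.flatten()], dim=1)   # (N, 2) pixel coordinates
            rel = pos[:, None, :] - pos[None, :, :]                  # (N, N, 2) signed offsets
            self.register_buffer("dy", rel[..., 0] + height - 1)     # shift to non-negative indices
            self.register_buffer("dx", rel[..., 1] + width - 1)

        def forward(self, attn_logits):
            # attn_logits: (batch, heads, N, N); the added bias keeps direction information.
            return attn_logits + self.table[:, self.dy, self.dx]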

Cream

[Paper] [Models-Google Drive] [Models-Baidu Disk (password: wqw6)] [Slides] [BibTex]

In this work, we present a simple yet effective architecture distillation method. The central idea is that subnetworks can learn collaboratively and teach each other throughout the training process, boosting the convergence of individual models. We introduce the concept of a prioritized path, which refers to the architecture candidates that exhibit superior performance during training. Distilling knowledge from the prioritized paths boosts the training of subnetworks. Since the prioritized paths change on the fly depending on their performance and complexity, the final obtained paths are the cream of the crop.
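
A minimal sketch of the idea, assuming a simple score-ranked board; `PathBoard` and `distill_loss` are illustrative names, not the repository's classes. The board keeps the best-performing architectures seen so far, and a sampled subnetwork distills from one of them alongside the usual hard-label loss:

    import torch.nn.functional as F

    class PathBoard:
        """Keeps the top-scoring architecture candidates (prioritized paths)."""
        def __init__(self, capacity=5):
            self.capacity = capacity
            self.paths = []  # list of (score, architecture)

        def update(self, score, arch):
            self.paths.append((score, arch))
            self.paths.sort(key=lambda p: p[0], reverse=True)
            self.paths = self.paths[:self.capacity]  # weaker paths drop off on the fly

    def distill_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
        # Hard-label cross-entropy plus soft-label KD from a prioritized path.
        ce = F.cross_entropy(student_logits, labels)
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        return (1 - alpha) * ce + alpha * kd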

Bibtex

@InProceedings{iRPE,
    author    = {Wu, Kan and Peng, Houwen and Chen, Minghao and Fu, Jianlong and Chao, Hongyang},
    title     = {Rethinking and Improving Relative Position Encoding for Vision Transformer},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10033-10041}
}

@InProceedings{AutoFormer,
  title={AutoFormer: Searching Transformers for Visual Recognition},
  author={Chen, Minghao and Peng, Houwen and Fu, Jianlong and Ling, Haibin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

@article{Cream,
  title={Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search},
  author={Peng, Houwen and Du, Hao and Yu, Hongyuan and Li, Qi and Liao, Jing and Fu, Jianlong},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

License

Licensed under the MIT License.

Comments
  • Please open source the teacher logits

    Dear Authors,

    Very impressive work. For reproducibility purposes, could you please share the teacher-logits files for all the teachers shown in the paper?

    TinyViT 
    opened by sanyalsunny111 15
  • RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

    I encountered a runtime error when I tried to search for an architecture based on your code.

    /opt/conda/conda-bld/pytorch_1565272279342/work/torch/csrc/autograd/python_anomaly_mode.cpp:57: UserWarning: Traceback of forward call that caused the error:
      File "tools/train.py", line 300, in <module>
        main()
      File "tools/train.py", line 259, in main
        est=model_est, local_rank=args.local_rank)
      File "/opt/tiger/cream/lib/core/train.py", line 55, in train_epoch
        output = model(input, random_cand)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 442, in forward
        output = self.module(*inputs[0], **kwargs[0])
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/opt/tiger/cream/lib/models/structures/supernet.py", line 121, in forward
        x = self.forward_features(x, architecture)
      File "/opt/tiger/cream/lib/models/structures/supernet.py", line 113, in forward_features
        x = blocks[arch](x)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/timm/models/efficientnet_blocks.py", line 133, in forward
        x = self.bn1(x)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 81, in forward
        exponential_average_factor, self.eps)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/nn/functional.py", line 1656, in batch_norm
        training, momentum, eps, torch.backends.cudnn.enabled
    
    Traceback (most recent call last):
      File "tools/train.py", line 300, in <module>
        main()
      File "tools/train.py", line 259, in main
        est=model_est, local_rank=args.local_rank)
      File "/opt/tiger/cream/lib/core/train.py", line 67, in train_epoch
        loss.backward()
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/tensor.py", line 118, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/tiger/.conda/envs/Cream/lib/python3.6/site-packages/torch/autograd/__init__.py", line 93, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [320]] is at version 2507; expected version 2506 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
    

    I tried to locate the source of the error, and I found that the error above appears whenever the code updates the meta network or adds the kd_loss to the final loss. How can I fix this problem?

    opened by Ema1997 11
  • MiniViT: Some NCCL operations have failed or timed out

    When I try to run Mini-DeiT with 6 GPUs on the same node, training stops within the first few epochs with error messages like:

    [E ProcessGroupNCCL.cpp:587] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808699 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808705 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808703 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808731 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808750 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:587] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1808749 milliseconds before timing out.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
    [E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.

    Can you tell me what might cause this problem? Thank you!

    MiniViT 
    opened by Ga-Lee 7
  • Using TinyVit_5m_224 as a backbone to train a segmentation task

    Hi, thanks for sharing your excellent work. I want to use TinyVit_5m_224 as the backbone for a segmentation task whose input size is 512x512. Do I need to change the original weights because of the different input size? How can I do it?

    TinyViT 
    opened by haoxurt 6
  • RuntimeError: CUDA error: invalid device function

    We compiled the CUDA version of the iRPE module with the setup.py file in DETR-with-iRPE. When we start training the model, we get this error:

      File "***/rpe_attention/rpe_attention_function.py", line 330, in rpe_multi_head_attention_forward
    attn_output_weights_view.add_(rpe_k(q_view, height=hw[0], width=hw[1]))
    

    RuntimeError: CUDA error: invalid device function

    The environment of our project is:

    pytorch:1.9.1
    python:3.8
    torchvision: 0.10.1
    cudatoolkit: 10.2.89
    

    I debugged the training process; the immediate cause is that the outputs of the functions rpe_k(...) and rpe_q(...) cannot be added to attn_output_weights_view. Could you give a suggestion?

    iRPE 
    opened by chenfsjz 6
  • How to set the FLOPs range (FLOPS_MINIMUM and FLOPS_MAXIMUM) to specify the desired model FLOPs?

    Hi, thanks for your excellent work. As the title says, how should the FLOPs range (FLOPS_MINIMUM and FLOPS_MAXIMUM) be set to obtain a model with the desired FLOPs? Since flops_minimum and flops_maximum influence subnet and teacher-network sampling, would 500M and 50M target models need different settings?

    opened by sunnyxiaohu 6
  • FLOPs in the paper

    Hi, I have a question about the FLOPs reported in the paper. In Table 5, Cream-S is listed at 287M FLOPs, but based on the architecture in the appendix it should be 318M FLOPs.

    opened by GG-Bonds 6
  • Accuracy of the network on the 50000 test images: 0.1%

    Hello authors, this is very meaningful work. I encountered this accuracy problem when running the following command:

        python -m torch.distributed.launch --nproc_per_node 8 main.py --cfg configs/22k_distill/tiny_vit_5m_22k_distill.yaml --data-path ./ImageNet --batch-size 128 --eval --resume ./checkpoints/tiny_vit_5m_22k_distill.pth --opts DATA.DATASET imagenet

    Did I do something wrong? The dataset I use is ILSVRC2012; is this what the project calls ImageNet? I would also like to ask how to evaluate the model on a CPU-only machine. Looking forward to your reply, thank you!

    TinyViT 
    opened by DCBXZ66 4
  • About the teacher logits of TinyViT

    Hi, thanks for sharing your excellent work. I am using the script save_logits.py to generate the soft labels for knowledge distillation. During generation, I found that the binary file for the same epoch differs when different starting-epoch options are used, even though running save_logits.py with the --check-saved-logits flag reports no difference or error. Where might these differences come from?

    To reproduce the issue, generate the logits of a specific epoch with different start-epoch settings.

    Thanks!

    TinyViT 
    opened by shadowpa0327 4
  • Questions about the search space of Cream

    Hi, thank you for your great work! I am interested in Cream, but I ran into some problems when reading the paper and the source code.

    Question 1:

    In supernet.py:

      arch_def = [
          # stage 0, 112x112 in
          ['ds_r1_k3_s1_e1_c16_se0.25'],
          # stage 1, 112x112 in
          ['ir_r1_k3_s2_e4_c24_se0.25', 'ir_r1_k3_s1_e4_c24_se0.25', 'ir_r1_k3_s1_e4_c24_se0.25',
           'ir_r1_k3_s1_e4_c24_se0.25'],
          # stage 2, 56x56 in
          ['ir_r1_k5_s2_e4_c40_se0.25', 'ir_r1_k5_s1_e4_c40_se0.25', 'ir_r1_k5_s2_e4_c40_se0.25',
           'ir_r1_k5_s2_e4_c40_se0.25'],
          # stage 3, 28x28 in
          ['ir_r1_k3_s2_e6_c80_se0.25', 'ir_r1_k3_s1_e4_c80_se0.25', 'ir_r1_k3_s1_e4_c80_se0.25',
           'ir_r2_k3_s1_e4_c80_se0.25'],
          # stage 4, 14x14in
          ['ir_r1_k3_s1_e6_c96_se0.25', 'ir_r1_k3_s1_e6_c96_se0.25', 'ir_r1_k3_s1_e6_c96_se0.25',
           'ir_r1_k3_s1_e6_c96_se0.25'],
          # stage 5, 14x14in
          ['ir_r1_k5_s2_e6_c192_se0.25', 'ir_r1_k5_s1_e6_c192_se0.25', 'ir_r1_k5_s2_e6_c192_se0.25',
           'ir_r1_k5_s2_e6_c192_se0.25'],
          # stage 6, 7x7 in
          ['cn_r1_k1_s1_c320_se0.25'],
      ]
    

    There are specific numbers of blocks for each stage; in this case, stages 1-5 have 4, 4, 5, 4, and 4 blocks.

    However, in your paper the repeat number ranges from 4 to 6. The code doesn't match the description in the paper.
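
    For reference, these strings follow the timm-style block encoding (r = repeat, k = kernel, s = stride, e = expansion, c = channels, se = squeeze-excite ratio). A rough, illustrative decoder (not the repository's actual parser) shows why the 'ir_r2' entry makes stage 3 contain five blocks:

      def decode_block(s):
          # Decode a timm-style block string, e.g. 'ir_r2_k3_s1_e4_c80_se0.25'.
          parts = s.split("_")
          block_type, opts = parts[0], {}   # 'ds', 'ir', or 'cn'
          for p in parts[1:]:
              if p.startswith("se"):
                  opts["se"] = float(p[2:])
              else:
                  opts[p[0]] = p[1:]
          return block_type, opts

      print(decode_block("ir_r2_k3_s1_e4_c80_se0.25"))
      # ('ir', {'r': '2', 'k': '3', 's': '1', 'e': '4', 'c': '80', 'se': 0.25})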

    Question 2:

    Another question is about the skip-connection operation. In your paper, the description is as below:

    But I cannot find the skip connection in your search space:

    https://github.com/microsoft/Cream/blob/bd49e0b933eeb60147f2b3bcecf241f06d92b435/Cream/lib/models/builders/build_supernet.py#L34

    There are only 6 operations in the search space.

    cream 
    opened by pprp 4
  • nvcc fatal: Unsupported gpu architecture 'compute_86'

    Hi, thanks for your great work, but I have a problem when I run the following commands:

        cd /rpe_ops
        python setup.py install --user

        FAILED: /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/build/temp.linux-x86_64-3.8/rpe_index_cuda.o
        /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/build/temp.linux-x86_64-3.8/rpe_index_cuda.o.d -DWITH_CUDA -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include/TH -I/home/UserDirectory/hongshengz/anaconda3/lib/python3.8/site-packages/torch/include/THC -I/home/UserDirectory/hongshengz/anaconda3/include/python3.8 -c -c /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/rpe_index_cuda.cu -o /home/UserDirectory/hongshengz/Stark-main/lib/models/stark/rpe_ops/build/temp.linux-x86_64-3.8/rpe_index_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=rpe_index_cpp -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
        nvcc fatal   : Unsupported gpu architecture 'compute_86'

    My environment is: RTX 3090, CUDA 11.4, torch 1.8.1.

    iRPE 
    opened by hongsheng-Z 4
  • Loss NaN for the AutoFormer base model

    Thanks for your work! I tried to reproduce the AutoFormer base model, but the loss becomes NaN somewhere between epochs 200 and 300. Do you have any ideas for solving this problem?

    AutoFormer 
    opened by rehulisw 1
  • Rethinking and Improving Relative Position Encoding for Vision Transformer with memory-optimized attention

    Hello, I was wondering whether your relative position encoding schemes would work with approximate, memory-optimized attention mechanisms, for example FlashAttention (https://arxiv.org/abs/2205.14135).

    iRPE 
    opened by jakubMitura14 1
  • Model architecture search in the TinyViT framework

    I have been trying to find the search algorithm that produces tinier versions of the parent model, using the "constrained local search" mentioned in the paper, in order to reproduce your work.

    Could you release the search algorithm that uses the progressive model contraction approach to find better architectures with good performance?

    TinyViT 
    opened by NKSagarReddy 3
  • A possible bug in AutoFormer

    https://github.com/microsoft/Cream/blob/a857830192d472e6776e9af4bbd988f35ebf1f4d/AutoFormer/model/module/qkv_super.py#L72-L83

    In qkv_super, the weight-sharing and bias-sharing strategies differ. I think the selection of the bias is unreasonable and should be modified in the following way:

     def sample_bias(bias, sample_out_dim):
         # bias is the fused 1-D qkv bias, so take every third element for each
         # of q, k, v up to sample_out_dim, mirroring the weight-sampling strategy.
         sample_bias = torch.cat([bias[i:sample_out_dim:3] for i in range(3)], dim=0)
         return sample_bias
    
    
    opened by crj1998 0