DominoSearch

Overview

This is the repository for the code and models of the NeurIPS 2021 paper "DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks".

Instructions and other materials will be released soon.

Search:

git clone https://github.com/NM-sparsity/DominoSearch.git
cd DominoSearch/DominoSearch/search/script_resnet_ImageNet

We provide several search scripts for different sparse-ratio targets; you can specify your own target and change the parameters accordingly. Note that you first need to set the path to your ImageNet dataset.

The search phase takes roughly 2-3 hours and stores the searched schemes in a txt file, which is needed as input for the mixed-sparsity training.

Below is an example of the output format.

{'SparseConv0_3-64-(7, 7)': [16, 16], 'SparseConv1_64-64-(1, 1)': [16, 16], 'SparseConv2_64-64-(3, 3)': [4, 16], 'SparseConv3_64-256-(1, 1)': [8, 16], 'SparseConv4_64-256-(1, 1)': [8, 16], 'SparseConv5_256-64-(1, 1)': [8, 16], 'SparseConv6_64-64-(3, 3)': [4, 16], 'SparseConv7_64-256-(1, 1)': [8, 16], 'SparseConv8_256-64-(1, 1)': [8, 16], 'SparseConv9_64-64-(3, 3)': [4, 16], 'SparseConv10_64-256-(1, 1)': [8, 16], 'SparseConv11_256-128-(1, 1)': [8, 16], 'SparseConv12_128-128-(3, 3)': [2, 16], 'SparseConv13_128-512-(1, 1)': [8, 16], 'SparseConv14_256-512-(1, 1)': [4, 16], 'SparseConv15_512-128-(1, 1)': [8, 16], 'SparseConv16_128-128-(3, 3)': [4, 16], 'SparseConv17_128-512-(1, 1)': [8, 16], 'SparseConv18_512-128-(1, 1)': [8, 16], 'SparseConv19_128-128-(3, 3)': [4, 16], 'SparseConv20_128-512-(1, 1)': [8, 16], 'SparseConv21_512-128-(1, 1)': [8, 16], 'SparseConv22_128-128-(3, 3)': [2, 16], 'SparseConv23_128-512-(1, 1)': [8, 16], 'SparseConv24_512-256-(1, 1)': [4, 16], 'SparseConv25_256-256-(3, 3)': [2, 16], 'SparseConv26_256-1024-(1, 1)': [4, 16], 'SparseConv27_512-1024-(1, 1)': [4, 16], 'SparseConv28_1024-256-(1, 1)': [4, 16], 'SparseConv29_256-256-(3, 3)': [2, 16], 'SparseConv30_256-1024-(1, 1)': [4, 16], 'SparseConv31_1024-256-(1, 1)': [4, 16], 'SparseConv32_256-256-(3, 3)': [2, 16], 'SparseConv33_256-1024-(1, 1)': [4, 16], 'SparseConv34_1024-256-(1, 1)': [4, 16], 'SparseConv35_256-256-(3, 3)': [2, 16], 'SparseConv36_256-1024-(1, 1)': [4, 16], 'SparseConv37_1024-256-(1, 1)': [4, 16], 'SparseConv38_256-256-(3, 3)': [2, 16], 'SparseConv39_256-1024-(1, 1)': [4, 16], 'SparseConv40_1024-256-(1, 1)': [4, 16], 'SparseConv41_256-256-(3, 3)': [2, 16], 'SparseConv42_256-1024-(1, 1)': [4, 16], 'SparseConv43_1024-512-(1, 1)': [4, 16], 'SparseConv44_512-512-(3, 3)': [2, 16], 'SparseConv45_512-2048-(1, 1)': [4, 16], 'SparseConv46_1024-2048-(1, 1)': [2, 16], 'SparseConv47_2048-512-(1, 1)': [4, 16], 'SparseConv48_512-512-(3, 3)': [2, 16], 'SparseConv49_512-2048-(1, 1)': [4, 16], 'SparseConv50_2048-512-(1, 1)': [4, 16], 'SparseConv51_512-512-(3, 3)': [2, 16], 'SparseConv52_512-2048-(1, 1)': [4, 16], 'Linear0_2048-1000': [4, 16]}
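Each entry maps a sparse layer's name to its [N, M] pair, meaning that layer keeps N weights out of every group of M (e.g. [4, 16] corresponds to 75% sparsity for that layer). As a minimal sketch, assuming the schemes file contains exactly one such dict literal as shown above, it can be loaded and inspected like this:

```python
import ast

# Load the searched schemes: a dict literal mapping layer name -> [N, M].
# Assumes the txt file contains exactly one dict literal, as in the example above.
with open("./schemes/resnet50_M16_0.80.txt") as f:
    schemes = ast.literal_eval(f.read())

# A layer with scheme [N, M] keeps N weights per group of M,
# so its density is N/M and its sparsity is 1 - N/M.
for layer, (n, m) in schemes.items():
    print(f"{layer}: {n}:{m} (sparsity {1 - n / m:.1%})")
```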

Train:

After obtaining the layer-wise sparse schemes, we fine-tune the network with these schemes to recover accuracy. The training code is based on NM-sparsity, with some changes to support flexible N:M schemes.
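For intuition, an N:M scheme zeroes all but the N largest-magnitude weights in every group of M consecutive weights. The sketch below illustrates the idea in plain PyTorch; it is not the repository's actual implementation (which also handles mask updates during training), just a minimal illustration:

```python
import torch

def nm_sparse_mask(weight: torch.Tensor, n: int, m: int) -> torch.Tensor:
    """Binary mask keeping the N largest-magnitude weights in each group of M.

    Assumes weight.numel() is divisible by m.
    """
    groups = weight.abs().reshape(-1, m)   # view weights as groups of M
    idx = groups.topk(n, dim=1).indices    # positions of the N largest per group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, idx, 1.0)             # mark kept positions with 1
    return mask.reshape(weight.shape)

# Example: a 2:16 scheme zeroes 87.5% of the weights.
w = torch.randn(64, 64, 3, 3)
w_sparse = w * nm_sparse_mask(w, n=2, m=16)
```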

Below is an example of training a layer-wise sparse ResNet-50 with 80% overall sparsity.

cd DominoSearch/DominoSearch/train/classification_sparsity_level/train_imagenet
python -m torch.distributed.launch --nproc_per_node=8 ../train_imagenet.py --config ./configs/config_resnet50.yaml --base_lr 0.01 --decay 0.0005 --epochs 120 --schemes_file ./schemes/resnet50_M16_0.80.txt --model_dir ./resnet50/resnet50_0.80_M16
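Adjust --nproc_per_node to the number of GPUs on your machine. Note that recent PyTorch releases deprecate torch.distributed.launch in favor of torchrun; switching may require the training script to read the local rank from the LOCAL_RANK environment variable instead of a --local_rank argument.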

Experiments

We provide the trained models of the experiments. Please check our paper for details and interpretations of the experiments.

ResNet50 experiments in section 4.1

| Model Name | Top-1 Accuracy (%) | Trained Model | Searched Schemes |
| --- | --- | --- | --- |
| resnet50 - 0.80 model size | 76.7 | google drive | google drive |
| resnet50 - 0.875 model size | 75.7 | google drive | google drive |
| resnet50 - 0.9375 model size | 73.5 | google drive | google drive |
| resnet50 - 8x FLOPs | 75.4 | google drive | google drive |
| resnet50 - 16x FLOPs | 73.4 | google drive | google drive |

Ablation experiments of ResNet50 in section 5.3

| Model Name | Top-1 Accuracy (%) | Trained Model | Training Log |
| --- | --- | --- | --- |
| Ablation E3 | 76.1 | google drive | google drive |
| Ablation E4 | 76.4 | google drive | google drive |
| Ablation E6 | 76.6 | google drive | google drive |
| Ablation E7 | 75.6 | google drive | google drive |

Citation

@inproceedings{sun2021dominosearch,
  title={DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks},
  author={Wei Sun and Aojun Zhou and Sander Stuijk and Rob G. J. Wijnhoven and Andrew Nelson and Hongsheng Li and Henk Corporaal},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021},
  url={https://openreview.net/forum?id=IGrC6koW_g}
}