Multi-Glimpse Network With Python


Our code requires Python ≥ 3.8

Installation

For example, venv + pip:

$ python3 -m venv env
$ source env/bin/activate
(env) $ python3 -m pip install -r requirements.txt

Evaluation

Accuracy on clean images

  1. Create ImageNet100 from ImageNet (using symbolic links); a rough sketch of this step appears after the evaluation commands below.
$ python3 tools/create_imagenet100.py tools/imagenet100.txt \
    /path/to/ImageNet /path/to/ImageNet100
  2. Download checkpoints from Google Drive.

  3. Test accuracy.

$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100/val \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --model resnet18 \
    --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --model resnet18 \
    --checkpoint resnet18_ours --alpha 0.6 --s 0.02
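
For reference, the subset creation in step 1 can be pictured as a small symlink loop. The sketch below is only an approximation of what tools/create_imagenet100.py does; it assumes ImageNet is laid out as train/<wnid>/ and val/<wnid>/ class directories, which may not match the actual tool.

# Minimal sketch of a symlink-based ImageNet100 builder (approximation, not the actual tool).
import os
import sys

def create_imagenet100(class_list_file, imagenet_root, output_root):
    with open(class_list_file) as f:
        wnids = [line.strip() for line in f if line.strip()]
    for split in ("train", "val"):
        os.makedirs(os.path.join(output_root, split), exist_ok=True)
        for wnid in wnids:
            src = os.path.join(imagenet_root, split, wnid)
            dst = os.path.join(output_root, split, wnid)
            if not os.path.islink(dst):
                os.symlink(src, dst)  # one symlink per class directory

if __name__ == "__main__":
    create_imagenet100(sys.argv[1], sys.argv[2], sys.argv[3])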

Add the --flop_count flag to count the approximate FLOPs for inference on a single image (using fvcore).
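
As a rough standalone illustration of how fvcore counts FLOPs (using a torchvision ResNet-18 and a 224x224 input as stand-ins, which are assumptions; the repository's --flop_count code path may differ):

# Standalone FLOP counting with fvcore (illustrative only).
import torch
from torchvision.models import resnet18
from fvcore.nn import FlopCountAnalysis

model = resnet18().eval()
dummy_input = torch.randn(1, 3, 224, 224)   # one 224x224 RGB image
flops = FlopCountAnalysis(model, dummy_input)
print(f"total FLOPs: {flops.total() / 1e9:.2f} G")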

Accuracy on adversarial attacks (PGD)

  1. Test adversarial accuracy.
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --adv --step_k 10 \
    --model resnet18 --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --adv --step_k 10 \
    --model resnet18 --checkpoint resnet18_ours --alpha 0.6 --s 0.02
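
The --adv --step_k 10 flags above evaluate robustness to a PGD attack, presumably with 10 steps. Purely for orientation, a generic k-step L-infinity PGD looks roughly like the following sketch; the epsilon and step size are illustrative assumptions, not the repository's settings.

# Generic k-step L-infinity PGD (illustrative; not the repository's exact implementation).
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    adv = images.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)  # random start in the eps-ball
    adv = torch.clamp(adv, 0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()            # ascend the loss
        adv = torch.clamp(adv, images - eps, images + eps)  # project back into the eps-ball
        adv = torch.clamp(adv, 0, 1)                        # keep a valid image
    return adv.detach()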

Accuracy on common corruptions

  1. Create ImageNet100-C from ImageNet-C (using symbolic links).
$ python3 tools/create_imagenet100c.py  \
    tools/imagenet100.txt  /path/to/ImageNet-C/ /path/to/ImageNet100-C/
  2. Test for a single corruption.
$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100-C/pixelate/5 \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --test --n_iter 1 --scale 1.0  --model resnet18 \
    --checkpoint resnet18_baseline
# Ours
$ python3 main.py $dataset --test --n_iter 4 --scale 2.33 --model resnet18 \
    --checkpoint resnet18_ours --alpha 0.6 --s 0.02
  3. A simple script to test all corruptions and collect results.
# Modify tools/eval_imagenet100c.py and run it to generate script
$ python3 tools/eval_imagenet100c.py /path/to/ImageNet100-C/ > run.sh
# Evaluate
$ bash run.sh
# Collect results
$ python3 tools/collect_imagenet100c.py
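
For orientation, the generator in step 3 can be imagined as a loop over the 15 ImageNet-C corruptions and 5 severities that prints one evaluation command per combination. The sketch below only illustrates the idea; the flags mirror the "Ours" command above, and the exact corruption list and output format of the real tools/eval_imagenet100c.py are assumptions.

# Illustrative generator for run.sh (the real tools/eval_imagenet100c.py may differ).
import sys

CORRUPTIONS = [
    "gaussian_noise", "shot_noise", "impulse_noise", "defocus_blur", "glass_blur",
    "motion_blur", "zoom_blur", "snow", "frost", "fog", "brightness", "contrast",
    "elastic_transform", "pixelate", "jpeg_compression",
]

def main(imagenet100c_root):
    for corruption in CORRUPTIONS:
        for severity in range(1, 6):
            print(
                f"python3 main.py --train_dir /path/to/ImageNet100/train "
                f"--val_dir {imagenet100c_root}/{corruption}/{severity} "
                f"--dataset imagenet --num_class 100 --test --n_iter 4 --scale 2.33 "
                f"--model resnet18 --checkpoint resnet18_ours --alpha 0.6 --s 0.02"
            )

if __name__ == "__main__":
    main(sys.argv[1].rstrip("/"))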

Training

$ export dataset="--train_dir /path/to/ImageNet100/train \
    --val_dir /path/to/ImageNet100/val \
    --dataset imagenet --num_class 100"
# Baseline
$ python3 main.py $dataset --epochs 400 --n_iter 1 --scale 1.0 \
    --model resnet18 --gpu 0,1,2,3
# Ours
$ python3 main.py $dataset --epochs 400 --n_iter 4 --scale 2.33 \
    --model resnet18 --alpha 0.6 --s 0.02  --gpu 0,1,2,3

Check TensorBoard for the training logs. (When training with multiple GPUs, the logged values may be scaled by the number of GPUs, except for the validation accuracy.)

$ tensorboard --logdir=logs
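
If you need the unscaled training values, one option is to read the event files directly and divide by the number of GPUs used. A minimal sketch with TensorBoard's EventAccumulator; the scalar tag name and GPU count are hypothetical, so check your own logs for the actual tags.

# Sketch: rescale a multi-GPU training scalar when reading TensorBoard logs.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

NUM_GPUS = 4           # GPUs used during training (assumption)
TAG = "train/loss"     # hypothetical tag; inspect the log for the actual scalar names

acc = EventAccumulator("logs")  # point at a specific run directory if logs/ holds several runs
acc.Reload()
for event in acc.Scalars(TAG):
    print(event.step, event.value / NUM_GPUS)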

Note that we left our exploratory code (e.g., self-supervised spatial guidance and a dynamic gradient re-scaling operation) in the repository for further study.

Owner
LinkedIn: https://www.linkedin.com/in/sia-huat-tan-2bb6911a5/