Benchmarks of all public open-source implementations of convnets

Overview

convnet-benchmarks

Easy benchmarking of all public open-source implementations of convnets. A summary is provided in the section below.

Machine: 6-core Intel Core i7-5930K CPU @ 3.50GHz + NVIDIA Titan X + Ubuntu 14.04 x86_64

Imagenet Winners Benchmarking

I pick popular ImageNet models and clock the time for a full forward + backward pass, averaging my times over 10 runs. I ignore dropout and softmax layers.
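As a rough illustration of this methodology, here is a minimal, framework-agnostic timing sketch in Python. The `benchmark` helper and its `sync` hook are hypothetical names, not part of any benchmarked library; with GPU frameworks, `sync` must block until all queued kernels finish (via the framework's synchronize call), otherwise asynchronous launches make wall-clock timings meaningless.

```python
import time

def benchmark(step, sync=lambda: None, runs=10, warmup=3):
    """Average wall-clock time of `step` (one forward + backward pass).

    `sync` should block until all queued GPU work has finished;
    without it, asynchronous kernel launches make timings meaningless.
    """
    for _ in range(warmup):          # warm-up: allocation, autotuning
        step()
    sync()
    start = time.time()
    for _ in range(runs):
        step()
    sync()
    return (time.time() - start) * 1000.0 / runs   # mean ms per pass

# Toy stand-in for a real forward + backward pass:
print("%.2f ms" % benchmark(lambda: sum(i * i for i in range(100000))))
```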

Notation

Input is described as {batch_size}x{num_channels}x{image_width}x{image_height}, where batch_size is the number of images in a minibatch, num_channels is the number of channels in each image, and image_width and image_height are the spatial dimensions of each image.
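For example, AlexNet's input 128x3x224x224 is a minibatch of 128 three-channel 224x224 images. A tiny sketch of the corresponding tensor layout (NCHW), assuming NumPy:

```python
import numpy as np

# "Input 128x3x224x224": a minibatch of 128 RGB (3-channel) images,
# each 224 pixels wide and 224 pixels high, in NCHW layout.
batch = np.zeros((128, 3, 224, 224), dtype=np.float32)
print(batch.shape)  # (128, 3, 224, 224)
```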

One small note:

The CuDNN benchmarks are done using Torch bindings; one could equally run them via Caffe bindings or the bindings of any other library. The point of this note is that "Caffe (native)" and "Torch (native)" refer to the default fallback convolution kernels shipped with those frameworks. Some frameworks, such as TensorFlow and Chainer, are benchmarked with CuDNN but do not say so explicitly, so one might conclude that those frameworks as a whole are faster than, for example, Caffe, which is not necessarily the case.

AlexNet (One Weird Trick paper) - Input 128x3x224x224

| Library | Class | Total (ms) | forward (ms) | backward (ms) |
| --- | --- | ---: | ---: | ---: |
| CuDNN[R4]-fp16 (Torch) | cudnn.SpatialConvolution | 71 | 25 | 46 |
| Nervana-neon-fp16 | ConvLayer | 78 | 25 | 52 |
| CuDNN[R4]-fp32 (Torch) | cudnn.SpatialConvolution | 81 | 27 | 53 |
| TensorFlow | conv2d | 81 | 26 | 55 |
| Nervana-neon-fp32 | ConvLayer | 87 | 28 | 58 |
| fbfft (Torch) | fbnn.SpatialConvolution | 104 | 31 | 72 |
| Chainer | Convolution2D | 177 | 40 | 136 |
| cudaconvnet2* | ConvLayer | 177 | 42 | 135 |
| CuDNN[R2] * | cudnn.SpatialConvolution | 231 | 70 | 161 |
| Caffe (native) | ConvolutionLayer | 324 | 121 | 203 |
| Torch-7 (native) | SpatialConvolutionMM | 342 | 132 | 210 |
| CL-nn (Torch) | SpatialConvolutionMM | 963 | 388 | 574 |
| Caffe-CLGreenTea | ConvolutionLayer | 1442 | 210 | 1232 |

Overfeat [fast] - Input 128x3x231x231

| Library | Class | Total (ms) | forward (ms) | backward (ms) |
| --- | --- | ---: | ---: | ---: |
| Nervana-neon-fp16 | ConvLayer | 176 | 58 | 118 |
| Nervana-neon-fp32 | ConvLayer | 211 | 69 | 141 |
| CuDNN[R4]-fp16 (Torch) | cudnn.SpatialConvolution | 242 | 86 | 156 |
| CuDNN[R4]-fp32 (Torch) | cudnn.SpatialConvolution | 268 | 94 | 174 |
| TensorFlow | conv2d | 279 | 90 | 189 |
| fbfft (Torch) | SpatialConvolutionCuFFT | 342 | 114 | 227 |
| Chainer | Convolution2D | 620 | 135 | 484 |
| cudaconvnet2* | ConvLayer | 723 | 176 | 547 |
| CuDNN[R2] * | cudnn.SpatialConvolution | 810 | 234 | 576 |
| Caffe | ConvolutionLayer | 823 | 355 | 468 |
| Torch-7 (native) | SpatialConvolutionMM | 878 | 379 | 499 |
| CL-nn (Torch) | SpatialConvolutionMM | 963 | 388 | 574 |
| Caffe-CLGreenTea | ConvolutionLayer | 2857 | 616 | 2240 |

OxfordNet [Model-A] - Input 64x3x224x224

| Library | Class | Total (ms) | forward (ms) | backward (ms) |
| --- | --- | ---: | ---: | ---: |
| Nervana-neon-fp16 | ConvLayer | 254 | 82 | 171 |
| Nervana-neon-fp32 | ConvLayer | 320 | 103 | 217 |
| CuDNN[R4]-fp16 (Torch) | cudnn.SpatialConvolution | 471 | 140 | 331 |
| CuDNN[R4]-fp32 (Torch) | cudnn.SpatialConvolution | 529 | 162 | 366 |
| TensorFlow | conv2d | 540 | 158 | 382 |
| Chainer | Convolution2D | 885 | 251 | 632 |
| fbfft (Torch) | SpatialConvolutionCuFFT | 1092 | 355 | 737 |
| cudaconvnet2* | ConvLayer | 1229 | 408 | 821 |
| CuDNN[R2] * | cudnn.SpatialConvolution | 1099 | 342 | 757 |
| Caffe | ConvolutionLayer | 1068 | 323 | 745 |
| Torch-7 (native) | SpatialConvolutionMM | 1105 | 350 | 755 |
| CL-nn (Torch) | SpatialConvolutionMM | 3437 | 875 | 2562 |
| Caffe-CLGreenTea | ConvolutionLayer | 5620 | 988 | 4632 |

GoogleNet V1 - Input 128x3x224x224

| Library | Class | Total (ms) | forward (ms) | backward (ms) |
| --- | --- | ---: | ---: | ---: |
| Nervana-neon-fp16 | ConvLayer | 230 | 72 | 157 |
| Nervana-neon-fp32 | ConvLayer | 270 | 84 | 186 |
| TensorFlow | conv2d | 445 | 135 | 310 |
| CuDNN[R4]-fp16 (Torch) | cudnn.SpatialConvolution | 462 | 112 | 349 |
| CuDNN[R4]-fp32 (Torch) | cudnn.SpatialConvolution | 470 | 130 | 340 |
| Chainer | Convolution2D | 687 | 189 | 497 |
| Caffe | ConvolutionLayer | 1935 | 786 | 1148 |
| CL-nn (Torch) | SpatialConvolutionMM | 7016 | 3027 | 3988 |
| Caffe-CLGreenTea | ConvolutionLayer | 9462 | 746 | 8716 |

Layer-wise Benchmarking (Last Updated April 2015)

Spatial Convolution layer (3D input 3D output, densely connected)

forward + backprop (wrt input and weights)
| Original Library | Class/Function Benchmarked | Total (ms) | forward (ms) | backward (ms) |
| --- | --- | ---: | ---: | ---: |
| fbfft | SpatialConvolutionCuFFT | 256 | 101 | 155 |
| cuda-convnet2 * | ConvLayer | 977 | 201 | 776 |
| cuda-convnet** | pylearn2.cuda_convnet | 1077 | 312 | 765 |
| CuDNN R2 * | cudnn.SpatialConvolution | 1019 | 269 | 750 |
| Theano | CorrMM | 1225 | 407 | 818 |
| Caffe | ConvolutionLayer | 1231 | 396 | 835 |
| Torch-7 | SpatialConvolutionMM | 1265 | 418 | 877 |
| DeepCL | ConvolutionLayer | 6280 | 2648 | 3632 |
| cherry-picking**** | best per layer | 235 | 79 | 155 |

This table is NOT updated for the Titan X. The numbers below were measured on a Titan Black and are kept only for informational and legacy purposes.

| Original Library | Class/Function Benchmarked | Total (ms) | forward (ms) | backward (ms) |
| --- | --- | ---: | ---: | ---: |
| Theano (experimental)*** | conv2d_fft | 1178 | 304 | 874 |
| Torch-7 | nn.SpatialConvolutionBHWD | 1892 | 581 | 1311 |
| ccv | ccv_convnet_layer | 809 + bw | 809 | n/a |
| Theano (legacy) | conv2d | 70774 | 3833 | 66941 |
  • * indicates that the library was tested with Torch bindings of the specific kernels.
  • ** indicates that the library was tested with Pylearn2 bindings.
  • *** This is an experimental module which uses FFT to compute convolutions; it uses a lot of memory, according to @benanne.
  • **** The last row shows the results obtainable when choosing the best-performing library for each layer.
  • L1 - Input: 128x128, batch size 128, feature maps 3->96, kernel size 11x11, stride 1x1
  • L2 - Input: 64x64, batch size 128, feature maps 64->128, kernel size 9x9, stride 1x1
  • L3 - Input: 32x32, batch size 128, feature maps 128->128, kernel size 9x9, stride 1x1
  • L4 - Input: 16x16, batch size 128, feature maps 128->128, kernel size 7x7, stride 1x1
  • L5 - Input: 13x13, batch size 128, feature maps 384->384, kernel size 3x3, stride 1x1
  • The table is ranked by the total forward + backward time across layers (L1 + L2 + L3 + L4 + L5); a sketch estimating each layer's arithmetic cost follows this list.
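To put the per-layer timings in perspective, here is a rough Python sketch that estimates the multiply-accumulate (MAC) count of each benchmark layer from the configurations above. Stride is 1x1 as listed, but the padding is not stated, so the 'valid' (no-padding) output sizes below are an assumption.

```python
# Rough multiply-accumulate (MAC) counts for layers L1-L5 above.
# Stride is 1x1 as listed; padding is not stated, so the 'valid'
# (no-padding) output sizes used here are an assumption.
layers = {            # (input size, in channels, out channels, kernel)
    "L1": (128, 3,   96,  11),
    "L2": (64,  64,  128, 9),
    "L3": (32,  128, 128, 9),
    "L4": (16,  128, 128, 7),
    "L5": (13,  384, 384, 3),
}
batch = 128
for name, (size, cin, cout, k) in layers.items():
    out = size - k + 1                      # stride 1, no padding
    macs = batch * cout * out * out * cin * k * k
    print("%s: %6.1f GMACs per forward pass" % (name, macs / 1e9))
```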
Breakdown
forward

Columns L1, L2, L3, L4, L5, Total are times in milliseconds

| Original Library | Class/Function Benchmarked | L1 | L2 | L3 | L4 | L5 | Total |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| fbfft | SpatialConvolutionCuFFT | 57 | 27 | 6 | 2 | 9 | 101 |
| cuda-convnet2 * | ConvLayer | 36 | 113 | 40 | 4 | 8 | 201 |
| cuda-convnet** | pylearn2.cuda_convnet | 38 | 183 | 68 | 7 | 16 | 312 |
| CuDNN R2 | cudnn.SpatialConvolution | 56 | 143 | 53 | 6 | 11 | 269 |
| Theano | CorrMM | 91 | 143 | 121 | 24 | 28 | 407 |
| Caffe | ConvolutionLayer | 93 | 136 | 116 | 24 | 27 | 396 |
| Torch-7 | nn.SpatialConvolutionMM | 94 | 149 | 123 | 24 | 28 | 418 |
| DeepCL | ConvolutionLayer | 738 | 1241 | 518 | 47 | 104 | 2648 |
| cherry-picking**** | best per layer | 36 | 27 | 6 | 2 | 8 | 79 |
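The cherry-picking row is simply the column-wise minimum of this table: for each layer, take the fastest library's time. A minimal sketch reproducing it from the forward numbers above:

```python
# Reproduce the "cherry-picking" forward row: for each layer, take the
# fastest library's time from the forward breakdown table above.
forward = {                          #   L1    L2   L3  L4   L5
    "fbfft":         [57,   27,   6,  2,   9],
    "cuda-convnet2": [36,  113,  40,  4,   8],
    "cuda-convnet":  [38,  183,  68,  7,  16],
    "CuDNN R2":      [56,  143,  53,  6,  11],
    "Theano CorrMM": [91,  143, 121, 24,  28],
    "Caffe":         [93,  136, 116, 24,  27],
    "Torch-7":       [94,  149, 123, 24,  28],
    "DeepCL":        [738, 1241, 518, 47, 104],
}
best = [min(col) for col in zip(*forward.values())]
print(best, sum(best))  # [36, 27, 6, 2, 8] and 79 ms, as in the table
```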
backward (gradInput + gradWeight)

Columns L1, L2, L3, L4, L5, Total are times in milliseconds

| Original Library | Class/Function Benchmarked | L1 | L2 | L3 | L4 | L5 | Total |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| fbfft | SpatialConvolutionCuFFT | 76 | 45 | 12 | 4 | 18 | 155 |
| cuda-convnet2 * | ConvLayer | 103 | 467 | 162 | 15 | 29 | 776 |
| cuda-convnet** | pylearn2.cuda_convnet | 136 | 433 | 147 | 15 | 34 | 765 |
| CuDNN R2 | cudnn.SpatialConvolution | 139 | 401 | 159 | 19 | 32 | 750 |
| Theano | CorrMM | 179 | 405 | 174 | 29 | 31 | 818 |
| Caffe | ConvolutionLayer | 200 | 405 | 172 | 28 | 30 | 835 |
| Torch-7 | nn.SpatialConvolutionMM | 206 | 432 | 178 | 29 | 32 | 877 |
| DeepCL | ConvolutionLayer | 484 | 2144 | 747 | 59 | 198 | 3632 |
| cherry-picking**** | best per layer | 76 | 45 | 12 | 4 | 18 | 155 |
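For readers unfamiliar with the gradInput/gradWeight split: the backward pass computes two things, the gradient with respect to the layer input (propagated to earlier layers) and the gradient with respect to the weights (used for the parameter update). Below is a minimal single-channel, stride-1, no-padding NumPy sketch of both; it only illustrates the split and is not how any of the benchmarked kernels are implemented.

```python
import numpy as np

def conv2d(x, w):
    """Single-channel, stride-1, no-padding cross-correlation."""
    H, W = x.shape
    k = w.shape[0]
    y = np.zeros((H - k + 1, W - k + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            y[i, j] = np.sum(x[i:i+k, j:j+k] * w)
    return y

def conv2d_backward(x, w, dy):
    """The two halves of 'backward': gradInput and gradWeight."""
    k = w.shape[0]
    dw = np.zeros_like(w)   # gradWeight: correlate input patches with dy
    dx = np.zeros_like(x)   # gradInput: scatter dy back through w
    for i in range(dy.shape[0]):
        for j in range(dy.shape[1]):
            dw += dy[i, j] * x[i:i+k, j:j+k]
            dx[i:i+k, j:j+k] += dy[i, j] * w
    return dx, dw

x = np.random.randn(8, 8)
w = np.random.randn(3, 3)
dy = np.ones_like(conv2d(x, w))   # pretend upstream gradient is all ones
dx, dw = conv2d_backward(x, w, dy)
print(dx.shape, dw.shape)          # (8, 8) (3, 3)
```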
Owner

Soumith Chintala