A lightweight library to compare different PyTorch implementations of the same network architecture.

Overview

TorchBug is a lightweight library designed to compare two PyTorch implementations of the same network architecture. It lets you count and compare the leaf modules (i.e., lowest-level PyTorch modules, such as torch.nn.Conv2d) present in both the target model and the new model. Leaf modules are distinguished by their attributes, so an instance of Conv2d with a kernel_size of 3 and a stride of 1 is counted separately from a Conv2d with a kernel_size of 3 but a stride of 2.
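
To illustrate the idea, here is a minimal sketch of grouping leaf modules by type and attributes. The leaf_signature and summarize helpers are invented for illustration and are not TorchBug's actual API:

from collections import Counter

import torch.nn as nn

def leaf_signature(module: nn.Module) -> tuple:
    # Hashable signature: the module type plus its simple public attributes.
    attrs = {k: v for k, v in vars(module).items()
             if not k.startswith("_") and isinstance(v, (bool, int, float, str, tuple))}
    return (type(module).__name__, tuple(sorted(attrs.items())))

def summarize(model: nn.Module) -> Counter:
    # Leaf modules are the modules with no children.
    return Counter(leaf_signature(m) for m in model.modules()
                   if not any(m.children()))

model = nn.Sequential(nn.Conv2d(3, 6, 3), nn.ReLU(), nn.Conv2d(6, 6, 3, stride=2))
for signature, count in summarize(model).items():
    print(count, "x", signature)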

Further, when the leaf modules match, the library lets you initialize both models equivalently, by seeding each leaf module's weights with a seed derived from the hash of its attributes. TorchBug then lets you pass the same input through both models and compare their outputs, or the outputs of intermediate leaf modules, to help find where the new model's implementation deviates from the target model.
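
A hedged sketch of how such attribute-derived seeding might work, reusing the hypothetical leaf_signature helper from above (the real library's scheme may differ):

import hashlib

import torch
import torch.nn as nn

def seed_from_signature(signature: tuple) -> int:
    # Python's built-in hash() is salted per process, so derive a stable
    # 32-bit seed from a cryptographic digest instead.
    digest = hashlib.sha256(repr(signature).encode()).digest()
    return int.from_bytes(digest[:4], "little")

def init_equivalently(model: nn.Module) -> None:
    # Re-seed every leaf module's parameters from the hash of its attributes,
    # so two models with matching leaf modules get identical weights.
    for module in model.modules():
        if any(module.children()):
            continue
        gen = torch.Generator().manual_seed(seed_from_signature(leaf_signature(module)))
        with torch.no_grad():
            for param in module.parameters(recurse=False):
                param.copy_(torch.randn(param.shape, generator=gen))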

Setup | Usage | Docs | Examples

Setup

To install, simply clone the repository, cd into the TorchBug folder, and run the following command:

pip install .

Usage

To get started, check out demo.py.

Docs

All functions have docstrings. Refer to compare.py and model_summary.py for the main functions.

Examples

Summary of a model

Each row in the tables indicates a specific module type, along with a combination of its attributes, as shown in the columns.

  • The second row in the second table indicates, for example, that there are two instances of Conv2d with 6 in_channels and 6 out_channels in the Target Model. Each of these modules has 330 parameters.
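  • As a quick sanity check (assuming the default 3×3 kernel with bias): such a Conv2d has 6 × 6 × 3 × 3 = 324 weights plus 6 biases, giving 330 parameters.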

[Image: Summary of a model]

Comparison of leaf modules

TorchBug lets you compare the leaf modules present in both models and shows you the modules that are missing from, or extraneous in, either one.

[Image: Comparison of leaf modules]
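
Given per-signature counts like those from the summarize sketch above, the comparison itself can be as simple as counter subtraction (target_model and new_model are placeholders for your two implementations):

from collections import Counter

def compare_leaf_modules(target_counts: Counter, new_counts: Counter) -> tuple:
    missing = target_counts - new_counts      # present only in the target model
    extraneous = new_counts - target_counts   # present only in the new model
    return missing, extraneous

# missing, extraneous = compare_leaf_modules(summarize(target_model),
#                                            summarize(new_model))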

Comparison of leaf modules invoked in the forward pass

Comparing the leaf modules invoked in the forward pass ensures that the registered leaf modules are actually consumed in the forward function of each model.

[Image: Comparison of leaf modules]
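
One plausible way to implement this check is with standard forward hooks (a sketch, not necessarily TorchBug's mechanism):

import torch
import torch.nn as nn

def invoked_leaf_names(model: nn.Module, example_input: torch.Tensor) -> set:
    # Hook every leaf module and record which ones fire during forward().
    invoked, handles = set(), []
    for name, module in model.named_modules():
        if any(module.children()):
            continue
        handles.append(module.register_forward_hook(
            lambda mod, inp, out, name=name: invoked.add(name)))
    model(example_input)
    for handle in handles:
        handle.remove()
    return invoked

# Registered leaves that never fire are likely dead code:
# registered = {n for n, m in model.named_modules() if not any(m.children())}
# unused = registered - invoked_leaf_names(model, x)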

Comparison of outputs of all leaf modules

After initializing the Target and New models equivalently and passing the same data through both, the outputs of intermediate leaf modules of the same types and attributes are compared by brute force.

  • The second row in the first table indicates, for example, that there are two instances of Conv2d with 6 in_channels and 6 out_channels in both the models, and their outputs match.

[Image: Module-wise comparison of models]
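
A sketch of this brute-force matching, assuming each leaf module returns a single Tensor and reusing the hypothetical leaf_signature helper from the Overview:

import torch
import torch.nn as nn

def capture_leaf_outputs(model: nn.Module, x: torch.Tensor) -> list:
    # Record (signature, output) for every leaf module during one forward pass.
    outputs, handles = [], []
    for module in model.modules():
        if any(module.children()):
            continue
        handles.append(module.register_forward_hook(
            lambda mod, inp, out: outputs.append((leaf_signature(mod), out.detach()))))
    model(x)
    for handle in handles:
        handle.remove()
    return outputs

def match_outputs(target_outs: list, new_outs: list) -> None:
    # Brute force: each target output should match some output of the same
    # signature in the new model.
    for sig, t_out in target_outs:
        matched = any(sig == n_sig and t_out.shape == n_out.shape
                      and torch.allclose(t_out, n_out)
                      for n_sig, n_out in new_outs)
        print("match" if matched else "MISMATCH:", sig[0])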

Comparison of outputs of specific leaf modules only

TorchBug lets you mark specific leaf modules in the models, with names, and shows you whether the outputs of these marked modules match.

[Image: Comparison of outputs of marked modules]

  • In the above example, a convolution and two linear layers in the New Model were marked with names "Second Convolution", "First Linear Layer", and "Second Linear Layer".
  • A convolution in the Target Model was marked with name "Second Convolution".
  • All the other leaf modules in the Target Model were marked using a convenience function, which set the names to a string describing the module.
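
A minimal sketch of what marking could look like; the mark helper, the attribute it sets, and the module paths are all invented for illustration:

import torch.nn as nn

def mark(module: nn.Module, name: str) -> None:
    # Tag a module so hooks can key its captured output by this name, letting
    # identically named marks in the two models be compared directly.
    module._torchbug_name = name

# Hypothetical module paths into the two models:
# mark(new_model.conv2, "Second Convolution")
# mark(new_model.fc1, "First Linear Layer")
# mark(new_model.fc2, "Second Linear Layer")
# mark(target_model.conv2, "Second Convolution")
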
Owner
Arjun Krishnakumar
Research Assistant (HiWi) | Master's in Computer Science @ University of Freiburg