Reverse engineer your PyTorch vision models, in style

🔍 Rover

Reverse engineer your CNNs, in style

Rover will help you break down your CNN and visualize the features from within the model. No need to write weirdly abstract code to visualize your model's features anymore.

💻 Usage

git clone https://github.com/Mayukhdeb/rover.git; cd rover

install requirements:

pip install -r requirements.txt

write a script (say your_script.py) containing:

from rover import core
from rover.default_models import models_dict

core.run(models_dict = models_dict)

and then run the script with streamlit as:

$ streamlit run your_script.py

if everything goes right, you'll see something like:

You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501

🧙 Custom models

rover supports pretty much any PyTorch model that takes an input of shape [N, 3, H, W] with ImageNet normalization (even segmentation models, VAEs, and all that fancy stuff).

import torchvision.models as models
from rover import core

model = models.resnet34(pretrained=True)  ## or any other model (need not be from torchvision.models)

models_dict = {
    'my model': model,  ## add in any number of models :)
}

core.run(
    models_dict = models_dict
)

🖼️ Channel objective

Optimizes a single channel from one of the selected layers.

  • layer index: specifies which of the selected layers to use.
  • channel index: specifies the exact channel within that layer to visualize.
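
If you prefer to see this as code: under the custom_func convention described in the next section, a channel objective roughly corresponds to a sketch like this (layer index 0 and channel index 20 are arbitrary example values):

def custom_func(layer_outputs):
    ## pick the selected layer (layer index) and one channel within it (channel index)
    loss = layer_outputs[0][20].mean()
    return -loss  ## negative, since the optimizer minimizes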

🧙‍♂️ Writing your own objective

This is for the smarties who like to write their own objective function. The only constraint is that the function should be named custom_func.

Here's an example:

def custom_func(layer_outputs):
    '''
    layer_outputs is a list containing 
    the outputs (torch.tensor) of each layer you selected

    In this example we'll try to optimize the following:
    * the entire first layer -> layer_outputs[0].mean()
    * 20th channel of the 2nd layer -> layer_outputs[1][20].mean()
    '''
    loss = layer_outputs[0].mean() + layer_outputs[1][20].mean()
    return -loss
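
You can combine and weight terms however you like. Here's another small sketch in the same convention (the indices and the 0.5 weight are arbitrary example values):

def custom_func(layer_outputs):
    ## maximize the mean activation of the 1st selected layer
    ## while suppressing channel 7 of the 2nd selected layer
    loss = layer_outputs[0].mean() - 0.5 * layer_outputs[1][7].mean()
    return -loss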

Running on Google Colab

Check out this notebook. I'll also include the instructions here just in case.

Clone the repo + install dependencies

!git clone https://github.com/Mayukhdeb/rover.git
!pip install torch-dreams --quiet
!pip install streamlit --quiet

Navigate into the repo

import os 
os.chdir('rover')

Write your code into a script from a cell. Here I wrote it into test.py:

%%writefile test.py

from rover import core
from rover.default_models import models_dict

core.run(models_dict = models_dict)

Run the script on a thread

import threading

proc = threading.Thread(target=os.system, args=('streamlit run test.py',))
proc.start()

Download ngrok:

!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip -o ngrok-stable-linux-amd64.zip

Start the ngrok tunnel to streamlit's port (8501)

get_ipython().system_raw('./ngrok http 8501 &')

Get the public URL where rover is hosted (this queries ngrok's local API and prints the tunnel's public URL)

!curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"

💻 Args

  • width (int, optional): Width of the image to be optimized
  • height (int, optional): Height of the image to be optimized
  • iters (int, optional): Number of iterations; higher -> stronger visualization
  • lr (float, optional): Learning rate
  • rotate (deg) (int, optional): Maximum rotation in the default transforms
  • scale max (float, optional): Maximum image scaling factor
  • scale min (float, optional): Minimum image scaling factor
  • translate (x) (float, optional): Maximum translation factor in the x direction
  • translate (y) (float, optional): Maximum translation factor in the y direction
  • weight decay (float, optional): Weight decay for the default optimizer; helps prevent high-frequency noise
  • gradient clip (float, optional): Maximum norm of the gradient
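
To give these knobs some grounding, here's a minimal sketch of the kind of loop they control: plain gradient ascent on an input image in pure PyTorch. This is not rover's actual implementation (rover builds on torch-dreams, and the transforms controlled by rotate/scale/translate are omitted), just an illustration of what width, height, iters, lr, weight decay, and gradient clip map to; the hook and variable names are assumptions for the sketch:

import torch
import torchvision.models as models

model = models.resnet34(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

width, height = 256, 256   ## "width" / "height"
iters, lr = 150, 9e-3      ## "iters" / "lr"
weight_decay = 1e-2        ## "weight decay": regularizes the image, damping high-frequency noise
grad_clip = 1.0            ## "gradient clip": maximum norm of the image gradient

image = torch.rand(1, 3, height, width, requires_grad=True)
optimizer = torch.optim.AdamW([image], lr=lr, weight_decay=weight_decay)

## grab the activations of one layer with a forward hook
activations = {}
model.layer3.register_forward_hook(lambda m, i, o: activations.update(target=o))

for _ in range(iters):
    optimizer.zero_grad()
    model(image)
    loss = -activations['target'].mean()  ## maximize the layer's mean activation
    loss.backward()
    torch.nn.utils.clip_grad_norm_([image], grad_clip)
    optimizer.step()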

Run locally

Clone the repo

git clone https://github.com/Mayukhdeb/rover.git

install requirements

pip install -r requirements.txt

showtime

streamlit run test.py