
Overview

What is being transferred in transfer learning?

This repo contains the code for the following paper:

Behnam Neyshabur*, Hanie Sedghi*, Chiyuan Zhang*. What is being transferred in transfer learning? Advances in Neural Information Processing Systems (NeurIPS), 2020. (*equal contribution)

Disclaimer: this is not an officially supported Google product.

Setup

Library dependencies

This code has the following dependencies:

  • pytorch (1.4.0 is tested)
  • gin-config
  • tqdm
  • wget (the python package)
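These can be installed with pip; a minimal example (PyPI package names; pin versions to match your environment):

pip install torch==1.4.0 gin-config tqdm wget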

GPUs are needed to run most of the experiments.

Data

CheXpert data (the train and valid folders) needs to be placed in /mnt/data/CheXpert-v1.0-img224. If your data is in a different place, you can specify the data.image_path parameter (see configs/p100_chexpert.py). To reduce the data pre-processing burden, we pre-resized all the CheXpert images with the following script:

'" ../$NEWDIR/{} cd .. ">
#!/bin/bash

NEWDIR=CheXpert-v1.0-img224
mkdir -p $NEWDIR/{train,valid}

cd CheXpert-v1.0

echo "Prepare directory structure..."
find . -type d | parallel mkdir -p ../$NEWDIR/{}

echo "Resize all images to have at least 224 pixels on each side..."
find . -name "*.jpg" | parallel convert {} -resize "'224^>'" ../$NEWDIR/{}

cd ..

The DomainNet data will be automatically downloaded from the Internet upon first run. By default, it will download to /mnt/data, which can be changed with the data_dir config (see configs/p100_domain_net.py).
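Both locations can also be set from a config spec (see Defining new configs below); a minimal sketch, assuming hypothetical local paths and that the gin keys match the parameter names above:

spec['gin']['data.image_path'] = '/data/CheXpert-v1.0-img224'  # hypothetical path
spec['gin']['data_dir'] = '/data'  # hypothetical path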

Common Experiments

Training jobs

CheXpert training from random init. We use 2 Nvidia V100 GPUs for CheXpert training. If you run into out-of-memory errors, try reducing the batch size.

CUDA_VISIBLE_DEVICES=0,1 python chexpert_train.py -k train/chexpert/fixup_resnet50_nzfc/randinit-lr0.1-bs256

CheXpert finetuning from ImageNet pre-trained checkpoint. The code tries to load the ImageNet pre-trained checkpoint from /mnt/data/logs/imagenet-lr01/ckpt-E090.pth.tar. Alternatively, you can customize the checkpoint path (see configs/p100_chexpert.py).

CUDA_VISIBLE_DEVICES=0,1 python chexpert_train.py -k train/chexpert/fixup_resnet50_nzfc/finetune-lr0.02-bs256

Similarly, DomainNet training can be executed using the script imagenet_train.py (replace real with clipart or quickdraw to run on other domains).

# randinit
CUDA_VISIBLE_DEVICES=0 python imagenet_train.py -k train/DomainNet_real/fixup_resnet50_nzfc/randinit-lr0.1-MstepLR

# finetune
CUDA_VISIBLE_DEVICES=0 python imagenet_train.py -k train/DomainNet_real/fixup_resnet50_nzfc/finetune-lr0.02-MstepLR

Training with shuffled blocks

The training jobs with block-shuffled images are defined in configs/p200_pix_shuffle.py. Run

python -m configs pix_shuffle

to see the keys of all the training jobs with pixel shuffling. Similarly,

python -m configs blk7_shuffle

lists all the jobs with 7x7 block-shuffled images. You can run any of those jobs using the -k command line argument. For example:

CUDA_VISIBLE_DEVICES=0 python imagenet_train.py \
    -k blk7_shuffle/DomainNet_quickdraw/fixup_resnet50_nzfc_noaug/randinit-lr0.1-MstepLR/seed0

Finetuning from different pre-training checkpoints

The config file configs/p200_finetune_ckpt.py defines training jobs that finetune from different ImageNet pre-training checkpoints along the pre-training optimization trajectory.
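For example, to list these jobs (assuming their keys contain finetune_ckpt; adjust the regex to match the actual key names):

python -m configs finetune_ckpt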

Linear interpolation between checkpoints (performance barrier)

The script ckpt_interpolation.py performs the experiment of linearly interpolating between different solutions. The file is self-contained; edit it directly to specify which combinations of checkpoints to use. The command-line arguments -a compute and -a plot switch between running the computation and plotting the computed results.
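For example:

# compute the interpolation results
python ckpt_interpolation.py -a compute

# plot the computed results
python ckpt_interpolation.py -a plot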

General Documentation

This codebase uses gin-config to customize the behavior of the program, and allows us to easily generate a large number of similar configurations with Python loops. This is especially useful for hyper-parameter sweeps.
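As a minimal sketch of why this is convenient, a config file can register a whole sweep in a few lines (the register function and the gin parameter name here are hypothetical; the real registration mechanism lives in the configs package):

def register_lr_sweep():
  # Register one config key per learning rate; keys follow the
  # train/<dataset>/<model>/<tag> pattern used throughout this README.
  for lr in (0.1, 0.02):
    spec = {'gin': {}}
    spec['gin']['train.lr'] = lr  # hypothetical gin parameter name
    register(f'train/cifar10/fixup_resnet50/randinit-lr{lr}-MstepLR', spec)  # hypothetical helper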

Running a job

A script takes a config key on the command line and pulls the detailed configuration for that key from the pre-defined configs. For example:

python3 imagenet_train.py -k train/cifar10/fixup_resnet50/finetune-lr0.02-MstepLR

Query pre-defined configs

You can list all the pre-defined config keys matching a given regex with the following command:

python3 -m configs <regex>

For example:

$ python3 -m configs cifar10
2 configs found ====== with regex: cifar10
    0) train/cifar10/fixup_resnet50/randinit-lr0.1-MstepLR
    1) train/cifar10/fixup_resnet50/finetune-lr0.02-MstepLR

Defining new configs

All the configs are in the directory configs, with the naming convention pXXX_YYY.py, where XXX are digits that impose an ordering between config files (so later configs can reference and extend previously defined ones).

To add a new config file:

  1. create a pXXX_YYY.py file.
  2. edit __init__.py to import this file.
  3. in the newly added file, define functions that register new configs (see the sketch below). All functions named register_blah will be called automatically.
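A minimal sketch of such a file, e.g. configs/p300_my_exp.py (the file name, the add_config helper, and the gin key below are hypothetical; mirror an existing pXXX_YYY.py for the real registration calls):

def register_my_exp():
  # Called automatically because the function name starts with register_.
  spec = {'gin': {}}
  spec['gin']['config_name.arg1'] = 'some value'  # hypothetical binding
  add_config('train/my_dataset/fixup_resnet50/randinit-lr0.1-MstepLR', spec)  # hypothetical helper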

Customizing new functions

To customize the behavior of a new function, make that function gin-configurable:

import gin

@gin.configurable('config_name')
def my_func(arg1=gin.REQUIRED, arg2=0):
  # arg1 must be supplied via a gin binding; arg2 defaults to 0.
  return arg1, arg2

Then, in the pre-defined config files, you can specify its argument values:

spec['gin']['config_name.arg1'] = ...  # any Python object
spec['gin']['config_name.arg2'] = 2
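These spec entries presumably translate into standard gin bindings. For reference, the equivalent in gin's native API (a sketch using gin.parse_config, which accepts a list of binding strings):

import gin

gin.parse_config([
    "config_name.arg1 = 'some value'",
    'config_name.arg2 = 2',
])
my_func()  # now runs with arg1='some value' and arg2=2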

See the gin-config documentation for more details.
