PyContinual (An Easy and Extendible Framework for Continual Learning)

Overview

PyContinual is an easy and extendible framework for continual learning, implementing a set of state-of-the-art methods in a unified training and evaluation pipeline.

Easy to Use

You can simply change the baseline, backbone and task, and you are ready to go. Here is an example:

	# Other backbones (bert, w2v, ...), baselines (classic, ewc, ...),
	# tasks/datasets (dsc, newsgroup, ...) and scenarios (dil_classification, ...)
	# are also available. --idrandom selects which random task sequence to use;
	# --use_predefine_args loads the pre-defined arguments.
	python run.py \
	--bert_model 'bert-base-uncased' \
	--backbone bert_adapter \
	--baseline ctr \
	--task asc \
	--eval_batch_size 128 \
	--train_batch_size 32 \
	--scenario til_classification \
	--idrandom 0 \
	--use_predefine_args

Easy to Extend

You only need to write your own ./dataloader, ./networks and ./approaches, and you are ready to go (see the sketch below for what a new approach might look like).
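
As a rough illustration of the extension interface, here is a minimal PyTorch sketch of a custom approach. The names used here (MyApproach, train, eval) are placeholders for this example only; the actual classes and hooks to implement are defined by the existing code in ./approaches.

	# Hypothetical sketch of a custom approach; the real base class and
	# method names are defined by the code in ./approaches.
	import torch

	class MyApproach:
	    def __init__(self, model, lr=3e-5):
	        self.model = model
	        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)

	    def train(self, task_id, train_loader):
	        # One continual-learning step: train on the current task's data.
	        self.model.train()
	        for inputs, targets in train_loader:
	            self.optimizer.zero_grad()
	            loss = torch.nn.functional.cross_entropy(self.model(inputs), targets)
	            # A regularization-based baseline (e.g. EWC) would add a
	            # penalty term here to protect earlier tasks.
	            loss.backward()
	            self.optimizer.step()

	    def eval(self, task_id, test_loader):
	        # Report accuracy on a given (possibly earlier) task.
	        self.model.eval()
	        correct = total = 0
	        with torch.no_grad():
	            for inputs, targets in test_loader:
	                preds = self.model(inputs).argmax(dim=1)
	                correct += (preds == targets).sum().item()
	                total += targets.size(0)
	        return correct / total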

Introduction

Continual learning has drawn increasing attention in recent years. This repo contains PyTorch implementations of a set of (improved) state-of-the-art methods that share the same training and evaluation pipeline.

This repository contains the code for the papers listed in the Reference section below.

Features

  • Datasets: It currently supports Language Datasets (Document/Sentence/Aspect Sentiment Classification, Natural Language Inference, Topic Classification) and Image Datasets (CelebA, CIFAR10, CIFAR100, FashionMNIST, F-EMNIST, MNIST, VLCS)
  • Scenarios: It currently supports Task Incremental Learning and Domain Incremental Learning (the sketch after this list illustrates the difference)
  • Training Modes: It currently supports single-GPU training, and can be adapted for multi-node distributed training and mixed-precision training
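
The difference between the two scenarios, in brief: in task-incremental learning (TIL) the task identity is available at test time, so each task can own a separate output head; in domain-incremental learning (DIL) it is not, so all tasks share one head and label space. A minimal sketch (illustrative only; the repo's real models live in ./networks):

	import torch.nn as nn

	class TILModel(nn.Module):
	    """Task-incremental: one head per task, task id given at test time."""
	    def __init__(self, encoder, hidden_dim, n_classes, n_tasks):
	        super().__init__()
	        self.encoder = encoder
	        self.heads = nn.ModuleList(
	            nn.Linear(hidden_dim, n_classes) for _ in range(n_tasks))

	    def forward(self, x, task_id):
	        return self.heads[task_id](self.encoder(x))

	class DILModel(nn.Module):
	    """Domain-incremental: one shared head, no task id at test time."""
	    def __init__(self, encoder, hidden_dim, n_classes):
	        super().__init__()
	        self.encoder = encoder
	        self.head = nn.Linear(hidden_dim, n_classes)

	    def forward(self, x):
	        return self.head(self.encoder(x))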

Architecture

./res: all results are saved in this folder
./dat: processed data
./data: raw data
./dataloader: dataloaders for the different datasets
./approaches: code for training
./networks: code for the network architectures
./data_seq: some reference task sequences (e.g. asc_random)
./tools: code for preparing the data

Setup

  • If you want to run the existing systems, please see run_exist.md
  • If you want to expand the framework with your own model, please see run_own.md
  • If you want to see the full list of baselines and variants, please see baselines.md

Reference

If using this code, parts of it, or developments from it, please consider citing the references below.

@inproceedings{ke2021achieve,
  title={Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning},
  author={Ke, Zixuan and Liu, Bing and Ma, Nianzu and Xu, Hu and Shu, Lei},
  booktitle={NeurIPS},
  year={2021}
}

@inproceedings{ke2021contrast,
  title={CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Liu, Bing and Xu, Hu and Shu, Lei},
  booktitle={EMNLP},
  year={2021}
}

@inproceedings{ke2021adapting,
  title={Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Xu, Hu and Liu, Bing},
  booktitle={NAACL},
  pages={4746--4755},
  year={2021}
}

@inproceedings{ke2020continualmixed,
  title={Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks},
  author={Ke, Zixuan and Liu, Bing and Huang, Xingchang},
  booktitle={NeurIPS},
  volume={33},
  year={2020}
}

@inproceedings{ke2020continual,
  title={Continual Learning with Knowledge Transfer for Sentiment Classification},
  author={Ke, Zixuan and Liu, Bing and Wang, Hao and Shu, Lei},
  booktitle={ECML-PKDD},
  year={2020}
}

Contact

Please drop an email to Zixuan Ke, Xingchang Huang or Nianzu Ma if you have any questions regarding the code. We thank Bing Liu, Hu Xu and Lei Shu for their valuable comments and opinions.
