Overview

classy logo

A PyTorch-based library for fast prototyping and sharing of deep neural network models.


Python PyPI PyTorch Lightning Config: hydra Code style: black Codecov


Getting Started using classy

If this is your first time meeting classy, don't worry! We have plenty of resources to help you learn how it works and what it can do for you.

For starters, have a look at our amazing website and our documentation!

If you want to get your hands dirty right away, have a look at our base classy template. Also, we have a few examples that you can look at to get to know classy!

Installation

For a more in-depth installation guide (which also covers installing from source and via Docker), please visit our installation page.

If you are using one of our templates, there is a handy setup.sh script that will create the environment and install classy for you.

Installing via pip

Setting up a virtual environment

We strongly recommend using Conda as the environment manager when dealing with deep learning / data science / machine learning. We also recommend installing the PyTorch ecosystem before installing classy, following the instructions on pytorch.org.
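
If you go that route, the snippet below is a minimal sketch of installing PyTorch with Conda; the exact command depends on your platform and CUDA version, so prefer the selector on pytorch.org:

    # minimal sketch -- pick the exact command from pytorch.org for your OS/CUDA setup
    conda install pytorch torchvision -c pytorch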

If you already have a Python 3 environment you want to use, you can skip ahead to installing the library and dependencies.

  1. Download and install Conda.

  2. Create a Conda environment with Python 3.7-3.9:

    conda create -n classy python=3.7
  3. Activate the Conda environment:

    conda activate classy

Installing the library and dependencies

Simply execute

pip install classy-core

and voilà! You're all set.

Looking for some adventure? Install nightly releases directly from PyPI! You will not regret it :)
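
If nightly builds are published as pre-releases on PyPI, pip can pick them up with the --pre flag (a sketch, assuming nightlies are tagged as pre-release versions; check the project's PyPI page for the exact versioning scheme):

    # opt into pre-release (nightly) versions of classy-core
    pip install --pre classy-core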

Running classy

Once installed, classy is available as a command line tool. It offers a wide variety of subcommands, all listed below. Detailed guides and references for each command are available in the documentation. Every one of classy's subcommands has a -h|--help flag that details the various arguments & options you can use (e.g., classy train -h).

classy train

In its simplest form, classy train lets you train a transformer-based neural network for one of the tasks supported by classy (see the documentation).

classy train sentence-pair path/to/dataset/folder-or-file -n my-model

The command above will train a model to predict a label given a pair of sentences as input (e.g., Natural Language Inference, or NLI) and save it under experiments/my-model. The same model can then be used by all other classy commands that require a trained model (predict, evaluate, serve, demo, upload).
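
For instance, a minimal end-to-end sketch (paths are illustrative, and the evaluate/demo invocations are covered in their own sections below):

    # train a sentence-pair model; the run is saved under experiments/my-model
    classy train sentence-pair path/to/dataset/folder-or-file -n my-model

    # reuse the trained model by name with the other commands
    classy evaluate my-model path/to/file -o path/to/output/file
    classy demo my-model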

classy predict

classy predict actually has two subcommands: interactive and file.

The first loads the model in memory and lets you try it out through the shell directly, so that you can test the model you trained and see what it predicts given some input. It is particularly useful when your machine cannot open a port for classy demo.

The second works on a file and produces an output that associates each input with its predicted label. It is very useful for pre-processing or when you need to evaluate your model (although we offer classy evaluate for that).
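
A hedged sketch of both subcommands (the exact argument order and the output flag may differ slightly across versions; check classy predict -h):

    # interactive: type inputs in the shell and inspect the predictions
    classy predict interactive my-model

    # file: label every input in a file and write the predictions to disk
    classy predict file my-model path/to/input/file -o path/to/output/file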

classy evaluate

classy evaluate lets you evaluate your model on standard metrics for the task it was trained on. Simply run classy evaluate my-model path/to/file -o path/to/output/file and it will dump the evaluation at path/to/output/file.
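
For example (paths are illustrative):

    classy evaluate my-model path/to/file -o path/to/output/file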

classy serve

classy serve loads the model in memory and spawns a REST API you can use to query your model with any REST client.
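
A sketch of a typical round trip; the port, endpoint path, and payload shape in the curl call are illustrative assumptions, so check the API documentation exposed by the running server for the actual schema:

    # spawn the REST API for a trained model (see classy serve -h for port options)
    classy serve my-model

    # query it with any REST client, e.g. curl (endpoint and payload are assumptions)
    curl -X POST http://localhost:8000/ \
         -H "Content-Type: application/json" \
         -d '[{"sentence1": "A man is playing a guitar.", "sentence2": "Someone is making music."}]'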

classy demo

classy demo spawns a Streamlit interface which lets you quickly show and query your model.

classy describe

classy describe --dataset path/to/dataset runs some common metrics on a file formatted for the specific task. Great tool to run before training your model!
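
For example (a sketch; depending on the classy version you may also need to pass the task name, e.g. classy describe sentence-pair --dataset ...):

    classy describe --dataset path/to/dataset/folder-or-file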

classy upload

classy upload lets you upload your classy-trained model to the Hugging Face Hub so that other users can download and use it. (NOTE: you need a Hugging Face Hub account in order to upload to the hub.)

Models uploaded via classy upload will be available for download by other classy users by simply executing classy download <username>@<model-name>.

classy download

classy download downloads a previously uploaded classy-trained model from the Hugging Face Hub and stores it on your machine so that it is usable with any other classy command that requires a trained model (predict, evaluate, serve, demo, upload).

You can find SunglassesAI's list of pre-trained models here.

Models uploaded via classy upload are available by running classy download <username>@<model-name>.

Enabling Shell Completion

To install shell completion, activate your conda environment and then execute

classy --install-autocomplete

From now on, whenever you activate your conda environment with classy installed, you are going to have autocompletion when pressing [TAB]!

Issues

You are more than welcome to file issues with feature requests, bug reports, or general questions. If you already found a solution to your problem, don't hesitate to share it. Suggestions for new best practices and tricks are always welcome!

Contributions

We warmly welcome contributions from the community. If it is your first time as a contributor, we recommend you start by reading our CONTRIBUTING.md guide.

Small contributions can be made directly in a pull request. For major features, we recommend that you first create an issue proposing a design, so that it can be discussed before you risk wasting time.

Pull requests (PRs) must have one approving review and no requested changes before they are merged. As classy is primarily driven by SunglassesAI, we reserve the right to reject or revert contributions that we don't think are good additions or might not fit into our roadmap.

Comments
  • Documentation overhaul for release


    We need to restructure the documentation.

    • Intro (feature overview: supported tasks and commands)
    • Getting Started (big classy tutorial):
      • Installation (maybe move before "getting started"? right after Intro)
      • Basic (everything that does not require touching code):
        • Intro (explanation of raw dataset and end goal)
        • Formatting the data (convert data to tsv / jsonl)
        • Choosing a profile (high level - the profile list goes in the reference manual)
        • Train (quick overview of how to train from the CLI with classy train - no frills)
        • Inference (quick overview of inference commands + ref to CLI sections)
      • Customizing things
        • Template (we introduce the template as the go-to way to work with classy)
        • Changing profile
        • Custom data format
        • Custom model
        • [...] all other customizations
    • Reference manual:
      • CLI (detailed description) of classy commands
        • train
        • predict
        • other inference commands (evaluate, serve, demo)
        • up/download
        • describe (maybe as first?)
      • Tasks and input formats (joined together, for each task we show both formats - currently they're separated)
      • Profiles
      • Mixins? (what should we say about them?)
    documentation priority: high 
    opened by Valahaar 5
  • How to fine-tune the existing model


    Hi, I'm always frustrated when fine-tuning from a checkpoint. There is no access to it, let alone a way to continue training the SOTA model with a different loss or to use it as a submodule. I would be very grateful if you could provide this functionality.

    opened by yyq63728198 4
  • Boundaries of adjacent entities


    Hello, Using the current token annotation format, I can't set the boundaries of the following entities (they all have the same label):

    name → personal_data, address → personal_data, id card number → personal_data

    ...in the following example please tell me your name address and identity card number

    The resulting labeled sentence would be O O O O PERSONAL_DATA PERSONAL_DATA O PERSONAL_DATA PERSONAL_DATA PERSONAL_DATA

    With this format, it is impossible to know if name and address are one or two different entities. Did you consider this issue?

    Thanks!

    opened by JuanFF 2
  • Cannot modify from command line a parameter introduced in a custom Profile


    Describe the bug If you start a training run while overriding from the command line a parameter that was introduced in a custom profile (i.e., one that was not in the default profile for the task you are doing), the process crashes.

    To Reproduce Create a custom model (CustomModel) with a custom parameter (custom_parameter). Then, create your own profile for the task (custom_profile). The model section of your profile will be something like this:

    model:
      _target_: my_project.my_models.CustomModel
      transformer_model: bert-base-cased
      custom_parameter: custom_value
    

    Then run from the command line:

    classy train my_task my_data -n my_model --profile custom_profile -c model.custom_parameter=new_value
    

    Expected behaviour The model should start with the model.custom_parameter set to the new value (new_value).

    Actual behaviour The training process crashes with the following hydra error:

    Could not override 'model.custom_parameter'.
    To append to your config use +model.custom_parameter=new_value
    Key 'custom_parameter' is not in struct
        full_key: model.custom_parameter
        object_type=dict
    

    Desktop (please complete the following information):

    • OS: Ubuntu Server 20.04
    • PyTorch Version: 1.9.0
    • Classy Version: 0.1.0
    bug priority: high 
    opened by edobobo 1
  • classy export / import


    Is your feature request related to a problem? Please describe. How do we easily move specific runs across machines (e.g., train on a docker instance, run a demo on local machine)? At the moment, this has to be done manually, which is painful considering that we have to include everything under a certain folder and then we need to recreate the same structure in the target machine.

    Describe the solution you'd like classy export <experiment> should produce a zip/tgz containing everything necessary to move experiments (or single runs) across machines. We sort of do this already with classy upload / download, but just rely on the hub. Instead of uploading, we can just create the zip in the local folder (or a path specified by the user).

    Then, we should be able to run classy import <classy-exported-file.zip> to find our experiment in our local folder (better yet, every classy inference command could work with the export! :) )

    enhancement 
    opened by Valahaar 0
  • Profiles must have yaml extension


    Custom profiles are only recognized if their extension is yaml (and the error message was completely unrelated and impossible to debug easily).

    We should allow profiles to end in yml as well :)

    bug 
    opened by Valahaar 0
  • Fixed resume from checkpoint


    Fixes the bug: the trainer parameters were not loaded correctly when the model was loaded from a checkpoint. E.g. val_check_interval was not set, so the validation callbacks were not called correctly.

    opened by andreim14 0
  • Resume bug


    Describe the bug The trainer parameters were not loaded correctly when the model was loaded from a checkpoint. E.g. val_check_interval was not set, so the validation callbacks were not called correctly.

    To Reproduce Steps to reproduce the behaviour:

    1. Train a model for x steps
    2. Stop the training
    3. Restore the training from the checkpoint using the resume_from parameter

    Expected behaviour The trainer parameters should be preserved. E.g if val_check_interval is set, the validation callbacks should be called accordingly.

    Actual behaviour The validation callbacks are not called and val_check_interval is set to 0.

    Desktop (please complete the following information):

    • OS: Ubuntu 20.04
    • PyTorch Version: 1.11
    bug 
    opened by andreim14 0
  • Token offsets computation fails when input is truncated


    Describe the bug Title. In classy/data/dataset/hf/classification.py#L89 we invoke self.tokenize (#L109) which correctly truncates the input.

    The issue arises from tuple(tok_encoding.word_to_tokens(wi)) for wi in range(len(tokens)): when a token is not included in the input due to truncation, word_to_tokens returns None, and tuple(None) raises a TypeError. This triggers the catch condition and makes the function return None, which cannot be unpacked in input_ids, token_offsets = self.tokenize(token_sample.tokens), resulting in another unhandled exception that finally crashes classy.

    To Reproduce In the token classification setting, input a sentence that has too many tokens (or reduce truncation to obtain the same effect).

    Expected behaviour I think there is a way to know how many of the original tokens were kept, so we can iterate over those instead of range(len(tokens)); otherwise, we can simply stop iterating as soon as word_to_tokens(wi) returns None. Comments?

    bug 
    opened by Valahaar 0
  • Allow users to choose which evaluation keys to log to progress bar


    Should be pretty easy to implement: we could add two methods / properties / classmethods to Evaluation, one for a white list of keys to log and one for a black list, with the white list taking precedence (so if the user specifies both, we first take the subset in the white list and then remove from it the keys in the black list). By default, everything is logged to the progress bar (as it is now).

    opened by Valahaar 0
  • Support pre-trained Transformers models at inference time


    Is your feature request related to a problem? Please describe. Huggingface Transformers has several task-pretrained models in its hub (e.g. facebook/bart-large-xsum). classy automatically supports using them at training time for further fine-tuning, but doesn't support using them directly at inference time. The current way to go about it is essentially weight porting.

    Rather, classy should support something like classy demo ... facebook/bart-large-xsum ...

    Describe the solution you'd like Personally, I'd like a solution that's symmetric with classy train. So, if you train bart with classy train generation data/xsum --profile bart-large -c transformer_model=facebook/bart-large-xsum, you could do inference tasks such as classy demo with something like classy demo "generation data/xsum --profile bart-large -c transformer_model=facebook/bart-large-xsum".

    I am unsure whether this would be friendly (same syntax) or unfriendly (overkill syntax), and suggestions/help are welcome.

    enhancement priority: low 
    opened by poccio 0
Releases(v0.3.2)
  • v0.3.2(Sep 8, 2022)

  • v0.3.1(Apr 25, 2022)

    What's Changed

    • README update to version 0.3.0 + bug fixes by @edobobo in https://github.com/sunglasses-ai/classy/pull/74
    • Hotfix for classy v0.3.0 by @poccio in https://github.com/sunglasses-ai/classy/pull/75

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.3.0...v0.3.1

    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Apr 22, 2022)

    What's Changed

    • Feature/multi dataset implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/65
    • training coordinates fix by @edobobo in https://github.com/sunglasses-ai/classy/pull/68
    • implement choice of evaluation output in classy evaluate by @Valahaar in https://github.com/sunglasses-ai/classy/pull/67
    • [feature] changed profile behavior by @poccio in https://github.com/sunglasses-ai/classy/pull/70
    • classy import/export by @Valahaar in https://github.com/sunglasses-ai/classy/pull/66
    • T5 support and bug fixes by @poccio in https://github.com/sunglasses-ai/classy/pull/71

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.2.1...v0.3.0

    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Mar 2, 2022)

    What's Changed

    • Feature/adam implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/60
    • Misc/v0.2.1 by @edobobo in https://github.com/sunglasses-ai/classy/pull/63

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.2.0...v0.2.1

    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Jan 26, 2022)

    What's Changed

    • Prediction output reification by @edobobo in https://github.com/sunglasses-ai/classy/pull/52
    • Feature/interfaces enhancements by @poccio in https://github.com/sunglasses-ai/classy/pull/53
    • Devops/black precommit by @poccio in https://github.com/sunglasses-ai/classy/pull/54
    • profile can now be provided as a path by @poccio in https://github.com/sunglasses-ai/classy/pull/57
    • Feature: Reference API by @Valahaar in https://github.com/sunglasses-ai/classy/pull/58

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.1.2...v0.2.0

    Source code(tar.gz)
    Source code(zip)
  • v0.1.2(Dec 3, 2021)

    What's Changed

    We now support specifying an entity (a team for wandb), a group and tags in logger's config.

    • Hotfix logging by @edobobo in https://github.com/sunglasses-ai/classy/pull/51

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.1.1...v0.1.2

    Source code(tar.gz)
    Source code(zip)
  • v0.1.1(Nov 22, 2021)

    At some point setuptools must have broken; we have updated it and now packaging should be working properly. Release v0.1.0 is being removed from PyPI to avoid problems.

    What's Changed

    • Release 0.1.1: fix to packaging by @Valahaar in https://github.com/sunglasses-ai/classy/pull/50

    Full Changelog: https://github.com/sunglasses-ai/classy/compare/v0.1.0...v0.1.1

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Nov 19, 2021)

    What's Changed

    • Sentence pair classification by @Valahaar in https://github.com/sunglasses-ai/classy/pull/1
    • Classy cli + setup.py by @Valahaar in https://github.com/sunglasses-ai/classy/pull/2
    • Serve by @poccio in https://github.com/sunglasses-ai/classy/pull/3
    • Evaluate + Demo + Bug fixes by @poccio in https://github.com/sunglasses-ai/classy/pull/4
    • Wandb support added by @edobobo in https://github.com/sunglasses-ai/classy/pull/5
    • resume-train implemented by @edobobo in https://github.com/sunglasses-ai/classy/pull/6
    • Dataset improvement by @edobobo in https://github.com/sunglasses-ai/classy/pull/7
    • Dataset shuffling by @edobobo in https://github.com/sunglasses-ai/classy/pull/8
    • Small functionalities and bug fixes by @edobobo in https://github.com/sunglasses-ai/classy/pull/9
    • Extractive qa by @edobobo in https://github.com/sunglasses-ai/classy/pull/10
    • profiles implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/11
    • Optimizer by @edobobo in https://github.com/sunglasses-ai/classy/pull/13
    • Describe by @edobobo in https://github.com/sunglasses-ai/classy/pull/14
    • Small features by @edobobo in https://github.com/sunglasses-ai/classy/pull/15
    • Demo UI by @poccio in https://github.com/sunglasses-ai/classy/pull/17
    • Autocomplete experiments by @Valahaar in https://github.com/sunglasses-ai/classy/pull/12
    • Consec by @Valahaar in https://github.com/sunglasses-ai/classy/pull/16
    • classy download implementation by @edobobo in https://github.com/sunglasses-ai/classy/pull/18
    • Describe update by @edobobo in https://github.com/sunglasses-ai/classy/pull/19
    • Profiles by @edobobo in https://github.com/sunglasses-ai/classy/pull/22
    • bunch of bug fixes by @poccio in https://github.com/sunglasses-ai/classy/pull/21
    • fixes for pip packaging by @Valahaar in https://github.com/sunglasses-ai/classy/pull/20
    • Bump nltk from 3.4.5 to 3.6.5 by @dependabot in https://github.com/sunglasses-ai/classy/pull/25
    • udpates and bug fixes on notebooks datasets by @edobobo in https://github.com/sunglasses-ai/classy/pull/23
    • Generation profiles by @poccio in https://github.com/sunglasses-ai/classy/pull/24
    • typing fixes by @edobobo in https://github.com/sunglasses-ai/classy/pull/27
    • qa fixes + special tokens by @poccio in https://github.com/sunglasses-ai/classy/pull/28
    • Qa bug fix by @poccio in https://github.com/sunglasses-ai/classy/pull/29
    • Develop by @edobobo in https://github.com/sunglasses-ai/classy/pull/30
    • Demo update by @Valahaar in https://github.com/sunglasses-ai/classy/pull/31
    • Develop by @edobobo in https://github.com/sunglasses-ai/classy/pull/32
    • Helpers added to every parameter in the cli by @edobobo in https://github.com/sunglasses-ai/classy/pull/37
    • Bringing (updated) docs into develop by @Valahaar in https://github.com/sunglasses-ai/classy/pull/41
    • Profile cli issue bug fix by @Valahaar in https://github.com/sunglasses-ai/classy/pull/42
    • Batch size and evaluation by @poccio in https://github.com/sunglasses-ai/classy/pull/43
    • release 0.1.0 by @Valahaar in https://github.com/sunglasses-ai/classy/pull/44
    • develop rebase + ci fix by @Valahaar in https://github.com/sunglasses-ai/classy/pull/45
    • ci fix by @Valahaar in https://github.com/sunglasses-ai/classy/pull/46

    Full Changelog: https://github.com/sunglasses-ai/classy/commits/v0.1.0

    Source code(tar.gz)
    Source code(zip)