
PyCIL: A Python Toolbox for Class-Incremental Learning


Introduction | Methods Reproduced | Reproduced Results | How To Use | License | Acknowledgements | Contact



The code repository for "PyCIL: A Python Toolbox for Class-Incremental Learning" [paper] in PyTorch. If you use any content of this repo for your work, please cite the following bib entry:

@misc{zhou2021pycil,
  title={PyCIL: A Python Toolbox for Class-Incremental Learning}, 
  author={Da-Wei Zhou and Fu-Yun Wang and Han-Jia Ye and De-Chuan Zhan},
  year={2021},
  eprint={2112.12533},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Introduction

Traditional machine learning systems are deployed under the closed-world setting, which requires the entire training data to be available before offline training. However, real-world applications often face incoming new classes, and a model should incorporate them continually. This learning paradigm is called Class-Incremental Learning (CIL). We propose a Python toolbox that implements several key algorithms for class-incremental learning to ease the burden of researchers in the machine learning community. The toolbox contains implementations of a number of founding works of CIL, such as EWC and iCaRL, and also provides current state-of-the-art algorithms that can be used for conducting novel fundamental research. This toolbox, named PyCIL for Python Class-Incremental Learning, is open source under the MIT license.

Methods Reproduced

  • FineTune: Baseline method which simply updates parameters on new tasks, suffering from Catastrophic Forgetting. By default, weights corresponding to the outputs of previous classes are not updated.
  • EWC: Overcoming catastrophic forgetting in neural networks. [paper]
  • LwF: Learning without Forgetting. [paper]
  • Replay: Baseline method with exemplars.
  • GEM: Gradient Episodic Memory for Continual Learning. [paper]
  • iCaRL: Incremental Classifier and Representation Learning. [paper]
  • BiC: Large Scale Incremental Learning. [paper]
  • WA: Maintaining Discrimination and Fairness in Class Incremental Learning. [paper]
  • PODNet: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. [paper]
  • DER: DER: Dynamically Expandable Representation for Class Incremental Learning. [paper]
  • Coil: Co-Transport for Class-Incremental Learning. [paper]

Reproduced Results

CIFAR100

ImageNet100

More experimental details and results are shown in our paper.

How To Use

Clone

Clone this GitHub repository:

git clone https://github.com/G-U-N/PyCIL.git
cd PyCIL

Dependencies

  1. torch 1.8.1
  2. torchvision 0.6.0
  3. tqdm
  4. numpy
  5. scipy
  6. quadprog
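
If it helps, the non-PyTorch dependencies install directly from PyPI as sketched below; for torch and torchvision, pick the wheels that match your CUDA version rather than relying only on the pins above:

pip install tqdm numpy scipy quadprog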

Run experiment

  1. Edit the [MODEL NAME].json file for global settings.
  2. Edit the hyperparameters in the corresponding [MODEL NAME].py file (e.g., models/icarl.py).
  3. Run:
python main.py --config=./exps/[MODEL NAME].json

where [MODEL NAME] should be chosen from: finetune, ewc, lwf, replay, gem, icarl, bic, wa, podnet, der, coil.
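
For example, to reproduce iCaRL on CIFAR100 with the corresponding config file:

python main.py --config=./exps/icarl.json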

  4. Hyper-parameters

When using PyCIL, you can edit the global parameters and algorithm-specific hyper-parameters in the corresponding json file.

These parameters include the following (an example config is sketched after the list):

  • memory-size: The total number of exemplars kept throughout the incremental learning process. Assuming there are $K$ classes at the current stage, the model preserves $\left[\frac{\text{memory-size}}{K}\right]$ exemplars per class; for example, with a memory size of 2,000 and 100 seen classes, 20 exemplars are kept per class.
  • init-cls: The number of classes in the first incremental stage. Since CIL settings differ in how many classes the first stage contains, our framework lets you configure this choice.
  • increment: The number of classes in each incremental stage $i$, $i > 1$. By default, the number of classes is the same for every incremental stage.
  • convnet-type: The backbone network for the incremental model. According to the benchmark setting, ResNet32 is utilized for CIFAR100, and ResNet18 is utilized for ImageNet.
  • seed: The random seed adopted for shuffling the class order. According to the benchmark setting, it is set to 1993 by default.
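
For reference, here is a minimal config sketch in the style of the files under exps/. The field names follow the JSON configs used by the toolbox (hyphens in the list above appear as underscores in the files), and the values are illustrative rather than the exact defaults shipped with PyCIL:

{
    "prefix": "reproduce",
    "dataset": "cifar100",
    "memory_size": 2000,
    "fixed_memory": false,
    "shuffle": true,
    "init_cls": 10,
    "increment": 10,
    "model_name": "icarl",
    "convnet_type": "resnet32",
    "device": ["0"],
    "seed": [1993]
}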

Other parameters in terms of model optimization, e.g., batch size, optimization epoch, learning rate, learning rate decay, weight decay, milestone, temperature, can be modified in the corresponding Python file.

Datasets

We have implemented the pre-processing for CIFAR100, ImageNet100, and ImageNet1000. When training on CIFAR100, the framework downloads it automatically. When training on ImageNet100/1000, you should specify the folder of your dataset in utils/data.py.

    def download_data(self):
        # Remove this assertion and point train_dir/test_dir at your local dataset split.
        assert 0, "You should specify the folder of your dataset"
        train_dir = '[DATA-PATH]/train/'
        test_dir = '[DATA-PATH]/val/'
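
As a quick sanity check before launching a long run, a standalone sketch like the one below (not code from utils/data.py; the paths are hypothetical placeholders) verifies that the directories you fill in follow an ImageFolder-style layout, i.e. one sub-directory per class, which is how the standard ImageNet split is organized:

    from torchvision import datasets

    # Hypothetical local paths; substitute your own ImageNet100/1000 split.
    train_dir = "/data/imagenet100/train/"
    test_dir = "/data/imagenet100/val/"

    # ImageFolder expects one sub-directory per class under each split.
    train_dset = datasets.ImageFolder(train_dir)
    test_dset = datasets.ImageFolder(test_dir)
    print(f"{len(train_dset)} training images across {len(train_dset.classes)} classes")
    print(f"{len(test_dset)} validation images across {len(test_dset.classes)} classes")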

License

Please check the MIT license that is listed in this repository.

Acknowledgements

We thank the following repositories for providing helpful components and functions used in our work.

Contact

If you have any questions or would like to propose new features, please feel free to open an issue or contact the authors: Da-Wei Zhou ([email protected]) and Fu-Yun Wang ([email protected]). Enjoy the code.

Comments
  • Why is FeTrIL accuracy so bad?

    config:

    {
        "prefix": "train",
        "dataset": "cifar100",
        "memory_size": 0,
        "shuffle": true,
        "init_cls": 50,
        "increment": 10,
        "model_name": "fetril",
        "convnet_type": "resnet32",
        "device": ["0"],
        "seed": [1993],
        "init_epochs": 200,
        "init_lr" : 0.1,
        "init_weight_decay" : 0,
        "epochs" : 50,
        "lr" : 0.05,
        "batch_size" : 128,
        "weight_decay" : 0,
        "num_workers" : 8,
        "T" : 2
    }
    

    final result:

    2022-12-16 23:22:29,436 [fetril.py] => svm train: acc: 10.01
    2022-12-16 23:22:29,451 [fetril.py] => svm evaluation: acc_list: [15.2, 12.47, 10.84, 9.78, 9.06, 8.13]
    2022-12-16 23:22:31,440 [trainer.py] => No NME accuracy.
    2022-12-16 23:22:31,441 [trainer.py] => CNN: {'total': 13.63, '00-09': 19.1, '10-19': 18.3, '20-29': 22.5, '30-39': 17.3, '40-49': 30.1, '50-59': 3.5, '60-69': 11.4, '70-79': 3.1, '80-89': 6.1, '90-99': 4.9, 'old': 14.6, 'new': 4.9}
    2022-12-16 23:22:31,441 [trainer.py] => CNN top1 curve: [28.94, 21.87, 18.54, 16.76, 15.1, 13.63]
    2022-12-16 23:22:31,441 [trainer.py] => CNN top5 curve: [53.28, 47.07, 43.17, 39.08, 36.41, 33.98]
    

    I can provide the log if you need it.

    opened by muyuuuu 10
  • coil fixed memory

    Amazing toolbox!!!

    I have a question about your results for Coil.

    In your paper, Section 5.2:

    Since all compared methods are exemplar-based, we fix an equal number of exemplars for every method, i.e., 2,000 exemplars for CIFAR-100 and ImageNet100, 20,000 for ImageNet-1000. As a result, the picked exemplars per class is 20, which is abundant for every class.

    I just want to confirm the replay size: a fixed memory of 2,000 in total over the whole training process, which means the "fixed_memory" field in the json file is set to false, as shown in the link below. I'm a little confused about this setting because there are different protocols in the recent community.

    https://github.com/G-U-N/PyCIL/blob/6d2c1280e8ceb139f4e74a70297519af9eea4a5e/exps/coil.json#L6
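
    For reference, the difference between the two protocols can be sketched as follows (a standalone illustration using the numbers from the quoted paragraph, not code from the repo):

    # Illustration of the two exemplar-memory protocols discussed above.
    def per_class_budget(total_memory: int, seen_classes: int) -> int:
        """'fixed_memory': false -- a fixed total budget split over all seen classes."""
        return total_memory // seen_classes

    # Growing number of classes on CIFAR-100 with a 2,000-exemplar budget:
    for seen in (10, 20, 50, 100):
        print(seen, "classes ->", per_class_budget(2000, seen), "exemplars per class")
    # With 'fixed_memory': true, a constant number of exemplars per class (e.g. 20)
    # is kept instead, so the total memory grows as new classes arrive.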

    The reason I came across this issue is:

    [screenshot: results table]

    As shown in this table, the iCaRL result for 10 steps is reported as about 61.74, which is lower than the roughly 64 reported in the original paper.

    Hope to get your reply soon. Thanks in advance.

    opened by qsunyuan 8
  • Inquiry about Pre-trained Model & Parameter Setup

    Many thanks for this wonderful framework! It really helps our work a lot!

    I have some questions about your experiment setup.

    1. I have reproduced your 10-stage CIFAR-100 experiments on my own PC (3*3090). The results are as follows:

    [screenshot: reproduced results]

    I followed all the bash files and parameters you have set up, but the results seem to be much lower than yours. Is that because you use an ImageNet pre-trained ResNet? Thanks!

    2. I found that you set different model optimization parameters (e.g., learning rate, epochs, milestones) for each continual learning approach, instead of using the same hyperparameter policy for all of them. I was wondering whether this kind of parameter setup can be considered a "fair" comparison?

    Thanks in advance for your answer!

    question 
    opened by longbai1006 4
  • About FOSTER's data augmentation

    FOSTER is a work I find very interesting.

    I tried the CIFAR100 benchmark setting.

    I noticed that the PyCIL implementation does not include the data augmentation used here: https://github.com/G-U-N/ECCV22-FOSTER/blob/340fbd1a16ca6a787e522b9dc349a5e742f00c30/utils/data.py#L73

    As a result, the PyCIL implementation performs a bit worse. Are there any other differences in the details?

    opened by qsunyuan 3
  • Single GPU training error in DER

    Hi,

    Thank you for this wonderful code base!

    I noticed the current version doesn't support single GPU training. Could you please add this feature?

    Thank you!

    opened by htwang14 3
  • Weird Training Result with train_acc=0 & loss=0

    Thanks for your excellent work!

    But when I tried my own dataset with it, I found that the training result was really weird; take EWC as an example:

    ================ EWC ================
    Task 0, Epoch 200/200 => Loss 0.886, Train_accy 70.60, Test_accy 73.00
    Task 1, Epoch 180/180 => Loss 0.002, Train_accy 0.00
     CNN: {'total': 7.36, '00-09': 7.8, 'old': 7.8, 'new': 0.0}
    Task 2, Epoch 180/180 => Loss 0.002, Train_accy 0.00
     CNN: {'total': 38.67, '00-09': 46.4, '10-19': 0.0, 'old': 43.77, 'new': 0.0}
    Task 3, Epoch 180/180 => Loss 0.003, Train_accy 0.00
     CNN: {'total': 12.99, '00-09': 17.4, '10-19': 0.0, 'old': 14.5, 'new': 0.0}
    ...
    Task 9, Epoch 180/180 => Loss 0.004, Train_accy 0.00
     CNN: {'total': 28.66, '00-09': 55.6, '10-19': 0.0, 'old': 30.89, 'new': 0.0}
    CNN top1 curve: [74.0, 7.36, 38.67, 12.99, 1.22, 23.83, 34.52, 16.55, 8.56, 28.66]
    CNN top5 curve: [98.6, 52.26, 78.0, 45.82, 29.05, 53.58, 57.26, 48.85, 31.78, 48.66]
    

    I also tried LwF, and the problem remains.

    Questions About Loss Function

    I am wondering whether the issue lies in the loss function. I found that the classification loss is defined as loss_clf = F.cross_entropy(logits[:,self._known_classes:], targets-self._known_classes), alongside loss_ewc.

    In my experiments, I set base_session=10 and increment=1. So for task 1, when computing the loss, targets=[10, 10, ..., 10] and self._known_classes=10; therefore, the target (second argument) of cross_entropy was [0, 0, ..., 0]. Does that mean the loss function pushes the model to reject the new task, making the logits of the new task approach 0?
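
    A small standalone illustration of that index shift (not code from the PyCIL repo), which also shows why the classification term collapses to zero when increment=1:

    import torch
    import torch.nn.functional as F

    # Toy version of the shapes above: 10 known classes plus an increment of 1,
    # so the head has 11 logits and every new-task sample carries label 10.
    known_classes, total_classes = 10, 11
    logits = torch.randn(4, total_classes)            # batch of 4 samples
    targets = torch.full((4,), 10, dtype=torch.long)  # all labels are the new class

    # Slicing keeps only the new-class columns, so the targets are shifted into
    # the local range [0, increment) of that slice.
    loss_clf = F.cross_entropy(logits[:, known_classes:], targets - known_classes)
    print(loss_clf)  # 0: softmax over a single column is always 1, so -log(1) = 0

    The shift itself is just a re-indexing of the new classes; the degenerate zero loss appears only because the sliced logits have a single column when increment=1.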

    Tuning Loss Function

    I tried changing the second argument to [1, 1, ..., 1] by using targets-self._known_classes+1, but an error arises:

    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
    /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
    ...
    Traceback (most recent call last):
      File "/tmp/pycharm_project_241/main.py", line 33, in <module>
        main()
      File "/tmp/pycharm_project_241/main.py", line 13, in main
        train(args)
      File "/tmp/pycharm_project_241/trainer.py", line 18, in train
        _train(args)
      File "/tmp/pycharm_project_241/trainer.py", line 48, in _train
        model.incremental_train(data_manager)
      File "/tmp/pycharm_project_241/models/ewc.py", line 62, in incremental_train
        self._train(self.train_loader, self.test_loader)
      File "/tmp/pycharm_project_241/models/ewc.py", line 84, in _train
        self._update_representation(train_loader, test_loader, optimizer, scheduler)
      File "/tmp/pycharm_project_241/models/ewc.py", line 138, in _update_representation
        loss.backward()
      File "/home/linkdata/data/yaoxinjie/anaconda3/envs/FACT18/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "/home/linkdata/data/yaoxinjie/anaconda3/envs/FACT18/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: Unable to find a valid cuDNN algorithm to run convolution
    

    I don't know why this happens or how to handle it. Could you please help me clarify it? Thanks a lot!

    opened by Steven-cpp 3
  • why loss_clf = F.cross_entropy(logits[:, self._known_classes:], fake_targets)?

    Thank you very much for your excellent work.

    In models/finetune.py, line 117:

                    fake_targets=targets-self._known_classes
                    loss_clf = F.cross_entropy(logits[:,self._known_classes:], fake_targets)
                    
                    loss=loss_clf
    

    why not:

                    loss_clf = F.cross_entropy(logits, targets)
                    loss=loss_clf
    
    opened by chester-w-xie 3
  • foster error: TypeError: string indices must be integers

    After training the second task, before compression:

    Task 1, Epoch 167/170 => Loss 0.752, Loss_clf 0.000, Loss_fe 0.001, Loss_kd 0.375, Train_accy 99.00:  98%|█████████▊| 167/170 [09:02<00:10,  3.35s/it]
    Task 1, Epoch 168/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.75:  98%|█████████▊| 167/170 [09:04<00:10,  3.35s/it]
    Task 1, Epoch 168/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.75:  99%|█████████▉| 168/170 [09:04<00:06,  3.20s/it]
    Task 1, Epoch 169/170 => Loss 0.744, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.371, Train_accy 98.81:  99%|█████████▉| 168/170 [09:07<00:06,  3.20s/it]
    Task 1, Epoch 169/170 => Loss 0.744, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.371, Train_accy 98.81:  99%|█████████▉| 169/170 [09:07<00:03,  3.10s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67:  99%|█████████▉| 169/170 [09:10<00:03,  3.10s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67: 100%|██████████| 170/170 [09:10<00:00,  3.01s/it]
    Task 1, Epoch 170/170 => Loss 0.746, Loss_clf 0.001, Loss_fe 0.001, Loss_kd 0.372, Train_accy 98.67: 100%|██████████| 170/170 [09:10<00:00,  3.24s/it]
    Traceback (most recent call last):
      File "foster.py", line 31, in <module>
        main()
      File "foster.py", line 12, in main
        train(args)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/trainer.py", line 18, in train
        _train(args)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/trainer.py", line 67, in _train
        model.incremental_train(data_manager)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 89, in incremental_train
        self._train(self.train_loader, self.test_loader)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 169, in _train
        self._feature_compression(train_loader, test_loader)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/models/foster.py", line 289, in _feature_compression
        self._snet = FOSTERNet(self.args["convnet_type"], False)
      File "/home/20031211496/Documents/PYCIL/PyCIL-master/utils/inc_net.py", line 402, in __init__
        self.convnet_type = args["convnet_type"]
    TypeError: string indices must be integers
    
    opened by muyuuuu 2
  • The memory of RMM

    Hello! First of all, thank you very much for your work. When I read the RMM code, I found that the memory is not 2,000; instead, only 2,000 samples were used for the new task. Is RMM designed this way? I added the following code to rmm.py to check the number of memories.

    [screenshots: added code and its output]

    opened by zhl98 2
  • What is the purpose of the reduce exemplar function?

    I'd like to know the role of the reduce exemplar, construct exemplar, and construct exemplar unified functions in your base.py file. Is there a reference for how they work?

    opened by Z-ZHIZHONG 2
  • General idea for the code framework

    Hello, I have some questions for you. Is your code modified from the original DER implementation? Could you give me a general idea of your approach? Thank you very much. I look forward to your reply.

    opened by Z-ZHIZHONG 2
Releases (v0.1)
  • v0.1 (Dec 12, 2022)

    This is the first GitHub release of PyCIL. The reproduced methods are listed below:

    • FineTune: Baseline method which simply updates parameters on new tasks, suffering from Catastrophic Forgetting. By default, weights corresponding to the outputs of previous classes are not updated.
    • EWC: Overcoming catastrophic forgetting in neural networks. PNAS2017 [paper]
    • LwF: Learning without Forgetting. ECCV2016 [paper]
    • Replay: Baseline method with exemplars.
    • GEM: Gradient Episodic Memory for Continual Learning. NIPS2017 [paper]
    • iCaRL: Incremental Classifier and Representation Learning. CVPR2017 [paper]
    • BiC: Large Scale Incremental Learning. CVPR2019 [paper]
    • WA: Maintaining Discrimination and Fairness in Class Incremental Learning. CVPR2020 [paper]
    • PODNet: PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. ECCV2020 [paper]
    • DER: DER: Dynamically Expandable Representation for Class Incremental Learning. CVPR2021 [paper]
    • PASS: Prototype Augmentation and Self-Supervision for Incremental Learning. CVPR2021 [paper]
    • RMM: RMM: Reinforced Memory Management for Class-Incremental Learning. NeurIPS2021 [paper]
    • IL2A: Class-Incremental Learning via Dual Augmentation. NeurIPS2021 [paper]
    • SSRE: Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning. CVPR2022 [paper]
    • FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning. WACV2023 [paper]
    • Coil: Co-Transport for Class-Incremental Learning. ACM MM2021 [paper]
    • FOSTER: Feature Boosting and Compression for Class-incremental Learning. ECCV 2022 [paper]

    Stay tuned for more state-of-the-art methods in PyCIL!

Owner
Fu-Yun Wang