mmfewshot is an open source few shot learning toolbox based on PyTorch

Overview

Introduction

English | 简体中文

mmfewshot is an open source few shot learning toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+. Compatibility with earlier versions of PyTorch is not fully tested.

Documentation: https://mmfewshot.readthedocs.io/en/latest/.

Major features

  • Support multiple tasks in Few Shot Learning

    MMFewShot provides unified implementation and evaluation of few shot classification and detection.

  • Modular Design

    We decompose the few shot learning framework into different components, which makes it much easier and more flexible to build a new model by combining different modules (a config sketch follows this feature list).

  • Strong baselines and state of the art

    The toolbox provides strong baselines and state-of-the-art methods in few shot classification and detection.
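
As an illustration of the modular design, OpenMMLab-style models are assembled from registered components through a config dict. A minimal sketch, mirroring the NegMargin classification config quoted in the comments below (field values are illustrative, not recommended settings):

    # Each `type` names a registered component; building a new model
    # largely means swapping or re-parameterizing these dicts.
    model = dict(
        type='NegMargin',             # few shot classifier
        backbone=dict(type='Conv4'),  # feature extractor
        head=dict(
            type='NegMarginHead',     # classification head
            num_classes=5,
            in_channels=1600,
            metric_type='cosine',
            margin=-0.01,
            temperature=10.0))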

License

This project is released under the Apache 2.0 license.

Model Zoo

Supported algorithms:

  • Classification
  • Detection

Changelog

Installation

Please refer to install.md for installation of mmfewshot.

Getting Started

Please see getting_started.md for the basic usage of mmfewshot.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmfewshot2021,
    title={OpenMMLab Few Shot Learning Toolbox and Benchmark},
    author={mmfewshot Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmfewshot}},
    year={2021}
}

Contributing

We appreciate all contributions to improve mmfewshot. Please refer to CONTRIBUTING.md in MMFewShot for the contributing guidelines.

Acknowledgement

mmfewshot is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new ones.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMFewShot: OpenMMLab FewShot Learning Toolbox and Benchmark.
Comments
  • about result reimplementation of meta-rcnn

    When trying to reproduce the results of Meta R-CNN and TFA under the 1-shot setting of split 1, I find that the reproduced results of Meta R-CNN are much higher, which is confusing. In the Meta R-CNN paper (this 19.9 is the result I want to get): [results table screenshot]

    In the TFA paper: [results table screenshot]

    The papers report 19.9 for split 1 under the 1-shot setting, but my results are much higher: base training mAP is 76.2; after fine-tuning, all classes 47.40, novel classes 38.80, base classes 50.53. This is confusing. Besides, the results in the README.md of meta-rcnn are even higher: [results table screenshot]

    Under the split 1 1-shot setting, the TFA result I get is 40.4, which is basically the same as the paper reports.

    Could you please kindly answer my questions?

    opened by JulioZhao97 8
  • confused about `samples_per_gpu` of meta_dataloader

    https://github.com/open-mmlab/mmfewshot/blob/486c8c2fd7929880eab0dfcd73a3dd3a512ddfbe/configs/detection/base/datasets/nway_kshot/base_voc.py#L106

    Hi, thanks for your great work on FSOD. I want to know why the value of samples_per_gpu is 16 rather than 15 for VOC base training. Hope you can help me.

    opened by Wei-i 8
  • coco dataset?

    My COCO data directory looks like this:

    data
    ├── coco
    │   ├── annotations
    │   ├── train2014
    │   └── val2014
    └── few_shot_ann
        └── coco
            └── benchmark_10shot
                └── ...

    When I run the FSCE pre-training config for COCO, I get the error: no such file or directory: 'data/few_shot_ann/coco/annotaions/train.json'. Where does this train.json come from? Shouldn't the pre-training annotations come from the annotations folder under coco? Also, I found a trainvalno5k.json and a 5k.json in the data preparation docs; are these the two JSON files? Looking forward to your answer!

    opened by kike-0304 6
  • RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0.  Target sizes: [21, 1024].  Tensor sizes: [54, 1024]

    Traceback (most recent call last):
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 289, in <module>
        main()
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 278, in main
        args)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 169, in random_init_checkpoint
        new_weight[:prev_cls] = pretrained_weight[:prev_cls]
    RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0. Target sizes: [21, 1024]. Tensor sizes: [54, 1024]

    The process of FSCE on my own COCO-format dataset is as follows (the failing shape mismatch is sketched after the steps):

    1. Base training: ckpt (step 1)
    2. Step two: use the best validation .pth of step 1 for training? python3.7 -m tools.detection.misc.initialize_bbox_head --src1 ./work_dirs/fsce_r101_fpn_coco_base-training/best_bbox_mAP_iter_105000.pth --method random_init --save-dir ./work_dirs/fsce_r101_fpn_coco-split1_base-training
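
    The failing assignment can be reproduced in isolation. A minimal sketch of the shape mismatch, with sizes read off the traceback (the suggested cause, a class-count mismatch between the base-training checkpoint and the fine-tuning config, is an assumption):

    import torch

    prev_cls = 54                              # rows copied from the checkpoint
    pretrained_weight = torch.randn(54, 1024)  # checkpoint classifier weight
    new_weight = torch.randn(21, 1024)         # new classifier sized by the config
    # Fails exactly as reported: the slice new_weight[:54] has only 21 rows,
    # so a (54, 1024) tensor cannot be copied into it.
    new_weight[:prev_cls] = pretrained_weight[:prev_cls]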
    opened by Williamlizl 6
  • Fix tabular printing of dataset information

    Motivation

    When the length of the last row_data is greater than 0 but less than 10, that row_data is not printed.

    Modification

    When the last row_data is not empty, add it to table_data, as in the sketch below.
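
    A hedged sketch of the bug and the fix (variable names are illustrative, not copied from mmfewshot): dataset statistics are printed five classes per table row, i.e. ten cells of (name, count) per row_data.

    class_names = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car']
    counts = {name: 10 for name in class_names}

    table_data = [['category', 'count'] * 5]  # header row
    row_data = []
    for name in class_names:
        row_data.extend([name, counts[name]])
        if len(row_data) == 10:  # a full row: 5 (name, count) pairs
            table_data.append(row_data)
            row_data = []
    # The fix: also append the final, partially filled row; without this,
    # a trailing row with fewer than 10 cells was silently dropped.
    if len(row_data) != 0:
        table_data.append(row_data)

    for row in table_data:
        print(row)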

    opened by LiangYang666 4
  • Few-shot instead of one-shot in demo inference

    Currently, the demo script (classification) takes only one sample in the support set. It uses the process_support_images() method to forward the support set. How can this be modified to allow more than one sample in the support set?

    One idea could be to place another set of support images in a different folder and then forward that as well. The model.before_forward_support() method could then be modified if it resets the features; for example, for meta_baseline_head, it resets the saved features.

    Then (again for meta_baseline), meta_baseline_head.before_forward_query would also have to be modified, since it replaces self.mean_support_feats with the mean of the new support set.

    Would these two changes be enough to adapt the demo for few-shot instead of one-shot inference? Something like the sketch below is what I have in mind.
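
    A minimal sketch of the prototype idea (plain PyTorch, not mmfewshot's API): with K support images per class, the per-class prototype becomes the mean of the K embeddings instead of a single embedding.

    import torch

    def mean_support_feats(support_feats: torch.Tensor,
                           support_labels: torch.Tensor) -> torch.Tensor:
        """support_feats: (N, C) embeddings; support_labels: (N,) class ids."""
        classes = support_labels.unique(sorted=True)
        return torch.stack(
            [support_feats[support_labels == c].mean(dim=0) for c in classes])

    # 5-way 3-shot toy example: 15 support embeddings of dimension 64.
    feats = torch.randn(15, 64)
    labels = torch.arange(5).repeat_interleave(3)
    prototypes = mean_support_feats(feats, labels)  # shape (5, 64)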

    opened by rlleshi 4
  • How does it work

    Following the documentation, the error below occurs during training. I don't know how to solve it; has anyone encountered it? TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
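
    A likely cause (an assumption, not a confirmed diagnosis): persistent_workers was only added to torch.utils.data.DataLoader in PyTorch 1.7, so passing it on an older PyTorch raises exactly this TypeError. The sketch below reproduces the error on PyTorch < 1.7:

    import torch
    from torch.utils.data import DataLoader

    print(torch.__version__)
    # On PyTorch < 1.7 this raises:
    # TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
    loader = DataLoader(list(range(3)), num_workers=2, persistent_workers=True)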

    opened by isJunCheng 3
  • Question about the training of MatchingNetwork

    Hi, Great Job.

    I have some questions about the training process of the matching network (classification):

    • In this line, https://github.com/open-mmlab/mmfewshot/blob/31583cccb8ef870c9e688b1dc259263b73e58884/configs/classification/matching_net/mini_imagenet/matching-net_conv4_1xb105_mini-imagenet_5way-1shot.py#L28, you use num_shots=5 for training 5-way 1-shot; is this a bug?
    • The batch size shown in the result table is 64; is this the training batch size or the test batch size?
    • How large is the gap between the meta-val and meta-test splits in your experiment?
      • In the log of matching_net 5-way 1-shot, the max accuracy is about 51%, while the test result is 53%; does this mean there is a gap of about 2 points between the two sets?

    Thanks, Best

    opened by tonysy 3
  • meta_test_head is None on demo

    The error occurs when running demo_metric_classifier_1shot_inference with a custom-trained NegMargin model: the meta_test_head is None. Testing the model with dist_test works as expected, though. I am not sure why it didn't save the meta test head. A comment here says that it is only built and run during testing; I am not sure what that means, though.

    The model config is the same as the standard in other config files:

    model = dict(
        type='NegMargin',
        backbone=dict(type='Conv4'),
        head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=-0.01,
            temperature=10.0),
        meta_test_head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=0.0,
            temperature=5.0))
    

    Otherwise, the config file itself is similar to other neg_margin config files for the cube dataset.

    opened by rlleshi 3
  • Can't find the “frozen_parameters” parameter in the relevant source code

    I found that the “frozen_parameters” parameter is used in the configs of many detection models, but I have not found where it is used in the source code. Which part of the source code should I look at? A guess at its behavior is sketched below.
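
    A hedged guess at what a frozen_parameters list does (prefix matching is an assumption, not verified against mmfewshot's source): any parameter whose name starts with one of the listed prefixes gets requires_grad disabled before training.

    import torch.nn as nn

    def freeze_parameters(model: nn.Module, frozen_prefixes: list) -> None:
        # Disable gradients for every parameter matching any prefix.
        for name, param in model.named_parameters():
            if any(name.startswith(p) for p in frozen_prefixes):
                param.requires_grad = False

    model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 2))
    freeze_parameters(model, ['0'])  # freeze the first layer only
    print([(n, p.requires_grad) for n, p in model.named_parameters()])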

    opened by wwwbq 2
  • The ann_file path of coco_benchmark in FewShotCocoDefaultDataset cannot be customized

    In mmfewshot/detection/datasets/coco.py, FewShotCocoDefaultDataset hard-codes the coco_benchmark dataset path as f'data/few_shot_ann/coco/benchmark_{shot}shot/full_box_{shot}shot_{class_name}_trainval.json'. My few_shot_ann path differs from this, and FewShotCocoDefaultDataset has no way to accept a dataset path argument. I hope such a parameter can be added.

    opened by wwwbq 2
  • Error when running the first stage of MPSR

    Traceback (most recent call last):
      File "/root/mmfewshot/./tools/detection/train.py", line 236, in <module>
        main()
      File "/root/mmfewshot/./tools/detection/train.py", line 225, in main
        train_detector(
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in train_detector
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in <listcomp>
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/datasets/builder.py", line 311, in build_dataloader
        data_loader = TwoBranchDataloader(
    TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
    Killing subprocess 9272
    Traceback (most recent call last):
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 340, in <module>
        main()
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 326, in main
        sigkill_handler(signal.SIGTERM, None)  # not coming back
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
        raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
    subprocess.CalledProcessError: Command '['/opt/conda/envs/pytorch1.8/bin/python', '-u', './tools/detection/train.py', '--local_rank=0', 'configs/detection/mpsr/voc/split1/mpsr_r101_fpn_2xb2_voc-split1_base-training.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

    opened by DaDogs 1
  • Where should I put my few shot dataset?

    Since the few-shot dataset is just for fine-tuning the model, and test.py won't save the changes to the model, where should I put my few-shot dataset: in the training set or the validation set? That way, could I use the .pth file to predict my images in demo.py?

    opened by winnie9802 0
  • The initialization is blocked on building the models in FSClassification

    We meet a problem when training classification models. We tested several times; the code blocks on this line in classification.api.train. (screenshot omitted)

    opened by jwfanDL 0
  • Request to add the ability to read tiff datasets

    While studying few-shot learning, I came across TIFF images in the dataset, and dataset loading fails on them. I would like to ask whether a TIFF-format read method could be added. A sketch of one possible loader is below.
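
    A minimal sketch of one way to do it (an illustration, not an mmfewshot patch): Pillow already decodes TIFF, so a loading function can convert a .tiff file into the RGB array the pipeline expects.

    from PIL import Image
    import numpy as np

    def load_tiff(path: str) -> np.ndarray:
        # Decode a TIFF file and return an (H, W, 3) uint8 RGB array.
        with Image.open(path) as img:
            return np.array(img.convert('RGB'))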

    opened by Djn-swjtu 0
Releases(v0.1.0)
  • v0.1.0(Nov 24, 2021)

    Main Features

    • Support few shot classification and few shot detection.
    • For few shot classification: fine-tuning based methods (Baseline, Baseline++, NegMargin), metric-based methods (MatchingNet, ProtoNet, RelationNet, MetaBaseline), and a meta-learning based method (MAML).
    • For few shot detection: fine-tuning based methods (TFA, FSCE, MPSR) and meta-learning based methods (MetaRCNN, FsDetView, AttentionRPN).
    • Provide checkpoints and log files for all of the methods above.