Introduction

mmfewshot is an open source few shot learning toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+. Compatibility with earlier versions of PyTorch is not fully tested.

Documentation: https://mmfewshot.readthedocs.io/en/latest/.

Major features

  • Support multiple tasks in Few Shot Learning

    MMFewShot provides unified implementation and evaluation of few shot classification and detection.

  • Modular Design

    We decompose the few shot learning framework into different components, which makes it easy and flexible to build a new model by combining different modules (see the config sketch after this list).

  • Strong baselines and state of the art

    The toolbox provides strong baselines and state-of-the-art methods in few shot classification and detection.
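
As an illustration of this modular design, here is an abbreviated model config in the OpenMMLab style (taken from the NegMargin example quoted in the issues below); switching methods largely amounts to swapping the backbone or head dict:

    # Abbreviated NegMargin model config: the model is assembled from
    # interchangeable modules registered by name.
    model = dict(
        type='NegMargin',
        backbone=dict(type='Conv4'),  # swap for a different feature extractor
        head=dict(
            type='NegMarginHead',     # swap this dict to change the method
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=-0.01,
            temperature=10.0))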

License

This project is released under the Apache 2.0 license.

Model Zoo

Supported algorithms:

  • Classification
  • Detection

Changelog

Installation

Please refer to install.md for installation of mmfewshot.
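
A typical OpenMMLab-style installation is sketched below; the exact dependency set and version pins are assumptions here, so treat install.md as authoritative:

    # sketch of an OpenMMLab-style install; see install.md for exact versions
    pip install openmim
    mim install mmcv-full
    pip install mmcls mmdet
    pip install mmfewshot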

Getting Started

Please see getting_started.md for the basic usage of mmfewshot.
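
For example, a detection model can be trained from a config file with the standard train script (the MPSR config below is one that appears in the issues on this page; adjust paths to your setup):

    # single-GPU base training of MPSR on VOC split1
    python tools/detection/train.py \
        configs/detection/mpsr/voc/split1/mpsr_r101_fpn_2xb2_voc-split1_base-training.py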

Citation

If you find this project useful in your research, please consider citing:

@misc{mmfewshot2021,
    title={OpenMMLab Few Shot Learning Toolbox and Benchmark},
    author={mmfewshot Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmfewshot}},
    year={2021}
}

Contributing

We appreciate all contributions to improve mmfewshot. Please refer to CONTRIBUTING.md in MMFewShot for the contributing guideline.

Acknowledgement

mmfewshot is an open source project contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new ones.

Projects in OpenMMLab

  • MMCV: OpenMMLab foundational library for computer vision.
  • MIM: MIM Installs OpenMMLab Packages.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMOCR: A Comprehensive Toolbox for Text Detection, Recognition and Understanding.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMFewShot: OpenMMLab FewShot Learning Toolbox and Benchmark.
Comments
  • about result reimplementation of meta-rcnn

    When trying to reproduce the results of Meta R-CNN and TFA under the 1-shot setting of split1, I find that my reproduced Meta R-CNN results are much higher than reported, which is confusing. The Meta R-CNN paper (19.9 is the result I want to get) and the TFA paper both report this setting.

    The papers show that the split1 1-shot result is 19.9, but my results are much higher: base training mAP is 76.2; after fine-tuning, all classes 47.40, novel classes 38.80, base classes 50.53. Besides, in the README.md of Meta R-CNN, the results are even higher.

    Under the split1 1-shot setting, the TFA result I get is 40.4, which basically matches the paper's report.

    Could you please kindly answer my questions?

    opened by JulioZhao97 8
  • confused about `samples_per_gpu` of meta_dataloader

    https://github.com/open-mmlab/mmfewshot/blob/486c8c2fd7929880eab0dfcd73a3dd3a512ddfbe/configs/detection/base/datasets/nway_kshot/base_voc.py#L106

    Hi, thanks for your great work on FSOD. I want to know why the value of samples_per_gpu is 16 rather than 15 for VOC base training. Hope you can help me.

    opened by Wei-i 8
  • coco dataset?

    My COCO data directory looks like this:

        data
        ├── coco
        │   ├── annotations
        │   ├── train2014
        │   └── val2014
        └── few_shot_ann
            └── coco
                └── benchmark_10shot
                    └── ...

    When I run the FSCE COCO base-training config, I get the error: no such file or directory: 'data/few_shot_ann/coco/annotaions/train.json'. Where does this train.json come from? Shouldn't base training use the annotations under the coco folder? Also, I found a trainvalno5k.json and a 5k.json in the data preparation docs; are these the two JSON files? Looking forward to your answer!

    opened by kike-0304 6
  • RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0.  Target sizes: [21, 1024].  Tensor sizes: [54, 1024]

    Traceback (most recent call last):
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/lbc/miniconda3/envs/mmfewshot/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 289, in <module>
        main()
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 278, in main
        args)
      File "/home/lbc/mmfewshot-main/tools/detection/misc/initialize_bbox_head.py", line 169, in random_init_checkpoint
        new_weight[:prev_cls] = pretrained_weight[:prev_cls]
    RuntimeError: The expanded size of the tensor (21) must match the existing size (54) at non-singleton dimension 0. Target sizes: [21, 1024]. Tensor sizes: [54, 1024]

    The process of fsce on my own coco format datasets is:

    1. Base training: ckpt (step 1)
    2. Step two: use the best validation .pth of step 1 for training? python3.7 -m tools.detection.misc.initialize_bbox_head --src1 ./work_dirs/fsce_r101_fpn_coco_base-training/best_bbox_mAP_iter_105000.pth --method random_init --save-dir ./work_dirs/fsce_r101_fpn_coco-split1_base-training
    opened by Williamlizl 6
  • Fix tabular printing of dataset information

    Motivation

    When the length of the last row_data is greater than 0 but less than 10, that row_data is not printed.

    Modification

    When the last row_data is not empty, append it to table_data (see the sketch below).
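
    A minimal self-contained sketch of the described behavior and fix (the chunk size of 10 matches the description; variable names and sample data are assumptions):

        # Rows are flushed to table_data in chunks of 10 cells; before the
        # fix, a trailing chunk with 0 < len(row_data) < 10 was dropped.
        instances = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
                     'bus', 'car', 'cat', 'chair', 'cow', 'dog']
        table_data = []
        row_data = []
        for i, name in enumerate(instances):
            row_data.extend([name, i])  # (name, count) cell pairs
            if len(row_data) == 10:     # a full printed row
                table_data.append(row_data)
                row_data = []
        if len(row_data) > 0:           # the fix: keep the partial last row
            table_data.append(row_data)
        print(table_data)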

    opened by LiangYang666 4
  • Few-shot instead of one-shot in demo inference

    Currently, the demo script (classification) takes only one sample in the support set. It uses the process_support_images() method to forward the support set. How to modify this in order to allow for more than one sample in the support set?

    One idea could be to place another set of support images in a different folder and forward that as well. The model.before_forward_support() method could then be modified where it resets the features; e.g., meta_baseline_head resets its saved features there.

    Then (again for meta_baseline), meta_baseline_head.before_forward_query would also have to be modified, since it replaces self.mean_support_feats with the mean of the new support set.

    Would these two changes in this case be enough to adapt for a few-shot instead of a one-shot inference?
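
    A hedged sketch of what a multi-shot support forward might look like, assuming process_support_images accepts parallel lists of image paths and labels exactly as in the existing 1-shot demo (all file, config, and checkpoint names below are placeholders):

        # Sketch only: mirrors the 1-shot demo's API, just with several
        # support images per class; all file names are placeholders.
        from mmfewshot.classification.apis import (
            inference_classifier, init_classifier, process_support_images)

        model = init_classifier('config.py', 'checkpoint.pth', device='cuda:0')
        support_imgs = ['cat_1.jpg', 'cat_2.jpg', 'dog_1.jpg', 'dog_2.jpg']
        support_labels = ['cat', 'cat', 'dog', 'dog']
        process_support_images(model, support_imgs, support_labels)
        print(inference_classifier(model, 'query.jpg'))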

    opened by rlleshi 4
  • How does it work

    According to the documentation, the following error occurs during training. I don't know how to solve it; has anyone encountered it? TypeError: __init__() got an unexpected keyword argument 'persistent_workers'

    opened by isJunCheng 3
  • Question about the training of MatchingNetwork

    Hi, Great Job.

    I have some questions about the training process of the matching network(classification)

    • In this line, https://github.com/open-mmlab/mmfewshot/blob/31583cccb8ef870c9e688b1dc259263b73e58884/configs/classification/matching_net/mini_imagenet/matching-net_conv4_1xb105_mini-imagenet_5way-1shot.py#L28 you use num_shots=5 for training 5-way 1-shot; is this a bug?
    • The batch size shown in the result table is 64; is this the training batch size or the test batch size?
    • How large is the gap between the meta-val and meta-test splits in your experiment?
      • In the log of matching_net 5-way 1-shot, the max accuracy is about 51% while the test result is 53%; does this mean there is a ~2-point gap between the two splits?

    Thanks, Best

    opened by tonysy 3
  • meta_test_head is None on demo

    The error occurs when running demo_metric_classifier_1shot_inference with a custom-trained NegMargin model: the meta_test_head is None. Testing the model with dist_test works as expected, though. I am not sure why the meta test head was not saved. A comment here says that it is only built and run on testing, but I am not sure what that means.

    The model config is the same as the standard in other config files:

    model = dict(
        type='NegMargin',
        backbone=dict(type='Conv4'),
        head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=-0.01,
            temperature=10.0),
        meta_test_head=dict(
            type='NegMarginHead',
            num_classes=6,
            in_channels=1600,
            metric_type='cosine',
            margin=0.0,
            temperature=5.0))
    

    Otherwise, the config file itself is similar to other neg_margin config files for the cube dataset.

    opened by rlleshi 3
  • Don't find the “frozen_parameters” parameter in the relevant source code

    I found that the frozen_parameters parameter is used in many detection models, but I have not found where this parameter is consumed in the source code. Which part of the source code should I look at?

    opened by wwwbq 2
  • The ann_file path of coco_benchmark in FewShotCocoDefaultDataset cannot be customized

    In mmfewshot/detection/datasets/coco.py, FewShotCocoDefaultDataset fixes the dataset path of coco_benchmark to f'data/few_shot_ann/coco/benchmark_{shot}shot/full_box_{shot}shot_{class_name}_trainval.json'. My few_shot_ann path differs from this, and FewShotCocoDefaultDataset cannot accept a dataset-path argument; please add such a parameter.

    opened by wwwbq 2
  • Error in the first stage of running MPSR

    Traceback (most recent call last):
      File "/root/mmfewshot/./tools/detection/train.py", line 236, in <module>
        main()
      File "/root/mmfewshot/./tools/detection/train.py", line 225, in main
        train_detector(
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in train_detector
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/apis/train.py", line 48, in <listcomp>
        data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
      File "/root/mmfewshot/mmfewshot/detection/datasets/builder.py", line 311, in build_dataloader
        data_loader = TwoBranchDataloader(
    TypeError: __init__() got an unexpected keyword argument 'persistent_workers'
    Killing subprocess 9272
    Traceback (most recent call last):
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 340, in <module>
        main()
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 326, in main
        sigkill_handler(signal.SIGTERM, None)  # not coming back
      File "/opt/conda/envs/pytorch1.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
        raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
    subprocess.CalledProcessError: Command '['/opt/conda/envs/pytorch1.8/bin/python', '-u', './tools/detection/train.py', '--local_rank=0', 'configs/detection/mpsr/voc/split1/mpsr_r101_fpn_2xb2_voc-split1_base-training.py', '--launcher', 'pytorch']' returned non-zero exit status 1.

    opened by DaDogs 1
  • Where should I put my few shot dataset?

    Since the few shot dataset is just for fine-tuning the model, and test.py won't save the changes to the model, where should I put my few shot dataset: in the training set or the validation set? That way, could I use the .pth file to predict my images in demo.py?

    opened by winnie9802 0
  • The initialization is blocked on building the models in FSClassification

    We meet a problem when training classification models. We tested several times; the code blocks on the model-building line in classification.api.train.

    opened by jwfanDL 0
  • Request to add the ability to read tiff datasets

    While studying few shot learning, I came across TIFF images in my dataset, and dataset loading fails on them. Could you please add a read method for the TIFF format?

    opened by Djn-swjtu 0
Releases(v0.1.0)
  • v0.1.0(Nov 24, 2021)

    Main Features

    • Support few shot classification and few shot detection.
    • For few shot classification, support fine-tune based methods (Baseline, Baseline++, NegMargin); metric-based methods (MatchingNet, ProtoNet, RelationNet, MetaBaseline); meta-learning based method (MAML).
    • For few shot detection, support fine-tune based methods (TFA, FSCE, MPSR) and meta-learning based methods (MetaRCNN, FsDetView, AttentionRPN).
    • Provide checkpoints and log files for all of the methods above.