Datasets, Transforms and Models specific to Computer Vision

Installation

  • First install the nightly version of OneFlow
python3 -m pip install oneflow -f https://staging.oneflow.info/branch/master/cu102
  • Then install the latest stable release of flowvision
pip install flowvision==0.0.4
  • Or install the nightly release of flowvision
pip install -i https://test.pypi.org/simple/ flowvision==0.0.4
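
After installing, a quick import check confirms both packages are available. A minimal sketch (it assumes flowvision exposes a __version__ attribute, as most packages do):

import oneflow as flow
import flowvision

print(flow.__version__)        # OneFlow version
print(flowvision.__version__)  # flowvision version (assumed attribute)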

Supported Models

All of the supported models can be found on our model summary page here.

Usage

Quick Start
  • List supported models
from flowvision import ModelCreator
ModelCreator.model_table()
  • Search supported models by wildcard
from flowvision import ModelCreator
ModelCreator.model_table("*vit*", pretrained=True)
ModelCreator.model_table("*vit*", pretrained=False)
ModelCreator.model_table('alexnet')
  • Create a model with ModelCreator
from flowvision import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)
ModelCreator
  • Create a model in a simple way
from flowvision.models import ModelCreator
model = ModelCreator.create_model('alexnet', pretrained=True)

The pretrained weights will be saved to ./checkpoints.
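
As a quick sanity check, the created model is a regular oneflow module and can be run on a dummy input. A minimal sketch (the 1x3x224x224 input shape and the 1000-class output are assumptions based on the standard ImageNet setting):

import oneflow as flow
from flowvision.models import ModelCreator

model = ModelCreator.create_model('alexnet', pretrained=True)
model.eval()

x = flow.randn(1, 3, 224, 224)  # dummy ImageNet-sized input (assumed shape)
with flow.no_grad():
    logits = model(x)
print(logits.shape)  # expected: (1, 1000) for an ImageNet classifier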

  • Supported model table
from flowvision.models import ModelCreator
ModelCreator.model_table()
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘

Show all of the supported models in a table.

  • List models with pretrained weights
from flowvision.models import ModelCreator
ModelCreator.model_table(pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ alexnet      │ true       │
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*')
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_224 │ false      │
│ vit_b_16_384 │ true       │
│ vit_b_32_224 │ false      │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
  • Search for models with pretrained weights by wildcard
from flowvision.models import ModelCreator
ModelCreator.model_table('vit*', pretrained=True)
           Models            
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Name         ┃ Pretrained ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ vit_b_16_384 │ true       │
│ vit_b_32_384 │ true       │
│ vit_l_16_384 │ true       │
│ vit_l_32_384 │ true       │
└──────────────┴────────────┘
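
For real images, the created model can be paired with flowvision's transforms for preprocessing. A minimal sketch, assuming flowvision.transforms follows the familiar torchvision-style API (Compose, Resize, CenterCrop, ToTensor, Normalize) and that example.jpg is any local image:

import oneflow as flow
import flowvision.transforms as transforms
from flowvision.models import ModelCreator
from PIL import Image

# standard ImageNet preprocessing pipeline (assumed torchvision-style API)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = ModelCreator.create_model('alexnet', pretrained=True)
model.eval()

img = Image.open('example.jpg').convert('RGB')  # hypothetical local image path
x = preprocess(img).unsqueeze(0)                # add a batch dimension
with flow.no_grad():
    pred = model(x).argmax(dim=1)
print(pred)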

Model Zoo

All tests were conducted under the same settings; please refer to the model page here for more details.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
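
For reference, downloading and preparing one of these public datasets with flowvision typically looks like the sketch below, assuming flowvision.datasets and oneflow.utils.data mirror the torchvision/torch-style APIs (CIFAR10 with download=True, DataLoader):

import flowvision.datasets as datasets
import flowvision.transforms as transforms
from oneflow.utils.data import DataLoader

# download CIFAR-10 into ./data and convert images to tensors
train_set = datasets.CIFAR10(
    root='./data',
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # expected: (64, 3, 32, 32) and (64,)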

Comments
  • Support Poolformer

    • [x] build poolformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison (the oneflow version is too slow; to be resolved)
    New Features Priority: 0 
    opened by thinksoso 16
  • delete flowvision.models._util

    1. Under flowvision.models there are both _utils.py and utils.py.
    2. The IntermediateLayerGetter method is duplicated in flowvision.models._utils.py and flowvision.models.segmentation.seg_utils.py.

    So delete flowvision.models._utils.py and, for now, reference flowvision.models.segmentation.seg_utils.py instead.

    Priority: 1 Improvements 
    opened by kaijieshi7 9
  • pickle module :EOFError Ran out of input

    When I want to use the vit_tiny_patch16_224 model from the flowvision module, it raises EOFError: Ran out of input. The environment is a 3090 GPU on the OneFlow training platform: oneflow-0.7.0+torch-1.8.1-cu11.2-cudnn8.

    opened by WanShaw 8
  • Support UniFormer

    • [x] build uniformer model
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo small_plus
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by thinksoso 6
  • add LeViT

    • [x] build model
    • [x] update init.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update readme
    • [x] update changelog
    • [x] pytorch speed comparison
    opened by kaijieshi7 5
  • Error when extracting the pretrained weight file

    When using a model from models, e.g. model = vgg11(pretrained=True), the zip weight file downloads successfully but an error occurs during extraction, so the extraction is interrupted and the parameter file is incomplete. If the downloaded zip is extracted manually, the model works fine. Several models have the same problem.

    Traceback (most recent call last):
      File "temp.py", line 77, in <module>
        model = vgg11(pretrained=True)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 182, in vgg11
        return _vgg("vgg11", "A", False, pretrained, progress, **kwargs)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/vgg.py", line 156, in _vgg
        state_dict = load_state_dict_from_url(model_urls[arch], progress=progress)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 146, in load_state_dict_from_url
        return _legacy_zip_load(cached_file, model_dir, map_location, delete_file)
      File "/usr/local/miniconda3/lib/python3.7/site-packages/flowvision/models/utils.py", line 78, in _legacy_zip_load
        f.extractall(model_dir)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1636, in extractall
        self._extract_member(zipinfo, path, pwd)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1691, in _extract_member
        shutil.copyfileobj(source, target)
      File "/usr/local/miniconda3/lib/python3.7/shutil.py", line 79, in copyfileobj
        buf = fsrc.read(length)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 930, in read
        data = self._read1(n)
      File "/usr/local/miniconda3/lib/python3.7/zipfile.py", line 1006, in _read1
        data = self._decompressor.decompress(data, n)
    zlib.error: Error -2 while decompressing data: inconsistent stream state
    
    opened by Alive1024 5
  • module 'flowvision.models' has no attribute 'face_recognition'

    Hello, I need a way to create an iresnet model. I saw in the documentation that flowvision has iresnet models, but when I import and use resnest50 = flowvision.models.face_recognition.iresnest50(pretrained=False, progress=True), Python says module 'flowvision.models' has no attribute 'face_recognition'. What could be the problem?

    good first issue Bug Fixes 
    opened by PhilippShemetov 4
  • add model: regionvit

    • [x] build model (the F.unfold operator is not supported: https://github.com/Oneflow-Inc/oneflow/issues/3785)
    • [x] update init.py in models
    • [x] convert pretrained weight
    • [x] inference test on imagenet and update model_zoo
    • [x] update docs
    • [x] update changelog
    • [x] pytorch speed comparison
    New Features 
    opened by kaijieshi7 4
  • Add speed test script

    How to run the script:

    cd ci/check
    bash run_speed_test.sh
    

    The results are written to the result file in the current directory.

    Problems found so far with the speed test script:

    These crash when run with import torch as flow:

    • vit
    • conv_mixer
    • crossformer
    • cswin
    • mlp_mixer
    • pvt
    • res_mlp
    • vgg

    These also error out on their own when the input is 224x224:

    • efficientnet
    • res2net
    Priority: 0 Improvements Bug Fixes 
    opened by Ldpe2G 4
  • add useful model utils

    TODO

    Model relative

    • [x] freeze_bn
    • [ ] unfreeze_bn
    • [x] ActivationHook
    • [ ] freeze_unfreeze_fn

    Others

    • [x] random seed

    Test

    • [x] test freeze_bn
    • [ ] test activation_hook
    New Features Priority: 2 
    opened by rentainhe 4
  • bug: module 'oneflow.nn' has no attribute 'ReLU'

    oneflow/nn/__init__.py

    from oneflow.python.ops.math_ops import fused_scale_tril
    from oneflow.python.ops.math_ops import fused_scale_tril_softmax_dropout
    from oneflow.python.ops.math_ops import relu
    from oneflow.python.ops.math_ops import tril

    Should it be imported as ReLU? Or did I install the wrong oneflow version? flowvision-0.1.0, oneflow==0.7.0+cu102

    bug 
    opened by zhanggj821 3
  • The flow.div operator is not aligned with torch.div

    import oneflow as flow
    import torch
    import numpy as np
    
    a = np.random.randn(3,3).astype(np.float32)
    
    b = 2
    
    torch_a = torch.from_numpy(a)
    flow_a = flow.from_numpy(a)
    
    print(torch.div(torch_a,b,rounding_mode='floor'))
    print(flow.div(flow_a,b).floor())
    print(flow.div(flow_a,b,rounding_mode='floor'))
    
    opened by triple-Mu 0
  • ResNet-50 training

    Reproduce ResNet-50 training and align accuracy, following the existing project under vision.

    Reference

    Main goals

    • [ ] 2022.05.11 - 2022.05.12: get familiar with the classification training code under vision, configure the dataset, and get it running.
    • [ ] 2022.05.12 - 2022.05.20: reproduce the ResNet-50 training code against timm and pytorch, align the training conditions, and test multi-GPU training.
    • [ ] 2022.05.21 - 2022.05.27: compare accuracy differences, tune to reproduce the accuracy, and finally replace the pretrained weights with the oneflow-trained version.

    Project owner: 林松. Expected completion: 2022.05.27

    Related PRs

    The corresponding PRs are listed here; since one issue may map to multiple PRs, a table is provided:

    | PR | Author | Reviewer | Date |
    | --- | --- | --- | --- |
    | Initial code upload | 林松 | zzzzzzz | 20220510 |

    opened by triple-Mu 0
  • Vision validation - improve the training projects under Vision

    Vision already has a reference project that ports the Swin-T training code for training models under Vision, but for most models in vision the accuracy cannot yet be reproduced. We are therefore starting a project to improve training: reproduce the accuracy of the models implemented in vision, and gradually replace the ported weights with weights trained with oneflow itself. This is a tentative plan and needs 2-3 interns to complete.

    Reference projects:

    • https://github.com/rwightman/pytorch-image-models
    • https://github.com/microsoft/Swin-Transformer

    Training tasks and the first batch of models whose accuracy needs to be reproduced:

    • Improve the project under Vision at https://github.com/Oneflow-Inc/vision/tree/main/projects/classification and get familiar with how it is used (basically the same as for Swin-T)
    • The models whose accuracy needs to be reproduced in the first stage, together with the related papers, are listed in the table below:

    | Model | Paper | Assignee | PR |
    |:----:|:----:|:----:|:----:|
    | ResNet50 | ResNet strikes back: An improved training procedure in timm | 林松 | |
    | DeiT | Training data-efficient image transformers & distillation through attention | | |
    | Swin-Transformer | Swin Transformer: Hierarchical Vision Transformer using Shifted Windows | 林德铝 | |
    | DeiT III | DeiT III: Revenge of ViT | | |

    • Required hardware: an 8x V100 machine that can fit a per-GPU batch size of 256
    opened by rentainhe 0
Releases(v0.1.0)
  • v0.1.0(Feb 17, 2022)

    Flowvision V0.1.0 Stable Release

    New Features

    • Support trunc_normal_ in flowvision.layers.weight_init #92
    • Support DeiT model #115
    • Support PolyLRScheduler and TanhLRScheduler in flowvision.scheduler #85
    • Add resmlp_12_224_dino model and pretrained weight #128
    • Support ConvNeXt model #93
    • Add ReXNet weights #132

    Bug Fixes

    • Fix F.normalize usage in SSD #116
    • Fix bug in EfficientNet and Res2Net #122
    • Fix incorrect pretrained weight usage in vit_small_patch32_384 and res2net50_48w_2s #128

    Improvements

    • Refactor trunc_normal_ and linspace usage in Swin-T, Cross-Former, PVT and CSWin models #100
    • Refactor Vision Transformer model #115
    • Refine flowvision.models.ModelCreator to support ModelCreator.model_list func #123
    • Refactor README #124
    • Refine load_state_dict_from_url in flowvision.models.utils to support downloading pretrained weights to cache dir ~/.oneflow/flowvision_cache #127
    • Rebuild a cleaner model zoo and test all the models with pretrained weights released in flowvision #128

    Docs Update

    • Update Vision Transformer docs #115
    • Add Getting Started docs #124
    • Add resmlp_12_224_dino docs #128
    • Fix VGG docs bug #128
    • Add ConvNeXt docs #93

    Contributors

    A total of 5 developers contributed to this release. Thanks @rentainhe, @simonJJJ, @kaijieshi7, @lixiang007666, @Ldpe2G

Owner
OneFlow