[CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations

Overview

VirTex: Learning Visual Representations from Textual Annotations

Karan Desai and Justin Johnson
University of Michigan


CVPR 2021 arxiv.org/abs/2006.06666

Model Zoo, Usage Instructions and API docs: kdexd.github.io/virtex

VirTex is a pretraining approach that uses semantically dense captions to learn visual representations. We train CNN + Transformer models from scratch on COCO Captions, and transfer the CNN to downstream vision tasks including image classification, object detection, and instance segmentation. VirTex matches or outperforms models that use ImageNet for pretraining -- both supervised and unsupervised -- despite using up to 10x fewer images.

[Figure: VirTex model overview]

Get the pretrained ResNet-50 visual backbone from our best performing VirTex model in one line without any installation!

import torch

# That's it, this one line only requires PyTorch.
model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
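
Once loaded, the backbone can be used like any other PyTorch module. A minimal sketch, assuming the returned module follows the standard torchvision ResNet-50 interface (the exact output head may differ):

import torch

model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
model.eval()

# One dummy 224x224 RGB image; use your own preprocessing in practice.
image = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    features = model(image)
print(features.shape)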

Note (for users returning from before January 2021):

The pretrained models in our model zoo have changed from v1.0 onwards. They are slightly better tuned than the older models, and reproduce the results in our CVPR 2021 accepted paper (arXiv v2). Some training and evaluation hyperparameters have changed since v0.9. Please refer to CHANGELOG.md.

Usage Instructions

  1. How to setup this codebase?
  2. VirTex Model Zoo
  3. How to train your VirTex model?
  4. How to evaluate on downstream tasks?

Full documentation is available at kdexd.github.io/virtex.

Citation

If you find this code useful, please consider citing:

@inproceedings{desai2021virtex,
    title={{VirTex: Learning Visual Representations from Textual Annotations}},
    author={Karan Desai and Justin Johnson},
    booktitle={CVPR},
    year={2021}
}

Acknowledgments

We thank Harsh Agrawal, Mohamed El Banani, Richard Higgins, Nilesh Kulkarni and Chris Rockwell for helpful discussions and feedback on the paper. We thank Ishan Misra for discussions regarding PIRL evaluation protocol; Saining Xie for discussions about replicating iNaturalist evaluation as MoCo; Ross Girshick and Yuxin Wu for help with Detectron2 model zoo; Georgia Gkioxari for suggesting the Instance Segmentation pretraining task ablation; and Stefan Lee for suggestions on figure aesthetics. We thank Jia Deng for access to extra GPUs during project development; and UMich ARC-TS team for support with GPU cluster management. Finally, we thank all the Starbucks outlets in Ann Arbor for many hours of free WiFi. This work was partially supported by the Toyota Research Institute (TRI). However, note that this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.

Comments
  • run on single input image

    Hi,

    I would like to evaluate your work on a single image for image captioning. Can you tell me the steps I should follow for a single input? For instance, given a folder of images, how would I use your model for inference only on the folder of images?

    Looking at the captioning task in your description, I am not sure how to go about using my own dataset for evaluating the model.

    Thanks

    opened by nikky4D 15
  • Training loss acts strangely after resuming

    Hi,

    I want to reproduce your pre-training result. An accident interrupted my training; I resumed it with the flag "--resume-from" and it acts weirdly. The training and validation loss jumped dramatically at the beginning and then decreased, which suggests a problem with the restoring. Could you help me with this?

    opened by BaohaoLiao 11
  • BatchNormalization's Running Stats are Accumulated in ImageNet Linear Evaluation

    Hi,

    Thanks for the nice paper and clear code!

    I found that the models are set with .train() in clf_linear.py. Thus the running averages (i.e., the states) of the BatchNormalization layers are accumulated when training on the ImageNet dataset (via calls to the forward function), and the backbone seems not to be fully frozen. Is this a deliberate design for this fine-tuning task?
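
    For reference, a minimal sketch of one common way to keep BatchNorm running statistics frozen during linear evaluation (an illustration of the concern above, not necessarily the authors' intended design):

    import torch.nn as nn

    def freeze_bn_stats(model: nn.Module) -> None:
        # Put only the BatchNorm layers in eval mode so their running
        # mean/variance are not updated by linear-probe forward passes.
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                m.eval()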

    Best, Hao

    opened by airsplay 6
  • Removed link for pretrained model

    Hi,

    I am trying to download the pretrained model for image captioning, but the download link has been removed. Could you please update the download link?

    opened by zhuang93 4
  • unable to find a valid cuDNN algorithm to run convolution

    Sorry to bother you, but I ran into this problem and cannot find a way to fix it. It happens when I train the base VirTex model. I have updated the cuDNN version to 8.0.3 (the former version was 7.6.5); both versions give this error.

    opened by Charlie-zhang1406 4
  • Question about SentencePiece [SOS] and [EOS] ID.

    Hi, I saw that in SentencePieceTrainer you disable the built-in BOS and EOS tokens ("--bos_id=-1 --eos_id=-1") and add "--control_symbols=[SOS],[EOS],[MASK]". However, during captioning you define sos_index: int = 1 and eos_index: int = 2. I am wondering whether these setups have any effect?
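
    A small sketch of how those IDs line up, assuming SentencePiece's usual convention that user-defined control symbols take the first free IDs after [UNK] (the corpus path and vocab size below are placeholders):

    import sentencepiece as spm

    # Mirror the quoted flags: disable built-in BOS/EOS, add control symbols.
    spm.SentencePieceTrainer.train(
        input="captions.txt",      # placeholder corpus path
        model_prefix="vocab",
        vocab_size=10000,
        bos_id=-1,
        eos_id=-1,
        control_symbols=["[SOS]", "[EOS]", "[MASK]"],
    )

    sp = spm.SentencePieceProcessor(model_file="vocab.model")
    # With BOS/EOS disabled, [UNK] takes id 0 and the control symbols get
    # the next ids in order: [SOS]=1, [EOS]=2, [MASK]=3, matching
    # sos_index=1 and eos_index=2 in the captioning code.
    for piece in ["[SOS]", "[EOS]", "[MASK]"]:
        print(piece, sp.piece_to_id(piece))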

    opened by nooralahzadeh 4
  • No loss when pretraining on token classification

    I am trying to pretrain using the token classification method. I cloned this repo and was just trying to reproduce the results from the paper, but I am running into problems when pretraining with token classification: it seems the loss values are not in the output_dict variable.

    When I use pretrain_virtex.py and log every 20 iterations, I get the following output. 2021-11-16T12:20:04.960052+0000: Iter 20 | Time: 0.764 sec | ETA: 54h 39m [Loss nan] [GPU 8774 MB]

    Do you have any idea what could be wrong in the code?

    opened by alexkern1997 3
  • Possible inconsistency in data preprocessing

    Hi, thank you so much for sharing this code. It is very helpful.

    However, I am confused about the data preprocessing configuration. In the config files, Caffe-style image mean and std are specified, but it seems they are not used in the code. Instead, the code appears to hard-code torchvision-style mean and std (here). Can you confirm that both pretraining and fine-tuning use the latter?

    Furthermore, I am not sure whether the images are in the 0-255 range or 0-1. For Caffe-style mean and std it should be 0-255, but with your hard-coded mean and std it should be 0-1. However, I noticed you are using OpenCV to load images, which loads them in 0-255, and I did not find anywhere in the code where they are transformed into 0-1, except in supervised pretraining (here).
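
    To make the two conventions concrete, here is a generic sketch (the Caffe-style BGR means below are the common ImageNet values, used purely for illustration):

    import numpy as np

    image = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)  # HWC, RGB, 0-255

    # torchvision-style: scale to [0, 1], then normalize per channel.
    tv_mean = np.array([0.485, 0.456, 0.406])
    tv_std = np.array([0.229, 0.224, 0.225])
    tv_out = (image / 255.0 - tv_mean) / tv_std

    # Caffe-style: keep 0-255, convert RGB -> BGR, subtract channel means.
    caffe_mean = np.array([103.53, 116.28, 123.675])  # BGR means
    caffe_out = image[..., ::-1] - caffe_mean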

    Could you please comment on these issues? It is especially important to make sure the configuration is identical across all pretraining and downstream settings. Since you fine-tune all layers and don't freeze the stem, such inconsistencies are hard to notice, because the fine-tuning process would fix them to some extent.

    Thank you so much.

    opened by alirezazareian 3
  • Pre-training on another dataset

    Hi,

    Thank you for making this code public!

    I want to pre-train a captioning model on another dataset (ARCH dataset). I went through your codebase and realized that first I need to create a Dataset class for my dataset similar to your Dataset class in virtex/data/datasets/coco_captions.py. Next, I will need to make a modified version of virtex/data/datasets/captioning.py.
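
    For what it's worth, a hypothetical skeleton of such a Dataset class (the class name and returned fields are assumptions, not the repo's actual API):

    from torch.utils.data import Dataset

    class ArchCaptionsDataset(Dataset):
        # Hypothetical stand-in mirroring the structure of the COCO class
        # in virtex/data/datasets/coco_captions.py.
        def __init__(self, root: str, split: str):
            self.samples = []  # fill with (image_path, caption) pairs for `split`

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            image_path, caption = self.samples[idx]
            return {"image_id": idx, "image_path": image_path, "caption": caption}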

    Somehow the files in virtex/data/datasets/ are all ignored by git and I can't make any of them visible. Can you please help me with this? I would also appreciate any suggestions on how to modify the code at this stage so as to cause the least disruption to the functions and classes that rely on the Dataset classes.

    Many thanks, George Batchkala

    opened by GeorgeBatch 2
  • The weight file on http://kdexd.xyz/virtex/virtex/usage/model_zoo.html was removed

    Hello, your work is very attractive to me, but when I tried to reproduce your excellent research results, I found that the weight file on http://kdexd.xyz/virtex/virtex/usage/model_zoo.html has been removed. I hope you can provide working links to the weight files so that I can reproduce your excellent work.

    opened by hubin111 2
  • torch.hub.load("kdexd/virtex", "resnet50", pretrained=True) not working

    torch.hub.load("kdexd/virtex", "resnet50", pretrained=True) not working

    I tried running this in a Colab environment.

    Got the below error:

    KeyError                                  Traceback (most recent call last)
    
    <ipython-input-5-e8ec27705300> in <module>()
          1 import torch
          2 # model = torch.hub.load('pytorch/vision:v0.9.0', 'alexnet', pretrained=True)
    ----> 3 model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
          4 model.eval()
    
    2 frames
    
    /root/.cache/torch/hub/kdexd_virtex_master/hubconf.py in resnet50(pretrained, **kwargs)
         31                 "https://umich.box.com/shared/static/gsjqm4i4fm1wpzi947h27wweljd8gcpy.pth",
         32                 progress=False,
    ---> 33             )["model"]
         34         )
         35     return model
    
    KeyError: 'model'
    

    Can you let me know the fix?

    opened by Sumegh-git 2
  • Cog version

    "😵 Uh oh! This model can't be run on Replicate because it was built with a version of Cog that is no longer supported." https://replicate.com/kdexd/virtex-image-captioning

    opened by Jakeukalane 0
  • Training with new Random Seed does not shuffle data

    I've been adapting the example scripts to my own training task, and I've noticed that the scripts do not handle different random seeds as expected. I've found this problem in two places, but there might be more:

    https://github.com/kdexd/virtex/blob/2baba8a4f3a4d80d617b3bc59e4be25b1052db57/scripts/clf_linear.py#L104-L109 https://github.com/kdexd/virtex/blob/2baba8a4f3a4d80d617b3bc59e4be25b1052db57/scripts/pretrain_virtex.py#L68

    The problem is that DistributedSampler (from PyTorch 1.9.0) requires the kwarg "seed" to shuffle differently when shuffle=True. I believe the correct use of DistributedSampler for training with different random seeds would be to add the kwarg seed=_DOWNC.RANDOM_SEED when DistributedSampler is initialized in these two places. As for reshuffling on subsequent epochs, DistributedSampler adds the seed to the epoch number, so nothing needs to change in the epoch-setting code for the sampler.

    https://github.com/pytorch/pytorch/blob/d69c22dd61a2f006dcfe1e3ea8468a3ecaf931aa/torch/utils/data/distributed.py#L100
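
    A runnable sketch of the proposed fix, with single-process stand-in values (in the repo's scripts the dataset comes from the config and the seed would be _DOWNC.RANDOM_SEED):

    import torch
    from torch.utils.data import TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    train_dataset = TensorDataset(torch.arange(100))  # stand-in dataset

    sampler = DistributedSampler(
        train_dataset,
        num_replicas=1, rank=0,  # single-process values for illustration
        shuffle=True,
        seed=42,                 # e.g. the experiment's RANDOM_SEED; the default is always 0
    )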

    Please let me know your thoughts, or if I may have missed something.

    opened by keeganq 0
  • Decoder Attention Weight Visualization

    Hi, thanks for the awesome code base!

    I'm looking to produce visualizations of decoder attention weights similar to those shown in the paper, but I don't think this feature is implemented in the published code (although I may have overlooked it!).

    As best I can tell, the way to do this would be a new TransformerDecoderLayer that returns the multi-head attention's attn_output_weights from its forward method. The visualized attention weights when predicting a single token would then be the average of these weights across all heads. The problem I am finding is that the visualized weights mostly concentrate in the center of the image during captioning on the COCO dataset, whereas the results in the paper show reasonable variation in these weights as tokens are predicted.
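
    For reference, a generic sketch of that extraction step (dimensions assume a 7x7 feature grid; this is not necessarily how the paper's figures were produced):

    import torch
    import torch.nn as nn

    mha = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)
    query = torch.randn(1, 1, 512)    # decoder state for the current token
    memory = torch.randn(1, 49, 512)  # 7x7 grid of visual features

    # With need_weights=True the returned weights are averaged over heads
    # by default, giving a (batch, query_len, key_len) map that can be
    # reshaped to the spatial grid and overlaid on the image.
    _, attn_weights = mha(query, memory, memory, need_weights=True)
    heatmap = attn_weights.reshape(7, 7)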

    Is this the method that you used to create the visualization? Any insight into how this was previously done would be appreciated!

    opened by keeganq 0
  • Add Docker environment & web demo

    Hey @kdexd! 👋

    This pull request makes it possible to run your model inside a Docker environment, which makes it easier for other people to run it. We're using an open source tool called Cog to make this process easier.

    This also means we can make a web page where other people can try out your model! We've implemented image captioning, but it should be pretty easy to add other tasks too if you'd like. View it here: https://replicate.ai/kdexd/virtex-image-captioning

    That page also has instructions on how to use the Docker image, which is on our registry at r8.im/kdexd/virtex-image-captioning.

    In case you're wondering who the heck I am, I'm from Replicate, where we're trying to make machine learning reproducible. So many cool models are being made, but I got frustrated that I couldn't run them, hence we're trying to fix that. :)

    opened by bfirsh 0
  • Fine tuning Virtex for image captioning

    Hi there, I am aware that VirTex used image captioning as a pretraining task and not as the "final goal", but I was wondering whether one could fine-tune the pretrained model (e.g. bicaptioning_R_50_L1_H2048) with additional COCO-Captions-like data in order to get an improved captioning model. Has anyone tried that, or does anyone have suggestions on how to do it? Can any of the scripts in the repository be used or adapted for fine-tuning existing models? Thanks a lot! :)

    opened by freeIsa 1
Releases(v1.4)
  • v1.4(Jan 9, 2022)

    Major changes

    • Python 3.6 support is dropped; the minimum requirement is now Python 3.8. All major library versions are bumped to their latest releases (PyTorch, OpenCV, Albumentations, etc.).
    • Model zoo URLs now point to Dropbox. All pre-trained checkpoint weights are unchanged.
    • Fixed the spike in training loss that occurred when resuming training with pretrain_virtex.py.
    • The documentation theme is changed from Alabaster to Read the Docs; it looks fancier!
    Source code(tar.gz)
    Source code(zip)
  • v1.2(Jul 15, 2021)

    Bug Fix: Beam Search

    The beam search implementation adapted from AllenNLP was better suited to LSTM/GRU (recurrent models) than to transformers (autoregressive models). This version removes the "backpointer" trick from the AllenNLP implementation and improves captioning results for all VirTex models. See below: "Old" metrics are v1.1 (arXiv v2) and "New" metrics are v1.2 (arXiv v3).

    [Table: captioning metrics for all VirTex models, old (v1.1) vs. new (v1.2) beam search]

    This bug does not affect pre-training or other downstream task results. Thanks to Nicolas Carion (@alcinos) and Aishwarya Kamath (@ashkamath) for spotting this issue and helping me to fix it!

    Feature: Nucleus Sampling

    This codebase now supports decoding through nucleus sampling, introduced in The Curious Case of Neural Text Degeneration. Try running the captioning evaluation script with --config-override MODEL.DECODER.NAME nucleus_sampling MODEL.DECODER.NUCLEUS_SIZE 0.9! For consistent behavior with prior versions, the default decoding method remains beam search with 5 beams.

    Note: nucleus sampling gives worse results specifically on COCO Captions, but produces more interesting-sounding language with larger transformers trained on much more data than COCO Captions.

    New config arguments to support this:

    MODEL:
      DECODER:
        # What algorithm to use for decoding. Supported values: {"beam_search",
        # "nucleus_sampling"}.
        NAME: "beam_search"
    
        # Number of beams to decode (1 = greedy decoding). Ignored when decoding
        # through nucleus sampling.
        BEAM_SIZE: 5
    
        # Size of nucleus for sampling predictions. Ignored when decoding through
        # beam search.
        NUCLEUS_SIZE: 0.9
    
        # Maximum length of decoded caption. Decoding may end earlier when [EOS]
        # token is sampled.
        MAX_DECODING_STEPS: 50  # Same as DATA.MAX_CAPTION_LENGTH
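
    For intuition, a generic sketch of nucleus (top-p) sampling over one step's logits (this illustrates the technique itself, not the repo's exact implementation):

    import torch
    import torch.nn.functional as F

    def nucleus_sample(logits: torch.Tensor, nucleus_size: float = 0.9) -> torch.Tensor:
        # Keep the smallest set of tokens whose cumulative probability
        # exceeds nucleus_size, then sample from the renormalized set.
        probs = F.softmax(logits, dim=-1)
        sorted_probs, sorted_idx = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        outside = cumulative - sorted_probs > nucleus_size  # top token is always kept
        sorted_probs[outside] = 0.0
        sorted_probs /= sorted_probs.sum()
        choice = torch.multinomial(sorted_probs, num_samples=1)
        return sorted_idx[choice]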
    
    Source code(tar.gz)
    Source code(zip)
  • v1.1(Apr 4, 2021)

    This version is a small increment over v1.0 with only cosmetic changes and removal of obsolete code. The final results of models trained with this codebase remain unchanged.

    Removed feature extraction support:

    • Removed virtex.downstream.FeatureExtractor and its usage in scripts/clf_voc07.py. By default, the script now only evaluates on global average pooled features (2048-d), as in the CVPR 2021 version of the paper.

    • Removed virtex.modules.visual_backbones.BlindVisualBackbone. I introduced it a long time ago for debugging; it is not of much use anymore.

    Two config-related changes:

    1. Renamed config parameters: (OPTIM.USE_LOOKAHEAD —> OPTIM.LOOKAHEAD.USE), (OPTIM.LOOKAHEAD_ALPHA —> OPTIM.LOOKAHEAD.ALPHA) and (OPTIM.LOOKAHEAD_STEPS —> OPTIM.LOOKAHEAD.STEPS).

    2. Renamed TransformerTextualHead to TransformerDecoderTextualHead for clarity. Model names in config also change accordingly: "transformer_postnorm" —> "transdec_postnorm" (same for prenorm).

    These changes may be breaking if you wrote your own config and explicitly added these arguments.

    Source code(tar.gz)
    Source code(zip)
  • v1.0(Mar 7, 2021)

    CVPR 2021 release of VirTex. Code and pre-trained models reproduce the results reported in the paper: https://arxiv.org/abs/2006.06666v2

    Source code(tar.gz)
    Source code(zip)
Implementation of the GBST block from the Charformer paper, in Pytorch

Charformer - Pytorch Implementation of the GBST (gradient-based subword tokenization) module from the Charformer paper, in Pytorch. The paper proposes

Phil Wang 105 Dec 26, 2022
Various operations like path tracking, counting, etc by using yolov5

Object-tracing-with-YOLOv5 Various operations like path tracking, counting, etc by using yolov5

Pawan Valluri 5 Nov 28, 2022
Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones

HaloNet - Pytorch Implementation of the Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones. This re

Phil Wang 189 Nov 22, 2022
Weakly- and Semi-Supervised Panoptic Segmentation (ECCV18)

Weakly- and Semi-Supervised Panoptic Segmentation by Qizhu Li*, Anurag Arnab*, Philip H.S. Torr This repository demonstrates the weakly supervised gro

Qizhu Li 159 Dec 20, 2022
A PyTorch implementation for our paper "Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation".

Dual-Contrastive-Learning A PyTorch implementation for our paper "Dual Contrastive Learning: Text Classification via Label-Aware Data Augmentation". Y

hoshi-hiyouga 85 Dec 26, 2022
[Preprint] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang

Chasing Sparsity in Vision Transformers: An End-to-End Exploration Codes for [Preprint] Chasing Sparsity in Vision Transformers: An End-to-End Explora

VITA 64 Dec 08, 2022
Clinica is a software platform for clinical research studies involving patients with neurological and psychiatric diseases and the acquisition of multimodal data

Clinica Software platform for clinical neuroimaging studies Homepage | Documentation | Paper | Forum | See also: AD-ML, AD-DL ClinicaDL About The Proj

ARAMIS Lab 165 Dec 29, 2022
Generate Cartoon Images using Generative Adversarial Network

AvatarGAN ✨ Generate Cartoon Images using DC-GAN Deep Convolutional GAN is a generative adversarial network architecture. It uses a couple of guidelin

Aakash Jhawar 50 Dec 29, 2022
Implement the Pareto Optimizer and pcgrad to make a self-adaptive loss for multi-task

multi-task_losses_optimizer Implement the Pareto Optimizer and pcgrad to make a self-adaptive loss for multi-task. Already tested; no CUDA out-of-memory issues. ##Par

14 Dec 25, 2022
Code for "MetaMorph: Learning Universal Controllers with Transformers", Gupta et al, ICLR 2022

MetaMorph: Learning Universal Controllers with Transformers This is the code for the paper MetaMorph: Learning Universal Controllers with Transformers

Agrim Gupta 50 Jan 03, 2023
[MedIA2021]MIDeepSeg: Minimally Interactive Segmentation of Unseen Objects from Medical Images Using Deep Learning

MIDeepSeg: Minimally Interactive Segmentation of Unseen Objects from Medical Images Using Deep Learning [MedIA or Arxiv] and [Demo] This repository pr

Healthcare Intelligence Laboratory 92 Dec 08, 2022
This is the repository for the paper "Have I done enough planning or should I plan more?"

Metacognitive Learning Tool box https://re.is.mpg.de What Is This? This repository contains two modules used to analyse metacognitive learning in huma

0 Dec 01, 2021
1st Solution For NeurIPS 2021 Competition on ML4CO Dual Task

KIDA: Knowledge Inheritance in Data Aggregation This project releases our 1st place solution on NeurIPS2021 ML4CO Dual Task. Slide and model weights a

MEGVII Research 24 Sep 08, 2022
Discover hidden deepweb pages

DeepWeb Scrapper Att: Demo version A simple script to scrape the deep web to find pages. It will report whether any of those exist and save the results to a file. You s

Héber Júlio 77 Oct 02, 2022
PyTorch implementation of Munchausen Reinforcement Learning based on DQN and SAC. Handles discrete and continuous action spaces

Exploring Munchausen Reinforcement Learning This is the project repository of my team in the "Advanced Deep Learning for Robotics" course at TUM. Our

Mohamed Amine Ketata 10 Mar 10, 2022
Justmagic - Use a function as a method with this mystic script, like in Nim

justmagic Use a function as a method with this mystic script, like in Nim. Just

witer33 8 Oct 08, 2022
Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation (NeurIPS 2021)

Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation (NeurIPS 2021) The implementation of Reducing Infromation Bottleneck for W

Jungbeom Lee 81 Dec 16, 2022
This repository contains code for the paper "Disentangling Label Distribution for Long-tailed Visual Recognition", published at CVPR' 2021

Disentangling Label Distribution for Long-tailed Visual Recognition (CVPR 2021) Arxiv link Blog post This codebase is built on Causal Norm. Install co

Hyperconnect 85 Oct 18, 2022
An SSL analyzer which can analyze a target domain's certificate.

ssl_analyzer An SSL analyzer which can analyze a target domain's certificate. Analyze the domain name SSL certificate information according to the inp

vincent 17 Dec 12, 2022
An annotation of yolov5-5.0

Code version: 0714 commit #4000 $ git clone https://github.com/ultralytics/yolov5 $ cd yolov5 $ git checkout 720aaa65c8873c0d87df09e3c1c14f3581d4ea61 This code is just an annotated version

Laughing 229 Dec 17, 2022