The official homepage of the (outdated) COCO-Stuff 10K dataset.

COCO-Stuff 10K dataset v1.1 (outdated)

Holger Caesar, Jasper Uijlings, Vittorio Ferrari

Overview

COCO-Stuff example annotations

Welcome to the official homepage of the COCO-Stuff [1] dataset. COCO-Stuff augments the popular COCO [2] dataset with pixel-level stuff annotations. These annotations can be used for scene understanding tasks such as semantic segmentation, object detection and image captioning.

Highlights

  • 10,000 complex images from COCO [2]
  • Dense pixel-level annotations
  • 91 thing and 91 stuff classes
  • Instance-level annotations for things from COCO [2]
  • Complex spatial context between stuff and things
  • 5 captions per image from COCO [2]

Updates

  • 11 Jul 2017: Added working Deeplab models for Resnet and VGG
  • 06 Apr 2017: Dataset version 1.1: Modified label indices
  • 31 Mar 2017: Published annotations in JSON format
  • 09 Mar 2017: Added label hierarchy scripts
  • 08 Mar 2017: Corrections to table 2 in arXiv paper [1]
  • 10 Feb 2017: Added script to extract SLICO superpixels in annotation tool
  • 12 Dec 2016: Dataset version 1.0 and arXiv paper [1] released

Results

The current release of COCO-Stuff 10K publishes both the training and test annotations, and users report their performance individually. We invite users to send us their results so that we can complement this table. In the near future we will extend COCO-Stuff to all images in COCO and organize an official challenge, where the test annotations will be known only to the organizers.

For the updated table please click here.

Method                       Source  Class-average accuracy  Global accuracy  Mean IOU  FW IOU
FCN-16s [3]                  [1]     34.0%                   52.0%            22.7%     -
Deeplab VGG-16 (no CRF) [4]  [1]     38.1%                   57.8%            26.9%     -
FCN-8s [3]                   [6]     38.5%                   60.4%            27.2%     -
DAG-RNN + CRF [6]            [6]     42.8%                   63.0%            31.2%     -
OHE + DC + FCN+ [5]          [5]     45.8%                   66.6%            34.3%     51.2%
Deeplab ResNet (no CRF) [4]  -       45.5%                   65.1%            34.4%     50.4%
W2V + DC + FCN+ [5]          [5]     45.1%                   66.1%            34.7%     51.0%

Dataset

Filename                 Description                                                     Size
cocostuff-10k-v1.1.zip   COCO-Stuff dataset v1.1, images and annotations                 2.0 GB
cocostuff-10k-v1.1.json  COCO-Stuff dataset v1.1, annotations in JSON format (optional)  62.3 MB
cocostuff-labels.txt     A list of the 1+91+91 classes in COCO-Stuff                     2.3 KB
cocostuff-readme.txt     This document                                                   6.5 KB

Older files:
cocostuff-10k-v1.0.zip   COCO-Stuff dataset v1.0, including images and annotations       2.6 GB

Usage

To use the COCO-Stuff dataset, please follow these steps:

  1. Download or clone this repository using git: git clone https://github.com/nightrome/cocostuff10k.git
  2. Open the dataset folder in your shell: cd cocostuff10k
  3. If you have Matlab, run the following commands:
  • Add the code folder to your Matlab path: startup();
  • Run the demo script in Matlab: demo_cocoStuff();
  • The script displays an image, its thing, stuff and thing+stuff annotations, as well as the image captions.
  4. Alternatively, run the following Linux commands or manually download and unpack the dataset:
  • wget --directory-prefix=downloads http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/cocostuff-10k-v1.1.zip
  • unzip downloads/cocostuff-10k-v1.1.zip -d dataset/

MAT Format

The COCO-Stuff annotations are stored in separate .mat files per image. These files follow the same format as used by Tighe et al. Each file contains the following fields:

  • S: The pixel-wise label map of size [height x width].
  • names: The names of the thing and stuff classes in COCO-Stuff. For more details see Label Names & Indices.
  • captions: Image captions from [2] that are annotated by 5 distinct humans on average.
  • regionMapStuff: A map of the same size as S that contains the indices for the approx. 1000 regions (superpixels) used to annotate the image.
  • regionLabelsStuff: A list of the stuff labels for each superpixel. The indices in regionMapStuff correspond to the entries in regionLabelsStuff.
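As an illustration of how regionMapStuff and regionLabelsStuff fit together, the sketch below builds toy stand-ins for these fields (the data is made up; only the field layout follows the description above) and recovers a pixel-wise stuff label map:

```python
import numpy as np

# Toy stand-ins for the fields of one COCO-Stuff .mat annotation file.
# regionMapStuff assigns each pixel to a superpixel (1-based indices);
# regionLabelsStuff gives one stuff label per superpixel.
region_map_stuff = np.array([[1, 1, 2],
                             [1, 3, 2]])          # [height x width]
region_labels_stuff = np.array([105, 118, 124])   # labels of superpixels 1..3

# Recover a pixel-wise stuff label map by looking up each superpixel's label.
stuff_label_map = region_labels_stuff[region_map_stuff - 1]

print(stuff_label_map.tolist())  # → [[105, 105, 118], [105, 124, 118]]
```

The same lookup applied to the real fields yields the stuff portion of the label map S.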

JSON Format

We also provide stuff and thing annotations in the COCO-style JSON format. The thing annotations are copied from COCO. We encode every stuff class present in an image as a single annotation, using the RLE encoding format of COCO. To get the annotations:

  • Either download them: wget --directory-prefix=dataset/annotations-json http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/cocostuff-10k-v1.1.json
  • Or extract them from the .mat file annotations using this Python script.
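For intuition, COCO's uncompressed RLE lists run lengths of a binary mask in column-major order, starting with the count of background pixels. The sketch below implements that uncompressed variant (pycocotools uses a compressed string encoding in practice; the function name here is ours):

```python
import numpy as np

def binary_mask_to_rle(mask):
    """Encode a binary mask as COCO-style uncompressed RLE:
    run lengths in column-major (Fortran) order, starting with
    the number of leading zeros (which may be 0)."""
    flat = np.asarray(mask, dtype=np.uint8).flatten(order="F")
    counts = []
    prev = 0  # COCO RLE always starts by counting zeros
    run = 0
    for v in flat:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev = v
            run = 1
    counts.append(run)
    return {"size": list(mask.shape), "counts": counts}

mask = np.array([[0, 1],
                 [0, 1]])
print(binary_mask_to_rle(mask))  # → {'size': [2, 2], 'counts': [2, 2]}
```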

Label Names & Indices

To be compatible with COCO, version 1.1 of COCO-Stuff has 91 thing classes (1-91), 91 stuff classes (92-182) and 1 class "unlabeled" (0). Note that 11 of the thing classes from COCO 2015 do not have any segmentation annotations. The classes desk, door and mirror could be either stuff or things and therefore occur in both COCO and COCO-Stuff. To avoid confusion we add the suffix "-stuff" to those classes in COCO-Stuff. The full list of classes can be found here.

The older version 1.0 of COCO-Stuff had 80 thing classes (2-81), 91 stuff classes (82-172) and 1 class "unlabeled" (1).
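As a quick sanity check on the v1.1 layout, a small helper (the function name is ours) can classify a label index:

```python
def label_type_v11(index):
    """Classify a label index under the COCO-Stuff v1.1 layout:
    0 = "unlabeled", 1-91 = thing classes, 92-182 = stuff classes."""
    if index == 0:
        return "unlabeled"
    if 1 <= index <= 91:
        return "thing"
    if 92 <= index <= 182:
        return "stuff"
    raise ValueError(f"index {index} is outside the 0-182 range of COCO-Stuff v1.1")

print(label_type_v11(0), label_type_v11(45), label_type_v11(120))
# → unlabeled thing stuff
```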

Label Hierarchy

The hierarchy of labels is stored in CocoStuffClasses. To visualize it, run CocoStuffClasses.showClassHierarchyStuffThings() (also available for just stuff and just thing classes) in Matlab. The output should look similar to the following figure:

[Figure: COCO-Stuff label hierarchy]

Semantic Segmentation Models

To encourage further research on stuff and things we provide the trained semantic segmentation models (see Sect. 4.4 in [1]).

DeepLab VGG-16

Use the following steps to download and set up the DeepLab [4] semantic segmentation model trained on COCO-Stuff. It requires deeplab-public-ver2, which is built on Caffe:

  1. Install CUDA. We recommend version 7.0. For version 8.0 you will need to apply the fix described here in step 3.
  2. Download deeplab-public-ver2: git submodule update --init models/deeplab/deeplab-public-ver2
  3. Compile and configure deeplab-public-ver2 following the author's instructions. Depending on your system setup you might have to install additional packages, but a minimum setup could look like this:
  • cd models/deeplab/deeplab-public-ver2
  • cp Makefile.config.example Makefile.config
  • Optionally add CuDNN support or modify library paths in the Makefile.
  • make all -j8
  • cd ../..
  4. Configure the COCO-Stuff dataset:
  • Create folders: mkdir models/deeplab/cocostuff && mkdir models/deeplab/cocostuff/data
  • Create a symbolic link to the images: cd models/deeplab/cocostuff/data && ln -s ../../../../dataset/images images && cd ../../../..
  • Convert the annotations by running the Matlab script: startup(); convertAnnotationsDeeplab();
  5. Download the base VGG-16 model:
  • wget --directory-prefix=models/deeplab/cocostuff/model/deeplabv2_vgg16 http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplabv2_vgg16_init.caffemodel
  6. Run cd models/deeplab && ./run_cocostuff_vgg16.sh to train and test the network on COCO-Stuff.

DeepLab ResNet 101

The default DeepLab model takes center crops of size 513x513 pixels if any side of the image is larger than that. Since we want to segment the whole image at test time, we instead resize the images to 513x513, perform the semantic segmentation and then rescale the predictions back to the original image size. Note that without the final rescaling step, the performance might differ slightly.

  1. Follow steps 1-4 of the DeepLab VGG-16 section above.
  2. Download the base ResNet model:
  • wget --directory-prefix=models/deeplab/cocostuff/model/deeplabv2_resnet101 http://calvin.inf.ed.ac.uk/wp-content/uploads/data/cocostuffdataset/deeplabv2_resnet101_init.caffemodel
  3. Rescale the images and annotations:
  • cd models/deeplab
  • python rescaleImages.py
  • python rescaleAnnotations.py
  4. Run ./run_cocostuff_resnet101.sh to train and test the network on COCO-Stuff.
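The resize/segment/rescale procedure above relies on nearest-neighbor interpolation for label maps, so that rescaling never blends two labels into a new value. A NumPy-only sketch of that last step (the function name is ours; the actual scripts are rescaleImages.py and rescaleAnnotations.py):

```python
import numpy as np

def resize_nearest(label_map, out_h, out_w):
    """Nearest-neighbor resize for integer label maps. Unlike bilinear
    interpolation, this never mixes two labels into a new value."""
    in_h, in_w = label_map.shape
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source col for each output col
    return label_map[rows[:, None], cols[None, :]]

# Rescale a prediction made at the network's working resolution
# back to a (toy) original image size.
pred = np.zeros((4, 4), dtype=np.int32)
pred[:, 2:] = 7                         # right half labeled class 7
restored = resize_nearest(pred, 2, 2)
print(restored.tolist())                # → [[0, 7], [0, 7]]
```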

Annotation Tool

In [1] we present a simple and efficient stuff annotation tool which was used to annotate the COCO-Stuff dataset. It uses a paintbrush tool to annotate SLICO superpixels (precomputed using the code of Achanta et al.) with stuff labels. These annotations are overlaid with the existing pixel-level thing annotations from COCO. We provide a basic version of our annotation tool:

  • Prepare the required data:
    • Specify a username in annotator/data/input/user.txt.
    • Create a list of images in annotator/data/input/imageLists/<user>.list.
    • Extract the thing annotations for all images in Matlab: extractThings().
    • Extract the superpixels for all images in Matlab: extractSLICOSuperpixels().
    • To enable or disable superpixels, thing annotations and polygon drawing, take a look at the flags at the top of CocoStuffAnnotator.m.
  • Run the annotation tool in Matlab: CocoStuffAnnotator();
    • The tool writes the .mat label files to annotator/data/output/annotations.
    • To create a .png preview of the annotations, run annotator/code/exportImages.m in Matlab. The previews will be saved to annotator/data/output/preview.

Misc

References

Licensing

COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:

Contact

If you have any questions regarding this dataset, please contact us at holger-at-it-caesar.com.
