PlantStereo

This is the official implementation code for the paper "PlantStereo: A Stereo Matching Benchmark for Plant Surface Dense Reconstruction".

Paper

PlantStereo: A Stereo Matching Benchmark for Plant Surface Dense Reconstruction [preprint]

Qingyu Wang, Baojian Ma, Wei Liu, Mingzhao Lou, Mingchuan Zhou*, Huanyu Jiang and Yibin Ying

College of Biosystems Engineering and Food Science, Zhejiang University.

Example and Overview

We give examples from our dataset, which covers four crops: spinach, tomato, pepper and pumpkin.

The number of image pairs and the image resolution of each subset are listed below:

Subset    Train   Validation   Test   All   Resolution
Spinach   160     40           100    300   1046×606
Tomato    80      20           50     150   1040×603
Pepper    150     30           32     212   1024×571
Pumpkin   80      20           50     150   1024×571
All       470     110          232    812   —

Analysis

We evaluated the disparity distribution of different stereo matching datasets.

Format

The data is organized in the following format: sub-pixel-level disparity images are saved in .tiff format, and pixel-level disparity images are saved in .png format.

PlantStereo
├── PlantStereo2021
│   ├── tomato
│   │   ├── training
│   │   │   ├── left_view
│   │   │   │   ├── 000000.png
│   │   │   │   ├── 000001.png
│   │   │   │   ├── ......
│   │   │   ├── right_view
│   │   │   │   ├── ......
│   │   │   ├── disp
│   │   │   │   ├── ......
│   │   │   ├── disp_high_acc
│   │   │   │   ├── 000000.tiff
│   │   │   │   ├── ......
│   │   ├── testing
│   │   │   ├── left_view
│   │   │   ├── right_view
│   │   │   ├── disp
│   │   │   ├── disp_high_acc
│   ├── spinach
│   ├── ......
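For reference, the snippet below is a minimal loading sketch (not part of the official code) that assumes the layout above; the subset and sample index are placeholders, and OpenCV's IMREAD_UNCHANGED flag is used so that the bit depth of the .png and .tiff ground truth is preserved.

import cv2
import numpy as np

# Hypothetical local path following the layout above; adjust to your download location.
root = "PlantStereo/PlantStereo2021/tomato/training"

left = cv2.imread(f"{root}/left_view/000000.png", cv2.IMREAD_COLOR)
right = cv2.imread(f"{root}/right_view/000000.png", cv2.IMREAD_COLOR)

# Pixel-level ground truth disparity (.png), cast to float for evaluation.
disp_px = cv2.imread(f"{root}/disp/000000.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Sub-pixel ground truth disparity (.tiff), stored as floating point.
disp_subpx = cv2.imread(f"{root}/disp_high_acc/000000.tiff", cv2.IMREAD_UNCHANGED)

print(left.shape, disp_px.dtype, disp_subpx.dtype)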

Download

You can use the following links to download our PlantStereo dataset.

Baidu Netdisk link
Google Drive link

Usage

  • sample.py

To construct the dataset, you can run the code in sample.py in your terminal:

conda activate <your_anaconda_virtual_environment>
python sample.py --num 0

We register the images and transform the coordinates through the function mech_zed_alignment():

import numpy as np

# K_MECH, K_ZED_LEFT, R_MECH_ZED, T_MECH_ZED, ZED_BASELINE and ZED_FOCAL_LENGTH are
# the camera calibration constants defined elsewhere in sample.py.
def mech_zed_alignment(depth, mech_height, mech_width, zed_height, zed_width):
    ground_truth = np.zeros(shape=(zed_height, zed_width), dtype=float)
    for v in range(0, mech_height):
        for u in range(0, mech_width):
            # Back-project the Mech-Eye pixel (u, v) to a 3D point in the Mech-Eye frame.
            i_mech = np.array([[u], [v], [1]], dtype=float)  # 3*1 homogeneous pixel
            p_i_mech = np.dot(np.linalg.inv(K_MECH), i_mech * depth[v, u])  # 3*1
            # Transform the 3D point into the ZED left camera frame.
            p_i_zed = np.dot(R_MECH_ZED, p_i_mech) + T_MECH_ZED  # 3*1
            # Project onto the ZED image plane and convert depth to disparity.
            i_zed = np.dot(K_ZED_LEFT, p_i_zed) * (1 / p_i_zed[2])  # 3*1
            disparity = ZED_BASELINE * ZED_FOCAL_LENGTH * 1000 / p_i_zed[2]
            u_zed = i_zed[0]
            v_zed = i_zed[1]
            coor_u_zed = round(u_zed[0])
            coor_v_zed = round(v_zed[0])
            # Keep only points that project inside the ZED image.
            if 0 <= coor_u_zed < zed_width and 0 <= coor_v_zed < zed_height:
                ground_truth[coor_v_zed][coor_u_zed] = disparity
    return ground_truth
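A hedged usage sketch follows; the depth map file name and the target image size are placeholder assumptions for illustration, not values prescribed by sample.py.

depth = np.load("mech_depth_000000.npy")  # placeholder: one Mech-Eye depth map
gt_disparity = mech_zed_alignment(
    depth,
    mech_height=depth.shape[0],
    mech_width=depth.shape[1],
    zed_height=606,   # placeholder target (ZED) resolution, e.g. the spinach subset
    zed_width=1046,
)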
  • epipole_rectification.py

    After collecting the left, right and disparity images through sample.py, we can perform epipolar rectification on the left and right images through epipole_rectification.py:

    python epipole_rectification.py
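For readers unfamiliar with this step, the sketch below shows a generic epipolar rectification with OpenCV. It is not the contents of epipole_rectification.py; the intrinsics, distortion coefficients and extrinsics are placeholder values and must be replaced by the real stereo calibration.

import cv2
import numpy as np

# Placeholder calibration -- replace with the real stereo parameters.
K1 = K2 = np.array([[1400.0, 0.0, 640.0],
                    [0.0, 1400.0, 360.0],
                    [0.0, 0.0, 1.0]])
D1 = D2 = np.zeros(5)                    # distortion coefficients
R = np.eye(3)                            # rotation between the two cameras
T = np.array([[0.12], [0.0], [0.0]])     # translation (baseline) in metres

left = cv2.imread("left_view/000000.png")
right = cv2.imread("right_view/000000.png")
h, w = left.shape[:2]

# Compute rectification transforms, then remap both views onto a common image plane.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)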

Citation

If you use our PlantStereo dataset in your research, please cite this publication:

@misc{PlantStereo,
    title={PlantStereo: A Stereo Matching Benchmark for Plant Surface Dense Reconstruction},
    author={Qingyu Wang and Baojian Ma and Wei Liu and Mingzhao Lou and Mingchuan Zhou and Huanyu Jiang and Yibin Ying},
    howpublished={\url{https://github.com/wangqingyu985/PlantStereo}},
    year={2021}
}

Acknowledgements

This project is mainly based on:

zed-python-api

mecheye_python_interface

Contact

If you have any questions, please do not hesitate to contact us by e-mail or by opening an issue; we will reply as soon as possible.

[email protected] or [email protected]
