3D Detection and Tracking Viewer (Visualization) for the KITTI & Waymo Datasets

Overview

This project was developed for viewing 3D object detection and tracking results. It supports rendering 3D bounding boxes as car models and projecting boxes onto images.

Features

  • Rendering boxes as cars
  • Captioning box IDs (infos) in the 3D scene
  • Projecting 3D boxes or points onto the 2D image

Design pattern

This code has two parts: one loads the data, and the other visualizes the 3D detection and tracking results.
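
To make the data-loading half concrete, here is a minimal, hypothetical sketch of a KITTI detection loader whose __getitem__ returns the (V2C, P2, image, boxes) tuple consumed by the viewer loop in section 6. The class name and parsing details are illustrative assumptions, not code from this repository.

import os

import cv2
import numpy as np

# Hypothetical loader sketch: illustrates the "data loading" half only.
class KittiDetectionDataset:
    def __init__(self, root, split="training"):
        self.root = os.path.join(root, split)
        # Frame ids are the stems of the velodyne files, e.g. "000000".
        self.ids = sorted(f[:-4] for f in os.listdir(os.path.join(self.root, "velodyne")))

    def __len__(self):
        return len(self.ids)

    def _read_calib(self, path):
        # KITTI calib files store one "key: floats" row per matrix.
        mats = {}
        with open(path) as f:
            for line in f:
                if ":" not in line:
                    continue
                key, vals = line.split(":", 1)
                arr = np.array(vals.split(), dtype=np.float32)
                mats[key] = arr.reshape(3, 4) if arr.size == 12 else arr
        return mats["Tr_velo_to_cam"], mats["P2"]  # extrinsic (V2C), intrinsic (P2)

    def _read_label(self, path):
        # Columns 8..14 of a KITTI label row are h, w, l, x, y, z, yaw.
        boxes = []
        with open(path) as f:
            for line in f:
                vals = line.split()
                if vals and vals[0] != "DontCare":
                    boxes.append([float(v) for v in vals[8:15]])
        return np.array(boxes, dtype=np.float32).reshape(-1, 7)

    def __getitem__(self, i):
        fid = self.ids[i]
        image = cv2.imread(os.path.join(self.root, "image_2", fid + ".png"))
        V2C, P2 = self._read_calib(os.path.join(self.root, "calib", fid + ".txt"))
        # Label files exist for the training split only.
        boxes = self._read_label(os.path.join(self.root, "label_2", fid + ".txt"))
        return V2C, P2, image, boxes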

Prepare data

  • KITTI detection dataset
# For KITTI Detection Dataset
└── kitti_detection
       ├── testing 
       |      ├──calib
       |      ├──image_2
       |      ├──label_2
       |      └──velodyne      
       └── training
              ├──calib
              ├──image_2
              ├──label_2
              └──velodyne 
  • KITTI tracking dataset
# For KITTI Tracking Dataset
└── kitti_tracking
       ├── testing 
       |      ├──calib
       |      |    ├──0000.txt
       |      |    ├──....txt
       |      |    └──0028.txt
       |      ├──image_02
       |      |    ├──0000
       |      |    ├──....
       |      |    └──0028
       |      ├──label_02
       |      |    ├──0000.txt
       |      |    ├──....txt
       |      |    └──0028.txt
       |      └──velodyne
       |           ├──0000
       |           ├──....
       |           └──0028      
       └── training # same structure as the testing set
              ├──calib
              ├──image_02
              ├──label_02
              └──velodyne 
  • Waymo dataset

Please refer to OpenPCDet for the Waymo dataset organization.
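
As a rough sketch (per OpenPCDet's documentation; segment file names elided), the expected layout looks like:

# For Waymo Dataset (organized by OpenPCDet)
└── OpenPCDet
       └── data
              └── waymo
                     ├── ImageSets
                     ├── raw_data
                     |      └── segment-*.tfrecord
                     └── waymo_processed_data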

Requirements

python3
numpy
vedo
vtk
opencv
matplotlib
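
If you use pip, the dependencies can be installed with a command like the following (assuming the OpenCV bindings come from the opencv-python package):

pip install numpy vedo vtk opencv-python matplotlib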

Usage

1. Set box type & viewer background color

Currently, this code supports the KITTI box format (h, w, l, x, y, z, yaw) and the Waymo OpenPCDet box format (x, y, z, l, w, h, yaw). You can set the box type and background color when initializing a viewer:

from viewer.viewer import Viewer

vi = Viewer(box_type="Kitti", bg=(255, 255, 255))

2. Set the object color map

You can set the object color map for viewing tracking results; the color map names follow matplotlib.pyplot. Commonly used color maps include "rainbow", "viridis", "brg", "gnuplot", and "hsv".

vi.set_ob_color_map('rainbow')

3. Add colorized point clouds to the 3D scene

The viewer receives a set of points as an array of shape (N, 3). To color the points by a scalar field, pass an array of shape (N,) as 'scatter_filed' (the parameter name is spelled this way in the code) and choose the colors with 'color_map_name'. If 'scatter_filed' is None, the points are drawn in the color given by the 'color' argument.

vi.add_points(points[:, 0:3],
              radius=2,
              color=(150, 150, 150),
              scatter_filed=points[:, 2],
              alpha=1,
              del_after_show=True,
              add_to_3D_scene=True,
              add_to_2D_scene=True,
              color_map_name="viridis")

4. Add boxes or cars to the 3D scene

The viewer receives a set of boxes as an array of shape (N, 7). Boxes can be rendered as meshes or as lines only, and you can adjust the line width and the corner points. You can also provide a set of integer IDs to colorize the boxes, and a set of additional infos to caption them. Note that if 'ids' is None, the boxes are drawn in the color given by the 'color' argument.

vi.add_3D_boxes(boxes=boxes[:, 0:7],
                ids=ids,
                box_info=infos,
                color="blue",
                add_to_3D_scene=True,
                mesh_alpha=0.3,
                show_corner_spheres=True,
                corner_spheres_alpha=1,
                corner_spheres_radius=0.1,
                show_heading=True,
                heading_scale=1,
                show_lines=True,
                line_width=2,
                line_alpha=1,
                show_ids=True,
                show_box_info=True,
                del_after_show=True,
                add_to_2D_scene=True,
                caption_size=(0.05, 0.05)
                )

You can also render the boxes as cars; the input format is the same as for boxes.

vi.add_3D_cars(boxes=boxes[:, 0:7],
               ids=ids,
               box_info=infos,
               color="blue",
               mesh_alpha=1,
               show_ids=True,
               show_box_info=True,
               del_after_show=True,
               car_model_path="viewer/car.obj",
               caption_size=(0.1, 0.1)
               )

5. View boxes or points on the image

To view the 3D boxes and points on the image, first set the camera intrinsic and extrinsic matrices and add an image. In addition, when adding the boxes and points, 'add_to_2D_scene' must be set to True (see the sketch after the following snippet).

vi.add_image(image)          # image to project onto
vi.set_extrinsic_mat(V2C)    # lidar-to-camera extrinsic matrix
vi.set_intrinsic_mat(P2)     # camera intrinsic matrix
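
Then, as a minimal sketch (assuming 'points' and 'boxes' are already loaded), the projected view can be drawn with:

# Boxes and points appear on the image only if added with add_to_2D_scene=True.
vi.add_points(points[:, 0:3], add_to_2D_scene=True)
vi.add_3D_boxes(boxes[:, 0:7], add_to_2D_scene=True)
vi.show_2D()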

6. Show 2D and 3D results

To show a single frame, call vi.show_2D() or vi.show_3D() directly. The visualization window stays open until you press the "Enter" key. Zoom out the 3D scene by scrolling the mouse wheel backward to bring the point cloud into view, and change the viewing angle by dragging the mouse inside the window.

To show multiple frames, use a for loop and press the "Enter" key to step through the sequence.

for i in range(len(dataset)):
    V2C, P2, image, boxes = dataset[i]
    vi.add_3D_boxes(boxes)
    vi.add_image(image)
    vi.set_extrinsic_mat(V2C)
    vi.set_intrinsic_mat(P2)
    vi.show_2D()
    vi.show_3D()