Isaac ROS Pose Estimation

Deep learned, hardware-accelerated 3D object pose estimation

Overview

This repository provides NVIDIA GPU-accelerated packages for 3D object pose estimation. Using a deep learned pose estimation model and a monocular camera, the isaac_ros_dope and isaac_ros_centerpose packages can estimate the 6DOF pose of a target object.

Packages in this repository rely on accelerated DNN model inference using Triton or TensorRT from Isaac ROS DNN Inference.

System Requirements

This Isaac ROS package is designed and tested to be compatible with ROS2 Foxy on Jetson hardware, as well as on x86_64 systems with an NVIDIA GPU. On x86_64 systems, packages are only supported when run in the provided Isaac ROS Dev Docker container.

Jetson

  • AGX Xavier or Xavier NX
  • JetPack 4.6

x86_64 (in Isaac ROS Dev Docker Container)

  • CUDA 11.1+ supported discrete GPU
  • VPI 1.1.11
  • Ubuntu 20.04+

Note: For best performance on Jetson, ensure that power settings are configured appropriately (Power Management for Jetson).

Docker

You need to use the Isaac ROS development Docker image from Isaac ROS Common, based on the version 21.08 image from Deep Learning Frameworks Containers.

You must first install the NVIDIA Container Toolkit to make use of the Docker container development/runtime environment.

Configure nvidia-container-runtime as the default runtime for Docker by editing /etc/docker/daemon.json to include the following:

    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"

and then restart Docker:

    sudo systemctl daemon-reload && sudo systemctl restart docker

Run the following script in isaac_ros_common to build the image and launch the container on x86_64 or Jetson:

$ scripts/run_dev.sh <optional_path>

Dependencies

Setup

  1. Create a ROS2 workspace if one is not already prepared:

    mkdir -p your_ws/src
    

    Note: The workspace can have any name; this guide assumes you name it your_ws.

  2. Clone the Isaac ROS Pose Estimation, Isaac ROS DNN Inference, and Isaac ROS Common package repositories to your_ws/src. Check that you have Git LFS installed before cloning to pull down all large files:

    sudo apt-get install git-lfs
    
    cd your_ws/src
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
    
  3. Start the Docker interactive workspace:

    isaac_ros_common/scripts/run_dev.sh your_ws
    

    After this command, you will be inside of the container at /workspaces/isaac_ros-dev. Running this command in different terminals will attach to the same container.

    Note: The rest of this README assumes that you are inside this container.

  4. Build and source the workspace:

    cd /workspaces/isaac_ros-dev
    colcon build && . install/setup.bash
    

    Note: We recommend rebuilding the workspace each time source files are edited. To rebuild, first clean the workspace by running rm -r build install log.

  5. (Optional) Run tests to verify complete and correct installation:

    colcon test --executor sequential
    

Package Reference

isaac_ros_dope

Overview

The isaac_ros_dope package offers functionality for detecting objects of a specific object type in images and estimating these objects' 6 DOF (degrees of freedom) poses using a trained DOPE (Deep Object Pose Estimation) model. This package sets up pre-processing using the DNN Image Encoder node and inference using the TensorRT node, and provides a decoder that converts the DOPE network's output into an array of 6 DOF poses.

The model provided is taken from the official DOPE GitHub repository published by NVIDIA Research. To get a model, visit the PyTorch DOPE model collection here, and use the script under isaac_ros_dope/scripts to convert the PyTorch model to ONNX, which can be ingested by the TensorRT node (this script can only be executed on an x86 machine). However, the package should also work if you train your own DOPE model that has an input image size of [480, 640]. For instructions to train your own DOPE model, check out the README in the official DOPE GitHub repository.

Package Dependencies

Available Components

DopeDecoderNode

  Topics Subscribed:
    • belief_map_array: The tensor that represents the belief maps, which are outputs from the DOPE network.

  Topics Published:
    • dope/pose_array: An array of poses of the objects detected by the DOPE network and interpreted by the DOPE decoder node.

  Parameters:
    • queue_size: The length of the subscription queues, which is rmw_qos_profile_default.depth by default.
    • frame_id: The frame ID that the DOPE decoder node will write to the header of its output messages.
    • configuration_file: The name of the configuration file to parse. Note: The node will look for that file name under isaac_ros_dope/config. By default there is a configuration file under that directory named dope_config.yaml.
    • object_name: The object class the DOPE network is detecting and the DOPE decoder is interpreting. This name should be listed in the configuration file along with its corresponding cuboid dimensions.
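
For illustration, the sketch below shows how these parameters and topics might be wired up when composing the decoder in a ROS2 launch file. This is not the shipped launch file: the plugin name isaac_ros::dope::DopeDecoderNode and the parameter values are assumptions, so check the launch files under isaac_ros_dope/launch for the real names.

    # Minimal sketch of composing the DOPE decoder; plugin name and values are assumed.
    from launch import LaunchDescription
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode


    def generate_launch_description():
        dope_decoder_node = ComposableNode(
            package='isaac_ros_dope',
            plugin='isaac_ros::dope::DopeDecoderNode',  # assumed registered component name
            name='dope_decoder',
            parameters=[{
                'frame_id': 'camera',                    # frame written into output headers
                'configuration_file': 'dope_config.yaml',
                'object_name': 'Ketchup',                # must be listed in the configuration file
            }],
            remappings=[('dope/pose_array', 'poses')])   # optional remap, as done in the walkthroughs

        container = ComposableNodeContainer(
            name='dope_container',
            namespace='',
            package='rclcpp_components',
            executable='component_container',
            composable_node_descriptions=[dope_decoder_node],
            output='screen')

        return LaunchDescription([container])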

Configuration

You will need to specify an object type in the DopeDecoderNode that is listed in the dope_config.yaml file, so that the DOPE decoder node picks the right parameters to transform the belief maps from the inference node into object poses. The dope_config.yaml file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field with the new, scaled (640x480) camera intrinsics.
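
A hypothetical sketch of the relevant field is shown below; the numbers are placeholders, not the shipped defaults, so consult the actual dope_config.yaml for the full schema:

    # Row-major 3x3 intrinsics [fx, 0, cx, 0, fy, cy, 0, 0, 1], scaled to 640x480 (placeholder values)
    camera_matrix: [616.0, 0.0, 320.0, 0.0, 616.0, 240.0, 0.0, 0.0, 1.0]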

isaac_ros_centerpose

Overview

The isaac_ros_centerpose package offers functionality for detecting objects of a specific class in images and estimating these objects' 6 DOF (degrees of freedom) poses using a trained CenterPose model. Just like DOPE, this package sets up pre-processing using the DNN Image Encoder node and inference using an inference node (either the TensorRT or Triton node), and provides a decoder that converts the CenterPose network's output into an array of 6 DOF poses.

The model provided is taken from the official CenterPose GitHub repository published by NVIDIA Research. To get a model, visit the PyTorch CenterPose model collection here, and use the script under isaac_ros_centerpose/scripts to convert the PyTorch model to ONNX, which can be ingested by the TensorRT node. However, the package should also work if you train your own CenterPose model that has an input image size of [512, 512]. For instructions to train your own CenterPose model, check out the README in the official CenterPose GitHub repository.

Package Dependencies

Available Components

CenterPoseDecoderNode

  Topics Subscribed:
    • tensor_sub: The TensorList that contains the outputs of the CenterPose network.

  Topics Published:
    • object_poses: A MarkerArray representing the poses of objects detected by the CenterPose network and interpreted by the CenterPose decoder node.

  Parameters:
    • camera_matrix: A row-major array of 9 floats that represents the camera intrinsics matrix K.
    • original_image_size: An array of two floats that represents the size of the original image passed into the image encoder. The first element is the width and the second is the height.
    • output_field_size: An array of two integers that represents the size of the 2D keypoint decoding from the network output. The default value is [128, 128].
    • height: This parameter is used to scale the cuboid used for calculating the size of the detected objects.
    • frame_id: The frame ID that the CenterPose decoder node will write to the header of its output messages. The default value is centerpose.
    • marker_color: An array of 4 floats representing RGBA, used to define the color that RViz uses to visualize the marker. Each value should be between 0.0 and 1.0. The default value is (1.0, 0.0, 0.0, 1.0), which is red.

Configuration

The default parameters for the CenterPoseDecoderNode are defined in the decoders_param.yaml file under isaac_ros_centerpose/config. This file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field.

Network Outputs

The CenterPose network has 7 different outputs:

  • hm: Object center heatmap
  • wh: 2D bounding box size
  • hps: Keypoint displacements
  • reg: Sub-pixel offset
  • hm_hp: Keypoint heatmaps
  • hp_offset: Sub-pixel offsets for keypoints
  • scale: Relative cuboid dimensions

For more context and explanation, refer to the CenterPose paper; the corresponding outputs are shown in Figure 2 of the paper.

Walkthroughs

Inference on DOPE using TensorRT

  1. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. For example, download Ketchup.pth into /tmp/models.

  2. In order to run PyTorch models with TensorRT, one option is to export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup.pth
    

    The output ONNX file will be located at /tmp/models/Ketchup.onnx.

    Note: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop/resize using ROS2 nodes from Isaac ROS Image Pipeline or similar packages.

  3. Modify the following values in the launch file /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py:

    'model_file_path': '/tmp/models/Ketchup.onnx'
    'object_name': 'Ketchup'
    

    Note: Modify the parameters object_name and model_file_path in the launch file if you are using another model. object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

  4. Rebuild and source isaac_ros_dope:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && . install/setup.bash
    
  5. Start isaac_ros_dope using the launch file:

    ros2 launch /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py
    
  6. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src 
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  7. Start publishing images using image_publisher to the topic /image, which the encoder subscribes to.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/0002_rgb.jpg --ros-args -r image_raw:=image
    
  8. Open another terminal window. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /poses
    

    We are echoing the topic /poses because we remapped the original topic name /dope/pose_array to /poses in our launch file. A sketch of the expected message structure is shown after this walkthrough.

  9. Launch rviz2. Click the Add button, select "By topic", and choose PoseArray under /poses. Update the "Displays" parameters to see the axes of the object displayed.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.
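
As a sanity check, the messages echoed on /poses follow the geometry_msgs/msg/PoseArray structure, roughly like the sketch below (the values shown are illustrative only and depend on your image, model, and frame_id parameter):

    header:
      stamp:
        sec: 0
        nanosec: 0
      frame_id: camera
    poses:
    - position:
        x: 0.1
        y: -0.05
        z: 0.7
      orientation:
        x: 0.0
        y: 0.0
        z: 0.0
        w: 1.0
    ---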

Inference on DOPE using Triton

  1. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. For example, download Ketchup.pth into /tmp/models/Ketchup.

  2. Set up the model repository.

    Create a models repository with version 1:

    mkdir -p /tmp/models/Ketchup/1
    

    Create a configuration file for this model at the path /tmp/models/Ketchup/config.pbtxt. Note that name has to be the same as the model repository name.

    name: "Ketchup"
    platform: "onnxruntime_onnx"
    max_batch_size: 0
    input [
      {
        name: "INPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 3, 480, 640 ]
      }
    ]
    output [
      {
        name: "OUTPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 25, 60, 80 ]
      }
    ]
    version_policy: {
      specific {
        versions: [ 1 ]
      }
    }
    
    • To run ONNX models with Triton, export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.onnx --input_name INPUT__0 --output_name OUTPUT__0
      

      Note: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop/resize using ROS2 nodes from Isaac ROS Image Pipeline or similar packages. The model name has to be model.onnx.

    • To run a TensorRT engine plan file with Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter trtexec:

      /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan
      

      Modify the following value in /tmp/models/Ketchup/config.pbtxt:

      platform: "tensorrt_plan"
      
    • To run a PyTorch model with Triton (PyTorch model inference is supported on the x86_64 platform only), the model needs to be saved using torch.jit.save(). The downloaded DOPE model is saved with torch.save(), so export the DOPE model using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format pytorch --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.pt
      

      Modify the following value in /tmp/models/Ketchup/config.pbtxt:

      platform: "pytorch_libtorch"
      
  3. Modify the following values in the launch file /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_triton.launch.py:

    'model_name': 'Ketchup'
    'model_repository_paths': ['/tmp/models']
    'input_binding_names': ['INPUT__0']
    'output_binding_names': ['OUTPUT__0']
    'object_name': 'Ketchup'
    

    Note: object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

  4. Rebuild and source isaac_ros_dope:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && . install/setup.bash
    
  5. Start isaac_ros_dope using the launch file:

    ros2 launch /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_triton.launch.py
    
  6. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  7. Start publishing images using image_publisher to the topic /image, which the encoder subscribes to.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/0002_rgb.jpg --ros-args -r image_raw:=image
    
  8. Open another terminal window. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /poses
    

    We are echoing the topic /poses because we remapped the original topic name /dope/pose_array to /poses in our launch file.

  9. Launch rviz2. Click the Add button, select "By topic", and choose PoseArray under /poses. Update the "Displays" parameters to see the axes of the object displayed.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.

Inference on CenterPose using Triton

  1. Select a CenterPose model by visiting the CenterPose model collection available on the official CenterPose GitHub repository here. For example, download shoe_resnet_140.pth into /tmp/models/centerpose_shoe.

Note: The models in the root directory of the model collection listed above will NOT WORK with our inference nodes because they have custom layers that are supported by neither TensorRT nor Triton. Make sure to use the PyTorch weights that have the string resnet in their file names.

  2. Set up the model repository.

    Create a models repository with version 1:

    mkdir -p /tmp/models/centerpose_shoe/1
    
  3. Create a configuration file for this model at the path /tmp/models/centerpose_shoe/config.pbtxt. Note that name has to be the same as the model repository name. Take a look at the example at isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt and copy that file to /tmp/models/centerpose_shoe/config.pbtxt.

    cp /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt /tmp/models/centerpose_shoe/config.pbtxt
    
  4. To run the TensorRT engine plan, convert the PyTorch model to ONNX first. Export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py:

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py --input /tmp/models/centerpose_shoe/shoe_resnet_140.pth --output /tmp/models/centerpose_shoe/1/model.onnx
    
  5. To get a TensorRT engine plan file with Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter trtexec:

    /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/centerpose_shoe/1/model.onnx --saveEngine=/tmp/models/centerpose_shoe/1/model.plan
    
  6. Modify the isaac_ros_centerpose launch file located at /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/launch/isaac_ros_centerpose.launch.py. You will need to update the following lines:

    'model_name': 'centerpose_shoe',
    'model_repository_paths': ['/tmp/models'],
    

    Rebuild and source isaac_ros_centerpose:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_centerpose && . install/setup.bash
    

    Start isaac_ros_centerpose using the launch file:

    ros2 launch isaac_ros_centerpose isaac_ros_centerpose.launch.py
    
  7. Set up the image_publisher package if it is not already installed.

    cd /workspaces/isaac_ros-dev/src
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  8. Start publishing images using image_publisher to the topic /image, which the encoder subscribes to.

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/shoe.jpg --ros-args -r image_raw:=image
    
  9. Open another terminal window and attach to the same container. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /object_poses
    
  10. Launch rviz2. Click the Add button, select "By topic", and choose MarkerArray under /object_poses. Set the fixed frame to centerpose. You will see a cuboid marker representing the detected object's pose.

Troubleshooting

Nodes crashed on initial launch reporting shared libraries have a file format not recognized

Many dependent shared library binary files are stored in git-lfs. These files need to be fetched in order for Isaac ROS nodes to function correctly.

Symptoms

/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [libgxe_node.so] Error 1
make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
make: *** [all] Error 2

Solution

Run git lfs pull in each Isaac ROS repository you have checked out, especially isaac_ros_common, to ensure all of the large binary files have been downloaded.
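
For example, assuming the three repositories cloned during setup, one way to do this from inside the container is:

    cd /workspaces/isaac_ros-dev/src
    for repo in isaac_ros_common isaac_ros_dnn_inference isaac_ros_pose_estimation; do
        (cd "$repo" && git lfs pull)
    done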

Updates

Date Changes
2021-10-20 Initial release