Overview

Polygon-Yolov5

This repository is based on Ultralytics/yolov5, with adjustments to enable polygon prediction boxes.

Section I. Description

The code is based on Ultralytics/yolov5; several functions have been added or modified to enable polygon prediction boxes.

The modifications relative to Ultralytics/yolov5, with brief descriptions, are summarized below:

  1. data/polygon_ucas.yaml : Exemplar UCAS-AOD dataset to test the effects of polygon boxes

  2. data/images/UCAS-AOD : For the inference of polygon-yolov5s-ucas.pt

  3. models/common.py :
    3.1. class Polygon_NMS : Non-Maximum Suppression (NMS) module for Polygon Boxes
    3.2. class Polygon_AutoShape : Polygon Version of Original AutoShape, input-robust polygon model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and Polygon_NMS
    3.3. class Polygon_Detections : Polygon detections class for Polygon-YOLOv5 inference results

  4. models/polygon_yolov5s_ucas.yaml : Configuration file of polygon yolov5s for exemplar UCAS-AOD dataset

  5. models/yolo.py :
    5.1. class Polygon_Detect : Detect head for polygon yolov5 models with polygon box prediction
    5.2. class Polygon_Model : Polygon yolov5 models with polygon box prediction

  6. utils/iou_cuda : CUDA extension for iou computation of polygon boxes
    6.1. extensions.cpp : CUDA extension file
    6.2. inter_union_cuda.cu : CUDA code for computing iou of polygon boxes
    6.3. setup.py : Builds the CUDA extension module polygon_inter_union_cuda, which provides two functions: polygon_inter_union_cuda and polygon_b_inter_union_cuda

  7. utils/autoanchor.py :
    7.1. def polygon_check_anchors : Polygon version of original check_anchors
    7.2. def polygon_kmean_anchors : Create kmeans-evolved anchors from a polygon-enabled training dataset, using the minimum outer bounding box as an approximation

  8. utils/datasets.py :
    8.1. def polygon_random_perspective : Data augmentation for datasets with polygon boxes (augmentation effects: HSV-Hue, HSV-Saturation, HSV-Value, rotation, translation, scale, shear, perspective, flip up-down, flip left-right, mosaic, mixup)
    8.2. def polygon_box_candidates : Polygon version of original box_candidates
    8.3. class Polygon_LoadImagesAndLabels : Polygon version of original LoadImagesAndLabels
    8.4. def polygon_load_mosaic : Loads images in a 4-mosaic, with polygon boxes
    8.5. def polygon_load_mosaic9 : Loads images in a 9-mosaic, with polygon boxes
    8.6. def polygon_verify_image_label : Verify one image-label pair for polygon datasets
    8.7. def create_dataloader : Has been modified to include polygon datasets

  9. utils/general.py :
    9.1. def xyxyxyxyn2xyxyxyxy : Convert normalized xyxyxyxy or segments into pixel xyxyxyxy or segments
    9.2. def polygon_segment2box : Convert 1 segment label to 1 polygon box label
    9.3. def polygon_segments2boxes : Convert segment labels to polygon box labels
    9.4. def polygon_scale_coords : Rescale polygon coords (xyxyxyxy) from img1_shape to img0_shape
    9.5. def polygon_clip_coords : Clip bounding polygon xyxyxyxy bounding boxes to image shape (height, width)
    9.6. def polygon_inter_union_cpu : IoU computation for polygon boxes on CPU (a shapely-based sketch of this kind of computation is given after this list)
    9.7. def polygon_box_iou : Compute IoU of polygon boxes via CPU or CUDA
    9.8. def polygon_b_inter_union_cpu : IoU computation for polygon boxes on CPU, for class Polygon_ComputeLoss in loss.py
    9.9. def polygon_bbox_iou : Compute IoU of polygon boxes for class Polygon_ComputeLoss in loss.py, via CPU or CUDA
    9.10. def polygon_non_max_suppression : Runs Non-Maximum Suppression (NMS) on inference results for polygon boxes
    9.11. def polygon_nms_kernel : Non maximum suppression kernel for polygon-enabled boxes
    9.12. def order_corners : Return sorted corners for loss.py::class Polygon_ComputeLoss::build_targets

  10. utils/loss.py :
    10.1. class Polygon_ComputeLoss : Compute loss for polygon boxes

  11. utils/metrics.py :
    11.1. class Polygon_ConfusionMatrix : Polygon version of original ConfusionMatrix

  12. utils/plots.py :
    12.1. def polygon_plot_one_box : Plot one polygon box on image
    12.2. def polygon_plot_one_box_PIL : Plot one polygon box on image via PIL
    12.3. def polygon_output_to_target : Convert model output to target format (batch_id, class_id, x1, y1, x2, y2, x3, y3, x4, y4, conf)
    12.4. def polygon_plot_images : Polygon version of original plot_images
    12.5. def polygon_plot_test_txt : Polygon version of original plot_test_txt
    12.6. def polygon_plot_targets_txt : Polygon version of original plot_targets_txt
    12.7. def polygon_plot_labels : Polygon version of original plot_labels

  13. polygon_train.py : For training polygon-yolov5 models

  14. polygon_test.py : For testing polygon-yolov5 models

  15. polygon_detect.py : For running inference with polygon-yolov5 models

  16. requirements.txt : Added the Python package shapely
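
As a rough illustration of what the polygon IoU utilities in utils/general.py compute, the sketch below estimates the IoU of two quadrilaterals on the CPU with shapely (the package added to the requirements). It is a hypothetical re-implementation written for clarity, not the repository's exact polygon_inter_union_cpu / polygon_box_iou code; the real functions may handle corner ordering, batching, and degenerate polygons differently.

    # Hypothetical CPU sketch of polygon IoU, analogous in spirit to
    # polygon_inter_union_cpu / polygon_box_iou (not the repository's exact code).
    import torch
    from shapely.geometry import Polygon

    def polygon_iou_cpu(boxes1, boxes2):
        """boxes1: (N, 8), boxes2: (M, 8); each row is x1,y1,x2,y2,x3,y3,x4,y4 in pixels."""
        iou = torch.zeros(boxes1.shape[0], boxes2.shape[0])
        polys1 = [Polygon(b.view(4, 2).tolist()).convex_hull for b in boxes1]
        polys2 = [Polygon(b.view(4, 2).tolist()).convex_hull for b in boxes2]
        for i, p1 in enumerate(polys1):
            for j, p2 in enumerate(polys2):
                inter = p1.intersection(p2).area
                union = p1.union(p2).area
                iou[i, j] = inter / (union + 1e-9)
        return iou

    # Example: two unit squares offset by half a unit -> IoU ≈ 1/3
    a = torch.tensor([[0., 0., 1., 0., 1., 1., 0., 1.]])
    b = torch.tensor([[0.5, 0., 1.5, 0., 1.5, 1., 0.5, 1.]])
    print(polygon_iou_cpu(a, b))  # tensor([[0.3333]])

The CUDA extension in utils/iou_cuda provides the same kind of computation on the GPU, which matters because pairwise polygon IoU on the CPU becomes slow when many boxes are involved.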

Section II. How Do Polygon Boxes Work? How Do Polygon Boxes Differ from Axis-Aligned Boxes?

  1. build_targets in class Polygon_ComputeLoss & forward in class Polygon_Detect

  2. order_corners in general.py (a minimal corner-ordering sketch follows this list)

  3. Illustrations of the box loss for polygon boxes
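
To make item 2 concrete, the sketch below shows one way the corners of a polygon box can be put into a consistent order before the loss compares predicted and target corners, which is the role order_corners plays in build_targets. This is a hypothetical illustration (sorting corners by their angle around the box centroid); the repository's actual convention for the starting corner and direction may differ.

    # Hypothetical sketch of consistent corner ordering for polygon boxes,
    # mirroring the goal of utils/general.py::order_corners (not the exact code).
    import torch

    def order_corners_sketch(boxes):
        """boxes: (N, 8) as x1,y1,...,x4,y4 -> same shape with corners sorted by angle."""
        pts = boxes.view(-1, 4, 2)                          # (N, 4, 2)
        center = pts.mean(dim=1, keepdim=True)              # (N, 1, 2) centroid
        angles = torch.atan2(pts[..., 1] - center[..., 1],
                             pts[..., 0] - center[..., 0])  # (N, 4) angle of each corner
        order = angles.argsort(dim=1)                       # ascending angle
        return torch.gather(pts, 1, order.unsqueeze(-1).expand(-1, -1, 2)).reshape(-1, 8)

    # Corners given in an arbitrary order come out consistently sorted
    box = torch.tensor([[1., 0., 0., 0., 0., 1., 1., 1.]])
    print(order_corners_sketch(box))

Without such a canonical ordering, the same physical box could be written with its corners permuted, and a corner-wise regression loss would penalize correct predictions.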

Section III. Installation

For the CUDA extension to build without errors, please use CUDA version >= 11.2. The code has been verified on Ubuntu 16.04 with a Tesla K80 GPU.

# The following commands install CUDA 11.2 from scratch on Ubuntu 16.04; if you have already installed it, please skip this part
# If you are using another OS version, please check https://tutorialforlinux.com/2019/12/01/how-to-add-cuda-repository-for-ubuntu-based-oses-2/
# Install Ubuntu kernel headers
sudo apt install linux-headers-$(uname -r)

# Pin the CUDA repo
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-ubuntu1604.pin
sudo mv cuda-ubuntu1604.pin /etc/apt/preferences.d/cuda-repository-pin-600

# Add the CUDA GPG key
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub

# Set up the CUDA repo
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/ /"

# Refresh apt repositories
sudo apt update

# Install CUDA 11.2
sudo apt install cuda-11-2 -y
sudo apt install cuda-toolkit-11-2 -y

# Set up the path
echo 'export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}' >> $HOME/.bashrc
# You are done installing CUDA 11.2

# Check the NVIDIA driver
nvidia-smi

# Update all apt packages
sudo apt-get update
sudo apt-get -y upgrade

# Install Python 3.7 via Miniconda
curl -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x ~/miniconda.sh
./miniconda.sh -b
echo "PATH=~/miniconda3/bin:$PATH" >> ~/.bashrc
source ~/.bashrc
conda install -y python=3.7
# You are done installing Python 3.7

The following commands set up Polygon-YOLOv5.

# clone git repo
git clone https://github.com/XinzeLee/PolygonObjectDetection
cd PolygonObjectDetection/polygon-yolov5
# install python package requirements
pip install -r requirements.txt
# install CUDA extensions
cd utils/iou_cuda
python setup.py install
# cd back to polygon-yolov5 folder
cd .. && cd ..
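
After building, a quick sanity check (assuming the module name polygon_inter_union_cuda from utils/iou_cuda/setup.py) is to import the extension from Python; if the import fails, the CPU IoU routines in utils/general.py can still be used.

    # Quick sanity check that the CUDA extension built and installed correctly.
    # The module and function names come from utils/iou_cuda/setup.py as described above.
    try:
        import polygon_inter_union_cuda
        print("CUDA extension available:",
              hasattr(polygon_inter_union_cuda, "polygon_inter_union_cuda"),
              hasattr(polygon_inter_union_cuda, "polygon_b_inter_union_cuda"))
    except ImportError as err:
        print("CUDA extension not built; CPU IoU will be used instead:", err)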

Section IV. Polygon-Tutorial 1: Deploy the Polygon Yolov5s

Try Polygon Yolov5s Model by Following Polygon-Tutorial 1

  1. Inference
     $ python polygon_detect.py --weights polygon-yolov5s-ucas.pt --img 1024 --conf 0.75 \
         --source data/images/UCAS-AOD --iou-thres 0.4 --hide-labels

  2. Test
     $ python polygon_test.py --weights polygon-yolov5s-ucas.pt --data polygon_ucas.yaml \
         --img 1024 --iou 0.65 --task val

  3. Train
     $ python polygon_train.py --weights polygon-yolov5s-ucas.pt --cfg polygon_yolov5s_ucas.yaml \
         --data polygon_ucas.yaml --hyp hyp.ucas.yaml --img-size 1024 \
         --epochs 3 --batch-size 12 --noautoanchor --polygon --cache
  4. Performance
    4.1. Confusion Matrix

    4.2. Precision Curve

    4.3. Recall Curve

    4.4. Precision-Recall Curve

    4.5. F1 Curve

Section V. Polygon-Tutorial 2: Transform COCO Dataset to Polygon Labels Using Segmentation

Transform COCO Dataset to Polygon Labels by Following [Polygon-Tutorial 2](https://github.com/XinzeLee/PolygonObjectDetection/blob/main/polygon-yolov5/Polygon-Tutorial2.ipynb)

Transformed Exemplar Figure
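
If you want to script the transformation instead of running the notebook, a common recipe is to take the minimum-area rotated rectangle of each COCO segmentation and write its four corners as a normalized polygon label. The sketch below is an assumption about how this can be done (cv2.minAreaRect plus normalization to the cls x1 y1 ... x4 y4 layout implied by the xyxyxyxyn2xyxyxyxy utility); it is not necessarily the exact method used in Polygon-Tutorial2.ipynb.

    # Hypothetical sketch: convert one COCO segmentation (flat [x1, y1, x2, y2, ...])
    # into a normalized 4-corner polygon label "cls x1 y1 x2 y2 x3 y3 x4 y4".
    import cv2
    import numpy as np

    def coco_seg_to_polygon_label(segmentation, cls_id, img_w, img_h):
        pts = np.array(segmentation, dtype=np.float32).reshape(-1, 2)
        rect = cv2.minAreaRect(pts)           # ((cx, cy), (w, h), angle)
        corners = cv2.boxPoints(rect)         # (4, 2) corner coordinates in pixels
        corners[:, 0] = np.clip(corners[:, 0] / img_w, 0, 1)   # normalize x
        corners[:, 1] = np.clip(corners[:, 1] / img_h, 0, 1)   # normalize y
        return " ".join([str(cls_id)] + [f"{v:.6f}" for v in corners.flatten()])

    # Example with a dummy quadrilateral segmentation in a 1024x1024 image
    print(coco_seg_to_polygon_label([100, 100, 400, 120, 380, 300, 90, 280], 0, 1024, 1024))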

Section VI. Expansion to More Than Four Corners


Section VII. References

Comments
  • NMS time limit 10.0s exceeded

    Thanks for sharing this great work!

    I am trying to train on the COCO dataset. When calculating mAP on the val data, I got the warning below.

    WARNING: NMS time limit 10.0s exceeded

    I think that when many boxes are found, NMS costs too much time, so the time limit is exceeded.

    So I changed conf_thres to 0.1 (the default is 0.001), and it runs with no warning.

    But I am afraid that this change will affect performance. What do you think?

    opened by tak-s 10
  • Strange behaviour in overlapping bounding boxes

    @XinzeLee I have an issue when there are two adjacent or overlapping objects that I want to detect. I created a diagram to give an example.

    PolygonProblem

    The objects I am trying to detect have a rectangular shape and are represented in the image by black rectangles with grey outlines. The bounding boxes predicted by the model are in red.

    As you can see, boxes 1 and 3 are correct and box 2 is incorrect. Increasing the confidence threshold or decreasing the IoU threshold causes only box 2 to be visible.

    This happens in almost every case where 2 or more objects are close together.

    Any idea on the root of the problem?

    opened by AntMorais 2
  • TypeError: test() got an unexpected keyword argument 'polygon'

    https://github.com/XinzeLee/PolygonObjectDetection/blob/f3333f560a08b7fccba4285f0c99cd5af03dc45a/polygon-yolov5/polygon_train.py#L444

    The parameter 'polygon' is not defined in test(): https://github.com/XinzeLee/PolygonObjectDetection/blob/f3333f560a08b7fccba4285f0c99cd5af03dc45a/polygon-yolov5/polygon_test.py#L25

    opened by tak-s 2
  • What is polygon yolov5 mAP on UCAS dataset

    Firstly, thanks for your work. I have a question: did you test the polygon-yolov5 model on UCAS-AOD? I want to compare its results with those of other object detectors. Results for some SOTA satellite-image object detectors are collected in this repo: https://github.com/ming71/UCAS-AOD-benchmark

    opened by vpeopleonatank 2
  • about multi-scale argument during training

    Does the argument help improve mAP? I'm asking because we are generating anchors for a fixed image size. Since the multi-scale arg varies the image size by -/+50%, will it have a negative effect on mAP?

    opened by nsabir2011 1
  • Expected all tensors to be on the same device, but found at least two devices, cuda: 0 and cpu!

    When following the first tutorial on Google Colab, I am trying to run !python polygon_test.py --weights polygon-yolov5s-ucas.pt --data polygon_ucas.yaml --img 1024 --iou 0.65 --task val --device 0 as in the example. I get the following error:

    Traceback (most recent call last):
      File "polygon_test.py", line 325, in <module>
        test(**vars(opt))
      File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "polygon_test.py", line 224, in test
        for j in (ious > iouv[index_ap50]).nonzero(as_tuple=False):
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

    I have not modified any code or data and cannot figure out where the issue is. Any help would be much appreciated. Thank you!

    opened by sac3tf 1
  • custom data polygon transform using Polygon-Tutorial2.ipynb

    hello @XinzeLee ,

    I'm using my custom dataset, which has a JSON file in COCO format. I'm trying to use Polygon-Tutorial2.ipynb with it, but it throws an error. Can you please help me run it and get rotated bounding boxes from the polygon annotations? Thank you in advance.

    This is the error I'm getting:

    Traceback (most recent call last):
      File "C:/yolo/tranform.py", line 173, in <module>
        main()
      File "C:/yolo/tranform.py", line 170, in main
        seg2poly(r'C:\Users\exp', plot=True)
      File "C:/yolo/tranform.py", line 62, in seg2poly
        img_dir = img_dir / prefix
    UnboundLocalError: local variable 'prefix' referenced before assignment

    opened by apanand14 1
  • How to use this along with basic yolov5

    My project reads license plates: I used your polygon model to detect the 4 corners of the plate and transform it, then the basic yolov5 model to detect the characters. But I get this error when using both models in one runtime; there is no problem if I use them separately.

    opened by NMT201 0
  • Two errors

    1. 'Upsample' object has no attribute 'recompute_scale_factor'
       Fix: set torch==1.10.0 in requirements.txt

    2. ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (4,) + inhomogeneous part.
       Fix (for CPU polygon_detect.py), in utils/general.py lines 813 and 815:
       Edit boxes1[i, :].view(4,2) -> boxes1[i, :].view(4,2).numpy()
       Edit boxes2[j, :].view(4,2) -> boxes2[j, :].view(4,2).numpy()

    opened by tdat97 0
  • Invalid sos parameters for sequential JPEG and zero box value during training

    • I'm using JPG images and get "Invalid SOS parameters for sequential JPEG".
    • I get zero for the box value, P, R, and mAP during training.

    • How do I choose this value? If I comment it out, I get an error.
    opened by Aun0124 0
  • RuntimeError: result type Float can't be cast to the desired output type long int

    Facing an error while training. Training command:

    !python polygon_train.py --weights yolov5s.pt --cfg polygon_yolov5s_ucas.yaml \
        --data data/custom.yaml --hyp hyp.ucas.yaml --img-size 1024 \
        --epochs 3 --batch-size 12 --noautoanchor --polygon --cache

    opened by poojatambe 0
  • When I train my own dataset, P, R, and mAP are zero

    I annotated some images and found that training starts successfully in this project, but P, R, and mAP stay zero the whole time. I trained for 200 epochs.

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
    64/199     3.17G   0.03754   0.01328         0   0.05082        11       640: 100%|█| 21/21 [00:02<00:00,  9.
               Class     Images     Labels          P          R    mAP@.5 mAP@.5:.95: 100%|█| 2/2 [00:00<00:00,
                 all         19          0          0          0          0          0

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
    65/199     3.17G   0.03064    0.0133         0   0.04394         9       640: 100%|█| 21/21 [00:02<00:00,  9.
               Class     Images     Labels          P          R    mAP@.5 mAP@.5:.95: 100%|█| 2/2 [00:00<00:00,
                 all         19          0          0          0          0          0

     Epoch   gpu_mem       box       obj       cls     total    labels  img_size
    66/199     3.17G   0.03182   0.01224         0   0.04405        12       640:  62%|▌| 13/21 [00:01<00:00,  9.
    
    opened by futureflsl 1