Overview

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Skin Lesion detection using YOLO

This project deals with the detection of skin lesions within the ISIC dataset using YOLOv3 Object Detection with Darknet.

Predictions

YOLOv3

YOLO stands for "You Only Look Once" and uses Convolutional Neural Networks for Object Detection. On a single image, YOLO may detect multiple objects; that is, in addition to predicting object classes, YOLO also localises them in the image. In YOLO, the entire image is processed by a single Neural Network, which divides the image into regions and generates probabilities for each region. YOLO predicts multiple bounding boxes that cover parts of the image and then, based on those probabilities, picks the best ones.
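
To make the last point concrete, overlapping candidate boxes are typically pruned with non-maximum suppression, keeping only the highest-probability detections. The sketch below is a generic NumPy illustration of that idea (it is not code from this repository; darknet performs this step internally, and the 0.45 IoU threshold is just an assumed value):

    import numpy as np
    
    def iou(box, boxes):
        # IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format
        x1 = np.maximum(box[0], boxes[:, 0])
        y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2])
        y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_a = (box[2] - box[0]) * (box[3] - box[1])
        area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        return inter / (area_a + area_b - inter + 1e-9)
    
    def non_max_suppression(boxes, scores, iou_thresh=0.45):
        # Keep the highest-scoring boxes and drop overlapping lower-scoring ones
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            best = order[0]
            keep.append(best)
            rest = order[1:]
            order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
        return keep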

Architecture of YOLOv3:

Architecture of YOLOv3

  • YOLOv3 has a total of 106 layers, with detections made at layers 82, 94 and 106.
  • It consists of residual blocks, skip connections and up-sampling.
  • Each convolutional layer is followed by a batch normalization layer and a Leaky ReLU activation function.
  • There are no pooling layers; instead, additional convolutional layers with stride 2 are used to down-sample the feature maps.
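
As an illustration of the convolution/batch-norm/Leaky-ReLU pattern and the stride-2 down-sampling described above, a single building block could be sketched as follows. This uses PyTorch purely for illustration (PyTorch is not a dependency of this project, and this is not the actual darknet implementation):

    import torch
    import torch.nn as nn
    
    def conv_bn_leaky(in_ch, out_ch, kernel_size=3, stride=1):
        # Convolution -> BatchNorm -> LeakyReLU, the basic block repeated throughout YOLOv3;
        # a stride of 2 down-samples the feature map instead of pooling
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
        )
    
    # Example: a stride-2 block halves the spatial resolution (416x416 -> 208x208)
    downsample = conv_bn_leaky(32, 64, kernel_size=3, stride=2)
    out = downsample(torch.randn(1, 32, 416, 416))   # -> shape (1, 64, 208, 208)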

Input:

Input images can be of any size; there is no need to resize them before feeding them to the network. However, all the images must be stored in a single folder. In the same folder, there should be one text file per image (with the same file name), containing the "true" annotations of the bounding box in YOLOv3 format, i.e.,

    <class id> <Xo/X> <Yo/Y> <W/X> <H/Y>

where,

  • class id = label index of the class to be annotated
  • Xo = X coordinate of the bounding box’s centre
  • Yo = Y coordinate of the bounding box’s centre
  • W = Width of the bounding box
  • H = Height of the bounding box
  • X = Width of the image
  • Y = Height of the image

For multiple objects in the same image, this annotation is saved line-by-line for each object.
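
For example, a bounding box given in pixel coordinates can be converted to this normalised format as follows (a minimal sketch with hypothetical values, separate from the repository scripts):

    def to_yolo_format(class_id, x_min, y_min, box_w, box_h, img_w, img_h):
        # Convert a pixel-coordinate bounding box to the normalised YOLO format
        x_centre = x_min + box_w / 2          # Xo
        y_centre = y_min + box_h / 2          # Yo
        return (class_id,
                x_centre / img_w,             # Xo / X
                y_centre / img_h,             # Yo / Y
                box_w / img_w,                # W / X
                box_h / img_h)                # H / Y
    
    # Hypothetical lesion box of 400x300 px in a 1022x767 image
    print("%d %.6f %.6f %.6f %.6f" % to_yolo_format(0, 311, 233, 400, 300, 1022, 767))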

Steps:

Note: I have used Google Colab, which supports Linux commands. The steps for running this on a local Windows computer are different.

  1. Create "true" annotations using Annotate_YOLO.py, which takes segmented (binary) images as input and returns a text file for every image, labelled in the YOLO format.
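
    A minimal sketch of the idea behind this step is shown below: derive a bounding box from a binary segmentation mask with OpenCV and write it in YOLO format. The actual logic lives in Annotate_YOLO.py, and the file names here are hypothetical.
    
    import cv2
    
    # Hypothetical file names; the real paths are handled in Annotate_YOLO.py
    mask = cv2.imread("ISIC_0000000_segmentation.png", cv2.IMREAD_GRAYSCALE)
    img_h, img_w = mask.shape
    
    # Bounding box of the white (lesion) region in the binary mask
    x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
    
    # Convert to normalised YOLO format (class 0 = lesion) and save next to the image
    line = "0 %.6f %.6f %.6f %.6f" % ((x + w / 2) / img_w, (y + h / 2) / img_h,
                                      w / img_w, h / img_h)
    with open("ISIC_0000000.txt", "w") as f:
        f.write(line + "\n")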

  2. Clone darknet from AlexeyAB's GitHub repository, adjust the Makefile to enable OPENCV and GPU and then build darknet.

    # Clone darknet repo
    !git clone https://github.com/AlexeyAB/darknet
    
    # Change makefile to have GPU and OPENCV enabled
    %cd darknet
    !sed -i 's/OPENCV=0/OPENCV=1/' Makefile
    !sed -i 's/GPU=0/GPU=1/' Makefile
    !sed -i 's/CUDNN=0/CUDNN=1/' Makefile
    !sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
    
    # Build darknet and make the executable file runnable
    !make
    !chmod +x ./darknet
  3. Download the pre-trained weights (darknet53.conv.74) from darknet. These convolutional weights are used to initialise the network before training on the custom dataset (the full YOLOv3 model released with darknet is trained on the COCO dataset of 80 classes).

    !wget https://pjreddie.com/media/files/darknet53.conv.74
    
  4. Define the helper function as in Helper.py that is used to display images.

  5. Split the dataset into train, validation and test sets (including the labels) and store them in the darknet/data folder under the folder names "obj", "valid" and "test" respectively. In my case, total images = 2594, out of which:

    Train = 2094 images; Validation = 488 images; Test = 12 images.
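
    A minimal sketch of such a split is shown below; it assumes the colour images and their label files sit together in a single source folder (the folder name here is hypothetical) and copies them into data/obj, data/valid and data/test inside darknet.
    
    import glob, os, random, shutil
    
    random.seed(0)
    images = sorted(glob.glob("ISIC_images/*.jpg"))   # hypothetical source folder
    random.shuffle(images)
    
    splits = {"obj": images[:2094], "valid": images[2094:2582], "test": images[2582:]}
    for split, files in splits.items():
        out_dir = os.path.join("darknet/data", split)
        os.makedirs(out_dir, exist_ok=True)
        for img in files:
            label = os.path.splitext(img)[0] + ".txt"   # matching YOLO annotation from step 1
            shutil.copy(img, out_dir)
            shutil.copy(label, out_dir)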

  6. Create obj.names containing the class names (one class name per line), and obj.data, which points to the file paths of the training data, the validation data and the backup folder that will store the weights of the model trained on our custom dataset.
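
    For a single "lesion" class, both files could be generated as follows. This is a sketch: the class name, the Drive backup path and the use of data/train.txt and data/test.txt (created in step 8) are assumptions based on the standard darknet obj.data layout.
    
    # Write obj.names: one class name per line
    with open("darknet/data/obj.names", "w") as f:
        f.write("lesion\n")
    
    # Write obj.data: number of classes, image-list paths, class names and backup folder
    obj_data = (
        "classes = 1\n"
        "train = data/train.txt\n"
        "valid = data/test.txt\n"
        "names = data/obj.names\n"
        "backup = /content/drive/MyDrive/darknet/backup\n"
    )
    with open("darknet/data/obj.data", "w") as f:
        f.write(obj_data)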

  7. Tune hyper-parameters by creating a custom config file in the "cfg" folder inside the "darknet" folder. Change the following parameters in yolov3.cfg and save it as yolov3-custom.cfg:

    The parameters are chosen by considering the following:

    - max_batches = (# of classes) * 2000 --> [but no less than 4000]
    
    - steps = (80% of max_batches), (90% of max_batches)
    
    - filters = (# of classes + 5) * 3
    
    - random = 1 to random = 0 --> [to speed up training but slightly reduce accuracy]
    

    I chose the following:

    • Training batch = 64
    • Training subdivisions = 16
    • max_batches = 4000, steps = 3200, 3600
    • classes = 1 in the three YOLO layers
    • filters = 18 in the three convolutional layers just before the YOLO layers.
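
    These choices follow directly from the rules above for a single class, as this quick check shows (plain Python, for illustration only):
    
    num_classes = 1
    
    max_batches = max(num_classes * 2000, 4000)                # 4000
    steps = (int(0.8 * max_batches), int(0.9 * max_batches))   # (3200, 3600)
    filters = (num_classes + 5) * 3                            # 18
    
    print(max_batches, steps, filters)
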
  8. Create "train.txt" and "test.txt" using Generate_Train_Test.py
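
    These files simply list the image paths that darknet reads for training and validation. A minimal sketch of what Generate_Train_Test.py might look like (the actual script in this repository may differ):
    
    import glob
    
    # List the training images stored in darknet/data/obj (see step 5);
    # paths are written relative to the darknet folder, where training is run
    with open("darknet/data/train.txt", "w") as f:
        for path in sorted(glob.glob("darknet/data/obj/*.jpg")):
            f.write(path.replace("darknet/", "") + "\n")
    
    # Same for the validation images in darknet/data/valid
    with open("darknet/data/test.txt", "w") as f:
        for path in sorted(glob.glob("darknet/data/valid/*.jpg")):
            f.write(path.replace("darknet/", "") + "\n")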

  9. Train the Custom Object Detector

    !./darknet detector train data/obj.data cfg/yolov3-custom.cfg darknet53.conv.74 -dont_show -map
    
    # Show the graph to review the performance of the custom object detector
    imShow('chart.png')

    The new weights will be stored in backup folder with the name yolov3-custom_best.weights after training.
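
    If the Colab session disconnects mid-training, the AlexeyAB fork also saves intermediate checkpoints (e.g. yolov3-custom_last.weights) in the backup folder, so training can be resumed from them instead of restarting; a sketch assuming that checkpoint name:
    
    # Resume training from the last saved checkpoint (assumed file name)
    !./darknet detector train data/obj.data cfg/yolov3-custom.cfg /content/drive/MyDrive/darknet/backup/yolov3-custom_last.weights -dont_show -map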

  10. Check the Mean Average Precision (mAP) of the model

    !./darknet detector map data/obj.data cfg/yolov3-custom.cfg /content/drive/MyDrive/darknet/backup/yolov3-custom_best.weights
  11. Run Your Custom Object Detector

    For testing, set batch and subdivisions to 1.

    # To set our custom cfg to test mode
    %cd cfg
    !sed -i 's/batch=64/batch=1/' yolov3-custom.cfg
    !sed -i 's/subdivisions=16/subdivisions=1/' yolov3-custom.cfg
    %cd ..
    
    # To run custom detector
    # thresh flag sets threshold probability for detection
    !./darknet detector test data/obj.data cfg/yolov3-custom.cfg /content/drive/MyDrive/darknet/backup/yolov3-custom_best.weights /content/drive/MyDrive/Test_Lesion/ISIC_0000000.jpg -thresh 0.3
    
    # Show the predicted image with bounding box and its probability
    imShow('predictions.jpg')

Conclusion

The whole process looks like this:

Summary

Chart (loss and mAP vs. iteration number):

Loss Chart

Loss Capture

The model was set to train for 4,000 iterations; however, the rate of decrease in loss is not very significant after about 1,000 iterations. The model performs with similar precision even with roughly 3,000 fewer iterations, which reduces training time (saving over 9 hours of compute) and also lowers the chance of overfitting the data. Hence, training was stopped early, just after 1,100 iterations.

From the above chart, we can see that the average loss is 0.2545 and the mean average precision (mAP) is over 95%, which is extremely good.

Assumption and dependencies

The user is assumed to have access to the ISIC dataset with the colored images required for training, as well as the corresponding binary segmentation images.

Dependencies:

  • Google Colab Notebook
  • Python 3.7 on local machine
  • Python libraries: matplotlib, OpenCV (cv2), glob
  • Darknet, an open-source neural network framework written in C and CUDA

References

Redmon, J., & Farhadi, A. (2018). YOLO: Real-Time Object Detection. Retrieved October 28, 2021, from https://pjreddie.com/darknet/yolo/

About the Author

Lalith Veerabhadrappa Badiger
The University of Queensland, Brisbane, Australia
Master of Data Science
Student ID: 46557829
Email ID: [email protected]
