The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment

Overview

Hailo Model Zoo

The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo you can measure each model's full-precision accuracy, its quantized accuracy in the Hailo Emulator, and its accuracy on the Hailo-8 device. Finally, you can generate the Hailo Executable Format (HEF) binary file to speed up development and build high-quality applications accelerated with Hailo-8. The models are optimized for high accuracy on public datasets and can be used to benchmark the Hailo quantization scheme.

Usage

Quick Start Guide

  • Install the Hailo Dataflow Compiler and enter the virtualenv. If you are not a Hailo customer, please contact hailo.ai
  • Clone the Hailo Model Zoo
git clone https://github.com/hailo-ai/hailo_model_zoo.git
  • Run the setup script
cd hailo_model_zoo; pip install -e .
  • Run the Hailo Model Zoo. For example, to parse the YOLOv3 model:
python hailo_model_zoo/main.py parse yolov3

Getting Started

For further functionality please see the GETTING_STARTED page (full install instructions and usage examples). The Hailo Model Zoo uses the Hailo Dataflow Compiler for parsing, quantization, emulation and compilation of the deep learning models. Full functionality includes the following stages (example invocations are sketched after the list):

  • Parse: translation of the input model into Hailo's internal representation.
  • Profiler: generate a profiler report for the model. The report contains information about your model and its expected performance on the Hailo hardware.
  • Quantize: numeric translation of the input model into a compressed integer representation.
  • Compile: run the Hailo compiler to generate the Hailo Executable Format file (HEF) which can be executed on the Hailo hardware.
  • Evaluate: infer the model using the Hailo Emulator or the Hailo hardware and produce the model accuracy.
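
Each of these stages is exposed as a subcommand of the entry point shown in the Quick Start. As a hedged sketch (exact subcommand names may vary between releases, e.g. quantization may be invoked as optimize rather than quantize), a typical end-to-end flow for one model looks like:

python hailo_model_zoo/main.py parse yolov3
python hailo_model_zoo/main.py profile yolov3
python hailo_model_zoo/main.py optimize yolov3
python hailo_model_zoo/main.py compile yolov3
python hailo_model_zoo/main.py eval yolov3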

For further information about the Hailo Dataflow Compiler please contact hailo.ai.

Models

A full list of pre-trained models can be found here.

License

The Hailo Model Zoo is released under the MIT license. Please see the LICENSE file for more information.

Contact

Please visit hailo.ai for support / requests / issues.

Comments
  • yolov7.hef vs yolov5m_wo_spp_60p.hef

    Hi,

    As far as I know, YOLOv7 is faster and more accurate than YOLOv5.

    In our tests:

    gst-launch-1.0 rtspsrc location=rtsp://xxxxx/ISAPI/Streaming/Channels/101 name=src_0 ! decodebin ! videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailonet hef-path=/local/shared_with_docker/yolov7.hef is-active=true batch-size=1 ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailofilter function-name=yolov5 so-path=/local/workspace/tappas/apps/gstreamer/libs/post_processes//libyolo_post.so config-path=/local/workspace/tappas/apps/gstreamer/general/detection/resources/configs/yolov5.json qos=false ! queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! hailooverlay ! videoconvert ! fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false -v | grep -e hailo_display -e hailodevicestats

    yolov7.hef is almost 7 times slower than the yolov5m_wo_spp_60p.hef version.

    opened by MyraBaba 19
  • Error: Model uses too many reources: 136 Layer-Controllers

    onnx: 1.11.0
    torch: 1.12.1
    torchvision: 0.13.1
    

    Hi, I have fine-tuned the yolov5m_wo_spp.pt model in the yolov5 v6.2 framework. Then I exported the model to ONNX (with opset 11), also in the yolov5 v6.2 framework. When I compile this ONNX model with hailomz compile, the model optimization completes correctly, but then it throws the following error:

    997/1000 [============================>.] - ETA: 1s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74 998/1000 [============================>.] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74 999/1000 [============================>.] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv741000/1000 [==============================] - ETA: 0s - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv741000/1000 [==============================] - 477s 477ms/step - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74: 0.0297
    [info] Fine Tune is done (completion time is 00:38:18.70)
    Calibration: 64entries [00:48,  1.32entries/s]
    
    [info] Model Optimization is done
    [info] Loading model script on yolov5m_wo_spp
    [info] Loading network parameters
    [info] Starting Hailo allocation and compilation flow
    [info] Using Single-context flow
    [info] Resources optimization guidelines: Strategy -> GREEDY Objective -> REQUIRED_FPS
    [info] Resources optimization params: max_control_utilization=120%, max_compute_utilization=100%, max_memory_utilization (weights)=100%, max_input_aligner_utilization=100%, max_apu_utilization=100%
    [info] Running Auto-Merger
    [info] Auto-Merger is done
    [info] Adding a portal between conv27( index=19 604, name=conv27, ) and concat7, type: L4
    [info] Starting context partition
    [info] Context partition is done (0s 2ms)
    [info] Adding format conversion layer 'auto_reshape_from_input_layer1_to_merged_layer_normalization1_space_to_depth1' after input_layer1
    [info] Adding format conversion layer 'auto_reshape_from_conv74_to_output_layer1' after conv74
    [info] Adding format conversion layer 'auto_reshape_from_conv84_to_output_layer2' after conv84
    [info] Adding format conversion layer 'auto_reshape_from_conv93_to_output_layer3' after conv93
    Model uses too many reources: 136 Layer-Controllers
    [critical] Model uses too many reources: 136 Layer-Controllers
    [error] Failed to produce compiled graph
    [error] Tried to deserialize allocator result on failure, but got another exception: No output graph, deserialization failed.
    Traceback (most recent call last):
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/bin/hailomz", line 33, in <module>
        sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 181, in main
        run(args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 170, in run
        return handlers[args.command](args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main_driver.py", line 132, in compile
        compile_model(runner, network_info, args.results_dir, model_script_path=args.model_script_path)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/core/main_utils.py", line 298, in compile_model
        hef = runner.compile()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 661, in compile
        return self._get_hef_hw_representation()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 707, in _get_hef_hw_representation
        serialized_hef = self._sdk_backend.get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1156, in get_hef_hw_representation
        hef, mapped_graph_file = self._get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1151, in _get_hef_hw_representation
        hef, mapped_graph_file, auto_alls = self.hef_full_build(fps, mapping_timeout, model_params, allocator_script)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1128, in hef_full_build
        auto_alls, self._mapped_graph, self._integrated_graph = allocator.create_mapping_and_full_build_hef(
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 568, in create_mapping_and_full_build_hef
        self.call_builder(network_graph_path, output_path, compilation_output_proto=compilation_output_proto,
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 527, in call_builder
        self.run_builder(network_graph_path, output_path, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 394, in run_builder
        raise e.internal_exception("Hailo tools builder failed:", hailo_tools_error=e.hailo_tools_error) from None
    hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendAllocatorException: Hailo tools builder failed: Model uses too many reources: 136 Layer-Controllers
    

    If I train the model in the hailo yolov5 retraining docker container, the compilation works fine. Any idea what this error means?

    opened by frenky-strasak 3
  • Is there a specific implementation limit? (multitasking models, cascaded models, or large models)

    Hi, I have not applied for a Developer Zone account yet (will it be difficult to get approved?). I wonder whether the Hailo-8 chip can run several large models at the same time. Or can you tell me what the implementation limits are? E.g.:

    1. Operator compatibility (what is the highest ONNX opset version supported, and is coverage better than OpenVINO in general?)
    2. What is the on-chip memory available for storing/computing tensors? Can it run super-resolution models with higher output resolutions? Can the chip run very wide fully connected layers?
    3. Hailo-8 can multitask; if I run keypoint detection, ReID and depth estimation at the same time with frame skipping, will the chip's compute or memory capacity be overloaded? How can I spot where the overload is, or estimate it?
    4. Has your team considered having multiple Hailo-8 chips "chained" to run some difficult tasks? This would be super cool.
    opened by BICHENG 2
  • Dataflow Compiler v3.17.0 not available in Developer Zone

    Hi, in the latest changelog update you mentioned that the repository was updated to use the Dataflow Compiler v3.17.0. However, only version 3.16.0 is available in the Developer Zone. How can we get the latest Dataflow Compiler v3.17.0? Can you please add it to the Developer Zone?

    opened by kmaerkl 2
  • Old version of yolov5 in retraining docker container.

    Hi, why does the retraining docker container contain the old yolov5 version v2.0? Is it possible to use a newer version of yolov5 such as v6.2? Are these newer versions of yolov5 compatible with the Dataflow Compiler for optimizing and compiling models to Hailo HEF files? Thanks!

    opened by frenky-strasak 1
  • YoloV7-tiny with on-chip NMS

    Dear Hailo, the output structure of YOLOv5 and YOLOv7 is the same IIRC, so it should be possible to run the NMS on-chip. I wanted to test this, so I took the yolov5xs_wo_spp_nms model from this zoo as a reference. When downloading, I get this NMS config JSON:

    {
      "nms_scores_th": 0.01,
      "nms_iou_th": 1.0,
      "image_dims": [512, 512],
      "max_proposals_per_class": 80,
      "background_removal": false,
      "input_division_factor": 8,
      "classes": 80,
      "bbox_decoders": [
          {
              "name": "bbox_decoder53",
              "w": [
                  10,
                  16,
                  33
              ],
              "h": [
                  13,
                  30,
                  23
              ],
              "stride": 8,
              "encoded_layer": "conv53"
          },
          {
              "name": "bbox_decoder61",
              "w": [
                  30,
                  62,
                  59
              ],
              "h": [
                  61,
                  45,
                  119
              ],
              "stride": 16,
              "encoded_layer": "conv61"
          },
          {
              "name": "bbox_decoder69",
              "w": [
                  116,
                  156,
                  373 
              ],
              "h": [
                  90,
                  198,
                  326
              ],
              "stride": 32,
              "encoded_layer": "conv69"
          }
      ]
    }
    

    I cannot find a description of these parameters anywhere in the documentation. While I understand some of the parameters, like the names and anchors (w, h, stride, etc.), I don't understand these:

      "background_removal": false,
      "input_division_factor": 8,
    

    Can you help me with these parameters? And did you ever test YOLOv7 with on-chip decode/NMS? Further, in the yolov5xs alls file, the following settings are made:

    buffers(proposal_generator0, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator1, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator2, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator0_concat, nms1, 2)
    

    which I don't really understand. Could you explain the usage of those as well? Thanks!

    Cheers

    opened by dnns92 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
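
    The kind of check described above, as a minimal standalone sketch (the archive and directory names here are placeholders, not the actual call site patched in the pull request):

    import os
    import tarfile

    def is_within_directory(directory, target):
        # Resolve both paths and make sure the target stays inside the destination directory.
        abs_directory = os.path.abspath(directory)
        abs_target = os.path.abspath(target)
        return os.path.commonprefix([abs_directory, abs_target]) == abs_directory

    def safe_extract(tar, path="."):
        # Reject any member whose resolved path would escape the destination (path traversal).
        for member in tar.getmembers():
            if not is_within_directory(path, os.path.join(path, member.name)):
                raise Exception("Attempted path traversal in tar file")
        tar.extractall(path)

    with tarfile.open("pretrained_model.tar.gz") as tar:  # placeholder archive name
        safe_extract(tar, path="pretrained")              # placeholder destination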

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Rectangle hef model does not work

    Hi, in the yolov5 retraining container (v.2) I have exported yolov5m_wo_spp.pt to ONNX with rectangular shape: python models/export.py --weights model.pt --img 352 640 --batch 1 (--img H W)

    Then I compiled this onnx model: hailomz compile --ckpt model.onnx --calib-path calib_dataset --yaml yolov5m_wo_spp.yaml where I changed the yolov5m_wo_spp.yaml like this:

    • I added the preprocessing part with corresponding shape:
    preprocessing:
      network_type: detection
      input_shape:
      - 352
      - 640
      - 3
    
    • and I changed the info (the output shapes were found by Netron tool)
    info:
      input_shape: 352x640x3
      output_shape: 11x20x18, 22x40x18, 44x80x18
    

    The complete file is here: yolov5m_wo_spp.zip

    The compilation looks fine, but when I deploy the HEF file in my pipeline I see wrong bounding boxes that are doubled and shifted. They do move together with the object to be detected, though.

    What am I missing? Should I also modify the yolov5m_wo_spp.alls file? Could you point me in the right direction, please? Thank you!

    opened by frenky-strasak 3
  • Fix yolo postprocessing when batches are used

    Hi,

    Currently the YOLO postprocessing does not work when batches are provided. I fixed the bug in this branch: https://github.com/DavidBecht/hailo_model_zoo

    BR

    opened by DavidBecht 0
  • Illegal instruction (core dumped)

    Hi,

    hailomz gives the error below every time (in Docker).

    All other commands and TAPPAS work without any problem.

    hailomz -h
    Illegal instruction (core dumped)

    opened by MyraBaba 1
  • When I run the HEF, something goes wrong. How do I compile a model to HEF on an ARM architecture?

    [HailoRT] [error] CHECK failed - Failed opening non-compatible HEF with the following unsupported extensions: KO Run ASAP (KO_RUN_ASAP)
    [HailoRT] [error] CHECK_SUCCESS failed with status=26
    [HailoRT] [error] Failed parsing HEF file
    [HailoRT] [error] Failed creating HEF
    [HailoRT] [error] CHECK_EXPECTED failed with status=26

    opened by riverfrank 2
  • Post-processing yolov5s personface output

    Hi,

    As a result of inference with the yolov5s_personface model I get 3 tensors of dimensions [1, 40, 40, 21], [1, 20, 20, 21], [1, 80, 80, 21]; what is the correct/fastest procedure to decode them into a list of detections (such as [x_min, y_min, x_max, y_max, score, class])?
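
    For reference, the standard YOLOv5 decode I would expect looks roughly like the sketch below (assumptions: the tensors are the raw, pre-sigmoid head outputs laid out as 3 anchors x (5 + 2 classes), the input resolution is 640x640 so each head's stride is input_size / grid_size, and the default yolov5s anchors are used; the resulting boxes still need NMS):

    import numpy as np

    # Default yolov5s anchors (w, h) per stride; a retrained model may use different ones.
    ANCHORS = {8: [(10, 13), (16, 30), (33, 23)],
               16: [(30, 61), (62, 45), (59, 119)],
               32: [(116, 90), (156, 198), (373, 326)]}

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def decode_head(out, stride, num_classes=2, conf_th=0.25):
        """Decode one [1, H, W, 3*(5+num_classes)] head into rows of [x_min, y_min, x_max, y_max, score, class]."""
        _, h, w, _ = out.shape
        out = sigmoid(out.reshape(h, w, 3, 5 + num_classes))
        dets = []
        for a, (anchor_w, anchor_h) in enumerate(ANCHORS[stride]):
            gy, gx = np.mgrid[0:h, 0:w]
            bx = (out[..., a, 0] * 2.0 - 0.5 + gx) * stride   # box center x, in input pixels
            by = (out[..., a, 1] * 2.0 - 0.5 + gy) * stride   # box center y, in input pixels
            bw = (out[..., a, 2] * 2.0) ** 2 * anchor_w       # box width
            bh = (out[..., a, 3] * 2.0) ** 2 * anchor_h       # box height
            scores = out[..., a, 5:] * out[..., a, 4:5]       # class probability * objectness
            cls = scores.argmax(-1)
            score = scores.max(-1)
            keep = score > conf_th
            dets.append(np.stack([bx[keep] - bw[keep] / 2, by[keep] - bh[keep] / 2,
                                  bx[keep] + bw[keep] / 2, by[keep] + bh[keep] / 2,
                                  score[keep], cls[keep].astype(float)], axis=-1))
        return np.concatenate(dets, axis=0)

    # "outputs" stands for the three dequantized tensors above, in any order;
    # stride = 640 // grid_size (80x80 -> 8, 40x40 -> 16, 20x20 -> 32), then apply NMS:
    # dets = np.concatenate([decode_head(o, 640 // o.shape[1]) for o in outputs])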

    Thanks

    opened by aux82716 6