The Hailo Model Zoo includes pre-trained models and a full build and evaluation environment

Overview

Hailo Model Zoo

The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo you can measure the full-precision accuracy of each model, the quantized accuracy in the Hailo Emulator, and the accuracy on the Hailo-8 device. Finally, you can generate the Hailo Executable Format (HEF) binary file to speed up development and build high-quality applications accelerated with Hailo-8. The models are optimized for high accuracy on public datasets and can be used to benchmark the Hailo quantization scheme.

Usage

Quick Start Guide

  • Install the Hailo Dataflow Compiler and enter its virtualenv. If you are not a Hailo customer, please contact hailo.ai
  • Clone the Hailo Model Zoo
git clone https://github.com/hailo-ai/hailo_model_zoo.git
  • Run the setup script
cd hailo_model_zoo; pip install -e .
  • Run the Hailo Model Zoo. For example, to parse the YOLOv3 model:
python hailo_model_zoo/main.py parse yolov3

Getting Started

For further functionality please see the GETTING_STARTED page (full installation instructions and usage examples). The Hailo Model Zoo uses the Hailo Dataflow Compiler for parsing, quantization, emulation and compilation of deep learning models. Full functionality covers the stages below (example commands follow the list):

  • Parse: translate the input model into Hailo's internal representation.
  • Profiler: generate a profiler report for the model. The report contains information about your model and its expected performance on the Hailo hardware.
  • Quantize: numeric translation of the input model into a compressed integer representation.
  • Compile: run the Hailo compiler to generate the Hailo Executable Format (HEF) file, which can be executed on the Hailo hardware.
  • Evaluate: infer the model using the Hailo Emulator or the Hailo hardware and report the model accuracy.
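
Each stage is driven from the same main.py entry point used in the Quick Start. Below is a sketch of the typical flow for the yolov3 example; the subcommand names mirror the stage list and are assumptions that may vary between releases, so see the GETTING_STARTED page for the exact usage:

python hailo_model_zoo/main.py parse yolov3      # translate the model into Hailo's internal representation
python hailo_model_zoo/main.py profile yolov3    # generate the profiler report
python hailo_model_zoo/main.py quantize yolov3   # produce the compressed integer representation
python hailo_model_zoo/main.py compile yolov3    # generate the HEF file for the Hailo hardware
python hailo_model_zoo/main.py eval yolov3       # measure accuracy in the emulator or on the Hailo-8 device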

For further information about the Hailo Dataflow Compiler please contact hailo.ai.

Models

A full list of the pre-trained models can be found here.

License

The Hailo Model Zoo is released under the MIT license. Please see the LICENSE file for more information.

Contact

Please visit hailo.ai for support / requests / issues.

Comments
  • yolov7.hef vs yolov5m_wo_spp_60p.hef

    Hi,

    As far as I know, yolov7 is faster and more accurate than yolov5.

    In our tests:

    gst-launch-1.0 rtspsrc location=rtsp://xxxxx/ISAPI/Streaming/Channels/101 name=src_0 ! decodebin ! \
        videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! \
        queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
        hailonet hef-path=/local/shared_with_docker/yolov7.hef is-active=true batch-size=1 ! \
        queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
        hailofilter function-name=yolov5 so-path=/local/workspace/tappas/apps/gstreamer/libs/post_processes//libyolo_post.so config-path=/local/workspace/tappas/apps/gstreamer/general/detection/resources/configs/yolov5.json qos=false ! \
        queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
        hailooverlay ! videoconvert ! \
        fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false -v \
        | grep -e hailo_display -e hailodevicestats

    yolov7.hef is almost 7 times slower than the yolov5m_wo_spp_60p.hef version.

    opened by MyraBaba 19
  • Error: Model uses too many reources: 136 Layer-Controllers

    onnx: 1.11.0
    torch: 1.12.1
    torchvision: 0.13.1
    

    Hi, I have fine-tuned the yolov5m_wo_spp.pt model in the yolov5 v6.2 framework. Then I exported the model to ONNX (with opset 11), also in the yolov5 v6.2 framework. When I compile this ONNX model with hailomz compile, the model optimization completes correctly, but then it throws the following error:

    1000/1000 [==============================] - 477s 477ms/step - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74: 0.0297
    [info] Fine Tune is done (completion time is 00:38:18.70)
    Calibration: 64entries [00:48,  1.32entries/s]
    
    [info] Model Optimization is done
    [info] Loading model script on yolov5m_wo_spp
    [info] Loading network parameters
    [info] Starting Hailo allocation and compilation flow
    [info] Using Single-context flow
    [info] Resources optimization guidelines: Strategy -> GREEDY Objective -> REQUIRED_FPS
    [info] Resources optimization params: max_control_utilization=120%, max_compute_utilization=100%, max_memory_utilization (weights)=100%, max_input_aligner_utilization=100%, max_apu_utilization=100%
    [info] Running Auto-Merger
    [info] Auto-Merger is done
    [info] Adding a portal between conv27( index=19 604, name=conv27, ) and concat7, type: L4
    [info] Starting context partition
    [info] Context partition is done (0s 2ms)
    [info] Adding format conversion layer 'auto_reshape_from_input_layer1_to_merged_layer_normalization1_space_to_depth1' after input_layer1
    [info] Adding format conversion layer 'auto_reshape_from_conv74_to_output_layer1' after conv74
    [info] Adding format conversion layer 'auto_reshape_from_conv84_to_output_layer2' after conv84
    [info] Adding format conversion layer 'auto_reshape_from_conv93_to_output_layer3' after conv93
    Model uses too many reources: 136 Layer-Controllers
    [critical] Model uses too many reources: 136 Layer-Controllers
    [error] Failed to produce compiled graph
    [error] Tried to deserialize allocator result on failure, but got another exception: No output graph, deserialization failed.
    Traceback (most recent call last):
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/bin/hailomz", line 33, in <module>
        sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 181, in main
        run(args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 170, in run
        return handlers[args.command](args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main_driver.py", line 132, in compile
        compile_model(runner, network_info, args.results_dir, model_script_path=args.model_script_path)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/core/main_utils.py", line 298, in compile_model
        hef = runner.compile()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 661, in compile
        return self._get_hef_hw_representation()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 707, in _get_hef_hw_representation
        serialized_hef = self._sdk_backend.get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1156, in get_hef_hw_representation
        hef, mapped_graph_file = self._get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1151, in _get_hef_hw_representation
        hef, mapped_graph_file, auto_alls = self.hef_full_build(fps, mapping_timeout, model_params, allocator_script)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1128, in hef_full_build
        auto_alls, self._mapped_graph, self._integrated_graph = allocator.create_mapping_and_full_build_hef(
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 568, in create_mapping_and_full_build_hef
        self.call_builder(network_graph_path, output_path, compilation_output_proto=compilation_output_proto,
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 527, in call_builder
        self.run_builder(network_graph_path, output_path, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 394, in run_builder
        raise e.internal_exception("Hailo tools builder failed:", hailo_tools_error=e.hailo_tools_error) from None
    hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendAllocatorException: Hailo tools builder failed: Model uses too many reources: 136 Layer-Controllers
    

    If I train the model in the hailo yolov5 retraining docker container, the compilation works fine. Any idea what this error means?

    opened by frenky-strasak 3
  • Is there a specific implementation limit? (multitasking models, cascaded models, or large models)

    Hi, I have not applied for a Developer Zone account yet (will it be difficult to get approved for full access?). I wonder whether the Hailo-8 chip can run several large models at the same time. Or can you tell me what the implementation limits are? For example:

    1. Operator compatibility (what is the highest ONNX opset version supported, and is coverage generally better than OpenVINO?)
    2. How much on-chip memory is available for storing/computing tensors? Can it run super-resolution models with higher output resolutions? Can the chip run very wide fully connected layers?
    3. Hailo-8 can multitask: if I run keypoint detection, ReID and depth estimation at the same time with frame skipping, will the chip's compute or memory capacity be overloaded? How can I spot where the overload is, or estimate it?
    4. Has your team considered "chaining" multiple Hailo-8 chips to run some demanding tasks? That would be super cool.
    opened by BICHENG 2
  • Dataflow Compiler v3.17.0 not available in Developer Zone

    Hi, in the latest changelog update you mentioned that the repository was updated to use Dataflow Compiler v3.17.0. However, only version 3.16.0 is available in the Developer Zone. How can we get the latest Dataflow Compiler v3.17.0? Could you please add it to the Developer Zone?

    opened by kmaerkl 2
  • Old version of yolov5 in retraining docker container.

    Hi, why does the retraining docker container contain the old yolov5 v2.0? Is it possible to use a newer version of yolov5, such as v6.2? Are these newer yolov5 versions compatible with the Dataflow Compiler for optimizing and compiling models to Hailo HEF files? Thanks!

    opened by frenky-strasak 1
  • YoloV7-tiny with on-chip NMS

    Dear Hailo, the output structure of YOLOv5 and YOLOv7 is the same IIRC, so it should be possible to run the NMS on-chip. I wanted to test this, so I took the yolov5xs_wo_spp_nms model from this zoo as a reference. When downloading it, I get this NMS config JSON:

    {
      "nms_scores_th": 0.01,
      "nms_iou_th": 1.0,
      "image_dims": [512, 512],
      "max_proposals_per_class": 80,
      "background_removal": false,
      "input_division_factor": 8,
      "classes": 80,
      "bbox_decoders": [
          {
              "name": "bbox_decoder53",
              "w": [
                  10,
                  16,
                  33
              ],
              "h": [
                  13,
                  30,
                  23
              ],
              "stride": 8,
              "encoded_layer": "conv53"
          },
          {
              "name": "bbox_decoder61",
              "w": [
                  30,
                  62,
                  59
              ],
              "h": [
                  61,
                  45,
                  119
              ],
              "stride": 16,
              "encoded_layer": "conv61"
          },
          {
              "name": "bbox_decoder69",
              "w": [
                  116,
                  156,
                  373 
              ],
              "h": [
                  90,
                  198,
                  326
              ],
              "stride": 32,
              "encoded_layer": "conv69"
          }
      ]
    }
    

    I cannot find a description of these parameters anywhere in the documentation. While I understand some of them, like the names and anchors (w, h, stride, etc.), I don't get these ones:

      "background_removal": false,
      "input_division_factor": 8,
    

    Can you help me with these parameters? And did you ever test yolov7 with on-chip decode/NMS? Further, in the yolov5xs alls file, the following settings are made:

    buffers(proposal_generator0, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator1, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator2, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator0_concat, nms1, 2)
    

    which I don't really understand. Could you explain the usage of those as well? Thanks!

    Cheers

    opened by dnns92 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.
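
    For reference, here is a minimal sketch of the kind of check the patch adds (an illustration of the general idea only, not the exact code in the pull request; the archive name is hypothetical):

    import os
    import tarfile

    def safe_extractall(tar: tarfile.TarFile, dest: str = ".") -> None:
        """Refuse to extract any member that would land outside the destination directory."""
        dest = os.path.realpath(dest)
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if os.path.commonpath([dest, target]) != dest:
                raise RuntimeError(f"Blocked path traversal in tar member: {member.name}")
        tar.extractall(dest)

    # Hypothetical usage with a downloaded pretrained archive:
    with tarfile.open("pretrained.tar.gz") as tar:
        safe_extractall(tar, "pretrained")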

    If you have further questions you may contact us through this project's lead researcher, Kasimir Schulz.

    opened by TrellixVulnTeam 0
  • Rectangle HEF model does not work

    Hi, in the yolov5 retraining container (v.2) I exported yolov5m_wo_spp.pt to ONNX with a rectangular shape: python models/export.py --weights model.pt --img 352 640 --batch 1 (--img H W)

    Then I compiled this ONNX model with hailomz compile --ckpt model.onnx --calib-path calib_dataset --yaml yolov5m_wo_spp.yaml, where I changed yolov5m_wo_spp.yaml like this:

    • I added the preprocessing part with the corresponding shape:
    preprocessing:
      network_type: detection
      input_shape:
      - 352
      - 640
      - 3
    
    • and I changed the info section (the output shapes were found with the Netron tool; see the sanity check below):
    info:
      input_shape: 352x640x3
      output_shape: 11x20x18, 22x40x18, 44x80x18
    

    The complete file is here: yolov5m_wo_spp.zip
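
    As a quick sanity check on those shapes (my own sketch, not a Hailo tool): the three YOLOv5 heads at strides 8/16/32, with 3 anchors each and a single class, give exactly the output_shape values above for a 352x640 input:

    num_anchors, num_classes = 3, 1
    channels = num_anchors * (5 + num_classes)     # 4 box coords + 1 objectness + 1 class -> 18
    height, width = 352, 640
    for stride in (8, 16, 32):
        print(f"{height // stride}x{width // stride}x{channels}")
    # prints 44x80x18, 22x40x18, 11x20x18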

    The compilation looks fine, but when I deploy the HEF file in my pipeline I see wrong bounding boxes that are doubled and shifted, although they move in the same way as the object to be detected.

    What am I missing? Should I also modify the yolov5m_wo_spp.alls file? Could you point me in the right direction, please? Thank you!

    opened by frenky-strasak 3
  • Fix yolo postprocessing when batches are used

    Hi,

    Currently the YOLO postprocessing does not work when batches are provided. I fixed the bug in this branch: https://github.com/DavidBecht/hailo_model_zoo

    BR

    opened by DavidBecht 0
  • Illegal instruction (core dumped)

    Hi,

    hailomz gives the error below every time (in Docker).

    All other commands and TAPPAS work without any problem.

    hailomz -h
    Illegal instruction (core dumped)

    opened by MyraBaba 1
  • When I run the HEF, something goes wrong. How can I compile a model to HEF on an ARM architecture?

    [HailoRT] [error] CHECK failed - Failed opening non-compatible HEF with the following unsupported extensions: KO Run ASAP (KO_RUN_ASAP)
    [HailoRT] [error] CHECK_SUCCESS failed with status=26
    [HailoRT] [error] Failed parsing HEF file
    [HailoRT] [error] Failed creating HEF
    [HailoRT] [error] CHECK_EXPECTED failed with status=26

    opened by riverfrank 2
  • Post-processing yolov5s personface output

    Hi,

    As a result of inference with the yolov5s_personface model I get 3 tensors of dimensions [1, 40, 40, 21], [1, 20, 20, 21] and [1, 80, 80, 21]; what is the correct/fastest procedure to decode them into a list of detections (such as [x_min, y_min, x_max, y_max, score, class])?
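
    For context, here is a minimal NumPy sketch of the standard YOLOv5-style decode I am assuming (raw, pre-sigmoid head outputs; a channel layout of 3 anchors x (x, y, w, h, obj, person, face); and the default YOLOv5 anchors per stride, the same values as in the model zoo NMS configs). If the compiled model already applies the sigmoid on-chip, the sigmoid call should be dropped:

    import numpy as np

    ANCHORS = {8: [(10, 13), (16, 30), (33, 23)],
               16: [(30, 61), (62, 45), (59, 119)],
               32: [(116, 90), (156, 198), (373, 326)]}

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def decode_head(out, stride, conf_th=0.25):
        """out: one head of shape (1, H, W, 21) -> array of [x1, y1, x2, y2, score, class]."""
        h, w = out.shape[1:3]
        out = sigmoid(out.reshape(h, w, 3, 7))                 # 3 anchors x (x, y, w, h, obj, 2 classes)
        gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        grid = np.stack([gx, gy], axis=-1)[:, :, None, :]      # cell offsets, shape (H, W, 1, 2)
        anchors = np.array(ANCHORS[stride], dtype=np.float32)  # shape (3, 2)
        xy = (out[..., 0:2] * 2.0 - 0.5 + grid) * stride       # box centers in input pixels
        wh = (out[..., 2:4] * 2.0) ** 2 * anchors              # box sizes in input pixels
        scores = out[..., 4:5] * out[..., 5:7]                 # objectness * class probability
        best, cls = scores.max(-1), scores.argmax(-1)
        keep = best > conf_th
        boxes = np.concatenate([xy - wh / 2.0, xy + wh / 2.0], axis=-1)
        return np.concatenate([boxes[keep], best[keep, None], cls[keep, None]], axis=-1)

    # Decode each of the three heads with its stride, concatenate the results, then run class-wise NMS.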

    Thanks

    opened by aux82716 6