Visualizer for neural network, deep learning, and machine learning models

Overview

Netron is a viewer for neural network, deep learning and machine learning models.

Netron supports ONNX (.onnx, .pb, .pbtxt), Keras (.h5, .keras), TensorFlow Lite (.tflite), Caffe (.caffemodel, .prototxt), Darknet (.cfg), Core ML (.mlmodel), MNN (.mnn), MXNet (.model, -symbol.json), ncnn (.param), PaddlePaddle (.zip, __model__), Caffe2 (predict_net.pb), Barracuda (.nn), Tengine (.tmfile), TNN (.tnnproto), RKNN (.rknn), MindSpore Lite (.ms), UFF (.uff).

Netron has experimental support for TensorFlow (.pb, .meta, .pbtxt, .ckpt, .index), PyTorch (.pt, .pth), TorchScript (.pt, .pth), OpenVINO (.xml), Torch (.t7), Arm NN (.armnn), BigDL (.bigdl, .model), Chainer (.npz, .h5), CNTK (.model, .cntk), Deeplearning4j (.zip), MediaPipe (.pbtxt), ML.NET (.zip), scikit-learn (.pkl), TensorFlow.js (model.json, .pb).

Install

macOS: Download the .dmg file or run brew install netron

Linux: Download the .AppImage file or run snap install netron

Windows: Download the .exe installer or run winget install netron

Browser: Start the browser version.

Python Server: Run pip install netron and netron [FILE] or netron.start('[FILE]').
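
For example, the Python server can also be started from code (a minimal sketch; 'model.onnx' is a placeholder file name):

    import netron
    # Start the local server and open the model in the default browser
    netron.start('model.onnx')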

Models

Sample model files to download or open using the browser version:

Comments
  • Windows app not closing properly

    After the latest update, Netron remains open, taking up memory and CPU, after I close the program. I must close it through Task Manager each time. I am on Windows 10.

    no repro 
    opened by idenc 22
  • TorchScript: ValueError: not enough values to unpack

    • Netron app and version: web app 5.5.9?
    • OS and browser version: Manjaro GNOME on firefox 97.0.1

    Steps to Reproduce:

    1. use torch.broadcast_tensors
    2. export with torch.jit.trace(...).save()
    3. open in netron.app

    I have also gotten an Unsupported function 'torch.broadcast_tensors' error, but have been unable to reproduce it due to this current error. Most likely, the fix for the repro below will cover both bugs.

    Please attach or link model files to reproduce the issue if necessary.

    Repro:

    import torch
    
    class Test(torch.nn.Module):
        def forward(self, a, b):
            a, b = torch.broadcast_tensors(a, b)
            assert a.shape == b.shape == (3, 5)
            return a + b
    
    torch.jit.trace(
        Test(),
        (torch.ones(3, 1), torch.ones(1, 5)),
    ).save("foobar.pt")
    

    Zipped foobar.pt: foobar.zip

    help wanted bug 
    opened by pbsds 15
  • OpenVINO support

    • [x] 1. Opening rm_lstm4f.xml results in TypeError (#192)
    • [x] 2. dot files are not opened any more - need to fix it (#192)
    • [x] 3. add preflight check for invalid xml and dot content
    • [x] 6. Add test files to ./test/models.json (#195) (#211)
    • [x] 9. Add support for the version 3 of IR (#196)
    • [x] 10. Category color support (#203)
    • [x] 11. -metadata.json for coloring, documentation and attribute default filtering (#203).
    • [x] 5. Filter attribute defaults based on -metadata.json to show fewer attributes in the graph
    • [ ] 7. Show weight tensors
    • [x] 8. Graph inputs and outputs should be exposed as Graph.inputs and Graph.outputs
    • [x] 12. Move to DOMParser
    • [x] 13. Remove dot support
    feature 
    opened by lutzroeder 15
  • RangeError: Maximum call stack size exceeded

    • Netron app and version: 4.4.8 App and Browser
    • OS and browser version: Windows 10 + Chrome Version 84.0.4147.135

    Steps to Reproduce:

    EfficientDet-d0.zip

    Please attach or link model files to reproduce the issue if necessary.

    help wanted no repro bug 
    opened by ryusaeba 14
  • Debugging Tensorflow Lite Model

    Hi there,

    First off, just wanted to say thanks for creating such a great tool - Netron is very useful.

    I'm having an issue that likely stems from Tensorflow, rather than from Netron, but thought you might have some insights. In my flow, I use TF 1.15 to go from .ckpt --> frozen .pb --> .tflite. Normally it works reasonably smoothly, but a recent run shows an issue with the .tflite file: it is created without errors, it runs, but it performs poorly. Opening it with Netron shows that the activation functions (relu6 in this case) have been removed for every layer. Opening the equivalent .pb file in Netron shows the relu6 functions are present.

    Have you seen any cases in which Netron struggled with a TF Lite model (perhaps it can open, but isn't displaying correctly)? Also, how did you figure out the format for .tflite files (perhaps knowing this would allow me to debug it more deeply)?
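
    As a minimal sketch (assuming the TensorFlow Python package is available; 'model.tflite' stands in for the converted file), the tensors that survived conversion can be cross-checked programmatically. The .tflite format itself is a FlatBuffers file described by TensorFlow Lite's schema.fbs:

    import tensorflow as tf

    # Load the converted model and list its tensors to see what the converter kept
    interpreter = tf.lite.Interpreter(model_path='model.tflite')
    interpreter.allocate_tensors()
    for detail in interpreter.get_tensor_details():
        print(detail['index'], detail['name'], detail['shape'])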

    Thanks in advance.

    no repro 
    opened by mm7721 12
  • add armnn serialized format support

    Here's a patch to support the Arm NN serialized format (experimental).

    armnn-schema.js is compiled from ArmnnSchema.fbs, which is included in the Arm NN serializer.

    see also:

    armnn: https://github.com/ARM-software/armnn

    As mentioned in #363, I will check the items below:

    • [x] Add sample files to test/models.json and run node test/test.js armnn
    • [x] Add tools/armnn script and sync, schema to automate regenerating armnn-schema.js
    • [x] Add tools/armnn script to run as part of ./Makefile
    • [x] Run make lint
    opened by Tee0125 12
  • TorchScript: Argument names to match runtime

    Hi, I have a question about node names in a .pt model saved with TorchScript. I use Netron to view my .pt model exported with torch.jit.save(), but the node names don't match the real names resolved through the TorchScript interface. It looks like the names in the .pt file are arranged numerically from smallest to largest, but this is clearly not the case when they are parsed from TorchScript's interface. How can this situation be resolved? Thanks a lot! Looking forward to your reply.
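
    For reference, a minimal sketch of listing the names that the TorchScript interface assigns ('model.pt' stands in for the saved module):

    import torch

    # Load the saved TorchScript module and inspect its graph;
    # node kinds and output value names come from TorchScript itself.
    module = torch.jit.load('model.pt')
    print(module.graph)
    for node in module.graph.nodes():
        print(node.kind(), [v.debugName() for v in node.outputs()])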

    help wanted 
    opened by daodaoawaker 11
  • Support torch.fx IR visualization using netron

    torch.fx is a library in PyTorch 1.8 that allows Python-to-Python model transformations. It works by symbolically tracing the PyTorch model into a graph (fx.GraphModule), which can be transformed and finally exported back to code, or used as an nn.Module directly. Currently there is no mechanism to import this graph IR into Netron. An indirect path is to export to ONNX for visualization, which is not as useful when debugging transformations that potentially break ONNX exportability. It seems valuable to visualize the traced graph directly in Netron.
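
    A minimal sketch of the graph IR in question, produced with torch.fx directly (the toy module is only for illustration):

    import torch
    import torch.fx

    class Net(torch.nn.Module):
        def forward(self, x):
            return torch.relu(x) + 1

    # Symbolically trace the module into an fx.GraphModule and inspect its graph IR
    gm = torch.fx.symbolic_trace(Net())
    print(gm.graph)  # the traced graph IR
    print(gm.code)   # the IR exported back to Python code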

    feature help wanted no repro 
    opened by sjain-stanford 11
  • TorchScript unsupported functions in after update

    I have a lot of basic model files saved with TorchScript, and they could all be opened a few weeks ago. However, I cannot open many of them after updating Netron to v3.9.1. Many common functions are not supported now, e.g. torch.constant_pad_nd, torch.bmm, torch.avg_pool3d.

    opened by lujq96 11
  • OpenVINO IR v10 LSTM support

    • Netron app and version: 4.4.4
    • OS and browser version: Windows 10 64bit

    Steps to Reproduce:

    1. Open OpenVINO IR XML file in netron

    Please attach or link model files to reproduce the issue if necessary.

    I cannot share the proprietary model that shows dozens of disconnected nodes, but the one linked below does show disconnected subgraphs after conversion to OpenVINO IR. Note that the IR generated using the --generate_deprecated_IR_V7 option displays correctly.

    https://github.com/ARM-software/ML-KWS-for-MCU/blob/master/Pretrained_models/Basic_LSTM/Basic_LSTM_S.pb

    Convert using:

    python 'C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo.py' --input_model .\Basic_LSTM_S.pb --input=Reshape:0 --input_shape=[1,490] --output=Output-Layer/add

    This results in the following disconnected graph display:

    [screenshot: disconnected graph]

    no repro external bug 
    opened by mdeisher 10
  • Full support for scikit-learn (joblib)

    For recoverable estimator persistence, scikit-learn recommends using joblib (instead of pickle). Side note: it is possible to export trained models to ONNX or PMML, but those estimators are not recoverable. For more info, refer to the scikit-learn documentation on model persistence.
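
    A minimal sketch of the joblib round-trip described above (the estimator and file name are only illustrative):

    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Fit a simple estimator and persist it with joblib
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    joblib.dump(model, 'model.joblib')

    # Reload it; unlike an ONNX or PMML export, the estimator is fully recoverable
    restored = joblib.load('model.joblib')
    print(restored.predict(X[:5]))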

    bug 
    opened by fkromer 9
  • Export full size image

    I have an ONNX file successfully exported from mmsegmentation (Swin Transformer), a huge model (975.4 MB). I managed to open it in Netron, however when I try to export the image and preview it at full size it is blurred.

    Is there any way I can fix this? Thanks.

    no repro bug 
    opened by adrianodac 0
  • TorchScript: torch.jit.mobile.serialization support

    Export PyTorch model to FlatBuffers file:

    import torch
    import torchvision
    model = torchvision.models.resnet34(weights=torchvision.models.ResNet34_Weights.DEFAULT)
    torch.jit.save_jit_module_to_flatbuffer(torch.jit.script(model), 'resnet34.ff')
    

    Sample files: scriptmodule.ff.zip squeezenet1_1_traced.ff.zip

    feature 
    opened by lutzroeder 0
  • MegEngine: fix some bugs

    Fix some bugs in MegEngine C++ model (.mge) visualization:

    1. show the shapes of intermediate tensors;
    2. fix matching of the model identifier (mgv2), which may be preceded by extra information;

    Please help review, thanks!

    opened by Ysllllll 0
  • TorchScript server

    import torch
    import torchvision
    import torch.utils.tensorboard
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
    script = torch.jit.script(model)
    script.save('fasterrcnn_resnet50_fpn.pt')
    with torch.utils.tensorboard.SummaryWriter('log') as writer:
        writer.add_graph(script, ())
    

    fasterrcnn_resnet50_fpn.pt.zip

    feature 
    opened by lutzroeder 0