3D visualizer for ML models and internal tensors

Overview


The free Zetane Viewer is a tool for understanding and accelerating discovery in machine learning and artificial neural networks. It helps open the AI black box by letting you visualize and inspect a model's architecture and internal data (feature maps, weights, biases, and layer output tensors). Think of it as neuroimaging, or brain imaging, for artificial neural networks and machine learning algorithms.

You can also launch your own Zetane workspace directly from your existing scripts or notebooks via a few commands using the Zetane Python API.
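
For instance, a minimal launch script might look like the sketch below. This is an illustration only: the class and method names are assumptions inferred from the Zetane documentation and the context-manager feature mentioned in the 1.7.4 release notes, so check docs.zetane.com for the authoritative API.

    # Hypothetical sketch -- names are illustrative, not verified against the docs.
    import zetane

    # The 1.7.4 release notes mention a context manager that automates
    # view updates and cleanup; usage presumably looks like this:
    with zetane.Context() as zctx:
        zmodel = zctx.model()          # create a model view in the engine
        zmodel.onnx("model.onnx")      # attach an ONNX file (path is a placeholder)
        zmodel.update()                # render the model in the Zetane workspace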



Zetane Viewer

Installation

You can install the free Zetane Viewer for Windows, Linux, and macOS, and explore ZTN and ONNX files.

Download for Windows

Download for Linux

Download for Mac

Tutorial

In this video, we will show you how to load a Zetane or ONNX model, navigate the model and view different tensors:

Below are step-by-step instructions for loading and inspecting a model in the Zetane Viewer:

  • How to load a model

The viewer supports both .onnx and .ztn files. The ZTN files were generated from the Keras and PyTorch scripts shared in this Git repository. After launching the viewer, to load a Zetane model, simply click “Load Zetane Model” in the DATA I/O menu. To load an ONNX model, click “Import ONNX Model” in the same menu. Below you can access ZTN files for a few models to load. You can also access ONNX files from the ONNX Model Zoo.
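
If an ONNX file fails to load, it can help to validate it outside the viewer first. A minimal sketch using the standard onnx Python package (the file name is a placeholder):

    import onnx

    # Load and structurally validate an ONNX file before opening it in the viewer.
    model = onnx.load("model.onnx")
    onnx.checker.check_model(model)  # raises an exception if the graph is malformed
    print(onnx.helper.printable_graph(model.graph))  # text summary of the graph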


When a model is displayed in the Zetane engine, any component of the model can be accessed in a few clicks.

At the highest level, we have the model architecture, which is composed of interconnected nodes and tensors. Each node represents an operator of the computational graph. Typically, an input tensor is passed to the model and, as it flows through the nodes, is transformed into intermediate tensors until it reaches the model's output tensor. In the Zetane engine, data flows from left to right.
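
The same left-to-right structure can be walked programmatically: in ONNX terms, each graph node is an operator with named input and output tensors. A short sketch with the onnx package (file name assumed):

    import onnx

    model = onnx.load("model.onnx")
    # Each node is one operator; data flows from its input tensors to its outputs.
    for node in model.graph.node:
        print(node.op_type, list(node.input), "->", list(node.output))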


  • How to navigate

You can navigate the model viewer window by right-clicking and dragging to explore the space, and by using the scroll wheel to zoom in and out. Here is the complete list of navigation instructions. You can change the behavior of the mouse wheel (zoom or navigate) via the Mouse Zoom toggle in the top menu.


  • Loading custom model inputs

After loading a model, you may want to send your own inputs to it for inference. Zetane supports loading .npy, .npz, .png, .jpg, .pb (protobuf), .tiff, and .hdr files that match the input dimensions of the model. The Zetane engine will attempt to intelligently resize the loaded file (if possible) so the data can be sent to the model. After loading and running the input, you can explore in detail how your model interpreted the input data.
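
As an example of preparing such an input, the snippet below writes a random tensor to .npy and .npz with NumPy. The shape (1, 3, 224, 224) is just a common image-classifier assumption; match it to your model's declared input dimensions.

    import numpy as np

    # Shape is an assumption -- use your model's actual input dimensions.
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    np.save("custom_input.npy", x)            # single unnamed array -> .npy
    np.savez("custom_inputs.npz", input=x)    # named arrays -> .npz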


  • How to inspect different layers and feature maps

For each layer, you have the option to view all the feature maps and filters by clicking “Show Feature Maps” on the node. You can inspect the inputs, outputs, weights, and biases using the tensor view bar.


  • Tensor view bar

By clicking on the associated button, you can visualize inputs, outputs, weights and biases (if applicable) for each individual layer. You can also investigate the shape, type, mean and standard deviation of each tensor.


Statistics about the tensor's values and their distribution are given in the histogram in the top panel, along with the tensor's name and shape. The tensor and its values are represented in the middle panel, and the bottom section contains tensor visualization parameters and a refresh button that lets the user refresh the tensor. This is useful when the input or the weights are changing in real time.
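
These panel statistics are ordinary summary statistics; for reference, the NumPy equivalents for any tensor are:

    import numpy as np

    t = np.load("custom_input.npy")   # any tensor
    print("shape:", t.shape, "dtype:", t.dtype)
    print("mean:", t.mean(), "std:", t.std())
    # Distribution like the top-panel histogram:
    counts, bin_edges = np.histogram(t, bins=50)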


  • Styles of tensor visualization

Tensors can be inspected in different ways, including 3D and 2D views, with or without the actual values.




| Tensor View | Screenshot |
| --- | --- |
| N-dimensional tensor projected in 3D space | tensor_viz_3d |
| N-dimensional tensor projected in 2D space | tensor_viz_2d |
| Tensor values, with a color representation of each value based on the gradient shown on the x-axis of the distribution histogram | tensor_viz_color-values |
| Tensor values | tensor_viz_values |
| Feature-map view when the tensor has three dimensions | tensor_viz_values |

Models

We have generated a few ZTN models so you can inspect their architecture and internal tensors in the viewer. We have also provided the code used to generate these models; a minimal export sketch follows the list of categories below.

Image Classification

Object Detection

Image Segmentation

Body, Face and Gesture Analysis

Image Manipulation

XAI

Classic Machine Learning
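
As a sketch of how such files can be produced (the model and file names below are placeholders, not the repository's actual scripts), a PyTorch model can be exported to ONNX and then opened in the viewer with “Import ONNX Model”:

    import torch
    import torchvision.models as models

    # Placeholder network -- substitute your own model here.
    model = models.resnet18(pretrained=True).eval()
    dummy = torch.randn(1, 3, 224, 224)   # dummy input fixes the graph's input shape
    torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)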




Comments
  • BUG: Viewer crashes when loading any model

    I've tried loading multiple models including emotion-ferplus (both onnx and ztn formats) but they always immediately crash the viewer.

    OS: Ubuntu 20.04
    Zetane 1.3.2
    Dump:

    LoadUniverse(): ZTN_REQUIRE_LOGIN = 1 
    online = 0 
    ================== ExposeIRnodes: ================== 
    @@@ ExposeIRnodes() n_IR_outputs = 51. 
     <- [Parameter1367_reshape1. 
     <- [Minus340_Output_0. 
     <- [Block352_Output_0. 
     <- [Convolution362_Output_0. 
     <- [Plus364_Output_0. 
     <- [ReLU366_Output_0. 
     <- [Convolution380_Output_0. 
     <- [Plus382_Output_0. 
     <- [ReLU384_Output_0. 
     <- [Pooling398_Output_0. 
     <- [Dropout408_Output_0. 
     <- [Convolution418_Output_0. 
     <- [Plus420_Output_0. 
     <- [ReLU422_Output_0. 
     <- [Convolution436_Output_0. 
     <- [Plus438_Output_0. 
     <- [ReLU440_Output_0. 
     <- [Pooling454_Output_0. 
     <- [Dropout464_Output_0. 
     <- [Convolution474_Output_0. 
     <- [Plus476_Output_0. 
     <- [ReLU478_Output_0. 
     <- [Convolution492_Output_0. 
     <- [Plus494_Output_0. 
     <- [ReLU496_Output_0. 
     <- [Convolution510_Output_0. 
     <- [Plus512_Output_0. 
     <- [ReLU514_Output_0. 
     <- [Pooling528_Output_0. 
     <- [Dropout538_Output_0. 
     <- [Convolution548_Output_0. 
     <- [Plus550_Output_0. 
     <- [ReLU552_Output_0. 
     <- [Convolution566_Output_0. 
     <- [Plus568_Output_0. 
     <- [ReLU570_Output_0. 
     <- [Convolution584_Output_0. 
     <- [Plus586_Output_0. 
     <- [ReLU588_Output_0. 
     <- [Pooling602_Output_0. 
     <- [Dropout612_Output_0. 
     <- [Dropout612_Output_0_reshape0. 
     <- [Times622_Output_0. 
     <- [Plus624_Output_0. 
     <- [ReLU636_Output_0. 
     <- [Dropout646_Output_0. 
     <- [Times656_Output_0. 
     <- [Plus658_Output_0. 
     <- [ReLU670_Output_0. 
     <- [Dropout680_Output_0. 
     <- [Times690_Output_0. 
    node_name = Node_0000000000_Times622_reshape1_Reshape. 
     -> [Parameter1367_reshape1. 
    node_name = Node_0000000001_Minus340_Sub. 
     -> [Minus340_Output_0. 
    node_name = Node_0000000002_Block352_Div. 
     -> [Block352_Output_0. 
    node_name = Node_0000000003_Convolution362_Conv. 
     -> [Convolution362_Output_0. 
    node_name = Node_0000000004_Plus364_Add. 
     -> [Plus364_Output_0. 
    node_name = Node_0000000005_ReLU366_Relu. 
     -> [ReLU366_Output_0. 
    node_name = Node_0000000006_Convolution380_Conv. 
     -> [Convolution380_Output_0. 
    node_name = Node_0000000007_Plus382_Add. 
     -> [Plus382_Output_0. 
    node_name = Node_0000000008_ReLU384_Relu. 
     -> [ReLU384_Output_0. 
    node_name = Node_0000000009_Pooling398_MaxPool. 
     -> [Pooling398_Output_0. 
    node_name = Node_0000000010_Dropout408_Dropout. 
     -> [Dropout408_Output_0. 
    node_name = Node_0000000011_Convolution418_Conv. 
     -> [Convolution418_Output_0. 
    node_name = Node_0000000012_Plus420_Add. 
     -> [Plus420_Output_0. 
    node_name = Node_0000000013_ReLU422_Relu. 
     -> [ReLU422_Output_0. 
    node_name = Node_0000000014_Convolution436_Conv. 
     -> [Convolution436_Output_0. 
    node_name = Node_0000000015_Plus438_Add. 
     -> [Plus438_Output_0. 
    node_name = Node_0000000016_ReLU440_Relu. 
     -> [ReLU440_Output_0. 
    node_name = Node_0000000017_Pooling454_MaxPool. 
     -> [Pooling454_Output_0. 
    node_name = Node_0000000018_Dropout464_Dropout. 
     -> [Dropout464_Output_0. 
    node_name = Node_0000000019_Convolution474_Conv. 
     -> [Convolution474_Output_0. 
    node_name = Node_0000000020_Plus476_Add. 
     -> [Plus476_Output_0. 
    node_name = Node_0000000021_ReLU478_Relu. 
     -> [ReLU478_Output_0. 
    node_name = Node_0000000022_Convolution492_Conv. 
     -> [Convolution492_Output_0. 
    node_name = Node_0000000023_Plus494_Add. 
     -> [Plus494_Output_0. 
    node_name = Node_0000000024_ReLU496_Relu. 
     -> [ReLU496_Output_0. 
    node_name = Node_0000000025_Convolution510_Conv. 
     -> [Convolution510_Output_0. 
    node_name = Node_0000000026_Plus512_Add. 
     -> [Plus512_Output_0. 
    node_name = Node_0000000027_ReLU514_Relu. 
     -> [ReLU514_Output_0. 
    node_name = Node_0000000028_Pooling528_MaxPool. 
     -> [Pooling528_Output_0. 
    node_name = Node_0000000029_Dropout538_Dropout. 
     -> [Dropout538_Output_0. 
    node_name = Node_0000000030_Convolution548_Conv. 
     -> [Convolution548_Output_0. 
    node_name = Node_0000000031_Plus550_Add. 
     -> [Plus550_Output_0. 
    node_name = Node_0000000032_ReLU552_Relu. 
     -> [ReLU552_Output_0. 
    node_name = Node_0000000033_Convolution566_Conv. 
     -> [Convolution566_Output_0. 
    node_name = Node_0000000034_Plus568_Add. 
     -> [Plus568_Output_0. 
    node_name = Node_0000000035_ReLU570_Relu. 
     -> [ReLU570_Output_0. 
    node_name = Node_0000000036_Convolution584_Conv. 
     -> [Convolution584_Output_0. 
    node_name = Node_0000000037_Plus586_Add. 
     -> [Plus586_Output_0. 
    node_name = Node_0000000038_ReLU588_Relu. 
     -> [ReLU588_Output_0. 
    node_name = Node_0000000039_Pooling602_MaxPool. 
     -> [Pooling602_Output_0. 
    node_name = Node_0000000040_Dropout612_Dropout. 
     -> [Dropout612_Output_0. 
    node_name = Node_0000000041_Times622_reshape0_Reshape. 
     -> [Dropout612_Output_0_reshape0. 
    node_name = Node_0000000042_Times622_MatMul. 
     -> [Times622_Output_0. 
    node_name = Node_0000000043_Plus624_Add. 
     -> [Plus624_Output_0. 
    node_name = Node_0000000044_ReLU636_Relu. 
     -> [ReLU636_Output_0. 
    node_name = Node_0000000045_Dropout646_Dropout. 
     -> [Dropout646_Output_0. 
    node_name = Node_0000000046_Times656_MatMul. 
     -> [Times656_Output_0. 
    node_name = Node_0000000047_Plus658_Add. 
     -> [Plus658_Output_0. 
    node_name = Node_0000000048_ReLU670_Relu. 
     -> [ReLU670_Output_0. 
    node_name = Node_0000000049_Dropout680_Dropout. 
     -> [Dropout680_Output_0. 
    node_name = Node_0000000050_Times690_MatMul. 
     -> [Times690_Output_0. 
    node_name = Node_0000000051_Plus692_Add. 
    @@@ ExposeIRnodes() [Outputs] = 1 --> 52. 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
    ***************** ValidateIRnodes: ***************** 
    ====================================  
    input_dims = [ 1, 1, 64, 64, ]. 
    --> input 0[Input3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 1[Constant339]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ ]. 
    --> input 2[Constant343]: Type 1; [0 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 3, 3, ]. 
    --> input 3[Parameter3]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 4[Parameter4]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 64, 64, 3, 3, ]. 
    --> input 5[Parameter23]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 64, 1, 1, ]. 
    --> input 6[Parameter24]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 64, 3, 3, ]. 
    --> input 7[Parameter63]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 8[Parameter64]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 128, 128, 3, 3, ]. 
    --> input 9[Parameter83]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 128, 1, 1, ]. 
    --> input 10[Parameter84]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 128, 3, 3, ]. 
    --> input 11[Parameter575]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 12[Parameter576]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 13[Parameter595]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 14[Parameter596]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 15[Parameter615]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 16[Parameter616]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 17[Parameter655]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 18[Parameter656]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 19[Parameter675]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 20[Parameter676]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 256, 256, 3, 3, ]. 
    --> input 21[Parameter695]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 256, 1, 1, ]. 
    --> input 22[Parameter696]: Type 1; [3 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 23[Dropout612_Output_0_reshape0_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 256, 4, 4, 1024, ]. 
    --> input 24[Parameter1367]: Type 1; [4 dims] tensor  
    --------------- 
    input_dims = [ 2, ]. 
    --> input 25[Parameter1367_reshape1_shape]: Type 7; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 26[Parameter1368]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 1024, ]. 
    --> input 27[Parameter1403]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 1024, ]. 
    --> input 28[Parameter1404]: Type 1; [1 dims] tensor  
    --------------- 
    input_dims = [ 1024, 8, ]. 
    --> input 29[Parameter1693]: Type 1; [2 dims] tensor  
    --------------- 
    input_dims = [ 8, ]. 
    --> input 30[Parameter1694]: Type 1; [1 dims] tensor  
    --------------- 
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
    output_dims = [ 1, 8, ]. 
    --> output 0/1[Plus692_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 4096, 1024, ]. 
    --> output 1/1[Parameter1367_reshape1]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 2/1[Minus340_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 1, 64, 64, ]. 
    --> output 3/1[Block352_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 4/1[Convolution362_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 5/1[Plus364_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 6/1[ReLU366_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 7/1[Convolution380_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 8/1[Plus382_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 64, 64, ]. 
    --> output 9/1[ReLU384_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 10/1[Pooling398_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 64, 32, 32, ]. 
    --> output 11/1[Dropout408_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 12/1[Convolution418_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 13/1[Plus420_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 14/1[ReLU422_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 15/1[Convolution436_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 16/1[Plus438_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 32, 32, ]. 
    --> output 17/1[ReLU440_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 18/1[Pooling454_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 128, 16, 16, ]. 
    --> output 19/1[Dropout464_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 20/1[Convolution474_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 21/1[Plus476_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 22/1[ReLU478_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 23/1[Convolution492_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 24/1[Plus494_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 25/1[ReLU496_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 26/1[Convolution510_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 27/1[Plus512_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 16, 16, ]. 
    --> output 28/1[ReLU514_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 29/1[Pooling528_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 30/1[Dropout538_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 31/1[Convolution548_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 32/1[Plus550_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 33/1[ReLU552_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 34/1[Convolution566_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 35/1[Plus568_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 36/1[ReLU570_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 37/1[Convolution584_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 38/1[Plus586_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 8, 8, ]. 
    --> output 39/1[ReLU588_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 40/1[Pooling602_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 256, 4, 4, ]. 
    --> output 41/1[Dropout612_Output_0]: Type 1; [4 dims] tensor  
    --------------- 
    output_dims = [ 1, 4096, ]. 
    --> output 42/1[Dropout612_Output_0_reshape0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 43/1[Times622_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 44/1[Plus624_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 45/1[ReLU636_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 46/1[Dropout646_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 47/1[Times656_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 48/1[Plus658_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 49/1[ReLU670_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 1024, ]. 
    --> output 50/1[Dropout680_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    output_dims = [ 1, 8, ]. 
    --> output 51/1[Times690_Output_0]: Type 1; [2 dims] tensor  
    --------------- 
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 
    ValidateIRnodes() 52 --> 52=52=52 valid output tensors  
    --------------- 
    ----------------- ValidateIRnodes. ----------------- 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    *** ExposeIRnodes: 1 --> 52 Outputs. 
    *** type: [FLOAT] ~?= STRING 
    TVZ10()  input_dims = [ 1, 1, 64, 64, ]. 
    --> input [0]: Type FLOAT; [4 dims] tensor  
    --------------- 
    Warning: Could not load "/opt/zetane/lib/graphviz/libgvplugin_pango.so.6" - file not found
    terminate called after throwing an instance of 'std::invalid_argument'
      what():  stod
    /usr/bin/zetane: line 26: 37102 Aborted                 (core dumped) ./Zetane --server
    
    
    opened by paulgavrikov 4
  • Free Trial not Available

    After clicking "upgrade 2 pro", I arrive at your pricing page. Clicking on "free trial" redirects me to the documentation, which instructs me to click the button "upgrade 2 pro". Now I'm stuck in an infinite loop and unhappy about it.

    I'd like to successfully exit this loop and try your product. Any Tips?

    opened by Whadup 3
  • sorry, I installed the deb on Ubuntu 20.04, but when I use it to load an input jpg it crashes. How can I get the log to find the cause?

    (base) [email protected]:~/Downloads$ sudo dpkg -i Zetane-1.7.0.deb
    (Reading database ... 330200 files and directories currently installed.)
    Preparing to unpack Zetane-1.7.0.deb ...
    Unpacking zetane (1.7.0) over (1.7.0) ...
    Setting up zetane (1.7.0) ...
    Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
    Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
    Processing triggers for mime-support (3.64ubuntu1) ...
    Processing triggers for hicolor-icon-theme (0.17-2) ...

    opened by mathpopo 2
  • engine is not launched after running example 'hello world' code

    By following the guide here https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine was launched but did not show anything.

    OS: Windows 10.0 Zetane 1.7.0

    Console output:

    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Connected to Zetane Engine!

    opened by wftubby 0
  • engine is not launched after running example 'hello world' code

    By following the guide here https://docs.zetane.com/getting_started.html#installation, I created a script to run the 'hello world' code. However, the engine was not launched and it kept printing "Dialing Zetane".

    OS: Ubuntu 18.04 Zetane 1.7.0

    Console output:

    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    running process: /usr/bin/zetane --server 127.0.0.1 --port 4004
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    Did not connect!
    Dialing Zetane...
    
    
    opened by akzing-hz 6
Releases (v1.7.4)
  • v1.7.4(Jun 1, 2022)

    Viewer Engine

    • Added support for ONNX 1.10.2
    • Added support for ONNX Runtime 1.10.0
    • Added support for Keras/TensorFlow 2.9.1
    • Improved progress notifications when loading Keras models
    • Fixed a crash caused by nested Keras models.
    • Reduced Tensor viewer memory usage
    • Dropped support for Ubuntu 16.04 LTS. See the up-to-date Minimum Requirements.
    • Deprecated support for macOS 10.14 Mojave

    API

    • Added the Zetane API context manager to automate view updates and cleanup, resulting in less verbose code.
    • Added support for Python 3.9
    • Dropped support for Python 3.6
    • Fixed protobuf dependency versioning
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.7.4.deb(273.45 MB)
    Zetane-1.7.4.dmg(312.91 MB)
    Zetane-1.7.4.msi(300.01 MB)
  • 1.7.0(Nov 15, 2021)

  • 1.6.2(Sep 22, 2021)

    • Added output blocks for models to prevent navigating past the end of the model graph
    • Added a Top-K output view for tensors that match certain shapes, e.g. (1, N); classification models now have a more human-readable output (a quick NumPy sketch of the Top-K idea follows this list).
    • Updated to onnxruntime 1.8.1 to support the latest ONNX opset.
    • Improved autodetection of input shapes to allow more inputs to pass inference without shape errors.
    • Fixes for RAM overuse
    • Fixes for Mesh API
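
    For reference, the Top-K idea for a (1, N) output can be reproduced in NumPy (the scores and k below are placeholders):

        import numpy as np

        logits = np.random.rand(1, 1000)        # stand-in for a classifier's (1, N) output
        k = 5
        topk = np.argsort(logits[0])[::-1][:k]  # indices of the k largest scores
        print(topk, logits[0][topk])
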
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.6.2.deb(347.23 MB)
    Zetane-1.6.2.dmg(326.94 MB)
    Zetane-1.6.2.msi(301.44 MB)
  • 1.5.0(Jun 16, 2021)

  • 1.4.0(May 26, 2021)

  • 1.3.0(Apr 21, 2021)

    • When ONNX models are loaded, an inference pass with sample data is run by default. That means all tensors / feature maps / weights / biases should be viewable immediately after input load. Please let us know if there are models that don't succeed at this initial pass so we can fix them! (A rough onnxruntime equivalent of this sample-data pass is sketched below.)
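
    Outside the viewer, a similar sample-data pass can be sketched with onnxruntime (the file name is a placeholder; dynamic dimensions are naively set to 1 and a float input is assumed):

        import numpy as np
        import onnxruntime as ort

        sess = ort.InferenceSession("model.onnx")
        inp = sess.get_inputs()[0]
        # Replace symbolic/dynamic dims with 1 for a quick sample pass.
        shape = [d if isinstance(d, int) else 1 for d in inp.shape]
        sample = np.random.rand(*shape).astype(np.float32)
        outputs = sess.run(None, {inp.name: sample})
        print([o.shape for o in outputs])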

    (PRO) User input nodes are now attached to the model architecture diagram. When using Zetane Viewer Pro ($15/month) you can load custom inputs and send them through the model. Currently supported formats are .npy, .npz, .pb, and the majority of image formats (jpg, png, tiff, hdr, pic).

    (PRO) When user inputs are misshapen, the engine will display an error about the model's shape expectation. Note that this feature is also usable by free users without the error popup: the input node will load the user input and show its dimensions before attempting to run inference with the model.

    (PRO) Any errors during model inference will also appear in the UI; the shape error above is one example. Individual graph operations may fail at any point during the inference pass; the engine will attempt to populate the graph outputs up to the point of the error, along with a stack trace of the model run.

    As always, we welcome feedback, bug reports, and any suggestions you might have.

    Source code(tar.gz)
    Source code(zip)
    Zetane-1.3.0.deb(395.53 MB)
    Zetane-1.3.0.dmg(451.85 MB)
    Zetane-1.3.0.msi(452.15 MB)
  • 1.2.0(Apr 5, 2021)

    • Shape mismatch errors for running model inference are shown in the UI, describing the expected input and the given input. (PRO)
    • Changed the default UI interaction: the mouse wheel now zooms by default, and right-click drags the view.
    • Panels now scroll or move on hover, not just after being selected.
    • Tensor viewer displays the original shape from file or API, without reordering the dimensions to fit the view panel.
    • User notification for version upgrade now appears in the UI.
    • Mac / Linux now run in API mode by default.
    • Added a new ZTN snapshot for XAI features.
    • User inputs now show above the Model Explorer panel's input node.
    • A number of bug fixes and performance improvements
    Source code(tar.gz)
    Source code(zip)
    Zetane-1.2.0.deb(373.68 MB)
    Zetane-1.2.0.dmg(451.16 MB)
    Zetane-1.2.0.msi(438.49 MB)
  • 1.1.4(Feb 22, 2021)

Owner
Zetane Systems