DeepView ML Toolkit

DeepViewRT - Train your model once, run it anywhere

The DeepView™RT 2.0 Run-Time inference engine gives developers and data scientists the model portability and runtime performance needed for the most demanding embedded Machine Learning workloads and applications. 


The engine is optimized for runtime size and performance across a long list of the most popular embedded processors and standard x86-class devices, which means you can run public and proprietary ML models anywhere the DeepViewRT engine lives, including:

  • Multiple Processor Classes & Architectures:

    • Microcontrollers (MCUs)

    • Application Processors (APs)

    • Graphics Processing Units (GPUs)

    • Neural Processing Units (NPUs) or AI Accelerators

  • Most common Runtime Environments:

    • Linux, Android, OpenCL, RTOS and bare metal

  • Standard Architectures and most common devices:

    • Arm Cortex-A, Cortex-M and Mali

    • Broadcom / Raspberry Pi

    • Infineon, NXP, Renesas, Rockchip, Samsung, ST Micro & more

Product Brief
Model Support Matrix

The table above shows the relative performance of the most common public Neural Network models across a range of devices from well-known semiconductor vendors.

DeepViewML Model Conversion Tools

The DeepView™ 2.0 ML Toolkit supports two primary workflows:

  • BYOD (Bring Your Own Data) lets you quickly and easily transfer-learn a pre-existing neural network using images relevant to your application.

  • BYOM (Bring Your Own Model) lets you convert your custom or proprietary network from Caffe or TensorFlow and deploy it to the target(s) of your choosing for evaluation and optimization.
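
The BYOD workflow rests on transfer learning: the pre-trained backbone stays frozen and only a small classification head is trained on your own images. The following is a toy plain-Python sketch of that idea, with a stub standing in for the frozen backbone; none of these function names are DeepView APIs.

```python
import math

def backbone_features(image):
    """Stand-in for a frozen, pre-trained feature extractor.
    A real BYOD flow would reuse the fixed backbone of an existing model."""
    return [sum(image) / len(image), max(image) - min(image)]

def train_head(samples, labels, lr=0.5, epochs=200):
    """Train only a small logistic-regression head on the frozen features."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for feats, y in zip(samples, labels):
            z = sum(wi * fi for wi, fi in zip(w, feats)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # sigmoid
            g = p - y                            # gradient of the log-loss
            w = [wi - lr * g * fi for wi, fi in zip(w, feats)]
            b -= lr * g
    return w, b

def predict(image, w, b):
    feats = backbone_features(image)
    return 1 if sum(wi * fi for wi, fi in zip(w, feats)) + b > 0 else 0

# "Images" are just pixel lists here; class 1 is the brighter class.
dark = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.3]]
bright = [[0.9, 0.8, 1.0], [0.7, 0.9, 0.8]]
feats = [backbone_features(im) for im in dark + bright]
w, b = train_head(feats, [0, 0, 1, 1])
```

Because only the tiny head is trained, a handful of labeled images is enough, which is what makes the BYOD flow quick.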

Model conversion is easy with the DeepViewML Toolkit using either the GUI tool or the command-line interface. In one step, the tool converts and optimizes your ML model for runtime deployment on the DeepViewRT inference engine.


With the engine ported and optimized for all of the devices shown in the table above, developers are able to quickly evaluate their ML workloads across a host of devices without the need to compile from source for each model and device.

DeepView QML Development Examples

The Application Notes below provide examples of what can be created using the DeepViewQuick library. These examples provide developers with easy-to-follow reference implementations for the most common vision use cases and a convenient way to get up to speed quickly when developing custom visual intelligence solutions.


The DeepViewQuick library is based on Qt's QML (Qt Modelling Language) platform and greatly accelerates the development of prototypes, custom vision pipelines, camera-source integrations and custom user interfaces on embedded application processors.

The DeepViewRT inference engine is pre-integrated into all of the examples below, allowing developers to easily update and modify the underlying Machine Learning models, swap camera sources or develop entirely custom applications.

Single Shot Detection (SSD) Camera

This example shows developers exactly how to build a Single Shot Detection (SSD) Camera demo using the DeepViewRT tools and inference engine.

Single Shot Detection techniques are an efficient way to detect multiple objects in a single image and are often the first step in a complete vision pipeline.
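
SSD models typically emit many overlapping candidate boxes per object, so detection pipelines follow inference with a non-maximum suppression (NMS) step. The following is a minimal plain-Python sketch of that step, not the DeepViewRT API.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop heavily overlapping rivals, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two boxes on the same object plus one separate object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

The second box overlaps the first at IoU ≈ 0.68 and is suppressed, leaving one box per object.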


Image Classification

This example details the DeepViewRT Label Image QML sample: an application built using QML that classifies images using image-classification models and labels the objects it finds.

The example gives the developer the option to load public models, transfer-learn custom models, or execute custom, proprietary models.
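
Whichever model is loaded, classification usually ends the same way: the model's raw scores (logits) are converted to probabilities and the top labels are reported. A minimal sketch of that post-processing, with a made-up label set for illustration:

```python
import math

def softmax(logits):
    """Turn raw model scores into probabilities (max-subtracted for stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(probs, labels, k=2):
    """Return the k most likely (label, probability) pairs."""
    return sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)[:k]

labels = ["cat", "dog", "toaster"]   # hypothetical label set
probs = softmax([2.0, 1.0, -1.0])    # raw scores from a classifier
print(top_k(probs, labels))
```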


Live Camera

This example shows the user how the video feed from virtually any camera source can be quickly and easily integrated into the DeepView video pipeline. With access to the video source, the example application built using QML can then classify video frames using user-configurable image-classification models running on the DeepViewRT inference engine.
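
At its core, a live-camera classification pipeline is a loop: grab a frame, run inference, use the label. A minimal sketch with a stubbed frame source and classifier standing in for the real camera feed and DeepViewRT model:

```python
def frame_source(n_frames):
    """Stand-in for a camera feed: yields tiny synthetic grayscale frames."""
    for i in range(n_frames):
        yield [[(i * 100) % 256] * 4 for _ in range(4)]  # 4x4 pixel frame

def classify(frame):
    """Stand-in for an image classifier: labels a frame bright or dark."""
    mean = sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))
    return "bright" if mean > 127 else "dark"

# The pipeline loop: pull a frame, run inference, collect the label.
frame_labels = [classify(frame) for frame in frame_source(3)]
print(frame_labels)  # ['dark', 'dark', 'bright']
```

In the real example, the frame source is the integrated camera pipeline and the classifier is a model running on the DeepViewRT engine; the loop structure stays the same.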

Alternative video pipelines are available, including OpenCV and V4L2.


PoseNet and Gesture

This example showcases the PoseNet model running on the DeepViewRT inference engine to provide a very efficient Pose and Gesture recognition solution.


The demo shows an application built using QML that uses a PoseNet model to detect a person's (or several persons') joints and limbs and overlay an outline of them onto a video feed.
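
PoseNet-style models output one score heatmap per joint, and the overlay is drawn by locating the peak of each heatmap. The following is a minimal sketch of that decoding step, not the DeepViewRT API:

```python
def decode_keypoints(heatmaps):
    """Pick the highest-scoring cell in each joint's heatmap.

    heatmaps: list of 2-D score grids, one per joint.
    Returns one (row, col) keypoint per joint.
    """
    keypoints = []
    for hm in heatmaps:
        best = max(
            ((r, c) for r in range(len(hm)) for c in range(len(hm[0]))),
            key=lambda rc: hm[rc[0]][rc[1]],
        )
        keypoints.append(best)
    return keypoints

# One tiny 3x3 heatmap for a single joint; the peak is at row 1, col 2.
hm = [[0.1, 0.0, 0.2],
      [0.0, 0.3, 0.9],
      [0.1, 0.2, 0.1]]
print(decode_keypoints([hm]))  # [(1, 2)]
```

A full pipeline would decode one keypoint per joint, scale the grid coordinates back to image coordinates, and connect the joints to draw limbs.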

Development Workflow

This video highlights the DeepView ML Toolkit development workflow for training and deploying models to embedded targets running the DeepViewRT inference engine.


Contact Us

© 2014 by Au Zone Technologies

114, 1215 13th St SE Calgary Alberta
