Inference Engine

DeepViewRT - Train your model once, run it anywhere

The DeepView™RT 2.0 Run-Time inference engine gives developers and data scientists the model portability and runtime performance needed for the most demanding embedded Machine Learning workloads and applications. 


The engine is optimized for runtime size and performance across a long list of the most popular embedded processors and standard x86-class devices. This means you can run public and proprietary ML models anywhere the DeepViewRT engine lives, including:

  • Multiple Processor Classes & Architectures:

    • Microcontrollers (MCUs)

    • Application Processors (APs)

    • Graphics Processing Units (GPUs)

    • Neural Processing Units (NPUs) or AI Accelerators

  • Most common Runtime Environments:

    • Linux, Android, OpenCL, RTOS and bare metal

  • Standard Architectures and most common devices:

    • Arm Cortex-A, Cortex-M and Mali

    • Broadcom / Raspberry Pi

    • NXP

DeepView QML Development Examples

The Application Notes below provide examples of what can be created using the DeepViewQuick library.  These examples give developers easy-to-follow reference implementations for the most common vision use cases and a convenient way to get up to speed quickly and easily when developing custom visual intelligence solutions.


The DeepViewQuick library is based on Qt's QML (Qt Modeling Language) platform and greatly accelerates the development of prototypes, custom vision pipelines, camera source integrations and custom user interfaces on embedded application processors.

The DeepViewRT inference engine is pre-integrated into all of the examples below, allowing developers to easily update and modify the underlying Machine Learning models, swap camera sources or develop entirely custom applications.

Single Shot Detection (SSD) Camera

This example shows developers exactly how to build a Single Shot Detection (SSD) Camera demo using the DeepViewRT tools and inference engine.

Single Shot Detection techniques are an efficient way to detect multiple objects in a single image and are often the first step in a complete vision pipeline.
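SSD-style detectors emit many overlapping candidate boxes per image, so a post-processing step keeps only the strongest detections. The sketch below shows greedy non-maximum suppression, a standard technique for this step; it is a generic illustration, not the DeepViewRT API.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy non-maximum suppression: visit boxes from highest score
    # down, keeping a box only if it does not overlap an already-kept
    # box beyond the threshold.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep
```

A real pipeline would apply this after decoding the model's raw box and score tensors, typically per class.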


Image Classification

This example details the DeepViewRT Label Image QML sample: an application built with QML that uses image classification models to label the objects in an image.

The example gives the developer the option to load public models, transfer-learn custom models or execute custom, proprietary models.
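Labeling with a classification model typically means converting the model's raw output scores into probabilities and keeping the top matches. The following generic sketch (not DeepViewRT-specific) shows that step with a softmax and a top-k selection:

```python
import math

def softmax(logits):
    # Numerically stable softmax over the model's raw output scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_labels(logits, labels, k=3):
    # Pair each class label with its probability and return the k best.
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    return ranked[:k]
```

The label list here would come from the label file shipped with whichever public or proprietary model is loaded.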


Live Camera

This example shows the user how the video feed from virtually any camera source can be quickly and easily integrated into the DeepView video pipeline.  With access to the video source, the example application built using QML can then classify video frames using a user-configurable image classification model running on the DeepViewRT inference engine.

Alternative video pipelines are available, including OpenCV and V4L2.
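Regardless of the source (camera, V4L2 device, OpenCV capture), a live-camera application reduces to a per-frame loop: pull a frame, run inference, attach the result. A minimal, source-agnostic sketch of that loop, with `classify` standing in for whatever model invocation the engine provides:

```python
def run_pipeline(frame_source, classify, max_frames=None):
    # Generic per-frame inference loop: iterate over any frame source
    # (camera, video file, test data) and pair each frame index with
    # the classification result for that frame.
    results = []
    for i, frame in enumerate(frame_source):
        if max_frames is not None and i >= max_frames:
            break
        results.append((i, classify(frame)))
    return results
```

Swapping camera sources then means only changing `frame_source`; the inference side of the loop is untouched.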


PoseNet and Gesture

This example showcases the PoseNet model running on the DeepViewRT inference engine to provide a very efficient Pose and Gesture recognition solution.


The demo shows an application built using QML that uses a PoseNet model to detect the joints and limbs of one or more persons and overlay an outline onto a video feed.
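PoseNet-style models emit one confidence heatmap per joint; the overlay is drawn by locating each heatmap's peak. A minimal decoding sketch (generic, not the DeepViewRT API) using plain nested lists:

```python
def decode_heatmaps(heatmaps):
    # For each joint's 2-D heatmap of confidence scores, find the
    # grid cell with the peak score; that cell is the keypoint.
    keypoints = []
    for hm in heatmaps:
        best, best_score = (0, 0), hm[0][0]
        for y, row in enumerate(hm):
            for x, score in enumerate(row):
                if score > best_score:
                    best, best_score = (x, y), score
        keypoints.append((best, best_score))
    return keypoints
```

A full implementation would also scale the grid coordinates back to image resolution and apply offset refinement, which this sketch omits.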

Raspberry Pi Release

By taking advantage of this release, developers can leverage the DeepView QML development examples used to generate the demos captured in the example videos below, and then build on them to explore their own custom Visual Intelligence ideas and solutions.


The DeepView on Pi release supports the rapid deployment of existing public models such as MobileNet to RPi 3 & RPi 4 platforms, as well as enabling developers to deploy proprietary models for evaluation and testing on the same devices.  The streamlined model deployment methods offered by the Model Conversion Tools allow developers to focus on the creation of their Visual Intelligence solution without needing to build the entire embedded stack or pay a runtime size or performance penalty.

Register below to download the DeepView on Pi Release and access the Sample Projects and Tutorials.

Let us know what you think and have fun!

DeepView on Pi Demo Examples

The demo videos below were all created with the DeepViewML Toolkit using the DeepView on Pi Release.  Each of these projects can be easily replicated by following the DeepView QML development examples, and developers can then customize and extend them to experiment with other applications and use cases using both public and proprietary ML models.

Single Shot Detection (SSD) Example

This video shows a standard TensorFlow SSD model converted and optimized to run on the DeepViewRT engine, deployed to a Raspberry Pi 4 platform.  In this example the model runs at 20 fps.

With an appropriate dataset and the DeepViewML Toolkit, developers can retrain standard and custom SSD models to detect other objects of interest.

Real Time Segmentation Example

This video shows a standard TensorFlow Segmentation model converted and optimized to run on the DeepViewRT engine deployed to a Raspberry Pi 4.

Image segmentation annotates or 'paints' pixels in real time with a color denoting the classification of the specific object detected by the ML model. 
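The 'painting' step described above is a per-pixel palette lookup: the model emits a class id for every pixel, and each id maps to a display color. A minimal sketch, with a hypothetical three-class palette (background, human, train) chosen only for illustration:

```python
# Hypothetical palette: class id -> RGB display color.
PALETTE = {0: (0, 0, 0),      # background
           1: (255, 0, 0),    # human
           2: (0, 0, 255)}    # train

def colorize(mask):
    # Replace each pixel's class id with its display color,
    # producing an overlay image the same size as the mask.
    return [[PALETTE[c] for c in row] for row in mask]
```

In a real pipeline this overlay would then be alpha-blended onto the camera frame each time the model produces a new mask.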


In this example, a public model converted and optimized with the DeepView tools runs on the DeepView engine; the system detects, classifies and annotates humans and trains at 5 fps on a standard RPi 4 device.

Gesture Recognition for User Input

This video shows a modified PoseNet model converted and optimized to run on the DeepViewRT engine deployed to a Raspberry Pi 4 platform.

The user application has been extended to make use of the ML model output as an input to a gaming emulator deployed on the same target.
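Feeding model output into an emulator amounts to translating recognized gestures into key events. The gesture names and key codes below are hypothetical placeholders, not part of the actual demo; the sketch only illustrates the mapping pattern:

```python
# Hypothetical mapping from recognized gestures to emulator key events.
GESTURE_KEYS = {
    "left_arm_raised": "KEY_LEFT",
    "right_arm_raised": "KEY_RIGHT",
    "both_arms_raised": "KEY_UP",
}

def gesture_to_key(gesture, default=None):
    # Translate a gesture label from the model into a key event name;
    # unrecognized gestures produce no input.
    return GESTURE_KEYS.get(gesture, default)
```

Extending the example to a new application is then a matter of editing this table rather than touching the inference side.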

Developers can extend this example to provide gesture recognition and user input for a wide variety of useful applications (or just have fun playing other video games).