Intelligent Vision Software

Resource Centre

Free Public Models

Public Models, as the name suggests, are models that have been adapted, tested, and validated, and are ready for evaluation on common platforms from NXP and its partners using the eIQ Toolkit.

Current examples include:

  • MobileNet V1, V2, V3

  • MobileNet V1, V2, V3 SSD

  • ResNet V2 50

  • PoseNet

  • More coming soon



Raspberry Pi Camera Classifier

This video provides a detailed walkthrough of developing a DeepViewRT camera classifier using Qt Quick integration on the Raspberry Pi.



DeepView Tutorial VOC Dataset Import

This tutorial shows how to use the deepview-importer tool to import a VOC-type dataset into DeepView Creator.

DeepView Tutorial 
Remote Validation RT1170

This tutorial shows you how to set up the i.MX RT1170 EVK with ModelRunner and how to run remote validation from DeepView Creator.

DeepView Tutorial
Remote Validation

This tutorial shows how to set up the DeepView ModelRunner on an i.MX 8M Plus EVK and perform remote validation and benchmarking of DeepViewRT on the CPU and NPU, as well as TFLite on the CPU.
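The setup in the tutorial amounts to starting the ModelRunner service on the EVK and then pointing the validation client at the board's IP address. A minimal sketch is below; it assumes the board's Linux BSP ships a `modelrunner` binary and that 10818 is the default target port (check your BSP documentation, as the binary name and port may differ):

```shell
# Run on the EVK's Linux console (serial or SSH).
# Assumption (not confirmed here): the BSP installs `modelrunner`
# and the remote validation client connects on port 10818.
EVK_PORT=10818
if command -v modelrunner >/dev/null 2>&1; then
    # Start the inference service; DeepView Creator / eIQ Portal
    # then connects to http://<evk-ip>:$EVK_PORT for validation.
    modelrunner -H "$EVK_PORT" &
    echo "ModelRunner listening on port $EVK_PORT"
else
    echo "modelrunner not found - run this on the EVK, not the host"
fi
```

Once the service is running, enter the board's address in the validation target field on the host and the model is executed remotely on the EVK's CPU or NPU.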

DeepView Tutorial 
Model Converter

In this quick tutorial we show you how to use the DeepView RTM Tool on the Raspberry Pi for model conversion.

DeepView Tutorial 
Raspberry Pi Registration

In this tutorial we show you how to register your Raspberry Pi to use with DeepViewRT.



Object Detection Vehicle Demo

This video shows an example of real-time object detection and tracking implemented with DeepViewRT running on a Raspberry Pi 4.

Object Tracking at the Edge - Au-Zone Blog

Classification Car Track Demo

In this video we demonstrate the RTCam classification model with toy cars on a race track.

Image Segmentation Demo

This video demonstrates the DeepViewRT image segmentation model.

Object Detection Queue Counter Demo

This video shows another example of real-time object detection and tracking implemented with DeepViewRT running on a Raspberry Pi 4.

Object Tracking at the Edge - Au-Zone Blog

Pose Estimation Demo

This video demonstrates the DeepViewRT pose estimation model across several different applications.

Face Recognition Demo

Au-Zone's face recognition demo, built with DeepView and running on NXP's i.MX 8M. The system grants access to known people, lets new users be added, and retrains the model at the edge within a few seconds.

Distracted Driving Demo

The Distracted Driver Demo running on the i.MX 8. DeepViewRT runs a network with 8.5M weights on the Vivante GC7000 GPU in about 100 ms, while the GUI runs the sequence at full frame rate.



Development and Maintenance of your Embedded Linux Vision System – Simplified!

As one of the i.MX 8M Plus applications processor ecosystem partners, Toradex shows ways to develop and maintain embedded Linux vision systems. In this webinar, we take you from proof of concept to volume production, but we do not stop there. We also discuss how to maintain your devices with software updates.

Harnessing the Edge and the Cloud Together for Visual AI

Embedded developers are increasingly comfortable deploying trained neural networks as static elements in edge devices, as well as using cloud-based vision services to implement visual intelligence remotely. In this presentation, Taylor explores the benefits of combining edge and cloud computing to bring added capability and flexibility to edge devices.

Tools and Techniques for Optimizing DNNs on Arm-based Processors with Au-Zone’s DeepView ML Toolkit

In this presentation, Taylor describes methods and tools for developing, profiling and optimizing neural network solutions for deployment on Arm MCUs, CPUs and GPUs using Au-Zone’s DeepView ML Toolkit. He introduces the need for optimization to enable efficient deployment of deep learning models, and highlights the specific challenges of profiling and optimizing models for deployment in cost- and energy-constrained systems.

Deploying CNN-based Vision Solutions on a $3 Microcontroller

In this presentation, Lytle explains how his company designed, trained and deployed a CNN-based embedded vision solution on a low-cost, Cortex-M-based microcontroller (MCU). He describes the steps taken to design an appropriate neural network and then to implement it within the limited memory and computational constraints of an embedded MCU. He highlights the technical challenges Au-Zone Technologies encountered in the implementation and explains the methods the company used to meet its design objectives. He also explores how this type of solution can be scaled across a range of low-cost processors with different price and performance metrics. Finally, he presents and interprets benchmark data for representative devices.

Implementing an Optimized CNN Traffic Sign Recognition Solution - Presentation by NXP & Au-Zone Technologies

Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, and Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, present the "Implementing an Optimized CNN Traffic Sign Recognition Solution" tutorial at the May 2017 Embedded Vision Summit.


Product Information


DeepViewRT Benchmark Data

The DeepViewRT run time inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.


Maivin AI Vision Starter Kit

Toradex, Au-Zone and Vision Components present a modular i.MX 8M Plus AI vision kit that accelerates your product development from proof of concept to production.