
Intelligent Vision Software

Resource Centre

Free Public Models

Public models, as the name suggests, are models that have been adapted, tested, and validated, and are ready for evaluation with the eIQ Toolkit and eIQ Portal on common platforms from NXP and its partners.

Current examples include:

  • MobileNet V1, V2, V3

  • MobileNet V1, V2, V3 SSD

  • ResNet V2 50

  • PoseNet

  • More coming soon

 

Training

DeepViewRT
RaspberryPi Camera Classifier

This video provides a detailed walkthrough of developing a DeepViewRT camera classifier with Qt Quick integration on the Raspberry Pi.

 

Tutorials

DeepView Tutorial VOC Dataset Import

This tutorial shows how to use the deepview-importer tool to import a VOC-type dataset into DeepView Creator.
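For reference, the sketch below is a minimal pre-flight check you might run before the import, assuming a standard Pascal VOC layout (Annotations/ holding the XML files and JPEGImages/ holding the images). The dataset path and the deepview-importer invocation in the closing comment are illustrative assumptions; check the eIQ Toolkit documentation for the exact options on your installation.

```python
# Pre-flight check for a Pascal VOC-format dataset before import.
# The paths below and the importer flags in the closing comment are
# assumptions for illustration only.
from pathlib import Path

VOC_ROOT = Path("VOCdevkit/VOC2012")      # assumed dataset location
annotations = VOC_ROOT / "Annotations"    # VOC XML annotation files
images = VOC_ROOT / "JPEGImages"          # corresponding JPEG images

xml_files = sorted(annotations.glob("*.xml"))
missing = [x.name for x in xml_files if not (images / (x.stem + ".jpg")).exists()]

print(f"{len(xml_files)} annotations checked, {len(missing)} without a matching image")
if missing:
    print("Fix or remove these before importing:", ", ".join(missing[:10]))

# With the layout confirmed, the dataset can be handed to the importer
# (hypothetical invocation):
#   deepview-importer -vocdir VOCdevkit/VOC2012 my_dataset.eiqp
# and the resulting project opened in DeepView Creator.
```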

DeepView Tutorial 
Remote Validation RT1170

This tutorial shows you how to set up the i.MX RT1170 EVK with ModelRunner and how to run remote validation from DeepView Creator.

DeepView Tutorial
Remote Validation

This tutorial shows how to set up DeepView ModelRunner on an i.MX 8M Plus EVK and perform remote validation and benchmarking of DeepViewRT on the CPU and NPU, as well as TFLite on the CPU.
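Before pointing Creator at the board in either of the remote validation tutorials above, it can help to confirm that the ModelRunner HTTP service is actually reachable from your workstation. The sketch below is a minimal connectivity check; the board IP address, port, and /v1 path are assumptions for illustration, so substitute the values reported when ModelRunner starts on your target.

```python
# Minimal reachability check for the ModelRunner HTTP service on an EVK,
# run from the workstation that hosts DeepView Creator / eIQ Portal.
# The address, port, and path below are assumptions; substitute the values
# reported when ModelRunner starts on your board.
import urllib.error
import urllib.request

TARGET = "http://192.168.1.50:10818/v1"   # assumed board IP, port, and endpoint

try:
    with urllib.request.urlopen(TARGET, timeout=5) as resp:
        print(f"ModelRunner reachable, HTTP status {resp.status}")
except urllib.error.HTTPError as exc:
    # The service answered, even if this particular path returned an error code.
    print(f"ModelRunner answered with HTTP status {exc.code}")
except (urllib.error.URLError, OSError) as exc:
    print(f"ModelRunner not reachable at {TARGET}: {exc}")
```

If the service answers, that same address is typically what you supply as the remote validation target in Creator.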

DeepView Tutorial 
Model Converter

In this quick tutorial we show you how to use the DeepView RTM Tool on the Raspberry Pi for model conversion.
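As a rough illustration of scripting the same step, the sketch below drives a converter from Python. The tool name, argument order, and file names are assumptions for illustration; check the converter's --help output on your Raspberry Pi for the actual usage.

```python
# Sketch of scripting a model conversion to the DeepViewRT .rtm format.
# The converter name, argument order, and file names are assumptions for
# illustration; consult the tool's --help output for the real interface.
import subprocess
import sys

INPUT_MODEL = "mobilenet_v2.tflite"   # assumed source model (TFLite)
OUTPUT_MODEL = "mobilenet_v2.rtm"     # DeepViewRT model container

result = subprocess.run(
    ["deepview-converter", INPUT_MODEL, OUTPUT_MODEL],  # hypothetical invocation
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)

print(f"Wrote {OUTPUT_MODEL}")
```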

DeepView Tutorial 
Raspberry Pi Registration

In this tutorial we show you how to register your Raspberry Pi to use with DeepViewRT.

 

Demos

Smart City Demo powered by DeepView Middleware

In this video we demonstrate how the DeepView Vision Starter Kit | Micro was used to build a real-world remote vehicle detection system for a parking use case.

Object Tracking at the Edge - Au-Zone Blog

DeepViewRT
Classification Car Track Demo

In this video we demonstrate the RTCam classification model with toy cars on a race track.

DeepViewRT
Image Segmentation Demo

This video demonstrates the DeepView RT image segmentation model.

DeepViewRT
Distracted Driving Demo

Distracted Driver Demo running on the i.MX8. DeepViewRT runs a network with 8.5M weights on the Vivante GC7000 GPU in about 100 ms while the GUI runs the sequence at full framerate.

DeepViewRT
Object Detection Vehicle Demo

This video shows an example of real-time object detection and tracking implemented with DeepViewRT running on a Raspberry Pi 4.

Object Tracking at the Edge - Au-Zone Blog

DeepViewRT
Object Detection Queue Counter Demo

This video shows another example of real-time object detection and tracking implemented with DeepViewRT running on a Raspberry Pi 4.

Object Tracking at the Edge - Au-Zone Blog

DeepViewRT
Pose Estimation Demo

This video demonstrates the DeepViewRT pose estimation model across several different applications.

DeepViewRT
Face Recognition Demo

Au-Zone face recognition demo built with DeepView running on NXP's i.MX8M. The module grants access to known people, provides functionality to add new users to the system, and retrains the model at the edge within a few seconds.

 

Webinars

Au-Zone & NXP Semiconductors Present ML at the Edge: Visual Intelligence with a Low-Cost MCU

Are you an embedded developer contemplating how to add visual intelligence to your next IoT platform, or a data scientist looking for a practical edge platform to test your custom models in the wild? Developing models for custom applications on constrained compute platforms isn't inherently easy, but with a little extra assistance and guidance, your next commercial solution can be on the market quickly.

Using Advanced Detection Models With eIQ Portal

This presentation from our partners at Au-Zone focuses on the differences between advanced detection models and standard models such as SSD. The emphasis is on model topology, performance, accuracy, and size. The session discusses the eIQ Portal add-ons available through Au-Zone that provide the advanced detection models, in the context of using the NXP i.MX 8M Plus and its NPU to accelerate the detection model's performance.

Harnessing the Edge and the Cloud Together for Visual AI

Embedded developers are increasingly comfortable deploying trained neural networks as static elements in edge devices, as well as using cloud-based vision services to implement visual intelligence remotely. In this presentation, Taylor explores the benefits of combining edge and cloud computing to bring added capability and flexibility to edge devices.

Development and Maintenance of your Embedded Linux Vision System – Simplified!

As one of the i.MX 8M Plus applications processor ecosystem partners, Toradex shows ways to develop and maintain embedded Linux vision systems. In this webinar, we take you from proof of concept to volume production, but we do not stop there. We also discuss how to maintain your devices with software updates.

Tools and Techniques for Optimizing DNNs on Arm-based Processors with Au-Zone’s DeepView ML Toolkit

In this presentation, Taylor describes methods and tools for developing, profiling and optimizing neural network solutions for deployment on Arm MCUs, CPUs and GPUs using Au-Zone’s DeepView ML Toolkit. He introduces the need for optimization to enable efficient deployment of deep learning models, and highlights the specific challenges of profiling and optimizing models for deployment in cost- and energy-constrained systems.

Deploying CNN-based Vision Solutions on a $3 Microcontroller

In this presentation, Lytle explains how his company designed, trained and deployed a CNN-based embedded vision solution on a low-cost, Cortex-M-based microcontroller (MCU). He describes the steps taken to design an appropriate neural network and then to implement it within the limited memory and computational constraints of an embedded MCU. He highlights the technical challenges Au-Zone Technologies encountered in the implementation and explains the methods the company used to meet its design objectives. He also explores how this type of solution can be scaled across a range of low-cost processors with different price and performance metrics. Finally, he presents and interprets benchmark data for representative devices.

Implementing an Optimized CNN Traffic Sign Recognition Solution - Presentation by NXP & Au-Zone Technologies

Rafal Malewski, Head of the Graphics Technology Engineering Center at NXP Semiconductors, and Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, present the "Implementing an Optimized CNN Traffic Sign Recognition Solution" tutorial at the May 2017 Embedded Vision Summit.

 

Product Information

Documents

DeepViewRT Benchmark Data

The DeepViewRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.

Product Brief - DeepView Model Pack for Object Detection

DeepView Model Pack is a bundle of state-of-the-art detection models pretrained on COCO and OpenImages, fully tested and optimized for NXP RT Crossover MCUs and i.MX 8 applications processors.

Product Brief - DeepView Application Pack for Object Tracking on MCU


Product Brief - DeepView Vision Pack for Application Processors

The Vision Pack for Application Processors provides developers with an end-to-end, hardware-accelerated video pipeline for optimized AI-based vision applications. It is fully integrated with the DeepViewRT inference engine for a high-performance, low-overhead AI vision solution on applications processors.

Product Brief - DeepView Vision Pack for Microcontrollers

The Vision Pack for Microcontrollers provides developers with an end-to-end, hardware-accelerated video pipeline for optimized AI-based vision applications. It is fully integrated with the DeepViewRT inference engine for a high-performance, low-overhead AI vision solution on MCUs.

Product Brief - Maivin AI Vision Starter Kit

The Maivin AI Vision Starter Kit is a modular AI smart camera platform built on NXP's i.MX 8M Plus applications processor and production-grade hardware and software components to enable rapid prototyping and field deployment of custom vision solutions. The Maivin targets applications where compute performance is the priority.

Product Brief - Micro AI Vision Starter Kit

The Micro AI Vision Starter Kit is a modular AI smart camera platform built on NXP's i.MX RT1064 Crossover MCU and production-grade hardware and software components to enable rapid prototyping and field deployment of custom vision solutions. The Micro targets applications where lower cost, design complexity and size are prioritized over compute performance.

Videos

Maivin AI Vision Starter Kit
Detailed Tear Down Video

A detailed teardown of the Maivin AI Vision Starter Kit by Au-Zone Technologies.

Maivin AI Vision Starter Kit - Click Here

Maivin AI Vision Starter Kit
Unboxing & Setup

Join Au-Zone Technologies for a detailed unboxing and setup video for the Maivin AI Vision Starter Kit.

Maivin AI Vision Starter Kit - Click Here

Micro AI Vision Starter Kit
Detailed Tear Down

A detailed teardown of the Micro AI Vision Starter Kit by Au-Zone Technologies.

Micro AI Vision Starter Kit - Click Here

Micro AI Vision Starter Kit
Unboxing & Setup

Join Au-Zone Technologies for a detailed unboxing and setup video for the Micro AI Vision Starter Kit.

DeepView Starter Kit | Micro - Click Here

Maivin AI Vision Starter Kit

Toradex, Au-Zone and Vision Components present a Modular i.MX 8M Plus AI Vision Kit that accelerates your product development from proof-of-concept to production.