DeepViewRT Inference Engine

AVAILABLE NOW! 

Best In Class Performance and Unprecedented Portability

The DeepViewRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.

Select a proven public model from the DeepView model zoo, create or convert your own model with NXP's eIQ Portal, and compare performance tradeoffs between quantized and floating-point models under real-world runtime conditions.


Latest DeepViewRT Benchmarks

100,000+

Devices Deployed

100,000,000+

Real-world inferences (and counting)

What if there was a production-grade, embedded inference engine that delivered best-in-class performance and portability?

What if that engine was FREE?

Now there is, and it's called DeepViewRT

The DeepViewRT engine has been highly optimized for runtime size and performance across a long list of the most popular embedded processors, architectures and standard x86-class devices. This means you can run public, custom and proprietary ML models anywhere the DeepViewRT engine is supported.

Best of all, it's FREE for development and production.

Benefits of the DeepViewRT production-ready engine

  • Tested & documented for quick out-of-the-box deployment

  • Examples and tutorials to save you time getting started

  • Field proven to avoid surprises when you ship your products

  • Lifecycle management for stability, longevity & compatibility

  • Professional support if you need it

 

Runtime environments 

  • Embedded: Linux, Android, Azure, FreeRTOS and bare metal

  • Desktop: Linux and Windows

Processor Types & Compute Architectures:

  • Microcontrollers (MCUs): Arm Cortex-M7

  • Application Processors (CPUs): Arm Cortex-A35, -A53, -A72

  • Graphics Processing Units (GPUs): OpenVX

  • Neural Processing Units (NPUs): VeriSilicon and Ethos*

  • Desktop: x86 (development & validation)

Model deployment formats:

  • Floating point for full precision accuracy

  • Fixed point for optimal size and efficiency
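To illustrate the tradeoff between the two formats, here is a minimal sketch (plain Python, not DeepViewRT-specific) of the 8-bit affine quantization scheme commonly used when a floating-point model is converted to a fixed-point deployment format. The scale and zero-point values below are illustrative assumptions, not DeepViewRT defaults.

```python
# Illustrative sketch of 8-bit affine quantization: fixed point trades
# a bounded amount of precision for 4x smaller storage than float32.

def quantize(x, scale, zero_point):
    """Map a float value to an int8 code using an affine scheme."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover an approximate float value from its int8 code."""
    return (q - zero_point) * scale

# Example: a weight tensor with values in [-1, 1] mapped onto int8.
scale = 2.0 / 255       # float range width / number of int8 codes
zero_point = 0          # symmetric range, so zero maps to zero

w = 0.5
q = quantize(w, scale, zero_point)
approx = dequantize(q, scale, zero_point)
# The round trip loses at most scale / 2 of precision per value,
# which is why well-quantized models stay close to float accuracy.
```

Values outside the representable range saturate at the int8 limits, which is why choosing the scale from the actual weight and activation ranges matters for quantized accuracy.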


Questions? 


DeepView Application Packs


The DeepView AppPack provides you with the building blocks and glue for robust, turn-key intelligent vision applications.


DeepView DevPack delivers production-grade tools to help you optimize your machine learning models and fine-tune your training data sets.


The DeepView Model Pack provides developers with both public (open source) and production-ready models.


The DeepViewRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures.


DeepView Vision Packs provide the vision pipeline solutions for your edge computing and embedded machine learning applications. 

DeepView vision starter kits include the hardware and software you need to accelerate your machine learning development programs from bench-top through field trials and into production.

DeepView works with the tools and technologies you already use
