
DeepViewRT™ Inference Engine
AVAILABLE NOW!
Best-in-Class Performance and Unprecedented Portability
The DeepViewRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.
Select a proven public model from the DeepView model zoo, create or convert your own model with NXP's eIQ Portal, and compare performance tradeoffs between quantized and floating-point variants under real-world runtime conditions.
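As a rough sketch of that comparison, the example below times a float32 "model" (a single dense layer) against an int8-quantized copy of it and reports per-inference latency and output drift. This is a generic NumPy illustration, not the DeepViewRT API; the two run_* functions are hypothetical stand-ins for your runtime's actual inference call.

```python
import time
import numpy as np

# Stand-in "models": one float32 dense layer and an int8-quantized copy.
# These are illustrative placeholders, not the DeepViewRT API; swap in
# your runtime's inference entry point when measuring for real.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)).astype(np.float32)
scale = np.abs(W).max() / 127.0                  # symmetric int8 scale
W_q = np.round(W / scale).astype(np.int8)        # quantized weights

def run_float_model(x):
    return x @ W

def run_quant_model(x):
    # Dequantize on the fly to emulate an int8 path in pure NumPy.
    return (x @ W_q.astype(np.float32)) * scale

def mean_latency_ms(fn, inputs, warmup=5):
    """Average per-inference latency in milliseconds."""
    for x in inputs[:warmup]:                    # warm caches / lazy init
        fn(x)
    start = time.perf_counter()
    for x in inputs:
        fn(x)
    return (time.perf_counter() - start) / len(inputs) * 1e3

inputs = [rng.standard_normal((1, 1024)).astype(np.float32) for _ in range(100)]
print(f"float32: {mean_latency_ms(run_float_model, inputs):.3f} ms/inference")
print(f"int8:    {mean_latency_ms(run_quant_model, inputs):.3f} ms/inference")

drift = max(np.abs(run_float_model(x) - run_quant_model(x)).max() for x in inputs)
print(f"max abs output difference: {drift:.4f}")
```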

Latest DeepViewRT Benchmarks
164,300+ Devices Deployed
100,000,000+ Real-world inferences (and counting)

What if there was a production grade, embedded inference engine that delivered best in class performance and portability?
What if that engine was FREE?
Now there is, and it's called DeepViewRT
The DeepViewRT engine is highly optimized for runtime size and performance across a long list of the most popular embedded processors and architectures, as well as standard x86-class devices. This means you can run public, custom, and proprietary ML models anywhere the DeepViewRT engine is supported.
Best of all, it's FREE for development and production.
Benefits of the DeepViewRT production-ready engine
- Tested & documented for quick out-of-the-box deployment
- Examples and tutorials to save you time getting started
- Field proven to avoid surprises when you ship your products
- Lifecycle management for stability, longevity & compatibility
- Professional support if you need it
Runtime environments
- Embedded: Linux, Android, Azure, FreeRTOS and bare metal
- Desktop: Linux and Windows
Processor Types & Compute Architectures:
- Microcontrollers (MCUs): Arm Cortex-M7
- Application processors (CPUs): Arm Cortex-A35, A53, A72
- Graphics processing units (GPUs): OpenVX
- Neural processing units (NPUs): VeriSilicon and Ethos*
- Desktop: x86 (development & validation)
Model deployment formats:
- Floating point for full-precision accuracy
- Fixed point for optimal size and efficiency (see the sketch below)
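For context on that tradeoff, the snippet below sketches the standard affine (scale and zero-point) scheme for mapping float32 tensors into 8-bit fixed point. It is a generic illustration of the arithmetic behind quantized deployment, not DeepViewRT's converter; the quantize/dequantize helpers are hypothetical names.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8):
    """Affine-quantize a float tensor to unsigned fixed point.

    Returns the quantized tensor plus the (scale, zero_point) pair
    needed to recover approximate float values.
    """
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

x = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
q, scale, zp = quantize(x)

print(f"storage: {x.nbytes} B float32 -> {q.nbytes} B uint8 (4x smaller)")
print(f"max round-trip error: {np.abs(x - dequantize(q, scale, zp)).max():.5f}")
```

The 4x storage reduction is what makes fixed point attractive on memory-constrained MCUs, at the cost of a bounded rounding error of roughly half the scale per element.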
Questions?





DeepView™ Application Packs
The DeepView AppPack provides you with the building blocks and glue for robust, turn-key intelligent vision applications.
DeepView DevPack delivers production-grade tools to help you optimize your machine learning models and fine-tune your training data sets.
The DeepView Model Pack provides developers with both public (Open Source) and production ready models.
The DeepViewRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices.
DeepView Vision Packs provide the vision pipeline solutions for your edge computing and embedded machine learning applications.
DeepView vision starter kits include the hardware and software you need to accelerate your machine learning development programs from bench-top through field trials and into production.
DeepView™ works with the tools and technologies you already use
