
DeepViewRT™ Inference Engine
AVAILABLE NOW!
Best-in-Class Performance and Unprecedented Portability
The DeepViewRT runtime inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.
Select a proven public model from the DeepView model zoo, create or convert your own model with NXP's eIQ Portal, and compare performance trade-offs between quantized and floating-point inference under real-world runtime conditions.
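Comparing quantized and floating-point variants usually comes down to timing both on the actual target. The sketch below is not DeepViewRT code: it is a generic, hypothetical Python timing harness in which `run_inference` and the model names are placeholders you would replace with calls into whichever runtime binding or validation tool you use (for example, a DeepViewRT deployment driven from eIQ Portal).

```python
# Minimal latency-comparison harness (illustrative only).
# `run_inference` is a placeholder, not a DeepViewRT API call;
# the warm-up and timing logic is the part that carries over.
import statistics
import time

def run_inference(model_name: str) -> None:
    """Stand-in for a single inference call on the target runtime."""
    time.sleep(0.001)  # simulate work so the harness runs stand-alone

def benchmark(model_name: str, warmup: int = 10, runs: int = 100) -> dict:
    """Time repeated inferences and report median / p95 latency in ms."""
    for _ in range(warmup):            # discard cold-start effects
        run_inference(model_name)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference(model_name)
        samples.append((time.perf_counter() - start) * 1000.0)
    return {
        "model": model_name,
        "median_ms": statistics.median(samples),
        "p95_ms": sorted(samples)[int(0.95 * len(samples)) - 1],
    }

if __name__ == "__main__":
    # Compare a floating-point model against its quantized counterpart
    # (names are hypothetical placeholders).
    for name in ("mobilenet_v2_float", "mobilenet_v2_int8"):
        print(benchmark(name))
```

Running a handful of warm-up iterations before timing matters on embedded targets, where caches, frequency scaling, and first-inference allocation can otherwise skew the numbers.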

Latest DeepViewRT Benchmarks
164,300+ devices deployed
100,000,000+ real-world inferences (and counting)
The DeepViewRT engine has been highly optimized for runtime size and performance across a long list of the most popular embedded processors, architectures, and standard x86-class devices, so you can run public, custom, and proprietary ML models anywhere the DeepViewRT engine is supported.
Best of all, it's FREE for development and production.
Benefits of the DeepViewRT production-ready engine
- Tested & documented for quick out-of-the-box deployment
- Examples and tutorials to save you time getting started
- Field proven to avoid surprises when you ship your products
- Lifecycle management for stability, longevity & compatibility
- Professional support if you need it
Runtime environments
- Embedded: Linux, Android, Azure, FreeRTOS and bare metal
- Desktop: Linux and Windows
Processor Types & Compute Architectures:
- Microcontrollers (MCUs): Arm Cortex-M7
- Application Processors (CPUs): Arm Cortex-A35, A53, A72
- Graphics Processing Units (GPUs): OpenVX
- Neural Processing Units (NPUs): VeriSilicon and Ethos
- Desktop: x86 (development & validation)
Model deployment formats:
- Floating point for full-precision accuracy
- Fixed point for optimal size and efficiency (see the quantization sketch below)
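To make the floating-point versus fixed-point trade-off concrete, the hypothetical sketch below applies simple symmetric int8 quantization to a weight tensor and reports the storage saving and round-trip error. It is a generic illustration of why fixed point shrinks models at a small accuracy cost, not the actual conversion performed by eIQ Portal or DeepViewRT.

```python
# Illustrative symmetric int8 quantization of a float32 weight tensor.
# Generic math only -- not the eIQ Portal / DeepViewRT conversion path.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 values onto int8 using a single symmetric scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(1000).astype(np.float32) * 0.1  # fake layer weights
    q, scale = quantize_int8(w)
    err = np.abs(w - dequantize(q, scale)).max()
    print(f"float32 size: {w.nbytes} B, int8 size: {q.nbytes} B (4x smaller)")
    print(f"max round-trip error: {err:.6f}")
```

The 4x size reduction is what makes fixed point attractive on MCU-class targets, while the round-trip error is the kind of accuracy impact you would evaluate against the floating-point baseline before shipping.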
Questions?





DeepView™ works with the tools and technologies you already use






