DeepViewRT Inference Engine


Best In Class Performance and Unprecedented Portability

The DeepViewRT run time inference engine provides developers with the freedom to quickly deploy ML models to a broad selection of embedded devices and compute architectures without sacrificing flexibility or performance.

Select a proven model from eIQ Portal, deploy floating point or quantized versions to your target and profile your model's performance under real-world runtime conditions.

See for yourself

What if there was a production-grade embedded inference engine that delivered best-in-class performance and portability?

What if that engine was free?

Now there is, and it's called DeepViewRT.

The DeepViewRT engine has been highly optimized for runtime size and performance across a long list of the most popular embedded processors and architectures, as well as standard x86-class devices. This means you can run public or custom ML models anywhere the DeepViewRT engine is supported. Free.


Benefits of a commercial, product-ready engine:

  • Comes tested and documented for quick out-of-the-box deployment

  • Numerous examples and tutorials to save you time getting started

  • Field proven, so there are no surprises when you start shipping your products

  • Lifecycle management for stability, longevity and backwards compatibility

  • Professional support if you need help, optimization or customization


Runtime environments:

  • Embedded: Linux, Android, RTOS and bare metal

  • Desktop: Linux and Windows

Processor Types & Compute Architectures:

  • Microcontrollers: Arm Cortex-M4 and Cortex-M7

  • Application Processors: Arm Cortex-A series

  • Graphics Processing Units: OpenCL and OpenVX

  • Neural Processing Units (NPUs)

  • Desktop/Server: x86 (development and validation)

Model deployment formats:

  • Floating point for full-precision accuracy

  • INT8 quantization for optimized size and efficiency
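INT8 quantization trades a small amount of accuracy for a roughly 4x reduction in model size and faster integer arithmetic, by mapping each floating-point tensor onto 8-bit integers through a scale and zero point. The sketch below is a generic NumPy illustration of that affine mapping, not the DeepViewRT API; `quantize_int8` and `dequantize_int8` are hypothetical helpers.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine-quantize a float array to int8; returns (q, scale, zero_point).

    Hypothetical helper for illustration only -- real toolchains (e.g. the
    eIQ Portal export flow) handle calibration and per-channel details.
    """
    x_min = min(float(x.min()), 0.0)  # quantized range must include zero
    x_max = max(float(x.max()), 0.0)
    scale = (x_max - x_min) / 255.0 or 1.0  # guard against constant tensors
    zero_point = int(round(-128 - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize_int8(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 values back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale
```

Round-tripping a tensor through these helpers shows the cost of quantization directly: the reconstruction error is bounded by roughly half the scale step, which is why well-conditioned models lose little accuracy at INT8.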

Want to Try It Out for Yourself?

Learn More

DeepView Application Packs


The DeepView AppPack provides you with the building blocks and glue for robust, turn-key intelligent vision applications.


DeepView DevPack delivers production-grade tools to help you optimize your machine learning models and fine-tune your training data sets.


The DeepView Model Pack provides developers with both public (open-source) and production-ready models.


DeepView Vision Packs provide the vision pipeline solutions for your edge computing and embedded machine learning applications. 

DeepView vision starter kits include the hardware and software you need to accelerate your machine learning development programs from bench-top through field trials and into production.

DeepView works with the tools and technologies you already use