SDKs and Porting Kits

Hardware Accelerated 
VISION PIPELINE

  • Use VisionPack's field-proven Vision AI Acceleration Library (VAAL) to achieve industry-leading performance.

  • Avoid the common pitfalls of integrating and optimizing image capture and processing, model inference and decoding, and visualization.

  • We've optimized the runtime so you can focus on your vision application.

Immediate 
RESULTS

  • Instantly deploy standard or custom AI models on an EVK, Maivin, or your own hardware.

  • Fully compatible with the eIQ® Toolkit graphical development environment for no-code model training and deployment.

  • Integrates with multiple frameworks so you can deploy your AI application the way you want.

Best-in-Class
PERFORMANCE

  • Achieve industry-leading performance with the DeepViewRT™ Inference Engine and DeepView™ VisionPack.

  • Ship your product with confidence, knowing your vision pipeline is fully optimized for AI compute at the edge.

Commercially
SUPPORTED

  • Build your EdgeFirst AI solution with confidence.

  • Long-term support and stability.

  • Documented code provenance.

  • Field-proven reliability.

Stay up to date on all DeepView AI Middleware releases. Subscribe today!


The DeepView AppPack provides the building blocks and glue for robust, turn-key intelligent vision applications.


DeepView VisionPacks provide vision pipeline solutions for your edge computing and embedded machine learning applications.

DeepView Vision Starter Kits include the hardware and software you need to accelerate your machine learning development programs from bench-top through field trials and into production.


The DeepView DevPack delivers production-grade tools to help you optimize your machine learning models and fine-tune your training datasets.


The DeepView ModelPack provides developers with both public (open-source) and production-ready models.


The DeepViewRT runtime inference engine gives developers the freedom to quickly deploy ML models to a broad selection of embedded devices.
