The DeepViewML Development Suite provides an end-to-end workflow for creating object detection, classification, and tracking solutions for IoT edge devices. It streamlines curating and labelling datasets, selecting optimized ML models for training and validation against those datasets, and optimizing, deploying, and profiling models directly on a wide range of edge processors. The workflow takes you from curating images that describe your specific domain to building smart devices. The full range of MCU, CPU, GPU, and NPU compute architectures is enabled through DeepViewML.
The dataset workspace gives developers an intuitive tool to quickly capture and annotate images for model training and validation. If you can draw a box around an object of interest and describe it, you’re ready to start building machine learning datasets and training models to solve your embedded computer vision challenge.
Dataset augmentation lets developers quickly adjust augmentation parameters to improve model training by reducing over-fitting and increasing robustness to dynamic real-world environments. A live preview shows what the augmentation parameters will do to your images before you initiate a training session.
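DeepViewML applies these transforms inside the tool; as an illustration of the underlying idea only, here is a minimal numpy sketch of two common augmentations, a random horizontal flip and a brightness jitter. The function name and parameters are our own, not part of the DeepView API.

```python
import numpy as np

def augment(image, rng, flip_prob=0.5, max_brightness_delta=30):
    """Apply a random horizontal flip and brightness shift to an HxWxC uint8 image."""
    out = image.astype(np.int16)           # widen so the brightness shift cannot overflow
    if rng.random() < flip_prob:
        out = out[:, ::-1, :]              # horizontal flip
    delta = rng.integers(-max_brightness_delta, max_brightness_delta + 1)
    out = np.clip(out + delta, 0, 255)     # brightness jitter, clipped to the valid range
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
aug = augment(img, rng)
```

Real augmentation pipelines chain many such transforms; previewing them before training, as the workspace does, catches settings that would distort the data beyond what the deployment environment will ever produce.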
DeepViewML ships with a collection of proven ML models verified to work well for common embedded computer vision problems such as detection, classification, and tracking. Models are optimized for maximum performance on the DeepViewRT inference engine.
The DeepView device class selection in the wizard lets developers choose the best model and adjust optimization parameters to achieve the desired balance between performance and precision on the selected device architecture class. The target architecture classes include MCU, CPU, GPU, and NPU, covering NXP’s SoC lineup of i.MX Application Processors and Crossover devices as well as Raspberry Pi platforms.
While DeepViewML provides highly optimized models for typical computer vision applications, developers can also bring their own. Custom models can be imported into the tool for training and optimization. The quantization assistant works with your unique models, while the Model Tool helps you debug and optimize them as you adapt them for the target embedded processor.
If you find yourself short on engineering resources or have an embedded computer vision problem you need help solving, Au-Zone provides engineering design services to deliver advanced models for your specific tasks, ready to use in the tool. Advanced models can include specialty plugins for DeepView that address customers’ specific machine learning needs.
The DeepViewML trainer bundles the well-known TensorFlow framework to drive training and leverages our unique high-performance dataset pipeline. You don’t need to convert, pack, or pre-process your dataset: simply select your hyperparameters and click train. The training pipeline is optimized for the tool’s workloads, and if your workstation is equipped with an Nvidia GPU, the trainer gains a significant performance boost as well.
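Inside the tool, TensorFlow does the heavy lifting once the hyperparameters are chosen. As a rough, self-contained illustration of what those knobs control, the following numpy mini-batch gradient-descent loop fits a line; the hyperparameter names are hypothetical, chosen only to mirror the learning-rate, epoch, and batch-size choices a trainer typically exposes.

```python
import numpy as np

# Hypothetical hyperparameters, analogous to what a training UI exposes.
LEARNING_RATE = 0.1
EPOCHS = 200
BATCH_SIZE = 16

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 1)).astype(np.float32)
y = 3.0 * X[:, 0] + 0.5                  # ground-truth line the model must recover

w, b = 0.0, 0.0
for _ in range(EPOCHS):
    idx = rng.permutation(len(X))        # reshuffle each epoch
    for start in range(0, len(X), BATCH_SIZE):
        batch = idx[start:start + BATCH_SIZE]
        err = (w * X[batch, 0] + b) - y[batch]
        w -= LEARNING_RATE * (err * X[batch, 0]).mean()  # gradient step on the weight
        b -= LEARNING_RATE * err.mean()                  # gradient step on the bias
```

Raising the learning rate speeds convergence until it destabilizes training; more epochs or smaller batches trade wall-clock time for gradient quality, which is exactly the tuning loop the trainer automates at scale.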
The model optimizer helps you fine-tune the model for the desired target platform and inference engine. It provides automatic graph-level optimizations such as pruning, fusing, and folding layers to reduce complexity and improve performance with no loss of precision. Optimizations that trade accuracy for performance, including quantization, layer replacement, and weight rounding, remain configurable by the developer. Lossy optimizations can be validated and compared using the validator tool to fully understand their repercussions.
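Quantization is the most common of these lossy optimizations. As a sketch of why it trades precision for performance, this pure-numpy example applies 8-bit asymmetric affine quantization to a weight tensor and measures the round-trip error; it illustrates the general technique, not DeepViewML's internal implementation.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Asymmetric affine quantization: map floats onto unsigned num_bits integers."""
    qmax = 2 ** num_bits - 1
    scale = (x.max() - x.min()) / qmax            # real value covered by one integer step
    zero_point = int(np.round(-x.min() / scale))  # integer that represents real 0.0
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

weights = np.linspace(-1.0, 1.0, 101, dtype=np.float32)
q, scale, zp = quantize_affine(weights)
restored = dequantize_affine(q, scale, zp)
max_err = float(np.abs(weights - restored).max())  # bounded by roughly scale / 2
```

The integer tensor is a quarter the size of the float32 original and maps onto fast integer hardware, but every value now carries up to half a quantization step of error, which is precisely the tradeoff the validator lets you measure on real data.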
DeepViewRT provides best-in-class performance and support across MCU, CPU, GPU, and NPU devices, but our tooling is also capable of optimizing for and targeting TensorFlow Lite, Arm NN, Glow, and the ONNX Runtime.
Once you’ve trained and optimized your model, the validator provides a rich set of tools for analyzing how it performs. Take snapshots and compare them across different optimizations. DeepView helps you explore on-target runtime performance between floating-point and quantized models at 32, 16, and 8 bits of precision. Validation runs not only on your workstation but also against any device running the DeepView ModelRunner, which supports all the major inference engines, including DeepViewRT.
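The validator presents such comparisons graphically, but the metrics behind a float-versus-quantized snapshot comparison can be as simple as mean absolute error on the model outputs plus top-1 agreement. The sketch below (function name and synthetic data are hypothetical, not the DeepView API) shows the idea:

```python
import numpy as np

def compare_snapshots(float_logits, quant_logits):
    """Compare per-image classifier outputs from a float model and its quantized twin."""
    mae = float(np.abs(float_logits - quant_logits).mean())
    top1_float = float_logits.argmax(axis=1)
    top1_quant = quant_logits.argmax(axis=1)
    agreement = float((top1_float == top1_quant).mean())  # fraction of matching labels
    return mae, agreement

rng = np.random.default_rng(42)
float_logits = rng.normal(size=(100, 10)).astype(np.float32)
# Stand-in for quantized outputs: the float outputs plus small quantization noise.
quant_logits = float_logits + rng.normal(scale=0.01, size=(100, 10)).astype(np.float32)
mae, agreement = compare_snapshots(float_logits, quant_logits)
```

A small MAE with high top-1 agreement suggests the lossy optimization is safe to ship; a drop in agreement on real validation images is the signal to revisit the quantization settings.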