Development Tools & Runtime Inference Engine
Train your model once, run it anywhere
DeepView™ Development Suite is a Python/Jupyter-based toolkit and highly portable run-time inference engine developed specifically to help embedded engineers design, train and deploy DNNs, CNNs and RNNs on embedded compute platforms.
The DeepView™ 2.0 ML Toolkit provides two primary workflows:
The BYOD (Bring Your Own Data) workflow lets you quickly and easily transfer learn a pre-existing neural network with images relevant to your application.
The BYOM (Bring Your Own Model) workflow lets you convert a custom or proprietary network from Caffe or TensorFlow and deploy it to the target(s) of your choosing for evaluation and optimization.
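To illustrate the idea behind the BYOD workflow, here is a minimal, framework-agnostic sketch of transfer learning. It does not use any DeepView™ API; the frozen "backbone" is stood in for by a fixed random projection, and only a small classifier head is retrained on new data, which is the essence of transfer learning a pre-existing network.

```python
# Transfer-learning sketch (not DeepView code): keep a pre-trained backbone
# frozen and retrain only a small classifier head on your own data.
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection with a ReLU, standing in for
# the pre-trained layers of an existing network. Its weights are never updated.
W_backbone = rng.normal(size=(16, 8))

def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)

# Synthetic "your own data": two classes, labeled by a simple rule.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

F = backbone(X)  # features extracted once by the frozen backbone

# Trainable head: logistic regression, the only part updated during training.
w = np.zeros(8)
b = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid activation
    grad_w = F.T @ (p - y) / len(y)          # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
```

Because the backbone is frozen, only the small head is trained, which is why transfer learning needs far less data and compute than training a full network from scratch.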
The DeepView™RT 2.0 run-time inference engine lets the developer easily deploy, evaluate and profile models on a very wide range of standard or custom silicon. It has been developed to support many different classes of embedded processing clusters, multiple operating systems and a wide range of industry-standard System on Chip processors:
Processor types: microcontrollers (MCUs), microprocessors (MPUs), graphics processing units (GPUs) and AI accelerators
Runtime environments: Linux, Android, common RTOSes and bare metal
Standard Semi Vendors: Arm, Broadcom, HiKey, Infineon, NXP, Renesas, Rockchip, Samsung, ST Micro, Synopsys & more
Raspberry Pi Dev Program
Au-Zone is completing alpha trials on the RPi now and will be launching a DeepView™ 2.0 RPi Developer Program to get the tools and engine into more hands. The goal of the program is simple: let people use DeepView™ for non-commercial, experimental and educational use and see what they do with it!
If you are interested in finding out more, please sign up below.
DeepView™RT 2.0 Support Matrix
The table below shows the relative performance of many common public DNN models (rows) on devices (columns) from several well-known semiconductor manufacturers.