Overview of the Intel® Distribution of OpenVINO™ Toolkit
The Intel® Developer Cloud for the Edge comes preinstalled with the Intel® Distribution of OpenVINO™ toolkit, which helps developers run inference on a range of compute devices and accelerates the development of machine learning solutions. Built around deep learning workloads such as convolutional neural networks (CNNs), the toolkit distributes work across Intel® hardware (including accelerators) to maximize performance.
The Intel® Distribution of OpenVINO™ toolkit includes:
- A Model Optimizer to convert models from popular frameworks such as Caffe*, TensorFlow*, Open Neural Network Exchange (ONNX*), and Kaldi*
- An Inference Engine that supports heterogeneous execution across computer vision accelerators from Intel, including CPUs, GPUs, FPGAs, and the Intel® Neural Compute Stick 2 (Intel® NCS 2)
- A common API for heterogeneous Intel® hardware
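To illustrate the common API, the minimal sketch below assumes the Inference Engine Python API (`openvino.inference_engine`) shipped with OpenVINO 2020/2021 releases; it simply enumerates the devices the Inference Engine can target on the current machine.

```python
from openvino.inference_engine import IECore

ie = IECore()

# Devices visible to the Inference Engine on this machine,
# e.g. ['CPU', 'GPU', 'MYRIAD'] -- 'MYRIAD' is the plugin name
# used for the Intel NCS 2.
print(ie.available_devices)
```

Because the API is shared across plugins, the same application code runs on any of these devices; only the device name passed at load time changes.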
Core Flow
The basic workflow is:
- Use a tool, such as Caffe*, to create and train a CNN inference model.
- Run the model through the Model Optimizer to produce an optimized Intermediate Representation (IR) stored in files (.bin and .xml) for use with the Inference Engine.
- Load and run the model on targeted devices from your application using the Inference Engine and the IR files, as sketched below.
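The following sketch walks through the last step, again assuming the Inference Engine Python API from OpenVINO 2020/2021 releases. The IR file names (`model.xml`, `model.bin`) are placeholders for Model Optimizer output, and the random input stands in for real preprocessed data.

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Read the Intermediate Representation produced by the Model Optimizer.
# "model.xml" (topology) and "model.bin" (weights) are placeholder names.
net = ie.read_network(model="model.xml", weights="model.bin")

# Compile the network for a target device; swap "CPU" for "GPU",
# "MYRIAD" (Intel NCS 2), etc. without changing any other code.
exec_net = ie.load_network(network=net, device_name="CPU")

# Look up the input/output tensor names and the expected input shape.
input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
n, c, h, w = net.input_info[input_name].input_data.shape

# Run synchronous inference on dummy data shaped like a real input batch.
frame = np.random.rand(n, c, h, w).astype(np.float32)
result = exec_net.infer(inputs={input_name: frame})
print(result[output_name].shape)
```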
- To begin working with the Intel® Distribution of OpenVINO™ toolkit in Python*, create a new Jupyter* Notebook.
- To begin working with a local copy on your own machine, download the Intel® Distribution of OpenVINO™ toolkit.