Maybe you've trained a model and it's ready to help you detect or classify, or maybe you want to use a pre-trained model to make sense of visual data – how do you deploy your model successfully and cost-effectively?
The answer is Intel OpenVINO, a deep-learning deployment toolkit that enables you to optimise your models on Intel hardware and find the right configuration for your business. That's a big boon for developers and managers alike, who until now might have assumed that taking an AI model from training to production requires costly dedicated graphics cards (GPUs).
OpenVINO provides a cost-effective, high-performance alternative: it lets you take advantage of the built-in Intel processors your business already owns across its infrastructure – now faster than ever – to run and optimise your AI products.
What makes OpenVINO unique?
OpenVINO is best thought of as an enabler – it is a comprehensive solution to the challenge of AI model optimisation. With a write once, deploy anywhere approach, the pioneering OpenVINO platform analyses your model and the hardware available for inference.
You can use OpenVINO to optimise the compute graph of your model and then optimise it further for the hardware platform you choose.
Intel provides a wide variety of hardware to run your inference workloads, whether that's CPUs, integrated graphics or Intel accelerators. What's more, you can rapidly change the target hardware for your model: while you might start in one environment, you can quickly switch to another.
This ability to switch hardware easily and quickly saves developers a huge amount of time and resources. If you choose to run your inference on accelerators – such as Movidius VPUs rather than a CPU – you don't have to write additional code to make the change happen.
Crucially, while OpenVINO enables you to run inference on your Intel hardware, the open-source nature of the tool does not confine you to a specific hardware platform.
It's these features – from optimisation through to open-source utilisation – that make OpenVINO an effective way to move your AI models to a production-ready environment. As well as being a unique toolkit, OpenVINO provides business benefits that cover three key areas: performance, development and deployment.
Benefit 1: High performance, deep learning inference
Some organisations will want to run models in the cloud, while others might choose an internal data centre. OpenVINO is not prescriptive: it supports a variety of environments and data types, meaning your business can optimise and run its models in a way that suits your organisation's requirements.
OpenVINO also includes a post-training optimisation tool. This tool accelerates the inference of deep-learning models by converting them into a more hardware-friendly representation (for example, quantising 32-bit floating-point weights to 8-bit integers), giving your model a considerable speed boost.
Case study 1: ArcelorMittal
Steel giant ArcelorMittal is using machine learning to extract critical data about railway cars, such as material location and condition, from video frames. ArcelorMittal Poland used the OpenVINO toolkit to run inference on deep-learning models accelerated by Intel FPGAs.
By making use of its existing hardware, the business was able to avoid costly new infrastructure for development. The company can now process up to 19 frames per second, compared to only two to three frames per second without the optimisations of the toolkit.
Benefit 2: Streamlined development
There are more than 40 pre-trained models in the OpenVINO Open Model Zoo, covering a variety of use cases such as object detection and image classification. You are free to use these pre-trained models in your commercial applications, giving you a head start on development.
Examples include retailers that are using video to recognise which goods are on the shelves, transport specialists that are using image recognition to detect obstacles on the road, and blue-chip businesses using video for security purposes. OpenVINO-supported frameworks include TensorFlow, Apache MXNet, PyTorch, and ONNX.
Case study 2: Vispera
Tech firm Vispera's Shelfsight solution automates the shelf-inspection process, detecting out-of-stock, misplaced, and excess items. The solution uses Intel Xeon Scalable processors optimized with the OpenVINO toolkit and Intel DevCloud for the Edge. Using the toolkit, Shelfsight's image-recognition engines have been benchmarked to perform on par with GPUs, with lower total cost of ownership. For a typical supermarket with 100 m of shelves, Shelfsight can scan an entire store every 30 minutes using a single Intel Xeon Scalable processor.
Benefit 3: Write once, deploy anywhere
With a write once, deploy anywhere approach, developers can use OpenVINO to write an application or algorithm and then deploy it across the hardware environment of their choice.
You can run the same OpenVINO code on any Intel hardware, whether that's CPUs, accelerators or integrated graphics. That means that, as a developer, you only need to write your code once; you can then deploy it across whatever hardware you choose for inference.
Rather than relying on dedicated GPUs that are expensive to buy and run, you can use Intel OpenVINO as an all-in-one solution that saves your organisation money on AI inference while delivering competitive performance.
Now is the time for you to find out what OpenVINO can do for your organisation.
To find out more about AI & OpenVINO, register for an upcoming webinar