Intel expands its AI developer toolkit

(Image credit: Pixabay)

Ahead of MWC 2022, Intel has released a new version of the Intel Distribution of OpenVINO toolkit that includes major upgrades to accelerate AI inferencing performance.

Since the launch of OpenVINO in 2018, the chip giant has enabled hundreds of thousands of developers to accelerate AI inferencing performance, beginning at the edge and extending to both enterprise and client deployments.

This latest release builds on three-and-a-half years of developer feedback and adds a greater selection of deep learning models, more device portability options and higher inferencing performance with fewer code changes.

Adam Burns, VP of OpenVINO developer tools in Intel's Network and Edge Group, provided further insight on this latest version of the company's distribution of the OpenVINO toolkit in a press release, saying:

“The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network.”

OpenVINO toolkit

Built on the foundation of oneAPI, the Intel Distribution of OpenVINO toolkit is a suite of tools for high-performance deep learning, targeted at enabling faster, more accurate real-world results deployed into production from the edge to the cloud. New features in the latest release make it easier for developers to adopt, maintain, optimize and deploy code across an expanded range of deep learning models.

The latest version of the Intel Distribution of OpenVINO toolkit features an updated, cleaner API that requires fewer code changes when transitioning from another framework. At the same time, the Model Optimizer's API parameters have been reduced to minimize complexity.
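As a rough sketch of what the streamlined 2022.1 Python API looks like in practice, loading and running a model takes only a few calls. The model path and input shape below are hypothetical placeholders:

```python
import numpy as np
from openvino.runtime import Core  # runtime API introduced with the 2022.1 release

core = Core()

# Read a model in OpenVINO IR format (path is a hypothetical placeholder)
model = core.read_model("model.xml")

# Compile the model for a target device; "CPU" is used here as an example
compiled_model = core.compile_model(model, "CPU")

# Run inference by calling the compiled model with input data
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
output_layer = compiled_model.output(0)
result = compiled_model([input_data])[output_layer]
```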

Intel has also included broader support for natural language processing (NLP) models for use cases such as text-to-speech and voice recognition. In terms of performance, the new AUTO device mode self-discovers available system inferencing capacity based on model requirements, so applications no longer need to know their compute environment in advance.
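As a minimal sketch of how this works, passing "AUTO" as the device name delegates device discovery to the runtime; the optional performance hint shown is one of the configuration options documented alongside this release, and the model path is again a hypothetical placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # hypothetical model path

# "AUTO" lets the runtime discover available devices and select one
# based on the model's requirements, rather than hard-coding "CPU" or "GPU".
compiled_model = core.compile_model(model, "AUTO",
                                    {"PERFORMANCE_HINT": "LATENCY"})
```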

Finally, Intel has added support for the hybrid architecture of 12th Gen Intel Core CPUs, delivering enhancements for high-performance inferencing on both the CPU and integrated GPU.

Anthony Spadafora

After working with the TechRadar Pro team for the last several years, Anthony is now the security and networking editor at Tom’s Guide where he covers everything from data breaches and ransomware gangs to the best way to cover your whole home or business with Wi-Fi. When not writing, you can find him tinkering with PCs and game consoles, managing cables and upgrading his smart home.