Supported Devices
The OpenVINO™ runtime enables you to use the following devices to run your deep learning models: CPU, GPU, NPU.
Besides running inference on a specific device, OpenVINO offers the option of running automated inference with the following inference modes (a minimal usage sketch follows the list):

- Automatic Device Selection (AUTO): automatically selects the best device available for the given task. It offers many additional options and optimizations, including inference on multiple devices at the same time.
- Heterogeneous Execution (HETERO): enables splitting inference among several devices automatically, for example, if one device does not support certain operations.
- Automatic Batching (BATCH): automatically groups inference requests to improve device utilization.
- Multi-Device Execution (MULTI): executes inference on multiple devices. This mode is currently considered a legacy solution; using Automatic Device Selection instead is advised.
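As a quick illustration, the sketch below compiles one model for a specific device and for each of the automated modes, using the OpenVINO Python API. The model path `model.xml` is a placeholder; the device strings are the standard ones shown in the list above.

```python
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'], depending on the machine

model = core.read_model("model.xml")  # placeholder model path

# Run on a specific device
compiled_cpu = core.compile_model(model, "CPU")

# AUTO: let OpenVINO pick the best available device
compiled_auto = core.compile_model(model, "AUTO")

# HETERO: prefer GPU, fall back to CPU for operations GPU does not support
compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")

# BATCH: automatically group inference requests on GPU
compiled_batch = core.compile_model(model, "BATCH:GPU")
```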
Feature Support and API Coverage
| Supported Feature | CPU | GPU | NPU |
|---|---|---|---|
| Automatic Device Selection | Yes | Yes | Partial |
| Heterogeneous execution | Yes | Yes | No |
| Automatic batching | No | Yes | No |
| Multi-stream execution | Yes | Yes | No |
| Model caching | Yes | Partial | Yes |
| Dynamic shapes | Yes | Partial | No |
| Import/Export | Yes | Yes | Yes |
| Preprocessing acceleration | Yes | Yes | No |
| Stateful models | Yes | Yes | Yes |
| Extensibility | Yes | Yes | No |
| (LEGACY) Multi-device execution | Yes | Yes | Partial |
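Two of the features above, model caching and device property queries, take only a few lines to use. The sketch below is a minimal example, assuming the standard `CACHE_DIR` and `FULL_DEVICE_NAME` property names; the model path is a placeholder.

```python
import openvino as ov

core = ov.Core()

# Model caching: compiled blobs are written to and reloaded from CACHE_DIR,
# which speeds up subsequent compile_model() calls for the same model
core.set_property({"CACHE_DIR": "./ov_cache"})

# Query a device property, e.g. the human-readable device name
print(core.get_property("CPU", "FULL_DEVICE_NAME"))

compiled = core.compile_model("model.xml", "CPU")  # placeholder model path
```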
| API Coverage | plugin | infer_request | compiled_model |
|---|---|---|---|
| CPU | 98.31 % | 100.0 % | 90.7 % |
| CPU_ARM | 80.0 % | 100.0 % | 89.74 % |
| GPU | 91.53 % | 100.0 % | 100.0 % |
| dGPU | 89.83 % | 100.0 % | 100.0 % |
| NPU | 18.64 % | 0.0 % | 9.3 % |
| AUTO | 93.88 % | 100.0 % | 100.0 % |
| BATCH | 86.05 % | 100.0 % | 86.05 % |
| HETERO | 61.22 % | 99.24 % | 86.05 % |

Percentage of API supported by the device, as of OpenVINO 2024.5, 20 Nov. 2024.
For setting up a relevant configuration, refer to the Integrate with Customer Application topic (step 3, “Configure input and output”).
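To make that step concrete, here is a minimal sketch of inspecting a model's input and output ports and running one inference. It assumes a static-shape model with a single float32 input; `model.xml` is a placeholder.

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder model path

# Inspect the model's input and output ports
print([inp.get_any_name() for inp in model.inputs])
print([out.get_any_name() for out in model.outputs])

compiled = core.compile_model(model, "AUTO")

# Feed zeros of the expected shape and fetch the first output
data = np.zeros(compiled.input(0).shape, dtype=np.float32)
result = compiled(data)[compiled.output(0)]
print(result.shape)
```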
| Device | Archives | PyPI | APT/YUM/ZYPPER | Conda | Homebrew | vcpkg | Conan | npm |
|---|---|---|---|---|---|---|---|---|
| CPU | V | V | V | V | V | V | V | V |
| GPU | V | V | V | V | V | V | V | V |
| NPU | V* | V* | V* | n/a | n/a | n/a | n/a | V* |
* Among Linux systems, only Ubuntu versions 22.04 and 24.04 include drivers for NPU.
On Windows, CPU inference on ARM64 is not supported.
Note
With the OpenVINO 2024.0 release, support for GNA has been discontinued. To keep using it in your solutions, revert to the 2023.3 (LTS) version.
With the OpenVINO™ 2023.0 release, support has been discontinued for:
- Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X
- Intel® Vision Accelerator Design with Intel® Movidius™
To keep using the MYRIAD and HDDL plugins with your hardware, revert to the OpenVINO 2022.3 (LTS) version.