OpenVINO™ Inference Request

To set up and run inference, use the ov::InferRequest class. It enables you to run inference on different devices either synchronously or asynchronously. It also includes methods to retrieve or adjust data from model inputs and outputs.

The ov::InferRequest can be created from the ov::CompiledModel.

Python

infer_request = compiled_model.create_infer_request()

C++

auto infer_request = compiled_model.create_infer_request();

Synchronous / asynchronous inference#

The synchronous mode is the basic mode of inference and means that inference stages block the application execution, as one waits for the other to finish. Use ov::InferRequest::infer to execute in this mode.
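
For instance, a minimal synchronous flow in Python might look like the sketch below; the model path, device, and the single f32 input are assumptions for illustration.

Python

import numpy as np
import openvino as ov

core = ov.Core()
compiled_model = core.compile_model("model.xml", "CPU")  # hypothetical model path and device
infer_request = compiled_model.create_infer_request()

# Fill the single model input with dummy data (assumes one f32 input).
input_tensor = infer_request.get_input_tensor()
input_tensor.data[:] = np.random.rand(*input_tensor.shape).astype(np.float32)

# infer() blocks until inference has finished.
infer_request.infer()

result = infer_request.get_output_tensor().data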

The asynchronous mode may improve application performance, as it enables the application to continue its operations while the accelerator is still running inference. Use ov::InferRequest::start_async to execute in this mode.

Python

infer_request.start_async()

C++

infer_request.start_async();

The asynchronous mode supports two ways for the application to wait for inference results: ov::InferRequest::wait, which blocks until the result becomes available, and ov::InferRequest::wait_for, which blocks for at most the specified timeout. Both are thread-safe.
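
As an illustration, the sketch below starts inference asynchronously and then blocks on the result; it assumes an existing infer_request, and that wait_for takes its timeout in milliseconds.

Python

# Start inference without blocking the application thread.
infer_request.start_async()

# ... the application can do other work here ...

# Option 1: block until the result becomes available.
infer_request.wait()

# Option 2 (assumed ms timeout): block for at most 10 ms, then check completion.
# finished = infer_request.wait_for(10)

result = infer_request.get_output_tensor(0).data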

For more information, see the Classification Async Sample, as well as the articles on synchronous and asynchronous inference requests.

Working with Input and Output tensors#

ov::InferRequest enables you to get and set input/output tensors by tensor name, by index, or by port. Note that the same lookup logic applies when you query inputs and outputs with the ov::Model methods.

Python

get_input_tensor, set_input_tensor, get_output_tensor, set_output_tensor

C++

ov::InferRequest::get_tensor, ov::InferRequest::set_tensor
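
As an illustration, the sketch below accesses the same input through the different lookups; it assumes the compiled_model and infer_request created earlier, and the tensor name "input_name" is hypothetical.

Python

import openvino as ov

# By index.
by_index = infer_request.get_input_tensor(0)

# By tensor name (hypothetical name; it must exist in the model).
by_name = infer_request.get_tensor("input_name")

# By port, mirroring the lookup logic of ov::Model / CompiledModel.
input_port = compiled_model.input(0)
by_port = infer_request.get_tensor(input_port)

# A pre-allocated tensor can be supplied the same way.
new_tensor = ov.Tensor(ov.Type.f32, by_index.shape)
infer_request.set_input_tensor(0, new_tensor)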

Infer Request Use Scenarios#

Cascade of Models#

ov::InferRequest can be used to organize a cascade of models. A separate infer request is required for each model. In this case, you can get the output tensor from the first request, using ov::InferRequest::get_tensor, and set it as input for the second request, using ov::InferRequest::set_tensor. Keep in mind that tensors shared across compiled models can be rewritten by the first model if the first infer request is run once again before the second model has started.

Python

output = infer_request1.get_output_tensor(0)
infer_request2.set_input_tensor(0, output)

C++

auto output = infer_request1.get_output_tensor(0);
infer_request2.set_input_tensor(0, output);
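
Put together, a cascade of two models might look like the following sketch; compiled_model_1 and compiled_model_2 are hypothetical names for the two compiled models.

Python

# Create one infer request per model in the cascade.
infer_request1 = compiled_model_1.create_infer_request()
infer_request2 = compiled_model_2.create_infer_request()

# Run the first model.
infer_request1.infer()

# Pass its output to the second model without copying the data.
output = infer_request1.get_output_tensor(0)
infer_request2.set_input_tensor(0, output)

# Run the second model. Do not restart infer_request1 before this finishes,
# otherwise the shared tensor may be overwritten.
infer_request2.infer()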

Re-use shared input in several models (e.g. ROI Tensors)#

If a model processes data created by a different model in the same pipeline, you may be able to reuse the input instead of allocating two separate input tensors. Just allocate memory for the first model input, and then reuse it for the second model, adjusting it if necessary. A good example is when the first model detects objects in a video frame (stored as an input tensor), and the second model uses the generated Region of Interest (ROI) to perform additional operations. In this case, the second model may take the pre-allocated input and crop the frame to the size of the generated bounding boxes. To do so, use the ov::Tensor constructor that takes an existing ov::Tensor and ov::Coordinate boundaries as parameters.

Python

# input_tensor points to input of a previous network and
# cropROI contains coordinates of output bounding box
input_tensor = ov.Tensor(type=ov.Type.f32, shape=ov.Shape([1, 3, 100, 100]))
begin = [0, 0, 0, 0]
end = [1, 3, 32, 32]
# ...

C++

/** input_tensor points to input of a previous network and
    cropROI contains coordinates of output bounding box **/
ov::Tensor input_tensor(ov::element::f32, ov::Shape({1, 3, 20, 20}));
ov::Coordinate begin({0, 0, 0, 0});
ov::Coordinate end({1, 2, 3, 3});
//...
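
A Python sketch of the same idea follows; it assumes the Python bindings expose an ROI overload ov.Tensor(other, begin, end) that creates a view over the existing memory instead of copying it.

Python

import openvino as ov

# input_tensor holds the full frame produced by the first model;
# begin/end describe the detected bounding box.
input_tensor = ov.Tensor(ov.Type.f32, ov.Shape([1, 3, 100, 100]))
begin = [0, 0, 0, 0]
end = [1, 3, 32, 32]

# ROI tensor sharing memory with input_tensor (assumed constructor overload).
roi_tensor = ov.Tensor(input_tensor, begin, end)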

Using Remote Tensors#

By using ov::RemoteContext you can create a remote tensor to work with remote device memory.

Python

C++

ov::RemoteContext context = core.get_default_context("GPU");
auto input_port = compiled_model.input("tensor_name");
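
A Python sketch of the same flow follows; it assumes the Python API mirrors the C++ ov::RemoteContext interface (get_default_context, create_tensor), and the model path and tensor name are placeholders.

Python

import openvino as ov

core = ov.Core()
compiled_model = core.compile_model("model.xml", "GPU")  # hypothetical model path

# Assumed to mirror the C++ ov::RemoteContext flow shown above.
context = core.get_default_context("GPU")
input_port = compiled_model.input("tensor_name")  # placeholder tensor name

# Allocate a tensor backed by remote (device) memory for that input
# (assumed signature; see the Remote Tensor API documentation for specifics).
remote_tensor = context.create_tensor(input_port.get_element_type(), input_port.get_shape(), {})

infer_request = compiled_model.create_infer_request()
infer_request.set_tensor(input_port, remote_tensor)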