NVIDIA - TensorRT

TensorRT Execution Provider

With the TensorRT execution provider, the ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration.

The TensorRT execution provider in ONNX Runtime makes use of NVIDIA's TensorRT deep learning inference engine to accelerate ONNX models on NVIDIA GPUs. Microsoft and NVIDIA worked closely to integrate the TensorRT execution provider with ONNX Runtime.


Install

Please select the GPU (CUDA/TensorRT) version of ONNX Runtime: https://onnxruntime.ai/docs/install. Pre-built packages and Docker images are available for JetPack in the Jetson Zoo.
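To quickly verify an install, you can list the execution providers available in the installed package; a minimal sketch:

import onnxruntime as ort

# The GPU (CUDA/TensorRT) package should list TensorrtExecutionProvider here;
# if it is missing, a CPU-only or mismatched package is installed.
print(ort.__version__)
print(ort.get_available_providers())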

Build from source

See Build instructions.

Requirements

Note: Starting with version 1.19, CUDA 12 became the default version for ONNX Runtime GPU packages.

ONNX Runtime   TensorRT   CUDA
main           10.9       12.0-12.8, 11.8
1.21           10.8       12.0-12.8, 11.8
1.20           10.4       12.0-12.6, 11.8
1.19           10.2       12.0-12.6, 11.8
1.18           10.0       11.8, 12.0-12.6
1.17           8.6        11.8, 12.0-12.6
1.16           8.6        11.8
1.15           8.6        11.8
1.14           8.5        11.6
1.12-1.13      8.4        11.4
1.11           8.2        11.4
1.10           8.0        11.4
1.9            8.0        11.4
1.7-1.8        7.2        11.0.3
1.5-1.6        7.1        10.2
1.2-1.4        7.0        10.1
1.0-1.1        6.0        10.0

For more details on CUDA/cuDNN versions, please see CUDA EP requirements.

Usage

C/C++

Ort::Env env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
Ort::SessionOptions sf;
int device_id = 0;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Tensorrt(sf, device_id));
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_CUDA(sf, device_id));
Ort::Session session(env, model_path, sf);

The C API details are here.

Python

To use the TensorRT execution provider, you must explicitly register it when instantiating the InferenceSession. It is recommended to also register the CUDAExecutionProvider, so that ONNX Runtime can assign nodes that TensorRT does not support to the CUDA execution provider.

import onnxruntime as ort
# set providers to ['TensorrtExecutionProvider', 'CUDAExecutionProvider'] with TensorrtExecutionProvider having the higher priority.
sess = ort.InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
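To confirm that the TensorRT execution provider was actually registered (rather than the session silently falling back to CUDA or CPU), you can query the session after creation; a minimal sketch using get_providers():

# Continuing from the session created above: confirm which providers were registered.
# Expected output is something like ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'].
print(sess.get_providers())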

Configurations

There are two ways to configure TensorRT settings: via TensorRT Execution Provider session options, or via environment variables (deprecated).

Here are examples and different scenarios to set TensorRT EP session options:

Python API example:

import onnxruntime as ort

model_path = '<path to model>'

# note: for bool type options in python API, set them as False/True
providers = [
    ('TensorrtExecutionProvider', {
        'device_id': 0,                       # Select GPU to execute
        'trt_max_workspace_size': 2147483648, # Set GPU memory usage limit
        'trt_fp16_enable': True,              # Enable FP16 precision for faster inference  
    }),
    ('CUDAExecutionProvider', {
        'device_id': 0,
        'arena_extend_strategy': 'kNextPowerOfTwo',
        'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
        'cudnn_conv_algo_search': 'EXHAUSTIVE',
        'do_copy_in_default_stream': True,
    })
]

sess_opt = ort.SessionOptions()
sess = ort.InferenceSession(model_path, sess_options=sess_opt, providers=providers)

C++ API example:

Ort::SessionOptions session_options;

const auto& api = Ort::GetApi();
OrtTensorRTProviderOptionsV2* tensorrt_options;
Ort::ThrowOnError(api.CreateTensorRTProviderOptions(&tensorrt_options));

std::vector<const char*> option_keys = {
    "device_id",
    "trt_max_workspace_size",
    "trt_max_partition_iterations",
    "trt_min_subgraph_size",
    "trt_fp16_enable",
    "trt_int8_enable",
    "trt_int8_use_native_calibration_table",
    "trt_dump_subgraphs",
    // below options are strongly recommended !
    "trt_engine_cache_enable",
    "trt_engine_cache_path",
    "trt_timing_cache_enable",
    "trt_timing_cache_path",
};
std::vector<const char*> option_values = {
    // one value per key, in the same order as option_keys
    "1",
    "2147483648",
    "10",
    "5",
    "1",
    "1",
    "1",
    "1",
    "1",
    "/path/to/cache",
    "1",
    "/path/to/cache", // can be same as the engine cache folder
};

Ort::ThrowOnError(api.UpdateTensorRTProviderOptions(tensorrt_options,
                                                    option_keys.data(), option_values.data(), option_keys.size()));


cudaStream_t cuda_stream;
cudaStreamCreate(&cuda_stream);
// this implicitly sets "has_user_compute_stream"
Ort::ThrowOnError(api.UpdateTensorRTProviderOptionsWithValue(tensorrt_options, "user_compute_stream", cuda_stream));

session_options.AppendExecutionProvider_TensorRT_V2(*tensorrt_options);
/// below code can be used to print all options
OrtAllocator* allocator;
char* options;
Ort::ThrowOnError(api.GetAllocatorWithDefaultOptions(&allocator));
Ort::ThrowOnError(api.GetTensorRTProviderOptionsAsString(tensorrt_options, allocator, &options));

Scenario

Note: for bool type options, assign them with True/False in Python, or "1"/"0" in C++.

Execution Provider Options

TensorRT configurations can be set through execution provider options. This is useful when each model and inference session needs its own configuration. In this case, execution provider option settings override any environment variable settings. All configurations should be set explicitly; otherwise the default value is used. (A sketch for inspecting the effective option values follows the list below.)

device_id
user_compute_stream
trt_max_workspace_size
trt_max_partition_iterations
trt_min_subgraph_size
trt_fp16_enable
trt_int8_enable
trt_int8_calibration_table_name
trt_int8_use_native_calibration_table
trt_dla_enable
trt_dla_core
trt_engine_cache_enable
trt_engine_cache_path
trt_engine_cache_prefix
trt_dump_subgraphs
trt_force_sequential_engine_build
trt_context_memory_sharing_enable
trt_layer_norm_fp32_fallback
trt_timing_cache_enable
trt_timing_cache_path
trt_force_timing_cache
trt_detailed_build_log
trt_build_heuristics_enable
trt_cuda_graph_enable
trt_sparsity_enable
trt_builder_optimization_level
trt_auxiliary_streams
trt_tactic_sources
trt_profile_min_shapes
trt_profile_max_shapes
trt_profile_opt_shapes
trt_engine_hw_compatible
trt_op_types_to_exclude
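Since unspecified options fall back to their defaults, it can be useful to inspect the option values a session actually ended up with. Below is a minimal sketch using the standard InferenceSession.get_provider_options() call; the model path and the single explicitly set option are placeholders:

import onnxruntime as ort

# 'model.onnx' is a placeholder; only trt_fp16_enable is set explicitly here.
providers = [('TensorrtExecutionProvider', {'trt_fp16_enable': True}), 'CUDAExecutionProvider']
sess = ort.InferenceSession('model.onnx', providers=providers)

# get_provider_options() returns the effective key/value pairs per provider,
# including the defaults used for options that were not set explicitly.
print(sess.get_provider_options()['TensorrtExecutionProvider'])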

Environment Variables (deprecated)

The following environment variables can be set for the TensorRT execution provider. Default values can be overridden by setting these variables, e.g. on Linux:

# Override default max workspace size to 2GB
export ORT_TENSORRT_MAX_WORKSPACE_SIZE=2147483648

# Override default maximum number of iterations to 10
export ORT_TENSORRT_MAX_PARTITION_ITERATIONS=10

# Override default minimum subgraph node size to 5
export ORT_TENSORRT_MIN_SUBGRAPH_SIZE=5

# Enable FP16 mode in TensorRT
export ORT_TENSORRT_FP16_ENABLE=1

# Enable INT8 mode in TensorRT
export ORT_TENSORRT_INT8_ENABLE=1

# Use native TensorRT calibration table
export ORT_TENSORRT_INT8_USE_NATIVE_CALIBRATION_TABLE=1

# Enable TensorRT engine caching
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
# Please Note warning above. This feature is experimental.
# Engine cache files must be invalidated if there are any changes to the model, ORT version, TensorRT version or if the underlying hardware changes. Engine files are not portable across devices.

# Specify TensorRT cache path
export ORT_TENSORRT_CACHE_PATH="/path/to/cache"

# Dump out subgraphs to run on TensorRT
export ORT_TENSORRT_DUMP_SUBGRAPHS=1

# Enable context memory sharing between TensorRT subgraphs. Default 0 = false, nonzero = true
export ORT_TENSORRT_CONTEXT_MEMORY_SHARING_ENABLE=1

TensorRT EP Caches

There are three major TRT EP caches:

  - Embedded engine model / EPContext model
  - TRT timing cache
  - TRT engine cache

Caches can help reduce session creation time from minutes to seconds.

The following numbers were measured when initializing a session with the TRT EP for the Stable Diffusion UNet model.

(figure: session initialization times with and without TRT EP caches)

How to set caches

The folder structure of the caches:

(figure: folder structure of the TRT EP caches)

With the following command, the embedded engine model (model_ctx.onnx) will be generated along with the engine cache in the same directory.

Note: The example does not specify trt_engine_cache_path because onnxruntime_perf_test requires a specific folder structure to run the inference. However, we still recommend specifying trt_engine_cache_path to better organize the caches.

$./onnxruntime_perf_test -e tensorrt -r 1 -i "trt_engine_cache_enable|true trt_dump_ep_context_model|true" /model_database/transformer_model/model.onnx

Once the inference is complete, the embedded engine model is saved to disk. Users can then run this model just like the original one, but with significantly faster session creation.

$./onnxruntime_perf_test -e tensorrt -r 1 /model_database/transformer_model/model_ctx.onnx
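The same caches can also be enabled directly from an application by setting the corresponding TRT EP provider options when creating the session. Below is a minimal Python sketch; the cache directory is a placeholder, and trt_dump_ep_context_model mirrors the flag used in the onnxruntime_perf_test command above:

import onnxruntime as ort

trt_ep_options = {
    "trt_engine_cache_enable": True,         # cache serialized TRT engines
    "trt_engine_cache_path": "./trt_cache",  # placeholder cache directory
    "trt_timing_cache_enable": True,         # reuse kernel timings across engine builds
    "trt_dump_ep_context_model": True,       # also emit the embedded engine (EPContext) model
}

sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_ep_options), "CUDAExecutionProvider"],
)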

More about Embedded engine model / EPContext model

Performance Tuning

For performance tuning, please see guidance on this page: ONNX Runtime Perf Tuning

When using onnxruntime_perf_test, use the flag -e tensorrt. See the sample below.

Shape Inference for TensorRT Subgraphs

If some operators in the model are not supported by TensorRT, ONNX Runtime partitions the graph and only sends supported subgraphs to the TensorRT execution provider. Because TensorRT requires that all inputs of the subgraphs have their shapes specified, ONNX Runtime will throw an error if there is no input shape info. In this case, please run shape inference for the entire model first by running the script here (see the sketch below and the Samples section).
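In addition to the command-line script shown in the Samples section, the same symbolic shape inference can be driven from Python. Below is a minimal sketch, assuming the onnxruntime.tools.symbolic_shape_infer module is present in your installed package; the model paths are placeholders:

import onnx
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

model = onnx.load("/path/to/onnx/model/model.onnx")
# auto_merge merges symbolic dimensions where possible instead of failing on conflicts
inferred = SymbolicShapeInference.infer_shapes(model, auto_merge=True)
onnx.save(inferred, "/path/to/onnx/model/new_model.onnx")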

TensorRT Plugins Support

ORT TRT can leverage the TRT plugins that ship with the TRT plugin library in the official release. To use a TRT plugin, users first need to create a custom node (a one-to-one mapping to the TRT plugin) with the registered plugin name and the trt.plugins domain in the ONNX model, so that ORT TRT can recognize this custom node and pass it, together with its subgraph, to TRT. Please see the following Python example to create such a custom node in an ONNX model:

Python API example:

import onnx
from onnx import TensorProto, helper

def generate_model(model_name):
    nodes = [
        helper.make_node(
            "DisentangledAttention_TRT", # The registered name is from https://github.com/NVIDIA/TensorRT/blob/main/plugin/disentangledAttentionPlugin/disentangledAttentionPlugin.cpp#L36
            ["input1", "input2", "input3"],
            ["output"],
            "DisentangledAttention_TRT",
            domain="trt.plugins", # The domain has to be "trt.plugins"
            factor=0.123,
            span=128,
        ),
    ]

    graph = helper.make_graph(
        nodes,
        "trt_plugin_custom_op",
        [  # input
            helper.make_tensor_value_info("input1", TensorProto.FLOAT, [12, 256, 256]),
            helper.make_tensor_value_info("input2", TensorProto.FLOAT, [12, 256, 256]),
            helper.make_tensor_value_info("input3", TensorProto.FLOAT, [12, 256, 256]),
        ],
        [  # output
            helper.make_tensor_value_info("output", TensorProto.FLOAT, [12, 256, 256]),
        ],
    )

    model = helper.make_model(graph)
    onnx.save(model, model_name)

Note: To use TRT plugins that are not part of the TRT plugin library in the official release, see the ORT TRT provider option trt_extra_plugin_lib_paths for more details (a sketch follows).
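If you go that route, the extra plugin library is passed as a regular TRT EP provider option. Below is a minimal sketch; the library path and model name are placeholders for your own plugin build and model:

import onnxruntime as ort

trt_ep_options = {
    # Placeholder path to a plugin library that registers the custom TRT plugin creators.
    "trt_extra_plugin_lib_paths": "/path/to/libcustom_trt_plugins.so",
}

sess = ort.InferenceSession(
    "model_with_trt_plugin_node.onnx",  # placeholder model containing the trt.plugins custom node
    providers=[("TensorrtExecutionProvider", trt_ep_options), "CUDAExecutionProvider"],
)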

Timing cache

Enabling trt_timing_cache_enable allows ORT TRT to use a TensorRT timing cache to accelerate engine build time on devices with the same compute capability. The cache works across models, as it simply stores kernel latencies for specific configurations and cubins (TRT 9.0+). These files are usually very small (a few KB or MB), which makes them easy to ship with an application to accelerate build time on the user end.

Note: A timing cache can be used across one GPU compute capability, similar to an engine. The preferred way is nonetheless to use one cache per GPU model, although practice shows that sharing across one compute capability works well in most cases.

The following example shows the build-time reduction with a timing cache:

Model                  no cache   with cache
efficientnet-lite4-11  34.6 s     7.7 s
yolov4                 108.62 s   9.4 s

Python example:

import numpy as np
import onnxruntime as ort

ort.set_default_logger_severity(0) # Turn on verbose mode for ORT TRT
sess_options = ort.SessionOptions()

trt_ep_options = {
    "trt_timing_cache_enable": True,
}

sess = ort.InferenceSession(
    "my_model.onnx",
    providers=[
        ("TensorrtExecutionProvider", trt_ep_options),
        "CUDAExecutionProvider",
    ],
)

# Once session initialization is done (assuming no dynamic-shape inputs; otherwise wait until an inference run is done),
# the timing cache is saved in the 'trt_engine_cache_path' directory, e.g. TensorrtExecutionProvider_cache_cc75.timing.
# Note that the file name contains the compute capability.

sess.run(
    None,
    {"input_ids": np.zeros((1, 77), dtype=np.int32)}
)

Explicit shape range for dynamic shape input

ORT TRT lets you explicitly specify min/max/opt shapes for each dynamic shape input through three provider options: trt_profile_min_shapes, trt_profile_max_shapes and trt_profile_opt_shapes. If these options are not specified and the model has dynamic shape inputs, ORT TRT determines the min/max/opt shapes for each dynamic shape input based on the incoming input tensors. The min/max/opt shapes are required for the TRT optimization profile. (An optimization profile describes a range of dimensions for each TRT network input and the dimensions the auto-tuner uses for optimization. When using runtime dimensions, you must create at least one optimization profile at build time.)

To use an engine cache built with optimization profiles specified by explicit shape ranges, the user still needs to provide those three provider options as well as the engine cache enable flag. ORT TRT will first compare the shape ranges of those three provider options with the shape ranges saved in the .profile file, and rebuild the engine if the shape ranges don't match.

Python example:

import numpy as np
import onnxruntime as ort

ort.set_default_logger_severity(0) # Turn on verbose mode for ORT TRT
sess_options = ort.SessionOptions()

trt_ep_options = {
    "trt_fp16_enable": True,
    "trt_engine_cache_enable": True,
    "trt_profile_min_shapes": "sample:2x4x64x64,encoder_hidden_states:2x77x768",
    "trt_profile_max_shapes": "sample:32x4x64x64,encoder_hidden_states:32x77x768",
    "trt_profile_opt_shapes": "sample:2x4x64x64,encoder_hidden_states:2x77x768",
}

sess = ort.InferenceSession(
    "my_model.onnx",
    providers=[
        ("TensorrtExecutionProvider", trt_ep_options),
        "CUDAExecutionProvider",
    ],
)

batch_size = 1
unet_dim = 4
max_text_len = 77
embed_dim = 768
latent_height = 64
latent_width = 64

args = {
    "sample": np.zeros(
        (2 * batch_size, unet_dim, latent_height, latent_width), dtype=np.float32
    ),
    "timestep": np.ones((1,), dtype=np.float32),
    "encoder_hidden_states": np.zeros(
        (2 * batch_size, max_text_len, embed_dim),
        dtype=np.float32,
    ),
}
sess.run(None, args)
# you can find engine cache and profile cache are saved in the 'trt_engine_cache_path' directory, e.g.
# TensorrtExecutionProvider_TRTKernel_graph_torch_jit_1843998305741310361_0_0_fp16.engine and TensorrtExecutionProvider_TRTKernel_graph_torch_jit_1843998305741310361_0_0_fp16.profile.

Please note that this explicit shape range feature has a constraint: all dynamic shape inputs must be provided with corresponding min/max/opt shapes.

Data-dependent shape (DDS) ops

The DDS operations — NonMaxSuppression, NonZero, and RoiAlign — have output shapes that are only determined at runtime.

To ensure DDS ops are executed by TRT-EP/TRT instead of CUDA EP or CPU EP, please check the following:

Samples

This example shows how to run the Faster R-CNN model with the TensorRT execution provider.

  1. Download the Faster R-CNN onnx model from the ONNX model zoo here.

  2. Infer shapes in the model by running the shape inference script:

 python symbolic_shape_infer.py --input /path/to/onnx/model/model.onnx --output /path/to/onnx/model/new_model.onnx --auto_merge

  3. To test the model with sample input and verify the output, run onnx_test_runner under the ONNX Runtime build directory.

    Models and the test_data_set_ folder need to be stored under the same path. onnx_test_runner will test all models under this path.

 ./onnx_test_runner -e tensorrt /path/to/onnx/model/

  4. To test model performance, run onnxruntime_perf_test on your shape-inferred Faster R-CNN model.

    Download sample test data with the model from the model zoo, and put the test_data_set folder next to your inferred model.

 # e.g.
 # -r: set test repeat count
 # -e: set execution provider
 # -i: set execution provider options
 ./onnxruntime_perf_test -r 1 -e tensorrt -i "trt_fp16_enable|true" /path/to/onnx/your_inferred_model.onnx

Please see this Notebook for an example of running a model on GPU using ONNX Runtime through Azure Machine Learning Services.

Known Issues