🐛 [Bug] Compilation Error on GPT-2 · Issue #1455 · pytorch/TensorRT

Bug Description

When converting the GPT-2 network (https://huggingface.co/gpt2) from TorchScript to Torch-TRT, the following error is encountered:

compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
RuntimeError: [Error thrown at core/partitioning/shape_analysis.cpp:167] Unsupported input data type unsigned char

To Reproduce

Steps to reproduce the behavior:

  1. Run torch_tensorrt.compile with the GPT-2 model as input, using fp32 precision.
  2. Choose a fixed input size of [1, 128] and enable truncate_long_and_double with a 12 GB workspace.
  3. Pass model keyword args to disable the attention and hidden-state outputs (a reproduction sketch follows below).
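For reference, a minimal reproduction sketch under those settings. The exact call is not listed in the report, so the model-loading flags and input dtype here are assumptions: torchscript=True makes the model return tuples so it can be traced, and the config flags disable the attention and hidden-state outputs.

import torch
import torch_tensorrt
from transformers import GPT2LMHeadModel

# Assumed model setup; the config flags disable the attention and
# hidden-state outputs (step 3).
model = GPT2LMHeadModel.from_pretrained(
    "gpt2",
    torchscript=True,          # return tuples so torch.jit.trace works
    use_cache=False,
    output_attentions=False,
    output_hidden_states=False,
).eval().cuda()

input_ids = torch.randint(0, 50257, (1, 128), dtype=torch.int32).cuda()
traced = torch.jit.trace(model, input_ids)

trt_model = torch_tensorrt.compile(
    traced,
    inputs=[torch_tensorrt.Input(shape=[1, 128], dtype=torch.int32)],
    enabled_precisions={torch.float32},  # fp32 precision (step 1)
    truncate_long_and_double=True,       # step 2
    workspace_size=12 << 30,             # 12 GB workspace (step 2)
)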

Expected behavior

The model should compile successfully to Torch-TRT. In particular, internal (non-user-provided) type-casting issues should not cause errors.

Environment

Additional context

The problematic data in GPT-2 seems to be this bias term, instantiated in the attention module, which has type uint8. In both the TorchScript IR and the model code (example 1, example 2), it seems that this bias term is generally cast to a bool. The error is thrown in this code segment:

c10::optional<nvinfer1::DataType> dtype = util::optTypeMetaToTRTDataType(cur_ivalue.toTensor().dtype());
if (dtype == c10::nullopt) {
  TORCHTRT_THROW_ERROR("Unsupported input data type " << cur_ivalue.toTensor().dtype());
}

The conversion of the uint8 type to a TRT data type fails; however, simply patching this conversion does not fix the issue, as an out-of-bounds error follows later.
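For reference, a short sketch for confirming the dtype of that buffer, assuming a transformers release that still instantiates it as uint8 (newer releases register it as bool):

from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
# The causal-mask buffer registered in the first attention block.
bias = model.h[0].attn.bias
print(bias.dtype)   # torch.uint8 in the affected transformers versions
print(bias.shape)   # e.g. [1, 1, 1024, 1024]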

Temporary Solution

A temporary fix to this problem is to add the following to the compilation arguments in torch_tensorrt.compile:

torch_tensorrt.compile(..., torch_executed_ops=["aten::where"], ...)

This workaround succeeds because it happens to exclude the code that uses and processes the uint8 tensor from TensorRT conversion; however, it is only a temporary fix and does not resolve the underlying issue.
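Applied to the reproduction sketch above (same assumptions as that sketch), the full call might look like this:

trt_model = torch_tensorrt.compile(
    traced,
    inputs=[torch_tensorrt.Input(shape=[1, 128], dtype=torch.int32)],
    enabled_precisions={torch.float32},
    truncate_long_and_double=True,
    workspace_size=12 << 30,
    # Run aten::where in Torch instead of TensorRT, keeping the uint8 mask
    # tensor out of the TRT partitions.
    torch_executed_ops=["aten::where"],
)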

Steps to a Solution