nvidia.dali.plugin.pytorch.fn.torch_python_function — NVIDIA DALI
nvidia.dali.plugin.pytorch.fn.torch_python_function(*input, batch_processing=True, bytes_per_sample_hint=[0], function, num_outputs=1, output_layouts=None, preserve=False, device=None, name=None)
Executes a function that operates on Torch tensors.
This operator is analogous to nvidia.dali.fn.python_function(), but the tensor data is handled as PyTorch tensors.
This operator allows sequence inputs and supports volumetric data.
This operator will not be optimized out of the graph.
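A minimal usage sketch is shown below. The random source, the tensor shape, and the doubling function are illustrative assumptions; as with the other python_function-family operators, the pipeline is created with exec_async=False and exec_pipelined=False so that Python code can run inside it.

```python
import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def
import nvidia.dali.plugin.pytorch.fn as torch_fn


def double_batch(tensors):
    # With the default batch_processing=True, the input arrives as a list of
    # torch.Tensor objects (one per sample); return a list of the same length.
    return [t * 2.0 for t in tensors]


@pipeline_def(batch_size=4, num_threads=2, device_id=None,  # CPU-only pipeline
              exec_async=False, exec_pipelined=False)
def example_pipeline():
    data = fn.random.uniform(range=[0.0, 1.0], shape=[3, 16, 16])
    return torch_fn.torch_python_function(
        data, function=double_batch, num_outputs=1)


pipe = example_pipeline()
pipe.build()
(out,) = pipe.run()
```

The device is inferred from the inputs unless set explicitly, so feeding GPU data would hand CUDA tensors to the callable instead.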
Supported backends
- ‘cpu’
- ‘gpu’
Parameters:
__input_[0..255] (TensorList, optional) – This function accepts up to 256 optional positional inputs.
Keyword Arguments:
- batch_processing (bool, optional, default = True) – Determines whether the function receives an entire batch as its input; a per-sample counterpart is sketched after this parameter list.
- bytes_per_sample_hint (int or list of int, optional, default = [0]) –
Output size hint, in bytes per sample.
If specified, the operator’s outputs residing in GPU or page-locked host memory will be preallocated to accommodate a batch of samples of this size.
- function (object) –
A callable object that defines the function of the operator.
Warning
The function must not hold a reference to the pipeline in which it is used. If it does, a circular reference to the pipeline will form and the pipeline will never be freed.
- num_outputs (int, optional, default = 1) – Number of outputs.
- output_layouts (layout str or list of layout str, optional) –
Tensor data layouts for the outputs.
This argument can be a list that contains a distinct layout for each output. If the list has fewer than num_outputs elements, only the first outputs have the layout set and the rest of the outputs have no layout assigned.
- preserve (bool, optional, default = False) – Prevents the operator from being removed from the graph even if its outputs are not used.
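The hedged sketch below, referenced from the batch_processing entry above, shows the per-sample calling convention together with num_outputs and output_layouts; the channel-splitting function and the "CHW" layout strings are illustrative assumptions.

```python
import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def
import nvidia.dali.plugin.pytorch.fn as torch_fn


def split_channels(sample):
    # With batch_processing=False the callable is invoked once per sample and
    # receives a single torch.Tensor; it returns two tensors (num_outputs=2).
    # .contiguous() ensures a plain memory layout for the hand-off back to DALI.
    return sample[:1].contiguous(), sample[1:].contiguous()


@pipeline_def(batch_size=4, num_threads=2, device_id=None,
              exec_async=False, exec_pipelined=False)
def split_pipeline():
    data = fn.random.uniform(range=[0.0, 1.0], shape=[3, 16, 16])
    first, rest = torch_fn.torch_python_function(
        data,
        function=split_channels,
        batch_processing=False,
        num_outputs=2,
        output_layouts=["CHW", "CHW"])
    return first, rest


pipe = split_pipeline()
pipe.build()
first_channel, remaining_channels = pipe.run()
```

Dropping the second element from output_layouts would leave the second output without a layout, per the rule described above.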
See also
nvidia.dali.fn.python_function()