torch.nn.parallel.data_parallel — PyTorch 2.7 documentation
torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)
Evaluate module(input) in parallel across the GPUs given in device_ids.
This is the functional version of the DataParallel module.
Parameters
- module (Module) – the module to evaluate in parallel
- inputs (Tensor) – inputs to the module
- device_ids (list of int or torch.device) – GPU ids on which to replicate module
- output_device (int or torch.device) – GPU location of the output. Use -1 to indicate the CPU. (default: device_ids[0])
Returns
a Tensor containing the result of module(input) located on output_device
Return type
Tensor