mmengine.dist.broadcast_object_list — mmengine 0.10.7 documentation

mmengine.dist.broadcast_object_list(data, src=0, group=None)[source]

Broadcasts the picklable objects in data to the whole group. Similar to broadcast(), but Python objects can be passed in. Note that every object in data must be picklable in order to be broadcast.

Note

Calling broadcast_object_list in a non-distributed environment does nothing.

Parameters:

- data (List[Any]) – List of input objects to broadcast. Each object must be picklable. Only the objects on the src rank will be broadcast, but each rank must provide a list of equal size.
- src (int) – Source rank from which to broadcast data. Defaults to 0.
- group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Return type:

None

Note

For NCCL-based process groups, the internal tensor representations of objects must be moved to the GPU device before communication starts. In this case, the device used is given by torch.cuda.current_device(), and it is the user’s responsibility to ensure that this is set correctly, via torch.cuda.set_device(), so that each rank has an individual GPU.
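A minimal sketch of the per-rank device setup this note describes, assuming a torchrun-style launcher that exports LOCAL_RANK; the helper name setup_device_for_rank is illustrative, not part of mmengine:

>>> import os
>>> import torch
>>>
>>> def setup_device_for_rank():
...     # Bind this process to its own GPU before any NCCL collective runs,
...     # so torch.cuda.current_device() points at the right device when
...     # broadcast_object_list moves tensors during communication.
...     local_rank = int(os.environ.get('LOCAL_RANK', '0'))  # set by torchrun
...     torch.cuda.set_device(local_rank)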

Examples

>>> import torch
>>> import mmengine.dist as dist

non-distributed environment

>>> data = ['foo', 12, {1: 2}]
>>> dist.broadcast_object_list(data)
>>> data
['foo', 12, {1: 2}]

distributed environment

We have one process group with 2 ranks.

>>> if dist.get_rank() == 0:
...     data = ['foo', 12, {1: 2}]  # any picklable objects
... else:
...     data = [None, None, None]  # placeholders of the same length
>>> dist.broadcast_object_list(data)
>>> data
['foo', 12, {1: 2}]  # Rank 0
['foo', 12, {1: 2}]  # Rank 1
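For context, a minimal, self-contained sketch of how the distributed example above might be launched; it assumes a multi-GPU machine, mmengine’s init_dist helper, and a torchrun-style launcher (the file name broadcast_demo.py is illustrative):

>>> # broadcast_demo.py -- run with, e.g.:
>>> #   torchrun --nproc_per_node=2 broadcast_demo.py
>>> import mmengine.dist as dist
>>> from mmengine.dist import init_dist
>>>
>>> if __name__ == '__main__':
...     init_dist(launcher='pytorch')  # initialize the default process group
...     if dist.get_rank() == 0:
...         data = ['foo', 12, {1: 2}]  # objects to broadcast from rank 0
...     else:
...         data = [None, None, None]  # placeholders of the same length
...     dist.broadcast_object_list(data)
...     print(f'rank {dist.get_rank()}: {data}')  # same list on every rank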