tf.distribute.MirroredStrategy  |  TensorFlow v2.16.1

Synchronous training across multiple replicas on one machine.

Inherits From: Strategy

tf.distribute.MirroredStrategy(
    devices=None, cross_device_ops=None
)

Used in the guide and tutorials:

- Random number generation
- Distributed training with TensorFlow
- Estimators
- Use a GPU
- Migrate single-worker multiple-GPU training
- Distributed Input
- Save and load a model using a distribution strategy
- Custom training with tf.distribute.Strategy
- Distributed training with Keras
- TFP Release Notes notebook (0.13.0)

This strategy is typically used for training on one machine with multiple GPUs. For TPUs, use tf.distribute.TPUStrategy. To use MirroredStrategy with multiple workers, please refer to tf.distribute.experimental.MultiWorkerMirroredStrategy.

For example, a variable created under a MirroredStrategy is a MirroredVariable. If no devices are specified in the constructor argument of the strategy then it will use all the available GPUs. If no GPUs are found, it will use the available CPUs. Note that TensorFlow treats all CPUs on a machine as a single device, and uses threads internally for parallelism.

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  x = tf.Variable(1.)
x
MirroredVariable:{
  0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>,
  1: <tf.Variable ... shape=() dtype=float32, numpy=1.0>
}

While using distribution strategies, all the variable creation should be done within the strategy's scope. This will replicate the variables across all the replicas and keep them in sync using an all-reduce algorithm.
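For instance, an update made to such a variable in the cross-replica context (outside Strategy.run) is applied to every replica's copy; during training, gradient updates applied by an optimizer inside Strategy.run are likewise aggregated with an all-reduce. A minimal sketch, assuming two visible GPUs:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  v = tf.Variable(1.)
# Assigning in cross-replica context updates both replica copies.
v.assign_add(1.)
strategy.experimental_local_results(v)
# (<tf.Variable ... numpy=2.0>, <tf.Variable ... numpy=2.0>)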

Variables created under a MirroredStrategy scope inside a tf.function are still MirroredVariables.

x = []
@tf.function  # Wrap the function with tf.function.
def create_variable():
  if not x:
    x.append(tf.Variable(1.))
  return x[0]
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  _ = create_variable()
print(x[0])
MirroredVariable:{
  0: <tf.Variable ... shape=() dtype=float32, numpy=1.0>,
  1: <tf.Variable ... shape=() dtype=float32, numpy=1.0>
}

experimental_distribute_dataset can be used to distribute the dataset across the replicas when writing your own training loop. If you are using the .fit and .compile methods available in tf.keras, then tf.keras will handle the distribution for you.

For example:

my_strategy = tf.distribute.MirroredStrategy()
with my_strategy.scope():
  @tf.function
  def distribute_train_epoch(dataset):
    def replica_fn(inputs):
      # Process `inputs` and compute a per-replica `result` (e.g. a loss) here.
      return result

    total_result = 0
    for x in dataset:
      per_replica_result = my_strategy.run(replica_fn, args=(x,))
      total_result += my_strategy.reduce(tf.distribute.ReduceOp.SUM,
                                         per_replica_result, axis=None)
    return total_result

  dist_dataset = my_strategy.experimental_distribute_dataset(dataset)
  for _ in range(EPOCHS):
    train_result = distribute_train_epoch(dist_dataset)
Args
devices: A list of device strings such as ['/gpu:0', '/gpu:1']. If None, all available GPUs are used. If no GPUs are found, CPU is used.
cross_device_ops: Optional, a descendant of CrossDeviceOps. If this is not set, NcclAllReduce() will be used by default. One would customize this if NCCL isn't available or if a special implementation that exploits the particular hardware is available.
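For illustration, a sketch of overriding the default all-reduce implementation (useful, for example, on a machine where NCCL is unavailable); tf.distribute.HierarchicalCopyAllReduce and tf.distribute.ReductionToOneDevice are built-in alternatives:

strategy = tf.distribute.MirroredStrategy(
    devices=["GPU:0", "GPU:1"],
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())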
Attributes
cluster_resolver: Returns the cluster resolver associated with this strategy.

In general, when using a multi-worker tf.distribute strategy such as tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy(), there is a tf.distribute.cluster_resolver.ClusterResolver associated with the strategy used, and such an instance is returned by this property.

Strategies that intend to have an associated tf.distribute.cluster_resolver.ClusterResolver must set the relevant attribute, or override this property; otherwise, None is returned by default. Those strategies should also provide information regarding what is returned by this property.

Single-worker strategies usually do not have a tf.distribute.cluster_resolver.ClusterResolver, and in those cases this property will return None.

The tf.distribute.cluster_resolver.ClusterResolver may be useful when the user needs to access information such as the cluster spec, task type or task id. For example,

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"],
        'ps': ["localhost:34567"]
    },
    'task': {'type': 'worker', 'index': 0}
})

# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.

For more information, please see tf.distribute.cluster_resolver.ClusterResolver's API docstring.

extended: tf.distribute.StrategyExtended with additional methods.

num_replicas_in_sync: Returns number of replicas over which gradients are aggregated.
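A common use of num_replicas_in_sync is deriving the global batch size from a per-replica batch size, as in this sketch (assuming two replicas):

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync  # 128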

Methods

distribute_datasets_from_function

View source

distribute_datasets_from_function(
    dataset_fn, options=None
)

Distributes tf.data.Dataset instances created by calls to dataset_fn.

The argument dataset_fn that users pass in is an input function that has a tf.distribute.InputContext argument and returns a tf.data.Dataset instance. It is expected that the returned dataset from dataset_fn is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function does not batch or shard the tf.data.Dataset instance returned from the input function. dataset_fn will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset every step).

This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset does batching and sharding for you.) For example, where experimental_distribute_dataset is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.

The dataset_fn should take a tf.distribute.InputContext instance where information about batching and input replication can be accessed.

You can use the element_spec property of the tf.distribute.DistributedDataset returned by this API to query the tf.TypeSpec of the elements returned by the iterator. This can be used to set the input_signature property of a tf.function. Follow tf.distribute.DistributedDataset.element_spec to see an example.

For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read the section on partial batches in that tutorial.
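A minimal sketch of such an input function, assuming two replicas on a single worker; it batches by the per-replica batch size obtained from the tf.distribute.InputContext and shards by input pipeline:

global_batch_size = 16
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

def dataset_fn(input_context):
  per_replica_batch_size = input_context.get_per_replica_batch_size(
      global_batch_size)
  d = tf.data.Dataset.range(64)
  # Shard manually by input pipeline; a single worker has one input pipeline.
  d = d.shard(input_context.num_input_pipelines,
              input_context.input_pipeline_id)
  return d.batch(per_replica_batch_size).prefetch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)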

Args
dataset_fn: A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset.
options: tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns
A tf.distribute.DistributedDataset.

experimental_distribute_dataset

View source

experimental_distribute_dataset(
    dataset, options=None
)

Creates tf.distribute.DistributedDataset from tf.data.Dataset.

The returned tf.distribute.DistributedDataset can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset. You can only create an iterator or examine the tf.TypeSpec of the data generated by it. See API docs of tf.distribute.DistributedDataset to learn more.

The following is an example:

global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)
@tf.function
def replica_fn(input):
  return input*2
result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  result.append(strategy.run(replica_fn, args=(x,)))
print(result)
[PerReplica:{
  0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
  1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
}, PerReplica:{
  0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
  1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
}]

Three key actions happening under the hood of this method are batching, sharding, and prefetching.

In the code snippet above, dataset is batched by global_batch_size, and calling experimental_distribute_dataset on it rebatches dataset to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x is a tf.distribute.DistributedValues containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run will take care of feeding the right per-replica data in x to the right replica_fn executed on each replica.

Sharding contains autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
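For example, a sketch of overriding the autoshard policy through tf.data.Options before distributing a dataset:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA)
dataset = tf.data.Dataset.range(16).batch(4).with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)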

By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset instance. The buffer_size argument of that prefetch transformation is equal to the number of replicas in sync.

If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function instead, which does not do any automatic batching or sharding for you.

For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read the section on partial batches in that tutorial.

Args
dataset: tf.data.Dataset that will be sharded across all replicas using the rules stated above.
options: tf.distribute.InputOptions used to control options on how this dataset is distributed.
Returns
A tf.distribute.DistributedDataset.

experimental_distribute_values_from_function

View source

experimental_distribute_values_from_function(
    value_fn
)

Generates tf.distribute.DistributedValues from value_fn.

This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.

Args
value_fn: The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor.
Returns
A tf.distribute.DistributedValues containing a value for each replica.

Example usage:

1. Return constant value per replica:

   strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
   def value_fn(ctx):
     return tf.constant(1.)
   distributed_values = (
       strategy.experimental_distribute_values_from_function(
           value_fn))
   local_result = strategy.experimental_local_results(
       distributed_values)
   local_result
   (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
    <tf.Tensor: shape=(), dtype=float32, numpy=1.0>)

2. Distribute values in array based on replica_id:

   strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
   array_value = np.array([3., 2., 1.])
   def value_fn(ctx):
     return array_value[ctx.replica_id_in_sync_group]
   distributed_values = (
       strategy.experimental_distribute_values_from_function(
           value_fn))
   local_result = strategy.experimental_local_results(
       distributed_values)
   local_result
   (3.0, 2.0)

3. Specify values using num_replicas_in_sync:

   strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
   def value_fn(ctx):
     return ctx.num_replicas_in_sync
   distributed_values = (
       strategy.experimental_distribute_values_from_function(
           value_fn))
   local_result = strategy.experimental_local_results(
       distributed_values)
   local_result
   (2, 2)

4. Place values on devices and distribute:

   strategy = tf.distribute.TPUStrategy()
   worker_devices = strategy.extended.worker_devices
   multiple_values = []
   for i in range(strategy.num_replicas_in_sync):
     with tf.device(worker_devices[i]):
       multiple_values.append(tf.constant(1.0))

   def value_fn(ctx):
     return multiple_values[ctx.replica_id_in_sync_group]

   distributed_values = strategy.experimental_distribute_values_from_function(
       value_fn)

experimental_local_results

View source

experimental_local_results(
    value
)

Returns the list of all local per-replica values contained in value.

Args
value: A value returned by experimental_run(), run(), or a variable created in scope.
Returns
A tuple of values contained in value where ith element corresponds to ith replica. If value represents a single value, this returns (value,).
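For example (a sketch assuming two GPUs), the per-replica value returned by run can be unpacked into a plain Python tuple:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  return tf.distribute.get_replica_context().replica_id_in_sync_group
per_replica_result = strategy.run(step_fn)
strategy.experimental_local_results(per_replica_result)
# (<tf.Tensor: ... numpy=0>, <tf.Tensor: ... numpy=1>)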

gather

View source

gather(
    value, axis
)

Gather value across replicas along axis to the current device.

Given a tf.distribute.DistributedValues or tf.Tensor-like object value, this API gathers and concatenates value across replicas along the axis-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client tf.distribute.MultiWorkerMirroredStrategy, this is the CPU of each worker.

This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
  return strategy.gather(distributed_values, axis=0)
run()
<tf.Tensor: shape=(4, 1), dtype=int32, numpy=
array([[1],
       [2],
       [1],
       [2]], dtype=int32)>

Consider the following example for more combinations:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1,2,3))
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
  return strategy.gather(distributed_values, axis=axis)
axis=0
run(axis)
<tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
array([[[0, 1, 2],
        [3, 4, 5]],
       [[0, 1, 2],
        [3, 4, 5]],
       [[0, 1, 2],
        [3, 4, 5]],
       [[0, 1, 2],
        [3, 4, 5]]], dtype=int32)>
axis=1
run(axis)
<tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
array([[[0, 1, 2],
        [3, 4, 5],
        [0, 1, 2],
        [3, 4, 5],
        [0, 1, 2],
        [3, 4, 5],
        [0, 1, 2],
        [3, 4, 5]]], dtype=int32)>
axis=2
run(axis)
<tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
        [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>

Args
value: a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices.
axis: 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)).
Returns
A Tensor that's the concatenation of value across replicas along the axis dimension.

reduce

View source

reduce(
    reduce_op, value, axis
)

Reduce value across replicas and return result on current device.

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
<tf.Tensor: shape=(), dtype=int32, numpy=1>

To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:

strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1

total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0

This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
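As a sketch of that pattern (assuming two replicas), a per-replica scalar loss can be averaged on the host with tf.distribute.ReduceOp.MEAN:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  # Stand-in for a per-replica scalar loss.
  replica_id = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.cast(replica_id, tf.float32) + 1.0
per_replica_loss = strategy.run(step_fn)
mean_loss = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_loss,
                            axis=None)
# mean of 1.0 and 2.0 -> 1.5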

There are a number of different tf.distribute APIs for reducing values across replicas:

- tf.distribute.ReplicaContext.all_reduce: this differs from Strategy.reduce in that it operates in the replica context and does not copy the results to the host device. all_reduce is typically used for reductions inside the training step, such as gradients.
- tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: these are more advanced versions of Strategy.reduce that allow customizing the destination of the result; they are also called in cross-replica context.

What should axis be?

Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly.

For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. With axis=None, reduce will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).

strategy.reduce("sum", per_replica_result, axis=None)

Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.

strategy.reduce("sum", per_replica_result, axis=0)

If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with first computing reduce_mean on each replica to get a scalar and then using this function to average those means, which would weight some values by 1/8 and others by 1/4.
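A sketch of that situation, assuming two replicas and a single (partial) global batch of 6 elements:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
dataset = tf.data.Dataset.range(6).map(
    lambda x: tf.cast(x, tf.float32)).batch(8)    # the only batch has 6 elements
dist_dataset = strategy.experimental_distribute_dataset(dataset)
per_replica = next(iter(dist_dataset))            # replica 0: [0..3], replica 1: [4, 5]
strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0)
# 2.5, the mean over all 6 elements (denominator 6, not 8)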

Args
reduce_op: a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN".
value: a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run, to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy.
axis: specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension).
Returns
A Tensor.

run

View source

run(
    fn, args=(), kwargs=None, options=None
)

Invokes fn on each replica, with the given arguments.

This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn on each replica. If args or kwargs have tf.distribute.DistributedValues, such as those produced by a tf.distribute.DistributedDataset from tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function, when fn is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues that corresponds to that replica.

fn is invoked under a replica context. fn may call tf.distribute.get_replica_context() to access members such as all_reduce. Please see the module-level docstring of tf.distribute for the concept of replica context.

All arguments in args or kwargs can be a nested structure of tensors, e.g. a list of tensors, in which case args and kwargs will be passed to the fn invoked on each replica. Or args or kwargs can be tf.distribute.DistributedValues containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor, in which case each fn call will get the component of a tf.distribute.DistributedValues corresponding to its replica. Note that arbitrary Python values that are not of the types above are not supported.

Example usage:

1. Constant tensor input.

   strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
   tensor_input = tf.constant(3.0)
   @tf.function
   def replica_fn(input):
     return input*2.0
   result = strategy.run(replica_fn, args=(tensor_input,))
   result
   PerReplica:{
     0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
     1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
   }

2. DistributedValues input.

   strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
   @tf.function
   def run():
     def value_fn(value_context):
       return value_context.num_replicas_in_sync
     distributed_values = (
         strategy.experimental_distribute_values_from_function(
             value_fn))
     def replica_fn2(input):
       return input*2
     return strategy.run(replica_fn2, args=(distributed_values,))
   result = run()
   result
   <tf.Tensor: shape=(), dtype=int32, numpy=4>

3. Use tf.distribute.ReplicaContext to allreduce values.

   strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
   @tf.function
   def run():
     def value_fn(value_context):
       return tf.constant(value_context.replica_id_in_sync_group)
     distributed_values = (
         strategy.experimental_distribute_values_from_function(
             value_fn))
     def replica_fn(input):
       return tf.distribute.get_replica_context().all_reduce(
           "sum", input)
     return strategy.run(replica_fn, args=(distributed_values,))
   result = run()
   result
   PerReplica:{
     0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
     1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
   }

Args
fn: The function to run on each replica.
args: Optional positional arguments to fn. Its element can be a tensor, a nested structure of tensors or a tf.distribute.DistributedValues.
kwargs: Optional keyword arguments to fn. Its element can be a tensor, a nested structure of tensors or a tf.distribute.DistributedValues.
options: An optional instance of tf.distribute.RunOptions specifying the options to run fn.
Returns
Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn. Each element in the structure can either be tf.distribute.DistributedValues, Tensor objects, or Tensors (for example, if running on a single replica).

scope

View source

scope()

Context manager to make the strategy current and distribute variables.

This method returns a context manager, and is used as follows:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
MirroredVariable:{
  0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
  1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
}
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>

What happens when Strategy.scope is entered?

Entering the scope installs this strategy as the "current" strategy for the enclosing code and intercepts variable creation, so variables (and variable-creating objects such as Keras layers, models, optimizers, and metrics) built inside the scope become distributed variables such as MirroredVariable.

What should be in scope and what should be outside?

Anything that creates variables that should be distributed must be created inside the scope. Beyond that, there are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
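For instance, a sketch of the typical split when using tf.keras: objects that create variables (the model and its optimizer and metrics, configured via compile) are built under the scope, while the input pipeline and the fit call can live outside it:

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  model.compile(optimizer="sgd", loss="mse")  # optimizer/metric variables are mirrored

dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([32, 4]), tf.random.normal([32, 1]))).batch(8)
model.fit(dataset, epochs=1)  # tf.keras handles the distribution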

Returns
A context manager.