tf.compat.v1.fixed_size_partitioner  |  TensorFlow v2.16.1

tf.compat.v1.fixed_size_partitioner

Partitioner to specify a fixed number of shards along a given axis.

tf.compat.v1.fixed_size_partitioner(
    num_shards, axis=0
)

Migrate to TF2

This API is deprecated in TF2. In TF2, a partitioner is no longer part of the variable declaration via tf.Variable; Parameter Server Training handles partitioning of variables. The TF2 partitioner class corresponding to fixed_size_partitioner is tf.distribute.experimental.partitioners.FixedShardsPartitioner.

Check the migration guide for the differences in the treatment of variables and losses between TF1 and TF2.

Before:

  x = tf.compat.v1.get_variable(
    "x", shape=(2,), partitioner=tf.compat.v1.fixed_size_partitioner(2)
  )

After:

  partitioner = (
      tf.distribute.experimental.partitioners.FixedShardsPartitioner(
          num_shards=2)
  )
  strategy = tf.distribute.experimental.ParameterServerStrategy(
                 cluster_resolver=cluster_resolver,
                 variable_partitioner=partitioner)

  with strategy.scope():
    x = tf.Variable([1.0, 2.0])
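
As a minimal standalone sketch (independent of any cluster setup), the FixedShardsPartitioner can also be called directly with a shape and dtype to see how many partitions it produces per axis; for num_shards=2 the first axis is split into two shards:

  import tensorflow as tf

  # Request at most 2 shards along axis 0.
  partitioner = tf.distribute.experimental.partitioners.FixedShardsPartitioner(
      num_shards=2)

  # Calling the partitioner returns the number of partitions per axis.
  num_partitions = partitioner(tf.TensorShape([10, 3]), tf.float32)
  print(num_partitions)  # expected: [2, 1]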

Description

Args
num_shards: int, number of shards to partition the variable into.
axis: int, axis to partition on.
Returns
A partition function usable as the partitioner argument to variable_scope and get_variable.
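
For example (a minimal sketch assuming the TF1 compat API is available), the returned partition function can be invoked directly with a shape and dtype, the same way get_variable invokes it, to inspect the resulting partition counts per axis:

  import tensorflow as tf

  # Request 2 shards along axis 0.
  partition_fn = tf.compat.v1.fixed_size_partitioner(num_shards=2, axis=0)

  # get_variable calls the partition function with keyword arguments.
  print(partition_fn(shape=tf.TensorShape([10, 3]), dtype=tf.float32))
  # expected: [2, 1]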
