Module: tf.distribute.experimental | TensorFlow v2.16.1
Public API for the tf._api.v2.distribute.experimental namespace.
Modules
coordinator module: Public API for the tf._api.v2.distribute.experimental.coordinator namespace.
partitioners module: Public API for the tf._api.v2.distribute.experimental.partitioners namespace.
rpc module: Public API for the tf._api.v2.distribute.experimental.rpc namespace.
Classes
class CentralStorageStrategy: A one-machine strategy that puts all variables on a single device (see the first example after this list).
class CollectiveCommunication: Cross-device communication implementation (deprecated alias of CommunicationImplementation).
class CollectiveHints: Hints for collective operations like AllReduce.
class CommunicationImplementation: Cross-device communication implementation.
class CommunicationOptions: Options for cross-device communications like all-reduce.
class MultiWorkerMirroredStrategy: A distribution strategy for synchronous training on multiple workers (see the second example after this list).
class ParameterServerStrategy: A multi-worker tf.distribute strategy with parameter servers (see the third example after this list).
class PreemptionCheckpointHandler: Preemption and error handler for synchronous training.
class PreemptionWatcher: Watches for a preemption signal and stores it.
class TPUStrategy: Synchronous training on TPUs and TPU Pods.
class TerminationConfig: Customization of PreemptionCheckpointHandler for various platforms.
class ValueContext: A class wrapping information needed by a distribute function.
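The sketches below illustrate a few of these classes. First, a minimal single-machine example of CentralStorageStrategy: variables live on one device while compute is replicated across any local accelerators. The result noted in the final comment assumes a single replica, i.e. a machine with at most one accelerator.

```python
# A minimal single-machine sketch of CentralStorageStrategy.
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

# Variables created under the scope are placed on the parameter device.
with strategy.scope():
    v = tf.Variable(1.0)

@tf.function
def add_one():
    return v.assign_add(1.0)

# Run the function on each compute replica.
strategy.run(add_one)
print(v.numpy())  # 2.0, assuming a single replica.
```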
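Second, a hedged sketch of multi-worker synchronous training configured with CommunicationOptions and CommunicationImplementation. It assumes each worker process already has a valid TF_CONFIG environment variable describing the cluster, and it uses the stable tf.distribute.MultiWorkerMirroredStrategy constructor, which accepts these experimental option objects.

```python
# A minimal sketch, assuming TF_CONFIG is already set on every worker
# process; this is not a complete multi-worker launcher.
import tensorflow as tf

# Pick the collective implementation explicitly (NCCL suits GPU workers).
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)

strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)

# Variables created under the scope are mirrored across workers and kept
# in sync with all-reduce.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
```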
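Third, a hedged sketch of ParameterServerStrategy used together with the coordinator and partitioners submodules. It assumes a cluster of "worker" and "ps" tasks is already running and described by TF_CONFIG, so it will not run standalone on a single machine.

```python
# A minimal sketch, assuming TF_CONFIG describes a running cluster with
# "worker" and "ps" jobs.
import tensorflow as tf

cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()

# Large variables are sharded across parameter servers by the partitioner.
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver,
    variable_partitioner=(
        tf.distribute.experimental.partitioners.MinSizePartitioner()))

# The coordinator dispatches training steps to workers asynchronously.
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy)

with strategy.scope():
    v = tf.Variable(0.0)

@tf.function
def step_fn():
    v.assign_add(1.0)
    return v.read_value()

result = coordinator.schedule(step_fn)  # Returns a RemoteValue.
coordinator.join()                      # Block until scheduled steps finish.
```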