
tf.compat.v1.tpu.initialize_system


Initializes a distributed TPU system for use with TensorFlow.

tf.compat.v1.tpu.initialize_system(
    embedding_config: Optional[embedding_pb2.TPUEmbeddingConfiguration] = None,
    job: Optional[Text] = None,
    compilation_failure_closes_chips: bool = True,
    tpu_cancellation_closes_chips: Optional[bool] = None
) -> core_types.Tensor
Args
embedding_config If not None, a TPUEmbeddingConfiguration proto describing the desired configuration of the hardware embedding lookup tables. If embedding_config is None, no hardware embeddings can be used.
job The job (the XXX in TensorFlow device specification /job:XXX) that contains the TPU devices that will be initialized. If job=None it is assumed there is only one job in the TensorFlow flock, and an error will be returned if this assumption does not hold.
compilation_failure_closes_chips Whether to close TPU chips when a compilation failure occurs.
tpu_cancellation_closes_chips Whether to close TPU chips when a TPU execution is cancelled. If the value is None, the behavior is determined by the command line flag tpu_cancellation_closes_chips for the TPU worker. WARNING: this argument only applies to the TFRT TPU runtime.
Returns
A serialized TopologyProto that describes the TPU system. Note: the topology must be evaluated using Session.run before it can be used.
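As a minimal sketch, a typical TF1-style setup builds the initialization op, evaluates it with Session.run to obtain the serialized TopologyProto, and shuts the system down when finished. The worker address "grpc://10.0.0.2:8470" is a hypothetical placeholder, and the Topology wrapper class lives in an internal module (tensorflow.python.tpu.topology), so this reflects one assumed way to decode the result rather than the only supported path:

```python
# Sketch: initialize a TPU system and decode the returned topology.
# Assumes a reachable TPU worker at the hypothetical address below.
import tensorflow.compat.v1 as tf
from tensorflow.python.tpu import topology as topology_lib  # internal module

tf.disable_v2_behavior()

# Build the initialization op; it returns a serialized TopologyProto.
topology_op = tf.tpu.initialize_system()

with tf.Session("grpc://10.0.0.2:8470") as sess:
    # The topology must be evaluated via Session.run before use.
    serialized = sess.run(topology_op)
    topo = topology_lib.Topology(serialized=serialized)
    print("TPU mesh shape:", topo.mesh_shape)
    # ... run TPU computations here ...
    sess.run(tf.tpu.shutdown_system())  # release the TPU when done
```

Pairing initialize_system with a matching tf.tpu.shutdown_system call in the same session is the conventional pattern, since the TPU system holds state across runs until it is shut down.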

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.

Last updated 2024-04-26 UTC.