tf.test.experimental.sync_devices  |  TensorFlow v2.16.1

tf.test.experimental.sync_devices


Synchronizes all devices.

View aliases

Compat aliases for migration

See the Migration guide for more details.

tf.compat.v1.test.experimental.sync_devices

tf.test.experimental.sync_devices() -> None

By default, GPUs run asynchronously. This means that when you run an op on the GPU, like tf.linalg.matmul, the op may still be running on the GPU when the function returns. Non-GPU devices can also be made to run asynchronously by calling tf.config.experimental.set_synchronous_execution(False). Calling sync_devices() blocks until all pending ops have finished executing. This is primarily useful for measuring performance during a benchmark.

For example, here is how you can measure how long tf.linalg.matmul runs:

import time

x = tf.random.normal((4096, 4096))
tf.linalg.matmul(x, x)  # Warmup.
tf.test.experimental.sync_devices()  # Block until warmup has completed.

start = time.time()
y = tf.linalg.matmul(x, x)
tf.test.experimental.sync_devices()  # Block until matmul has completed.
end = time.time()
print(f'Time taken: {end - start}')

If the calls to sync_devices() were omitted, the time printed could be too small. This is because the op could still be running asynchronously when the line end = time.time() is executed.
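The same pitfall can be demonstrated without TensorFlow. The following is a minimal sketch (an analogy only, not TensorFlow's implementation) that uses a Python thread pool to stand in for asynchronous op dispatch: submitting work returns immediately, just as launching a GPU kernel does, and only blocking on the result (the analogue of sync_devices()) yields an honest timing.

```python
import concurrent.futures
import time

# An executor dispatches work asynchronously, like a GPU op launch
# that returns before the kernel has finished.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def slow_op():
    time.sleep(0.2)  # Stand-in for a long-running kernel.
    return 42

start = time.time()
future = executor.submit(slow_op)       # Returns immediately; work still running.
async_elapsed = time.time() - start

future.result()                         # Analogue of sync_devices(): block until done.
synced_elapsed = time.time() - start

executor.shutdown()

# The unsynchronized measurement is far smaller than the true duration.
print(f'without sync: {async_elapsed:.3f}s, with sync: {synced_elapsed:.3f}s')
```

Here async_elapsed only measures the cost of dispatching the work, while synced_elapsed includes the work itself, which is exactly why the benchmark above must call sync_devices() before reading the clock.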

Raises

RuntimeError: If run outside eager mode. sync_devices() must be called in eager mode, outside any tf.functions.