torch.accelerator.synchronize — PyTorch 2.7 documentation

torch.accelerator.synchronize(device=None, /)

Wait for all kernels in all streams on the given device to complete.

Parameters

device (torch.device, str, int, optional) – the device to synchronize. Its type must match the current accelerator's device type. If not given, the device returned by torch.accelerator.current_device_index() is used by default.
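
The device argument may be given in any of the documented forms; note that it is positional-only. A minimal sketch, assuming a CUDA-backed build (substitute your accelerator's device type):

    import torch

    if torch.accelerator.is_available():
        # All three forms refer to the same device; the device type must
        # match the current accelerator (assumed here to be CUDA).
        torch.accelerator.synchronize(torch.device("cuda", 0))  # torch.device
        torch.accelerator.synchronize("cuda:0")                 # str
        torch.accelerator.synchronize(0)                        # int index
        torch.accelerator.synchronize()                         # current device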

Note

This function is a no-op if the current accelerator is not initialized.
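
For example, in a fresh process where no accelerator work has been submitted, the call below returns immediately without initializing the device runtime:

    import torch

    # No accelerator work has been issued yet, so the runtime is
    # uninitialized and synchronize() is a no-op.
    torch.accelerator.synchronize()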

Example:

    import torch

    assert torch.accelerator.is_available(), "No available accelerators detected."
    # Events with timing enabled measure elapsed time on the device.
    start_event = torch.Event(enable_timing=True)
    end_event = torch.Event(enable_timing=True)
    start_event.record()
    tensor = torch.randn(100, device=torch.accelerator.current_accelerator())
    total = torch.sum(tensor)  # renamed from `sum` to avoid shadowing the builtin
    end_event.record()
    # Wait for all queued kernels (including the recorded events) to finish.
    torch.accelerator.synchronize()
    elapsed_time_ms = start_event.elapsed_time(end_event)
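
The synchronize() call ensures that both recorded events have completed before elapsed_time() is read: kernels on accelerator streams execute asynchronously with respect to the host, so querying the timing without synchronizing first may raise an error or report a measurement for work that has not yet finished.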