Stream — PyTorch 2.7 documentation

class torch.cuda.Stream(device=None, priority=0, **kwargs)[source]

Wrapper around a CUDA stream.

A CUDA stream is a linear sequence of execution that belongs to a specific device, independent from other streams. It supports the with statement as a context manager, ensuring that operators within the with block run on the corresponding stream. See CUDA semantics for details.
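The with-statement usage described above can be sketched as follows; this is a minimal example assuming a CUDA device is present:

```python
import torch

# Minimal sketch of running ops on a side stream; requires a CUDA device.
if torch.cuda.is_available():
    s = torch.cuda.Stream()                     # stream on the current device
    a = torch.randn(1000, device="cuda")
    with s:                                     # ops in this block run on s
        b = a * 2
    torch.cuda.current_stream().wait_stream(s)  # order later use of b after s
    print(b.sum().item())
```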

Parameters

device (torch.device or int, optional) – a device on which to allocate the stream. If device is None (default) or a negative integer, this will use the current device.

priority (int, optional) – priority of the stream, which can be positive, 0, or negative. A lower number indicates a higher priority. By default, streams have priority 0.
query()[source]

Check if all the work submitted has been completed.

Returns

A boolean indicating if all kernels in this stream are completed.

Return type

bool
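A typical pattern is to poll query() to avoid blocking the host, falling back to synchronize() when the work must be finished. A sketch, assuming a CUDA device:

```python
import torch

# Non-blocking completion check with query(); requires a CUDA device.
if torch.cuda.is_available():
    s = torch.cuda.Stream()
    with s:
        x = torch.randn(2048, 2048, device="cuda")
        y = x @ x                    # enqueue some work on s
    if s.query():                    # True iff all kernels on s have finished
        print("stream already idle")
    else:
        s.synchronize()              # block the host until s drains
```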

record_event(event=None)[source]

Record an event.

Parameters

event (torch.cuda.Event, optional) – event to record. If not given, a new one will be allocated.

Returns

Recorded event.
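One common use of record_event() is GPU-side timing with a pair of events. A sketch, assuming a CUDA device (torch.cuda.Event with enable_timing=True provides elapsed_time in milliseconds):

```python
import torch

# Timing work on a stream with recorded events; requires a CUDA device.
if torch.cuda.is_available():
    s = torch.cuda.Stream()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with s:
        s.record_event(start)        # mark the starting point on s
        t = torch.randn(1024, 1024, device="cuda").sum()
        s.record_event(end)          # mark the end point on s
    end.synchronize()                # wait for the second event to complete
    print(f"elapsed: {start.elapsed_time(end):.3f} ms")
```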

synchronize()[source]

Wait for all the kernels in this stream to complete.

wait_event(event)[source]

Make all future work submitted to the stream wait for an event.

Parameters

event (torch.cuda.Event) – an event to wait for.

Note

This is a wrapper around cudaStreamWaitEvent(): see the CUDA Stream documentation for more info.

This function returns without waiting for event: only future operations are affected.
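The record_event()/wait_event() pair expresses a cross-stream dependency without blocking the host; the ordering is enforced on the GPU. A sketch, assuming a CUDA device:

```python
import torch

# Cross-stream dependency via an event; requires a CUDA device.
if torch.cuda.is_available():
    producer = torch.cuda.Stream()
    consumer = torch.cuda.Stream()
    with producer:
        data = torch.randn(1 << 20, device="cuda") * 3
        ev = producer.record_event()   # the point the consumer must wait for
    consumer.wait_event(ev)            # returns at once; ordering is on-GPU
    with consumer:
        result = data + 1              # guaranteed to see the finished `data`
    torch.cuda.synchronize()
```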

wait_stream(stream)[source]

Synchronize with another stream.

All future work submitted to this stream will wait until all kernels already submitted to the given stream at the time of this call have completed.

Parameters

stream (Stream) – a stream to synchronize with.

Note

This function returns without waiting for currently enqueued kernels in stream: only future operations are affected.
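wait_stream() is convenient when work produced on a side stream must be consumed on the current stream. A sketch, assuming a CUDA device:

```python
import torch

# Order future work on the current stream after a side stream's queued
# kernels; requires a CUDA device.
if torch.cuda.is_available():
    side = torch.cuda.Stream()
    with side:
        grad = torch.randn(1024, device="cuda") * 0.1   # queued on side
    main = torch.cuda.current_stream()
    main.wait_stream(side)   # future kernels on main wait for side's queue
    out = grad.sum()         # runs on main, safely after side's work
    torch.cuda.synchronize()
    print(out.item())
```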