CUDAGraph — PyTorch 2.7 documentation

class torch.cuda.CUDAGraph[source]

Wrapper around a CUDA graph.

Warning

This API is in beta and may change in future releases.
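A minimal capture-and-replay sketch using the higher-level torch.cuda.graph context manager, which drives this class for you (the tensor static_x, its shape, and the captured ops are illustrative only):

    import torch

    g = torch.cuda.CUDAGraph()
    static_x = torch.randn(32, 64, device="cuda")  # placeholder input tensor

    # Warm up on a side stream before capture, as the CUDA graphs docs recommend.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_x.sin().cos()
    torch.cuda.current_stream().wait_stream(s)

    # Capture: torch.cuda.graph calls capture_begin/capture_end on this instance.
    with torch.cuda.graph(g):
        static_y = static_x.sin().cos()

    g.replay()  # rerun the captured kernels; static_y is refreshed in place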

capture_begin(pool=None, capture_error_mode='global')[source]

Begin capturing CUDA work on the current stream.

Typically, you shouldn’t call capture_begin yourself. Use torch.cuda.graph or torch.cuda.make_graphed_callables(), which call capture_begin internally.

Parameters

pool (optional) – Token (returned by graph_pool_handle() or another CUDAGraph instance’s pool()) that hints this graph may share memory with the indicated pool.

capture_error_mode (str, optional) – specifies the cudaStreamCaptureMode for the capture stream; one of "global", "thread_local", or "relaxed". Defaults to "global".
capture_end()[source]

End CUDA graph capture on the current stream.

After capture_end, replay may be called on this instance.

Typically, you shouldn’t call capture_end yourself. Use torch.cuda.graph or torch.cuda.make_graphed_callables(), which call capture_end internally.
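If you do drive capture manually, a minimal sketch, assuming capture runs on a non-default stream (the tensor and the captured op are illustrative):

    import torch

    g = torch.cuda.CUDAGraph()
    static_x = torch.randn(8, 16, device="cuda")  # placeholder input

    # Capture must happen on a non-default stream.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        g.capture_begin()
        static_y = static_x * 2.0
        g.capture_end()
    torch.cuda.current_stream().wait_stream(s)

    g.replay()  # legal now that capture_end has been called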

debug_dump(debug_path)[source]

Call a debugging function to dump the graph, if debugging has been enabled via CUDAGraph.enable_debug_mode().

Parameters

debug_path (required) – Path to dump the graph to.

enable_debug_mode()[source]

Enable debugging mode for CUDAGraph.debug_dump.
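A sketch of the debug workflow, assuming debug mode is enabled before capture so the underlying graph is retained for dumping (the dump path and the captured op are illustrative):

    import torch

    g = torch.cuda.CUDAGraph()
    g.enable_debug_mode()  # enable before capture so the graph is kept for dumping

    x = torch.zeros(4, device="cuda")
    with torch.cuda.graph(g):
        y = x + 1

    g.debug_dump("cuda_graph_debug.dot")  # illustrative path for the graph dump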

pool()[source]

Return an opaque token representing the id of this graph’s memory pool.

This id can optionally be passed to another graph’s capture_begin, which hints that the other graph may share this graph’s memory pool.
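A sketch of pool sharing between two graphs via the higher-level torch.cuda.graph context manager, which forwards its pool argument to capture_begin (tensors and ops are illustrative; when pools are shared, replay the graphs in the order they were captured):

    import torch

    x = torch.randn(16, device="cuda")

    g1 = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g1):
        y1 = x * 2

    # Hint that g2 may share g1's memory pool.
    g2 = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g2, pool=g1.pool()):
        y2 = x * 3

    g1.replay()  # replay in capture order when pools are shared
    g2.replay()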

replay()[source]

Replay the CUDA work captured by this graph.
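Replay reruns the captured kernels on the same memory addresses, so refresh the captured input tensors in place before each replay. A minimal sketch (shapes and values are illustrative):

    import torch

    g = torch.cuda.CUDAGraph()
    static_x = torch.ones(4, device="cuda")
    with torch.cuda.graph(g):
        static_y = static_x + 1

    static_x.fill_(41.0)       # rewrite the captured input in place
    g.replay()                 # reruns the same kernels on the same addresses
    torch.cuda.synchronize()
    print(static_y)            # tensor of 42.0s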

reset()[source]

Delete the graph currently held by this instance.