API Documentation — PyTensor dev documentation

This documentation covers PyTensor module by module. It is suited to finding the Types and Ops that you can use to build and compile expression graphs.

Modules#

There are also some top-level imports that you might find more convenient:

Graph#

pytensor.shared(...)[source]#

Alias for pytensor.compile.sharedvalue.shared()

pytensor.function(...)[source]#

Alias for pytensor.compile.function.function()

pytensor.clone_replace(...)[source]#

Clone a graph and replace subgraphs within it.

It returns a copy of the initial subgraph with the corresponding substitutions.

Parameters:

output – PyTensor expression(s) representing the graph(s) to be cloned.

replace – dict mapping Variables in the original graph to the Variables that should replace them in the clone.

Alias for pytensor.graph.basic.clone_replace()

Control flow#

pytensor.scan(...)[source]#

This function constructs and applies a Scan Op to the provided arguments.

Parameters:

fn

fn is a function that describes the operations involved in one step of scan. fn should construct variables describing the output of one iteration step. It should expect as input Variables representing all the slices of the input sequences and previous values of the outputs, as well as all other arguments given to scan as non_sequences. The order in which scan passes these variables to fn is the following:

The order of the sequences is the same as the one in the list sequences given to scan. The order of the outputs is the same as the order of outputs_info. For any sequence or output, the order of the time slices is the same as the order in which they were given as taps. For example, if one writes the following:

```python
scan(
    fn,
    sequences=[
        dict(input=Sequence1, taps=[-3, 2, -1]),
        Sequence2,
        dict(input=Sequence3, taps=3),
    ],
    outputs_info=[
        dict(initial=Output1, taps=[-3, -5]),
        dict(initial=Output2, taps=None),
        Output3,
    ],
    non_sequences=[Argument1, Argument2],
)
```

fn should expect the following arguments in this given order:

  1. sequence1[t-3]
  2. sequence1[t+2]
  3. sequence1[t-1]
  4. sequence2[t]
  5. sequence3[t+3]
  6. output1[t-3]
  7. output1[t-5]
  8. output3[t-1]
  9. argument1
  10. argument2

The list of non_sequences can also contain shared variables used in the function, though scan is able to figure those out on its own, so they can be skipped. For code clarity, though, we recommend providing them to scan. To some extent, scan can also figure out other non-sequences (not shared), even if they are not passed to scan (but are used by fn). A simple example of this would be:

```python
import pytensor.tensor as pt

W = pt.matrix()
W_2 = W**2

def f(x):
    return pt.dot(x, W_2)
```

The function fn is expected to return two things. One is a list of outputs ordered in the same order as outputs_info, with the difference that there should be only one output variable per output initial state (even if no tap value is used). Secondly, fn should return an update dictionary (that tells how to update any shared variable after each iteration step). The dictionary can optionally be given as a list of tuples. There is no constraint on the order of these two lists; fn can return either (outputs_list, update_dictionary) or (update_dictionary, outputs_list) or just one of the two (in case the other is empty).

To use scan as a while loop, the user needs to change the function fn such that it also returns a stopping condition. To do so, one needs to wrap the condition in an until class. The condition should be returned as a third element, for example:

```python
...
return [y1_t, y2_t], {x: x + 1}, until(x < 50)
```

Note that a number of steps (considered here as the maximum number of steps) is still required even though a condition is passed. It is used to allocate memory if needed.

sequences

sequences is the list of Variables or dicts describing the sequences scan has to iterate over. If a sequence is given wrapped in a dict, then a set of optional information can be provided about the sequence. The dict should have the following keys:

All Variables in the list sequences are automatically wrapped into a dict where taps is set to [0].

outputs_info

outputs_info is the list of Variables or dicts describing the initial state of the outputs computed recurrently. When the initial states are given as dicts, optional information can be provided about the output corresponding to those initial states. The dict should have the following keys:

scan will follow this logic if partial information is given:

If outputs_info is an empty list or None, scan assumes that no tap is used for any of the outputs. If information is provided just for a subset of the outputs, an exception is raised, because there is no convention on how scan should map the provided information to the outputs of fn.

non_sequences

non_sequences is the list of arguments that are passed to fn at each step. One can choose to exclude variables used in fn from this list, as long as they are part of the computational graph, although, for clarity, this is not encouraged.

n_steps

n_steps is the number of steps to iterate, given as an int or a scalar Variable. If any of the input sequences do not have enough elements, scan will raise an error. If the value is 0, the outputs will have 0 rows. If n_steps is not provided, scan will figure out the number of steps it should run given its input sequences. n_steps < 0 is no longer supported.

truncate_gradient

truncate_gradient is the number of steps to use in truncated back-propagation through time (BPTT). If you compute gradients through a Scan Op, they are computed using BPTT. By providing a value other than -1, you choose to use truncated BPTT instead of classical BPTT, in which case the gradient only goes truncate_gradient steps back in time.

go_backwards

go_backwards is a flag indicating if scan should go backwards through the sequences. If you think of each sequence as indexed by time, making this flag True would mean that scan goes back in time, namely that for any sequence it starts from the end and goes towards 0.

name

When profiling scan, it is helpful to provide a name for any instance of scan. For example, the profiler will produce an overall profile of your code as well as profiles for the computation of one step of each instance of Scan. The name of the instance appears in those profiles and can greatly help to disambiguate information.

mode

The mode used to compile the inner graph. If you prefer the computations of one step of scan to be done differently than the entire function, you can use this parameter to describe how the computations in this loop are done (see pytensor.function for details about possible values and their meaning).

profile

If True or a non-empty string, a profile object will be created and attached to the inner graph of Scan. When profile is True, the profiler results will use the name of the Scan instance, otherwise it will use the passed string. The profiler only collects and prints information when running the inner graph with the CVM Linker.

allow_gc

Set the value of allow_gc for the internal graph of the Scan. If set to None, this will use the value of pytensor.config.scan__allow_gc.

The full Scan behavior related to allocation is determined by this value and the flag pytensor.config.allow_gc. If the flag allow_gc is True (default) and this allow_gc is False (default), then we let Scan allocate all intermediate memory on the first iteration, and it is not garbage collected after that first iteration; this is determined by allow_gc. This can speed up allocation in subsequent iterations. All those temporary allocations are freed at the end of all iterations; this is what the flag pytensor.config.allow_gc means.

strict

If True, all the shared variables used in fn must be provided as a part of non_sequences or sequences.

return_list

If True, will always return a list, even if there is only one output.

Returns:

tuple of the form (outputs, updates). outputs is either a Variable or a list of Variables representing the outputs in the same order as in outputs_info. updates is a subclass of dict specifying the update rules for all shared variables used in Scan. This dict should be passed to pytensor.function when you compile your function.

Return type:

tuple

Alias for pytensor.scan.basic.scan()

Convert to Variable#

pytensor.as_symbolic(...)[source]#

Convert x into an equivalent PyTensor Variable.

Parameters:

x – The object to be converted into a Variable.

Raises:

TypeError – If x cannot be converted to a Variable.

Debug#

pytensor.dprint(...)[source]#

Print a graph as text.

Each line printed represents a Variable in a graph. The indentation of lines corresponds to its depth in the symbolic graph. The first part of the text identifies whether it is an input or the output of some Apply node. The second part of the text is an identifier of the Variable.

If a Variable is encountered multiple times in the depth-first search, it is only printed recursively the first time. Later, just the Variable identifier is printed.

If an Apply node has multiple outputs, then a .N suffix will be appended to the Apply node’s identifier, indicating to which output a line corresponds.


Return type:

A string representing the printed graph, if file is a string, else file.

Alias for pytensor.printing.debugprint()