sparse – Symbolic Sparse Matrices — PyTensor dev documentation
In the tutorial section, you can find a sparse tutorial.
The sparse submodule is not loaded when we import PyTensor. You must import `pytensor.sparse` to enable it.
The sparse module provides the same functionality as the tensor module. The difference lies under the covers: sparse matrices do not store their data in a contiguous array. The sparse module has been used in:
- NLP: Dense linear transformations of sparse vectors.
- Audio: Filterbank in the Fourier domain.
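For instance, a minimal sketch of enabling and using the submodule (this assumes the `csc_matrix` symbolic constructor, analogous to `pytensor.tensor.matrix`, and the `dense_from_sparse` op listed later on this page):

```python
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse  # the submodule must be imported explicitly

x = sparse.csc_matrix(name="x", dtype="float64")  # symbolic sparse matrix (assumed constructor)
y = sparse.dense_from_sparse(x)                   # convert to a dense tensor
f = pytensor.function([x], y)

print(f(sp.csc_matrix(np.eye(3))))  # feed a scipy.sparse matrix, get a dense ndarray back
```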
Compressed Sparse Format#
This section tries to explain how information is stored for the two sparse formats of SciPy supported by PyTensor.
PyTensor supports two compressed sparse formats: `csc` and `csr`, based on columns and rows, respectively. They share the same attributes: `data`, `indices`, `indptr` and `shape`.
- The `data` attribute is a one-dimensional `ndarray` which contains all the non-zero elements of the sparse matrix.
- The `indices` and `indptr` attributes are used to store the position of the data in the sparse matrix.
- The `shape` attribute is exactly the same as the `shape` attribute of a dense (i.e. generic) matrix. It can be explicitly specified at the creation of a sparse matrix if it cannot be inferred from the first three attributes.
CSC Matrix#
In the Compressed Sparse Column format, `indices` stands for indexes inside the column vectors of the matrix and `indptr` tells where each column starts in the `data` and `indices` attributes. `indptr` can be thought of as giving the slice which must be applied to the other attributes in order to get each column of the matrix. In other words, `slice(indptr[i], indptr[i+1])` corresponds to the slice needed to find the i-th column of the matrix in the `data` and `indices` fields.
The following example builds a matrix and returns its columns. It prints the i-th column, i.e. a list of indices in the column and their corresponding values in the second list.
```python
>>> import numpy as np
>>> import scipy.sparse as sp
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
>>> indptr = np.asarray([0, 2, 3, 3])
>>> m = sp.csc_matrix((data, indices, indptr), shape=(3, 3))
>>> m.toarray()
array([[7, 0, 0],
       [8, 0, 0],
       [0, 9, 0]])
>>> i = 0
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([0, 1], dtype=int32), array([7, 8]))
>>> i = 1
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([2], dtype=int32), array([9]))
>>> i = 2
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([], dtype=int32), array([], dtype=int64))
```
CSR Matrix#
In the Compressed Sparse Row format, `indices` stands for indexes inside the row vectors of the matrix and `indptr` tells where each row starts in the `data` and `indices` attributes. `indptr` can be thought of as giving the slice which must be applied to the other attributes in order to get each row of the matrix. In other words, `slice(indptr[i], indptr[i+1])` corresponds to the slice needed to find the i-th row of the matrix in the `data` and `indices` fields.
The following example builds a matrix and returns its rows. It prints the i-th row, i.e. a list of indices in the row and their corresponding values in the second list.
```python
>>> import numpy as np
>>> import scipy.sparse as sp
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
>>> indptr = np.asarray([0, 2, 3, 3])
>>> m = sp.csr_matrix((data, indices, indptr), shape=(3, 3))
>>> m.toarray()
array([[7, 8, 0],
       [0, 0, 9],
       [0, 0, 0]])
>>> i = 0
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([0, 1], dtype=int32), array([7, 8]))
>>> i = 1
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([2], dtype=int32), array([9]))
>>> i = 2
>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
(array([], dtype=int32), array([], dtype=int64))
```
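Both formats describe the same matrix with the roles of rows and columns swapped. SciPy can convert between them, which is useful when checking which layout an op expects. A small illustration:

```python
import numpy as np
import scipy.sparse as sp

m_csr = sp.csr_matrix(np.asarray([[7, 8, 0], [0, 0, 9], [0, 0, 0]]))
m_csc = m_csr.tocsc()  # same matrix, column-compressed layout
print(m_csc.indptr, m_csc.indices, m_csc.data)
```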
List of Implemented Operations#
- Moving from and to sparse (a combined sketch after this list exercises these conversions along with col_scale, structured_exp, structured_dot and slicing)
  - `dense_from_sparse`. Both grads are implemented. Structured by default.
  - `csr_from_dense`, `csc_from_dense`. The grad implemented is structured.
  - PyTensor SparseVariable objects have a `toarray()` method that is the same as `dense_from_sparse`.
- Construction of Sparses and their Properties
  - `CSM` and `CSC`, `CSR` to construct a matrix. The grad implemented is regular.
  - `csm_properties` to get the properties of a sparse matrix. The grad implemented is regular.
  - `csm_indices(x)`, `csm_indptr(x)`, `csm_data(x)` and `csm_shape(x)` or `x.shape`.
  - `sp_ones_like`. The grad implemented is regular.
  - `sp_zeros_like`. The grad implemented is regular.
  - `square_diagonal`. The grad implemented is regular.
  - `construct_sparse_from_list`. The grad implemented is regular.
- Cast
  - `cast` with `bcast`, `wcast`, `icast`, `lcast`, `fcast`, `dcast`, `ccast`, and `zcast`. The grad implemented is regular.
- Transpose
  - `transpose`. The grad implemented is regular.
- Basic Arithmetic
  - `neg`. The grad implemented is regular.
  - `eq`.
  - `neq`.
  - `gt`.
  - `ge`.
  - `lt`.
  - `le`.
  - `add`. The grad implemented is regular.
  - `sub`. The grad implemented is regular.
  - `mul`. The grad implemented is regular.
  - `col_scale` to multiply by a vector along the columns. The grad implemented is structured.
  - `row_scale` to multiply by a vector along the rows. The grad implemented is structured.
- Monoid (element-wise operations with only one sparse input). They all have a structured grad.
  - `structured_sigmoid`
  - `structured_exp`
  - `structured_log`
  - `structured_pow`
  - `structured_minimum`
  - `structured_maximum`
  - `structured_add`
  - `sin`
  - `arcsin`
  - `tan`
  - `arctan`
  - `sinh`
  - `arcsinh`
  - `tanh`
  - `arctanh`
  - `rad2deg`
  - `deg2rad`
  - `rint`
  - `ceil`
  - `floor`
  - `trunc`
  - `sign`
  - `log1p`
  - `expm1`
  - `sqr`
  - `sqrt`
- Dot Product
  - `dot`
    - One of the inputs must be sparse, the other sparse or dense.
    - The grad implemented is regular.
    - No C code for perform and no C code for grad.
    - Returns a dense for perform and a dense for grad.
  - `structured_dot`
    - The first input is sparse, the second can be sparse or dense.
    - The grad implemented is structured.
    - C code for perform and grad.
    - It returns a sparse output if both inputs are sparse, and a dense one if one of the inputs is dense.
    - Returns a sparse grad for sparse inputs and a dense grad for dense inputs.
  - `true_dot`
    - The first input is sparse, the second can be sparse or dense.
    - The grad implemented is regular.
    - No C code for perform and no C code for grad.
    - Returns a sparse output.
    - The gradient returns a sparse for sparse inputs and by default a dense for dense inputs. The parameter `grad_preserves_dense` can be set to False to return a sparse grad for dense inputs.
  - `sampling_dot`
    - Both inputs must be dense.
    - The grad implemented is structured for `p`.
    - Sample of the dot and sample of the gradient.
    - C code for perform but not for grad.
    - Returns sparse for perform and grad.
  - `usmm`
    - You _shouldn't_ insert this op yourself!
    - There is a rewrite that transforms a `dot` to `Usmm` when possible.
    - This `Op` is the equivalent of gemm for sparse dot.
    - There is no grad implemented for this `Op`.
    - One of the inputs must be sparse, the other sparse or dense.
    - Returns a dense from perform.
- Slice Operations
  - sparse_variable[N, N] returns a tensor scalar. There is no grad implemented for this operation.
  - sparse_variable[M:N, O:P] returns a sparse matrix. There is no grad implemented for this operation.
  - Sparse variables don't support [M, N:O] and [M:N, O], as we don't support sparse vectors and returning a sparse matrix would break the NumPy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.
  - `diag`. The grad implemented is regular.
- Concatenation
- Probability. There is no grad implemented for these operations.
  - `Poisson` and `poisson`
  - `Binomial` and `csc_fbinomial`, `csc_dbinomial`, `csr_fbinomial`, `csr_dbinomial`
  - `Multinomial` and `multinomial`
- Internal Representation. They all have a regular grad implemented.
  - `ensure_sorted_indices`.
  - `remove0`.
  - `clean` to resort indices and remove zeros.
- To help testing
  - `tests.sparse.test_basic.sparse_random_inputs()`
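Below is a minimal sketch tying several of the listed operations together: `csc_from_dense`, `dense_from_sparse`, `col_scale`, `structured_exp`, `structured_dot`, and slicing. It assumes these functions live in `pytensor.sparse` and behave as described above; the class documentation later on this page has the details.

```python
import numpy as np
import pytensor
import pytensor.tensor as pt
import pytensor.sparse as sparse

# Moving from and to sparse.
d = pt.matrix("d")
s = sparse.csc_from_dense(d)        # dense -> sparse
back = sparse.dense_from_sparse(s)  # sparse -> dense

# Structured element-wise op: applied to the stored (non-zero) elements only.
e = sparse.structured_exp(s)

# col_scale: multiply column j of s by v[j]; the grad is structured.
v = pt.vector("v")
scaled = sparse.col_scale(s, v)

# structured_dot with a dense second operand returns a dense result.
w = pt.matrix("w")
prod = sparse.structured_dot(s, w)

# Slicing: s[i, j] gives a tensor scalar, s[a:b, c:d] a sparse matrix.
elem = s[0, 0]
block = s[0:2, 0:1]

f = pytensor.function([d, v, w], [back, scaled, prod, elem])
out = f(np.asarray([[1.0, 0.0], [0.0, 2.0]]),
        np.asarray([10.0, 100.0]),
        np.eye(2))
print(out[0])            # round-tripped dense matrix
print(out[1].toarray())  # columns scaled by v
```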
sparse – Sparse Op#
Classes for handling sparse matrices.
To read about different sparse formats, see http://www-users.cs.umn.edu/~saad/software/SPARSKIT/paper.ps
TODO: Automatic methods for determining best sparse format?
class pytensor.sparse.basic.AddSD[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.AddSS[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.AddSSData[source]#
Add two sparse matrices assuming they have the same sparsity pattern.
Notes
The grad implemented is structured.
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Parameters:
- x – Sparse matrix.
- y – Sparse matrix.
Notes
`x` and `y` are assumed to have the same sparsity pattern.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.CSM(format, kmap=None)[source]#
Construct a CSM matrix from constituent parts.
Notes
The grad method returns a dense vector, so it provides a regular grad.
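As a usage illustration, here is a hedged sketch that builds a csc variable from symbolic parts with `CSM`, mirroring the SciPy example from the CSC Matrix section above (the exact integer dtypes expected by the op are an assumption):

```python
import numpy as np
import pytensor
import pytensor.tensor as pt
from pytensor.sparse.basic import CSM

data = pt.vector("data")
indices = pt.ivector("indices")
indptr = pt.ivector("indptr")
shape = pt.ivector("shape")
x = CSM("csc")(data, indices, indptr, shape)  # see make_node below

f = pytensor.function([data, indices, indptr, shape], x)
out = f(
    np.asarray([7.0, 8.0, 9.0]),
    np.asarray([0, 1, 2], dtype="int32"),
    np.asarray([0, 2, 3, 3], dtype="int32"),
    np.asarray([3, 3], dtype="int32"),
)
print(out.toarray())
```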
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
make_node(data, indices, indptr, shape)[source]#
Parameters:
- data – One dimensional tensor representing the data of the sparse matrix to construct.
- indices – One dimensional tensor of integers representing the indices of the sparse matrix to construct.
- indptr – One dimensional tensor of integers representing the index pointer for the sparse matrix to construct.
- shape – One dimensional tensor of integers representing the shape of the sparse matrix to construct.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.CSMGrad(kmap=None)[source]#
Compute the gradient of a CSM.
Note
CSM creates a matrix from data, indices, and indptr vectors; its gradient is the gradient of the data vector only. There are two complexities in calculating this gradient:
1. The gradient may be sparser than the input matrix defined by (data, indices, indptr). In this case, the data vector of the gradient will have fewer elements than the data vector of the input because sparse formats remove 0s. Since we are only returning the gradient of the data vector, the relevant 0s need to be added back.
2. The elements in the sparse dimension are not guaranteed to be sorted. Therefore, the input data vector may have a different order than the gradient data vector.
make_node(x_data, x_indices, x_indptr, x_shape, g_data, g_indices, g_indptr, g_shape)[source]#
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.CSMProperties(kmap=None)[source]#
Create arrays containing all the properties of a given sparse matrix.
More specifically, this `Op` extracts the `.data`, `.indices`, `.indptr` and `.shape` fields.
For a specific field, `csm_data`, `csm_indices`, `csm_indptr` and `csm_shape` are provided.
Notes
The grad implemented is regular, i.e. not structured. The infer_shape method is not available for this `Op`.
We won't implement infer_shape for this op now. Doing so would require implementing a GetNNZ op, and that op would keep the dependence on this op's input, so it wouldn't help remove computations from the graph. To remove computation, we would need an infer_sparse_pattern feature, which is trickier than the infer_shape feature. For example, how do we handle the case where some op creates some 0 values, so that there is a dependence on the values themselves? We could write an infer_shape for the last output, which is the shape, but I doubt it would get used.
We don't return a view of the shape; we create a new ndarray from the shape tuple.
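A hedged sketch of the companion function `csm_properties` (listed in the operations above), which returns the `(data, indices, indptr, shape)` tuple of a sparse variable:

```python
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csc_matrix("x")  # assumed symbolic constructor
data, indices, indptr, shape = sparse.csm_properties(x)
f = pytensor.function([x], [data, indices, indptr, shape])

m = sp.csc_matrix(np.asarray([[7.0, 0.0], [8.0, 0.0]]))
for part in f(m):
    print(part)
```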
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
The output vectors correspond to the tuple `(data, indices, indptr, shape)`, i.e. the properties of a csm array.
Parameters:
csm – Sparse matrix in CSR or CSC format.
perform(node, inputs, out)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
view_map = {0: [0], 1: [0], 2: [0]}[source]#
A `dict` that maps output indices to the input indices of which they are a view.
Examples
```python
view_map = {0: [1]}  # first output is a view of second input
view_map = {1: [0]}  # second output is a view of first input
```
class pytensor.sparse.basic.Cast(out_type)[source]#
grad(inputs, outputs_gradients)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.ColScaleCSC[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.ConstructSparseFromList[source]#
Constructs a sparse matrix out of a list of 2-D matrix rows.
Notes
The grad implemented is regular, i.e. not structured.
R_op(inputs, eval_points)[source]#
Construct a graph for the R-operator.
This method is primarily used by `Rop`.
Parameters:
- inputs – The `Op` inputs.
- eval_points – A `Variable` or list of `Variable`s with the same length as inputs. Each element of `eval_points` specifies the value of the corresponding input at the point where the R-operator is to be evaluated.
Return type:
`rval[i]` should be `Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points)`.
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
make_node(x, values, ilist)[source]#
This creates a sparse matrix with the same shape as `x`. Its values are the rows of `values` moved to the positions given by `ilist`. It operates like the following pseudo-code:
```python
output = csc_matrix.zeros_like(x, dtype=values.dtype)
for in_idx, out_idx in enumerate(ilist):
    output[out_idx] = values[in_idx]
```
Parameters:
- x – A dense matrix that specifies the output shape.
- values – A dense matrix with the values to use for output.
- ilist – A dense vector with the same length as the number of rows of values. It specifies where in the output to put the corresponding rows.
perform(node, inp, out_)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.DenseFromSparse(structured=True)[source]#
Convert a sparse matrix to a dense one.
Notes
The grad implementation can be controlled through the constructor via the `structured` parameter. `True` will provide a structured grad while `False` will provide a regular grad. By default, the grad is structured.
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Parameters:
x – A sparse matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.Diag[source]#
Extract the diagonal of a square sparse matrix as a dense vector.
Notes
The grad implemented is regular, i.e. not structured, since the output is a dense vector.
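A hedged usage sketch, assuming `diag` (listed in the operations above) accepts a symbolic square csc matrix as described below:

```python
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csc_matrix("x")  # assumed symbolic constructor
d = sparse.diag(x)          # dense vector of diagonal entries
f = pytensor.function([x], d)

m = sp.csc_matrix(np.asarray([[1.0, 0.0], [0.0, 2.0]]))
print(f(m))  # expected: [1. 2.]
```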
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Parameters:
x – A square sparse matrix in csc format.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.Dot[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inputs, out)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.EnsureSortedIndices(inplace)[source]#
Re-sort indices of a sparse matrix.
CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use `ensure_sorted_indices` when sorted indices are required (e.g. when passing data to other libraries).
Notes
The grad implemented is regular, i.e. not structured.
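A hedged usage sketch, assuming `ensure_sorted_indices` (listed in the operations above) wraps this Op:

```python
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csr_matrix("x")  # assumed symbolic constructor
y = sparse.ensure_sorted_indices(x)
f = pytensor.function([x], y)

m = sp.csr_matrix(np.asarray([[0.0, 1.0], [2.0, 0.0]]))
print(f(m).has_sorted_indices)  # scipy attribute; expected True
```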
grad(inputs, output_grad)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Parameters:
x – A sparse matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.EqualSD[source]#
class pytensor.sparse.basic.EqualSS[source]#
class pytensor.sparse.basic.GetItem2Lists[source]#
Select elements of a sparse matrix, returning them in a vector.
grad(inputs, g_outputs)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
make_node(x, ind1, ind2)[source]#
Parameters:
- x – Sparse matrix.
- index – List of two lists, first list indicating the row of each element and second list indicating its column.
perform(node, inp, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.GetItem2ListsGrad[source]#
make_node(x, ind1, ind2, gz)[source]#
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inp, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.GetItem2d[source]#
Implement a subtensor of a sparse variable, returning a sparse matrix.
If you want to take only one element of a sparse matrix, see GetItemScalar, which returns a tensor scalar.
Notes
Subtensor selection always returns a matrix, so indexing with [a:b, c:d] is forced. If one index is a scalar, for instance, x[a:b, c] or x[a, b:c], an error will be raised. Use instead x[a:b, c:c+1] or x[a:a+1, b:c].
The above indexing methods are not supported because the return value would be a sparse matrix rather than a sparse vector, which would be a deviation from NumPy's indexing rules. This decision is made largely to preserve consistency between NumPy and PyTensor. It may be revised when sparse vectors are supported.
The grad is not implemented for this op.
Parameters:
- x – Sparse matrix.
- index – Tuple of slice object.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.GetItemList[source]#
Select rows of a sparse matrix, returning them as a new sparse matrix.
grad(inputs, g_outputs)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Parameters:
- x – Sparse matrix.
- index – List of rows.
perform(node, inp, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.GetItemListGrad[source]#
make_node(x, index, gz)[source]#
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, inp, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.GetItemScalar[source]#
Subtensor of a sparse variable that takes two scalars as index and returns a scalar.
If you want to take a slice of a sparse matrix, see GetItem2d, which returns a sparse matrix.
Notes
The grad is not implemented for this op.
Parameters:
- x – Sparse matrix.
- index – Tuple of scalars.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.GreaterEqualSD[source]#
class pytensor.sparse.basic.GreaterEqualSS[source]#
class pytensor.sparse.basic.GreaterThanSD[source]#
class pytensor.sparse.basic.GreaterThanSS[source]#
class pytensor.sparse.basic.HStack(format=None, dtype=None)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each `Variable` in `inputs`.
Return type:
grads
References
Construct an `Apply` node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed `Apply` node.
Return type:
perform(node, block, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic `Apply` node that represents this computation.
- inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each `Variable` in `node.inputs`.
- output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each `Variable` in `node.outputs`. The primary purpose of this method is to set the values of these sub-lists.
Notes
The `output_storage` list might contain data. If an element of output_storage is not `None`, it has to be of the right type; for instance, for a `TensorVariable`, it has to be a NumPy `ndarray` with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this `Op.perform()`; they could have been allocated by another `Op`'s perform method. An `Op` is free to reuse `output_storage` as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.LessEqualSD[source]#
class pytensor.sparse.basic.LessEqualSS[source]#
class pytensor.sparse.basic.LessThanSD[source]#
class pytensor.sparse.basic.LessThanSS[source]#
class pytensor.sparse.basic.MulSD[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned `Variable` represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type `NullType` for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the `Op` and its two arguments \(A\) and \(B\), given by the `Variable`s in `inputs`, the values returned by `Op.grad` represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.MulSS[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.MulSV[source]#
Element-wise multiplication of a sparse matrix by a broadcasted dense vector.
Notes
The grad implemented is regular, i.e. not structured.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
- x – Sparse matrix to multiply.
- y – Tensor broadcastable vector.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.Neg[source]#
Negative of the sparse matrix (i.e. multiply by -1
).
Notes
The grad is regular, i.e. not structured.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
x – Sparse matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.NotEqualSD[source]#
class pytensor.sparse.basic.NotEqualSS[source]#
class pytensor.sparse.basic.Remove0(inplace=False)[source]#
Remove explicit zeros from a sparse matrix.
Notes
The grad implemented is regular, i.e. not structured.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
x – Sparse matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.RowScaleCSC[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
A dict
that maps output indices to the input indices of which they are a view.
Examples
view_map = {0: [1]}  # first output is a view of second input
view_map = {1: [0]}  # second output is a view of first input
class pytensor.sparse.basic.SamplingDot[source]#
Compute the dot product dot(x, y.T) = z
for only a subset of z
.
This is equivalent to p * (x . y.T) where * is the element-wise product, x and y are the operands of the dot product, and p is a matrix that contains 1 where the corresponding element of z should be calculated and 0 where it shouldn’t. Note that SamplingDot has a different interface than dot because it requires x to be an m x k matrix while y is an n x k matrix instead of the usual k x n matrix.
Notes
It will work if the pattern is not binary-valued, but if the pattern doesn’t have a high sparsity proportion it will be slower than a more optimized dot followed by a normal elemwise multiplication.
The grad implemented is regular, i.e. not structured.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
- x – Tensor matrix.
- y – Tensor matrix.
- p – Sparse matrix in csr format.
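For illustration, a minimal sketch (assuming the sampling_dot helper in pytensor.sparse is the user-facing wrapper for this Op, and the csr_matrix symbolic constructor used elsewhere in these docs):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.tensor as pt
from pytensor.sparse import sampling_dot, csr_matrix
x = pt.matrix("x")           # m x k dense operand
y = pt.matrix("y")           # n x k dense operand
p = csr_matrix("p")          # m x n sparse sampling pattern
z = sampling_dot(x, y, p)    # p * (x . y.T), computed only where p is non-zero
f = pytensor.function([x, y, p], z)
pv = sp.csr_matrix(np.asarray([[1.0, 0.0], [0.0, 1.0]]))
f(np.ones((2, 3)), np.ones((2, 3)), pv).toarray()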
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.SpSum(axis=None, sparse_grad=True)[source]#
WARNING: judgement call… We are not using the structured attribute in the comparison or hashing because it doesn’t change the perform method; therefore, we do want Sums with different structured values to be merged by the merge optimization, and this requires them to compare equal.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.SparseBlockDiagonal(n_inputs, format='csc')[source]#
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, output_storage, params=None)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.SparseConstant(type, data, name=None)[source]#
property unique_value[source]#
Return the unique value of a tensor, if there is one
class pytensor.sparse.basic.SparseConstantSignature(iterable=(), /)[source]#
class pytensor.sparse.basic.SparseFromDense(format)[source]#
Convert a dense matrix to a sparse matrix.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
x – A dense matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.SparseVariable(type, owner, index=None, name=None)[source]#
class pytensor.sparse.basic.SquareDiagonal[source]#
Produce a square sparse (csc) matrix with a diagonal given by a dense vector.
Notes
The grad implemented is regular, i.e. not structured.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
x – Dense vector for the diagonal.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.StructuredAddSV[source]#
Structured addition of a sparse matrix and a dense vector.
The elements of the vector are only added to the corresponding non-zero elements of the sparse matrix. Therefore, this operation outputs another sparse matrix.
Notes
The grad implemented is structured since the op is structured.
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
- x – Sparse matrix.
- y – Tensor type vector.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.StructuredDot[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.StructuredDotGradCSC[source]#
c_code(node, name, inputs, outputs, sub)[source]#
Return the C implementation of an Op
.
Returns C code that does the computation associated to this Op
, given names for the inputs and outputs.
Parameters:
- node (Apply instance) – The node for which we are compiling the current C code. The same
Op
may be used in more than one node. - name (str) – A name that is automatically assigned and guaranteed to be unique.
- inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending
"py_"
to the name in the list. - outputs (list of strings) – Each string is the name of a C variable where the
Op
should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending"py_"
to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent. - sub (dict of strings) – Extra symbols defined in
CLinker
sub symbols (such as'fail'
).
c_code_cache_version()[source]#
Return a tuple of integers indicating the version of this Op
.
An empty tuple indicates an “unversioned” Op
that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache
for details.
See also
c_code_cache_version_apply
make_node(a_indices, a_indptr, b, g_ab)[source]#
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.StructuredDotGradCSR[source]#
c_code(node, name, inputs, outputs, sub)[source]#
Return the C implementation of an Op
.
Returns C code that does the computation associated to this Op
, given names for the inputs and outputs.
Parameters:
- node (Apply instance) – The node for which we are compiling the current C code. The same
Op
may be used in more than one node. - name (str) – A name that is automatically assigned and guaranteed to be unique.
- inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding python variable that can be accessed by prepending
"py_"
to the name in the list. - outputs (list of strings) – Each string is the name of a C variable where the
Op
should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending"py_"
to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent. - sub (dict of strings) – Extra symbols defined in
CLinker
sub symbols (such as'fail'
).
c_code_cache_version()[source]#
Return a tuple of integers indicating the version of this Op
.
An empty tuple indicates an “unversioned” Op
that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache
for details.
See also
c_code_cache_version_apply
make_node(a_indices, a_indptr, b, g_ab)[source]#
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.Transpose[source]#
Transpose of a sparse matrix.
Notes
The returned matrix will not be in the same format: a csc matrix will be changed into a csr matrix and a csr matrix into a csc matrix.
The grad is regular, i.e. not structured.
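A short sketch (assuming the transpose helper in pytensor.sparse is the user-facing wrapper for this Op, and the csc_matrix symbolic constructor):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
x = sparse.csc_matrix(name="x")
xt = sparse.transpose(x)              # a csc input yields a csr output
f = pytensor.function([x], xt)
f(sp.csc_matrix(np.eye(2))).format    # 'csr'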
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Parameters:
x – Sparse matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
A dict
that maps output indices to the input indices of which they are a view.
Examples
view_map = {0: [1]}  # first output is a view of second input
view_map = {1: [0]}  # second output is a view of first input
class pytensor.sparse.basic.TrueDot(grad_preserves_dense=True)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
Construct an Apply
node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
Returns:
node – The constructed Apply
node.
Return type:
Apply
perform(node, inp, out_)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.Usmm[source]#
Computes the dense matrix resulting from alpha * x @ y + z
.
Notes
At least one of x
or y
must be a sparse matrix.
make_node(alpha, x, y, z)[source]#
Parameters:
- alpha – A scalar.
- x – Matrix variable.
- y – Matrix variable.
- z – Dense matrix.
perform(node, inputs, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
class pytensor.sparse.basic.VStack(format=None, dtype=None)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable
represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType
for that input.
Using the reverse-mode AD characterization given in [1]_, for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
- inputs – The input variables.
- output_grads – The gradients of the output variables.
Returns:
The gradients with respect to each Variable
in inputs
.
Return type:
grads
References
perform(node, block, outputs)[source]#
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
- node – The symbolic
Apply
node that represents this computation. - inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each
Variable
in node.inputs
. - output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each
Variable
in node.outputs
. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage
list might contain data. If an element of output_storage is not None
, it has to be of the right type, for instance, for a TensorVariable
, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform()
; they could’ve been allocated by another Op’s perform method. An Op
is free to reuse output_storage
as it sees fit, or to discard it and allocate new memory.
pytensor.sparse.basic.add(x, y)[source]#
Add two matrices, at least one of which is sparse.
This method will provide the right op according to the inputs.
Parameters:
- x – A matrix variable.
- y – A matrix variable.
Returns:
x
+ y
Return type:
A sparse matrix
Notes
At least one of x
and y
must be a sparse matrix.
The grad will be structured only when one of the variables is a dense matrix.
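For example, a minimal sketch (assuming the csc_matrix symbolic constructor from pytensor.sparse, as used in the tutorial):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt
x = sparse.csc_matrix(name="x")   # symbolic sparse matrix
y = pt.matrix("y")                # symbolic dense matrix
z = sparse.add(x, y)              # dispatches to the appropriate Op
f = pytensor.function([x, y], z)
f(sp.csc_matrix(np.eye(3)), np.ones((3, 3)))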
pytensor.sparse.basic.as_sparse(x, name=None, ndim=None, **kwargs)[source]#
Wrapper around the SparseVariable constructor to construct a Variable holding a sparse matrix with the same dtype and format as x.
Parameters:
x – A sparse matrix.
Returns:
SparseVariable version of x
.
Return type:
object
pytensor.sparse.basic.as_sparse_variable(x, name=None, ndim=None, **kwargs)[source]#
Wrapper around the SparseVariable constructor to construct a Variable holding a sparse matrix with the same dtype and format as x.
Parameters:
x – A sparse matrix.
Returns:
SparseVariable version of x
.
Return type:
object
pytensor.sparse.basic.block_diag(*matrices, format='csc')[source]#
Construct a block diagonal matrix from a sequence of input matrices.
Given the inputs A
, B
and C
, the output will have these arrays arranged on the diagonal:
[[A, 0, 0],
 [0, B, 0],
 [0, 0, C]]
Parameters:
- matrices (tensors) – Input tensors to form the block diagonal matrix. The last two dimensions of each input will be used, and all inputs should have at least 2 dimensions. Note that the input matrices need not be sparse themselves, and will be automatically converted to the requested format if they are not.
- format (str, optional) – The format of the output sparse matrix, one of 'csr' or 'csc'. Default is 'csc'.
Returns:
out – Symbolic sparse matrix in the specified format.
Return type:
sparse matrix tensor
Examples
Create a sparse block diagonal matrix from two sparse 2x2 matrices:
from scipy.sparse import csr_matrix
from pytensor.sparse import block_diag
A = csr_matrix([[1, 2], [3, 4]])
B = csr_matrix([[5, 6], [7, 8]])
result_sparse = block_diag(A, B, format='csr')
print(result_sparse)
print(result_sparse.toarray().eval())
SparseVariable{csr,int64}
[[1 2 0 0]
 [3 4 0 0]
 [0 0 5 6]
 [0 0 7 8]]
pytensor.sparse.basic.cast(variable, dtype)[source]#
Cast sparse variable to the desired dtype.
Parameters:
- variable – Sparse matrix.
- dtype – The dtype wanted.
Return type:
The same as variable but having dtype as its dtype.
Notes
The grad implemented is regular, i.e. not structured.
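A small sketch of the intended use (assuming the csc_matrix symbolic constructor from pytensor.sparse):
import pytensor.sparse as sparse
x = sparse.csc_matrix(name="x", dtype="float64")
y = sparse.cast(x, "float32")   # same sparsity pattern, float32 data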
pytensor.sparse.basic.clean(x)[source]#
Remove explicit zeros from a sparse matrix, and re-sort indices.
CSR column indices are not necessarily sorted. Likewise for CSC row indices. Use clean when sorted indices are required (e.g. when passing data to other libraries) and to ensure there are no zeros in the data.
Parameters:
x – A sparse matrix.
Returns:
The same as x
with indices sorted and zeros removed.
Return type:
A sparse matrix
Notes
The grad implemented is regular, i.e. not structured.
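For instance, a hedged sketch that builds a CSR matrix with one explicit zero (using the same (data, indices, indptr) construction as the examples above) and removes it with clean:
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
x = sparse.csr_matrix(name="x")
f = pytensor.function([x], sparse.clean(x))
data = np.asarray([0.0, 1.0])    # the first stored value is an explicit zero
indices = np.asarray([0, 1])
indptr = np.asarray([0, 2, 2])
xv = sp.csr_matrix((data, indices, indptr), shape=(2, 2))
xv.nnz      # 2 (the explicit zero is stored)
f(xv).nnz   # 1 (explicit zero removed, indices sorted)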
pytensor.sparse.basic.col_scale(x, s)[source]#
Scale each column of a sparse matrix by the corresponding element of a dense vector.
Parameters:
- x – A sparse matrix.
- s – A dense vector with length equal to the number of columns of
x
.
Returns:
A sparse matrix in the same format as x in which each column has been multiplied by the corresponding element of s.
Notes
The grad implemented is structured.
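As a sketch (assuming the csc_matrix symbolic constructor from pytensor.sparse):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt
x = sparse.csc_matrix(name="x")
s = pt.vector("s")
f = pytensor.function([x, s], sparse.col_scale(x, s))
xv = sp.csc_matrix(np.asarray([[1.0, 2.0], [3.0, 4.0]]))
f(xv, np.asarray([10.0, 100.0])).toarray()
# array([[ 10., 200.],
#        [ 30., 400.]])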
pytensor.sparse.basic.csm_data(csm)[source]#
Return the data field of the sparse variable.
pytensor.sparse.basic.csm_grad[source]#
alias of CSMGrad
pytensor.sparse.basic.csm_indices(csm)[source]#
Return the indices field of the sparse variable.
pytensor.sparse.basic.csm_indptr(csm)[source]#
Return the indptr field of the sparse variable.
pytensor.sparse.basic.csm_shape(csm)[source]#
Return the shape field of the sparse variable.
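These accessors can be combined to inspect the compressed representation symbolically, e.g. (a sketch assuming the csc_matrix symbolic constructor):
import pytensor.sparse as sparse
x = sparse.csc_matrix(name="x")
d = sparse.csm_data(x)       # symbolic 1-d vector of non-zero values
idx = sparse.csm_indices(x)  # symbolic indices vector
ptr = sparse.csm_indptr(x)   # symbolic indptr vector
shp = sparse.csm_shape(x)    # symbolic shape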
pytensor.sparse.basic.dot(x, y)[source]#
Efficiently compute the dot product when one or both operands are sparse.
Supported formats are CSC and CSR. The output of the operation is dense.
Parameters:
- x – Sparse or dense matrix variable.
- y – Sparse or dense matrix variable.
Return type:
The dot product x @ y
in a dense format.
Notes
The grad implemented is regular, i.e. not structured.
At least one of x
or y
must be a sparse matrix.
When the operation has the form dot(csr_matrix, dense)
the gradient of this operation can be performed inplace by UsmmCscDense
. This leads to significant speed-ups.
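For instance, a sketch of a sparse-dense product and its gradient (assuming the csr_matrix symbolic constructor from pytensor.sparse):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt
x = sparse.csr_matrix(name="x")
w = pt.matrix("w")
out = sparse.dot(x, w)            # dense result
gw = pytensor.grad(out.sum(), w)  # regular (dense) gradient wrt w
f = pytensor.function([x, w], [out, gw])
xv = sp.csr_matrix(np.asarray([[1.0, 0.0], [0.0, 2.0]]))
f(xv, np.ones((2, 3)))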
pytensor.sparse.basic.hstack(blocks, format=None, dtype=None)[source]#
Stack sparse matrices horizontally (column wise).
This wraps the hstack method from SciPy.
Parameters:
- blocks – List of sparse arrays of compatible shape.
- format – String representing the output format. Default is csc.
- dtype – Output dtype.
Returns:
The concatenation of the sparse arrays, column-wise.
Return type:
array
Notes
The number of rows of the sparse matrices must agree.
The grad implemented is regular, i.e. not structured.
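A minimal sketch (vstack is analogous, stacking row-wise):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
x = sparse.csc_matrix(name="x")
y = sparse.csc_matrix(name="y")
f = pytensor.function([x, y], sparse.hstack([x, y], format="csc", dtype="float64"))
a = sp.csc_matrix(np.eye(2))
f(a, a).toarray()   # shape (2, 4)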
pytensor.sparse.basic.mul(x, y)[source]#
Multiply elementwise two matrices, at least one of which is sparse.
This method will provide the right op according to the inputs.
Parameters:
- x – A matrix variable.
- y – A matrix variable.
Returns:
x
* y
Return type:
A sparse matrix
Notes
At least one of x
and y
must be a sparse matrix. The grad is regular, i.e. not structured.
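For example (a sketch, same conventions as the examples above):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt
x = sparse.csc_matrix(name="x")
y = pt.matrix("y")
f = pytensor.function([x, y], sparse.mul(x, y))
xv = sp.csc_matrix(np.asarray([[1.0, 0.0], [0.0, 3.0]]))
f(xv, 2.0 * np.ones((2, 2))).toarray()
# array([[2., 0.],
#        [0., 6.]])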
pytensor.sparse.basic.row_scale(x, s)[source]#
Scale each row of a sparse matrix by the corresponding element of a dense vector.
Parameters:
- x – A sparse matrix.
- s – A dense vector with length equal to the number of rows of
x
.
Returns:
A sparse matrix in the same format as x in which each row has been multiplied by the corresponding element of s.
Return type:
A sparse matrix
Notes
The grad implemented is structured.
pytensor.sparse.basic.sp_ones_like(x)[source]#
Construct a sparse matrix of ones with the same sparsity pattern.
Parameters:
x – Sparse matrix to take the sparsity pattern.
Returns:
The same as x
with its data replaced by ones.
Return type:
A sparse matrix
pytensor.sparse.basic.sp_sum(x, axis=None, sparse_grad=False)[source]#
Calculate the sum of a sparse matrix along the specified axis.
It performs a reduction along the specified axis. When axis
is None
, it is applied along all axes.
Parameters:
- x – Sparse matrix.
- axis – Axis along which the sum is applied. Integer or
None
. - sparse_grad (bool) –
True
to have a structured grad.
Returns:
The sum of x
in a dense format.
Return type:
object
Notes
The grad implementation is controlled with the sparse_grad
parameter. True
will provide a structured grad and False
will provide a regular grad. For both choices, the grad returns a sparse matrix having the same format as x
.
This op does not return a sparse matrix, but a dense tensor matrix.
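For example (a sketch, same conventions as the examples above):
import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
x = sparse.csc_matrix(name="x")
total = sparse.sp_sum(x)          # sum over all axes: a scalar
rows = sparse.sp_sum(x, axis=1)   # row sums: a dense vector
f = pytensor.function([x], [total, rows])
xv = sp.csc_matrix(np.asarray([[1.0, 0.0], [0.0, 2.0]]))
f(xv)   # [array(3.), array([1., 2.])]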
pytensor.sparse.basic.sp_zeros_like(x)[source]#
Construct a sparse matrix of zeros.
Parameters:
x – Sparse matrix to take the shape.
Returns:
The same as x
with zero entries for all elements.
Return type:
A sparse matrix
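A short sketch of both helpers (same conventions as the examples above):
import pytensor.sparse as sparse
x = sparse.csc_matrix(name="x")
ones = sparse.sp_ones_like(x)    # same sparsity pattern, all stored values set to 1
zeros = sparse.sp_zeros_like(x)  # same shape, no stored values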
pytensor.sparse.basic.sparse_formats = ['csc', 'csr'][source]#
Types of sparse matrices to use for testing.
pytensor.sparse.basic.structured_dot(x, y)[source]#
Structured Dot is like dot, except that only the gradients with respect to the non-zero elements of the sparse matrix a are calculated and propagated.
The output is presumed to be a dense matrix, and is represented by a TensorType instance.
Parameters:
- a – A sparse matrix.
- b – A sparse or dense matrix.
Returns:
The dot product of a
and b
.
Return type:
A sparse matrix
Notes
The grad implemented is structured.
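A sketch contrasting it with dot: the gradient with respect to the sparse operand keeps its sparsity pattern (assuming the csr_matrix symbolic constructor and that pytensor.grad supports sparse variables, as in Theano):
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt
x = sparse.csr_matrix(name="x")
w = pt.matrix("w")
out = sparse.structured_dot(x, w)   # dense output
gx = pytensor.grad(out.sum(), x)    # structured grad: sparse, same pattern as x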
pytensor.sparse.basic.sub(x, y)[source]#
Subtract two matrices, at least one of which is sparse.
This method will provide the right op according to the inputs.
Parameters:
- x – A matrix variable.
- y – A matrix variable.
Returns:
x
- y
Return type:
A sparse matrix
Notes
At least one of x
and y
must be a sparse matrix.
The grad will be structured only when one of the variables is a dense matrix.
pytensor.sparse.basic.true_dot(x, y, grad_preserves_dense=True)[source]#
Operation for efficiently calculating the dot product when one or both operands are sparse. Supported formats are CSC and CSR. The output of the operation is sparse.
Parameters:
- x – Sparse matrix.
- y – Sparse matrix or 2d tensor variable.
- grad_preserves_dense (bool) – If True (default), makes the grad of dense inputs dense. Otherwise the grad is always sparse.
Returns:
The dot product of x and y in a sparse format.
Notes
The grad implemented is regular, i.e. not structured.
pytensor.sparse.basic.vstack(blocks, format=None, dtype=None)[source]#
Stack sparse matrices vertically (row wise).
This wraps the vstack method from SciPy.
Parameters:
- blocks – List of sparse arrays of compatible shape.
- format – String representing the output format. Default is csc.
- dtype – Output dtype.
Returns:
The concatenation of the sparse arrays, row-wise.
Return type:
array
Notes
The number of columns of the sparse matrices must agree.
The grad implemented is regular, i.e. not structured.
pytensor.sparse.sparse_grad(var)[source]#
This function returns a new variable whose gradient will be stored in a sparse format instead of a dense one.
Currently only variables created by AdvancedSubtensor1 are supported, i.e. a_tensor_var[an_int_vector].
New in version 0.6rc4.
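For instance, a sketch of the supported pattern (the names W and idx are illustrative):
import pytensor
import pytensor.tensor as pt
from pytensor.sparse import sparse_grad
W = pt.matrix("W")
idx = pt.ivector("idx")
rows = sparse_grad(W[idx])          # mark the AdvancedSubtensor1 result for a sparse gradient
g = pytensor.grad(rows.sum(), W)    # the gradient of W is stored in a sparse format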