tfp.util.TransformedVariable  |  TensorFlow Probability

Variable tracking object which applies a bijector upon convert_to_tensor.

Inherits From: DeferredTensor

tfp.util.TransformedVariable(
    initial_value, bijector, dtype=None, name=None, **kwargs
)

Used in the notebooks

Used in the tutorials:

*   Learnable Distributions Zoo
*   Gaussian Process Regression in TensorFlow Probability
*   Linear Mixed Effects Models
*   Bayesian Modeling with Joint Distribution
*   Probabilistic PCA

Example

import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

positive_variable = tfp.util.TransformedVariable(1., bijector=tfb.Exp())

positive_variable
# ==> <TransformedVariable: dtype=float32, shape=[], fn=exp>

# Note that the initial value corresponds to the transformed output.
tf.convert_to_tensor(positive_variable)
# ==> 1.

positive_variable.pretransformed_input
# ==> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>

# Operators work with `TransformedVariable`.
positive_variable + 1.
# ==> 2.

# It is also possible to assign values to a TransformedVariable
with tf.control_dependencies([positive_variable.assign_add(2.)]):
  positive_variable
# ==> 3.

A common use case for the `TransformedVariable` is to fit constrained
parameters. E.g.:

```python
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions

trainable_normal = tfd.Normal(
    loc=tf.Variable(0.),
    scale=tfp.util.TransformedVariable(1., bijector=tfb.Exp()))

trainable_normal.loc
# ==> <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=0.0>

trainable_normal.scale
# ==> <TransformedVariable: dtype=float32, shape=[], fn=exp>

with tf.GradientTape() as tape:
  negloglik = -trainable_normal.log_prob(0.5)
g = tape.gradient(negloglik, trainable_normal.trainable_variables)
# ==> (-0.5, 0.75)

opt = tf.optimizers.Adam(learning_rate=0.05)
loss = tf.function(lambda: -trainable_normal.log_prob(0.5))
for _ in range(int(1e3)):
  opt.minimize(loss, trainable_normal.trainable_variables)
trainable_normal.mean()
# ==> 0.5
trainable_normal.stddev()
# ==> (approximately) 0.0075
```

Args

initial_value A Tensor, or Python object convertible to a Tensor, which is the initial value for the TransformedVariable. The underlying untransformed tf.Variable will be initialized with bijector.inverse(initial_value). Can also be a callable with no argument that returns the initial value when called.
bijector A Bijector-like instance which defines the transformations applied to the underlying tf.Variable.
dtype tf.dtypes.DType instance or otherwise valid dtype value to tf.convert_to_tensor(..., dtype). Default value: None (i.e., bijector.dtype).
name Python str representing the underlying tf.Variable's name. Default value: None.
**kwargs Keyword arguments forwarded to tf.Variable.
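
For illustration, a minimal sketch (not from the upstream reference; the `Softplus` bijector and the name `scale` are arbitrary choices) showing that `initial_value` may be a zero-argument callable and that `dtype` and `name` are forwarded as described above:

```python
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

scale = tfp.util.TransformedVariable(
    initial_value=lambda: tf.ones([3]),  # zero-argument callable returning the initial value
    bijector=tfb.Softplus(),             # constrains the transformed value to be positive
    dtype=tf.float32,
    name='scale')

tf.convert_to_tensor(scale)
# ==> [1., 1., 1.]
scale.pretransformed_input
# ==> softplus_inverse(1.) ~= [0.5413, 0.5413, 0.5413]
```
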
Attributes
also_track Additional variables tracked by tf.Module in self.trainable_variables.
bijector
dtype Represents the type of the elements in a Tensor.
initializer The initializer operation for the underlying variable.
name The string name of this object.
name_scope Returns a tf.name_scope instance for this class.
non_trainable_variables Sequence of non-trainable variables owned by this module and its submodules.
pretransformed_input Input to transform_fn.
shape Represents the shape of a Tensor.
submodules Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). For example, if a.b = b and b.c = c, then list(a.submodules) == [b, c], list(b.submodules) == [c], and list(c.submodules) == [].
trainable_variables Sequence of trainable variables owned by this module and its submodules.
transform_fn Function which characterizes the Tensorization of this object.
variables Sequence of variables owned by this module and its submodules.
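
To illustrate how these attributes relate, a minimal sketch (assuming eager TF2; the loss is an arbitrary choice): `trainable_variables` contains only the underlying unconstrained variable, and gradients flow through the bijector when the `TransformedVariable` is converted to a tensor.

```python
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

positive = tfp.util.TransformedVariable(2., bijector=tfb.Exp())

positive.trainable_variables
# ==> a 1-tuple containing the underlying unconstrained variable,
#     whose value is log(2.) ~= 0.6931

with tf.GradientTape() as tape:
  loss = (tf.convert_to_tensor(positive) - 1.)**2
tape.gradient(loss, positive.trainable_variables)
# ==> gradient ~= 4.  (d/dv (exp(v) - 1)**2 evaluated at v = log(2.))
```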

Methods

assign

assign(
    value, use_locking=False, name=None, read_value=True
)

Assigns a new value to the variable.

This is essentially a shortcut for assign(self, value).

Args
value A Tensor. The new value for this variable.
use_locking If True, use locking during the assignment.
name The name of the operation to be created.
read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op.
Returns
The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode.
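
For example, a minimal sketch (consistent with the class example above: assigned values live in the transformed space, so the underlying variable receives `bijector.inverse(value)`):

```python
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

positive = tfp.util.TransformedVariable(1., bijector=tfb.Exp())
positive.assign(5.)                # value is given in the transformed space
tf.convert_to_tensor(positive)
# ==> 5.
positive.pretransformed_input
# ==> log(5.) ~= 1.609
```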

assign_add

assign_add(
    delta, use_locking=False, name=None, read_value=True
)

Adds a value to this variable.

This is essentially a shortcut for assign_add(self, delta).

Args
delta A Tensor. The value to add to this variable.
use_locking If True, use locking during the operation.
name The name of the operation to be created.
read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op.
Returns
The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode.

assign_sub

assign_sub(
    delta, use_locking=False, name=None, read_value=True
)

Subtracts a value from this variable.

This is essentially a shortcut for assign_sub(self, delta).

Args
delta A Tensor. The value to subtract from this variable.
use_locking If True, use locking during the operation.
name The name of the operation to be created.
read_value if True, will return something which evaluates to the new value of the variable; if False will return the assign op.
Returns
The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode.
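
As with `assign`, a minimal sketch of `assign_add` and `assign_sub` (both operate on the transformed value, matching the `assign_add` usage in the class example above):

```python
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

positive = tfp.util.TransformedVariable(4., bijector=tfb.Exp())
positive.assign_add(1.)
tf.convert_to_tensor(positive)
# ==> 5.
positive.assign_sub(2.)
tf.convert_to_tensor(positive)
# ==> 3.
```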

numpy

View source

numpy()

Returns (copy of) deferred values as a NumPy array or scalar.
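
A minimal sketch (eager mode): `numpy()` returns the transformed value, not the underlying unconstrained variable.

```python
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors

positive = tfp.util.TransformedVariable(3., bijector=tfb.Exp())
positive.numpy()
# ==> 3.0  (the transformed value; the underlying variable holds log(3.))
```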

set_shape

View source

set_shape(
    shape
)

Updates the shape of this pretransformed_input.

This method can be called multiple times, and will merge the given shape with the current shape of this object. It can be used to provide additional information about the shape of this object that cannot be inferred from the graph alone.

Args
shape A TensorShape representing the shape of this pretransformed_input, a TensorShapeProto, a list, a tuple, or None.
Raises
ValueError If shape is not compatible with the current shape of this pretransformed_input.

with_name_scope

@classmethod
with_name_scope(
    method
)

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=...>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32, numpy=...>

Args
method The method to wrap.
Returns
The original method wrapped such that it enters the module's name scope.

__abs__

View source

__abs__(
    *args, **kwargs
)

__add__

View source

__add__(
    *args, **kwargs
)

__and__

View source

__and__(
    *args, **kwargs
)

__array__

View source

__array__(
    dtype=None
)

__bool__

__bool__()

Dummy method to prevent a tensor from being used as a Python bool.

This overload raises a TypeError when the user inadvertently treats a Tensor as a boolean (most commonly in an if or while statement), in code that was not converted by AutoGraph. For example:

if tf.constant(True):  # Will raise.
  # ...

if tf.constant(5) < tf.constant(7):  # Will raise.
  # ...
Raises
TypeError.

__div__

View source

__div__(
    *args, **kwargs
)

__floordiv__

View source

__floordiv__(
    *args, **kwargs
)

__ge__

View source

__ge__(
    *args, **kwargs
)

__getitem__

View source

__getitem__(
    *args, **kwargs
)

__gt__

View source

__gt__(
    *args, **kwargs
)

__invert__

View source

__invert__(
    *args, **kwargs
)

__iter__

View source

__iter__(
    *args, **kwargs
)

__le__

View source

__le__(
    *args, **kwargs
)

__lt__

View source

__lt__(
    *args, **kwargs
)

__matmul__

View source

__matmul__(
    *args, **kwargs
)

__mod__

View source

__mod__(
    *args, **kwargs
)

__mul__

View source

__mul__(
    *args, **kwargs
)

__neg__

View source

__neg__(
    *args, **kwargs
)

__nonzero__

__nonzero__()

Dummy method to prevent a tensor from being used as a Python bool.

This is the Python 2.x counterpart to __bool__() above.

Raises
TypeError.

__or__

View source

__or__(
    *args, **kwargs
)

__pow__

View source

__pow__(
    *args, **kwargs
)

__radd__

View source

__radd__(
    *args, **kwargs
)

__rand__

View source

__rand__(
    *args, **kwargs
)

__rdiv__

View source

__rdiv__(
    *args, **kwargs
)

__rfloordiv__

View source

__rfloordiv__(
    *args, **kwargs
)

__rmatmul__

View source

__rmatmul__(
    *args, **kwargs
)

__rmod__

View source

__rmod__(
    *args, **kwargs
)

__rmul__

View source

__rmul__(
    *args, **kwargs
)

__ror__

View source

__ror__(
    *args, **kwargs
)

__rpow__

View source

__rpow__(
    *args, **kwargs
)

__rsub__

View source

__rsub__(
    *args, **kwargs
)

__rtruediv__

View source

__rtruediv__(
    *args, **kwargs
)

__rxor__

View source

__rxor__(
    *args, **kwargs
)

__sub__

View source

__sub__(
    *args, **kwargs
)

__truediv__

View source

__truediv__(
    *args, **kwargs
)

__xor__

View source

__xor__(
    *args, **kwargs
)