tfc.entropy_models.LocationScaleIndexedEntropyModel | TensorFlow v2.16.1
Indexed entropy model for location-scale family of random variables.
Inherits From: ContinuousIndexedEntropyModel
Main alias: tfc.LocationScaleIndexedEntropyModel
```python
tfc.entropy_models.LocationScaleIndexedEntropyModel(
    prior_fn,
    num_scales,
    scale_fn,
    coding_rank,
    compression=False,
    stateless=False,
    expected_grads=False,
    tail_mass=(2 ** -8),
    range_coder_precision=12,
    bottleneck_dtype=None,
    prior_dtype=tf.float32,
    laplace_tail_mass=0
)
```
This class is a common special case of ContinuousIndexedEntropyModel. The specified distribution is parameterized with num_scales values of scale parameters. An element-wise location parameter is handled by shifting the distributions to zero.
This method is illustrated in Figure 10 of:
"Nonlinear Transform Coding"
J. Ballé, P.A. Chou, D. Minnen, S. Singh, N. Johnston, E. Agustsson, S.J. Hwang, G. Toderici
https://doi.org/10.1109/JSTSP.2020.3034501
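For example, an entropy model in the style of the paper above can pair a tfc.NoisyNormal prior with a log-spaced table of scales. The following is a minimal sketch; the scale range and table size are illustrative choices, not values prescribed by the API:

```python
import tensorflow as tf
import tensorflow_compression as tfc

# Log-spaced table of 64 scales covering roughly [0.11, 256] (illustrative values).
num_scales = 64
offset = tf.math.log(0.11)
factor = (tf.math.log(256.) - offset) / (num_scales - 1.)

entropy_model = tfc.LocationScaleIndexedEntropyModel(
    tfc.NoisyNormal,                       # location-scale family prior
    num_scales=num_scales,
    scale_fn=lambda i: tf.math.exp(offset + factor * i),
    coding_rank=1,                         # innermost dimension = one coding unit
    compression=True,                      # build range coding tables on instantiation
)
```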
Args | |
---|---|
prior_fn | A callable returning a tfp.distributions.Distribution object, typically a Distribution class or factory function. This is a density model fitting the marginal distribution of the bottleneck data with additive uniform noise, which is shared a priori between the sender and the receiver. For best results, the distributions should be flexible enough to have a unit-width uniform distribution as a special case, since this is the marginal distribution for bottleneck dimensions that are constant. The callable will receive keyword arguments as determined by parameter_fns. |
num_scales | Integer. Values in indexes must be in the range [0, num_scales). |
scale_fn | Callable. indexes is passed to the callable, and the return value is given as scale keyword argument to prior_fn. |
coding_rank | Integer. Number of innermost dimensions considered a coding unit. Each coding unit is compressed to its own bit string, and the bits in the __call__ method are summed over each coding unit. |
compression | Boolean. If set to True, the range coding tables used by compress() and decompress() will be built on instantiation. If set to False, these two methods will not be accessible. |
stateless | Boolean. If False, range coding tables are created as Variables. This allows the entropy model to be serialized using the SavedModel protocol, so that both the encoder and the decoder use identical tables when loading the stored model. If True, creates range coding tables as Tensors. This makes the entropy model stateless and allows it to be constructed within a tf.function body, for when the range coding tables are provided manually. If compression=False, then stateless=True is implied and the provided value is ignored. |
expected_grads | If True, will use analytical expected gradients during backpropagation w.r.t. additive uniform noise. |
tail_mass | Float. Approximate probability mass which is encoded using an Elias gamma code embedded into the range coder. |
range_coder_precision | Integer. Precision passed to the range coding op. |
bottleneck_dtype | tf.dtypes.DType. Data type of bottleneck tensor. Defaults to tf.keras.mixed_precision.global_policy().compute_dtype. |
prior_dtype | tf.dtypes.DType. Data type of prior and probability computations. Defaults to tf.float32. |
laplace_tail_mass | Float, or a float-valued tf.Tensor. If positive, will augment the prior with a NoisyLaplace mixture component for training stability. (experimental) |
Attributes | |
---|---|
bottleneck_dtype | Data type of the bottleneck tensor. |
cdf | The CDFs used by range coding. |
cdf_offset | The CDF offsets used by range coding. |
channel_axis | Position of channel axis in indexes tensor. |
coding_rank | Number of innermost dimensions considered a coding unit. |
compression | Whether this entropy model is prepared for compression. |
expected_grads | Whether to use analytical expected gradients during backpropagation. |
index_ranges | Upper bound(s) on values allowed in indexes tensor. |
laplace_tail_mass | Whether to augment the prior with a NoisyLaplace mixture. |
name | Returns the name of this module as passed or determined in the constructor. |
name_scope | Returns a tf.name_scope instance for this class. |
non_trainable_variables | Sequence of non-trainable variables owned by this module and its submodules. |
parameter_fns | Functions mapping indexes to each distribution parameter. |
prior | Prior distribution, used for deriving range coding tables. |
prior_dtype | Data type of prior. |
prior_fn | Class or factory function returning a Distribution object. |
range_coder_precision | Precision used in range coding op. |
stateless | Whether range coding tables are created as Tensors or Variables. |
submodules | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). For example, if a.b = b and b.c = c, then list(a.submodules) == [b, c], list(b.submodules) == [c], and list(c.submodules) == []. |
tail_mass | Approximate probability mass which is range encoded with overflow. |
trainable_variables | Sequence of trainable variables owned by this module and its submodules. |
variables | Sequence of variables owned by this module and its submodules. |
Methods
compress
```python
compress(
    bottleneck, scale_indexes, loc=None
)
```
Compresses a floating-point tensor.
Compresses the tensor to bit strings. bottleneck is first quantized as in quantize(), and then compressed using the probability tables derived from indexes. The quantized tensor can later be recovered by calling decompress().
The innermost self.coding_rank dimensions are treated as one coding unit, i.e. are compressed into one string each. Any additional dimensions to the left are treated as batch dimensions.
Args | |
---|---|
bottleneck | tf.Tensor containing the data to be compressed. |
scale_indexes | tf.Tensor indexing the scale parameter for each element in bottleneck. Must have the same shape as bottleneck. |
loc | None or tf.Tensor. If None, the location parameter for all elements is assumed to be zero. Otherwise, specifies the location parameter for each element in bottleneck. Must have the same shape as bottleneck. |
Returns |
---|
A tf.Tensor having the same shape as bottleneck without the self.coding_rank innermost dimensions, containing a string for each coding unit. |
decompress
```python
decompress(
    strings, scale_indexes, loc=None
)
```
Decompresses a tensor.
Reconstructs the quantized tensor from bit strings produced by compress().
Args | |
---|---|
strings | tf.Tensor containing the compressed bit strings. |
scale_indexes | tf.Tensor indexing the scale parameter for each output element. |
loc | None or tf.Tensor. If None, the location parameter for all output elements is assumed to be zero. Otherwise, specifies the location parameter for each output element. Must have the same shape as scale_indexes. |
Returns |
---|
A tf.Tensor of the same shape as scale_indexes. |
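Together, compress() and decompress() form a lossless round trip over the quantized values. A minimal sketch with toy shapes and randomly chosen indexes (illustrative only; in practice indexes and loc would come from a conditioning network such as a hyperprior):

```python
import tensorflow as tf
import tensorflow_compression as tfc

num_scales = 64
offset = tf.math.log(0.11)
factor = (tf.math.log(256.) - offset) / (num_scales - 1.)
em = tfc.LocationScaleIndexedEntropyModel(
    tfc.NoisyNormal, num_scales=num_scales,
    scale_fn=lambda i: tf.math.exp(offset + factor * i),
    coding_rank=1, compression=True)

# Toy data: a batch of 4 coding units with 16 elements each.
bottleneck = tf.random.normal((4, 16), stddev=8.)
scale_indexes = tf.random.uniform((4, 16), 0., num_scales - 1.)
loc = tf.random.normal((4, 16))  # e.g. means predicted by a hyperprior

strings = em.compress(bottleneck, scale_indexes, loc=loc)   # shape (4,), one string per unit
decoded = em.decompress(strings, scale_indexes, loc=loc)    # shape (4, 16)

# The round trip reproduces the quantized bottleneck.
tf.debugging.assert_equal(decoded, em.quantize(bottleneck, loc=loc))
```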
from_config
```python
@classmethod
from_config(
    config
)
```
Instantiates an entropy model from a configuration dictionary.
get_config
```python
get_config()
```
Returns the configuration of the entropy model.
get_weights
```python
get_weights()
```
quantize
```python
quantize(
    bottleneck, loc=None
)
```
Quantizes a floating-point tensor.
To use this entropy model as an information bottleneck during training, pass a tensor through this function. The tensor is rounded to integer values modulo the location parameters of the prior distribution given in loc.
The gradient of this rounding operation is overridden with the identity (straight-through gradient estimator).
Args | |
---|---|
bottleneck | tf.Tensor containing the data to be quantized. |
loc | None or tf.Tensor. If None, the location parameter for all elements is assumed to be zero. Otherwise, specifies the location parameter for each element in bottleneck. Must have the same shape as bottleneck. |
Returns |
---|
A tf.Tensor containing the quantized values. |
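The straight-through behavior can be verified by differentiating through quantize(). A small sketch with illustrative values:

```python
import tensorflow as tf
import tensorflow_compression as tfc

num_scales = 64
offset = tf.math.log(0.11)
factor = (tf.math.log(256.) - offset) / (num_scales - 1.)
em = tfc.LocationScaleIndexedEntropyModel(
    tfc.NoisyNormal, num_scales=num_scales,
    scale_fn=lambda i: tf.math.exp(offset + factor * i),
    coding_rank=1)

x = tf.constant([[0.4, 1.6, -2.3]])
loc = tf.constant([[0.1, 0.1, 0.1]])

with tf.GradientTape() as tape:
    tape.watch(x)
    q = em.quantize(x, loc=loc)       # round(x - loc) + loc, elementwise

print(q)                              # quantized values
print(tape.gradient(q, x))            # all ones: identity (straight-through) gradient
```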
set_weights
```python
set_weights(
    weights
)
```
with_name_scope
```python
@classmethod
with_name_scope(
    method
)
```
Decorator to automatically enter the module name scope.
```python
class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
```
Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:
```python
>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
```
Args | |
---|---|
method | The method to wrap. |
Returns |
---|
The original method wrapped such that it enters the module's name scope. |
__call__
```python
__call__(
    bottleneck, scale_indexes, loc=None, training=True
)
```
Perturbs a tensor with (quantization) noise and estimates rate.
Args | |
---|---|
bottleneck | tf.Tensor containing the data to be compressed. |
scale_indexes | tf.Tensor indexing the scale parameter for each element in bottleneck. Must have the same shape as bottleneck. |
loc | None or tf.Tensor. If None, the location parameter for all elements is assumed to be zero. Otherwise, specifies the location parameter for each element in bottleneck. Must have the same shape as bottleneck. |
training | Boolean. If False, computes the Shannon information of bottleneck under the distribution computed by self.prior_fn, which is a non-differentiable, tight lower bound on the number of bits needed to compress bottleneck using compress(). If True, returns a somewhat looser, but differentiable upper bound on this quantity. |
Returns |
---|
A tuple (bottleneck_perturbed, bits) where bottleneck_perturbed is bottleneck perturbed with (quantization) noise and bits is the rate. bits has the same shape as bottleneck without the self.coding_rank innermost dimensions. |
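During training, the returned bits typically serve as the rate term of a rate-distortion objective. A minimal sketch with illustrative shapes and randomly chosen indexes:

```python
import tensorflow as tf
import tensorflow_compression as tfc

num_scales = 64
offset = tf.math.log(0.11)
factor = (tf.math.log(256.) - offset) / (num_scales - 1.)
em = tfc.LocationScaleIndexedEntropyModel(
    tfc.NoisyNormal, num_scales=num_scales,
    scale_fn=lambda i: tf.math.exp(offset + factor * i),
    coding_rank=1)

bottleneck = tf.random.normal((4, 16))
scale_indexes = tf.random.uniform((4, 16), 0., num_scales - 1.)

# Differentiable upper bound on the rate, for training.
perturbed, bits = em(bottleneck, scale_indexes, training=True)
rate = tf.reduce_mean(bits)       # bits has shape (4,): one estimate per coding unit

# Tighter, non-differentiable estimate for evaluation.
_, eval_bits = em(bottleneck, scale_indexes, training=False)
```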