tft.TransformFeaturesLayer
A Keras layer for applying a tf.Transform output to input layers.
tft.TransformFeaturesLayer(
tft_output: tft.TFTransformOutput,
exported_as_v1: Optional[bool] = None
)
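For example, a minimal usage sketch; the transform-output path and the feature name are hypothetical, and this assumes a prior tf.Transform run has already written its output to disk:
import tensorflow as tf
import tensorflow_transform as tft

# Hypothetical path to the output directory of a previous tf.Transform run.
tft_output = tft.TFTransformOutput('/tmp/transform_output')
transform_layer = tft.TransformFeaturesLayer(tft_output)

# Hypothetical raw feature batch; keys must match the transform graph's inputs.
raw_features = {'x': tf.constant([[1.0], [2.0]])}
transformed_features = transform_layer(raw_features)  # dict of transformed tensors
In practice this layer is usually obtained via tft.TFTransformOutput.transform_features_layer(), which constructs it from the loaded transform output.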
Attributes | ||
---|---|---|
activity_regularizer | Optional regularizer function for the output of this layer. | |
autotune_steps_per_execution | Settable property to enable tuning for steps_per_execution. | |
compute_dtype | The dtype of the layer's computations. This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights. Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.call, so you do not have to insert these casts if implementing your own layer. Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases. A short sketch of this distinction follows the table. | |
distribute_reduction_method | The method employed to reduce per-replica values during training. Unless specified, the value "auto" will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details. | |
distribute_strategy | The tf.distribute.Strategy this model was created under. | |
dtype | The dtype of the layer weights. This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer's computations. | |
dtype_policy | The dtype policy associated with this layer. This is an instance of a tf.keras.mixed_precision.Policy. | |
dynamic | Whether the layer is dynamic (eager-only); set in the constructor. | |
input | Retrieves the input tensor(s) of a layer. Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer. | |
input_spec | InputSpec instance(s) describing the input format for this layer. When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__(): self.input_spec = tf.keras.layers.InputSpec(ndim=4) Now, if you try to call the layer on an input that isn't rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error: ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=1. Full shape received: [2] Input checks that can be specified via input_spec include structure (e.g. a single input, a list of 2 inputs, etc.), shape, rank (ndim), and dtype. For more information, see tf.keras.layers.InputSpec. | |
jit_compile | Specify whether to compile the model with XLA. XLA is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models. For more information on supported operations, please refer to the XLA documentation. Also refer to known XLA issues for more details. | |
layers | ||
losses | List of losses added using the add_loss() API. Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables. class MyLayer(tf.keras.layers.Layer): def call(self, inputs): self.add_loss(tf.abs(tf.reduce_mean(inputs))) return inputs l = MyLayer() l(np.ones((10, 1))) l.losses [1.0] inputs = tf.keras.Input(shape=(10,)) x = tf.keras.layers.Dense(10)(inputs) outputs = tf.keras.layers.Dense(1)(x) model = tf.keras.Model(inputs, outputs) # Activity regularization. len(model.losses) 0 model.add_loss(tf.abs(tf.reduce_mean(x))) len(model.losses) 1 inputs = tf.keras.Input(shape=(10,)) d = tf.keras.layers.Dense(10, kernel_initializer='ones') x = d(inputs) outputs = tf.keras.layers.Dense(1)(x) model = tf.keras.Model(inputs, outputs) # Weight regularization. model.add_loss(lambda: tf.reduce_mean(d.kernel)) model.losses [<tf.Tensor: shape=(), dtype=float32, numpy=1.0>] | |
metrics | Return metrics added using compile() or add_metric(). inputs = tf.keras.layers.Input(shape=(3,)) outputs = tf.keras.layers.Dense(2)(inputs) model = tf.keras.models.Model(inputs=inputs, outputs=outputs) model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) [m.name for m in model.metrics] [] x = np.random.random((2, 3)) y = np.random.randint(0, 2, (2, 2)) model.fit(x, y) [m.name for m in model.metrics] ['loss', 'mae'] inputs = tf.keras.layers.Input(shape=(3,)) d = tf.keras.layers.Dense(2, name='out') output_1 = d(inputs) output_2 = d(inputs) model = tf.keras.models.Model( inputs=inputs, outputs=[output_1, output_2]) model.add_metric( tf.reduce_sum(output_2), name='mean', aggregation='mean') model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) model.fit(x, (y, y)) [m.name for m in model.metrics] ['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', 'out_1_acc', 'mean'] | |
metrics_names | Returns the model's display labels for all outputs. inputs = tf.keras.layers.Input(shape=(3,)) outputs = tf.keras.layers.Dense(2)(inputs) model = tf.keras.models.Model(inputs=inputs, outputs=outputs) model.compile(optimizer="Adam", loss="mse", metrics=["mae"]) model.metrics_names [] x = np.random.random((2, 3)) y = np.random.randint(0, 2, (2, 2)) model.fit(x, y) model.metrics_names ['loss', 'mae'] inputs = tf.keras.layers.Input(shape=(3,)) d = tf.keras.layers.Dense(2, name='out') output_1 = d(inputs) output_2 = d(inputs) model = tf.keras.models.Model( inputs=inputs, outputs=[output_1, output_2]) model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"]) model.fit(x, (y, y)) model.metrics_names ['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae', 'out_1_acc'] | |
name | Name of the layer (string), set in the constructor. | |
name_scope | Returns a tf.name_scope instance for this class. | |
non_trainable_weights | List of all non-trainable weights tracked by this layer. Non-trainable weights are not updated during training. They are expected to be updated manually in call(). | |
output | Retrieves the output tensor(s) of a layer. Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer. | |
run_eagerly | Settable attribute indicating whether the model should run eagerly. Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls. By default, we will attempt to compile your model to a static graph to deliver the best execution performance. | |
steps_per_execution | Settable steps_per_execution variable. Requires a compiled model. | |
submodules | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). a = tf.Module() b = tf.Module() c = tf.Module() a.b = b b.c = c list(a.submodules) == [b, c] True list(b.submodules) == [c] True list(c.submodules) == [] True | |
supports_masking | Whether this layer supports computing a mask using compute_mask. | |
trainable | ||
trainable_weights | List of all trainable weights tracked by this layer. Trainable weights are updated via gradient descent during training. | |
variable_dtype | Alias of Layer.dtype, the dtype of the weights. | |
weights | Returns the list of all layer variables/weights. |
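As a minimal sketch of the dtype vs. compute_dtype distinction described in the table above (assuming a TensorFlow version where the 'mixed_float16' policy name is accepted):
import tensorflow as tf

# Under mixed precision, variables stay float32 while computations run in float16.
layer = tf.keras.layers.Dense(4, dtype='mixed_float16')
print(layer.dtype)          # float32 -- the variable dtype
print(layer.compute_dtype)  # float16 -- the computation dtype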
Methods
add_loss
add_loss(
losses, **kwargs
)
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.
This method can be used inside a subclassed layer or model's call function, in which case losses should be a Tensor or list of Tensors.
Example:
class MyLayer(tf.keras.layers.Layer):
def call(self, inputs):
self.add_loss(tf.abs(tf.reduce_mean(inputs)))
return inputs
The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).
The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's Inputs. These losses become part of the model's topology and are tracked in get_config.
Example:
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
If this is not the case for your loss (if, for example, your loss references a Variable of one of the model's layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model's topology since they can't be serialized.
Example:
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
Args | |
---|---|
losses | Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor. |
**kwargs | Used for backwards compatibility only. |
build
build(
input_shape
)
Builds the model based on input shapes received.
This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.
This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).
Args | |
---|---|
input_shape | Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances. |
Raises | |
---|---|
ValueError | In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict); if the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature); if not all layers were properly built; or if float type inputs are not supported within the layers. In each of these cases, the user should build their model by calling it on real tensor data. |
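A minimal sketch of standalone building; the subclassed model here is hypothetical:
import tensorflow as tf

class MyModel(tf.keras.Model):
  def __init__(self):
    super().__init__()
    self.dense = tf.keras.layers.Dense(1)

  def call(self, inputs):
    return self.dense(inputs)

# Build from a shape instead of calling the model on real data.
model = MyModel()
model.build(input_shape=(None, 10))
model.summary()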
call
call(
inputs: Mapping[str, common_types.TensorType]
) -> Dict[str, common_types.TensorType]
Calls the model on new inputs and returns the outputs as tensors.
In this case call() just reapplies all ops in the graph to the new inputs (e.g. builds a new computational graph from the provided inputs).
Args | |
---|---|
inputs | Input tensor, or dict/list/tuple of input tensors. |
training | Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode. |
mask | A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide here. |
Returns |
---|
A tensor if there is a single output, or a list of tensors if there is more than one output. |
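For this layer specifically, both the input and the output are dicts of feature tensors. A minimal sketch, reusing the hypothetical transform_layer from the usage example near the top of this page (the output key is also hypothetical):
batch = {'x': tf.constant([[3.0]])}
outputs = transform_layer(batch)  # e.g. {'x_scaled': <tf.Tensor ...>}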
compute_mask
compute_mask(
inputs, mask=None
)
Computes an output mask tensor.
Args | |
---|---|
inputs | Tensor or list of tensors. |
mask | Tensor or list of tensors. |
Returns |
---|
None or a tensor (or list of tensors, one per output tensor of the layer). |
count_params
count_params()
Count the total number of scalars composing the weights.
Returns |
---|
An integer count. |
Raises | |
---|---|
ValueError | if the layer isn't yet built (in which case its weights aren't yet defined). |
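For instance, a minimal sketch:
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(3, input_shape=(4,))])
model.count_params()  # 4 * 3 kernel weights + 3 biases = 15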
save_spec
save_spec(
dynamic_batch=True
)
Returns the tf.TensorSpec of call args as a tuple (args, kwargs).
This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:
model = tf.keras.Model(...)
@tf.function
def serve(*args, **kwargs):
outputs = model(*args, **kwargs)
# Apply postprocessing steps, or add additional outputs.
...
return outputs
# arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()
model.save(path, signatures={
'serving_default': serve.get_concrete_function(*arg_specs,
**kwarg_specs)
})
Args | |
---|---|
dynamic_batch | Whether to set the batch sizes of all the returned tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([...], batch_size=X), the batch size will always be preserved). Defaults to True. |
Returns |
---|
If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict. |
__call__
__call__(
*args, **kwargs
)
Wraps call, applying pre- and post-processing steps.