tf.nn.relu6 | TensorFlow v2.16.1
tf.nn.relu6
Computes Rectified Linear 6: min(max(features, 0), 6).
tf.nn.relu6(
    features, name=None
)
In comparison with tf.nn.relu, relu6 activation functions have been shown empirically to perform better under low-precision conditions (e.g. fixed-point inference) by encouraging the model to learn sparse features earlier. Source: Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010.
For example:
x = tf.constant([-3.0, -1.0, 0.0, 6.0, 10.0], dtype=tf.float32)
y = tf.nn.relu6(x)
y.numpy()
array([0., 0., 0., 6., 6.], dtype=float32)
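The clipped values above follow directly from the elementwise formula min(max(x, 0), 6). As a minimal sketch (not part of the reference itself), the same result can be reproduced with tf.minimum and tf.maximum, and tf.GradientTape shows the saturating gradient; the exact gradient at the boundary points 0 and 6 is an implementation detail assumed here rather than documented on this page:

```python
import tensorflow as tf

x = tf.constant([-3.0, -1.0, 0.0, 2.5, 6.0, 10.0], dtype=tf.float32)

# Elementwise definition: min(max(x, 0), 6).
manual = tf.minimum(tf.maximum(x, 0.0), 6.0)
tf.debugging.assert_near(tf.nn.relu6(x), manual)

# The gradient is 1 for inputs strictly inside (0, 6) and 0 where the
# activation saturates, so very large inputs stop receiving gradient.
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.nn.relu6(x)
print(tape.gradient(y, x).numpy())
```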
| Args | |
| --- | --- |
| features | A Tensor with type float, double, int32, int64, uint8, int16, or int8. |
| name | A name for the operation (optional). |
| Returns |
| --- |
| A Tensor with the same type as features. |
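As a usage note beyond this reference, tf.nn.relu6 is an ordinary callable, so it can be passed wherever an activation function is expected, for example in a Keras layer. This is a minimal sketch; the layer widths and input size are arbitrary values chosen for illustration:

```python
import tensorflow as tf

# relu6 used as the activation of a hidden layer; the layer widths and
# input size (16 features) are arbitrary illustration values.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation=tf.nn.relu6),
    tf.keras.layers.Dense(1),
])

# Forward pass on a random batch of 4 examples; output has shape (4, 1).
print(model(tf.random.normal([4, 16])).shape)
```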
| References |
| --- |
| Convolutional Deep Belief Networks on CIFAR-10: Krizhevsky et al., 2010 (pdf) |