Module: tff.learning.optimizers

Libraries for optimization algorithms.

Classes

class Optimizer: Represents an optimizer for use in TensorFlow Federated.
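
Optimizer is functional rather than stateful: initialize builds the optimizer state from TensorSpecs describing the weights, and next maps (state, weights, gradients) to an updated (state, weights) pair. A minimal sketch of that contract, assuming a single dense weight tensor and using build_sgdm as the concrete optimizer:

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.1)

# Assumption for brevity: one dense weight tensor; real models pass a
# structure of tensors and matching TensorSpecs.
weights = tf.constant([1.0, 2.0, 3.0])
specs = tf.nest.map_structure(
    lambda w: tf.TensorSpec(w.shape, w.dtype), weights)

state = optimizer.initialize(specs)  # Builds state (e.g. accumulators).

for _ in range(5):
  gradients = tf.ones_like(weights)  # Placeholder gradients.
  # `next` is purely functional: it returns new state and new weights
  # instead of mutating anything in place.
  state, weights = optimizer.next(state, weights, gradients)
```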

Functions

build_adafactor(...): Builds an Adafactor optimizer.

build_adagrad(...): Returns a tff.learning.optimizers.Optimizer for Adagrad.

build_adam(...): Returns a tff.learning.optimizers.Optimizer for Adam.

build_adamw(...): Returns a tff.learning.optimizers.Optimizer for AdamW.

build_rmsprop(...): Returns a tff.learning.optimizers.Optimizer for RMSprop.

build_sgdm(...): Returns a tff.learning.optimizers.Optimizer for momentum SGD.

build_yogi(...): Returns a tff.learning.optimizers.Optimizer for Yogi.
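
Each build_* factory returns a configured Optimizer. A short sketch; the hyperparameter names below (learning_rate, momentum, beta_1, beta_2, epsilon) are the conventional ones, but check each factory's signature for its exact arguments and defaults:

```python
import tensorflow_federated as tff

# A common federated split: momentum SGD for client-side training,
# Adam for server-side updates.
client_optimizer = tff.learning.optimizers.build_sgdm(
    learning_rate=0.02, momentum=0.9)
server_optimizer = tff.learning.optimizers.build_adam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7)
```

In recent TFF releases, Optimizer instances like these can be passed directly to learning-algorithm builders such as tff.learning.algorithms.build_weighted_fed_avg in place of Keras optimizer callables.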

check_weights_gradients_match(...): Checks that weights and non-None gradients match.

handle_indexed_slices_gradients(...): Converts any tf.IndexedSlices to tensors.
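
These two helpers are mainly useful to authors of custom Optimizer implementations. A sketch of using them to sanitize gradients before applying a dense update rule; the tf.IndexedSlices gradient below is a contrived stand-in for what an embedding lookup typically produces:

```python
import tensorflow as tf
import tensorflow_federated as tff

weights = [tf.zeros([3, 2]), tf.zeros([2])]

# The gradient of an embedding lookup usually arrives as tf.IndexedSlices;
# this contrived example touches only row 0 of the first weight.
sparse_grad = tf.IndexedSlices(
    values=tf.ones([1, 2]),
    indices=tf.constant([0]),
    dense_shape=tf.constant([3, 2]))
gradients = [sparse_grad, tf.ones([2])]

# Densify any IndexedSlices so a dense update rule can be applied.
gradients = tff.learning.optimizers.handle_indexed_slices_gradients(gradients)

# Raises an informative error if the weight and gradient structures disagree.
tff.learning.optimizers.check_weights_gradients_match(weights, gradients)
```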

schedule_learning_rate(...): Returns an optimizer with scheduled learning rate.
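
A sketch of wrapping a base optimizer with a decaying schedule. The assumed contract is that the schedule is a callable from the integer round/step count to a learning rate, consulted on each call to next:

```python
import tensorflow as tf
import tensorflow_federated as tff

base = tff.learning.optimizers.build_sgdm(learning_rate=0.1)

def schedule(round_num):
  # Exponential decay: 0.1 * 0.99^round_num.
  return 0.1 * tf.pow(0.99, tf.cast(round_num, tf.float32))

scheduled = tff.learning.optimizers.schedule_learning_rate(base, schedule)
```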

Other Members
LEARNING_RATE_KEY = 'learning_rate'
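
LEARNING_RATE_KEY names the entry of the optimizer state that holds the learning rate. Assuming the state is a dict-like structure (as it is for the optimizers built above), the learning rate can be read or overridden between rounds:

```python
import tensorflow as tf
import tensorflow_federated as tff

optimizer = tff.learning.optimizers.build_sgdm(learning_rate=0.1)
specs = [tf.TensorSpec([2], tf.float32)]  # Specs matching the model weights.
state = optimizer.initialize(specs)

key = tff.learning.optimizers.LEARNING_RATE_KEY  # 'learning_rate'
print(state[key])   # 0.1
state[key] = 0.05   # e.g. manual decay between training rounds.
```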
