Fast-Convergent Federated Learning With Adaptive Weighting

2021, IEEE Transactions on Cognitive Communications and Networking


Optimization in Federated Learning

2019

Federated learning (FL) is an emerging branch of machine learning (ML) research that examines methods for scenarios where individual nodes possess parts of the data and the task is to form a single common model that fits the whole distribution. In FL, mini-batch gradient descent is generally used to optimize the model weights, and it appears to work very well in federated scenarios. For traditional machine-learning setups, a number of modifications have been proposed to accelerate the learning process and to overcome the challenges posed by the high dimensionality and non-convexity of the parameter search spaces. In this paper we present our experiments on applying different popular optimization methods to training neural networks in a federated manner.

1 Federated Learning

Federated learning (FL) [1] is a new paradigm in Machine Learning (ML) that deals with an increasingly important distributed optimization setting that came into view with the sprea...
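The federated setup the abstract describes can be illustrated with a minimal sketch: each client runs local mini-batch gradient descent on its own shard of the data, and a server averages the resulting weights (the FedAvg pattern). The model (a linear regression), the function names, and all parameter values below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5, batch=8, rng=None):
    """One client's local mini-batch gradient descent on a linear model."""
    rng = rng or np.random.default_rng(0)
    w = w.copy()
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            # gradient of mean squared error over the mini-batch
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

def fed_avg(clients, rounds=20, dim=2):
    """Server loop: broadcast weights, run local SGD, average the results."""
    w = np.zeros(dim)
    for _ in range(rounds):
        sizes = [len(X) for X, _ in clients]
        local_ws = [local_sgd(w, X, y) for X, y in clients]
        # weight each client's model by its share of the total data
        w = sum(n * wl for n, wl in zip(sizes, local_ws)) / sum(sizes)
    return w

# toy partition: two clients each hold part of the same linear relation
rng = np.random.default_rng(42)
w_true = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    return X, X @ w_true + 0.01 * rng.normal(size=n)

clients = [make_client(40), make_client(60)]
w = fed_avg(clients)
```

Swapping `local_sgd` for a client-side optimizer with momentum or adaptivity is exactly the kind of variation the paper's experiments explore.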
