FLANP is a novel straggler-resilient federated learning meta-algorithm that incorporates statistical characteristics of the clients’ data to adaptively select clients and speed up the learning procedure. The key idea of FLANP is to start the training procedure with the faster nodes and gradually involve the slower ones once the statistical accuracy of the current participating nodes’ data is reached; the final model of each stage is used as a warm-start for the next stage. Our theoretical results characterize the speedup provided by FLANP over standard federated benchmarks for strongly convex losses and i.i.d. samples. For particular instances, FLANP reduces the overall expected runtime by a factor of $\mathcal{O}(\ln(Ns))$, where $N$ and $s$ denote the total number of nodes and the number of samples per node, respectively. In experiments, FLANP demonstrates significant wall-clock speedups of up to $6\times$ compared to standard federated learning benchmarks.
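The staged procedure described above can be sketched as follows. This is a minimal, hypothetical illustration on synthetic least-squares data, not the authors' implementation: client speeds, the doubling schedule, the local-update routine, and the $\mathcal{O}(1/(ms))$ statistical-accuracy proxy used as a stopping rule are all assumptions made for the sketch.

```python
# Hypothetical sketch of the FLANP idea: start with the fastest clients,
# train until the statistical accuracy of their pooled data is reached,
# then geometrically enlarge the active set and warm-start the next stage.
import numpy as np

rng = np.random.default_rng(0)

N, s, d = 16, 50, 5                  # clients, samples per client, dimension
w_true = rng.normal(size=d)

# i.i.d. least-squares data per client (strongly convex local losses)
X = rng.normal(size=(N, s, d))
y = X @ w_true + 0.1 * rng.normal(size=(N, s))

speeds = rng.uniform(0.5, 5.0, size=N)   # assumed per-client compute speeds
order = np.argsort(-speeds)              # fastest clients first

def fedavg_round(w, clients, lr=0.05, local_steps=5):
    """One synchronous round: local gradient steps per client, then average."""
    updates = []
    for c in clients:
        wc = w.copy()
        for _ in range(local_steps):
            grad = X[c].T @ (X[c] @ wc - y[c]) / s
            wc -= lr * grad
        updates.append(wc)
    return np.mean(updates, axis=0)

w = np.zeros(d)
m = 2                                    # start with the two fastest clients
while m <= N:
    active = order[:m]
    stat_acc = 1.0 / (m * s)             # O(1/(ms)) statistical-accuracy proxy
    for _ in range(1000):
        w_new = fedavg_round(w, active)
        done = np.linalg.norm(w_new - w) < stat_acc
        w = w_new                        # stage output warm-starts next stage
        if done:
            break
    m *= 2                               # double the participating set
```

Because each stage only trains to the statistical accuracy its data can support, the slow clients are idle exactly while their participation could not improve the model anyway, which is the source of the claimed runtime savings.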

Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
