FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning

Federated Learning: A Cutting-Edge Survey of the Latest Advancements and Applications

arXiv (Cornell University), 2023

Robust machine learning (ML) models can be developed by leveraging large volumes of data and distributing the computational tasks across numerous devices or servers. Federated learning (FL) is a technique in the realm of ML that facilitates this goal by utilizing cloud infrastructure to enable collaborative model training among a network of decentralized devices. Beyond distributing the computational load, FL targets the resolution of privacy issues and the reduction of communication costs simultaneously. To protect user privacy, FL requires users to send model updates rather than transmitting large quantities of raw and potentially confidential data. Specifically, individuals train ML models locally using their own data and then upload the results in the form of weights and gradients to the cloud for aggregation into the global model. This strategy is also advantageous in environments with limited bandwidth or high communication costs, as it prevents the transmission of large data volumes. With the increasing volume of data and rising privacy concerns, alongside the emergence of large-scale ML models like Large Language Models (LLMs), FL presents itself as a timely and relevant solution. It is therefore essential to review current FL algorithms to guide future research that meets the rapidly evolving ML demands. This survey provides a comprehensive analysis and comparison of the most recent FL algorithms, evaluating them on various fronts including mathematical frameworks, privacy protection, resource allocation, and applications. Beyond summarizing existing FL methods, this survey identifies potential gaps, open areas, and future challenges based on the performance reports and algorithms used in recent studies. This survey enables researchers to readily identify existing limitations in the FL field for further exploration.

Trading Off Privacy, Utility and Efficiency in Federated Learning

ACM Transactions on Intelligent Systems and Technology

Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information. Appropriate protection mechanisms have to be adopted to fulfill the opposing requirements of preserving privacy and maintaining high model utility. In addition, a federated learning system must achieve high efficiency in order to enable large-scale model training and deployment. We propose a unified federated learning framework that reconciles horizontal and vertical federated learning. Based on this framework, we formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction, which leads us to the No-Free-Lunch (NFL) theorem for the federated learning system. NFL indicates that it is unrealistic to expect an FL algorithm to simultaneously provide excellent privacy, utility, and efficiency in certain scenarios. We then analyze the lower bounds for the privacy leakage, ...

An Empirical Study of Efficiency and Privacy of Federated Learning Algorithms

arXiv (Cornell University), 2023

In today's world, the rapid expansion of IoT networks and the proliferation of smart devices in our daily lives have resulted in the generation of substantial amounts of heterogeneous data. These data form a stream that requires special handling. To handle this data effectively, advanced data processing technologies are necessary to guarantee the preservation of both privacy and efficiency. Federated learning emerged as a distributed learning method that trains models locally and aggregates them on a server to preserve data privacy. This paper showcases two illustrative scenarios that highlight the potential of federated learning (FL) as a key to delivering efficient and privacy-preserving machine learning within IoT networks. We first give the mathematical foundations for key aggregation algorithms in federated learning, i.e., FedAvg and FedProx. Then, we conduct simulations using the Flower framework to show the efficiency of these algorithms by training deep neural networks on common datasets, and compare the accuracy and loss metrics of FedAvg and FedProx. Finally, we present results highlighting the trade-off between maintaining privacy and accuracy via simulations involving the differential privacy (DP) method, implemented in the PyTorch and Opacus ML frameworks, on common FL datasets and data distributions for both FedAvg and FedProx strategies.
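The aggregation rules this abstract refers to can be sketched compactly. Below is a minimal toy illustration (not the paper's implementation) of the FedAvg server step and the FedProx proximal term, using hypothetical helper names and 1-D quadratic client losses as a stand-in for real models:

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Server step of FedAvg: average client models weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def fedprox_local_grad(grad_fn, w, w_global, mu):
    """FedProx local gradient: the proximal term mu * (w - w_global)
    pulls each client's iterate toward the current global model."""
    return grad_fn(w) + mu * (w - w_global)

# Toy setup: two clients with quadratic losses f_i(w) = 0.5 * (w - t_i)^2
targets = [np.array([1.0]), np.array([3.0])]
sizes = [100, 300]
w_global = np.zeros(1)

for _ in range(50):                     # communication rounds
    local_models = []
    for t, n in zip(targets, sizes):
        w = w_global.copy()
        for _ in range(5):              # local gradient steps
            g = fedprox_local_grad(lambda v, t=t: v - t, w, w_global, mu=0.1)
            w -= 0.1 * g
        local_models.append(w)
    w_global = fedavg_aggregate(local_models, sizes)
# w_global converges to the size-weighted optimum (100*1 + 300*3)/400 = 2.5
```

With `mu=0` the inner loop reduces to plain FedAvg local SGD; the proximal term only changes the local trajectories, not the weighted fixed point in this convex toy.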

Ensuring Fairness and Gradient Privacy in Personalized Heterogeneous Federated Learning

ACM transactions on intelligent systems and technology, 2024

With the increasing tension between the need for large amounts of data for effective machine learning-based analysis and the need to ensure their privacy, the paradigm of federated learning has emerged: a distributed machine learning setting where the clients provide only the machine learning model updates to the server rather than the actual data used for decision making. However, the distributed nature of federated learning raises specific challenges related to fairness in a heterogeneous setting. This motivates the focus of our article on the heterogeneity of client devices with different computational capabilities and their impact on fairness in federated learning. Furthermore, our aim is to achieve fairness under heterogeneity while ensuring privacy. As far as we are aware, no existing works address all three aspects of fairness, device heterogeneity, and privacy simultaneously in federated learning. In this article, we propose a novel federated learning algorithm with personalization in the context of heterogeneous devices while maintaining compatibility with the gradient privacy preservation techniques of secure aggregation. We analyze the proposed federated learning algorithm under different environments with different datasets and show that it achieves performance close to or greater than the state of the art in heterogeneous device personalized federated learning. We also provide theoretical proofs for the fairness and convergence properties of our proposed algorithm. CCS Concepts: • Computing methodologies → Distributed algorithms; Machine learning; Distributed artificial intelligence; • Security and privacy → Privacy-preserving protocols; • Social and professional topics → User characteristics;

FedDec: Peer-to-peer Aided Federated Learning

arXiv (Cornell University), 2023

Federated learning (FL) has enabled training machine learning models exploiting the data of multiple agents without compromising privacy. However, FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server, which are nonetheless three distinctive characteristics of this framework. While much of the recent literature has tackled these weaknesses using different tools, only a few works have explored the possibility of exploiting inter-agent communication to improve FL's performance. In this work, we present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging (similar to decentralized learning in networks) between the local gradient updates of FL. We analyze the convergence of FedDec under the assumptions of non-iid data distribution, partial device participation, and smooth and strongly convex costs, and show that inter-agent communication alleviates the negative impact of infrequent communication rounds with the server by reducing the dependence on the number of local updates H from O(H^2) to O(H). Furthermore, our analysis reveals that the improved term in the bound is multiplied by a constant that depends on the spectrum of the inter-agent communication graph and that vanishes quickly the more connected the network is. We confirm the predictions of our theory in numerical simulations, where we show that FedDec converges faster than FedAvg, and that the gains are greater as either H or the connectivity of the network increases.
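The interleaving of local updates and peer averaging described above can be sketched in a few lines. This is a generic decentralized-averaging toy (not FedDec's actual pseudocode): each agent takes a local gradient step and then mixes its model with its neighbors through a doubly stochastic matrix W, here Metropolis weights on a 4-agent ring:

```python
import numpy as np

def peer_averaging_round(models, grads, W, lr):
    """One round of interleaved local update + peer-to-peer mixing:
    every agent steps on its own gradient, then averages with neighbors."""
    stepped = models - lr * grads   # local gradient updates
    return W @ stepped              # mixing via the doubly stochastic W

# 4 agents on a ring; Metropolis weights give a doubly stochastic W
W = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])

models  = np.array([[0.0], [4.0], [8.0], [12.0]])  # one scalar model per agent
targets = np.full((4, 1), 2.0)                     # common optimum at 2.0

for _ in range(100):
    grads = models - targets        # gradient of 0.5 * (w - 2)^2
    models = peer_averaging_round(models, grads, W, lr=0.1)
# all agents contract toward the shared optimum 2.0
```

The spectral gap of W controls how fast the initial disagreement between agents is averaged out, which mirrors the graph-dependent constant in the paper's bound.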

Provably Personalized and Robust Federated Learning

arXiv (Cornell University), 2023

Identifying clients with similar objectives and learning a model-per-cluster is an intuitive and interpretable approach to personalization in federated learning. However, doing so with provable and optimal guarantees has remained an open challenge. We formalize this problem as a stochastic optimization problem, achieving optimal convergence rates for a large class of loss functions. We propose simple iterative algorithms which identify clusters of similar clients and train a personalized model-per-cluster, using local client gradients and flexible constraints on the clusters. The convergence rates of our algorithms asymptotically match those obtained if we knew the true underlying clustering of the clients and are provably robust in the Byzantine setting where some fraction of the clients are malicious.

DP-FEDAW: FEDERATED LEARNING WITH DIFFERENTIAL PRIVACY IN NON-IID DATA

Granthaalayah Publication and Printers, 2023

Federated learning can effectively utilize data from various users to jointly train machine learning models while ensuring that data does not leave the user's device. However, it also faces the challenges of slow global model convergence and even leakage of model parameters under heterogeneous data. To address this issue, this paper proposes a federated weighted average with differential privacy (DP-FedAW) algorithm, which studies the security and convergence issues of federated learning for non-independent and identically distributed (Non-IID) data. Firstly, the DP-FedAW algorithm quantifies the degree of Non-IID-ness of different user datasets and adjusts the aggregation weight of each user accordingly, effectively alleviating the model convergence problem caused by differences in Non-IID data during training. Secondly, a federated weighted average algorithm for privacy protection is designed to ensure that the model parameters meet differential privacy requirements. In theory, this algorithm provides privacy and security during training while accelerating model convergence. Experiments show that this algorithm converges faster than the federated averaging algorithm. In addition, as the privacy budget increases, the model's accuracy gradually approaches that of the noise-free model while model security is maintained. This study provides an important reference for ensuring model parameter security and improving the convergence rate of federated learning on Non-IID data.
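A differentially private weighted-average server step of the general kind described here can be sketched as follows. This is a hypothetical simplification, not DP-FedAW itself: function names are illustrative, the weights stand in for the paper's Non-IID-derived weights, and the noise scale `sigma * C` glosses over the fact that the true sensitivity of a weighted average depends on the maximum client weight:

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, C):
    """Bound each client's contribution to l2 norm at most C (sensitivity C)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, C / norm) if norm > 0 else update

def dp_weighted_average(updates, weights, C, sigma):
    """Clip each client update, take the weighted average, then add
    Gaussian noise calibrated to the clipping bound C."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    clipped = [clip(u, C) for u in updates]
    avg = sum(w * u for w, u in zip(weights, clipped))
    noise = rng.normal(0.0, sigma * C, size=avg.shape)
    return avg + noise

# Two clients: the first update (norm 5.0) gets clipped to norm 1.0
updates = [np.array([3.0, 4.0]), np.array([0.5, 0.0])]
out = dp_weighted_average(updates, weights=[1, 1], C=1.0, sigma=0.0)
# with sigma=0: average of [0.6, 0.8] and [0.5, 0.0] -> [0.55, 0.4]
```

Increasing `sigma` (a smaller privacy budget) adds more noise to the aggregate, which is exactly the accuracy-vs-privacy trade-off the experiments in the abstract measure.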

Personalized Federated Learning with Communication Compression

2022

In contrast to training traditional machine learning (ML) models in data centers, federated learning (FL) trains ML models over local datasets contained on resource-constrained heterogeneous edge devices. Existing FL algorithms aim to learn a single global model for all participating devices, which may not be helpful to all devices participating in the training due to the heterogeneity of the data across the devices. Recently, Hanzely and Richtárik (2020) proposed a new formulation for training personalized FL models aimed at balancing the trade-off between the traditional global model and the local models that could be trained by individual devices using their private data only. They derived a new algorithm, called loopless gradient descent (L2GD), to solve it and showed that this algorithm leads to improved communication complexity guarantees in regimes where more personalization is required. In this paper, we equip their L2GD algorithm with a bidirectional compression mechanism to further reduce the communication bottleneck between the local devices and the server. Unlike other compression-based algorithms used in the FL setting, our compressed L2GD algorithm operates on a probabilistic communication protocol, where communication does not happen on a fixed schedule. Moreover, our compressed L2GD algorithm maintains a similar convergence rate as vanilla SGD without compression. To empirically validate the efficiency of our algorithm, we perform diverse numerical experiments on both convex and non-convex problems and using various compression techniques.
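The compression operators such methods plug in are typically unbiased randomized maps. As one standard example (a generic compressor, not necessarily the one used in this paper), the rand-k sparsifier keeps k random coordinates and rescales them so the compression is unbiased in expectation:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_k(x, k):
    """Unbiased rand-k sparsifier: keep k random coordinates, rescale by d/k
    so that E[rand_k(x)] = x. Only k values (plus indices) are transmitted."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)
    return out

x = np.arange(8, dtype=float)
# Averaging many independent compressions recovers x (unbiasedness check);
# a single compression sends only 2 of 8 coordinates.
est = np.mean([rand_k(x, k=2) for _ in range(20000)], axis=0)
```

Unbiasedness is what lets compressed methods keep SGD-like convergence rates: the compression error enters the analysis only through an extra variance term that shrinks as k grows.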

Privacy-Preserving Federated Learning via Normalized (instead of Clipped) Updates

2021

Differentially private federated learning (FL) entails bounding the sensitivity to each client’s update. The customary approach used in practice for bounding sensitivity is to clip the client updates, which is just projection onto an ℓ2 ball of some radius (called the clipping threshold) centered at the origin. However, clipping introduces bias depending on the clipping threshold, and its impact on convergence has not been properly analyzed in the FL literature. In this work, we propose a simpler alternative for bounding sensitivity, namely normalization: using only the unit vector along the client update and completely discarding the magnitude information. We call this algorithm DP-NormFedAvg and show that it has the same order-wise convergence rate as FedAvg on smooth quasar-convex functions (an important class of non-convex functions for modeling optimization of deep neural networks) modulo the noise variance term (due to privacy). Further, assuming that the per-sample client lo...
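The contrast between the two sensitivity-bounding rules is simple to state in code. A minimal sketch (illustrative helper names, not the paper's implementation): clipping projects an update onto the ℓ2 ball of radius C and so leaves small updates untouched, while normalization maps every nonzero update onto the unit sphere, fixing the sensitivity at exactly 1:

```python
import numpy as np

def clip_update(u, C):
    """Standard DP clipping: project u onto the l2 ball of radius C.
    Updates with norm <= C pass through unchanged."""
    n = np.linalg.norm(u)
    return u if n <= C else u * (C / n)

def normalize_update(u):
    """DP-NormFedAvg-style bounding: keep only the direction (unit vector),
    discarding the magnitude, so every client's sensitivity is exactly 1."""
    n = np.linalg.norm(u)
    return u / n if n > 0 else u

small = np.array([0.3, 0.4])   # norm 0.5: inside the clipping ball
large = np.array([6.0, 8.0])   # norm 10.0: outside the clipping ball

clipped_small = clip_update(small, C=1.0)   # unchanged: [0.3, 0.4]
clipped_large = clip_update(large, C=1.0)   # rescaled to norm 1: [0.6, 0.8]
normed_small  = normalize_update(small)     # pushed to norm 1: [0.6, 0.8]
```

Under clipping, the magnitude of sub-threshold updates still influences the aggregate (a source of threshold-dependent bias); under normalization, every client contributes equally regardless of update magnitude.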

FEDERATED LEARNING: TRAINING ML MODELS COLLABORATIVELY ACROSS MULTIPLE DEVICES WITHOUT SHARING RAW DATA, PRESERVING PRIVACY

Rabindra Bharati University Journal of Economics, 2024

The importance of data security and privacy is rising in tandem with the need for machine learning models. One potential answer that has recently arisen is federated learning, a decentralised method of machine learning that enables several entities to work together and construct models without disclosing private information. With an emphasis on its privacy-preserving features, this all-encompassing examination delves into the concepts, methods, and uses of federated learning.