Hyungbin Kim - Academia.edu

Hyungbin Kim

Related Authors

Chu Myaet Thwal

Divin Dominic

Indraprastha Institute of Information Technology(IIIT), Delhi

Hunmin Lee

Aq. Aq.

morteza homayounfar

Dr. Tariq Umer

ShuQi Ke

The Chinese University of Hong Kong, Shenzhen

Uploads

Papers by Hyungbin Kim

K-FL: Kalman Filter-Based Clustering Federated Learning Method

IEEE Access

Federated learning is a distributed machine learning framework that enables a large number of devices to cooperatively train a model without data sharing. However, because federated learning trains a model using non-independent and identically distributed (non-IID) data stored at local devices, the resulting weight divergence causes a performance loss. This paper focuses on solving the non-IID problem and proposes a Kalman filter-based clustering federated learning method, called K-FL, which achieves a performance gain by providing each device with a specific, low-variance model. To the best of our knowledge, it is the first clustering federated learning method that can train a model in fewer communication rounds under a non-IID environment without any prior knowledge or an initial value set by the user. From simulations, we demonstrate that the proposed K-FL trains a model much faster, requiring fewer communication rounds than FedAvg and LG-FedAvg when testing neural networks on the MNIST, FMNIST, and CIFAR-10 datasets. Numerically, accuracy is improved on all datasets while the computational time cost is reduced by 1.43×, 1.67×, and 1.63× compared to FedAvg, respectively. INDEX TERMS Federated learning, distributed machine learning, clustering method, Kalman filter.
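The abstract describes clustering clients with Kalman-filtered statistics before per-cluster aggregation. The snippet below is a minimal, hypothetical Python sketch of that general idea, not the paper's reference implementation: a scalar Kalman filter smooths each client's update norm across rounds, clients are clustered on the filtered values, and weights are averaged per cluster as in FedAvg. The noise terms, the use of the update norm as the clustering feature, and the fixed cluster count are assumptions made here for illustration (the paper states K-FL needs no user-set initial values).

```python
# Hypothetical sketch of Kalman-filter-based clustering FL (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans

class ScalarKalman:
    """1-D Kalman filter used to smooth a per-client statistic across rounds."""
    def __init__(self, q=1e-3, r=1e-2):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and measurement noise (assumed values)

    def update(self, z):
        # Predict with identity dynamics, then correct with the new measurement z.
        self.p += self.q
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

def cluster_and_aggregate(client_weights, filters, n_clusters=3):
    """Cluster clients on Kalman-filtered update norms, then FedAvg within each cluster."""
    feats = []
    for cid, w in client_weights.items():
        feats.append(filters[cid].update(float(np.linalg.norm(w))))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        np.array(feats).reshape(-1, 1))
    cluster_models = {}
    for c in range(n_clusters):
        members = [w for (cid, w), lab in zip(client_weights.items(), labels) if lab == c]
        if members:
            cluster_models[c] = np.mean(members, axis=0)  # per-cluster averaged weights
    return labels, cluster_models
```

Each device would then receive the aggregated model of its own cluster rather than a single global model, which is the mechanism by which a clustering approach can reduce the variance seen by any individual client under non-IID data.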
