COMMUNICATION-EFFICIENT FEDERATED LEARNING METHOD VIA REDUNDANT DATA ELIMINATION


To address the impact of edge devices' limited network bandwidth on the communication efficiency of federated learning (FL), and to transmit local model updates efficiently for model aggregation, a communication-efficient federated learning method via redundant data elimination was proposed. The essential causes of redundant update parameters were analyzed according to the non-IID properties and distributed training characteristics of FL, novel definitions of sensitivity and loss-function tolerance for coresets were given, and a novel federated coreset construction algorithm was proposed. Furthermore, to fit the extracted coreset, a distributed adaptive sparse network model evolution mechanism was designed to dynamically adjust the model structure and training model size before each global training iteration, which reduces the number of communication bits between edge devices and the server while also guaranteeing training model accuracy. Experimental results show that the proposed method achieves a 17% reduction in transmitted communication bits with only a 0.5% degradation in model accuracy compared with the state-of-the-art method.
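The abstract does not spell out the coreset construction, but sensitivity-based coreset selection is commonly realized as importance sampling: each example's sensitivity score (here, a per-example loss is used as a hypothetical proxy) sets its sampling probability, and sampled points are reweighted so that weighted sums over the coreset remain unbiased estimates of sums over the full local dataset. The sketch below illustrates that general pattern only; the function name and the choice of loss as the sensitivity score are assumptions, not the paper's algorithm.

```python
import numpy as np

def sensitivity_coreset(scores, m, rng=None):
    """Sample a weighted coreset of size m by sensitivity (importance) sampling.

    `scores` are per-example sensitivity proxies (e.g., current losses);
    higher-sensitivity examples are sampled more often and down-weighted
    so weighted coreset sums stay unbiased for full-dataset sums.
    """
    rng = np.random.default_rng(rng)
    s = np.asarray(scores, dtype=float)
    probs = s / s.sum()                     # sampling distribution over examples
    idx = rng.choice(len(s), size=m, replace=True, p=probs)
    weights = 1.0 / (m * probs[idx])        # inverse-probability reweighting
    return idx, weights
```

With losses themselves as the quantity being summed, the weighted coreset sum reproduces the full-dataset sum exactly, since each weighted term equals `total / m`; for other per-example quantities the estimate is unbiased in expectation.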
