A Survey of Communication Optimization in Federated Learning
Keywords:
Federated learning, Model compression, Communication optimization

Abstract
The widespread adoption of artificial intelligence technologies, particularly deep learning, has exposed significant security vulnerabilities that pose substantial challenges to cyberspace safety. Traditional cloud-centric distributed machine learning relies on centralized data collection from participants, making the system vulnerable to security breaches and privacy violations during data exchange and model updates. These risks often result in system performance degradation or sensitive data exposure. Federated Learning (FL) has emerged as a privacy-preserving distributed machine learning paradigm that mitigates these issues. By exchanging encrypted model parameters between clients and a central parameter server while retaining raw data locally, FL, unlike conventional centralized training, enables collaborative model training with a significantly reduced risk of privacy leakage while preserving model performance. However, as deep learning models scale and FL tasks grow more complex, communication overhead becomes a major barrier to deployment. Consequently, optimizing FL communication efficiency has become a critical research focus.
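
To make the client-server parameter exchange described above concrete, the following is a minimal illustrative sketch of one FL communication round in the FedAvg style. It is not taken from the surveyed literature: the linear model, the `client_update` and `server_aggregate` helpers, and all hyperparameters are hypothetical, and the encryption applied to exchanged parameters in practice is omitted for brevity.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.01, epochs=1):
    """Local training sketch: the client refines the global model on its
    private data and returns only the updated weights (never the data)."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in local_data:
            # Gradient of a squared-error loss on one sample; a real
            # client would run its full deep-learning optimizer instead.
            grad = 2 * (w @ x - y) * x
            w -= lr * grad
    return w  # only model parameters are communicated

def server_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: average client models weighted by the
    size of each client's local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One communication round over three simulated clients.
rng = np.random.default_rng(0)
dim = 5
global_w = np.zeros(dim)
clients = [[(rng.normal(size=dim), rng.normal()) for _ in range(20)]
           for _ in range(3)]

updated = [client_update(global_w, data) for data in clients]
global_w = server_aggregate(updated, [len(d) for d in clients])
print("aggregated global weights:", global_w)
```

Each round of such an exchange transmits the full parameter vector in both directions, which is precisely the communication cost that the optimization techniques surveyed here aim to reduce.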
