Federated Learning is a relatively new branch of machine learning, introduced by Google in 2015, and it has been intensively explored since then. Instead of sending personal data from multiple devices to a central server for training, the data stays on each device: models are trained locally, and only the resulting updates are aggregated on a central server. The framework is considered "privacy-aware" because no raw data is communicated to the server or to other devices.
Yet the convergence of Federated Learning algorithms can be derailed by attacks from even a small fraction of malicious clients, so Byzantine robustness has recently received significant attention in this field. Despite this, the authors of the presented paper identify severe flaws in existing algorithms even when the data across participants is identically distributed. To address these issues, they present two surprisingly simple strategies: a new robust iterative clipping procedure for aggregation, and worker momentum to overcome time-coupled attacks. The result is, per the authors, the first provably robust method for the standard stochastic optimization setting.

Link: http://proceedings.mlr.press/v139/karimireddy21a/karimireddy21a.pdf
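To make the two strategies concrete, here is a minimal NumPy sketch. It is an illustrative reading of the paper's ideas, not the authors' reference implementation: the function names, the fixed number of clipping iterations, and the parameter defaults (`tau`, `beta`) are assumptions chosen for clarity.

```python
import numpy as np

def iterative_clip(updates, v0, tau, iters=3):
    """Sketch of iterative (centered) clipping aggregation.

    Each worker update is clipped relative to the current estimate v,
    so a few arbitrarily large Byzantine updates can only shift the
    aggregate by a bounded amount per iteration.

    updates: (n, d) array of worker updates
    v0:      (d,) starting point, e.g. the previous round's aggregate
    tau:     clipping radius (illustrative hyperparameter)
    """
    v = v0.astype(float)
    for _ in range(iters):
        diffs = updates - v                         # x_i - v
        norms = np.linalg.norm(diffs, axis=1)
        scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
        v = v + (diffs * scale[:, None]).mean(axis=0)
    return v

def worker_momentum(grad, m_prev, beta=0.9):
    """Local momentum kept on each worker (illustrative parameterization).

    Averaging a worker's stochastic gradients over time reduces their
    variance before aggregation, which is what defeats time-coupled
    attacks that exploit round-to-round noise.
    """
    return beta * m_prev + (1.0 - beta) * grad

# Toy round: 9 honest workers near (1, 1), one Byzantine outlier.
updates = np.vstack([np.ones((9, 2)), np.array([[100.0, -100.0]])])
agg = iterative_clip(updates, np.zeros(2), tau=5.0)
# agg stays close to the honest mean despite the outlier.
```

A plain mean of the same `updates` would land near (10.9, -9.1), dragged far from the honest workers by the single outlier, which is exactly the failure mode the clipping rule bounds.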