Abstract
Federated learning is a privacy-preserving machine learning paradigm that protects clients' data from privacy breaches, and federated learning algorithms are often further reinforced with differential privacy for additional protection. However, many existing federated learning algorithms are not robust against Byzantine clients. In online federated learning settings, such as real-time sensing and dynamic systems where the data varies over time, Byzantine clients pose a particularly serious challenge: they disrupt convergence by poisoning the local models of non-faulty clients. It is therefore important to develop an algorithm that is robust to Byzantine clients while still guaranteeing convergence to the sequence of global models over time. This work proposes such a robust algorithm based on online mirror descent with a guarantee of optimal convergence. The resulting regret bound is compared with that of the Federated Averaging algorithm and shows that the proposed algorithm performs well even in the presence of Byzantine clients.
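To make the idea concrete, the following is a minimal sketch of one round of Byzantine-robust online federated learning built on an online mirror descent (OMD) update. The abstract does not specify the mirror map or the robust aggregation rule, so this sketch assumes a squared-Euclidean mirror map (for which the OMD step reduces to projected gradient descent) and coordinate-wise median aggregation; both are illustrative assumptions, not the paper's stated method.

```python
# Illustrative sketch only: one server round of robust online federated learning.
# Assumed (not from the paper): coordinate-wise median aggregation and a
# squared-Euclidean mirror map, so the OMD update is a projected gradient step.
import numpy as np


def robust_aggregate(client_grads: np.ndarray) -> np.ndarray:
    """Coordinate-wise median over client gradients (shape: clients x dims).

    Unlike the plain mean used by Federated Averaging, the median tolerates a
    minority of Byzantine clients reporting arbitrary vectors.
    """
    return np.median(client_grads, axis=0)


def omd_step(x: np.ndarray, grad: np.ndarray, eta: float, radius: float) -> np.ndarray:
    """One OMD update with the squared-Euclidean mirror map, followed by
    projection onto an L2 ball of the given radius (the feasible set)."""
    y = x - eta * grad
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)


# Toy usage: 10 honest clients send noisy copies of the true gradient,
# 3 Byzantine clients send large adversarial vectors.
rng = np.random.default_rng(0)
dim = 5
x = np.zeros(dim)
true_grad = rng.normal(size=dim)
honest = true_grad + 0.1 * rng.normal(size=(10, dim))
byzantine = 100.0 * rng.normal(size=(3, dim))
grads = np.vstack([honest, byzantine])

x = omd_step(x, robust_aggregate(grads), eta=0.1, radius=1.0)
print(x)  # remains close to a step along the honest gradient despite the attackers
```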
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Unknown book |
| Pages | 66-70 |
| State | Published - 2023 |