Byzantine-Resilient Federated Learning With Differential Privacy Using Online Mirror Descent

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Federated learning is a privacy-preserving machine learning paradigm that protects client data against privacy breaches. Federated learning algorithms are often further reinforced with differential privacy to provide an added layer of privacy. Yet many existing federated learning algorithms are not robust against Byzantine clients. In online federated learning environments in particular, such as real-time sensing and dynamic systems where data varies over time, coping with Byzantine clients poses a serious challenge: Byzantine clients disrupt convergence by poisoning the local models of non-faulty clients. It is therefore important to develop an algorithm that is robust against Byzantine clients while still guaranteeing convergence to the sequence of global models over time. To that end, this work proposes a robust algorithm based on online mirror descent with a guarantee of optimal convergence. The resulting regret bound is compared with that of the Federated Averaging algorithm, and it shows that the proposed algorithm performs well even in the presence of Byzantine clients.
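The abstract only outlines the approach, so the following is an illustrative sketch rather than the paper's actual algorithm: one global round of Euclidean online mirror descent (which, under the mirror map 0.5·||x||², reduces to a plain gradient step) combined with gradient clipping plus Gaussian noise for differential privacy and a coordinate-wise median aggregation rule for Byzantine robustness. All function names, the robust aggregator, and the toy loss are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, c):
    """Scale gradient g to norm at most c (standard DP clipping)."""
    n = np.linalg.norm(g)
    return g * min(1.0, c / (n + 1e-12))

def omd_round(x, client_grads, eta, clip_norm=1.0, sigma=0.05):
    """One global round: clip and add Gaussian noise to each client
    gradient (Gaussian mechanism for DP), aggregate with the
    coordinate-wise median (a Byzantine-robust rule), then take a
    mirror-descent step. With the Euclidean mirror map 0.5*||x||^2
    the mirror step is an ordinary gradient step."""
    noisy = [clip(g, clip_norm) + rng.normal(0.0, sigma * clip_norm, g.shape)
             for g in client_grads]
    agg = np.median(np.stack(noisy), axis=0)  # robust to a minority of outliers
    return x - eta * agg

# Toy simulation: 7 honest clients share the loss 0.5*||x - x_star||^2,
# while 3 Byzantine clients send large random vectors every round.
x_star = np.ones(5)
x = np.zeros(5)
for _ in range(200):
    grads = [x - x_star for _ in range(7)]
    grads += [rng.normal(0.0, 100.0, 5) for _ in range(3)]
    x = omd_round(x, grads, eta=0.3)

print(np.linalg.norm(x - x_star))  # small despite the Byzantine clients
```

Because fewer than half of the clients are Byzantine, the per-coordinate median lands inside the honest cluster, so the global model still drifts toward the honest optimum; a plain average would be dragged away by the attackers' large vectors.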
Original language: English
Title of host publication: Unknown book
Pages: 66-70
State: Published - 2023

