Abstract
Federated learning allows distributed devices to jointly train a global model without exposing their raw data. Conventional federated learning adopts either the cross-device setting or the cross-silo setting; this work instead focuses on a hybrid architecture that combines the two. The hybrid architecture alleviates congestion at the central server that arises in cross-device settings, and it mitigates the convergence instability seen in cross-silo settings. Furthermore, this work proposes two online federated learning algorithms suited to real-time applications, unlike many existing federated learning algorithms. The first algorithm, Online Federated Learning with Compression (OFedCom), is designed for the full-information setting, where the time-varying loss function is observable and its gradients can be computed. The second algorithm, Online Federated Learning with Compression and Bandit Feedback (OFedCom-B), is designed for the bandit setting, where the time-varying loss function is not observable and its gradient cannot be computed. Two compression techniques are incorporated into the proposed algorithms to overcome communication bottlenecks while preserving good convergence. Separate regret analyses are provided for both convex and non-convex time-varying loss functions. Simulation results show faster convergence and tighter regret bounds than existing algorithms.
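The abstract's core ideas (an online gradient step per device, compressed model updates, and a bandit-feedback gradient estimate) can be illustrated with a minimal sketch. Note this is not the paper's actual method: the compressor here is a generic top-k sparsifier, the bandit estimator is a standard one-point spherical estimate, and all function names (`top_k_compress`, `one_point_gradient_estimate`, `online_fed_round`) and parameters are illustrative assumptions.

```python
import numpy as np

def top_k_compress(vec, k):
    """Illustrative compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(vec)
    idx = np.argsort(np.abs(vec))[-k:]
    out[idx] = vec[idx]
    return out

def one_point_gradient_estimate(loss_fn, x, delta, rng):
    """Standard one-point gradient estimate for the bandit setting:
    only a single loss value is observed, no gradient oracle is used."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)            # random unit direction
    return (d / delta) * loss_fn(x + delta * u) * u

def online_fed_round(global_model, device_grads, lr=0.1, k=2):
    """One communication round: each device takes an online gradient step on its
    current (time-varying) loss, compresses its update, and the server averages."""
    updates = [top_k_compress(-lr * g(global_model), k) for g in device_grads]
    return global_model + np.mean(updates, axis=0)

# Usage sketch with hypothetical per-device quadratic losses f_i(w) = ||w - c_i||^2.
centers = [np.array([1.0, 2, 3, 4, 5]), np.array([5.0, 4, 3, 2, 1])]
grads = [lambda w, c=c: 2 * (w - c) for c in centers]
w = np.zeros(5)
for _ in range(100):
    w = online_fed_round(w, grads, lr=0.05, k=3)
```

In this toy run, `w` drifts toward the average of the device optima even though each device transmits only a sparsified update, which is the communication-saving idea the abstract describes.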
| Original language | English |
|---|---|
| Pages (from-to) | 191046-191058 |
| Number of pages | 13 |
| Journal | IEEE Access |
| Volume | 12 |
| Issue number | |
| DOIs | |
| State | Published - Jan 1 2024 |
Keywords
- bandit
- compression
- federated learning
- graph theory
- online learning