Byzantine-Robust FL: Defending Against Malicious Devices
Federated learning has an adversary problem.
When training across thousands or millions of devices, you can't trust everyone. Some devices may be:
- Compromised by malware
- Malicious (intentionally poisoning the model)
- Faulty (hardware errors, bugs)
- Adversarially motivated (competitors, attackers)
A single malicious device uploading carefully crafted gradients can completely destroy model accuracy when the server naively averages updates. Without defenses, federated learning is vulnerable to Byzantine attacks, named after the Byzantine Generals' Problem, in which some participants may be traitors.
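To see why naive averaging is fragile, here is a minimal toy sketch (not Octomil's implementation, and using scalar "gradients" for simplicity): one Byzantine client can drag the mean arbitrarily far from the honest consensus, while a robust statistic like the coordinate-wise median stays put.

```python
from statistics import mean, median

# Nine honest clients report gradient values near the true value 1.0;
# one Byzantine client reports an enormous value.
honest = [1.0, 0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98, 1.02]
byzantine = [1e6]
updates = honest + byzantine

mean_agg = mean(updates)      # dragged far from 1.0 by a single attacker
median_agg = median(updates)  # ignores the outlier, stays near 1.0

print(f"mean: {mean_agg:.1f}, median: {median_agg:.2f}")
```

The same idea extends to real gradient vectors by applying the median per coordinate, which is one of the standard Byzantine-robust aggregation rules discussed below.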
This post explores Byzantine-robust aggregation methods and how Octomil implements defenses against adversarial devices.