2 posts tagged with "heterogeneity"

Handling Device Heterogeneity: Asynchronous FL for the Real World

· 9 min read

The textbook version of federated learning assumes a perfect world:

  • All devices have similar compute power
  • Network connections are equally fast
  • Devices complete training at roughly the same time
  • No one drops out mid-round

Reality: None of these assumptions hold.

In production FL, you're coordinating across:

  • iPhone 15 Pro (6-core CPU, 16-core GPU) vs. budget Android (4-core, no GPU)
  • Urban 5G (1 Gbps) vs. rural 3G (0.5 Mbps)
  • Always-plugged smart display vs. battery-conscious smartphone
  • Reliable edge server vs. intermittent mobile device

This post explores how Octomil handles the chaos of real-world device heterogeneity through asynchronous federated learning.
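One common way asynchronous FL copes with stragglers is to apply each client update as it arrives, discounted by how stale it is. Here is a minimal sketch of staleness-weighted aggregation; the function names and the polynomial discount are illustrative assumptions, not Octomil's actual API.

```python
# Sketch of staleness-weighted asynchronous aggregation (hypothetical,
# not Octomil's API): the server blends in each client update as it
# arrives, discounting updates computed against older model versions.

def staleness_weight(staleness: int, alpha: float = 0.6) -> float:
    """Polynomial discount: the older the update, the smaller its weight."""
    return alpha * (staleness + 1) ** -0.5

def async_aggregate(global_model, client_update, client_version, server_version):
    """Blend a (possibly stale) client update into the global model."""
    staleness = server_version - client_version
    w = staleness_weight(staleness)
    return [(1 - w) * g + w * c for g, c in zip(global_model, client_update)]

model = [0.0, 0.0]
# A fresh update (client trained on the current version) moves the model strongly...
model = async_aggregate(model, [1.0, 1.0], client_version=10, server_version=10)
# ...while an update trained 5 versions ago is discounted.
model = async_aggregate(model, [5.0, 5.0], client_version=5, server_version=10)
```

Because no round ever waits for the slowest device, a budget phone on 3G can still contribute; its update just counts for less the longer it takes to arrive.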

Personalized Federated Learning: One Global Model, Many Local Needs

· 7 min read

The fundamental premise of federated learning is to train a single global model across diverse devices. But what happens when "one size fits all" doesn't fit anyone particularly well?

The personalization dilemma: A global keyboard prediction model trained on millions of devices might be mediocre for everyone—users who text in multiple languages, users with specialized vocabularies (medical, legal), or users with unique writing styles all suffer from a lowest-common-denominator model.

This post explores how personalized federated learning enables Octomil to deliver both collective intelligence and individual adaptation.
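One widely used personalization scheme splits the model into shared "base" layers, aggregated globally, and a per-device "head" that never leaves the device (the FedPer approach). A minimal sketch, with device data and layer shapes invented for illustration; this is not Octomil's actual implementation:

```python
# Hypothetical sketch of base/head personalization: the server runs
# FedAvg over the shared base parameters only, while each device keeps
# its own head adapted to its local data (language mix, vocabulary).

def aggregate_bases(client_bases, weights):
    """Weighted FedAvg over the shared base parameters only."""
    total = sum(weights)
    n = len(client_bases[0])
    return [
        sum(w * base[i] for w, base in zip(weights, client_bases)) / total
        for i in range(n)
    ]

# Two devices share a base but keep private, personalized heads.
device_a = {"base": [0.2, 0.4], "head": [1.0]}   # e.g. single-language texter
device_b = {"base": [0.6, 0.8], "head": [-1.0]}  # e.g. bilingual user

# Server aggregates only the base; heads never leave the device.
new_base = aggregate_bases([device_a["base"], device_b["base"]], weights=[1, 1])
device_a["base"] = device_b["base"] = new_base
# Both devices now share collective knowledge in the base while their
# heads remain adapted to individual usage.
```

The design choice is the split point: more shared layers means more collective intelligence, more local layers means more room for individual adaptation.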