
Beyond Cross-Entropy: Federated AUC Maximization and X-Risk Optimization


The default objective in machine learning is minimizing cross-entropy loss. It's what PyTorch offers out of the box (`nn.CrossEntropyLoss`). It's what most federated learning papers optimize. It's simple, well understood, and works great for balanced classification problems.

But real-world applications rarely have balanced classes or standard objectives.

  • Medical diagnosis: 1% of patients have disease → 99% accuracy means predicting "no disease" for everyone
  • Fraud detection: 0.1% of transactions are fraudulent → minimizing average cross-entropy rewards a model that ignores the rare class
  • Financial risk: We care about tail risk (the 1% worst-case scenarios), not average loss
  • Imbalanced federated data: Some devices have only positive examples, others only negatives
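The first bullet's arithmetic is easy to verify numerically. Here is a minimal sketch (using NumPy, not any Octomil API) showing that on data with ~1% positives, a model that always predicts "no disease" scores ~99% accuracy yet has an AUC of exactly 0.5, i.e. chance-level ranking:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = (rng.random(n) < 0.01).astype(int)  # ~1% positive labels

# Trivial "model": predict "no disease" (score 0) for everyone.
scores = np.zeros(n)
pred = np.zeros(n, dtype=int)

# Accuracy is dominated by the majority class: ~0.99.
accuracy = (pred == y).mean()

# AUC via its pairwise definition: P(score_pos > score_neg),
# with ties counted as 0.5. A constant score cannot rank any
# positive above any negative, so this comes out to exactly 0.5.
pos = scores[y == 1]
neg = scores[y == 0]
auc = ((pos[:, None] > neg[None, :]).mean()
       + 0.5 * (pos[:, None] == neg[None, :]).mean())

print(f"accuracy={accuracy:.3f}, AUC={auc:.3f}")
```

This is exactly the gap AUC maximization targets: AUC measures ranking quality across the class imbalance, so the degenerate constant classifier gets no credit for its 99% accuracy.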

This post explores specialized optimization objectives for federated learning and how Octomil supports them through recent breakthroughs from Guo, Yang, and collaborators.