A Stochastic Multi-Rate Control Framework For Modeling Distributed Optimization Algorithms

Research output: Contribution to journal › Conference article › peer-review

Abstract

In modern machine learning systems, distributed algorithms are deployed across applications to preserve data privacy and make efficient use of computational resources. This work offers a fresh perspective for modeling, analyzing, and designing distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms, including the popular Gradient Tracking for decentralized learning and FedPD and Scaffold for federated learning, can be modeled as a certain discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework to analyze the convergence of the entire algorithm class. It also makes it easy to add desirable features such as differential privacy guarantees, and to handle practical settings such as partial agent participation, communication compression, and imperfect communication in algorithm design and analysis.
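To make the algorithm class concrete, below is a minimal sketch of decentralized Gradient Tracking, one of the algorithms the abstract names, written as a discrete-time feedback loop: a consensus/mixing step acts on the iterates while a second state variable tracks the average gradient. The quadratic objective, ring-graph mixing matrix W, and step size alpha are hypothetical choices for illustration, not the paper's setup or framework.

```python
import numpy as np

# Hypothetical setup: n agents each hold a local quadratic
# f_i(x) = 0.5 * a_i * x^2 + b_i * x; the goal is to minimize their average.
rng = np.random.default_rng(0)
n = 5
a = rng.uniform(1.0, 2.0, n)   # local curvatures
b = rng.uniform(-1.0, 1.0, n)  # local linear terms

def grad(i, x):
    """Gradient of agent i's local objective at x."""
    return a[i] * x + b[i]

# Doubly stochastic mixing matrix for a ring graph (Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

alpha = 0.1                        # step size (hypothetical)
x = rng.standard_normal(n)         # local iterates, one per agent
y = np.array([grad(i, x[i]) for i in range(n)])  # trackers start at local gradients

for _ in range(200):
    # State update: consensus mixing plus feedback from the gradient tracker.
    x_new = W @ x - alpha * y
    # Tracker update: mix, then correct with the change in local gradients,
    # so the trackers follow the network-average gradient.
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x = x_new

x_star = -b.sum() / a.sum()        # minimizer of the average objective
print(f"max consensus error: {np.abs(x - x_star).max():.2e}")
```

In the paper's control-system reading, the iterate update plays the role of the plant and the tracker acts as a feedback signal estimating the global gradient; stochastic gradients and multiple sampling rates (e.g., local steps between communication rounds, as in federated methods like Scaffold) extend this same loop structure.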

Original language: English (US)
Pages (from-to): 26206-26222
Number of pages: 17
Journal: Proceedings of Machine Learning Research
Volume: 162
State: Published - 2022
Event: 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States
Duration: Jul 17, 2022 - Jul 23, 2022

Bibliographical note

Publisher Copyright:
Copyright © 2022 by the author(s).
