Penalty Dual Decomposition Method for Nonsmooth Nonconvex Optimization - Part I: Algorithms and Convergence Analysis

Qingjiang Shi, Mingyi Hong

Research output: Contribution to journal › Article › peer-review


Abstract

Many contemporary signal processing, machine learning and wireless communication applications can be formulated as nonconvex nonsmooth optimization problems. Often there is a lack of efficient algorithms for these problems, especially when the optimization variables are nonlinearly coupled in some nonconvex constraints. In this work, we propose an algorithm named penalty dual decomposition (PDD) for these difficult problems and discuss its various applications. The PDD is a double-loop iterative algorithm. Its inner iteration is used to inexactly solve a nonconvex nonsmooth augmented Lagrangian problem via block-coordinate-descent-type methods, while its outer iteration updates the dual variables and/or a penalty parameter. In Part I of this work, we describe the PDD algorithm and establish its convergence to KKT solutions. In Part II we evaluate the performance of PDD by customizing it to three applications arising from signal processing and wireless communications.
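
To make the double-loop structure concrete, the sketch below instantiates PDD on a toy problem: minimize ||x - a||^2 subject to the bilinear constraint x1*x2 = 1, a simple instance of nonlinearly coupled variables in a nonconvex constraint. This is an illustrative reconstruction from the abstract's description, not the paper's own pseudocode; the problem data and parameter choices (a, rho, c, eta) are hypothetical. The inner loop runs block-coordinate-descent sweeps on the augmented Lagrangian L(x; λ, ρ) = ||x - a||^2 + (1/(2ρ))(h(x) + ρλ)^2, and the outer loop either updates the multiplier λ or shrinks the penalty parameter ρ, depending on the current constraint violation.

```python
import numpy as np

# Minimal PDD sketch on a toy problem (all data and parameters here are
# illustrative choices, not values from the paper):
#   minimize  ||x - a||^2   subject to  h(x) = x1*x2 - 1 = 0,
# a nonconvex bilinear coupling constraint. Augmented Lagrangian:
#   L(x; lam, rho) = ||x - a||^2 + (1/(2*rho)) * (h(x) + rho*lam)^2.

def h(x):
    return x[0] * x[1] - 1.0

def al_block_update(x, lam, rho, a):
    """One BCD sweep: with the other coordinate fixed, each block
    subproblem of the AL is an unconstrained quadratic in one
    variable, so it is minimized in closed form."""
    # Setting d/dx1 = 2*(x1 - a1) + (x2/rho)*(x1*x2 - 1 + rho*lam) = 0
    x = x.copy()
    x[0] = (2*a[0] + x[1]*(1.0 - rho*lam)/rho) / (2.0 + x[1]**2/rho)
    x[1] = (2*a[1] + x[0]*(1.0 - rho*lam)/rho) / (2.0 + x[0]**2/rho)
    return x

def pdd(a, x0, lam=0.0, rho=1.0, c=0.7, eta=1.0,
        outer_iters=100, inner_sweeps=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        # Inner loop: inexactly minimize the AL by BCD sweeps.
        for _ in range(inner_sweeps):
            x_new = al_block_update(x, lam, rho, a)
            done = np.linalg.norm(x_new - x) < tol
            x = x_new
            if done:
                break
        # Outer loop: dual update if the iterate is feasible enough,
        # otherwise shrink rho to strengthen the penalty.
        if abs(h(x)) <= eta:
            lam = lam + h(x) / rho   # multiplier (dual ascent) step
        else:
            rho = c * rho            # penalty update
        eta = 0.9 * eta              # tighten the feasibility target
        if abs(h(x)) < tol:
            break
    return x, lam

x, lam = pdd(a=np.array([2.0, 0.0]), x0=np.array([1.0, 1.0]))
print("x =", x, " h(x) =", h(x))
```

Note the convention that a smaller rho means a heavier penalty, so shrinking rho when the violation stalls drives the iterates toward feasibility, while the multiplier update is a dual ascent step with step size 1/rho.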

Original language: English (US)
Article number: 9120361
Pages (from-to): 4108-4122
Number of pages: 15
Journal: IEEE Transactions on Signal Processing
Volume: 68
DOIs
State: Published - 2020
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1991-2012 IEEE.

Keywords

  • BSUM
  • KKT
  • Penalty method
  • augmented Lagrangian
  • dual decomposition
  • nonconvex optimization
