Robust control perspectives on algorithm analysis and design
Applied Mathematics Seminar
Speaker: Laurent Lessard (Northeastern University)
Host: J. Forbes (CIM)
Abstract: Most complicated optimization problems, in particular those involving a large number of variables, are solved in practice using iterative algorithms. The problem of selecting a suitable algorithm is currently more of an art than a science; a great deal of expertise is required to know which algorithms to try and how to properly tune them. Moreover, there are seldom performance guarantees. In this talk, I will show how the problem of algorithm selection can be approached using tools from robust control theory. By solving simple semidefinite programs (that do not scale with problem size), we can derive robust bounds on convergence rates for popular algorithms such as the gradient method, proximal methods, fast/accelerated methods, and operator-splitting methods such as ADMM. The bounds derived in this manner either match or improve upon the best known bounds from the literature. The bounds also lead to a natural energy dissipation interpretation and an associated Lyapunov function. Finally, our framework can be used to efficiently search for algorithms that meet desired performance specifications, thus establishing a principled methodology for designing new algorithms. We give examples of novel algorithm designs to address distributed optimization and stochastic optimization problems.
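The semidefinite programs in the talk are beyond a short sketch, but their flavor can be illustrated for the simplest case the abstract mentions: gradient descent on an m-strongly convex function with L-Lipschitz gradient. There the robust-control analysis collapses to a 2×2 linear matrix inequality in a Lyapunov weight and one S-procedure multiplier. The sketch below (our own notation and step-size choice, with a crude grid search over the multiplier standing in for an SDP solver) bisects for the smallest certifiable contraction rate and compares it with the classical closed-form rate max(|1−αm|, |1−αL|):

```python
# Illustrative sketch: certify a worst-case contraction rate for gradient
# descent via a tiny LMI feasibility test, in the spirit of the robust-control
# framework described in the abstract. Names (m, L, alpha, lam) are our own.
import numpy as np

m, L = 1.0, 10.0       # f is m-strongly convex with L-Lipschitz gradient
alpha = 2.0 / (m + L)  # classic step size for x_{k+1} = x_k - alpha * grad f(x_k)

def feasible(rho):
    """Is there a multiplier lam >= 0 certifying |x_{k+1}| <= rho |x_k|?

    Write gradient descent as x_{k+1} = x_k - alpha * u_k with u_k = grad f(x_k).
    For such f, [x; u]^T S [x; u] >= 0 with the sector matrix S below, so by the
    S-procedure the contraction holds if N(lam) is negative semidefinite
    (checked here for a 2x2 matrix via: diagonal <= 0 and determinant >= 0).
    """
    lam = np.linspace(0.0, 1.0, 20001)          # grid search over the multiplier
    n11 = (1.0 - rho**2) - lam * m * L          # base term [[1-rho^2, -alpha],
    n22 = alpha**2 - lam                        #            [-alpha, alpha^2]]
    n12 = -alpha + lam * (m + L) / 2.0          # plus lam * S, entrywise
    det = n11 * n22 - n12**2
    return bool(np.any((n11 <= 1e-12) & (n22 <= 1e-12) & (det >= -1e-12)))

# Bisect for the smallest rate the LMI can certify.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if feasible(mid):
        hi = mid
    else:
        lo = mid
rho_lmi = hi

# Known worst-case rate of gradient descent on this function class.
rho_closed = max(abs(1.0 - alpha * m), abs(1.0 - alpha * L))  # = (L-m)/(L+m)
print(rho_lmi, rho_closed)
```

With α = 2/(m+L) the certified rate matches the known optimal value (L−m)/(L+m) up to grid resolution, which is the "bounds match the best known from the literature" phenomenon the abstract refers to; for richer algorithms (momentum, ADMM) the LMI grows slightly but stays independent of the problem dimension.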