CS 774: Optimization Techniques

Pre-requisites

Instructor’s consent. Familiarity with the basics of probability and statistics, as well as linear algebra, would be essential. In particular, fluency in the topics of the courses MTH101, MTH102, ESC101, and MSO201 (or an equivalent such as CS201) would be required for an enjoyable learning experience. Prior exposure to machine learning techniques would be desirable.

Short Description

The area of optimization has historically had a pervasive impact on the march of science and technology. One finds optimization playing a critical role even in contemporary areas such as decision and control, signal processing, and machine learning. This course intends to present a thorough treatment of optimization techniques with specific emphasis on modern applications. It will provide students with a sound background in the area and benefit those who wish to pursue doctoral- or master's-level theses in this subject, or to apply these techniques in their own areas.

Topics

  1. Background: convex analysis, linear and matrix algebra, probability theory
  2. Preliminaries: applications, optimality and duality conditions
  3. First Order Methods
    • Subgradient methods (a sketch follows this list)
    • Proximal methods
    • Multiplicative weights update, mirror descent (a sketch follows this list)
  4. Second Order Methods
    • Newton's method (a sketch follows this list)
    • Quasi-Newton methods, L-BFGS
  5. Stochastic Optimization Problems
    • Notion of regret, online to batch conversion
    • Methods offering vanishing regret - online gradient descent (OGD), exponentiated gradient (EG), online mirror descent (OMD); an OGD sketch follows this list
  6. Non-convex Optimization Problems
    • Applications - sparse recovery, affine rank minimization, low-rank matrix completion
    • Convex approaches - relaxation-based methods
    • Non-convex approaches - projected gradient descent, alternating minimization; an iterative hard thresholding sketch follows this list
  7. Special topics (a subset would be chosen depending on interest and available time)
    • Accelerated first order methods
    • Bayesian methods
    • Coordinate methods
    • Cutting plane methods
    • Interior point methods
    • Optimization methods for deep learning
    • Parallel and distributed methods
    • Robust optimization problems and methods
    • Stochastic mini-batch methods
    • Submodular optimization problems and methods
    • Variance reduced stochastic methods
    • Zeroth order methods
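
To make a few of the listed techniques concrete, some minimal sketches follow. They are illustrative only: all function names, toy data, and parameter choices below are assumptions made for exposition, not course implementations.

The first sketch runs the subgradient method (topic 3) on an l1-regularised least-squares objective f(x) = 0.5||Ax - b||^2 + λ||x||_1; the non-smooth l1 term forces a subgradient in place of a gradient, and the diminishing step size c/√t is the standard choice behind the O(1/√t) guarantee.

```python
import numpy as np

def subgradient_descent(A, b, lam, steps=500, c=0.1):
    """Minimise f(x) = 0.5*||Ax - b||^2 + lam*||x||_1 by the subgradient method.

    The l1 term is non-smooth; a valid subgradient at x uses sign(x), where
    sign(0) = 0 is one admissible choice from the interval [-1, 1].
    The diminishing step size c/sqrt(t) gives the classic O(1/sqrt(t)) rate.
    """
    x = np.zeros(A.shape[1])
    best_x, best_f = x.copy(), np.inf
    for t in range(1, steps + 1):
        g = A.T @ (A @ x - b) + lam * np.sign(x)  # a subgradient of f at x
        x = x - (c / np.sqrt(t)) * g
        f = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
        if f < best_f:                    # subgradient steps need not descend,
            best_x, best_f = x.copy(), f  # so track the best iterate seen
    return best_x, best_f

# toy usage: a random sparse regression instance
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3))
x_hat, f_hat = subgradient_descent(A, b, lam=0.1)
```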
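
Next, a sketch of the multiplicative weights update (topic 3), which can be read as mirror descent over the probability simplex under the entropy mirror map; the learning rate and the toy losses are illustrative.

```python
import numpy as np

def multiplicative_weights(loss_rows, eta=0.1):
    """Multiplicative weights update over n experts.

    Maintains a weight per expert; after seeing the loss vector l_t in
    [0, 1]^n, each weight i is scaled by exp(-eta * l_t[i]).  Playing the
    normalised weights is mirror descent on the probability simplex with
    the entropy mirror map.
    """
    n = len(loss_rows[0])
    w = np.ones(n)
    played = []
    for losses in loss_rows:
        p = w / w.sum()            # current distribution over experts
        played.append(p)
        w = w * np.exp(-eta * np.asarray(losses))
    return played

# toy usage: 10 experts, 100 rounds of random losses in [0, 1]
rng = np.random.default_rng(1)
dists = multiplicative_weights(rng.random((100, 10)), eta=0.1)
```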
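
For the second-order material (topic 4), a bare-bones Newton's method that solves the Newton linear system H(x)d = ∇f(x) at each step rather than inverting the Hessian; the quadratic test problem is an illustrative assumption.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton's method: x_{k+1} = x_k - H(x_k)^{-1} grad(x_k).

    Solves the Newton linear system instead of forming the inverse, and
    converges quadratically near a minimiser whose Hessian is positive
    definite.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:    # stationarity reached
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# toy usage: minimise f(x) = 0.5*x^T Q x - b^T x, whose minimiser solves Qx = b
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = newton(lambda x: Q @ x - b, lambda x: Q, np.zeros(2))
```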
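
For the online setting of topic 5, a sketch of online gradient descent (OGD) over a Euclidean ball, the setting in which it attains O(√T) regret against the best fixed decision in hindsight; the stream of quadratic losses is a toy assumption.

```python
import numpy as np

def online_gradient_descent(loss_grads, x0, radius=1.0, eta0=0.5):
    """Online gradient descent over the Euclidean ball of the given radius.

    At round t we play x_t, receive the gradient of the adversary's convex
    loss at x_t, step with eta_t = eta0/sqrt(t), and project back onto the
    ball.  This guarantees O(sqrt(T)) regret against the best fixed point.
    """
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t, grad in enumerate(loss_grads, start=1):
        iterates.append(x.copy())
        x = x - (eta0 / np.sqrt(t)) * grad(x)
        norm = np.linalg.norm(x)
        if norm > radius:              # Euclidean projection onto the ball
            x = x * (radius / norm)
    return iterates

# toy usage: quadratic losses f_t(x) = 0.5*(a_t.x - b_t)^2 arriving online
rng = np.random.default_rng(2)
data = [(rng.standard_normal(5), rng.standard_normal()) for _ in range(200)]
grads = [lambda x, a=a, b=b: (a @ x - b) * a for a, b in data]
xs = online_gradient_descent(grads, np.zeros(5))
```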
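
Finally, for the non-convex approaches of topic 6, a sketch of projected gradient descent for sparse recovery, usually called iterative hard thresholding (IHT); the measurement model and step-size choice are illustrative assumptions.

```python
import numpy as np

def iht(A, b, k, step=None, iters=300):
    """Iterative hard thresholding for  min ||Ax - b||^2  s.t.  ||x||_0 <= k.

    Projected gradient descent where the 'projection' onto the non-convex
    set of k-sparse vectors keeps the k largest-magnitude coordinates and
    zeroes the rest.
    """
    n = A.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / largest eigenvalue of A^T A
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (b - A @ x))  # gradient step on 0.5*||Ax - b||^2
        keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest entries
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0                      # hard-threshold everything else
    return x

# toy usage: recover a 5-sparse signal from 60 random measurements
rng = np.random.default_rng(3)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_rec = iht(A, A @ x_true, k=5)
```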

References

There will be no textbook for this course. Recent books, monographs, and research publications will form the bulk of the reference material for the course.

  1. S. Bubeck, Convex Optimization: Algorithms and Complexity, Foundations and Trends in Machine Learning, 8(3-4): 231-357, 2015.
  2. T. Hastie, R. Tibshirani and M. J. Wainwright, Statistical Learning with Sparsity: the Lasso and Generalizations, Chapman and Hall/CRC Press, 2015. http://web.stanford.edu/~hastie/StatLearnSparsity/
  3. E. Hazan, Introduction to Online Convex Optimization, draft, 2015. http://ocobook.cs.princeton.edu/
  4. S. Sra, S. Nowozin, and S. Wright (eds). Optimization for Machine Learning, The MIT Press, 2011.
  5. Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer Academic Publishers, 2004.
  6. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
  7. D. Bertsekas, Nonlinear Programming, Athena Scientific, 1999.
  8. A selection of papers from leading conferences and journals in optimization, as well as from applied areas such as signal processing, information theory, and machine learning.