Lecture 2c: Introduction to Convex Programming

πŸ“ Abstract

This lecture provides an introduction to convex programming and covers various aspects of optimization. It begins with an overview of optimization, including linear and nonlinear programming, duality and convexity, and approximation techniques. It then delves into more specific topics within continuous optimization, such as linear programming problems (LPPs) and their standard form, transformations to standard form, and the duality of linear programming problems. The lecture also touches on nonlinear programming, discussing the standard form of a nonlinear programming problem (NLPP) and the necessary conditions of optimality known as the Karush-Kuhn-Tucker (KKT) conditions. Convexity is another important concept explored here, with explanations of the definition of convex functions and their properties. The lecture also discusses the duality of convex optimization problems and its usefulness in computation. Finally, it briefly surveys unconstrained optimization techniques, descent methods, and approximation methods under constraints.

πŸ—ΊοΈ Overview

  • Introduction
  • Linear programming
  • Nonlinear programming
  • Duality and Convexity
  • Approximation techniques
  • Convex Optimization
  • Books and online resources

Classification of Optimization Problems

  • Continuous
    • Linear vs Non-linear
    • Convex vs Non-convex
  • Discrete
    • Polynomial-time solvable
    • NP-hard
      • Approximable
      • Non-approximable
  • Mixed

Continuous Optimization

(Figure: classification of continuous optimization problems.)

Linear Programming Problem

  • An LPP in standard form is:
    $$\min_x\ c^T x \quad \text{subject to}\quad Ax = b,\ x \ge 0.$$
  • The ingredients of an LPP are:
    • An $m \times n$ matrix $A$, with $m < n$
    • A vector $b \in \mathbb{R}^m$
    • A vector $c \in \mathbb{R}^n$

Example
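
For concreteness, a small instance in standard form, with hypothetical numbers ($m = 1$, $n = 3$):

$$\min\ -x_1 - 2x_2 \quad \text{subject to}\quad x_1 + x_2 + x_3 = 4, \quad x_1, x_2, x_3 \ge 0.$$

Here $x_3$ acts as a slack variable, so this is $\max\ x_1 + 2x_2$ subject to $x_1 + x_2 \le 4$ rewritten in standard form.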

Transformations to Standard Form

  • Theorem: Any LPP can be transformed into the standard form.
  • Variables not restricted in sign:
    • Decompose $x$ into two new variables: $x = x^{+} - x^{-}$ with $x^{+}, x^{-} \ge 0$.
  • Transforming inequalities into equalities:
    • Put in a slack variable $s$: rewrite $a^T x \le b$ as $a^T x + s = b$.
    • Set $s \ge 0$.
  • Transforming a max into a min:
    • max(expression) = -min(-expression).

Duality of LPP

  • If the primal problem of the LPP is: $\min\ c^T x$ subject to $Ax \ge b$, $x \ge 0$,
  • its dual is: $\max\ b^T y$ subject to $A^T y \le c$, $y \ge 0$.
  • If the primal problem is: $\min\ c^T x$ subject to $Ax = b$, $x \ge 0$,
  • its dual is: $\max\ b^T y$ subject to $A^T y \le c$, with $y$ unrestricted in sign (checked numerically below).
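
A quick numerical check of the second primal-dual pair, sketched with SciPy's `linprog` (the data below are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for a primal LP in standard form:
#   min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 2.0, 3.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

# Dual:  max b^T y  s.t.  A^T y <= c  (y free);
# linprog minimizes, so negate the objective.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None), method="highs")

print(primal.fun, -dual.fun)  # strong duality: both values are 5.0
```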

Nonlinear Programming

  • The standard form of an NLPP is:
    $$\min\ f(x) \quad \text{subject to}\quad g_j(x) \le 0, \quad j = 1, \dots, m.$$
  • Necessary conditions of optimality, the Karush-Kuhn-Tucker (KKT) conditions:
    • $\nabla f(x) + \sum_j \mu_j \nabla g_j(x) = 0$, with multipliers $\mu_j \ge 0$,
    • $\mu_j\, g_j(x) = 0$ for all $j$ (complementary slackness), together with feasibility $g_j(x) \le 0$ (a worked instance follows).
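
A small worked instance (illustrative): minimize $f(x) = x_1^2 + x_2^2$ subject to $g(x) = 1 - x_1 - x_2 \le 0$. Stationarity gives

$$2x_1 - \mu = 0, \qquad 2x_2 - \mu = 0,$$

so $x_1 = x_2 = \mu/2$. Complementary slackness $\mu\, g(x) = 0$ with $\mu > 0$ forces $x_1 + x_2 = 1$, hence $x^* = (1/2, 1/2)$ with $\mu^* = 1$.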

Convexity

  • A function $f : K \to \mathbb{R}$ is convex if $K$ is a convex set and
    $$f(\lambda x + (1 - \lambda) y) \le \lambda f(x) + (1 - \lambda) f(y) \quad \text{for all } x, y \in K,\ \lambda \in [0, 1].$$

  • Theorem: Assume that $f$ and the $g_j$ are convex differentiable functions. If the pair $(x^*, \mu^*)$ satisfies the KKT conditions above, $x^*$ is an optimal solution of the problem. If, in addition, $f$ is strictly convex, $x^*$ is the only solution of the problem.

    (Local minimum = global minimum)

Duality and Convexity

  • The dual is the NLPP $\max_{\mu \ge 0}\ \theta(\mu)$, where
    $$\theta(\mu) = \min_x \Big( f(x) + \sum_j \mu_j\, g_j(x) \Big).$$

  • The dual problem is always convex, regardless of whether the primal is.

  • Useful for computing a lower bound on the optimal value of a minimization problem (or an upper bound for a maximization problem); see the one-variable illustration below.
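
A one-variable illustration: for $\min\ x^2$ subject to $1 - x \le 0$, the dual function is

$$\theta(\mu) = \min_x \big( x^2 + \mu(1 - x) \big) = \mu - \frac{\mu^2}{4},$$

maximized at $\mu^* = 2$ with $\theta(2) = 1$, which matches the primal optimum $f(1) = 1$: the bound is tight here because the problem is convex.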

Applications

  • Statistics
  • Filter design
  • Power control
  • Machine learning
    • SVM classifier (see the sketch after this list)
    • Logistic regression
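
As an illustration of the SVM item, here is a minimal sketch of a soft-margin linear SVM posed as a convex QP in cvxpy; the data and the parameter C are made up:

```python
import cvxpy as cp
import numpy as np

# Toy two-class data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(+1, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])

w = cp.Variable(2)
b = cp.Variable()
C = 1.0  # regularization trade-off (assumed value)

# Soft-margin SVM: ridge penalty + hinge loss -- a convex QP.
hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b)))
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * hinge))
prob.solve()
print(w.value, b.value)  # separating hyperplane parameters
```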

Convexify the Non-convex

Change of curvature: square

Transform: into:

πŸ‘‰ Note that are monotonic concave functions in .

Generalization:

  • Consider the power $|H(\omega)|^2$ instead of the magnitude $|H(\omega)|$.
  • Square root -> spectral factorization, to recover $H$ from $|H|^2$ (see the instance sketched below).
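
A typical instance of this trick, assuming the filter-design setting suggested by the spectral-factorization remark: magnitude bounds $L(\omega) \le |H(\omega)| \le U(\omega)$ are non-convex in the filter coefficients, but in terms of the power $R(\omega) = |H(\omega)|^2$ they become

$$L(\omega)^2 \le R(\omega) \le U(\omega)^2,$$

which is linear (hence convex) in the autocorrelation coefficients that parameterize $R$; spectral factorization then recovers $H$ from $R$.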

Change of curvature: sine

Transform: into: Then:

πŸ‘‰ Note that are monotonic concave functions in .

Change of curvature: log

Transform: into: where .

Then:

Generalization:

  • Geometric programming (sketched below)
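
To make the geometric-programming generalization concrete (this is the standard construction): with the change of variables $x_i = e^{y_i}$, a posynomial objective or constraint

$$\sum_k c_k \prod_i x_i^{a_{ki}} \quad (c_k > 0)$$

becomes $\sum_k e^{a_k^T y + b_k}$ with $b_k = \log c_k$, and taking logs yields the convex log-sum-exp function $\log \sum_k e^{a_k^T y + b_k}$.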

Change of curvature: inverse

Transform: into:

Then:

πŸ‘‰ Note that , , and are monotonic functions.

Generalize to matrix inequalities

Transform: into:

Then:
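
A standard instance of this generalization (a known Schur-complement fact, offered here as an illustration): for $X \succ 0$,

$$Y \succeq X^{-1} \iff \begin{pmatrix} Y & I \\ I & X \end{pmatrix} \succeq 0,$$

so a constraint involving a matrix inverse becomes a linear matrix inequality, jointly convex in $(X, Y)$.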

Change of variables

Transform:

into: where .

Then:
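
One classical change of variables of this kind is the Charnes-Cooper transformation for linear-fractional programs (offered as an illustration):

$$\min_x\ \frac{c^T x + d}{e^T x + f} \quad \text{s.t.}\quad Ax \le b,\ e^T x + f > 0$$

becomes, with $y = x / (e^T x + f)$ and $t = 1 / (e^T x + f)$, the LP

$$\min_{y, t}\ c^T y + d\,t \quad \text{s.t.}\quad Ay \le b\,t,\ e^T y + f\,t = 1,\ t \ge 0.$$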

Generalize to matrix inequalities

Transform:

into: where .

Then:
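
A classical instance of this pattern, from LMI-based control (an illustration): stability of $\dot{z} = (A + BK)z$ requires $P \succ 0$ with

$$(A + BK)P + P(A + BK)^T \prec 0,$$

which is bilinear in $(K, P)$; the change of variables $Y = KP$ turns it into

$$AP + PA^T + BY + Y^T B^T \prec 0,$$

a matrix inequality linear in $(P, Y)$, and $K = YP^{-1}$ is recovered afterwards.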

Some operations that preserve convexity

  • $f$ is concave if and only if $-f$ is convex.
  • Nonnegative weighted sums:
    • If $w_1, \dots, w_n \ge 0$ and $f_1, \dots, f_n$ are all convex, then so is $w_1 f_1 + \cdots + w_n f_n$. In particular, the sum of two convex functions is convex.
  • Composition:
    • If $f$ and $g$ are convex functions and $g$ is non-decreasing over a univariate domain, then $h(x) = g(f(x))$ is convex. As an example, if $f$ is convex, then so is $e^{f(x)}$, because $e^x$ is convex and monotonically increasing.
    • If $f$ is concave and $g$ is convex and non-increasing over a univariate domain, then $h(x) = g(f(x))$ is convex.
    • Convexity is invariant under affine maps: if $f$ is convex, then so is $x \mapsto f(Ax + b)$ (see the sketch below).
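
These rules are exactly what disciplined convex programming (DCP) tools verify mechanically; a small sketch using cvxpy, assuming it is available:

```python
import cvxpy as cp

x = cp.Variable()
f = cp.square(x)                            # convex
print((-f).curvature)                       # CONCAVE: negation flips curvature
print((2 * f + 3 * cp.abs(x)).curvature)    # CONVEX: nonnegative weighted sum
print(cp.exp(f).curvature)                  # CONVEX: increasing convex g of convex f
print(cp.square(2 * x + 1).curvature)       # CONVEX: convexity survives affine maps
```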

Other thoughts

  • Minimizing a quasi-convex function subject to convex constraints can be reduced to a sequence of convex feasibility problems, e.g., by bisection on the optimal value (see the sketch after this list).
  • Replace a non-convex constraint with a sufficient convex condition (such as a conservative bound); the result may be suboptimal.
  • Relaxation + heuristic
  • Decomposition
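
A sketch of the bisection idea in cvxpy, with a hypothetical linear-fractional (quasi-convex) objective:

```python
import cvxpy as cp
import numpy as np

# Minimize the quasi-convex ratio f(x) = (c^T x) / (d^T x) over a polytope,
# assuming d^T x > 0 on the feasible set (hypothetical data).
c, d = np.array([1.0, 2.0]), np.array([1.0, 1.0])
x = cp.Variable(2)
base = [x >= 1, cp.sum(x) <= 4]

lo, hi = 0.0, 10.0
while hi - lo > 1e-6:
    t = (lo + hi) / 2
    # Sublevel-set test  f(x) <= t  <=>  c^T x - t * d^T x <= 0  (convex feasibility)
    prob = cp.Problem(cp.Minimize(0), base + [c @ x - t * (d @ x) <= 0])
    prob.solve()
    if prob.status == cp.OPTIMAL:
        hi = t      # feasible: the optimal value is at most t
    else:
        lo = t      # infeasible: the optimal value exceeds t
print(hi)
```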

Unconstrained Techniques

  • Line search methods
    • Fixed or variable step size
    • Interpolation
    • Golden section method
    • Fibonacci's method
  • Gradient methods
    • Steepest descent
    • Quasi-Newton methods
    • Conjugate gradient methods

General Descent Method

  1. Input: a starting point $x \in \operatorname{dom} f$
  2. Output: $x^*$
  3. repeat
    1. Determine a descent direction $\Delta x$.
    2. Line search: choose a step size $t > 0$.
    3. Update: $x := x + t\,\Delta x$.
  4. until a stopping criterion is satisfied.
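
A compact sketch of this template in Python, using the gradient direction and a backtracking line search (the test function and the parameters are illustrative):

```python
import numpy as np

def descent(f, grad, x0, tol=1e-8, max_iter=500):
    """General descent method: gradient direction plus backtracking
    line search (alpha, beta are assumed, typical values)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stopping criterion
            break
        dx = -g                       # descent direction (gradient descent)
        t, alpha, beta = 1.0, 0.25, 0.5
        while f(x + t * dx) > f(x) + alpha * t * g @ dx:
            t *= beta                 # shrink step until sufficient decrease
        x = x + t * dx                # update
    return x

# Usage: minimize the convex quadratic f(x) = ||x - 1||^2.
f = lambda x: np.sum((x - 1.0) ** 2)
grad = lambda x: 2.0 * (x - 1.0)
print(descent(f, grad, np.zeros(3)))  # ~ [1, 1, 1]
```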

Some Common Descent Directions

  • Gradient descent: $\Delta x = -\nabla f(x)$
  • Steepest descent:
    • $\Delta x = \|\nabla f(x)\|_*\, \Delta x_{\text{nsd}}$ (un-normalized), where $\Delta x_{\text{nsd}} = \arg\min \{ \nabla f(x)^T v : \|v\| \le 1 \}$
  • Newton's method: $\Delta x = -\nabla^2 f(x)^{-1} \nabla f(x)$
  • Conjugate gradient method:
    • $\Delta x$ is "orthogonal" to all previous $\Delta x$'s
  • Stochastic subgradient method:
    • $\Delta x$ is calculated from a set of sample data (instead of using all data)
  • Network flow problems:
    • $\Delta x$ is given by a "negative cycle" (or a "negative cut")

Approximation Under Constraints

  • Penalization and barriers (see the log-barrier formula below)
  • Dual method
  • Interior Point method
  • Augmented Lagrangian method
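
For the first item, the log-barrier formulation gives the flavor (standard construction):

$$\min_x\ f(x) \ \ \text{s.t.}\ g_j(x) \le 0 \qquad \leadsto \qquad \min_x\ f(x) - \frac{1}{t} \sum_j \log\big(-g_j(x)\big),$$

solved for an increasing sequence of $t > 0$; as $t \to \infty$, the barrier minimizers approach the constrained optimum. This construction is also the core of interior-point methods.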

πŸ“š Books and Online Resources

  • Pablo Pedregal. Introduction to Optimization. Springer, 2003. (O224 P371)
  • Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
  • H. D. Mittelmann and P. Spellucci. Decision Tree for Optimization Software. http://plato.la.asu.edu/guide.html, 2003.