The material here is from the ETH lecture Advanced Topics in Control. In spring 2020, the topic was large-scale convex optimization.
Large-scale here means on the order of 100k to 1B variables and constraints, a regime not typical of robotics applications. Some modeling tools and solvers: YALMIP and CVX (MATLAB), CVXPY (Python), and MOSEK (better suited to small- and medium-scale problems).
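To give a feel for the modeling layer, here is a minimal CVXPY sketch; the problem data are made up for illustration:

```python
import cvxpy as cp
import numpy as np

# Made-up data: a small least-squares problem with a norm-ball constraint,
# which CVXPY compiles down to a second-order cone program.
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [cp.norm(x, 2) <= 1]   # second-order cone constraint
prob = cp.Problem(objective, constraints)
prob.solve()                          # CVXPY selects a suitable solver

print("optimal value:", prob.value)
print("optimal x:", x.value)
```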
Lectures include the following topics (I also added a non-exhaustive introduction under each topic; better summaries are left for the future):
An indicator function can be introduced to translate a constrained problem into an unconstrained one.
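In standard notation (my own, not taken from the slides), for a constraint set $\mathcal{C}$:

$$
\min_{x \in \mathcal{C}} f(x)
\quad \Longleftrightarrow \quad
\min_{x} \; f(x) + \iota_{\mathcal{C}}(x),
\qquad
\iota_{\mathcal{C}}(x) =
\begin{cases}
0 & x \in \mathcal{C} \\
+\infty & x \notin \mathcal{C}
\end{cases}
$$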
In robotics, we often face such convex optimization problems.
In the general form of a second-order cone program (SOCP), the cost function is linear (as in an LP), and each inequality constraint requires an affine function of the variable to lie in a second-order cone.
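The standard textbook form (again my notation, not copied from the slides) over $x \in \mathbb{R}^n$:

$$
\begin{aligned}
\min_{x} \quad & f^{\top} x \\
\text{s.t.} \quad & \|A_i x + b_i\|_2 \le c_i^{\top} x + d_i, \quad i = 1, \dots, m \\
& F x = g
\end{aligned}
$$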
A conic program is a more general form than SOCP, where the inequality constraint only requires the affine function to lie in a convex cone $\mathcal{K}$ (for example, a product of second-order cones).
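In the same notation, the generic conic form reads:

$$
\begin{aligned}
\min_{x} \quad & c^{\top} x \\
\text{s.t.} \quad & A x + b \in \mathcal{K} \\
& F x = g
\end{aligned}
$$

Taking $\mathcal{K}$ to be a product of second-order cones recovers the SOCP above.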
In a semidefinite program (SDP), the inequality constraint becomes a linear matrix inequality (LMI).
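The standard inequality form, with symmetric matrices $F_1, \dots, F_n, G$:

$$
\begin{aligned}
\min_{x} \quad & c^{\top} x \\
\text{s.t.} \quad & x_1 F_1 + \dots + x_n F_n + G \preceq 0 \\
& A x = b
\end{aligned}
$$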
One reason we use duality is that it turns the optimization into another optimization problem that is potentially easier to solve.
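Concretely, for a primal problem $\min_x f_0(x)$ subject to $f_i(x) \le 0$ and $h_j(x) = 0$, the Lagrange dual function and dual problem are

$$
g(\lambda, \nu) = \inf_{x} \Big( f_0(x) + \sum_{i} \lambda_i f_i(x) + \sum_{j} \nu_j h_j(x) \Big),
\qquad
\max_{\lambda \ge 0,\, \nu} \; g(\lambda, \nu),
$$

and weak duality guarantees that the dual optimal value lower-bounds the primal one.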
In the objective, we often have two separable terms, for example a smooth loss plus a regularizer. By Fermat's rule, $x^\star$ minimizes the composite objective $f(x) + g(x)$ if and only if $0 \in \partial(f + g)(x^\star)$, which (under a constraint qualification) splits into $0 \in \partial f(x^\star) + \partial g(x^\star)$; finding a zero of such a sum of operators is exactly what operator splitting methods are designed for.
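As a concrete instance, here is a minimal proximal gradient (ISTA) sketch for the lasso problem $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$; proximal gradient is one of the simplest operator splitting schemes, and all data, names, and parameters below are made up for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, num_iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by splitting the objective:
    a gradient step on the smooth part, then the prox of the nonsmooth part."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    for _ in range(num_iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Made-up example data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
print(x_hat)
```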