I would like to "grok" Slater's condition and other constraint qualification conditions in optimization.
Slater's condition is only one of many different constraint qualifications in the optimization literature. Which one is the most fundamental? Which one tells me "what's really going on"? What is the basic idea at the heart of this?
Also, constraint qualifications appear in both convex and non-convex optimization. Is there a unifying viewpoint that shows it is the same simple, basic idea in all cases?
I'd be interested in any insights or viewpoints that lead to a deeper understanding of constraint qualifications in optimization.
Edit: Here is one possible viewpoint. Buried on p. 223 (chapter 23) of Rockafellar's Convex Analysis, we find the following fundamental and vital fact.
Let $f_1, \ldots, f_m$ be proper convex functions on $\mathbb{R}^n$, and let $f = f_1 + \cdots + f_m$. If the convex sets $\operatorname{ri}(\operatorname{dom} f_i)$, $i = 1, \ldots, m$, have a point in common, then $\partial f(x) = \partial f_1(x) + \cdots + \partial f_m(x)$.
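To see why the overlap assumption matters, here is a standard one-dimensional counterexample (my own sketch, not part of Rockafellar's statement):

```latex
% Take n = 1 and
\[
  f_1(x) = \begin{cases} -\sqrt{x} & x \ge 0, \\ +\infty & x < 0, \end{cases}
  \qquad
  f_2(x) = \delta_{(-\infty,0]}(x),
\]
% so dom f_1 = [0,\infty) and dom f_2 = (-\infty,0].
% The relative interiors (0,\infty) and (-\infty,0) do not intersect.
%
% The sum f = f_1 + f_2 is the indicator of \{0\}, hence
\[
  \partial f(0) = \mathbb{R},
\]
% but \partial f_1(0) = \emptyset (the slope of -\sqrt{x} blows up at 0), so
\[
  \partial f_1(0) + \partial f_2(0) = \emptyset \neq \partial f(0).
\]
```

So without overlapping relative interiors, the sum of the subdifferentials can fail badly to capture $\partial f$.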
This subdifferential sum rule can be used to derive optimality conditions for various convex optimization problems, including the KKT conditions for convex problems. For example, the optimization problem
$$\text{minimize } f(x) \quad \text{subject to } x \in C$$
can be handled by rewriting it as the unconstrained minimization of $f + \delta_C$, where $\delta_C$ is the indicator function of the convex set $C$, and then applying the sum rule to $f + \delta_C$.
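Spelling out that example (a standard derivation, sketched here with the indicator function $\delta_C$):

```latex
% Rewrite the constrained problem as an unconstrained one:
%   minimize  f(x) + \delta_C(x),
% where the indicator function is
\[
  \delta_C(x) = \begin{cases} 0 & x \in C, \\ +\infty & x \notin C. \end{cases}
\]
% A point x* is optimal iff  0 \in \partial (f + \delta_C)(x^\star).
% If the relative-interior condition
%   ri(dom f) \cap ri(C) \neq \emptyset
% holds, the sum rule splits the subdifferential:
\[
  0 \in \partial f(x^\star) + \partial \delta_C(x^\star)
    = \partial f(x^\star) + N_C(x^\star),
\]
% using \partial \delta_C(x) = N_C(x), the normal cone to C at x.
```

The resulting condition $0 \in \partial f(x^\star) + N_C(x^\star)$ is the prototype optimality condition from which KKT-type statements can be obtained, with the relative-interior overlap playing the role of the constraint qualification.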
So is this "overlapping relative interiors" condition appearing in the subdifferential sum rule the ultimate, most fundamental constraint qualification?
Can Slater's condition be viewed as a special case of this "overlapping relative interiors" condition?
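Here is a sketch of why the answer should be yes (my own reasoning, for the simplest case where $f$ and each $g_i$ are convex and finite on all of $\mathbb{R}^n$):

```latex
% Problem: minimize f(x) subject to g_i(x) <= 0, i = 1,...,m,
% with f, g_i convex and finite on all of R^n, so dom f = R^n.
% Feasible set: C = \{ x : g_i(x) \le 0,\ i = 1,\dots,m \}.
%
% Slater's condition: there exists \bar{x} with g_i(\bar{x}) < 0 for all i.
% Convex functions finite on R^n are continuous, so a whole ball around
% \bar{x} stays feasible, giving
\[
  \bar{x} \in \operatorname{int} C \subseteq \operatorname{ri} C .
\]
% Since ri(dom f) = R^n, the relative interiors overlap:
\[
  \bar{x} \in \operatorname{ri}(\operatorname{dom} f) \cap \operatorname{ri}(C)
  \neq \emptyset,
\]
% which is exactly the condition needed to apply the subdifferential
% sum rule to f + \delta_C.
```

So in this setting a Slater point certifies the "overlapping relative interiors" hypothesis; handling extended-valued $f$ or $g_i$ takes more care.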
The "overlapping relative interiors" condition apparently has nothing to do with non-convex optimization problems. Is there a unifying viewpoint that applies to both convex and non-convex problems?