15th EUROPT Workshop on Advances in Continuous Optimization
Montréal, Canada, July 12–14, 2017
In Memory of Jonathan M. Borwein: Continuous Optimization
13 July 2017, 15h30 – 16h45
Room: Amphithéâtre Banque Nationale
Chaired by Henry Wolkowicz
3 presentations

15h30 – 15h55
A linear-time algorithm for computing conjugates of piecewise linear-quadratic functions
Computational convex analysis focuses on the efficient computation of fundamental convex transforms, most notably the Legendre-Fenchel transform. Efficient algorithms that compute the entire graph of such transforms have been implemented in the CCA numerical library. A major challenge is to extend those algorithms to piecewise-defined functions whose domains do not follow a grid structure. We will summarize two previous algorithms, based on computational geometry and parametric programming respectively, that run in log-linear time. We will then present a new algorithm that combines a neighborhood graph with graph-matrix calculus to achieve linear worst-case time complexity.
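For intuition about the transform underlying these algorithms, the Legendre-Fenchel conjugate f*(s) = sup_x (sx − f(x)) can be approximated on a sampled grid. The brute-force sketch below is only an O(nm) illustration (function and variable names are my own), not the log-linear or linear-time algorithms discussed in the talk.

```python
import numpy as np

def discrete_conjugate(x, fx, s):
    """Brute-force discrete Legendre-Fenchel transform:
    f*(s) = max over sample points x of (s*x - f(x))."""
    # Outer product s[i]*x[j], minus f(x[j]); maximize over j for each slope s[i]
    return np.max(np.outer(s, x) - fx[None, :], axis=1)

# Sanity check: f(x) = x^2/2 is self-conjugate, so f*(s) should be close to s^2/2
x = np.linspace(-5, 5, 1001)
fx = 0.5 * x**2
s = np.linspace(-2, 2, 5)
print(discrete_conjugate(x, fx, s))  # close to [2, 0.5, 0, 0.5, 2]
```

The fast algorithms in the literature exploit the sortedness of the grid and the convexity of f to avoid the full pairwise maximization.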

15h55 – 16h20
The forward-backward algorithm and the normal problem
The forward-backward splitting technique is a popular method for solving monotone inclusions, with applications in optimization. In this talk we explore the behaviour of the algorithm when the inclusion problem has no solution. We present a new formula that defines the normal solutions in terms of the forward-backward operator. We also provide a formula for the range of the displacement map of the forward-backward operator, and we present examples that illustrate the theory.
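When the inclusion does have a solution, the forward-backward iteration alternates a gradient (forward) step on the smooth part with a proximal (backward) step on the nonsmooth part. A minimal sketch on a small lasso instance, with illustrative data and names (not taken from the talk):

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=2000):
    """Forward-backward splitting: x_{k+1} = prox_{step*g}(x_k - step*grad_f(x_k))."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x))
    return x

# Illustrative instance: lasso, min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)        # gradient of the smooth term
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of grad_f
# Soft-thresholding: the prox of step*lam*||.||_1
prox_g = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam * step, 0.0)
x_star = forward_backward(grad_f, prox_g, np.zeros(5), step)
```

In the inconsistent case studied in the talk, this iteration no longer has a fixed point, and the displacement map of the forward-backward operator describes how the iterates drift.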

16h20 – 16h45
Low-Rank Matrix Completion (LRMC) using Nuclear Norm (NN) with Facial Reduction (FR)
Minimization of the NN is often used as a convex surrogate relaxation for solving LRMC problems. The minimum NN problem can be solved as a trace-minimization semidefinite program (SDP). The SDP and its dual are regular in the sense that they both satisfy strict feasibility. FR has been successful in regularizing many problems where strict feasibility fails, e.g., SDP relaxations of discrete optimization problems such as QAP and GP, as well as sensor network localization. Here we take advantage of the structure at optimality for the NN minimization and show that, even though strict feasibility holds, the FR framework can be successfully applied to obtain a proper face that contains the optimal set. This can dramatically reduce the size of the final NN problem while guaranteeing a low-rank solution. We include numerical tests for both exact and noisy cases.
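For intuition only, NN minimization for completion can be approached by a generic proximal-gradient scheme whose backward step is singular-value soft-thresholding (the prox of the nuclear norm). The sketch below, with made-up parameters, illustrates that generic approach; it is not the FR-based method of the talk.

```python
import numpy as np

def nn_complete(M, mask, tau, step=1.0, n_iter=300):
    """Proximal-gradient sketch for min_X 0.5*||mask*(X - M)||_F^2 + tau*||X||_*.
    Forward step on the data-fit term, then singular-value soft-thresholding."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        G = X - step * mask * (X - M)                    # gradient step on observed entries
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        X = (U * np.maximum(s - tau * step, 0.0)) @ Vt   # shrink singular values
    return X

# Toy rank-1 target with roughly 70% of entries observed (illustrative data)
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(8), rng.standard_normal(6))
mask = (rng.random(M.shape) < 0.7).astype(float)
X = nn_complete(M, mask, tau=0.1)
```

The shrinkage biases the recovered singular values downward, which is one reason structure-exploiting reformulations such as the FR approach above can outperform the plain relaxation.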