15th EUROPT Workshop on Advances in Continuous Optimization
Montréal, Canada, 12 — 14 July 2017
In Memory of Jonathan M. Borwein: Continuous Optimization
Jul 13, 2017 03:30 PM – 04:45 PM
Location: Amphithéâtre Banque Nationale
Chaired by Henry Wolkowicz
3 Presentations
-
03:30 PM - 03:55 PM
A linear-time algorithm for computing conjugates of piecewise linear-quadratic functions
Computational convex analysis focuses on the efficient computation of fundamental convex transforms, most notably the Legendre-Fenchel transform. Efficient algorithms have been implemented in the CCA numerical library, which computes the entire graph of such transforms. A major challenge is to extend those algorithms to piecewise-defined functions whose domains do not follow a grid structure. We will summarize two previous algorithms, based on computational geometry and parametric programming, that run in log-linear time. Then we will present a new algorithm that combines a neighborhood graph with graph-matrix calculus to achieve linear worst-case complexity.
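For reference, the sketch below shows what the Legendre-Fenchel transform computes, not the talk's linear-time algorithm: a brute-force discrete conjugate over a sampled grid, which costs O(nm) rather than linear time. The function and grids used are illustrative assumptions.

```python
import numpy as np

def discrete_conjugate(x, fx, s):
    """Brute-force discrete Legendre-Fenchel transform:
    f*(s_i) = max_j (s_i * x_j - f(x_j)) over the sampled grid."""
    return np.max(np.outer(s, x) - fx[None, :], axis=1)

# Illustrative example: f(x) = 0.5*x^2 is piecewise linear-quadratic
# (a single quadratic piece) and its conjugate is f*(s) = 0.5*s^2.
x = np.linspace(-5.0, 5.0, 2001)
s = np.linspace(-3.0, 3.0, 7)
fstar = discrete_conjugate(x, 0.5 * x**2, s)
print(np.allclose(fstar, 0.5 * s**2, atol=1e-3))  # True up to grid error
```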
-
03:55 PM - 04:20 PM
The forward-backward algorithm and the normal problem
The forward-backward splitting technique is a popular method for solving monotone inclusions and has applications in optimization. In this talk we explore the behaviour of the algorithm when the inclusion problem has no solution. We present a new formula that defines the normal solutions in terms of the forward-backward operator. We also provide a formula for the range of the displacement map of the forward-backward operator. We present examples that illustrate our theory.
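For context, the standard forward-backward iteration alternates a gradient (forward) step on the smooth part with a proximal (backward) step on the nonsmooth part. The sketch below runs it on a solvable lasso instance; the talk's subject, the inconsistent case with no solution and its normal solutions, is not reproduced here, and the problem data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the backward step for g = ||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, gamma, iters=500):
    """Forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                            # forward (gradient) step
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward (proximal) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size within (0, 2/L)
print(np.round(forward_backward(A, b, 0.1, gamma)[:8], 2))
```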
-
04:20 PM - 04:45 PM
Low-Rank Matrix Completion (LRMC) using Nuclear Norm (NN) with Facial Reduction (FR)
Minimization of the NN is often used as a surrogate convex relaxation for solving LRMC problems. The minimum NN problem can be solved as a trace-minimization semidefinite program (SDP). The SDP and its dual are regular in the sense that they both satisfy strict feasibility. FR has been successful in regularizing many problems where strict feasibility fails, e.g., SDP relaxations of discrete optimization problems such as QAP and GP, as well as sensor network localization. Here we take advantage of the structure at optimality for the NN minimization and show that, even though strict feasibility holds, the FR framework can be successfully applied to obtain a proper face that contains the optimal set. This can dramatically reduce the size of the final NN problem while guaranteeing a low-rank solution. We include numerical tests for both exact and noisy cases.
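To make the surrogate concrete, the sketch below solves the plain NN relaxation of a small completion instance with cvxpy's nuclear-norm atom; it does not implement the facial-reduction approach of the talk, and the matrix sizes, rank, and sampling rate are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Plain nuclear-norm matrix completion (no facial reduction); illustrative data.
rng = np.random.default_rng(0)
m, n, r = 20, 20, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = (rng.random((m, n)) < 0.6).astype(float)                # observed entries

X = cp.Variable((m, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),                  # NN surrogate objective
                  [cp.multiply(mask, X) == mask * M])          # match observed entries
prob.solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```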