18th INFORMS Computing Society (ICS) Conference
Toronto, Canada, 14–16 March 2025

Methods for Large-Scale Nonlinear Optimization and Semidefinite Programming
Mar 16, 2025 04:45 PM – 06:15 PM
Location: Burwash
Chaired by Arnesh Sujanani
4 Presentations
-
04:45 PM - 05:07 PM
Binary Classification using FISTA with Data-Dependent Early-Stopping Criteria
We propose using FISTA for binary classification tasks modelled as support-vector machines (SVMs) and similar generalizations. The talk introduces conditions under which FISTA can be stopped early while still obtaining a solution, and discusses the data-dependent complexity of the proposed method.
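As background, the sketch below shows FISTA applied to an l2-regularized squared-hinge SVM, with a generic gradient-mapping stopping test; it is an illustrative assumption, not the data-dependent criterion proposed in the talk, and all names and parameters are chosen for the example only.

```python
import numpy as np

def fista_svm(X, y, lam=0.1, tol=1e-4, max_iter=1000):
    """FISTA (accelerated proximal gradient) for an l2-regularized
    squared-hinge SVM:
        min_w (1/n) * sum_i max(0, 1 - y_i x_i^T w)^2 + (lam/2) * ||w||^2.
    The stopping test is a generic gradient-mapping criterion, not the
    talk's data-dependent one."""
    n, d = X.shape
    # global Lipschitz bound for the gradient of the smooth objective
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n + lam
    w = np.zeros(d)
    z = w.copy()
    t = 1.0
    for _ in range(max_iter):
        active = np.maximum(1.0 - y * (X @ z), 0.0)   # hinge residuals at z
        grad = -2.0 * X.T @ (y * active) / n + lam * z
        w_new = z - grad / L           # no nonsmooth term here, so prox = id
        if np.linalg.norm(w_new - z) * L < tol:       # early-stopping test
            return w_new
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)  # momentum step
        w, t = w_new, t_new
    return w
```

On linearly separable data this recovers a separating direction well before `max_iter` is reached.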
-
05:07 PM - 05:29 PM
HALLaR: A New Solver for Solving Large SDPs
This talk presents a new first-order method for solving large-scale semidefinite programs (SDPs) with bounded domain. It is an inexact augmented Lagrangian (AL) method in which the AL subproblems are solved by a hybrid low-rank method. In contrast to the classical low-rank method, the new method finds a near-optimal solution (with provable complexity bounds) of SDP instances satisfying strong duality. Computational results comparing the new method to state-of-the-art solvers on several large SDP instances show that it finds more accurate solutions in substantially less CPU time. For example, in less than 20 minutes on a personal laptop, our method can solve a maximum stable set SDP with 1 million vertices and 10 million edges to within 1e-5 relative accuracy.
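The classical low-rank idea underlying such methods can be sketched on a toy problem. The code below, a rough illustration and not HALLaR itself, applies an augmented Lagrangian scheme to the Burer-Monteiro factorization X = YY^T of a small SDP with diag(X) = 1; every parameter choice here is an assumption made for the example.

```python
import numpy as np

def al_low_rank_sdp(C, r=3, beta=10.0, outer=30, inner=200):
    """Toy augmented Lagrangian (AL) scheme on the Burer-Monteiro low-rank
    factorization X = Y @ Y.T for  min <C, X>  s.t.  diag(X) = 1,  X PSD.
    Illustrative sketch only: HALLaR's hybrid inner solver, inexactness
    control, and complexity guarantees are not reproduced here."""
    n = C.shape[0]
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(n, r)) / np.sqrt(r)   # random low-rank start
    p = np.zeros(n)                            # multipliers for diag(X) = 1
    step = 1.0 / (2.0 * (np.linalg.norm(C, 2) + beta * n))  # conservative step
    for _ in range(outer):
        # approximately minimize the AL subproblem in Y by gradient descent
        for _ in range(inner):
            h = np.einsum('ij,ij->i', Y, Y) - 1.0          # diag(YY^T) - 1
            Y -= step * 2.0 * ((C + np.diag(p + beta * h)) @ Y)
        # multiplier (dual) update
        h = np.einsum('ij,ij->i', Y, Y) - 1.0
        p += beta * h
    return Y, p
```

Storing the n-by-r factor Y instead of the full n-by-n matrix X is what makes instances with millions of vertices tractable in memory.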
-
05:29 PM - 05:51 PM
The omega-Condition Number for Optimal Preconditioning and Low Rank Generalized Jacobian Updating
We study a non-classical matrix condition number, the omega-condition number, in the context of preconditioning for iterative methods and of low-rank updating of positive definite matrices. This condition measure is the ratio of the arithmetic and geometric means of the eigenvalues. It therefore has nice differentiability properties, and explicit formulae for optimal preconditioning can be found. In particular, we concentrate on linear systems with low-rank updates of positive definite matrices that are close to singular. Such systems arise in nonsmooth Newton methods using generalized Jacobians. We derive an explicit formula for the optimal omega-preconditioned update in this framework. Our empirics illustrate the improvements, for both error analysis and iterative methods, obtained using omega compared with the classical kappa-condition number.
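For reference, both condition measures compared in the abstract are simple functions of the eigenvalues of a positive definite matrix; the sketch below (function names are illustrative) contrasts omega, the arithmetic-to-geometric-mean ratio, with the classical kappa = lambda_max / lambda_min.

```python
import numpy as np

def omega_cond(A):
    """omega-condition number of a symmetric positive definite A:
    ratio of the arithmetic to the geometric mean of the eigenvalues."""
    evals = np.linalg.eigvalsh(A)
    arith = evals.mean()
    geom = np.exp(np.log(evals).mean())   # geometric mean via logs
    return arith / geom

def kappa_cond(A):
    """Classical kappa-condition number: lambda_max / lambda_min."""
    evals = np.linalg.eigvalsh(A)          # ascending order
    return evals[-1] / evals[0]
```

Both measures equal 1 exactly when A is a multiple of the identity; omega's smooth dependence on all eigenvalues is what makes explicit optimal-preconditioner formulae tractable.
-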
05:51 PM - 06:13 PM
Single Element Error Correction in a Euclidean Distance Matrix
We consider the exact error correction of a noisy Euclidean distance matrix, EDM, under two assumptions: the embedding dimension edim(D) = d is given, and exactly one distance in the data is corrupted by nonzero noise, whose magnitude and position are unknown. Our algorithms use three divide-and-conquer strategies in combination with three versions of facial reduction, based on exposing vectors, facial vectors, and Gale transforms. Our empirics confirm the success of these approaches: we can solve huge problems on the order of 100,000 nodes in approximately one minute to machine precision.
We provide a characterization for guaranteeing the perturbed EDM remains an EDM with embedding dimension d. In addition, we characterize when the intuitive approach of the nearest EDM problem, NEDM, solves our problem.
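As background for the EDM and embedding-dimension notions above, the classical test is that a squared-distance matrix D is an EDM if and only if the centered Gram matrix G = -JDJ/2 is positive semidefinite, with edim(D) = rank(G). The sketch below (a background check only, not the talk's correction method; the function name is illustrative) implements that test.

```python
import numpy as np

def edm_check(D, tol=1e-8):
    """Classical EDM test for a symmetric squared-distance matrix D:
    D is an EDM iff G = -J D J / 2 is positive semidefinite, where
    J = I - (1/n) * ones is the centering projection; edim(D) = rank(G)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering projection
    G = -J @ D @ J / 2.0
    evals = np.linalg.eigvalsh(G)            # ascending order
    scale = max(1.0, evals[-1])
    is_edm = evals[0] >= -tol * scale        # PSD up to tolerance
    edim = int(np.sum(evals > tol * scale))  # numerical rank of G
    return is_edm, edim
```

Corrupting a single distance (symmetrically, in both entries) generically either destroys positive semidefiniteness or raises the embedding dimension, which is the situation the talk's characterization addresses.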