18th INFORMS Computing Society (ICS) Conference

Toronto, Canada, March 14–16, 2025

Algorithms for Nonlinear Optimization

March 14, 2025, 16:45 – 18:15

Room: East Common

Chaired by Jonathan Eckstein

3 presentations

  • 16:45 – 17:07

    First-Order Method for Convex Smooth Function Constrained Variational Inequalities

    • Yan Wu, presenter, Clemson University
    • Yuyuan Ouyang, Clemson University
    • Qi Luo, The University of Iowa

    Constrained Variational Inequality (CVI) is a versatile optimization framework for modeling and solving complementarity and equilibrium problems. In this paper, we introduce a novel Accelerated Constrained Operator Extrapolation (ACOE) method to address single-constrained monotone variational inequality (sCMVI) problems. Our analysis demonstrates that ACOE significantly improves the convergence rate to O(1/k) with respect to both operator and constraint first-order evaluations—matching the optimal performance achieved by unconstrained variational inequality algorithms. To further address more complex multi-constrained monotone variational inequality (mCMVI) problems, we propose the Accelerated Constrained Operator Extrapolation-Sliding (ACOE-S) algorithm. This approach maintains a convergence rate of O(1/k) for operator evaluations while leveraging a sliding technique to further reduce the number of first-order evaluations required for managing multiple constraints to O(1/k^2).
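    For orientation, the function-constrained monotone VI problem class can be stated as follows. This is a sketch of the standard formulation, with notation (F, g, X) assumed here rather than taken from the abstract.

    ```latex
    % Function-constrained monotone variational inequality (illustrative
    % statement of the problem class; symbols are assumptions):
    % F is a monotone operator, g is convex and smooth, X is closed convex.
    \[
      \text{find } x^\star \in \mathcal{F} := \{\, x \in X : g(x) \le 0 \,\}
      \quad \text{such that} \quad
      \langle F(x^\star),\, x - x^\star \rangle \ge 0
      \qquad \forall\, x \in \mathcal{F}.
    \]
    ```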

  • 17:07 – 17:29

    Fast optimization over the simplex

    • Saeed Ghadimi, presenter, University of Waterloo
    • Henry Wolkowicz, University of Waterloo
    • Arnesh Sujanani, University of Waterloo

    In this work, we discuss fast and scalable methods for solving simplex-constrained optimization problems. We first present a semismooth Newton method for faster projection onto the simplex, and then develop a restarted FISTA, specialized for faster general optimization over the simplex, that uses the semismooth Newton method to solve its projection subproblems. We will also discuss a projection-free method over the simplex based on a Hadamard parametrization.
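    Since the abstract centers on semismooth Newton projection onto the simplex, here is a minimal sketch of that idea: the Euclidean projection reduces to one-dimensional root finding in a dual multiplier, and each Newton step uses an element of the Clarke generalized derivative of the piecewise-linear dual residual. This is an illustrative sketch, not the speakers' implementation; the function name and defaults are assumptions.

    ```python
    import numpy as np

    def project_simplex_newton(y, tol=1e-12, max_iter=100):
        # Solve phi(lam) = sum(max(y - lam, 0)) - 1 = 0 by semismooth Newton,
        # then recover the projection x = max(y - lam, 0).
        lam = (np.sum(y) - 1.0) / y.size   # start left of the root: phi >= 0 here
        for _ in range(max_iter):
            active = y - lam > 0.0                     # support of max(y - lam, 0)
            phi = np.sum(y[active]) - active.sum() * lam - 1.0
            if abs(phi) <= tol:
                break
            dphi = -float(active.sum())    # Clarke derivative: minus the active count
            lam -= phi / dphi              # semismooth Newton step (finite convergence:
        return np.maximum(y - lam, 0.0)    # phi is piecewise linear, convex, decreasing)

    # Example: project a point onto the unit simplex {x >= 0, sum(x) = 1}.
    x = project_simplex_newton(np.array([0.3, 1.2, -0.5]))
    print(x, x.sum())   # nonnegative entries summing to 1
    ```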

  • 17:29 – 17:51

    Adaptive Relaxation in Approximate Augmented Lagrangian Methods, with an ADMM-like Application

    • Jonathan Eckstein, presenter, Rutgers University
    • Chang Yu, Rutgers University

    This presentation describes a new relative-error approximate augmented Lagrangian method in which the length of the multiplier adjustment step is dynamically adapted to the accuracy of the subproblem solution. We report computational results using this method as the outer loop of a two-level algorithm for solving convex optimization problems in standard Fenchel-Rockafellar form. The inner loop of the two-level algorithm is ADMM-like, but incorporates a form of Nesterov acceleration due to Chambolle and Dossal, exploiting a connection between ADMM and proximal gradient methods that will also be described.
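    The talk's specific adaptive step-length rule is not given in the abstract. As background only, here is a generic augmented Lagrangian outer loop with an inexact subproblem solve and a damped multiplier step; the residual-based damping factor theta below is an illustrative placeholder, not the authors' adaptive rule, and all names and defaults are assumptions.

    ```python
    import numpy as np

    def aug_lag_sketch(grad_f, A, b, x0, c=10.0, outer=30, inner=200):
        """Generic augmented Lagrangian loop for min f(x) s.t. Ax = b.
        The subproblem is solved inexactly by gradient descent, and the
        multiplier update lam += theta * c * (Ax - b) is damped by a
        factor theta in (0, 1]."""
        x, lam = x0.astype(float).copy(), np.zeros(A.shape[0])
        # Step size assuming grad_f is 1-Lipschitz (true for f(x) = ||x||^2 / 2).
        lr = 1.0 / (1.0 + c * np.linalg.norm(A, 2) ** 2)
        for _ in range(outer):
            for _ in range(inner):   # inexact subproblem solve
                g = grad_f(x) + A.T @ (lam + c * (A @ x - b))
                x -= lr * g
            r = A @ x - b
            # Placeholder adaptive damping: take a shorter multiplier step
            # when the subproblem was solved loosely (gradient still large).
            theta = 1.0 / (1.0 + np.linalg.norm(g))
            lam += theta * c * r
        return x, lam

    # Example: min ||x||^2 / 2 subject to x1 + x2 = 1; solution (0.5, 0.5).
    A, b = np.array([[1.0, 1.0]]), np.array([1.0])
    x, lam = aug_lag_sketch(lambda x: x, A, b, np.zeros(2))
    ```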
