Journées de l'optimisation 2022

HEC Montréal, Québec, Canada, May 16–18, 2022


WC5 - Numerical optimization and linear algebra with Julia IV

May 18, 2022, 15:30 – 17:10

Room: Louis-Laberge (red)

Chaired by Victor Zavala

4 presentations

  • 15h30 - 15h55

    On the computation of the B-differential of the Min C-function for the balanced linear complementarity problem

    • Jean-Pierre Dussault, Université de Sherbrooke
    • Jean Charles Gilbert, Inria Paris
    • Baptiste Plaquevent-Jourdain, presenter, Inria Paris - Université de Sherbrooke - Sorbonne Université

    A complementarity problem is often handled by means of C-functions,
    which allow us to reformulate the problem as a nonsmooth system of
    equations. Among these, the minimum function has interesting
    properties, including its simplicity. The semismooth Newton method
    makes use of the B-differential of this function. In this talk, we
    address the problem of completely describing this B-differential
    for a balanced linear complementarity problem.

    Along the way, we discuss equivalent forms of this problem, such as
    enumerating the sectors formed by an arrangement of hyperplanes
    all containing the origin, listing the biconvex bipartitions of a finite set,
    and ascertaining the solvability of all signed systems of linear inequalities.

    After establishing several properties of the B-differential, we propose a
    numerical technique to describe it. Focusing on the last equivalent form,
    we implement an incremental approach whose overall complexity is
    better than that of the brute-force technique, making it often much
    faster in practice. At each step, several optimization problems with
    a linear cost and a ball constraint must be solved. The efficiency of
    their solution is grounded in a self-concordant function that
    describes the feasible set. Several approaches are implemented
    to solve these optimization problems. Numerical results are presented.
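    As an illustration of the reformulation (not code from the talk, and in Python rather than Julia), the sketch below applies a semismooth Newton method to F(x) = min(x, Mx + q), selecting one element of the B-differential per iterate; the function name and the tiny test problem are invented for this example:

    ```python
    import numpy as np

    def lcp_min_newton(M, q, x0, tol=1e-10, maxit=50):
        """Semismooth Newton on F(x) = min(x, Mx + q), componentwise.

        A zero of F solves the LCP: x >= 0, Mx + q >= 0, x'(Mx + q) = 0.
        At each iterate we pick one element of the B-differential of F:
        row i is e_i where x_i < (Mx + q)_i, and row i of M otherwise.
        """
        x = x0.astype(float).copy()
        n = len(q)
        I = np.eye(n)
        for _ in range(maxit):
            w = M @ x + q
            F = np.minimum(x, w)
            if np.linalg.norm(F) < tol:
                break
            # One B-differential element (ties resolved toward M's row).
            J = np.where((x < w)[:, None], I, M)
            x = x + np.linalg.solve(J, -F)
        return x
    ```

    On a small P-matrix instance such as M = [[2, 1], [1, 2]], q = (-1, -1), this iteration reaches the complementary solution in a few steps; the talk's contribution concerns describing *all* elements of the B-differential, not the single selection made here.
    
    
    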

  • 15h55 - 16h20

    Clustering tensor-variate data using mixture models implemented in the Julia programming language

    • Peter A. Tait, presenter, McMaster University
    • Paul D. McNicholas, McMaster University

    Contemporary scientific data has become increasingly complex, taking a multitude of different forms. One such form is the tensor, or multidimensional array. Using tensors in clustering and classification problems can be challenging due to the high dimensionality of each tensor in a sample and the computational resources required to estimate model parameters. Motivated by our work in pediatric research, a tensor-variate normal mixture model is proposed to cluster five-way data. Five-way data consists of a sample of mode-four tensors, one per subject. The model has interpretable parameters for the mean tensor and the covariance matrices for each mode of the array. The model parameters are estimated using an EM algorithm. Our model is implemented in Julia, a dynamic and high-performance numerical computing language.

  • 16h20 - 16h45

    Progress and applications of a Frank-Wolfe solver

    • Mathieu Besançon, presenter, Zuse Institute Berlin

    Frank-Wolfe algorithms are first-order methods that support structured convex constraints over which linear optimization can be performed efficiently. We present FrankWolfe.jl, a library for flexible and high-performance Frank-Wolfe algorithms. In particular, we will detail design choices that allow warm starts and scaling to large problems, along with some use cases leveraging those features.
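    For readers unfamiliar with the method, a minimal vanilla Frank-Wolfe sketch (in Python, not the FrankWolfe.jl API) over the l1 ball, where the linear minimization oracle simply selects one signed coordinate vertex; the function name and step-size rule shown are standard textbook choices, not the library's:

    ```python
    import numpy as np

    def frank_wolfe_l1(grad, x0, radius=1.0, iters=200):
        """Vanilla Frank-Wolfe over the l1 ball of given radius.

        Each step calls a linear minimization oracle (LMO): minimize
        <g, v> over the feasible set. For the l1 ball, the minimizer
        puts all mass on the coordinate of largest |gradient|.
        """
        x = x0.astype(float).copy()
        for t in range(iters):
            g = grad(x)
            i = np.argmax(np.abs(g))
            v = np.zeros_like(x)
            v[i] = -radius * np.sign(g[i])   # LMO output (a vertex)
            gamma = 2.0 / (t + 2.0)          # open-loop step size
            x = (1 - gamma) * x + gamma * v  # convex combination stays feasible
        return x
    ```

    Because every iterate is a convex combination of vertices, feasibility holds without any projection step; this is the property that lets Frank-Wolfe methods scale to constraint sets where projection is expensive but linear optimization is cheap.
    
    
    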

  • 16h45 - 17h10

    Exponential Decay of Sensitivity in Graph-Structured Nonlinear Programs

    • Victor Zavala, presenter, University of Wisconsin-Madison

    We study solution sensitivity for nonlinear programs (NLPs) whose structures are induced by graphs. These NLPs arise in many applications such as dynamic optimization, stochastic optimization, optimization with partial differential equations, and network optimization. We show that for a given pair of nodes, the sensitivity of the primal-dual solution at one node against a data perturbation at the other node decays exponentially with respect to the distance between these two nodes on the graph. In other words, the solution sensitivity decays exponentially as one moves away from the perturbation point. This result, which we call exponential decay of sensitivity, holds under fairly standard assumptions: the strong second-order sufficiency condition and the linear independence constraint qualification. We discuss how this property provides new and interesting insights on the behavior of complex systems and how it enables the design of new and scalable decomposition algorithms and software.
