10h30 - 10h55
Linearizing the Method of Conjugate Gradients
The method of conjugate gradients (CG) is widely used for the iterative solution of large sparse systems of equations Ax=b, where A is symmetric positive definite. We present an expression for the Jacobian matrix of a CG iterate with respect to b. We discuss data assimilation applications in which these ideas are used for first-order propagation of covariance matrices.
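The abstract does not reproduce the Jacobian expression, but the idea can be illustrated numerically. The sketch below (an assumption on our part, not the speakers' derivation) runs plain CG for a fixed number of iterations and estimates the Jacobian of the iterate with respect to b by central finite differences; once CG has run n iterations on an n-by-n system, the iterate equals A⁻¹b, so the Jacobian must agree with A⁻¹:

```python
import numpy as np

def cg(A, b, num_iters):
    """Plain conjugate gradients for SPD A, run for a fixed number of iterations."""
    x = np.zeros_like(b)
    r = b.copy()          # residual b - A x at x = 0
    p = r.copy()          # initial search direction
    for _ in range(num_iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

def cg_jacobian_fd(A, b, num_iters, eps=1e-6):
    """Finite-difference Jacobian of the CG iterate x_k with respect to b."""
    n = b.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (cg(A, b + e, num_iters) - cg(A, b - e, num_iters)) / (2 * eps)
    return J

# Small SPD test system.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)

# After n = 5 iterations CG solves the system exactly, so the Jacobian is A^{-1}.
J = cg_jacobian_fd(A, b, num_iters=5)
print(np.allclose(J, np.linalg.inv(A), atol=1e-4))  # True
```

For k < n iterations the iterate is a genuinely nonlinear function of b, which is what makes a closed-form Jacobian expression useful for covariance propagation.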
10h55 - 11h20
A Primal-Dual Regularized Interior-Point Method for Semidefinite Programming
Interior-point methods in semidefinite programming (SDP) require the solution of a sequence of linear systems which are used to derive the search directions. Safeguards are typically required in order to handle rank-deficient Jacobians and free variables. We show that it is possible to recover an optimal solution of the original primal-dual pair via inaccurate solves of a sequence of regularized SDPs for both the NT and dual HKM directions. Benefits of our approach include increased robustness and a simpler implementation. Our method does not require the constraints to be linearly independent and does not assume that Slater's condition holds. We report numerical experience on standard problems that illustrate our findings.
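The effect of primal-dual regularization is easy to see on a toy linear KKT system (a simplification we introduce for illustration; the SDP Newton systems in the talk involve blocks built from the NT or dual HKM scalings). With linearly dependent constraint rows the unregularized saddle-point matrix is singular, while adding small regularization parameters rho and delta makes it symmetric quasi-definite and hence nonsingular:

```python
import numpy as np

# Rank-deficient constraint Jacobian: the unregularized KKT matrix is singular.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # second row = 2 * first row
H = np.eye(3)
m, n = A.shape
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
print(np.linalg.matrix_rank(K))   # 4, i.e. singular (size is 5)

# Primal-dual regularization (rho, delta > 0) restores nonsingularity
# without requiring linearly independent constraints.
rho, delta = 1e-2, 1e-2
K_reg = np.block([[H + rho * np.eye(n), A.T],
                  [A, -delta * np.eye(m)]])
print(np.linalg.matrix_rank(K_reg))  # 5, i.e. nonsingular
```

This is the mechanism behind the robustness claim: the regularized systems remain solvable even when safeguards would otherwise be needed for rank deficiency.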
11h20 - 11h45
A Regularized Interior-Point Method for Constrained Linear Least Squares
We propose an infeasible interior-point algorithm for constrained linear least-squares problems based on the primal-dual regularization of convex programs of Friedlander and Orban (2012). At each iteration, the sparse LDL factorization of a symmetric quasi-definite matrix is computed. This coefficient matrix is shown to be uniformly bounded and nonsingular. We establish conditions under which a solution of the original problem is recovered. The regularization allows us to dispense with the assumption that the active gradients are linearly independent. Although the implementation described here is factorization based, it paves the way to a matrix-free implementation in which a regularized unconstrained linear least-squares problem is solved at each iteration. We report on computational experience and illustrate the potential advantages of our approach.
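The key structural fact used here is that a symmetric quasi-definite (SQD) matrix admits an LDLᵀ factorization with a diagonal D for any symmetric permutation, with inertia matching the block sizes. A minimal dense sketch (our own illustration, assuming an SQD matrix of the form arising from primal-dual regularization with parameters rho and delta):

```python
import numpy as np

def ldl_no_pivot(K):
    """Dense LDL^T factorization without pivoting (well defined for SQD K)."""
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - L[j, :j] ** 2 @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - L[i, :j] * L[j, :j] @ d[:j]) / d[j]
    return L, d

# Symmetric quasi-definite matrix K = [[E, A^T], [A, -F]] with E, F SPD;
# rho and delta play the role of the regularization parameters.
rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))
rho, delta = 1e-4, 1e-4
E = (1.0 + rho) * np.eye(n)
F = delta * np.eye(m)
K = np.block([[E, A.T], [A, -F]])

L, d = ldl_no_pivot(K)
# Quasi-definiteness guarantees a diagonal D with exactly n positive and
# m negative entries, so no symmetric indefinite pivoting is needed.
print(np.sum(d > 0), np.sum(d < 0))           # 5 3
print(np.allclose(L @ np.diag(d) @ L.T, K))   # True
```

In the factorization-based implementation described in the talk, the uniform boundedness and nonsingularity of this coefficient matrix are what make the sparse LDL factorization reliable across iterations.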
11h45 - 12h10
Projected Krylov Methods for Saddle-Point Systems
Projected Krylov methods are full-space formulations of Krylov methods that take place in a nullspace. Provided projections onto the nullspace can be computed accurately, these methods only require products between an operator and vectors lying in the nullspace. In the symmetric case, their convergence is thus entirely described by the spectrum of the (preconditioned) operator restricted to the nullspace. We provide systematic principles for obtaining the projected form of any well-defined Krylov method. Equivalence properties between projected Krylov methods and standard Krylov methods applied to a saddle-point operator with a constraint preconditioner allow us to show that, contrary to common belief, certain known methods such as MINRES and SYMMLQ are well defined in the presence of an indefinite preconditioner.
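As a concrete instance, here is a sketch of projected CG for an equality-constrained quadratic program (our own minimal example in the spirit of the talk, using a dense orthogonal projector; in practice the projection would be applied via a factorization or a constraint preconditioner). All iterates stay in null(A), and the method only ever multiplies H with vectors in that nullspace:

```python
import numpy as np

def projected_cg(H, g, A, num_iters):
    """Projected CG for: minimize 0.5 x'Hx + g'x  subject to  Ax = 0.
    Every iterate lies in null(A); convergence is governed by the
    spectrum of H restricted to null(A)."""
    AAt = A @ A.T
    P = lambda v: v - A.T @ np.linalg.solve(AAt, A @ v)  # projector onto null(A)
    x = np.zeros_like(g)
    r = g.copy()          # gradient H x + g at x = 0
    z = P(r)              # projected residual
    p = -z
    for _ in range(num_iters):
        Hp = H @ p
        alpha = (r @ z) / (p @ Hp)
        x = x + alpha * p
        r_new = r + alpha * Hp
        z_new = P(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = -z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(2)
n, m = 6, 2
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)          # SPD Hessian
A = rng.standard_normal((m, n))      # full-rank constraint Jacobian
g = rng.standard_normal(n)

x = projected_cg(H, g, A, num_iters=n - m)   # null(A) has dimension n - m

# Compare with a direct solve of the full saddle-point (KKT) system.
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
xy = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
print(np.allclose(x, xy[:n], atol=1e-8))     # True
print(np.allclose(A @ x, 0, atol=1e-10))     # True
```

The comparison with the saddle-point solve mirrors the equivalence properties mentioned in the abstract: the projected method and the full-space method with a constraint preconditioner produce the same iterates.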