15th EUROPT Workshop on Advances in Continuous Optimization

Montréal, Canada, July 12–14, 2017

Nonlinear Optimization

July 12, 2017, 15:30 – 17:10

Room: PWC

Chaired by Florian Potra

4 presentations

  • 15:30 – 15:55

    Numerical Observation of Maratos Effect with Multiple Precision Arithmetic

    • Hiroshige Dan, presenter, Kansai University

    When we use the sequential quadratic programming (SQP) method to solve nonlinear programming problems (NLPs), the unit stepsize is required for superlinear convergence. However, we may suffer from the Maratos effect, a phenomenon in which the unit stepsize is rejected even when the iterate is sufficiently close to the optimal solution and the assumptions for fast convergence are satisfied. In this research, we observe this phenomenon numerically and seek clues to overcoming it efficiently. In particular, we use multiple precision arithmetic to observe it closely.
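
    A minimal sketch, not taken from the talk: the classical textbook example (as presented in Nocedal and Wright) reproduces the effect, and evaluating its closed-form SQP step with mpmath's multiple precision arithmetic shows that the unit step lands quadratically closer to the solution while both the objective and the constraint violation increase, so a merit-function line search rejects it.

    ```python
    # Classical Maratos example: minimize f(x) = 2*(x1^2 + x2^2 - 1) - x1
    # subject to c(x) = x1^2 + x2^2 - 1 = 0, with solution x* = (1, 0).
    # From the feasible point x_k = (cos t, sin t) the full SQP step is
    # p_k = (sin(t)^2, -sin(t)*cos(t)); all of this is textbook material,
    # not the talk's experiment.
    from mpmath import mp, mpf, cos, sin, sqrt, nstr

    mp.dps = 50  # work with 50 decimal digits

    def f(x1, x2):
        return 2 * (x1**2 + x2**2 - 1) - x1

    def c(x1, x2):
        return x1**2 + x2**2 - 1

    for t in [mpf('1e-2'), mpf('1e-4'), mpf('1e-8')]:
        xk = (cos(t), sin(t))                              # current iterate
        xp = (cos(t) + sin(t)**2, sin(t) * (1 - cos(t)))   # xk + unit SQP step
        dk = sqrt((xk[0] - 1)**2 + xk[1]**2)               # distance to x*
        dp = sqrt((xp[0] - 1)**2 + xp[1]**2)
        # the unit step contracts the error quadratically ...
        print(f"t={nstr(t, 3)}  ||x-x*||: {nstr(dk, 5)} -> {nstr(dp, 5)}")
        # ... yet both the objective and the constraint violation increase
        # (each by sin(t)^2), so a merit-function line search rejects it
        print(f"  f increase:   {nstr(f(*xp) - f(*xk), 5)}")
        print(f"  |c| increase: {nstr(abs(c(*xp)) - abs(c(*xk)), 5)}")
    ```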

  • 15:55 – 16:20

    Adaptive matrix algebras in unconstrained optimization

    • Stefano Cipolla, presenter, University of Rome Tor Vergata

    In this talk we introduce recent techniques that use structured matrix spaces to reduce the time and space complexity of BFGS-type minimization algorithms [1,2]. We present general results on the global convergence of unconstrained optimization algorithms based on a BFGS-type Hessian approximation scheme, and we show how convergent algorithms suitable for large-scale problems can be constructed using projections onto low-complexity matrix algebras.

    [1] C. Di Fiore, S. Fanelli, F. Lepore, P. Zellini, Matrix algebras in Quasi-Newton methods for unconstrained minimization, Numerische Mathematik, 94 (2003).

    [2] S. Cipolla, C. Di Fiore, F. Tudisco, P. Zellini, Adaptive matrix algebras in unconstrained minimization, Linear Algebra and its Applications, 471 (2015).
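
    A minimal sketch of the underlying idea, assuming the algebra of circulant matrices diagonalized by the unitary FFT (the papers above work with other fast-transform algebras, e.g. the Hartley algebra, and this is not their code): the Hessian approximation B = U diag(z) U^H is stored through its n eigenvalues z alone, so one BFGS-type update and one solve cost O(n log n) time and O(n) memory instead of O(n^2).

    ```python
    # Sketch: BFGS-type update projected onto a fast-transform matrix algebra.
    # Here the algebra is the circulant one, diagonalized by the unitary DFT;
    # B = U diag(z) U^H is represented by the real vector z only. This is an
    # illustration of the idea in [1,2], not the authors' code.
    import numpy as np

    def project_bfgs_update(z, s, y):
        """Eigenvalues of the projection of the BFGS update of B onto the algebra.

        The projection of a rank-one term v v^T has eigenvalues |U^H v|_i^2,
        which turns the BFGS formula into the O(n log n) update below.
        """
        s_hat = np.fft.fft(s, norm="ortho")    # U^H s
        y_hat = np.fft.fft(y, norm="ortho")    # U^H y
        sBs = np.sum(z * np.abs(s_hat) ** 2)   # s^T B s
        ys = np.dot(y, s)                      # curvature y^T s
        z_new = z - z**2 * np.abs(s_hat)**2 / sBs + np.abs(y_hat)**2 / ys
        return np.maximum(z_new, 1e-8)         # crude positivity safeguard (assumption)

    def solve_direction(z, g):
        """p = -B^{-1} g in O(n log n), using B^{-1} = U diag(1/z) U^H."""
        g_hat = np.fft.fft(g, norm="ortho")
        return -np.real(np.fft.ifft(g_hat / z, norm="ortho"))

    # usage inside a quasi-Newton loop (illustrative):
    #   z = np.ones(n)                    # B_0 = identity
    #   p = solve_direction(z, g)         # search direction
    #   z = project_bfgs_update(z, s, y)  # s = x_new - x, y = g_new - g
    ```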

  • 16:20 – 16:45

    Efficient modifications on the parameters of Yabe and Takano’s conjugate gradient algorithm

    • Mohammad Reza Peyghami, presenter, K.N. Toosi University of Technology
    • Akram Fazli, K.N. Toosi University of Technology
    • Hani Ahmadzadeh, Sharif University of Technology

    Our main concern in this work is to modify the Yabe and Takano conjugate gradient (CG) algorithm [Comput. Optim. Appl. 28 (2004), pp. 203–225] so as to obtain appealing theoretical and practical results. We propose an efficient adaptive updating formula for the parameters of the Yabe and Takano CG algorithm, leading to promising theoretical and numerical results. Global convergence of the new CG algorithm, a member of the Dai–Liao family, is established under standard assumptions for uniformly convex and general functions. Numerical experiments on test problems from the CUTEr collection show the efficiency and effectiveness of the proposed method in practice.
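
    As context for the family involved, here is a minimal sketch of a generic Dai–Liao CG iteration, to which the Yabe and Takano algorithm and the proposed variant belong. The fixed parameter t stands in for the adaptive updating formula proposed in the talk, which is not reproduced here; the Armijo backtracking and the quadratic test problem are illustrative assumptions, and the convergence theory in this literature assumes Wolfe-type line searches.

    ```python
    # Generic Dai-Liao-family CG with a FIXED parameter t (illustrative sketch,
    # not the talk's adaptive method).
    import numpy as np

    def dai_liao_cg(f, grad, x, t=0.1, tol=1e-6, max_iter=500):
        g = grad(x)
        d = -g                                    # first direction: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # backtracking Armijo line search (simplification; the theory
            # assumes Wolfe-type conditions)
            alpha, fx, gTd = 1.0, f(x), g.dot(d)
            while alpha > 1e-16 and f(x + alpha * d) > fx + 1e-4 * alpha * gTd:
                alpha *= 0.5
            x_new = x + alpha * d
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            dTy = d.dot(y)
            # Dai-Liao formula: beta = (g+^T y - t * g+^T s) / (d^T y)
            beta = (g_new.dot(y) - t * g_new.dot(s)) / dTy if abs(dTy) > 1e-16 else 0.0
            d = -g_new + beta * d
            if g_new.dot(d) >= 0:                 # safeguard: restart on non-descent
                d = -g_new
            x, g = x_new, g_new
        return x

    # illustrative run on a small convex quadratic
    A = np.diag([1.0, 10.0, 100.0])
    x_star = dai_liao_cg(lambda v: 0.5 * v @ A @ v, lambda v: A @ v,
                         np.array([1.0, 1.0, 1.0]))
    print(x_star)  # near the minimizer at the origin
    ```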

  • 16:45 – 17:10

    A superquadratic variant of Newton's method

    • Florian Potra, presenter, UMBC

    We present the first Q-superquadratically convergent version of Newton's method for solving operator equations in Banach spaces that requires only one operator value and one inverse of the Fréchet derivative per iteration. The R-order of convergence is at least 2.4142. A semi-local analysis provides sufficient conditions for the existence of a solution and for convergence. The local analysis assumes that a solution exists and shows that the method converges from any starting point in an explicitly defined neighbourhood of the solution, called the ball of attraction.
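
    The method itself is not reproduced here. As a complement, the sketch below shows how a convergence order can be estimated empirically from an iterate sequence in multiple precision, using plain Newton (Q-order 2) on a scalar equation as a stand-in; the talk's variant, at the same per-iteration cost of one operator value and one derivative inverse, would yield estimates approaching at least 2.4142.

    ```python
    # Empirical order estimation on a scalar stand-in (NOT the talk's method):
    # for e_k = |x_k - x*|, the ratio log(e_{k+1}) / log(e_k) tends to the
    # convergence order p once the errors are small.
    from mpmath import mp, mpf, log

    mp.dps = 300  # enough digits to watch the error square repeatedly

    f = lambda x: x**3 - 2          # root: 2**(1/3)
    df = lambda x: 3 * x**2
    root = mpf(2) ** (mpf(1) / 3)

    x, errs = mpf('1.5'), []
    for _ in range(8):
        x = x - f(x) / df(x)        # one residual and one derivative per step
        errs.append(abs(x - root))

    for e_prev, e_next in zip(errs, errs[1:]):
        if e_prev > 0 and e_next > 0:
            print(float(log(e_next) / log(e_prev)))   # tends to 2 for Newton
    ```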
