Approaches to accelerate methods for solving systems of equations arising in nonlinear optimization

Time: Fri 2020-12-18, 10:00

Location: via Zoom, https://kth-se.zoom.us/j/63180134076, Stockholm (language: English)

Subject area: Applied and Computational Mathematics, Optimization and Systems Theory

Doctoral student: David Ek, Optimization and Systems Theory

Opponent: Professor Jacek Gondzio, School of Mathematics, University of Edinburgh, Scotland, UK

Supervisor: Professor Anders Forsgren, Optimization and Systems Theory

Abstract

Methods for solving nonlinear optimization problems typically involve solving systems of equations. This thesis concerns approaches for accelerating some of those methods. In our setting, acceleration means finding a trade-off between the computational cost of an iteration and the quality of the computed search direction. We design approaches and derive theoretical results for them in idealized settings. We also investigate, with numerical simulations, the practical performance of the approaches within and beyond the boundaries of the theoretical frameworks.

Paper A concerns methods for solving strictly convex unconstrained quadratic optimization problems. This is equivalent to solving systems of linear equations whose matrices are symmetric positive definite. The main focus is on exact linesearch limited-memory quasi-Newton methods that generate search directions parallel to those of the method of preconditioned conjugate gradients. We give a class of limited-memory quasi-Newton methods and provide a dynamic framework for their construction. The methods are meant to be particularly useful for solving sequences of related systems of linear equations. Such sequences typically arise as methods for solving systems of nonlinear equations converge.
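To make the equivalence concrete, the sketch below shows a minimal, unpreconditioned conjugate-gradient iteration for a symmetric positive definite system; the function name and tolerance are illustrative choices, not taken from the thesis, and the quasi-Newton methods of Paper A generate directions parallel to those of a preconditioned variant of this iteration.

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A; equivalently,
    minimize the strictly convex quadratic q(x) = 0.5 x'Ax - b'x."""
    n = b.size
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b - A @ x              # residual = negative gradient of q at x
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact linesearch step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs     # makes the next direction A-conjugate
        p = r + beta * p
        rs = rs_new
    return x
```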

Paper B deals with solving systems of nonlinear equations that arise in interior-point methods for bound-constrained nonlinear programming. Application of Newton's method gives sequences of systems of linear equations. We propose partial and full approximate solutions to the Newton systems. The partial approximate solutions are computationally inexpensive, whereas each full approximate solution typically requires a reduced Newton system to be solved. In addition, we suggest two Newton-like approaches, which are based upon the proposed partial approximate solutions, for solving the systems of nonlinear equations.
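As a baseline for what the approximate solutions replace, the following is a generic sketch of Newton's method for a system of nonlinear equations F(x) = 0; it is not the thesis's interior-point formulation, and the names F, J, and the stopping rule are assumptions for illustration. Each iteration solves one Newton system with the current Jacobian, which is exactly the step that the partial and full approximate solutions aim to make cheaper.

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for F(x) = 0. Each step solves one
    system of linear equations whose matrix is the Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(J(x), -Fx)  # the Newton system
        x = x + dx
    return x

# Illustrative use: F(x, y) = (x^2 + y^2 - 1, x - y) has a root at
# (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton_solve(F, J, [1.0, 0.5]))
```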

Paper C is focused on interior-point methods for quadratic programming. We propose a structured modified Newton approach to solve the systems of nonlinear equations that arise. The modified Jacobians are composed of a previous Jacobian plus one low-rank update matrix per succeeding iteration. For a given rank restriction, we construct a low-rank update matrix such that the modified Jacobian is closest to the current Jacobian in both the 2-norm and the Frobenius norm. The approach is structured in the sense that it preserves the nonzero pattern of the Jacobian.
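The norm-minimization step has a classical unstructured analogue: by the Eckart-Young theorem, truncating the singular value decomposition of the difference between the current and the previous Jacobian gives the rank-k update that is closest in both the 2-norm and the Frobenius norm. The sketch below shows only this unstructured version; it does not preserve the nonzero pattern, which is the structured ingredient specific to the thesis, and all names are illustrative.

```python
import numpy as np

def best_rank_k_update(J_prev, J_curr, k):
    """Unstructured analogue of the low-rank update: by Eckart-Young,
    truncating the SVD of J_curr - J_prev gives the rank-k matrix
    closest to the difference in both the 2-norm and the Frobenius
    norm, so J_prev + U_k is the closest rank-k modification of
    J_prev to J_curr."""
    U, s, Vt = np.linalg.svd(J_curr - J_prev)
    U_k = (U[:, :k] * s[:k]) @ Vt[:k, :]  # rank-k truncation
    return J_prev + U_k
```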

The approaches suggested in Paper B and Paper C are motivated by asymptotic results in ideal theoretical frameworks. In particular, it is shown that the approaches become increasingly accurate as primal-dual interior-point methods converge. Numerical results demonstrate the practical performance and indicate the limitations of the suggested approaches.

We envisage that the approaches of Paper A, Paper B, and Paper C can be useful in parallel with, or in combination with, existing solvers in order to accelerate methods.

Paper D is more pedagogical in nature. We give a derivation of the method of conjugate gradients in an optimization framework. The result itself is well known, but the derivation has, to the best of our knowledge, not been presented before.

urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286057