
Seminar: Computational Science in the late 1600s up to the mid-1700s: From digit-by-digit computation to second-order rate of convergence

----------------------------------------------------------------------------------------------------

             Department of Systems Engineering and Engineering Management
                     The Chinese University of Hong Kong

----------------------------------------------------------------------------------------------------

Date: Friday, January 29, 2021, 16:30 to 17:30

Title: Computational Science in the late 1600s up to the mid-1700s: From digit-by-digit computation to second-order rate of convergence

Speaker: Professor Trond Steihaug, University of Bergen

Abstract:

Computational science existed long before the computer age. Researchers in the history of mathematics generally agree that Viète's method from 1600 introduced the systematic numerical solution of (polynomial) equations, illustrated with numerous examples, i.e. what we today would call computational science. The method computes the roots digit by digit, and it was refined by the English mathematicians Thomas Harriot and William Oughtred before it was picked up by Isaac Newton around 1664. Manuscripts by Newton show that the Newton-Raphson method was derived around 1669; it appeared in print in 1685 in the algebra book published by John Wallis. Joseph Raphson published his method in 1690.
This talk is part of a larger project and gives an overview of methods for solving nonlinear equations from the 1600s up to the mid-1700s: the two different implementations of the Newton-Raphson method (Newton's method as described by Wallis in 1685 and Raphson's method from 1690), Halley's method from 1694 for solving nonlinear equations, and the digit-by-digit methods. The methods are revisited and their differences and similarities are highlighted.
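
To make the orders of convergence concrete, the sketch below (modern Python, added here purely for illustration, not the historical notation) contrasts a digit-by-digit scheme in the spirit of Viète, Harriot and Oughtred with the Newton-Raphson and Halley iterations on the cubic x^3 - 2x - 5 = 0, the example Newton himself used; the digit and iteration counts are arbitrary choices.

# Illustrative sketch, not the historical procedures. The equation
# f(x) = x^3 - 2x - 5 = 0 is Newton's own example; the digit and
# iteration counts below are arbitrary choices for this illustration.

def f(x):
    return x**3 - 2*x - 5

def fp(x):   # first derivative
    return 3*x**2 - 2

def fpp(x):  # second derivative
    return 6*x

def digit_by_digit(num_digits=8):
    """Fix one decimal digit per pass, in the spirit of the Viete/Harriot/
    Oughtred schemes: roughly one correct digit per pass."""
    x, step = 0.0, 1.0
    for _ in range(num_digits):
        for d in range(9, -1, -1):   # largest digit that does not overshoot
            if f(x + d * step) <= 0:
                x += d * step
                break
        step /= 10
    return x

def newton(x, iters=6):
    """Newton-Raphson: second-order convergence, the number of correct
    digits roughly doubles each step."""
    for _ in range(iters):
        x -= f(x) / fp(x)
    return x

def halley(x, iters=4):
    """Halley (1694): also uses f'', third-order convergence."""
    for _ in range(iters):
        fx, fpx = f(x), fp(x)
        x -= fx / (fpx - fx * fpp(x) / (2 * fpx))
    return x

print(digit_by_digit())  # 2.0945514..., one digit per pass
print(newton(2.0))       # 2.0945514815423265 after a few steps
print(halley(2.0))       # the same root in fewer steps

The digit-by-digit scheme gains roughly one correct digit per pass (linear convergence), Newton roughly doubles the number of correct digits per step (second order), and Halley roughly triples it (third order).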

Biography:

Trond Steihaug received a master's degree in mathematics from the University of Oslo in 1976 and a PhD in operations research from Yale University in 1981. His work experience includes positions as assistant professor at Rice University in Houston, Texas, and as senior engineer and head of the reservoir simulation department at Statoil; since 1990 he has been a full professor in optimization at the University of Bergen, Norway.
His research is mainly concentrated on large-scale nonlinear optimization and covers theoretical convergence analysis as well as the development of new algorithms. He is best known for his work on the inexact Newton method and on the use of the conjugate gradient method within trust-region methods.
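
The best-known example of the latter is the truncated conjugate gradient method for the trust-region subproblem, commonly called the Steihaug or Steihaug-Toint method. The following is a minimal sketch of the idea, not the published algorithm; the tolerance and iteration limit are illustrative choices.

import numpy as np

def steihaug_cg(B, g, delta, tol=1e-10, max_iter=None):
    """Truncated CG for the trust-region subproblem
        min_p  g @ p + 0.5 * p @ B @ p   subject to  ||p|| <= delta.
    Plain CG from p = 0, stopped early at the boundary or on negative
    curvature, which is what makes it 'truncated'."""
    n = g.size
    max_iter = max_iter or 2 * n
    p = np.zeros(n)
    r = g.copy()   # residual of B p = -g at p = 0
    d = -r         # first search direction: steepest descent
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:
            # negative curvature: follow d all the way to the boundary
            return p + _boundary_step(p, d, delta) * d
        alpha = (r @ r) / dBd
        if np.linalg.norm(p + alpha * d) >= delta:
            # the CG step would leave the region: stop on the boundary
            return p + _boundary_step(p, d, delta) * d
        p = p + alpha * d
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) < tol:
            return p
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return p

def _boundary_step(p, d, delta):
    """Positive tau with ||p + tau * d|| = delta."""
    pd, dd, pp = p @ d, d @ d, p @ p
    return (-pd + np.sqrt(pd**2 + dd * (delta**2 - pp))) / dd
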
A primal-dual interior point method for linear optimization generates large sparse linear systems of equations. As the method converges, the condition number of these systems grows, and replacing direct methods with iterative methods is challenging; a new class of preconditioners has been derived for such systems. Further, the theoretical complexity of small-update primal-dual interior point methods is better than that of large-update methods. However, the best practical methods are all large-update methods, and major progress has been made in reducing this gap between the best theoretical bound and the best method in practice.
Halley's method is as old as Newton's method. For nonlinear systems of equations, Halley's method requires the second derivative of the function; for unconstrained optimization, the third derivative. Halley's method in general has a third-order rate of convergence. In a sequence of papers it has been shown that one step of Halley's method can be made as efficient as one step of Newton's method.
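
For reference, in the scalar case Halley's iteration can be written in the standard form

    x_{k+1} = x_k - \frac{2 f(x_k) f'(x_k)}{2 f'(x_k)^2 - f(x_k) f''(x_k)}

compared with Newton's x_{k+1} = x_k - f(x_k)/f'(x_k); the extra factor involving f'' is what raises the order from two to three, and for a system of n equations it becomes an n x n x n tensor of second derivatives.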
Estimating sparse Jacobian and Hessian matrices and higher derivatives using automatic differentiation gives rise to combinatorial optimization problems. Some of these can be shown to be equivalent to different graph coloring problems, and computing the derivatives efficiently is important for achieving good overall efficiency.
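
As an illustration of this connection, the sketch below implements the classical Curtis-Powell-Reid idea: columns of a sparse Jacobian that share no nonzero row are structurally orthogonal and can be estimated together, so a greedy coloring of the column intersection graph determines how many evaluations are needed. It uses finite differences rather than automatic differentiation, and all names and step sizes are illustrative; the coloring structure is the same in both settings.

import numpy as np

def partition_columns(sparsity):
    """Greedy coloring of the column intersection graph: two columns
    conflict if they share a nonzero row (Curtis-Powell-Reid).
    sparsity: boolean (m, n) array with the Jacobian's nonzero pattern."""
    m, n = sparsity.shape
    colors = [-1] * n
    for j in range(n):
        forbidden = {colors[k] for k in range(j)
                     if np.any(sparsity[:, j] & sparsity[:, k])}
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

def estimate_jacobian(F, x, sparsity, h=1e-6):
    """One forward-difference evaluation of F per color instead of per column."""
    m, n = sparsity.shape
    colors = partition_columns(sparsity)
    J = np.zeros((m, n))
    F0 = F(x)
    for c in range(max(colors) + 1):
        group = [j for j in range(n) if colors[j] == c]
        d = np.zeros(n)
        d[group] = h            # perturb all columns of this color at once
        dF = (F(x + d) - F0) / h
        for j in group:
            J[sparsity[:, j], j] = dF[sparsity[:, j]]
    return J

For a tridiagonal Jacobian, for instance, three colors suffice, so only three extra evaluations of F are needed regardless of the dimension n.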
Recent work includes overview papers on computational science in the 16th and 17th centuries.
