Seminar: Computational Science in the late 1600s up to the mid-1700s: From digit-by-digit computation to second order rate of convergence
----------------------------------------------------------------------------------------------------
Department of Systems Engineering and Engineering Management
The Chinese University of Hong Kong
----------------------------------------------------------------------------------------------------
Date: Friday, January 29, 2021, 16:30 to 17:30
Title: Computational Science in the late 1600s up to the mid-1700s: From digit-by-digit computation to second order rate of convergence
Speaker: Professor Trond Steihaug, University of Bergen
Abstract:
Computational science existed long before the computer age. Historians of mathematics generally agree that Viète's method from 1600 introduced the systematic numerical solution of (polynomial) equations, illustrated through numerous examples, i.e. what we today would call computational science. The method computes the roots digit by digit; it was refined by the English mathematicians Thomas Harriot and William Oughtred before being picked up by Isaac Newton around 1664. Newton's manuscripts show that the Newton-Raphson method was derived around 1669, and it appeared in print in 1685 in the algebra book published by John Wallis. Joseph Raphson published his method in 1690.
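To make the digit-by-digit idea concrete, here is a minimal sketch in Python. It is not Viète's actual procedure, which manipulated the polynomial's coefficients directly, but it captures the principle: each pass fixes one more decimal digit of the root, so every pass buys a fixed amount of accuracy.

```python
def digit_by_digit(f, lo, digits):
    """Root of an increasing function f, one decimal digit per pass.
    `lo` is the integer part of the root (a lower bound on it)."""
    x = float(lo)
    for k in range(1, digits + 1):
        step = 10.0 ** (-k)
        # Largest digit d in 0..9 that does not overshoot the root.
        for d in range(9, -1, -1):
            if f(x + d * step) <= 0:
                x += d * step
                break
    return x

# Example: the positive root of x^2 - 2, i.e. sqrt(2) = 1.41421356...
print(digit_by_digit(lambda x: x * x - 2, 1, 10))   # -> 1.4142135623 (plus float round-off)
```

Ten passes give ten digits; a second-order method instead roughly doubles the number of correct digits per step, which is the shift in the seminar's title.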
This talk is part of a larger project and gives an overview of methods for solving nonlinear equations from the 1600s up to the mid-1700s: the two different implementations of the Newton-Raphson method (Newton's method as described by Wallis in 1685, and Raphson's method from 1690), Halley's method from 1694, and the digit-by-digit methods. The methods are revisited, and their differences and similarities are highlighted.
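As a hedged illustration of the distinction between the two implementations (a sketch in modern notation and floating point; the originals used exact decimal arithmetic and did not invoke derivatives explicitly), both can be run on the cubic from Newton's classic worked example, x^3 - 2x - 5 = 0. Raphson repeatedly corrects the estimate from the original equation; Newton, as printed by Wallis, re-expands the polynomial around the running estimate and solves only the linear part for the next correction.

```python
def taylor_shift(c, a):
    """Coefficients of p(e) = P(a + e), lowest degree first, given the
    coefficients of P highest degree first (repeated synthetic division)."""
    c, out = list(c), []
    while c:
        q = [c[0]]
        for ci in c[1:]:
            q.append(ci * a + q[-1])
        out.append(q.pop())          # remainder = next Taylor coefficient
        c = q
    return out                       # [P(a), P'(a), P''(a)/2!, ...]

def newton_wallis(coeffs, x0, steps):
    """Newton's formulation (Wallis, 1685): carry a polynomial in the
    correction, solve its linear part, then re-expand around the correction."""
    p, x = taylor_shift(coeffs, x0), x0
    for _ in range(steps):
        e = -p[0] / p[1]             # linear part of p(e) = 0
        p = taylor_shift(list(reversed(p)), e)
        x += e
    return x

def raphson(f, df, x, steps):
    """Raphson's formulation (1690): the familiar x <- x - f(x)/f'(x)."""
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

coeffs = [1, 0, -2, -5]              # x^3 - 2x - 5, Newton's own example
print(newton_wallis(coeffs, 2.0, 4))                 # 2.0945514815...
print(raphson(lambda x: x**3 - 2*x - 5,
              lambda x: 3*x**2 - 2, 2.0, 4))         # same digits
```

In exact arithmetic the two formulations produce identical iterates; the difference is in what is carried from step to step, not in the mathematics.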
Biography:
Trond Steihaug holds a master's degree in mathematics from the University of Oslo (1976) and a PhD in operations research from Yale University (1981). His work experience includes positions as assistant professor at Rice University in Houston, Texas, and as Senior Engineer and Head of the reservoir simulation department at Statoil; since 1990 he has been a full professor in optimization at the University of Bergen, Norway.
His research is mainly concentrated on large-scale nonlinear optimization and covers both theoretical convergence analysis and the development of new algorithms. He is best known for his work on the inexact Newton method and on the use of conjugate gradient and trust-region methods.
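A hedged sketch of the inexact Newton idea (an illustrative toy problem, not code from the speaker's papers): the Newton system is solved by conjugate gradients only to a loose relative tolerance, the forcing term, so the expensive inner solve is truncated while fast local convergence is retained.

```python
import numpy as np

def cg(A, b, rtol):
    """Plain conjugate gradients on an SPD system, stopped early at
    relative residual rtol: the source of inexactness in inexact Newton."""
    x = np.zeros_like(b)
    r = b.copy(); p = r.copy()
    rr = r @ r
    while np.sqrt(rr) > rtol * np.linalg.norm(b):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def inexact_newton(grad, hess, x, iters=20, eta=0.5):
    """Inexact Newton for unconstrained minimization: solve H s = -g only
    to accuracy eta * ||g||, saving inner iterations per outer step."""
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < 1e-12:
            break
        x = x + cg(hess(x), -g, eta)
    return x

# Toy example: minimize f(x) = sum(x_i^4) + 0.5 * ||x||^2, minimizer x = 0.
grad = lambda x: 4 * x**3 + x
hess = lambda x: np.diag(12 * x**2 + 1)
print(inexact_newton(grad, hess, np.full(5, 2.0)))   # approaches the zero vector
```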
A primal-dual interior-point method for linear optimization generates large, sparse linear systems of equations. As the method converges, these systems become increasingly ill-conditioned, and replacing direct solvers with iterative methods is challenging; a new class of preconditioners has been derived for such systems. Further, the theoretical complexity of small-update primal-dual interior-point methods is better than that of large-update methods, yet the best practical methods are all large-update methods; major progress has been made in narrowing this gap between the best theoretical bound and the best method in practice.
Halley's method is as old as Newton's method. For nonlinear systems of equations, Halley's method requires the second derivative of the function (for unconstrained optimization, the third derivative), and it in general has a third-order rate of convergence. In a sequence of papers it has been shown that one step of Halley's method can be made as efficient as one step of Newton's method.
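For a single equation the contrast with Newton's method is easy to state. A minimal sketch (the scalar update, with derivatives supplied by the caller; for systems the second derivative is a tensor and the linear algebra is more involved):

```python
def halley(f, df, d2f, x, steps):
    """Halley's update x <- x - 2 f f' / (2 f'^2 - f f''): third-order
    convergence at the price of a second derivative per step."""
    for _ in range(steps):
        fx, dfx = f(x), df(x)
        x -= 2 * fx * dfx / (2 * dfx**2 - fx * d2f(x))
    return x

# Same test equation as Newton's example: x^3 - 2x - 5 = 0.
print(halley(lambda x: x**3 - 2*x - 5,
             lambda x: 3*x**2 - 2,
             lambda x: 6*x, 2.0, 3))   # 2.0945514815423265
```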
Estimating sparse Jacobian and Hessian matrices and higher derivatives using automatic differentiation gives rise to combinatorial optimization problems. Some of these problems can be shown to be equivalent to different graph coloring problems, and exploiting this structure when computing the derivatives is important for achieving good overall efficiency.
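A toy illustration of the connection, assuming a tridiagonal sparsity pattern (the example is illustrative, not from the speaker's work, and finite differences stand in for the derivative computation): columns of the Jacobian whose row patterns are disjoint can share one evaluation, so the number of extra evaluations equals the number of column groups, i.e. colors, rather than the number of variables.

```python
import numpy as np

def F(x):
    """Toy tridiagonally-coupled system: column j of the Jacobian is
    nonzero only in rows j-1, j, j+1."""
    n = len(x)
    y = np.empty(n)
    for i in range(n):
        y[i] = x[i]**2 + (x[i-1] if i > 0 else 0.0) - (x[i+1] if i < n-1 else 0.0)
    return y

n, h = 8, 1e-7
x0 = np.ones(n)
f0 = F(x0)
J = np.zeros((n, n))
# Columns whose indices differ by at least 3 never share a row, so three
# "colors" {0,3,6}, {1,4,7}, {2,5} suffice: 3 evaluations of F instead of 8.
for color in range(3):
    cols = list(range(color, n, 3))
    d = np.zeros(n)
    d[cols] = h
    diff = (F(x0 + d) - f0) / h      # one forward difference per color
    for j in cols:
        rows = [r for r in (j-1, j, j+1) if 0 <= r < n]
        J[rows, j] = diff[rows]
print(np.round(J, 3))                # recovered tridiagonal Jacobian
```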
Recent work includes overview papers on computational science in the 16th and 17th centuries.