Mathematical Description and Numerical Algorithms for Nonlinear Equations

Written by linearization | Published 2025/03/25
Tech Story Tags: nonlinearsolve.jl | robust-nonlinear-solvers | julia-programming-language | gpu-accelerated-computation | sparse-matrix-computations | jacobian-free-krylov-methods | scientific-machine-learning | benchmarking-nonlinear-solvers

TL;DR: This section introduces the mathematical framework for numerically solving nonlinear problems and demonstrates the built-in support for such problems in NonlinearSolve.jl.

Table of Links

Abstract and 1. Introduction

2. Mathematical Description and 2.1. Numerical Algorithms for Nonlinear Equations

2.2. Globalization Strategies

2.3. Sensitivity Analysis

2.4. Matrix Coloring & Sparse Automatic Differentiation

3. Special Capabilities

3.1. Composable Building Blocks

3.2. Smart PolyAlgorithm Defaults

3.3. Non-Allocating Static Algorithms inside GPU Kernels

3.4. Automatic Sparsity Exploitation

3.5. Generalized Jacobian-Free Nonlinear Solvers using Krylov Methods

4. Results and 4.1. Robustness on 23 Test Problems

4.2. Initializing the Doyle-Fuller-Newman (DFN) Battery Model

4.3. Large Ill-Conditioned Nonlinear Brusselator System

5. Conclusion and References

2. Mathematical Description

This section introduces the mathematical framework for numerically solving nonlinear problems and demonstrates the built-in support for such problems in NonlinearSolve.jl. A nonlinear problem is defined as: find u such that

f(u, θ) = 0,

where u are the unknowns (states) and θ the parameters.
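As a concrete illustration, a problem of this form can be set up and solved with NonlinearSolve.jl's NonlinearProblem/solve interface. The residual function, initial guess, and parameter value below are illustrative choices, not from the paper:

```julia
using NonlinearSolve

# Illustrative residual: the root of f(u, θ) = u .^ 2 .- θ satisfies u = √θ.
f(u, θ) = u .^ 2 .- θ

u0 = [1.0, 1.0]                 # initial guess u_0
θ  = 2.0                        # parameters
prob = NonlinearProblem(f, u0, θ)

sol = solve(prob)               # default polyalgorithm (see Section 3.2)
sol.u                           # ≈ [1.41421, 1.41421]
```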

The Newton-Raphson method iteratively refines an initial guess u_0 using the update

u_{k+1} = u_k − J_k⁻¹ f(u_k, θ),

where J_k is the Jacobian of f(u, θ) with respect to u, evaluated at u_k. This method exhibits rapid (locally quadratic) convergence [23, Theorem 11.2] when the initial guess is sufficiently close to a root. Furthermore, it requires only the function and its Jacobian, making it computationally efficient for many practical applications. Halley's method enhances the Newton-Raphson method, leveraging information from the second total derivative of the function to achieve cubic convergence.[5] It refines the initial guess u_0 using

u_{k+1} = u_k + (a_k ⊙ a_k) ⊘ (a_k + b_k/2),

where a_k = −J_k⁻¹ f(u_k, θ) is the Newton step, b_k = J_k⁻¹ (H_k a_k a_k) contracts the second total derivative H_k twice with a_k, and ⊙ and ⊘ denote element-wise multiplication and division.
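To make the Newton-Raphson update concrete, here is a minimal sketch: a hypothetical helper (not NonlinearSolve.jl's optimized implementation) that uses ForwardDiff.jl for the Jacobian, with illustrative tolerance and iteration-cap choices:

```julia
using LinearAlgebra, ForwardDiff

# Hypothetical minimal Newton-Raphson sketch (not NonlinearSolve.jl's
# implementation): u_{k+1} = u_k - J_k⁻¹ f(u_k, θ).
function newton_raphson(f, u0, θ; maxiters = 100, abstol = 1e-10)
    u = copy(u0)
    for _ in 1:maxiters
        fu = f(u, θ)
        norm(fu) < abstol && return u          # converged to a root
        J = ForwardDiff.jacobian(v -> f(v, θ), u)
        u = u - (J \ fu)                       # solve J_k Δu = f(u_k, θ); never form J_k⁻¹
    end
    return u                                   # may not have converged
end

# Roots of u .^ 2 .- θ: converges to [√2, √2] from a nearby guess.
newton_raphson((u, θ) -> u .^ 2 .- θ, [1.0, 1.0], 2.0)
```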

Multi-step schemes provide higher convergence orders without relying on higher-order derivatives; [25] summarizes such schemes, which achieve higher-order convergence using only first-order derivatives. These methods are all local algorithms, and their convergence relies on having a good initial guess. We will discuss techniques that facilitate the global convergence of these methods in the following section.
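The local character of these methods is easy to observe with a sketch like the one below. The atan test function and starting point are illustrative choices; NewtonRaphson and TrustRegion are solvers exported by NonlinearSolve.jl, the latter a globalized method of the kind covered next:

```julia
using NonlinearSolve

# f(u) = atan(u) has its only root at u = 0, but plain Newton-Raphson
# diverges when started with |u_0| larger than roughly 1.39.
f(u, p) = atan.(u)
prob = NonlinearProblem(f, [2.0])

sol_local  = solve(prob, NewtonRaphson())  # expected to fail: iterates oscillate and grow
sol_global = solve(prob, TrustRegion())    # globalized (Section 2.2); converges to u ≈ 0

sol_local.retcode, sol_global.retcode
```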

This paper is available on arXiv under a CC BY 4.0 DEED license.


[5] For Halley's method, we additionally assume twice-differentiability.

Authors:

(1) AVIK PAL, CSAIL MIT, Cambridge, MA;

(2) FLEMMING HOLTORF;

(3) AXEL LARSSON;

(4) TORKEL LOMAN;

(5) UTKARSH;

(6) FRANK SCHÄFER;

(7) QINGYU QU;

(8) ALAN EDELMAN;

(9) CHRIS RACKAUCKAS, CSAIL MIT, Cambridge, MA.

