Anderson acceleration

In mathematics, Anderson acceleration, also called Anderson mixing, is a method for the acceleration of the convergence rate of fixed-point iterations. Introduced by Donald G. Anderson, this technique forms each new iterate from a linear combination of several previous iterates and function values, chosen to minimize the residual.

Muller's method

Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956. Muller's method is based on the secant method, but instead of a line through two points it fits a parabola through three points, which also allows it to locate complex roots.

Methods of computing square roots

Methods of computing square roots are numerical analysis algorithms for approximating the principal, or non-negative, square root (usually denoted √S or S^(1/2)) of a real number. Arithmetically, it means given S, finding a number x such that x² = S.
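The oldest such method, the Babylonian (Heron's) method, repeatedly averages a guess x with S/x. A minimal Python sketch (the function name and stopping rule are illustrative):

```python
def babylonian_sqrt(s, tolerance=1e-12):
    """Approximate sqrt(s) for s > 0 by Heron's (Babylonian) iteration:
    repeatedly replace the guess x with the average of x and s/x."""
    x = s if s >= 1 else 1.0  # any positive starting guess works
    while abs(x * x - s) > tolerance * s:
        x = (x + s / x) / 2
    return x

print(babylonian_sqrt(2))  # ≈ 1.4142135
```

Each iteration roughly doubles the number of correct digits (quadratic convergence), since this is Newton's method applied to f(x) = x² - S.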

Solving quadratic equations with continued fractions

In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is ax² + bx + c = 0, where a ≠ 0. Such an equation can be solved using the well-known quadratic formula, but its roots can also be expressed as continued fractions.

Fixed-point iteration

In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. More specifically, given a function f defined on the real numbers with real values and given a point x₀ in the domain of f, the fixed-point iteration x_{n+1} = f(x_n) produces a sequence that, when it converges and f is continuous, converges to a fixed point of f.
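A minimal Python sketch of the iteration (function name and stopping rule are illustrative):

```python
import math

def fixed_point(f, x0, tolerance=1e-10, max_iterations=1000):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree."""
    x = x0
    for _ in range(max_iterations):
        x_next = f(x)
        if abs(x_next - x) < tolerance:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos is a contraction near its unique fixed point (the Dottie number)
print(fixed_point(math.cos, 1.0))  # ≈ 0.7390851
```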

Graeffe's method

In mathematics, Graeffe's method or the Dandelin–Lobachevsky–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and by Lobachevsky in 1834; Karl Heinrich Gräffe published it in 1837. The method repeatedly replaces the polynomial with one whose roots are the squares of the original roots, which drives the magnitudes of the roots apart.

Bairstow's method

In numerical analysis, Bairstow's method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix of the 1920 book Applied Aerodynamics by Leonard Bairstow. It extracts quadratic factors, so complex-conjugate root pairs are found using only real arithmetic.

Rational root theorem

In algebra, the rational root theorem (or rational root test, rational zero theorem, rational zero test or p/q theorem) states a constraint on rational solutions of a polynomial equation with integer coefficients: every rational solution, written in lowest terms as p/q, satisfies that p divides the constant term and q divides the leading coefficient.

Steffensen's method

In numerical analysis, Steffensen's method is a root-finding technique named after Johan Frederik Steffensen which is similar to Newton's method. Steffensen's method also achieves quadratic convergence, but without using the derivatives that Newton's method requires.
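The derivative in Newton's step is replaced by the finite-difference slope g(x) = (f(x + f(x)) - f(x)) / f(x). A Python sketch under that formulation (names are illustrative):

```python
def steffensen(f, x0, tolerance=1e-10, max_iterations=100):
    """Derivative-free Newton-like iteration: the slope estimate
    g(x) = (f(x + f(x)) - f(x)) / f(x) replaces f'(x)."""
    x = x0
    for _ in range(max_iterations):
        fx = f(x)
        if abs(fx) < tolerance:
            return x
        slope = (f(x + fx) - fx) / fx
        x = x - fx / slope
    raise RuntimeError("iteration did not converge")

# root of x^2 - 2, starting near 1.5
print(steffensen(lambda x: x * x - 2, 1.5))  # ≈ 1.4142136
```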

Householder's method

In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1. Each method in the class is characterized by its order d; Newton's method (d = 1) and Halley's method (d = 2) are its first two members.

Halley's method

In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative. It is named after its inventor Edmond Halley. The algorithm is second in the class of Householder's methods and exhibits cubic convergence.
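The update rule is x ← x - 2 f f′ / (2 f′² - f f″). A Python sketch (names are illustrative):

```python
def halley(f, df, d2f, x0, tolerance=1e-10, max_iterations=50):
    """Halley's iteration: x <- x - 2 f f' / (2 f'^2 - f f'')."""
    x = x0
    for _ in range(max_iterations):
        fx = f(x)
        if abs(fx) < tolerance:
            return x
        dfx, d2fx = df(x), d2f(x)
        x = x - 2 * fx * dfx / (2 * dfx * dfx - fx * d2fx)
    raise RuntimeError("iteration did not converge")

# cube root of 2 as the root of f(x) = x^3 - 2
print(halley(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, lambda x: 6 * x, 1.0))
```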

Brent's method

In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection, but it can be as fast as some of the less-reliable methods. It was devised by Richard Brent in 1973.

Methods of successive approximation

Mathematical methods relating to successive approximation include the following:
* Babylonian method, for finding square roots of numbers
* Fixed-point iteration
* Means of finding zeros of functions, such as Newton's method

Illinois algorithm

The Illinois algorithm is a modification of the regula falsi (false position) method: when the same endpoint is retained in two successive iterations, its function value is halved, which avoids the slow one-sided convergence of plain false position.

CORDIC

CORDIC (for "coordinate rotation digital computer"), also known as Volder's algorithm or the digit-by-digit method, comes in several variants: circular CORDIC (Jack E. Volder), linear CORDIC, and hyperbolic CORDIC (John Stephen Walther). It is a simple and efficient algorithm for calculating trigonometric, hyperbolic, and exponential functions using only addition, subtraction, bit shifts, and table lookups.
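A sketch of circular CORDIC in rotation mode, written in floating point for clarity (a hardware version would use fixed-point integers and real bit shifts; names and the iteration count are illustrative):

```python
import math

def cordic_sin_cos(angle, iterations=40):
    """Rotate the vector (1, 0) toward `angle` (radians, |angle| <= pi/2)
    by a sequence of fixed micro-rotations of atan(2^-i)."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # scaling constant K compensating for the micro-rotations' length gain
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, angle
    for i in range(iterations):
        sigma = 1.0 if z >= 0 else -1.0  # rotate toward the remaining angle
        x, y = x - sigma * y * 2.0 ** -i, y + sigma * x * 2.0 ** -i
        z -= sigma * angles[i]
    return y * k, x * k  # (sin, cos)

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)  # ≈ 0.5, ≈ 0.8660254
```

The multiplications by 2^-i stand in for the bit shifts a fixed-point implementation would use; the only table needed is the list of atan(2^-i) values.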

Ridders' method

In numerical analysis, Ridders' method is a root-finding algorithm based on the false position method and the use of an exponential function to successively approximate a root of a continuous function.
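Given a sign-changing bracket [a, b] with midpoint m, the method proposes x = m + (m - a)·sign(f(a) - f(b))·f(m) / √(f(m)² - f(a)f(b)) and re-brackets. A Python sketch (names are illustrative):

```python
import math

def ridders(f, a, b, tolerance=1e-10, max_iterations=60):
    """Ridders' method: midpoint evaluation plus an exponential
    correction, always keeping a sign-changing bracket."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iterations):
        m = (a + b) / 2
        fm = f(m)
        s = math.sqrt(fm * fm - fa * fb)  # > 0 since fa*fb < 0
        if s == 0:
            return m
        x = m + (m - a) * (1 if fa > fb else -1) * fm / s
        fx = f(x)
        if abs(fx) < tolerance:
            return x
        # choose the sub-bracket across which the sign still changes
        if fm * fx < 0:
            a, fa, b, fb = m, fm, x, fx
        elif fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    raise RuntimeError("iteration did not converge")

print(ridders(lambda z: math.cos(z) - z, 0.0, 1.0))  # ≈ 0.7390851
```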

Secant method

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method.
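A Python sketch in which the slope of the secant line through the last two iterates replaces the derivative (names are illustrative):

```python
import math

def secant(f, x0, x1, tolerance=1e-10, max_iterations=100):
    """Secant iteration: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iterations):
        if abs(f1) < tolerance:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    raise RuntimeError("iteration did not converge")

# root of cos(x) - x
print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))  # ≈ 0.7390851
```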

Inverse quadratic interpolation

In numerical analysis, inverse quadratic interpolation is a root-finding algorithm, meaning that it is an algorithm for solving equations of the form f(x) = 0. The idea is to use quadratic interpolation to approximate the inverse of f. The algorithm is rarely used on its own, but it is important as a component of Brent's method.
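Each step fits x as a quadratic function of y through three points and evaluates it at y = 0 (Lagrange form). A Python sketch without the safeguards Brent's method adds (names are illustrative; the bare iteration fails if two function values coincide):

```python
def inverse_quadratic_step(f, x0, x1, x2):
    """Interpolate x as a quadratic in y through three points,
    evaluated at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    return (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
            + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
            + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))

def inverse_quadratic(f, x0, x1, x2, tolerance=1e-10, max_iterations=100):
    for _ in range(max_iterations):
        x3 = inverse_quadratic_step(f, x0, x1, x2)
        if abs(f(x3)) < tolerance:
            return x3
        x0, x1, x2 = x1, x2, x3
    raise RuntimeError("iteration did not converge")

print(inverse_quadratic(lambda x: x * x - 2, 1.0, 1.5, 2.0))  # ≈ 1.4142136
```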

Broyden's method

In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965. Newton's method for solving f(x) = 0 uses the Jacobian matrix at every iteration; Broyden's method instead computes the full Jacobian only at the first iteration and applies rank-one updates at the subsequent ones.

Shifting nth root algorithm

The shifting nth root algorithm is an algorithm for extracting the nth root of a positive real number which proceeds iteratively by shifting in n digits of the radicand, starting with the most significant, and produces one digit of the root on each iteration.

Fast inverse square root

Fast inverse square root, sometimes referred to as Fast InvSqrt or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates 1/√x, the reciprocal (or multiplicative inverse) of the square root of a 32-bit floating-point number x in IEEE 754 format.
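A Python rendition of the famous Quake III trick, using `struct` to reinterpret the float's bits (the function name is illustrative; the magic constant and Newton step are from the original code):

```python
import struct

def fast_inverse_sqrt(x):
    """Estimate 1/sqrt(x): reinterpret the float bits as an integer,
    shift and subtract from 0x5F3759DF, then refine with one Newton step."""
    i = struct.unpack("<I", struct.pack("<f", x))[0]  # float bits -> int
    i = 0x5F3759DF - (i >> 1)                          # initial estimate
    y = struct.unpack("<f", struct.pack("<I", i))[0]  # int bits -> float
    y = y * (1.5 - 0.5 * x * y * y)                    # one Newton iteration
    return y

print(fast_inverse_sqrt(4.0))  # ≈ 0.5, to within about 0.2%
```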

Laguerre's method

In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation p(x) = 0 for a given polynomial p. One of its most useful properties is that, in practice, it converges to some root of the polynomial from almost any starting point.

Aberth method

The Aberth method, or Aberth–Ehrlich method or Ehrlich–Aberth method, named after Oliver Aberth and Louis W. Ehrlich, is a root-finding algorithm developed in 1967 for simultaneous approximation of all the roots of a univariate polynomial.

Root of a function

A root (or zero) of a function f is a member x of its domain such that f(x) = 0.

Integer square root

In number theory, the integer square root (isqrt) of a non-negative integer n is the non-negative integer m which is the greatest integer less than or equal to the square root of n. For example, isqrt(27) = 5, because 5² = 25 ≤ 27 < 36 = 6².
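It can be computed in exact integer arithmetic with Newton's method; the iterate is decreasing until it reaches the answer, so the loop stops when it stops decreasing. A Python sketch (Python's standard library also offers `math.isqrt`):

```python
def isqrt(n):
    """Integer square root by Newton's method in integer arithmetic."""
    if n < 0:
        raise ValueError("isqrt is defined for non-negative integers")
    if n == 0:
        return 0
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2  # integer Newton step for x^2 - n
    return x

print([isqrt(k) for k in (0, 1, 24, 25, 27)])  # [0, 1, 4, 5, 5]
```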

ITP method

In numerical analysis, the ITP method, short for Interpolate Truncate and Project, is the first root-finding algorithm that achieves the superlinear convergence of the secant method while retaining the worst-case guarantees of the bisection method.

Root-finding algorithms

In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to the real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0.

Lehmer–Schur algorithm

In mathematics, the Lehmer–Schur algorithm (named after Derrick Henry Lehmer and Issai Schur) is a root-finding algorithm for complex polynomials, extending the idea of enclosing roots, as in the one-dimensional bisection method, to the complex plane.

Budan's theorem

In mathematics, Budan's theorem is a theorem for bounding the number of real roots of a polynomial in an interval, and computing the parity of this number. It was published in 1807 by François Budan de Boislaurent.

Jenkins–Traub algorithm

The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. They gave two variants, one for general polynomials with complex coefficients and one for polynomials with real coefficients.

Splitting circle method

In mathematics, the splitting circle method is a numerical algorithm for the numerical factorization of a polynomial and, ultimately, for finding its complex roots. It was introduced by Arnold Schönhage in 1982.

Bailey's method (root finding)

No description available.

Bisection method

In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by the two values and then selecting the subinterval in which the function changes sign, and which therefore must contain a root.
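A Python sketch of the halving loop (names are illustrative):

```python
def bisect(f, a, b, tolerance=1e-12):
    """Bisection: f(a) and f(b) must have opposite signs; halve the
    bracket, keeping the half across which the sign changes."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tolerance:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2

# the real root of x^3 - x - 2
print(bisect(lambda x: x ** 3 - x - 2, 1.0, 2.0))  # ≈ 1.5213797
```

The bracket halves each iteration, so the error bound after n steps is (b - a) / 2ⁿ regardless of how badly behaved f is between the endpoints.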

Durand–Kerner method

In numerical analysis, the Weierstrass method or Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations that finds all the roots of a polynomial simultaneously.
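Each root estimate takes a Newton-like step whose denominator is the product of its distances to the other current estimates. A Python sketch for a monic polynomial (names and the starting values are illustrative; powers of 0.4 + 0.9i are a common choice because they are neither real nor of unit modulus):

```python
def durand_kerner(coefficients, iterations=100):
    """Simultaneously approximate all roots of a monic polynomial
    (coefficients highest degree first, coefficients[0] == 1)."""
    n = len(coefficients) - 1  # degree

    def p(x):
        result = 0j
        for c in coefficients:
            result = result * x + c  # Horner evaluation
        return result

    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        for i in range(n):
            denom = 1.0 + 0j
            for j in range(n):
                if j != i:
                    denom *= roots[i] - roots[j]
            roots[i] = roots[i] - p(roots[i]) / denom
    return roots

# x^3 - 3x^2 + 3x - 5 = 0, i.e. (x - 1)^3 = 4
roots = durand_kerner([1, -3, 3, -5])
print(sorted(roots, key=lambda z: z.real))
```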

Alpha max plus beta min algorithm

The alpha max plus beta min algorithm is a high-speed approximation of the square root of the sum of two squares. The square root of the sum of two squares, also known as Pythagorean addition, is a useful function because it gives the length of the hypotenuse of a right triangle from the two side lengths; the algorithm approximates it without squaring or square-root operations.
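A Python sketch with the minimax coefficient pair α = 2cos(π/8)/(1 + cos(π/8)), β = 2sin(π/8)/(1 + cos(π/8)), for which the error stays below roughly 4% (names are illustrative):

```python
import math

ALPHA = 2 * math.cos(math.pi / 8) / (1 + math.cos(math.pi / 8))  # ≈ 0.96043
BETA = 2 * math.sin(math.pi / 8) / (1 + math.cos(math.pi / 8))   # ≈ 0.39782

def hypot_approx(a, b):
    """Approximate sqrt(a^2 + b^2) as alpha*max(|a|,|b|) + beta*min(|a|,|b|)."""
    a, b = abs(a), abs(b)
    return ALPHA * max(a, b) + BETA * min(a, b)

print(hypot_approx(3.0, 4.0), math.hypot(3.0, 4.0))  # ≈ 5.04 vs 5.0
```

In hardware the multiplications by α and β are themselves replaced by shift-and-add approximations, leaving only comparisons, shifts, and additions.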

Newton's method

In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function.
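Each step follows the tangent line at the current iterate down to its x-intercept: x ← x - f(x)/f′(x). A Python sketch (names are illustrative):

```python
def newton(f, df, x0, tolerance=1e-10, max_iterations=50):
    """Newton-Raphson iteration: x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(max_iterations):
        fx = f(x)
        if abs(fx) < tolerance:
            return x
        x = x - fx / df(x)
    raise RuntimeError("iteration did not converge")

# sqrt(612) as the root of x^2 - 612, starting from a rough guess
print(newton(lambda x: x * x - 612, lambda x: 2 * x, 10.0))  # ≈ 24.7386338
```

Near a simple root the convergence is quadratic, but the iteration can diverge or cycle when f′ vanishes or the starting point is poor, which is why hybrid methods such as Brent's exist.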

Sidi's generalized secant method

Sidi's generalized secant method is a root-finding algorithm, that is, a numerical method for solving equations of the form f(x) = 0. The method was published by Avram Sidi. It is a generalization of the secant method that interpolates the function by a polynomial through several previous iterates, achieving a higher order of convergence.

Real-root isolation

In mathematics, and, more specifically, in numerical analysis and computer algebra, real-root isolation of a polynomial consists of producing disjoint intervals of the real line that each contain exactly one real root of the polynomial and that together contain all of its real roots.

Ruffini's rule

In mathematics, Ruffini's rule is a method for computation of the Euclidean division of a polynomial by a binomial of the form x – r. It was described by Paolo Ruffini in 1804. The rule is a special case of synthetic division in which the divisor is a linear factor.
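Each quotient coefficient is the corresponding dividend coefficient plus r times the previous quotient coefficient; the final value is the remainder, which by the remainder theorem equals p(r). A Python sketch (names are illustrative):

```python
def ruffini_divide(coefficients, r):
    """Synthetic division of a polynomial (highest degree first) by (x - r);
    returns (quotient coefficients, remainder).  The remainder equals p(r),
    so r is a root exactly when the remainder is zero."""
    row = [coefficients[0]]
    for c in coefficients[1:]:
        row.append(c + row[-1] * r)
    return row[:-1], row[-1]

# (2x^3 + 3x^2 - 4) / (x + 1): quotient 2x^2 + x - 1, remainder -3
print(ruffini_divide([2, 3, 0, -4], -1))  # ([2, 1, -1], -3)
```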

© 2023 Useful Links.