List of numerical analysis topics

General

  • Iterative method
  • Rate of convergence — the speed at which a convergent sequence approaches its limit
  • Series acceleration — methods to accelerate the speed of convergence of a series
    • Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch after this list)
    • Minimum polynomial extrapolation — for vector sequences
    • Richardson extrapolation
    • Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
    • Van Wijngaarden transformation — for accelerating the convergence of an alternating series
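
As referenced above, a minimal Python sketch of Aitken's delta-squared process; the function name and the Leibniz-series test are illustrative choices, not from the linked article:

```python
def aitken(seq):
    """Accelerate a linearly convergent sequence with Aitken's delta-squared process.

    Transforms x_n into x_{n+2} - (x_{n+2} - x_{n+1})^2 / (x_{n+2} - 2 x_{n+1} + x_n).
    """
    out = []
    for n in range(len(seq) - 2):
        x0, x1, x2 = seq[n], seq[n + 1], seq[n + 2]
        denom = x2 - 2 * x1 + x0          # second forward difference
        if denom == 0:                    # sequence already (numerically) converged
            break
        out.append(x2 - (x2 - x1) ** 2 / denom)
    return out

# Example: partial sums of the slowly convergent Leibniz series for pi/4.
partial, s = [], 0.0
for k in range(10):
    s += (-1) ** k / (2 * k + 1)
    partial.append(s)
print(4 * partial[-1])          # crude estimate of pi
print(4 * aitken(partial)[-1])  # noticeably closer to pi
```
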
  • Level set method
    • Level set (data structures) — data structures for representing level sets
  • Abramowitz and Stegun — book containing formulas and tables of many special functions
    • Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
  • Curse of dimensionality
  • Local convergence and global convergence — whether you need a good initial guess to get convergence
  • Superconvergence
  • Discretization
    • Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
  • Difference quotient
  • Complexity:
    • Computational complexity of mathematical operations
    • Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
  • Symbolic-numeric computation — combination of symbolic and numeric methods
  • ABS methods
  • Cultural aspects:
    • International Workshops on Lattice QCD and Numerical Analysis
    • Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002

Error

Error analysis — the study of the kind and quantity of error in a computation
  • Approximation
  • Approximation error
  • Arithmetic precision
  • Condition number
  • Discretization error
  • Floating point number
    • Guard digit — extra precision introduced during a computation to reduce round-off error
    • Arbitrary-precision arithmetic
    • Truncation — rounding a floating-point number by discarding all digits after a certain digit
  • Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
    • See also: Interval boundary element method, Interval finite element
  • Loss of significance (see the example after this list)
  • Numerical error
  • Numerical stability
  • Error propagation:
    • Propagation of uncertainty
    • Significance arithmetic
    • Residual (numerical analysis)
  • Relative difference — the relative difference between x and y is |x − y| / max(|x|, |y|)
  • Round-off error
    • Stochastic rounding
  • Significant figures
    • False precision — giving more significant figures than appropriate
  • Truncation error — error committed by doing only a finite number of steps
  • Well-posed problem
  • Affine arithmetic

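The loss-of-significance entry above references this example: a standard Python illustration using the quadratic formula, where subtracting two nearly equal numbers destroys accuracy and an algebraically equivalent form avoids it (the specific numbers are illustrative):

```python
import math

# Solve x^2 + b*x + c = 0 with b large and c small: the small root is about -c/b,
# and the textbook formula subtracts two nearly equal numbers to get it.
b, c = 1e8, 1.0

naive = (-b + math.sqrt(b * b - 4 * c)) / 2          # catastrophic cancellation
stable = (2 * c) / (-b - math.sqrt(b * b - 4 * c))   # rationalized form, no cancellation

print(naive)   # suffers loss of significance (few correct digits)
print(stable)  # close to the true root -1e-8
```
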

Elementary and special functions

  • Summation:
    • Kahan summation algorithm (see the sketch after this list)
    • Binary splitting
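
A minimal sketch of the Kahan (compensated) summation idea in Python; the name `kahan_sum` and the test data are illustrative:

```python
def kahan_sum(values):
    """Sum floats with a running compensation for lost low-order bits."""
    total = 0.0
    comp = 0.0                 # running compensation for rounding error
    for x in values:
        y = x - comp           # subtract the error carried from the last step
        t = total + y          # low-order digits of y may be lost here...
        comp = (t - total) - y # ...but are recovered algebraically
        total = t
    return total

data = [0.1] * 10
print(sum(data))        # 0.9999999999999999 — rounding error accumulates
print(kahan_sum(data))  # 1.0 — the compensated sum is correctly rounded
```
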
  • Multiplication:
    • Multiplication algorithm — general discussion, simple methods
    • Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication
    • Toom–Cook multiplication — generalization of Karatsuba multiplication
    • Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
    • Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
  • Exponentiation:
    • Exponentiation by squaring (see the sketch after this list)
    • Addition-chain exponentiation
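
A minimal Python sketch of exponentiation by squaring (illustrative names):

```python
def power(base, exp):
    """Compute base**exp for a non-negative integer exp by repeated squaring.

    Uses O(log exp) multiplications instead of exp - 1.
    """
    result = 1
    while exp > 0:
        if exp & 1:            # current lowest bit set: multiply result in
            result *= base
        base *= base           # square for the next binary digit
        exp >>= 1
    return result

print(power(3, 13))  # 1594323, with only a handful of multiplications
```
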
  • Polynomials:
    • Horner scheme (see the sketch after this list)
    • Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
    • Clenshaw algorithm
    • De Casteljau's algorithm
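
A minimal Python sketch of the Horner scheme; the highest-coefficient-first ordering is an assumption of this sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] at x.

    Uses n multiplications and n additions: (((a_n x + a_{n-1}) x + ...) x + a_0).
    """
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # 5.0
```
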
  • Square roots and other roots:
    • Integer square root
    • Methods of computing square roots (see the sketch after this list)
    • nth root algorithm
    • Shifting nth root algorithm — similar to long division
    • Alpha max plus beta min algorithm — approximates √(x² + y²)
    • Fast inverse square root — calculates 1 / √x using details of the IEEE floating-point system
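
One standard method of computing square roots is Newton's (Heron's) iteration; a hedged Python sketch, with the tolerance and starting guess as illustrative choices:

```python
def sqrt_newton(a, tol=1e-12):
    """Approximate sqrt(a) for a > 0 by Newton's method (Heron's method).

    Iterates x <- (x + a/x) / 2, which converges quadratically.
    """
    x = a if a >= 1 else 1.0       # crude initial guess
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)
    return x

print(sqrt_newton(2.0))  # 1.4142135623730951
```
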
  • Elementary functions (exponential, logarithm, trigonometric functions):
    • Generating trigonometric tables
    • CORDIC — shift-and-add algorithm using a table of arc tangents
    • BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
  • Gamma function:
    • Lanczos approximation
    • Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
  • AGM method — computes arithmetic–geometric mean; related methods compute special functions
  • Gal's accurate tables — table of function values with unequal spacing to reduce round-off error
  • Spigot algorithm — algorithms that can compute individual digits of a real number
  • Approximations of π:
    • Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
    • Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean (see the sketch after this list)
    • Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
    • Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
    • Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
    • Bellard's formula — faster version of the Bailey–Borwein–Plouffe formula
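
A sketch of the Gauss–Legendre iteration in ordinary double precision; the iteration count is chosen for double precision, and arbitrary-precision arithmetic would be needed for more digits:

```python
import math

def gauss_legendre_pi(iterations=4):
    """Approximate pi with the Gauss–Legendre (AGM-based) iteration.

    Each iteration roughly doubles the number of correct digits.
    """
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())  # pi to (nearly) full double precision
```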

Numerical linear algebra

Numerical linear algebra — study of numerical algorithms for linear algebra problems

Basic concepts

  • Types of matrices appearing in numerical analysis:
    • Sparse matrix
      • Band matrix
      • Tridiagonal matrix
      • Pentadiagonal matrix
      • Skyline matrix
    • Circulant matrix
    • Triangular matrix
    • Diagonally dominant matrix
    • Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
    • Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
    • Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
  • Algorithms for matrix multiplication:
    • Strassen algorithm
    • Coppersmith–Winograd algorithm
    • Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid
    • Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication (see the sketch after this list)
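
A sketch of Freivalds' randomized check in plain Python (list-of-lists matrices; the trial count is illustrative):

```python
import random

def freivalds_check(A, B, C, trials=10):
    """Randomized check whether A @ B == C for n x n matrices (lists of lists).

    Each trial multiplies by a random 0/1 vector in O(n^2) time; a wrong C is
    detected with probability at least 1/2 per trial.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False      # definitely a wrong product
    return True               # correct with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]]))  # True
print(freivalds_check(A, B, [[19, 22], [43, 51]]))  # almost surely False
```
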
  • Matrix decompositions:
    • LU decomposition — lower triangular times upper triangular
    • QR decomposition — orthogonal matrix times triangular matrix
    • Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
    • Decompositions by similarity:
      • Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
      • Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
      • Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
      • Schur decomposition — similarity transform bringing the matrix to a triangular matrix

Solving systems of linear equations

  • Gaussian elimination
    • Row echelon form — matrix in which all entries below a nonzero entry are zero
    • Gauss–Jordan elimination — variant in which the entries below the pivot are also zeroed
    • Montante's method — variant which ensures that all entries remain integers if the initial matrix has integer entries
    • Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices (see the sketch after this list)
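
A sketch of the tridiagonal (Thomas) algorithm; the diagonals-as-lists convention is an assumption of this sketch, and no pivoting is done, so something like diagonal dominance is assumed:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals a, b, c.

    a and c have length n-1, b and the right-hand side d have length n.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Diagonal 2, off-diagonals -1: the solution of this system is [1, 1, 1]
print(thomas([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0], [1.0, 0.0, 1.0]))
```
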
  • LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
    • Crout matrix decomposition
    • LU reduction — a special parallelized version of a LU decomposition algorithm
  • Block LU decomposition
  • Cholesky decomposition — for solving a system with a positive definite matrix
    • Minimum degree algorithm
    • Symbolic Cholesky decomposition
  • Frontal solver — for sparse matrices; used in finite element methods
  • Levinson recursion — for Toeplitz matrices
  • SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
  • Iterative methods:
    • Jacobi method (see the sketch after this list)
    • Gauss–Seidel method
      • Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
    • Modified Richardson iteration
    • Conjugate gradient method (CG) — assumes that the matrix is positive definite
      • Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
    • Biconjugate gradient method (BiCG)
    • Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
    • Stone's method (SIP, Strongly Implicit Procedure) — uses an incomplete LU decomposition
    • Kaczmarz method
    • Preconditioner
      • Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
      • Incomplete LU factorization — sparse approximation to the LU factorization
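
A minimal Python sketch of the Jacobi method for a diagonally dominant system; the fixed iteration count is for simplicity, where a real solver would test the residual:

```python
def jacobi(A, b, iterations=50):
    """Jacobi iteration for Ax = b (A as a list of lists, diagonally dominant).

    Each sweep solves the i-th equation for x_i using only the previous iterate,
    so all components can be updated in parallel.
    """
    n = len(A)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant, so Jacobi converges
b = [9.0, 13.0]
print(jacobi(A, b))            # ~[1.7778, 1.8889]
```
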
  • Underdetermined and overdetermined systems (systems that have no or more than one solution):
    • Numerical computation of null space — find all solutions of an underdetermined system
    • Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual
    • Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)

Eigenvalue algorithms

Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
  • Power iteration (see the sketch after this list)
  • Inverse iteration
  • Rayleigh quotient iteration
  • Arnoldi iteration — based on Krylov subspaces
  • Lanczos algorithm — Arnoldi, specialized for positive-definite matrices
  • QR algorithm
  • Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
  • Divide-and-conquer eigenvalue algorithm
  • Folded spectrum method
  • LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
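
A minimal Python sketch of power iteration; normalization by the largest component is one common choice:

```python
def power_iteration(A, steps=100):
    """Estimate the magnitude of the dominant eigenvalue of A, and an eigenvector.

    Converges when A has a single eigenvalue of largest magnitude.
    """
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]        # renormalize to avoid overflow
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 3 and 1
lam, v = power_iteration(A)
print(lam)  # ~3.0, the dominant eigenvalue
```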

Other concepts and algorithms

  • Orthogonalization algorithms:
    • Gram–Schmidt process (see the sketch after this list)
    • Householder transformation
    • Givens rotation
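
A sketch of the Gram–Schmidt process in Python; the "modified" update order used here is a deliberate choice for numerical stability, and the input is assumed linearly independent:

```python
def modified_gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (lists of floats).

    The modified variant subtracts each projection immediately, which is
    numerically more stable than the classical formulation.
    """
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:
            dot = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]  # remove component along q
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

Q = modified_gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(Q)  # two orthonormal vectors spanning the same plane
```
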
  • Krylov subspace
  • Block matrix pseudoinverse
  • Bidiagonalization
  • In-place matrix transposition — computing the transpose of a matrix without using much additional storage
  • Pivot element — entry in a matrix on which the algorithm concentrates
  • Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products

Polynomial interpolation

Polynomial interpolation — interpolation by polynomials
  • Linear interpolation
  • Runge's phenomenon
  • Vandermonde matrix
  • Chebyshev polynomials
  • Chebyshev nodes
  • Lebesgue constant (interpolation)
  • Different forms for the interpolant:
    • Newton polynomial
      • Divided differences
      • Neville's algorithm — for evaluating the interpolant; based on the Newton form (see the sketch after this list)
    • Lagrange polynomial
    • Bernstein polynomial — especially useful for approximation
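
A sketch of Neville's algorithm in Python (the tableau is built in place; names are illustrative):

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x.

    Builds Neville's tableau in place; O(n^2) operations.
    """
    p = ys[:]
    n = len(xs)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = (((x - xs[i + level]) * p[i] + (xs[i] - x) * p[i + 1])
                    / (xs[i] - xs[i + level]))
    return p[0]

# Interpolate y = x^2 through three points and evaluate at x = 1.5
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```
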
  • Extensions to multiple dimensions:
    • Bilinear interpolation
    • Trilinear interpolation
    • Bicubic interpolation
    • Tricubic interpolation
    • Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
  • Hermite interpolation
  • Birkhoff interpolation

Spline interpolation

Spline interpolation — interpolation by piecewise polynomials
  • Spline (mathematics) — the piecewise polynomials used as interpolants
  • Perfect spline — polynomial spline of degree m whose mth derivative is ±1
  • Cubic Hermite spline
  • Monotone cubic interpolation
  • Hermite spline
  • Cardinal spline
  • Bézier spline
    • Bézier curve
    • De Casteljau's algorithm (see the sketch after this list)
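
A minimal Python sketch of De Casteljau's algorithm for evaluating a Bézier curve:

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve with the given control points at t in [0, 1].

    Repeated linear interpolation between consecutive control points; it is
    numerically stable and also the basis for curve subdivision.
    """
    pts = [p[:] for p in points]
    while len(pts) > 1:
        pts = [[(1 - t) * p[k] + t * q[k] for k in range(len(p))]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bézier curve in the plane, evaluated at its midpoint
ctrl = [[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]]
print(de_casteljau(ctrl, 0.5))  # [1.0, 1.0]
```
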
  • Smoothing spline
    Smoothing spline
    The smoothing spline is a method of smoothing using a spline function.-Definition:Let ;x_1...

    • Generalizations to more dimensions:
      • Bézier triangle — maps a triangle to R3
      • Bézier surface
        Bézier surface
        Bézier surfaces are a species of mathematical spline used in computer graphics, computer-aided design, and finite element modeling.As with the Bézier curve, a Bézier surface is defined by a set of control points...

         — maps a square to R3
  • B-spline
    B-spline
    In the mathematical subfield of numerical analysis, a B-spline is a spline function that has minimal support with respect to a given degree, smoothness, and domain partition. B-splines were investigated as early as the nineteenth century by Nikolai Lobachevsky...

    • Truncated power function
    • De Boor's algorithm
      De Boor's algorithm
      In the mathematical subfield of numerical analysis the de Boor's algorithm is a fast and numerically stable algorithm for evaluating spline curves in B-spline form. It is a generalization of the de Casteljau's algorithm for Bézier curves. The algorithm was devised by Carl R. de Boor...

       — generalizes De Casteljau's algorithm
  • Nonuniform rational B-spline
    Nonuniform rational B-spline
    Non-uniform rational basis spline is a mathematical model commonly used in computer graphics for generating and representing curves and surfaces which offers great flexibility and precision for handling both analytic and freeform shapes.- History :Development of NURBS began in the 1950s by...

     (NURBS)
  • Kochanek–Bartels spline
  • Blossom (functional)
    Blossom (functional)
    In numerical analysis, a blossom is a functional that can be applied to any polynomial, but is mostly used for Bézier and spline curves and surfaces....

     — a unique, affine, symmetric map associated to a polynomial or spline
  • See also: List of numerical computational geometry topics
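
The De Casteljau recursion referenced above is short enough to state concretely. A minimal sketch in Python (the function name and the tuple representation of points are illustrative choices, not from the source):

    def de_casteljau(points, t):
        """Evaluate the Bezier curve with the given control points at parameter t.

        Repeatedly replaces the control polygon by pairwise linear
        interpolations until a single point remains.
        """
        pts = [tuple(p) for p in points]
        while len(pts) > 1:
            pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                   for p, q in zip(pts, pts[1:])]
        return pts[0]

    # Quadratic Bezier curve through [(0, 0), (1, 2), (2, 0)] at t = 0.5
    print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)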

Trigonometric interpolation

Trigonometric interpolation
Trigonometric interpolation
In mathematics, trigonometric interpolation is interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and...

 — interpolation by trigonometric polynomials
  • Discrete Fourier transform
    Discrete Fourier transform
    In mathematics, the discrete Fourier transform is a specific kind of discrete transform, used in Fourier analysis. It transforms one function into another, which is called the frequency domain representation, or simply the DFT, of the original function...

     — can be viewed as trigonometric interpolation at equidistant points
    • Relations between Fourier transforms and Fourier series
      Relations between Fourier transforms and Fourier series
      In the mathematical field of harmonic analysis, the continuous Fourier transform has very precise relations with Fourier series. It is also closely related to the discrete-time Fourier transform and the discrete Fourier transform ....

  • Fast Fourier transform
    Fast Fourier transform
    A fast Fourier transform is an efficient algorithm to compute the discrete Fourier transform and its inverse. "The FFT has been called the most important numerical algorithm of our lifetime ." There are many distinct FFT algorithms involving a wide range of mathematics, from simple...

     — a fast method for computing the discrete Fourier transform
    • Bluestein's FFT algorithm
      Bluestein's FFT algorithm
      Bluestein's FFT algorithm , commonly called the chirp z-transform algorithm , is a fast Fourier transform algorithm that computes the discrete Fourier transform of arbitrary sizes by re-expressing the DFT as a convolution...

    • Bruun's FFT algorithm
      Bruun's FFT algorithm
      Bruun's algorithm is a fast Fourier transform algorithm based on an unusual recursive polynomial-factorization approach, proposed for powers of two by G. Bruun in 1978 and generalized to arbitrary even composite sizes by H. Murakami in 1996...

    • Cooley–Tukey FFT algorithm — recursively splits a DFT of composite size into smaller DFTs (see the sketch at the end of this section)
    • Split-radix FFT algorithm
      Split-radix FFT algorithm
      The split-radix FFT is a fast Fourier transform algorithm for computing the discrete Fourier transform , and was first described in an initially little-appreciated paper by R. Yavne and subsequently rediscovered simultaneously by various authors in 1984. The split-radix FFT is a fast Fourier...

       — variant of Cooley–Tukey that uses a blend of radices 2 and 4
    • Goertzel algorithm
      Goertzel algorithm
      The Goertzel algorithm is a digital signal processing technique for identifying frequency components of a signal, published by Gerald Goertzel in 1958...

    • Prime-factor FFT algorithm
      Prime-factor FFT algorithm
      The prime-factor algorithm , also called the Good–Thomas algorithm , is a fast Fourier transform algorithm that re-expresses the discrete Fourier transform of a size N = N1N2 as a two-dimensional N1×N2 DFT, but only for the case where N1 and N2 are relatively prime...

    • Rader's FFT algorithm
      Rader's FFT algorithm
      Rader's algorithm is a fast Fourier transform algorithm that computes the discrete Fourier transform of prime sizes by re-expressing the DFT as a cyclic convolution...

    • Butterfly diagram
      Butterfly diagram
      In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms into a larger DFT, or vice versa . The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described...

    • Twiddle factor
      Twiddle factor
      A twiddle factor, in fast Fourier transform algorithms, is any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm...

       — the trigonometric constant coefficients that are multiplied by the data
    • Methods for computing discrete convolutions with finite impulse response filters using the FFT:
      • Overlap-add method
      • Overlap-save method
        Overlap-save method
        Overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x[n] and a finite impulse response filter h[n]:...

  • Sigma approximation
    Sigma approximation
    In mathematics, σ-approximation adjusts a Fourier summation to eliminate the Gibbs phenomenon which would otherwise occur at discontinuities.A σ-approximated summation for a series of period T can be written as follows:...

  • Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant
  • Gibbs phenomenon
    Gibbs phenomenon
    In mathematics, the Gibbs phenomenon, named after the American physicist J. Willard Gibbs, is the peculiar manner in which the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity: the nth partial sum of the Fourier series has large...

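The radix-2 Cooley–Tukey scheme noted above reduces a length-n DFT to two length-n/2 DFTs combined with twiddle factors. A minimal recursive sketch in Python (the function name and input conventions are illustrative; production code would use an optimized library such as numpy.fft):

    import cmath

    def fft(x):
        """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft(x[0::2])        # DFT of the even-indexed samples
        odd = fft(x[1::2])         # DFT of the odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
            out[k] = even[k] + w                             # butterfly
            out[k + n // 2] = even[k] - w
        return out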

Other interpolants

  • Simple rational approximation
    Simple rational approximation
    Simple rational approximation is a subset of interpolating methods using rational functions. Especially, SRA interpolates a given function with a specific rational function whose poles and zeros are simple, which means that there is no multiplicity in poles and zeros. Sometimes, it only implies...

    • Polynomial and rational function modeling
      Polynomial and rational function modeling
      In statistical modeling , polynomial functions and rational functions are sometimes used as an empirical technique for curve fitting.-Polynomial function models:A polynomial function is one that has the form...

       — comparison of polynomial and rational interpolation
  • Wavelet
    Wavelet
    A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have...

    • Continuous wavelet
      Continuous wavelet
      In numerical analysis, continuous wavelets are functions used by the continuous wavelet transform. These functions are defined as analytical expressions, as functions either of time or of frequency....

    • Transfer matrix
      Transfer matrix
      The transfer matrix is a formulation in terms of a block-Toeplitz matrix of the two-scale equation, which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory....

    • Cascade algorithm
      Cascade algorithm
      In the mathematical topic of wavelet theory, the cascade algorithm is a numerical method for calculating function values of the basic scaling and wavelet functions of a discrete wavelet transform using an iterative algorithm. It starts from values on a coarse sequence of sampling points and...

       — iterative algorithm to compute wavelets
    • See also: List of functional analysis topics, List of wavelet-related transforms
  • Inverse distance weighting
    Inverse distance weighting
    Inverse distance weighting is a method for multivariate interpolation, a process of assigning values to unknown points by using values from a usually scattered set of known points...

     — weights the known values by inverse powers of their distance to the query point (see the sketch at the end of this section)
  • Radial basis function
    Radial basis function
    A radial basis function is a real-valued function whose value depends only on the distance from the origin, so that φ(x) = φ(‖x‖); or alternatively on the distance from some other point c, called a center, so that φ(x, c) = φ(‖x − c‖)...

     (RBF) — a function of the form ƒ(x) = φ(|x − x0|)
    • Polyharmonic spline
      Polyharmonic spline
      In mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolation of scattered data in many dimensions. Polyharmonic splines are a special case of radial basis functions and...

       — a commonly used radial basis function
    • Thin plate spline
      Thin plate spline
      This is a brief derivation for the closed form solutions for smoothing Thin Plate Spline. Details about these splines can be found in ....

       — a specific polyharmonic spline: r² log r
    • Radial basis function network
      Radial basis function network
      A radial basis function network is an artificial neural network that uses radial basis functions as activation functions. It is a linear combination of radial basis functions...

       — neural network using radial basis functions as activation functions
    • Hierarchical RBF
      Hierarchical RBF
      A hierarchical RBF is an interpolation method based on RBF. Interpolation based on RBF is used for the construction of shape models in 3D computer graphics, treatment of results from a 3D scanner, terrain reconstruction and others. This problem is often named "large scattered data point set...

  • Subdivision surface
    Subdivision surface
    A subdivision surface, in the field of 3D computer graphics, is a method of representing a smooth surface via the specification of a coarser piecewise linear polygon mesh...

     — constructed by recursively subdividing a piecewise linear interpolant
    • Catmull–Clark subdivision surface
    • Doo–Sabin subdivision surface
    • Loop subdivision surface
      Loop subdivision surface
      In computer graphics, Loop subdivision surface is a subdivision scheme developed by Charles Loop in 1987 for triangular meshes.Each triangle is divided into four subtriangles, adding new vertices in the middle of each edge.- External links :...

  • Slerp
    Slerp
    In computer graphics, Slerp is shorthand for spherical linear interpolation, introduced by Ken Shoemake in the context of quaternion interpolation for the purpose of animating 3D rotation...

  • Irrational base discrete weighted transform
    Irrational base discrete weighted transform
    In mathematics, the irrational base discrete weighted transform is a variant of the fast Fourier transform using an irrational base; it was developed by Richard Crandall, Barry Fagin and Joshua Doenias in the early 1990s using Mathematica. The IBDWT is used in the Great Internet Mersenne Prime...

  • Nevanlinna–Pick interpolation
    Nevanlinna–Pick interpolation
    In complex analysis, Nevanlinna–Pick interpolation is the problem of finding a holomorphic function from the unit disc to the unit disc , which takes specified points to specified points...

     — interpolation by analytic functions in the unit disc subject to a bound
    • Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
  • Pareto interpolation
    Pareto interpolation
    Pareto interpolation is a method of estimating the median and other properties of a population that follows a Pareto distribution. It is used in economics when analysing the distribution of incomes in a population, when one must base estimates on a relatively small random sample taken from the...

  • Multivariate interpolation
    Multivariate interpolation
    In numerical analysis, multivariate interpolation or spatial interpolation is interpolation on functions of more than one variable.The function to be interpolated is known at given points and the interpolation problem consist of yielding values at arbitrary points .-Regular grid:For function...

     — the function being interpolated depends on more than one variable
    • Barnes interpolation
      Barnes interpolation
      Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unstructured data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables...

       — method for two-dimensional functions using Gaussians, common in meteorology
    • Lanczos resampling
      Lanczos resampling
      Lanczos resampling is an interpolation method used to compute new values for sampled data. It is often used in multivariate interpolation, for example for image scaling , but can be used for any other digital signal...

       — based on convolution with a sinc function
    • Natural neighbor
      Natural neighbor
      Natural neighbor interpolation. The colored circles, which represent the interpolating weights, wi, are generated using the ratio of the shaded area to that of the cell area of the surrounding points...

       interpolation
    • PDE surface
      PDE surface
      PDE surfaces are used in geometric modelling and computer graphics for creating smooth surfaces conforming to a given boundary configuration. PDE surfaces utilise partial differential equations to generate a surface which usually satisfies a mathematical boundary value problem. PDE surfaces were first...

    • Methods based on polynomials are listed under Polynomial interpolation
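
The inverse distance weighting sketch promised above: Shepard's method forms a weighted average of the known values, with weights decaying as a power of the distance to the query point. A one-dimensional illustration in Python (the function name and the power-2 default are illustrative choices):

    def idw(xs, ys, x, power=2.0):
        """Shepard's inverse distance weighting at query point x."""
        num = den = 0.0
        for xi, yi in zip(xs, ys):
            d = abs(x - xi)
            if d == 0.0:
                return yi          # query coincides with a data point
            w = d ** -power
            num += w * yi
            den += w
        return num / den

    print(idw([0.0, 1.0, 2.0], [0.0, 10.0, 4.0], 1.5))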

Approximation theory

Approximation theory
Approximation theory
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby...

  • Orders of approximation
    Orders of approximation
    In science, engineering, and other quantitative disciplines, orders of approximation refer to formal or informal terms for how precise an approximation is, and to indicate progressively more refined approximations: in increasing order of precision, a zeroth order approximation, a first order...

  • Lebesgue's lemma
    Lebesgue's lemma
    For Lebesgue's lemma for open covers of compact spaces in topology see Lebesgue's number lemma. In mathematics, Lebesgue's lemma is an important statement in approximation theory. It provides a bound for the projection error...

  • Curve fitting
    Curve fitting
    Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function...

    • Vector field reconstruction
      Vector field reconstruction
      Vector field reconstruction is a method of creating a vector field from experimental or computer generated data, usually with the goal of finding a differential equation model of the system....

  • Modulus of continuity
    Modulus of continuity
    In mathematical analysis, a modulus of continuity is a function ω : [0, ∞] → [0, ∞] used to measure quantitatively the uniform continuity of functions. So, a function f : I → R admits ω as a modulus of continuity if and only if |f(x) − f(y)| ≤ ω(|x − y|) for all x and y in the domain of f...

     — measures smoothness of a function
  • Minimax approximation algorithm
    Minimax approximation algorithm
    Polynomial expansions such as the Taylor series expansion are often convenient for theoretical work but less useful for practical applications. For practical work it is often desirable to minimize the maximum absolute or relative error of a polynomial fit for any given number of terms in an effort...

     — minimizes the maximum error over an interval (the L∞-norm)
  • Approximation by polynomials:
    • Linear approximation
      Linear approximation
      In mathematics, a linear approximation is an approximation of a general function using a linear function . They are widely used in the method of finite differences to produce first order methods for solving or approximating solutions to equations.-Definition:Given a twice continuously...

    • Bernstein polynomial
      Bernstein polynomial
      In the mathematical field of numerical analysis, a Bernstein polynomial, named after Sergei Natanovich Bernstein, is a polynomial in the Bernstein form, that is a linear combination of Bernstein basis polynomials....

       — basis of polynomials useful for approximating a function (see the sketch at the end of this section)
    • Remez algorithm
      Remez algorithm
      The Remez algorithm , published by Evgeny Yakovlevich Remez in 1934 is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm L∞ sense.A typical...

       — for constructing the best polynomial approximation in the L∞-norm
    • Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions
    • Bernstein's constant
      Bernstein's constant
      Bernstein's constant, usually denoted by the Greek letter β , is a mathematical constant named after Sergei Natanovich Bernstein and is approximately equal to 0.2801694990.- Definition :...

       — error when approximating |x| by a polynomial
  • Surrogate model
    Surrogate model
    Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as functions of design variables. For example, in order to find the optimal airfoil shape for an aircraft wing, an engineer simulates the air flow around the wing for...

     — application: replacing a function that is hard to evaluate by a simpler function
  • Jackson's inequality
    Jackson's inequality
    In approximation theory, Jackson's inequality is an inequality bounding the value of function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity of its derivatives...

     — upper bound for best approximation by a trigonometric polynomial
  • Different approximations:
    • Moving least squares
      Moving least squares
      Moving least squares is a method of reconstructing continuous functions from a set of unorganized point samples via the calculation of a weighted least squares measure biased towards the region around the point at which the reconstructed value is requested....

    • Padé approximant
      Padé approximant
      Padé approximant is the "best" approximation of a function by a rational function of given order - under this technique, the approximant's power series agrees with the power series of the function it is approximating....

      • Padé table
        Padé table
        In complex analysis, a Padé table is an array, possibly of infinite extent, of the rational Padé approximantsto a given complex formal power series...

         — table of Padé approximants
    • Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval
    • Szász–Mirakjan–Kantorovich operator
    • Baskakov operator
      Baskakov operator
      In functional analysis, a branch of mathematics, the Baskakov operators are generalizations of Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators...

       — generalize Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
    • Favard operator — approximation by sums of Gaussians
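
As an illustration of the Bernstein basis flagged above: the degree-n Bernstein approximation to f on [0, 1] samples f at the points k/n and blends the values with the Bernstein basis polynomials. A minimal sketch in Python (the function name is an illustrative choice):

    from math import comb

    def bernstein_approx(f, n, x):
        """Degree-n Bernstein polynomial approximation to f at x in [0, 1]."""
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))

    # Converges uniformly (if slowly) to f as n grows.
    print(bernstein_approx(lambda t: t * t, 50, 0.5))   # ~0.255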

Miscellaneous

  • Extrapolation
    Extrapolation
    In mathematics, extrapolation is the process of constructing new data points. It is similar to the process of interpolation, which constructs new points between known points, but the results of extrapolations are often less meaningful, and are subject to greater uncertainty. It may also mean...

    • Linear predictive analysis
      Linear predictive analysis
      Linear predictive analysis can be thought of as a simple form of first-order extrapolation: if it has been changing at this rate then it will probably continue to change at approximately the same rate, at least in the short term...

       — linear extrapolation (see the sketch at the end of this section)
  • Unisolvent functions
    Unisolvent functions
    In mathematics, a collection of n functions ƒ1, ƒ2, ..., ƒn is unisolvent on domain Ω if the vectors of function values (ƒj(x1), ..., ƒj(xn)) are linearly independent for any choice of n distinct points x1, x2 ... xn in Ω...

     — functions for which the interpolation problem has a unique solution
  • Regression analysis
    Regression analysis
    In statistics, regression analysis includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables...

  • Curve-fitting compaction
    Curve-fitting compaction
    Curve-fitting compaction is data compaction accomplished by replacing data to be stored or transmitted with an analytical expression.Examples of curve-fitting compaction consisting of discretization and then interpolation are:...

  • Interpolation (computer programming)
    Interpolation (computer programming)
    In the context of computer animation, interpolation refers to inbetweening, or filling in frames between the key frames. It typically calculates the inbetween frames through use of piecewise polynomial interpolation to draw images semi-automatically....

     — interpolation in the context of computer graphics
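
The first-order extrapolation idea behind linear predictive analysis, as flagged above, fits a line through the last two observations and continues it. A minimal sketch in Python (names are illustrative):

    def linear_extrapolate(x0, y0, x1, y1, x):
        """Extend the line through (x0, y0) and (x1, y1) to the point x."""
        slope = (y1 - y0) / (x1 - x0)
        return y1 + slope * (x - x1)

    print(linear_extrapolate(0.0, 1.0, 1.0, 3.0, 2.0))  # 5.0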

Finding roots of nonlinear equations

See the section Numerical linear algebra for linear equations


Root-finding algorithm
Root-finding algorithm
A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f....

 — algorithms for solving the equation f(x) = 0
  • General methods:
    • Bisection method
      Bisection method
      The bisection method in mathematics is a root-finding method which repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow...

       — simple and robust; linear convergence (see the sketch at the end of this section)
      • Lehmer–Schur algorithm
        Lehmer–Schur algorithm
        In mathematics, the Lehmer–Schur algorithm is a root-finding algorithm extending the one-dimensional bracketing used by the bisection method to find the roots of a function of one complex variable inside any rectangular region of the function's holomorphicity .The rectangle in question is...

         — variant for complex functions
    • Fixed point iteration
    • Newton's method
      Newton's method
      In numerical analysis, Newton's method , named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots of a real-valued function. The algorithm is first in the class of Householder's methods, succeeded by Halley's method...

       — based on linear approximation around the current iterate; quadratic convergence (see the sketch at the end of this section)
      • Newton fractal
      • Quasi-Newton method
        Quasi-Newton method
        In optimization, quasi-Newton methods are algorithms for finding local maxima and minima of functions. Quasi-Newton methods are based on...

         — uses an approximation of the Jacobian:
        • Broyden's method
          Broyden's method
          In numerical analysis, Broyden's method is a quasi-Newton method for the numerical solution of nonlinear equations in k variables. It was originally described by C. G. Broyden in 1965....

           — uses a rank-one update for the Jacobian
        • SR1 formula
          SR1 formula
          The Symmetric Rank 1 method is a quasi-Newton method to update the second derivative based on the derivatives calculated at two points...

           — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
        • Davidon–Fletcher–Powell formula
          Davidon–Fletcher–Powell formula
          The Davidon–Fletcher–Powell formula finds the solution to the secant equation that is closest to the current estimate and satisfies the curvature condition . It was the first quasi-Newton method which generalize the secant method to a multidimensional problem...

           — update of the Jacobian in which the matrix remains positive definite
        • BFGS method
          BFGS method
          In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno method is a method for solving nonlinear optimization problems ....

           — rank-two update of the Jacobian in which the matrix remains positive definite
        • L-BFGS
          L-BFGS
          The limited-memory BFGS algorithm is a member of the broad family of quasi-Newton optimization methods that uses a limited memory variation of the Broyden–Fletcher–Goldfarb–Shanno update to approximate the inverse Hessian matrix...

           method — truncated, matrix-free variant of BFGS method suitable for large problems
      • Steffensen's method
        Steffensen's method
        In numerical analysis, Steffensen's method is a root-finding method, similar to Newton's method, named after Johan Frederik Steffensen. Steffensen's method also achieves quadratic convergence, but without using derivatives as Newton's method does....

         — uses divided differences instead of the derivative
    • Secant method
      Secant method
      In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite difference approximation of Newton's method. However, the method was developed...

       — based on linear interpolation at last two iterates
    • False position method
      False position method
      The false position method or regula falsi method is a term for problem-solving methods in algebra and calculus. In simple terms, these methods begin by attempting to evaluate a problem using test values for the variables, and then adjust the values accordingly. In algebra, the false...

       — secant method with ideas from the bisection method
    • Müller's method
      Müller's method
      Müller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by D. E. Müller in 1956....

       — based on quadratic interpolation at last three iterates
    • Inverse quadratic interpolation
      Inverse quadratic interpolation
      In numerical analysis, inverse quadratic interpolation is a root-finding algorithm, meaning that it is an algorithm for solving equations of the form f(x) = 0. The idea is to use quadratic interpolation to approximate the inverse of f...

       — similar to Müller's method, but interpolates the inverse
    • Brent's method
      Brent's method
      In numerical analysis, Brent's method is a complicated but popular root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less reliable methods...

       — combines bisection method, secant method and inverse quadratic interpolation
    • Ridders' method
      Ridders' method
      In numerical analysis, Ridders' method is a root-finding algorithm based on the false position method and the use of an exponential function to successively approximate a root of a function f....

       — fits a linear function times an exponential to last two iterates and their midpoint
    • Halley's method
      Halley's method
      In numerical analysis, Halley’s method is a root-finding algorithm used for functions of one real variable with a continuous second derivative, i.e., C2 functions. It is named after its inventor Edmond Halley, who also discovered Halley's Comet....

       — uses f, f' and f''; achieves cubic convergence
    • Householder's method
      Householder's method
      In numerical analysis, the class of Householder's methods are root-finding algorithms used for functions of one real variable with continuous derivatives up to some order d+1, where d will be the order of the Householder's method....

       — uses first d derivatives to achieve order d + 1; generalizes Newton's and Halley's methods
  • Methods for polynomials:
    • Aberth method
      Aberth method
      The Aberth method, or Aberth–Ehrlich method, named after Oliver Aberth and Louis W. Ehrlich, is a root-finding algorithm for simultaneous approximation of all the roots of a univariate polynomial....

    • Bairstow's method
      Bairstow's method
      In numerical analysis, Bairstow's method is an efficient algorithm for finding the roots of a real polynomial of arbitrary degree. The algorithm first appeared in the appendix of the 1920 book "Applied Aerodynamics" by Leonard Bairstow...

    • Durand–Kerner method — finds all roots of a polynomial simultaneously (see the sketch at the end of this section)
    • Graeffe's method
      Graeffe's method
      In mathematics, Graeffe's method or Dandelin–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Karl Heinrich Gräffe in 1837. Lobachevsky in 1834 also discovered the principal idea of the method....

    • Jenkins–Traub algorithm
      Jenkins–Traub algorithm
      The Jenkins–Traub algorithm for polynomial zeros is a fast globally convergent iterative method published in 1970 by Michael A. Jenkins and Joseph F. Traub...

       — fast, reliable, and widely used
    • Laguerre's method
      Laguerre's method
      In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to solve numerically the equation\ p = 0 for a given polynomial p...

    • Splitting circle method
      Splitting circle method
      In mathematics, the splitting circle method is a numerical algorithm for the numerical factorization of a polynomial and, ultimately, for finding its complex roots. It was introduced by Arnold Schönhage in his 1982 paper The fundamental theorem of algebra in terms of computational complexity...

  • Numerical continuation
    Numerical continuation
    Numerical continuation is a method of computing approximate solutions of a system of parameterized nonlinear equations, F(u, λ) = 0. The parameter λ is usually a real scalar, and the solution u an n-vector...

     — tracking a root as a parameter in the equation changes
    • Piecewise linear continuation
      Piecewise linear continuation
      Simplicial continuation, or piecewise linear continuation, is a one-parameter continuation method which is well suited to small to medium embedding spaces...
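
The two root-finding workhorses flagged above are easy to state side by side: bisection trades speed for certainty, Newton's method trades certainty for speed. Minimal sketches in Python (names and tolerances are illustrative choices):

    def bisection(f, a, b, tol=1e-12):
        """Bisection: needs f(a) and f(b) of opposite sign; linear convergence."""
        fa = f(a)
        while b - a > tol:
            m = (a + b) / 2
            if fa * f(m) <= 0:
                b = m                  # root lies in [a, m]
            else:
                a, fa = m, f(m)        # root lies in [m, b]
        return (a + b) / 2

    def newton(f, fprime, x0, tol=1e-12, maxit=50):
        """Newton's method: quadratic convergence near a simple root."""
        x = x0
        for _ in range(maxit):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    print(bisection(lambda x: x * x - 2, 0.0, 2.0))            # ~1.41421356
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # ~1.41421356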

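For the polynomial methods, the Durand–Kerner iteration referenced above updates all root estimates simultaneously. A minimal sketch in Python for monic polynomials (the starting values (0.4 + 0.9j)^k are the customary choice; names are illustrative):

    def durand_kerner(coeffs, iters=100):
        """All roots of the monic polynomial with the given coefficients.

        coeffs = [1, c1, ..., cn] lists the coefficients from the
        leading term down to the constant term.
        """
        n = len(coeffs) - 1
        def p(z):
            v = 0j
            for c in coeffs:
                v = v * z + c          # Horner evaluation
            return v
        roots = [(0.4 + 0.9j) ** k for k in range(n)]
        for _ in range(iters):
            for i in range(n):
                den = 1.0 + 0j
                for j in range(n):
                    if j != i:
                        den *= roots[i] - roots[j]
                roots[i] -= p(roots[i]) / den
        return roots

    print(durand_kerner([1, 0, -1]))   # roots of z^2 - 1, close to 1 and -1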

Optimization

Optimization (mathematics)
Optimization (mathematics)
In mathematics, computational science, or management science, mathematical optimization refers to the selection of a best element from some set of available alternatives....

 — algorithm for finding maxima or minima of a given function

Basic concepts

  • Active set
  • Candidate solution
    Candidate solution
    In optimization , a candidate solution is a member of a set of possible solutions to a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem – it is simply in the set that satisfies all constraints.The space of all candidate solutions is called the...

  • Constraint (mathematics)
    Constraint (mathematics)
    In mathematics, a constraint is a condition that a solution to an optimization problem must satisfy. There are two types of constraints: equality constraints and inequality constraints...

  • Corner solution
    Corner solution
    A corner solution is a special solution to an agent's maximization problem in which the quantity of one of the arguments in the maximized function is zero. The more usual solution will lie in the non-zero interior at the point of tangency between the objective function and the constraint...

  • Fitness function
    Fitness function
    A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims....

     — (esp. in genetic algorithms) an approximation to the objective function that is easier to evaluate
    • Fitness approximation
      Fitness approximation
      In function optimization, fitness approximation is a method for decreasing the number of fitness function evaluations to reach a target solution...

  • Global optimum
    Global optimum
    In mathematics, a global optimum is a selection from a given domain which yields either the highest value or lowest value , when a specific function is applied. For example, for the function...

     and Local optimum
    Local optimum
    Local optimum is a term in applied mathematics and computer science.A local optimum of a combinatorial optimization problem is a solution that is optimal within a neighboring set of solutions...

  • Maxima and minima
    Maxima and minima
    In mathematics, the maximum and minimum of a function, known collectively as extrema, are the largest and smallest value that the function takes at a point either within a given neighborhood or on the function domain in its entirety. More generally, the...

  • Slack variable
    Slack variable
    In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it to an equality. Introducing a slack variable replaces an inequality constraint with an equality constraint and a nonnegativity constraint....

  • Continuous optimization
    Continuous optimization
    Continuous optimization is a branch of optimization in applied mathematics.As opposed to discrete optimization, the variables used in the objective function can assume real values, e.g., values from intervals of the real line....

  • Discrete optimization
    Discrete optimization
    Discrete optimization is a branch of optimization in applied mathematics and computer science.As opposed to continuous optimization, the variables used in the mathematical program are restricted to assume only discrete values, such as the integers.Two notable branches of discrete optimization...

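As a worked illustration of the slack variable listed above: the inequality constraint x + 2y ≤ 5 becomes the equality x + 2y + s = 5 together with s ≥ 0, where the new slack variable s measures how far the constraint is from being active (it is active exactly when s = 0). Rewriting all inequalities this way puts a linear program into the standard equality form used by simplex-type methods.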

Linear programming

Linear programming
Linear programming
Linear programming is a mathematical method for determining a way to achieve the best outcome in a given mathematical model for some list of requirements represented as linear relationships...

 (also treats integer programming) — objective function and constraints are linear (a worked example appears at the end of this section)
  • Algorithms for linear programming:
    • Simplex algorithm
      Simplex algorithm
      In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming. The journal Computing in Science and Engineering listed it as one of the top 10 algorithms of the twentieth century....

      • Bland's rule
        Bland's rule
        In mathematical optimization, Bland's rule is an algorithmic refinement of the simplex method for linear optimization....

         — rule to avoid cycling in the simplex method
    • Interior point method
      Interior point method
      Interior point methods are a certain class of algorithms to solve linear and nonlinear convex optimization problems.The interior point method was invented by John von Neumann...

      • Karmarkar's algorithm
        Karmarkar's algorithm
        Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time...

      • Mehrotra predictor-corrector method
        Mehrotra predictor-corrector method
        Mehrotra's predictor–corrector method in optimization is an implementation of interior point methods. It was proposed in 1989 by Sanjay Mehrotra....

    • Delayed column generation
      Delayed column generation
      Delayed column generation is an efficient algorithm for solving larger linear programs. The overarching idea is that many linear programs are too large to consider all the variables explicitly. Since most of the variables will be non-basic and assume a value of zero in the optimal solution, only a...

    • k-approximation of k-hitting set
      K-approximation of k-hitting set
      In computer science, k-approximation of k-hitting set is an approximation algorithm for weighted hitting set. The input is a collection S of subsets of some universe T and a mapping W from S to non-negative numbers called the weights of the elements of S. In k-hitting set the size of the sets in S...

       — algorithm for specific LP problems (to find a weighted hitting set)
  • Linear complementarity problem
    Linear complementarity problem
    In mathematical optimization theory, the linear complementarity problem arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case...

  • Decompositions:
    • Benders' decomposition
      Benders' decomposition
      Benders' decomposition is a technique in mathematical programming that allows the solution of very large linear programming problems that have a special block structure...

    • Dantzig–Wolfe decomposition
      Dantzig–Wolfe decomposition
      Dantzig–Wolfe decomposition is an algorithm for solving linear programming problems with special structure. It was originally developed by George Dantzig and Phil Wolfe and initially published in 1960...

  • Fourier–Motzkin elimination
    Fourier–Motzkin elimination
    Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. It can look for both real and integer solutions...

  • Hilbert basis (linear programming)
    Hilbert basis (linear programming)
    In linear programming, a Hilbert basis for a convex cone C is an integer cone basis: minimal set of integer vectors such that every integer vector in C is a conical combination of the vectors in the Hilbert basis with integer coefficients....

     — set of integer vectors in a convex cone which generate all integer vectors in the cone
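
The worked example promised above, using SciPy's linprog (assuming SciPy is installed; linprog minimizes, so a maximization objective is negated):

    from scipy.optimize import linprog

    # maximize x + 2y  subject to  x + y <= 4,  x <= 2,  x >= 0,  y >= 0
    c = [-1.0, -2.0]                     # negated objective for minimization
    A_ub = [[1.0, 1.0], [1.0, 0.0]]
    b_ub = [4.0, 2.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x)                         # optimal vertex, here (0, 4)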

Nonlinear programming

Nonlinear programming
Nonlinear programming
In mathematics, nonlinear programming is the process of solving a system of equalities and inequalities, collectively termed constraints, over a set of unknown real variables, along with an objective function to be maximized or minimized, where some of the constraints or the objective function are...

 — the most general optimization problem in the usual framework
  • Special cases of nonlinear programming:
    • Quadratic programming
      Quadratic programming
      Quadratic programming is a special type of mathematical optimization problem. It is the problem of optimizing a quadratic function of several variables subject to linear constraints on these variables....

      • Linear least squares
        Linear least squares
        In statistics and mathematics, linear least squares is an approach to fitting a mathematical or statistical model to data in cases where the idealized value provided by the model for any data point is expressed linearly in terms of the unknown parameters of the model...

      • Frank–Wolfe algorithm
        Frank–Wolfe algorithm
        In mathematical optimization, the reduced gradient method of Frank and Wolfe is an iterative method for nonlinear programming. Also known as the Frank–Wolfe algorithm and the convex combination algorithm, the reduced gradient method was proposed by Marguerite Frank and Phil Wolfe in 1956 as an...

      • Sequential Minimal Optimization
        Sequential Minimal Optimization
        Sequential minimal optimization is an algorithm for efficiently solving the optimization problem which arises during the training of support vector machines. It was invented by John Platt in 1998 at Microsoft Research. SMO is widely used for training support vector machines and is implemented by...

         — breaks up large QP problems into a series of smallest possible QP problems
      • Bilinear program
        Bilinear program
        In mathematics, a bilinear program is a nonlinear optimization problem whose objective and/or constraint functions are bilinear. An example is the pooling problem...

    • Convex optimization
      • Linear matrix inequality
      • Conic optimization
        Conic optimization
        Conic optimization is a subfield of convex optimization that studies a class of structured convex optimization problems called conic optimization problems...

        • Semidefinite programming
          Semidefinite programming
          Semidefinite programming is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron....

        • Second-order cone programming
        • Quadratic programming (see above)
      • Bregman method
        Bregman method
        Bregman's method is an iterative algorithm to solve certain convex optimization problems. The algorithm is a row-action method accessing constraint functions one by one and the method is particularly suited for large optimization problems where constraints can be efficiently enumerated...

         — row-action method for strictly convex optimization problems
      • Subgradient method
        Subgradient method
        Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function...

         — extension of steepest descent for problems with a nondifferentiable objective function
    • Geometric programming — problems involving signomials or posynomials
      • Signomial
        Signomial
        A "signomial" is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multi-dimensional polynomials -- an extension that permits exponents to be arbitrary real numbers while requiring the independent variables to be strictly...

         — similar to polynomials, but exponents need not be integers
      • Posynomial — a signomial with positive coefficients
    • Quadratically constrained quadratic program
      Quadratically constrained quadratic program
      In mathematics, a quadratically constrained quadratic program is an optimization problem in which both the objective function and the constraints are quadratic functions...

    • Linear-fractional programming
      Linear-fractional programming
      In mathematical optimization, linear-fractional programming is a generalization of linear programming. Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions...

       — objective is ratio of linear functions, constraints are linear
    • Least squares
      Least squares
      The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in solving every...

       — the objective function is a sum of squares
      • Non-linear least squares
        Non-linear least squares
        Non-linear least squares is the form of least squares analysis which is used to fit a set of m observations with a model that is non-linear in n unknown parameters . It is used in some forms of non-linear regression. The basis of the method is to approximate the model by a linear one and to...

      • Gauss–Newton algorithm
        • Generalized Gauss–Newton method
          Generalized Gauss–Newton method
          The generalized Gauss–Newton method is a generalization of the least-squares method originally described by Carl Friedrich Gauss and of Newton's method due to Isaac Newton to the case of constrained nonlinear least-squares problems....

           — for constrained nonlinear least-squares problems
      • Levenberg–Marquardt algorithm
    • Mathematical programming with equilibrium constraints
      Mathematical programming with equilibrium constraints
      Mathematical programming with equilibrium constraints is the study of constrained optimization problems where the constraints include variational inequalities or complementarities...

       — constraints include variational inequalities or complementarities
    • Univariate optimization:
      • Golden section search
        Golden section search
        The golden section search is a technique for finding the extremum of a unimodal function by successively narrowing the range of values inside which the extremum is known to exist. The technique derives its name from the fact that the algorithm maintains the function values for triples of points...

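         — narrows the bracket by the golden ratio each step (a sketch appears at the end of this section)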
      • Successive parabolic interpolation
        Successive parabolic interpolation
        Successive parabolic interpolation is a technique for finding the extremum of a continuous unimodal function by successively fitting parabolas to the function at three unique points, and at each iteration replacing the "oldest" point with the extremum of the fitted parabola...

         — based on quadratic interpolation through the last three iterates
  • General algorithms:
    • Concepts:
      • Descent direction
      • Guess value
        Guess value
        A guess value is more commonly called a starting value or initial value. These are necessary for most optimization problems which use search algorithms, because those algorithms are mainly deterministic and iterative, and they need to start somewhere...

         — the initial guess for a solution with which an algorithm starts
      • Line search
        • Backtracking line search
        • Wolfe conditions
          Wolfe conditions
          In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods. In these methods the idea is to find min_x f(x)...

    • Gradient descent
      Gradient descent
      Gradient descent is a first-order optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient of the function at the current point...

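       — takes steps proportional to the negative gradient (a sketch with a backtracking line search appears at the end of this section)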
      • Stochastic gradient descent
        Stochastic gradient descent
        Stochastic gradient descent is an optimization method for minimizing an objective function that is written as a sum of differentiable functions.- Background :...

    • Successive linear programming
      Successive linear programming
      Successive Linear Programming , also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems....

       (SLP) — replace the problem by a linear programming problem, solve that, and repeat
    • Newton's method in optimization
      Newton's method in optimization
      In mathematics, Newton's method is an iterative method for finding roots of equations. More generally, Newton's method is used to find critical points of differentiable functions, which are the zeros of the derivative function...

      • See also Newton's method in the section Finding roots of nonlinear equations
    • Nonlinear conjugate gradient method
      Nonlinear conjugate gradient method
      In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function \displaystyle f:The minimum of f is obtained when the gradient is 0:...

    • Nelder–Mead method
      Nelder–Mead method
      The Nelder–Mead method or downhill simplex method or amoeba method is a commonly used nonlinear optimization technique, which is a well-defined numerical method for twice differentiable and unimodal problems...

    • Rosenbrock methods
      Rosenbrock methods
      Rosenbrock methods may refer to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock. Rosenbrock optimization methods are a family of numerical optimization algorithms applicable to optimization problems in which the objective function is inexpensive to compute...

       — derivative-free method, similar to Nelder–Mead but with guaranteed convergence
    • Ternary search
      Ternary search
      A ternary search algorithm is a technique in computer science for finding the minimum or maximum of a unimodal function...

    • Tabu search
      Tabu search
      Tabu search is a mathematical optimization method, belonging to the class of trajectory based techniques. Tabu search enhances the performance of a local search method by using memory structures that describe the visited solutions: once a potential solution has been determined, it is marked as...

    • Guided Local Search
      Guided Local Search
      Guided Local Search is a metaheuristic search method. A meta-heuristic method is a method that sits on top of a local search algorithm to change its behaviour....

       — modification of search algorithms which builds up penalties during a search
    • Reactive search optimization
      Reactive search optimization
      Reactive search optimization defines local-search heuristics based on machine learning, a family of optimization algorithms based on the local search techniques...

       (RSO) — the algorithm adapts its parameters automatically
    • Least absolute deviations
      Least absolute deviations
      Least absolute deviations , also known as Least Absolute Errors , Least Absolute Value , or the L1 norm problem, is a mathematical optimization technique similar to the popular least squares technique that attempts to find a function which closely approximates a set of data...

      • Expectation-maximization algorithm
        Expectation-maximization algorithm
        In statistics, an expectation–maximization algorithm is an iterative method for finding maximum likelihood or maximum a posteriori estimates of parameters in statistical models, where the model depends on unobserved latent variables...

        • Ordered subset expectation maximization
    • Nearest neighbor search
      Nearest neighbor search
      Nearest neighbor search , also known as proximity search, similarity search or closest point search, is an optimization problem for finding closest points in metric spaces. The problem is: given a set S of points in a metric space M and a query point q ∈ M, find the closest point in S to q...
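
The golden-section sketch promised earlier in this section, in Python (the function name and tolerance are illustrative choices):

    from math import sqrt

    def golden_section(f, a, b, tol=1e-8):
        """Minimum of a unimodal f on [a, b] via golden-section search."""
        invphi = (sqrt(5) - 1) / 2              # 1/phi, about 0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c                     # minimum lies in [a, d]
                c = b - invphi * (b - a)
            else:
                a, c = c, d                     # minimum lies in [c, b]
                d = a + invphi * (b - a)
        return (a + b) / 2

    print(golden_section(lambda x: (x - 1.0) ** 2, 0.0, 3.0))  # ~1.0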

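And the gradient-descent sketch, with a simple backtracking line search enforcing an Armijo-style sufficient-decrease condition (shown in one dimension; the step-halving parameters are illustrative choices):

    def gradient_descent(f, grad, x, iters=100, alpha0=1.0, c=1e-4):
        """Minimize f: R -> R by gradient descent with backtracking."""
        for _ in range(iters):
            g = grad(x)
            alpha = alpha0
            # Backtrack until f decreases sufficiently (Armijo condition).
            while f(x - alpha * g) > f(x) - c * alpha * g * g:
                alpha /= 2
                if alpha < 1e-16:
                    return x               # no productive step found
            x -= alpha * g
        return x

    print(gradient_descent(lambda x: (x - 2.0) ** 2,
                           lambda x: 2 * (x - 2.0), 0.0))   # ~2.0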

Uncertainty and randomness

  • Approaches to deal with uncertainty:
    • Markov decision process
      Markov decision process
      Markov decision processes , named after Andrey Markov, provide a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying a wide range of optimization problems solved via...

    • Partially observable Markov decision process
      Partially observable Markov decision process
      A Partially Observable Markov Decision Process is a generalization of a Markov Decision Process. A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state...

    • Robust optimization
      Robust optimization
      Robust optimization is a field of optimization theory that deals with optimization problems where robustness is sought against uncertainty and/or variability in the value of a parameter of the problem...

    • Stochastic approximation
      Stochastic approximation
      Stochastic approximation methods are a family of iterative stochastic optimization algorithms that attempt to find zeroes or extrema of functions which cannot be computed directly, but only estimated via noisy observations....

    • Stochastic optimization
      Stochastic optimization
      Stochastic optimization methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involve random objective functions or random constraints, for example. Stochastic...

    • Stochastic programming
      Stochastic programming
      Stochastic programming is a framework for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real world problems almost invariably include some unknown parameters. When the parameters are known only within...

    • Stochastic gradient descent
      Stochastic gradient descent
      Stochastic gradient descent is an optimization method for minimizing an objective function that is written as a sum of differentiable functions.- Background :...

  • Random optimization
    Random optimization
    Random optimization is a family of numerical optimization methods that do not require the gradient of the problem to be optimized and RO can hence be used on functions that are not continuous or differentiable...

     algorithms:
    • Simulated annealing
      Simulated annealing
      Simulated annealing is a generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete...

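       — occasionally accepts moves that worsen the objective, with a probability that shrinks as the temperature is lowered (a sketch appears at the end of this section)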
      • Adaptive simulated annealing
        Adaptive simulated annealing
        Adaptive simulated annealing is a variant of simulated annealing algorithm in which the algorithm parameters that control temperature schedule and random step selection are automatically adjusted according to algorithm progress. This makes the algorithm more efficient and less sensitive to user...

         — variant in which the algorithm parameters are adjusted during the computation.
      • Great Deluge algorithm
        Great Deluge algorithm
        The Great Deluge algorithm is a generic algorithm applied to optimization problems. It is similar in many ways to the hill-climbing and simulated annealing algorithms....

    • Evolutionary algorithm
      Evolutionary algorithm
      In artificial intelligence, an evolutionary algorithm is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses some mechanisms inspired by biological evolution: reproduction, mutation, recombination, and selection...

      , Evolution strategy
      Evolution strategy
      In computer science, evolution strategy is an optimization technique based on ideas of adaptation and evolution. It belongs to the general class of evolutionary computation or artificial evolution methodologies.-History:...

      • Differential evolution
        Differential evolution
        In computer science, differential evolution is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized...

      • Evolutionary programming
        Evolutionary programming
        Evolutionary programming is one of the four major evolutionary algorithm paradigms. It is similar to genetic programming, but the structure of the program to be optimized is fixed, while its numerical parameters are allowed to evolve....

      • Evolution window
        Evolution window
        It was observed in evolution strategies that significant progress toward the fitness/objective function's optimum, generally, can only happen in a narrow band of the mutation step size σ...

      • Genetic algorithm
        Genetic algorithm
        A genetic algorithm is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems...

        , Genetic programming
        Genetic programming
        In artificial intelligence, genetic programming is an evolutionary algorithm-based methodology inspired by biological evolution to find computer programs that perform a user-defined task. It is a specialization of genetic algorithms where each individual is a computer program...

        • Genetic algorithm in economics
          Genetic algorithm in economics
          Genetic algorithms have increasingly been applied to economics over the last two decades. They have been used to characterize a variety of models including the cobweb model, the overlapping generations model, game theory, schedule optimization and asset pricing...

        • Speciation (genetic algorithm)
          Speciation (genetic algorithm)
          Speciation is a process that occurs naturally in evolution and is modeled explicitly in some genetic algorithms. Speciation in nature occurs when two similar reproducing beings evolve to become too dissimilar to share genetic information effectively or correctly. In the case of living organisms,...
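
A bare-bones illustration of the genetic-algorithm loop described above; the representation (real vectors), elitist selection, blend crossover, and Gaussian mutation below are one simple choice among many:

```python
# Minimal genetic algorithm: elitist selection, blend crossover, and
# Gaussian mutation, applied to a real-valued minimization problem.
import random

def genetic_algorithm(f, dim, pop_size=50, gens=200, mut_sigma=0.1):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=f)        # lower objective value = fitter
        elite = scored[:pop_size // 5]     # keep the best 20%
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)  # two parents from the elite
            child = [(x + y) / 2 + random.gauss(0, mut_sigma)
                     for x, y in zip(a, b)]  # blend crossover + mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

# Sphere function: the minimum is the zero vector.
print(genetic_algorithm(lambda v: sum(x * x for x in v), dim=3))
```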

      • Genetic representation
        Genetic representation
        Genetic representation is a way of representing solutions/individuals in evolutionary computation methods. Genetic representation can encode appearance, behavior, physical qualities of individuals. Designing a good genetic representation that is expressive and evolvable is a hard problem in...

      • Gaussian adaptation
        Gaussian adaptation
        Gaussian adaptation is an evolutionary algorithm designed for the maximization of manufacturing yield due to statistical deviation of component values of signal processing systems...

    • Memetic algorithm
      Memetic algorithm
      Memetic algorithms represent one of the recent growing areas of research in evolutionary computation. The term MA is now widely used as a synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search...

    • Particle swarm optimization
      Particle swarm optimization
      In computer science, particle swarm optimization is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality...

    • Cooperative optimization
      • Repulsive particle swarm optimization
    • Stochastic tunneling
      Stochastic tunneling
      Stochastic tunneling is an approach to global optimization based on the Monte Carlo method: sampling of the function to be minimized...

    • Harmony search
      Harmony search
      In computer science and operations research, harmony search is a phenomenon-mimicking algorithm inspired by the improvisation process of musicians...

       — mimics the improvisation process of musicians
    • see also the section Monte Carlo method

Theoretical aspects

  • Convex analysis
    Convex analysis
    Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory....

    • Quasiconvex function
      Quasiconvex function
      In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set...

    • Subderivative
      Subderivative
      In mathematics, the concepts of subderivative, subgradient, and subdifferential arise in convex analysis, that is, in the study of convex functions, often in connection to convex optimization....

  • Duality:
    • Dual problem
      Dual problem
      In constrained optimization, it is often possible to convert the primal problem to a dual form, which is termed a dual problem. Usually dual problem refers to the Lagrangian dual problem but other dual problems are used, for example, the Wolfe dual problem and the Fenchel dual problem...

      , Shadow price
      Shadow price
      In constrained optimization in economics, the shadow price is the instantaneous change per unit of the constraint in the objective value of the optimal solution of an optimization problem obtained by relaxing the constraint...

    • Dual cone and polar cone
      Dual cone and polar cone
      Dual cone and polar cone are closely related concepts in convex analysis, a branch of mathematics. The dual cone C^* of a subset C in a Euclidean space \mathbb R^n is the set...

    • Fenchel's duality theorem
      Fenchel's duality theorem
      In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. Let ƒ be a proper convex function on R^n and let g be a proper concave function on R^n...

       — relates minimization problems with maximization problems of convex conjugates
  • Farkas' lemma
  • Karush–Kuhn–Tucker conditions
  • Lagrange multipliers
    Lagrange multipliers
    In mathematical optimization, the method of Lagrange multipliers provides a strategy for finding the maxima and minima of a function subject to constraints. For instance, consider the optimization problem...

    • Lagrange multipliers on Banach spaces
      Lagrange multipliers on Banach spaces
      In the field of calculus of variations in mathematics, the method of Lagrange multipliers on Banach spaces can be used to solve certain infinite-dimensional constrained optimization problems...
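
A small worked example of the finite-dimensional method (a standard textbook case, added here for concreteness):

```latex
% Maximize f(x, y) = xy subject to g(x, y) = x + y - 1 = 0.
% Stationarity of L(x, y, \lambda) = xy - \lambda (x + y - 1) gives
%   \partial L / \partial x = y - \lambda = 0,
%   \partial L / \partial y = x - \lambda = 0,
% so x = y = \lambda, and the constraint forces x = y = 1/2.
\nabla f = \lambda \nabla g
\quad\Longleftrightarrow\quad
(y,\, x) = \lambda\,(1,\, 1),
\qquad x + y = 1
\;\Longrightarrow\; x = y = \tfrac{1}{2}.
```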

  • Complementarity theory
    Complementarity theory
    A complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing a function of two vector variables subject to certain requirements which include: that the inner product of the two variables must equal zero, i.e. ⟨u, v⟩ = 0...

     — study of problems with constraints of the form ⟨u, v⟩ = 0
    • Mixed complementarity problem
      Mixed complementarity problem
      Mixed Complementarity Problem (MCP) is a problem formulation in mathematical programming. Many well-known problem types are special cases of, or may be reduced to, MCP...

      • Mixed linear complementarity problem
        Mixed linear complementarity problem
        In mathematical optimization theory, the mixed linear complementarity problem, often abbreviated as MLCP or LMCP, is a generalization of the linear complementarity problem to include free variables...

      • Lemke's algorithm
        Lemke's algorithm
        In mathematical optimization, Lemke's algorithm is a procedure for solving linear complementarity problems, and more generally mixed linear complementarity problems. Lemke's algorithm is of pivoting or basis-exchange type...

         — method for solving (mixed) linear complementarity problems
  • Danskin's theorem — used in the analysis of minimax problems
  • No free lunch in search and optimization
  • Relaxation technique (mathematics)
    Relaxation technique (mathematics)
    In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve...

    • Lagrangian relaxation
      Lagrangian relaxation
      In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem...

    • Linear programming relaxation — ignoring the integrality constraints in a linear programming problem
  • Self-concordant function
  • Reduced cost
    Reduced cost
    In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve before it would be possible for a corresponding variable to assume a positive value in the optimal solution...

     — cost for increasing a variable by a small amount
  • Hardness of approximation
    Hardness of approximation
    In computer science, hardness of approximation is a field that studies the algorithmic complexity of finding near-optimal solutions to optimization problems. It complements the study of approximation algorithms by proving, for certain problems, a limit on the factors with which their solution can...

     — computational complexity of getting an approximate solution

Applications

  • In geometry:
    • Geometric median
      Geometric median
      The geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances for one-dimensional data, and provides a central tendency in higher...

       — the point minimizing the sum of distances to a given set of points
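
A sketch of Weiszfeld's fixed-point iteration, the classical algorithm for this problem (this illustrative version simply skips the degenerate case of an iterate coinciding with a sample point):

```python
# Weiszfeld's algorithm: repeatedly replace the estimate by the
# inverse-distance-weighted average of the sample points.
def geometric_median(points, iters=100):
    x = [sum(c) / len(points) for c in zip(*points)]  # start at the centroid
    for _ in range(iters):
        num = [0.0] * len(x)
        den = 0.0
        for p in points:
            d = sum((pi - xi) ** 2 for pi, xi in zip(p, x)) ** 0.5
            if d == 0:
                continue  # skip a coincident point (see caveat above)
            w = 1.0 / d
            num = [n + w * pi for n, pi in zip(num, p)]
            den += w
        x = [n / den for n in num]
    return x

print(geometric_median([(0, 0), (1, 0), (0, 1), (5, 5)]))
```
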
    • Chebyshev center
      Chebyshev center
      In geometry, the Chebyshev center of a bounded set Q having non-empty interior is the center of the minimal-radius ball enclosing the entire set Q, or, alternatively, the center of the largest inscribed ball of Q...

       — the centre of the smallest ball containing a given set of points
  • Automatic label placement
    Automatic label placement
    Automatic label placement refers to the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels....

  • Compressed sensing
    Compressed sensing
    Compressed sensing, also known as compressive sensing, compressive sampling and sparse sampling, is a technique for finding sparse solutions to underdetermined linear systems...

     — reconstruct a signal from knowledge that it is sparse or compressible
  • Cutting stock problem
    Cutting stock problem
    The cutting-stock problem is an optimization problem, or more specifically, an integer linear programming problem. It arises from many applications in industry. Imagine that you work in a paper mill and you have a number of rolls of paper of fixed width waiting to be cut, yet different customers...

  • Demand optimization
    Demand Optimization
    Demand optimization is the application of processes and tools to maximize return on sales. This usually involves the application of mathematical modeling techniques using computer software....

  • Destination dispatch
    Destination dispatch
    Destination dispatch is an optimization technique for dispatching elevators to provide passengers with the shortest waiting times and the shortest time to destination. Instead of the classic two-button system, passengers enter the desired destination floor with a keypad or touch screen...

     — an optimization technique for dispatching elevators
  • Energy minimization
    Energy minimization
    In computational chemistry, energy minimization methods are used to compute the equilibrium configuration of molecules and solids....

  • Entropy maximization
  • Inventory control problem
    Inventory control problem
    The inventory control problem is the problem faced by a firm that must decide how much to order in each time period to meet demand for its products. The problem can be modeled using mathematical techniques of optimal control, dynamic programming and network optimization. The study of such models...

    • Newsvendor
      Newsvendor
      The newsvendor model is a mathematical model in operations management and applied economics used to determine optimal inventory levels. It is characterized by fixed prices and uncertain demand for a perishable product. If the inventory level is q, each unit of demand above q is lost...

    • Extended newsvendor models
      Extended Newsvendor models
      Extended newsvendor models are variations of the classic newsvendor model involving production capacity constraints, multiple products, multiple production cycles, demand dependent selling price, etc. that appear in the Operations management literature....

  • Job-shop problem
    Job-shop problem
    The job-shop problem is a problem in discrete or combinatorial optimization, and is a generalization of the famous travelling salesman problem...

  • Linear programming decoding
    Linear programming decoding
    In information theory and coding theory, linear programming decoding is a decoding method which uses concepts from LP theory to solve decoding problems. This approach was first used by Jon Feldman et al. They showed how LP can be used to decode block codes...

  • Multidisciplinary design optimization
    Multidisciplinary design optimization
    Multi-disciplinary design optimization is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. As defined by Prof. Carlo Poloni, MDO is "the art of finding the best compromise"...

  • Paper bag problem
    Paper bag problem
    In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can bend but not stretch.The problem is made even more...

  • Process optimization
    Process optimization
    Process optimization is the discipline of adjusting a process so as to optimize some specified set of parameters without violating some constraint. The most common goals are minimizing cost, maximizing throughput, and/or efficiency...

  • Stigler diet
    Stigler diet
    The Stigler diet is an optimization problem named for George Stigler, a 1982 Nobel Laureate in economics, who posed the following problem: For a moderately active man weighing 154 pounds, how much of each of 77 foods should be eaten on a daily basis so that the man’s intake of nine nutrients will...

  • Stress majorization
    Stress majorization
    Stress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of n m-dimensional data items, a configuration X of n points in r-dimensional space (r < n) is sought that minimizes the so-called stress function...

  • Trajectory optimization
    Trajectory optimization
    Trajectory optimization is the process of designing a trajectory that minimizes or maximizes some measure of performance within prescribed constraint boundaries...

  • Wing shape optimization
    Wing shape optimization
    Wing-shape optimization is a software implementation of shape optimization primarily used for aircraft design. This allows engineers to produce more efficient and cheaper aircraft designs...


Miscellaneous

  • Combinatorial optimization
    Combinatorial optimization
    In applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects. In many such problems, exhaustive search is not feasible...

  • Dynamic programming
    Dynamic programming
    In mathematics and computer science, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems which are only slightly smaller and optimal substructure...

    • Bellman equation
      Bellman equation
      A Bellman equation, named after its discoverer, Richard Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming...

    • Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation
    • Backward induction
      Backward induction
      Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this...

       — solving dynamic programming problems by reasoning backwards in time
    • Optimal stopping
      Optimal stopping
      In mathematics, the theory of optimal stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance...

       — choosing the optimal time to take a particular action
      • Odds algorithm
        Odds algorithm
        The odds-algorithm is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the odds-strategy, and the importance of the...

      • Robbins' problem (of optimal stopping)
        Robbins' problem (of optimal stopping)
        In probability theory, Robbins' problem of optimal stopping, named after Herbert Robbins, is sometimes referred to as the fourth secretary problem or the problem of minimizing the expected rank with full information. Its statement is as follows....

  • Global optimization
    Global optimization
    Global optimization is a branch of applied mathematics and numerical analysis that deals with the optimization of a function or a set of functions according to some criteria. The most common form is the minimization of one real-valued function...

    :
    • BRST algorithm
      BRST algorithm
      The Boender-Rinnooy-Stougie-Timmer algorithm is an optimization algorithm suitable for finding the global optimum of black-box functions. In their paper Boender et al. describe their method as a stochastic method involving a combination of sampling, clustering and local search, terminating with a range...

    • MCS algorithm
      MCS algorithm
      Multilevel Coordinate Search is an algorithm for bound-constrained global optimization using function values only. To do so, the n-dimensional search space is represented by a set of non-intersecting hypercubes (boxes). The boxes are then iteratively split along an axis plane according to the value of the...

  • Bilevel program
    Bilevel program
    In mathematics, bilevel programs are optimization problems where one optimization problem is embedded in another one. Bilevel programs are multilevel programs with two levels...

     — problem in which one problem is embedded in another
  • Multilevel programming
    Multilevel programming
    The "level" refers to sets of variables. A bilevel program has two sets:min f: x in X, y in Y, h=0, g=0.A reason for identifying levels is to apply a decomposition principle for algorithm design. One example is the bilinear program...

     — problem in which a series of problems is embedded in each other
  • Infinite-dimensional optimization
    Infinite-dimensional optimization
    In certain optimization problems the unknown optimal solution might not be a number or a vector, but rather a continuous quantity, for example a function or the shape of a body...

    • Optimal control
      Optimal control
      Optimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and his collaborators in the Soviet Union and Richard Bellman in the United States...

      • Pontryagin's minimum principle
        Pontryagin's minimum principle
        Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It was formulated by the Russian mathematician Lev Semenovich...

         — infinite-dimensional version of Lagrange multipliers
      • DNSS point
        DNSS point
        DNSS points arise in optimal control problems that exhibit multiple optimal solutions. A DNSS point, named alphabetically after Deckert and Nishimura, Sethi, and Skiba, is an indifference point in an optimal control problem such that, starting from such a point, the problem has more than one different...

         — initial state for certain optimal control problems with multiple optimal solutions
    • Shape optimization
      Shape optimization
      Shape optimization is part of the field of optimal control theory. The typical problem is to find the shape which is optimal in that it minimizes a certain cost functional while satisfying given constraints...

      , Topology optimization
      Topology optimization
      Topology optimisation is a mathematical approach that optimises material layout within a given design space, for a given set of loads and boundary conditions, such that the resulting layout meets a prescribed set of performance targets...

       — optimization over a set of regions
      • Topological derivative
        Topological derivative
        In the fields of shape optimization and topology optimization, a topological derivative is, conceptually, a derivative of a function of a region with respect to infinitesimal changes in its topology, such as adding an infinitesimal hole or crack....

         — derivative with respect to changes in the shape
    • Generalized semi-infinite programming
      Generalized semi-infinite programming
      In mathematics, a semi-infinite programming problem is an optimization problem with a finite number of variables and an infinite number of constraints. The constraints are typically parameterized...

       — finite number of variables, infinite number of constraints
  • Optimal substructure
    Optimal substructure
    In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions to its subproblems...

  • Algorithmic concepts:
    • Barrier function
      Barrier function
      In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the feasible region. It is used as a penalizing term for violations of constraints...

    • Penalty method
      Penalty method
      Penalty methods are a certain class of algorithms for solving constrained optimization problems.A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution of the original constrained problem...

    • Trust region
      Trust region
      Trust region is a term used in mathematical optimization to denote the subset of the region of the objective function to be optimized that is approximated using a model function. If an adequate model of the objective function is found within the trust region then the region is expanded;...

  • Famous test functions for optimization:
    • Rosenbrock function
      Rosenbrock function
      In mathematical optimization, the Rosenbrock function is a non-convex function used as a performance test problem for optimization algorithms introduced by Howard H. Rosenbrock in 1960. It is also known as Rosenbrock's valley or Rosenbrock's banana function.The global minimum is inside a long,...

       — two-dimensional function with a banana-shaped valley
    • Himmelblau's function
      Himmelblau's function
      In mathematical optimization, Himmelblau's function is a multi-modal function used to test the performance of optimization algorithms. The function is defined by:...

       — two-dimensional with four local minima, defined by f(x, y) = (x² + y − 11)² + (x + y² − 7)²
    • Shekel function
      Shekel function
      Shekel function is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques.The mathematical form of a function in n dimensions with m maxima is:...

       — multimodal and multidimensional
  • Mathematical Programming Society
    Mathematical Programming Society
    Known as the Mathematical Programming Society until 2010, the Mathematical Optimization Society is an international association of researchers active in optimization...


Numerical quadrature (integration)

Numerical integration
Numerical integration
In numerical analysis, numerical integration constitutes a broad family of algorithms for calculating the numerical value of a definite integral, and by extension, the term is also sometimes used to describe the numerical solution of differential equations. This article focuses on calculation of...

 — the numerical evaluation of an integral
  • Rectangle method
    Rectangle method
    In mathematics, specifically in integral calculus, the rectangle method computes an approximation to a definite integral, made by finding the area of a collection of rectangles whose heights are determined by the values of the function.Specifically, the interval over which the function is to be...

     — first-order method, based on (piecewise) constant approximation
  • Trapezoidal rule — second-order method, based on (piecewise) linear approximation
  • Simpson's rule
    Simpson's rule
    In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:...

     — fourth-order method, based on (piecewise) quadratic approximation
    • Adaptive Simpson's method
      Adaptive Simpson's method
      Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962. It is probably the first recursive adaptive algorithm for numerical integration to appear in print, although more modern adaptive methods based on Gauss–Kronrod...
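
For concreteness, minimal composite versions of the trapezoidal and Simpson rules listed above (uniform grids; the test integral is illustrative):

```python
# Composite trapezoidal and Simpson rules on a uniform grid.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):  # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

# Both approximate the exact value 2 of the integral of sin over
# [0, pi]; Simpson's rule converges much faster.
print(trapezoid(math.sin, 0, math.pi, 16), simpson(math.sin, 0, math.pi, 16))
```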

  • Newton–Cotes formulas
  • Romberg's method
    Romberg's method
    In numerical analysis, Romberg's method is used to estimate the definite integral \int_a^b f(x) \, dx by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule. The estimates generate a triangular array...

     — Richardson extrapolation applied to trapezium rule
  • Gaussian quadrature
    Gaussian quadrature
    In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration....

     — highest possible degree with given number of points
    • Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − x²)^(±1/2) on [−1, 1]
    • Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−x²) on (−∞, ∞)
    • Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − x)^α (1 + x)^β on [−1, 1]
    • Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−x) on [0, ∞)
    • Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature
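
A short sketch of Gauss–Legendre quadrature using NumPy's standard node-and-weight routine (numpy.polynomial.legendre.leggauss); the affine change of variables maps the rule from [−1, 1] to [a, b]:

```python
# Gauss-Legendre quadrature: an n-point rule integrates polynomials
# of degree up to 2n - 1 exactly on [-1, 1].
import numpy as np

def gauss_legendre(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

print(gauss_legendre(np.sin, 0.0, np.pi, 5))   # ~2.0 with only 5 points
```
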
  • Tanh-sinh quadrature
    Tanh-sinh quadrature
    Tanh-sinh quadrature is a method for numerical integration introduced by Hidetosi Takahasi and Masatake Mori in 1974. It uses the change of variables x = \tanh(\tfrac{1}{2}\pi\sinh t)...

     — variant of Gaussian quadrature which works well with singularities at the end points
  • Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials
  • Adaptive quadrature
    Adaptive quadrature
    In applied mathematics, adaptive quadrature is a process in which the integral of a function f is approximated using static quadrature rules on adaptively refined subintervals of the integration domain...

     — adapting the subintervals in which the integration interval is divided depending on the integrand
  • Monte Carlo integration — takes random samples of the integrand
    • See also #Monte Carlo method
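
A minimal Monte Carlo integration sketch (one-dimensional; the sample count is an illustrative choice):

```python
# Monte Carlo integration: the average of f at uniform random samples,
# times the length of the interval. The error decreases like
# O(1/sqrt(N)), independently of the dimension.
import random

def mc_integrate(f, a, b, n=100_000):
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integrate(lambda x: x * x, 0.0, 1.0))  # ~1/3
```
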
  • T-integration — a non-standard method
  • Lebedev quadrature — uses a grid on a sphere with octahedral symmetry
  • Sparse grid
    Sparse grid
    Sparse grids are numerical techniques to represent, integrate or interpolate high dimensional functions. They were originally found by the Russian mathematician Smolyak and are based on a sparse tensor product construction...

  • Numerical differentiation
    Numerical differentiation
    In numerical analysis, numerical differentiation describes algorithms for estimating the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function...

    • Numerical smoothing and differentiation
      Numerical smoothing and differentiation
      An experimental datum value can be conceptually described as the sum of a signal and some noise, but in practice the two contributions cannot be separated. The purpose of smoothing is to increase the Signal-to-noise ratio without greatly distorting the signal...

  • Euler–Maclaurin formula

Numerical ordinary differential equations

Numerical ordinary differential equations
Numerical ordinary differential equations
Numerical ordinary differential equations is the part of numerical analysis which studies the numerical solution of ordinary differential equations...

 — the numerical solution of ordinary differential equations (ODEs)
  • Euler method — the most basic method for solving an ODE
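
A minimal forward Euler sketch (the test problem and step size are illustrative):

```python
# Forward Euler for y' = f(t, y): one first-order step per iteration.
def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)  # y_{n+1} = y_n + h f(t_n, y_n)
        t += h
    return y

# y' = y, y(0) = 1; exact value y(1) = e = 2.71828...; forward Euler
# with h = 0.001 returns about 2.7169 (first-order accuracy).
print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))
```
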
  • Explicit and implicit methods
    Explicit and implicit methods
    Explicit and implicit methods are approaches used in numerical analysis for obtaining numerical solutions of time-dependent ordinary and partial differential equations, as is required in computer simulations of physical processes....

     — implicit methods need to solve an equation at every step
  • Runge–Kutta methods
    Runge–Kutta methods
    In numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M.W. Kutta.See the article...

     — one of the two main classes of methods for initial-value problems; the classical fourth-order member of the family is sketched below, after this list of variants
    • Midpoint method
      Midpoint method
      In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for solving the differential equation y' = f(t, y), \quad y(t_0) = y_0...

       — a second-order method with two stages
    • Heun's method
      Heun's method
      In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method , or a similar two-stage Runge–Kutta method. It is named after Karl L. W. M. Heun and is a numerical procedure for solving ordinary differential equations with a given initial value...

       — either a second-order method with two stages, or a third-order method with three stages
    • Bogacki–Shampine method
      Bogacki–Shampine method
      The Bogacki–Shampine method is a method for the numerical solution of ordinary differential equations that was proposed by Przemyslaw Bogacki and Lawrence F. Shampine in 1989. The Bogacki–Shampine method is a Runge–Kutta method of order three with four stages with the First Same As Last...

       — a third-order method with four stages (FSAL) and an embedded fourth-order method
    • Cash–Karp method
      Cash–Karp method
      In numerical analysis, the Cash–Karp method is a method for solving ordinary differential equations . It was proposed by Professor Jeff R. Cash from Imperial College London and Alan H. Karp from IBM Scientific Center. The method is a member of the Runge–Kutta family of ODE solvers...

       — a fifth-order method with six stages and an embedded fourth-order method
    • Dormand–Prince method
      Dormand–Prince method
      In numerical analysis, the Dormand–Prince method, or DOPRI method, is a method for solving ordinary differential equations . The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions...

       — a fifth-order method with seven stages (FSAL) and an embedded fourth-order method
    • Runge–Kutta–Fehlberg method
      Runge–Kutta–Fehlberg method
      In mathematics, the Runge–Kutta–Fehlberg method is an algorithm of numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the class of Runge–Kutta methods...

       — a fifth-order method with six stages and an embedded fourth-order method
    • Butcher group
      Butcher group
      In mathematics, the Butcher group, named after the New Zealand mathematician John C. Butcher, is an infinite-dimensional group first introduced in numerical analysis to study solutions of non-linear ordinary differential equations by the Runge–Kutta method...

       — algebraic formalism involving rooted trees for analysing Runge–Kutta methods
    • List of Runge–Kutta methods
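
The classical fourth-order Runge–Kutta step referred to above, as a minimal sketch:

```python
# One RK4 step for y' = f(t, y): four stage evaluations combined with
# weights 1/6, 2/6, 2/6, 1/6.
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y from t = 0 to 1; the result matches e to ~1e-6.
t, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, 0.1)
    t += 0.1
print(y)
```
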
  • Linear multistep method — the other main class of methods for initial-value problems
    • Backward differentiation formula
      Backward differentiation formula
      The backward differentiation formula is a family of implicit methods for the numerical integration of ordinary differential equations. They are linear multistep methods that, for a given function and time, approximate the derivative of that function using information from already computed times,...

       — implicit methods of order 2 to 6; especially suitable for stiff equations
    • Numerov's method
      Numerov's method
      Numerov's method is a numerical method to solve ordinary differential equations of second order in which the first-order term does not appear. It is a fourth-order linear multistep method...

       — fourth-order method for equations of the form y'' = f(x, y)
  • Bulirsch–Stoer algorithm
    Bulirsch–Stoer algorithm
    In numerical analysis, the Bulirsch–Stoer algorithm is a method for the numerical solution of ordinary differential equations which combines three powerful ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson-type applications, and the modified midpoint method,...

     — combines the midpoint method with Richardson extrapolation to attain arbitrary order
  • Methods designed for the solution of ODEs from classical physics:
    • Newmark-beta method
      Newmark-beta method
      The Newmark-beta method is a method of numerical integration used to solve differential equations. It is widely used in numerical evaluation of the dynamic response of structures and solids such as in finite element analysis to model dynamic systems. The method is named after Nathan M...

       — based on the extended mean-value theorem
    • Verlet integration
      Verlet integration
      Verlet integration is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics...

       — a popular second-order method
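
A minimal sketch of the basic (Störmer–)Verlet step (the test problem is illustrative):

```python
# Verlet integration for x'' = a(x): second order and time reversible.
def verlet(a, x0, v0, h, steps):
    x_prev = x0 - h * v0 + 0.5 * h * h * a(x0)  # backward Taylor bootstrap
    x = x0
    for _ in range(steps):
        x_next = 2 * x - x_prev + h * h * a(x)
        x_prev, x = x, x_next
    return x

# Harmonic oscillator x'' = -x with x(0) = 1, v(0) = 0: x(t) = cos(t).
print(verlet(lambda x: -x, 1.0, 0.0, 0.01, 1000))  # ~cos(10) = -0.839...
```
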
    • Leapfrog integration
      Leapfrog integration
      Leapfrog integration is a simple method for numerically integrating differential equations of the form \ddot x = F(x), or equivalently of the form \dot v = F(x), \dot x \equiv v, particularly in the case of a dynamical system of classical mechanics...

       — another name for Verlet integration
    • Beeman's algorithm
      Beeman's algorithm
      Beeman's algorithm is a method for numerically integrating ordinary differential equations of order 2, more specifically Newton's equations of motion \ddot x = A(x). It was designed to allow high numbers of particles in simulations of molecular dynamics. There is a direct or explicit and an implicit...

       — a two-step method extending the Verlet method
    • Dynamic relaxation
      Dynamic relaxation
      Dynamic relaxation is a numerical method which, among other things, can be used to do "form-finding" for cable and fabric structures. The aim is to find a geometry where all forces are in equilibrium...

  • Geometric integrator
    Geometric integrator
    In the mathematical field of numerical ordinary differential equations, a geometric integrator is a numerical method that preserves geometric properties of the exact flow of a differential equation...

     — a method that preserves some geometric structure of the equation
    • Symplectic integrator
      Symplectic integrator
      In mathematics, a symplectic integrator is a numerical integration scheme for a specific group of differential equations related to classical mechanics and symplectic geometry. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations...

       — a method for the solution of Hamilton's equations that preserves the symplectic structure
      • Variational integrator
        Variational integrator
        Variational integrators are numerical integrators for Hamiltonian systems derived from the Euler-Lagrange equations of a discretized Hamilton's principle...

         — symplectic integrators derived using the underlying variational principle
      • Semi-implicit Euler method
        Semi-implicit Euler method
        In mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet, is a modification of the Euler method for solving Hamilton's equations, a system of ordinary differential equations that arises in classical mechanics...

         — variant of Euler method which is symplectic when applied to separable Hamiltonians
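
A minimal sketch of the semi-implicit Euler update for a separable Hamiltonian H(q, p) = p²/2 + V(q) (the potential below is illustrative):

```python
# Symplectic Euler: update the momentum with the old position, then
# the position with the new momentum. Unlike forward Euler, the
# energy error stays bounded over long integrations.
def symplectic_euler(dVdq, q, p, h, steps):
    for _ in range(steps):
        p -= h * dVdq(q)  # "kick" with the current position
        q += h * p        # "drift" with the updated momentum
    return q, p

# Harmonic oscillator: V(q) = q^2 / 2, so dV/dq = q.
print(symplectic_euler(lambda q: q, 1.0, 0.0, 0.01, 1000))
```
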
  • Adaptive stepsize
    Adaptive stepsize
    Adaptive stepsize is a technique in numerical analysis used for many problems, but mainly for integration. It can be used both for normal integration and for the process of solving an ordinary differential equation. This article focuses on the latter...

     — automatically changing the step size when that seems advantageous
  • Stiff equation
    Stiff equation
    In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proved difficult to formulate a precise definition of stiffness, but the main idea is that...

     — roughly, an ODE for which unstable methods need a very short step size, but stable methods do not
  • Methods for solving two-point boundary value problems (BVPs):
    • Shooting method
      Shooting method
      In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to the solution of an initial value problem...
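
A sketch of the shooting idea on a simple linear BVP, using forward Euler as the inner initial-value solver and bisection on the unknown initial slope (all of these choices are illustrative):

```python
# Shooting method for y'' = -y, y(0) = 0, y(pi/2) = 1: guess the
# initial slope s, integrate the IVP, and adjust s until the
# right-hand boundary condition is met. Exact solution: y = sin(t).
import math

def integrate(s, h=1e-4):            # forward Euler on the IVP
    t, y, v = 0.0, 0.0, s            # y(0) = 0, y'(0) = s
    while t < math.pi / 2:
        y, v = y + h * v, v - h * y  # y' = v, v' = -y
        t += h
    return y                         # value at the right boundary

def shoot(target=1.0, lo=0.0, hi=5.0):
    for _ in range(60):              # bisection on the slope
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(shoot())  # ~1.0, the exact initial slope of sin(t)
```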

    • Direct multiple shooting method
      Direct multiple shooting method
      In the area of mathematics known as numerical ordinary differential equations, the direct multiple shooting method is a numerical method for the solution of boundary value problems...

       — divides the interval into several subintervals and applies the shooting method on each subinterval
  • Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints:
    • Constraint algorithm
      Constraint algorithm
      In mechanics, a constraint algorithm is a method for satisfying constraints for bodies that obey Newton's equations of motion. There are three basic approaches to satisfying such constraints: choosing novel unconstrained coordinates, introducing explicit constraint forces, and minimizing...

       — for solving Newton's equations with constraints
  • Methods for solving stochastic differential equations (SDEs):
    • Euler–Maruyama method — generalization of the Euler method for SDEs
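
A minimal Euler–Maruyama sketch (the drift, diffusion, and step size below are illustrative):

```python
# Euler-Maruyama for dX = mu(X) dt + sigma(X) dW: the Euler update
# plus a Gaussian increment sqrt(dt) * N(0, 1) for the Wiener process.
import math
import random

def euler_maruyama(mu, sigma, x0, dt, steps):
    x = x0
    for _ in range(steps):
        dw = math.sqrt(dt) * random.gauss(0.0, 1.0)
        x += mu(x) * dt + sigma(x) * dw
    return x

# One sample path of geometric Brownian motion:
# dX = 0.05 X dt + 0.2 X dW, X(0) = 1, integrated up to t = 1.
print(euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 1e-3, 1000))
```
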
    • Milstein method
      Milstein method
      In mathematics, the Milstein method, named after Grigori N. Milstein, is a technique for the approximate numerical solution of a stochastic differential equation.Consider the Itō stochastic differential equation...

       — a method with strong order one
    • Runge–Kutta method (SDE)
      Runge–Kutta method (SDE)
      In mathematics, the Runge–Kutta method is a technique for the approximate numerical solution of a stochastic differential equation. It is a generalization of the Runge–Kutta method for ordinary differential equations to stochastic differential equations....

       — generalization of the family of Runge–Kutta methods for SDEs
  • Methods for solving integral equations:
  • Bi-directional delay line
    Bi-directional delay line
    In mathematics, a bi-directional delay line is a numerical analysis technique used in computer simulation for solving ordinary differential equations by converting them to hyperbolic equations. In this way an explicit solution scheme is obtained with highly robust numerical properties...

  • Partial element equivalent circuit
  • History of numerical solution of differential equations using computers
    History of numerical solution of differential equations using computers
    According to Mary Croarken in her paper "Computing in Britain During World War II," by 1945 the Cambridge Mathematical Laboratory created by John Lennard-Jones utilized the latest computing devices to solve such equations...

  • Truncation error (numerical integration)
    Truncation error (numerical integration)
    Truncation errors in numerical integration are of two kinds: local truncation errors, the error caused by one iteration, and global truncation errors, the cumulative error caused by many iterations...

     — local and global truncation errors, and their relationships

Numerical partial differential equations

Numerical partial differential equations
Numerical partial differential equations
Numerical partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations .Numerical techniques for solving PDEs include the following:...

 — the numerical solution of partial differential equations (PDEs)

Finite difference methods

Finite difference method
Finite difference method
In mathematics, finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives...

 — based on approximating differential operators with difference operators
  • Finite difference
    Finite difference
    A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient...

     — the discrete analogue of a differential operator
    • Difference operator — the numerator of a finite difference
    • Discrete Laplace operator
      Discrete Laplace operator
      In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid...

       — finite-difference approximation of the Laplace operator
    • Discrete Poisson equation
      Discrete Poisson equation
      In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator...

       — discrete analogue of the Poisson equation using the discrete Laplace operator
  • Stencil (numerical analysis)
    Stencil (numerical analysis)
    In mathematics, especially the areas of numerical analysis concentrating on the numerical solution of partial differential equations, a stencil is a geometric arrangement of a nodal group that relates to the point of interest by using a numerical approximation routine. Stencils are the basis for...

     — the geometric arrangements of grid points affected by a basic step of the algorithm
    • Compact stencil
      Compact stencil
      In mathematics, especially in the areas of numerical analysis called numerical partial differential equations, a compact stencil is a type of stencil that uses only nine nodes for its discretization method in two dimensions. It uses only the center node and the adjacent nodes...

       — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours
    • Non-compact stencil
      Non-compact stencil
      In numerical mathematics, a non-compact stencil is a type of discretization method, where any node surrounding the node of interest may be used in the calculation. A non-compact stencil's computational time increases with an increase of layers of nodes used. Non-compact stencils may be compared to...

       — any stencil that is not compact
    • Five-point stencil
      Five-point stencil
      In numerical analysis, given a square grid in one or two dimensions, the five-point stencil of a point in the grid is made up of the point itself together with its four "neighbors"...

       — two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid
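
A sketch of the discrete Laplacian built from this five-point stencil on a uniform grid (boundary handling is simplified to zeros):

```python
# Five-point-stencil Laplacian:
# (u[i+1,j] + u[i-1,j] + u[i,j+1] + u[i,j-1] - 4 u[i,j]) / h^2.
import numpy as np

def laplacian(u, h):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    ) / h**2
    return lap  # boundary rows and columns are left at zero

# On u(x, y) = x^2 + y^2 the exact Laplacian is 4 everywhere, and the
# stencil reproduces it exactly for this quadratic.
n, h = 6, 0.1
x = np.arange(n) * h
u = x[:, None]**2 + x[None, :]**2
print(laplacian(u, h)[1:-1, 1:-1])
```
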
    • Stencil codes
      Stencil codes
      Stencil codes are a class of iterative kernels which update array elements according to some fixed pattern, called a stencil. They are most commonly found in the codes of computer simulations, e.g...

       — computer codes that update array elements according to some fixed pattern, called stencils
  • Finite difference methods for heat equation and related PDEs:
    • FTCS scheme
      FTCS scheme
      In numerical analysis, the FTCS method is a finite difference method used for numerically solving the heat equation and similar parabolic partial differential equations. It is a first-order method in time, explicit in time, and is conditionally stable...

       (forward-time central-space) — first-order explicit
    • Crank–Nicolson method — second-order implicit
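
A single FTCS step for the one-dimensional heat equation u_t = α u_xx, as a minimal sketch (the grid and step sizes are illustrative and chosen to satisfy the stability bound α Δt / Δx² ≤ 1/2):

```python
# FTCS: forward difference in time, central difference in space.
import numpy as np

def ftcs_step(u, alpha, dt, dx):
    r = alpha * dt / dx**2  # must satisfy r <= 1/2 for stability
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_new            # Dirichlet boundary values kept fixed

x = np.linspace(0.0, 1.0, 51)
u = np.sin(np.pi * x)       # initial temperature profile
for _ in range(100):        # advance to t = 0.01
    u = ftcs_step(u, alpha=1.0, dt=1e-4, dx=x[1] - x[0])
# The exact solution decays like exp(-pi^2 t), about 0.906 at t = 0.01.
print(u.max())
```
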
  • Finite difference methods for hyperbolic PDEs like the wave equation:
    • Lax–Friedrichs method
      Lax–Friedrichs method
      The Lax–Friedrichs method, named after Peter Lax and Kurt O. Friedrichs, is a numerical method for the solution of hyperbolic partial differential equations based on finite differences...

       — first-order explicit
    • Lax–Wendroff method
      Lax–Wendroff method
      The Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite differences...

       — second-order explicit
    • MacCormack method
      MacCormack method
      In computational fluid dynamics, the MacCormack method is a widely used discretization scheme for the numerical solution of hyperbolic partial differential equations. This second-order finite difference method was introduced by Robert W. MacCormack in 1969...

       — second-order explicit
    • Upwind scheme
      Upwind scheme
      In computational fluid dynamics, upwind schemes denote a class of numerical discretization methods for solving hyperbolic partial differential equations. Upwind schemes use an adaptive or solution-sensitive finite difference stencil to numerically simulate more properly the direction of propagation...

  • Alternating direction implicit method
    Alternating direction implicit method
    In numerical analysis, the Alternating Direction Implicit method is a finite difference method for solving parabolic and elliptic partial differential equations. It is most notably used to solve the problem of heat conduction or solving the diffusion equation in two or more dimensions...

     (ADI) — update using the flow in x-direction and then using flow in y-direction
  • Specific applications:
    • Finite difference methods for option pricing
      Finite difference methods for option pricing
      Finite difference methods for option pricing are numerical methods used in mathematical finance for the valuation of options. Finite difference methods were first applied to option pricing by Eduardo Schwartz in 1977....

    • Finite-difference time-domain method
      Finite-difference time-domain method
      Finite-difference time-domain is one of the primary available computational electrodynamics modeling techniques. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.The FDTD method...

       — a finite-difference method for electrodynamics

Finite element methods

Finite element method
Finite element method
The finite element method is a numerical technique for finding approximate solutions of partial differential equations as well as integral equations...

 — based on a discretization of the space of solutions
  • Finite element method in structural mechanics
    Finite element method in structural mechanics
    The Finite element method is a powerful technique originally developed for numerical solution of complex problems in structural mechanics, and it remains the method of choice for complex systems. In the FEM, the structural system is modeled by a set of appropriate finite elements interconnected at...

     — a physical approach to finite element methods
  • Engineering treatment of the finite element method — an explanation of finite elements geared towards engineers
  • Galerkin method
    Galerkin method
    In mathematics, in the area of numerical analysis, Galerkin methods are a class of methods for converting a continuous operator problem to a discrete problem. In principle, it is the equivalent of applying the method of variation of parameters to a function space, by converting the equation to a...

     — a finite element method in which the residual is orthogonal to the finite element space
    • Discontinuous Galerkin method
      Discontinuous Galerkin method
      Discontinuous Galerkin methods in mathematics form a class of numerical methods for solving partial differential equations. They combine features of the finite element and the finite volume framework and have been successfully applied to hyperbolic, elliptic and parabolic problems arising from a...

       — a Galerkin method in which the approximate solution is not continuous
  • Rayleigh-Ritz method
    Rayleigh-Ritz method
    In applied mathematics and mechanical engineering, the Rayleigh–Ritz method is a widely used, classical method for the calculation of the natural vibration frequency of a structure in the second or higher order...

     — a finite element method based on variational principles
  • Spectral element method
    Spectral element method
    In mathematics, the spectral element method is a high order finite element method.Introduced in a 1984 paper by A. T. Patera, the abstract begins: "A spectral element method that combines the generality of the finite element method with the accuracy of spectral techniques..."The spectral element...

     — high-order finite element methods
  • hp-FEM
    Hp-FEM
    hp-FEM is a general version of the finite element method , a numerical method for solving partial differential equations based on piecewise-polynomial approximations that employs elements of variable size and polynomial degree ...

     — variant in which both the size and the order of the elements are automatically adapted
  • Direct stiffness method
    Direct stiffness method
    As one of the methods of structural analysis, the direct stiffness method, also known as the displacement method or matrix stiffness method, is particularly suited for computer-automated analysis of complex structures including the statically indeterminate type...

     — a particular implementation of the finite element method, often used in structural analysis
  • Trefftz method
    Trefftz method
    In mathematics, the Trefftz method is a method for the numerical solution of partial differential equations named after the German mathematician Erich Trefftz. It falls within the class of finite element methods...

  • Finite element updating
    Finite element updating
    Finite element model updating is the process of ensuring that finite element analysis results in models that better reflect the measured data than the initial models. It is part of verification and validation of numerical models...

  • Extended finite element method
    Extended finite element method
    The extended finite element method , also known as generalized finite element method or partition of unity method is a numerical technique that extends the classical finite element method approach by enriching the solution space for solutions to differential equations with discontinuous...

     — puts functions tailored to the problem in the approximation space
  • Functionally graded element
    Functionally graded element
    In materials science and mathematics, functionally graded elements are elements used in finite element analysis. They can be used to describe a functionally graded material....

     — elements for describing functionally graded materials
  • Interval finite element method
    Interval finite element
    The interval finite element method is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics of the structure. This is important in concrete structures, wood structures, geomechanics,...

     — combination of finite elements with interval arithmetic
  • Discrete exterior calculus
    Discrete exterior calculus
    In mathematics, the discrete exterior calculus is the extension of the exterior calculus to discrete spaces including graphs and finite element meshes. DEC methods have proved to be very powerful in improving and analyzing finite element methods: for instance, DEC-based methods allow the use of...

     — discrete form of the exterior calculus of differential geometry
  • Modal analysis using FEM
    Modal analysis using FEM
    The goal of modal analysis in structural mechanics is to determine the natural mode shapes and frequencies of an object or structure during free vibration. It is common to use the finite element method to perform this analysis because, like other calculations using the FEM, the object being...

     — solution of eigenvalue problems to find natural vibrations
  • Céa's lemma
    Céa's lemma
    Céa's lemma is a lemma in mathematics. It is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations. Let V be a real Hilbert space with the norm \|\cdot\|...

     — the solution in the finite-element space is an almost-best approximation of the true solution in that space
  • Patch test (finite elements)
    Patch test (finite elements)
    The patch test in the finite element method is a simple indicator of the quality of a finite element, developed by Bruce Irons. The patch test uses a partial differential equation on a domain consisting of several elements set up so that the exact solution is known...

     — simple test for the quality of a finite element
  • Multiple-point constraint (MPC)
  • List of finite element software packages
  • NAFEMS
    NAFEMS
    NAFEMS is an independent, not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis and, specifically, finite element analysis...

     — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis
  • Multiphase topology optimisation
    Multiphase topology optimisation
    The Multi Phase Topology Optimisation is a simulation technique based on the principle of the finite element method which is able to determine the optimal distribution of two or more different materials in combination under thermal and mechanical loads....

     — technique based on finite elements for determining optimal composition of a mixture

Other methods

  • Spectral method
    Spectral method
    Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain dynamical systems, often involving the use of the fast Fourier transform. Where applicable, spectral methods have excellent error properties, with the so-called "exponential...

     — based on the Fourier transformation
    • Pseudo-spectral method
      Pseudo-spectral method
      Pseudo-spectral methods are a class of numerical methods used in applied mathematics and scientific computing for the solution of PDEs, such as the direct simulation of a particle with an arbitrary wavefunction interacting with an arbitrary potential...

  • Method of lines
    Method of lines
    The method of lines is a technique for solving partial differential equations in which all but one dimension is discretized. MOL allows standard, general-purpose methods and software, developed for the numerical integration of ODEs and DAEs, to be used...

     — reduces the PDE to a large system of ordinary differential equations
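
     A hedged method-of-lines sketch in Python (NumPy and SciPy assumed): the heat equation u_t = u_xx on (0, 1) with zero Dirichlet boundaries is discretized in space only, and the resulting ODE system is handed to a standard stiff integrator.

       import numpy as np
       from scipy.integrate import solve_ivp

       n = 50
       x = np.linspace(0.0, 1.0, n + 2)   # grid including boundary points
       h = x[1] - x[0]

       def rhs(t, u):
           # second-order central difference; boundaries held at zero
           u_pad = np.concatenate(([0.0], u, [0.0]))
           return (u_pad[:-2] - 2.0 * u_pad[1:-1] + u_pad[2:]) / h**2

       u0 = np.sin(np.pi * x[1:-1])
       sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")
       # the exact mode decays like exp(-pi**2 * t)
       print(sol.y[:, -1].max(), np.exp(-np.pi**2 * 0.1))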
  • Boundary element method
    Boundary element method
    The boundary element method is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations . It can be applied in many areas of engineering and science including fluid mechanics, acoustics, electromagnetics, and fracture...

     — based on transforming the PDE to an integral equation on the boundary of the domain
    • Interval boundary element method
      Interval boundary element method
      Interval boundary element method is classical boundary element method with the interval parameters.Boundary element method is based on the following integral equation...

        — a version using interval arithmetic
  • Analytic element method
    Analytic element method
    The analytic element method is a numerical method used for the solution of partial differential equations. It was initially developed by O.D.L. Strack at the University of Minnesota...

     — similar to the boundary element method, but the integral equation is evaluated analytically
  • Finite volume method
    Finite volume method
    The finite volume method is a method for representing and evaluating partial differential equations in the form of algebraic equations [LeVeque, 2002; Toro, 1999]....

     — based on dividing the domain into many small subdomains; popular in computational fluid dynamics
    • Godunov's scheme
      Godunov's scheme
      In numerical analysis and computational fluid dynamics, Godunov's scheme is a conservative numerical scheme, suggested by S. K. Godunov in 1959, for solving partial differential equations...

       — first-order conservative scheme for fluid flow, based on piecewise constant approximation
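
        For linear advection u_t + a·u_x = 0 the interface Riemann problems are solved exactly by upwinding, so Godunov's scheme reduces to the first-order upwind scheme. A minimal Python sketch (NumPy assumed, periodic boundaries):

          import numpy as np

          a, n, cfl = 1.0, 200, 0.9
          dx = 1.0 / n
          dt = cfl * dx / a
          x = (np.arange(n) + 0.5) * dx
          u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse

          for _ in range(100):
              f_left = a * np.roll(u, 1)          # upwind flux at left interfaces
              u = u - dt / dx * (a * u - f_left)  # conservative update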
    • MUSCL scheme
      MUSCL scheme
      In the study of partial differential equations, the MUSCL scheme is a finite volume method that can provide highly accurate numerical solutions for a given system, even in cases where the solutions exhibit shocks, discontinuities, or large gradients...

       — second-order variant of Godunov's scheme
    • AUSM
      AUSM
      AUSM stands for Advection Upstream Splitting Method. It was developed as a numerical inviscid flux function for solving a general system of conservation equations...

       — advection upstream splitting method
    • Flux limiter
      Flux limiter
      Flux limiters are used in high resolution schemes – numerical schemes used to solve problems in science and engineering, particularly fluid dynamics, described by partial differential equations...

       — limits spatial derivatives (fluxes) in order to avoid spurious oscillations
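
        A minimal sketch of the idea in Python (NumPy assumed): a limiter function phi(r) of the ratio r of consecutive solution gradients scales the anti-diffusive part of a high-order flux back toward the low-order flux wherever the data is not smooth. The helper names below are illustrative, not from any particular library.

          import numpy as np

          def minmod(r):
              # vanishes at extrema (r <= 0), so no new oscillations are created
              return np.maximum(0.0, np.minimum(1.0, r))

          def limited_flux(f_low, f_high, r):
              # first-order flux plus a limited anti-diffusive correction
              return f_low + minmod(r) * (f_high - f_low)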
    • Riemann solver
      Riemann solver
      A Riemann solver is a numerical method used to solve a Riemann problem. They are heavily used in computational fluid dynamics and computational magnetohydrodynamics.-Exact solvers:...

       — a solver for Riemann problems (a conservation law with piecewise constant data)
  • Discrete element method
    Discrete element method
    A discrete element method, also called a distinct element method, is any of a family of numerical methods for computing the motion of a large number of particles of micrometre-scale size and above...

     — a method in which the elements can move freely relative to each other
  • Meshfree methods
    Meshfree methods
    Meshfree methods are a particular class of numerical simulation algorithms for the simulation of physical phenomena. Traditional simulation algorithms relied on a grid or a mesh; meshfree methods, in contrast, use the geometry of the simulated object directly for calculations. Meshfree methods exist...

     — do not use a mesh, but use a particle view of the field
    • Diffuse element method
      Diffuse element method
      The diffuse element method is a computer simulation technique used in engineering analysis. It is a meshfree method. The diffuse element method was developed by B. Nayroles, G. Touzot and Pierre Villon at the Université de Technologie de Compiègne in 1992. It is in concept rather similar to the...

  • Methods designed for problems from electromagnetics:
    • Finite-difference time-domain method
      Finite-difference time-domain method
      Finite-difference time-domain is one of the primary available computational electrodynamics modeling techniques. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.The FDTD method...

       — a finite-difference method
    • Transmission line matrix method
      Transmission line matrix method
      The transmission-line matrix method is a space and time discretising method for computation of electromagnetic fields. It is based on the analogy between the electromagnetic field and a mesh of transmission lines...

        (TLM) — based on the analogy between the electromagnetic field and a mesh of transmission lines
    • Uniform theory of diffraction
      Uniform theory of diffraction
      In numerical analysis, the uniform geometrical theory of diffraction is a high-frequency method for solving electromagnetic scattering problems from electrically small discontinuities or discontinuities in more than one dimension at the same point. UTD is an extension of Joseph Keller's...

       — specifically designed for scattering problems
  • Particle-in-cell
    Particle-in-cell
    The Particle-in-Cell method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed...

     — used especially in plasma physics and fluid dynamics
  • High-resolution scheme
  • Shock capturing methods
    Shock capturing methods
    In computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. Computation of flow through shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables such as pressure, temperature,...

  • Split-step method
    Split-step method
    In numerical analysis, the split-step method is a pseudo-spectral numerical method used to solve nonlinear partial differential equations like the nonlinear Schrödinger equation. The name arises for two reasons. First, the method relies on computing the solution in small steps, and treating the...

  • Fast marching method
    Fast marching method
    The fast marching method was introduced by James A. Sethian as a numerical method for solving boundary value problems of the Eikonal equation. Typically, such a problem describes the evolution of a closed curve as a function of time T with speed F in the normal direction at a point x on the curve...

  • Lattice Boltzmann methods
    Lattice Boltzmann methods
    Lattice Boltzmann methods are a class of computational fluid dynamics methods for fluid simulation. Instead of solving the Navier–Stokes equations, the discrete Boltzmann equation is solved to simulate the flow of a Newtonian fluid with collision models such as Bhatnagar–Gross–Krook...

     — for the solution of the Navier–Stokes equations
  • Roe solver — for the solution of the Euler equations
  • Relaxation method
    Relaxation method
    In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems. Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations...

     — a method for solving elliptic PDEs by converting them to evolution equations
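
     A minimal Jacobi-relaxation sketch in Python (NumPy assumed) for Laplace's equation on the unit square: each sweep replaces every interior value by the average of its four neighbours, which is exactly one explicit pseudo-time step of the corresponding heat equation.

       import numpy as np

       n = 50
       u = np.zeros((n, n))
       u[0, :] = 1.0   # Dirichlet data on one edge

       for _ in range(5000):
           u_new = u.copy()
           u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                       + u[1:-1, :-2] + u[1:-1, 2:])
           if np.max(np.abs(u_new - u)) < 1e-8:
               break
           u = u_new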
  • Broad classes of methods:
    • Mimetic methods — methods that respect in some sense the structure of the original problem
    • Multiphysics
      Multiphysics
      Multiphysics treats simulations that involve multiple physical models or multiple simultaneous physical phenomena. For example, combining chemical kinetics and fluid mechanics or combining finite elements with molecular dynamics...

       — models consisting of various submodels with different physics
    • Immersed boundary method
      Immersed boundary method
      The immersed boundary method is an approach – in computational fluid dynamics – to model and simulate mechanical systems in which elastic structures interact with fluid flows...

       — for simulating elastic structures immersed within fluids

Techniques for improving these methods

  • Multigrid method
    Multigrid method
    Multigrid methods in numerical analysis are a group of algorithms for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior...

     — uses a hierarchy of nested meshes to speed up the methods
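
     A hedged two-grid sketch in Python (NumPy assumed) for -u'' = f on (0, 1) with n = 2^k + 1 points; full multigrid applies this correction recursively instead of solving the coarse problem directly. All function names are illustrative.

       import numpy as np

       def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
           # weighted Jacobi smoothing for the 3-point Laplacian
           for _ in range(sweeps):
               u_jac = u.copy()
               u_jac[1:-1] = 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
               u = u + omega * (u_jac - u)
           return u

       def residual(u, f, h):
           r = np.zeros_like(u)
           r[1:-1] = f[1:-1] - (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2
           return r

       def two_grid(u, f, h):
           u = smooth(u, f, h)                      # pre-smooth
           r = residual(u, f, h)
           r2 = r[::2].copy()                       # restrict (full weighting)
           r2[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
           m = r2.size
           A2 = (np.diag(2.0 * np.ones(m - 2))
                 - np.diag(np.ones(m - 3), 1)
                 - np.diag(np.ones(m - 3), -1)) / (2.0 * h) ** 2
           e2 = np.zeros(m)
           e2[1:-1] = np.linalg.solve(A2, r2[1:-1])   # exact coarse solve
           e = np.interp(np.arange(u.size),           # prolong (linear interp.)
                         np.arange(0, u.size, 2), e2)
           return smooth(u + e, f, h)                 # post-smooth

       n = 129
       h = 1.0 / (n - 1)
       x = np.linspace(0.0, 1.0, n)
       f = np.pi**2 * np.sin(np.pi * x)
       u = np.zeros(n)
       for _ in range(10):
           u = two_grid(u, f, h)
       print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error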
  • Domain decomposition methods
    Domain decomposition methods
    In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains...

     — divides the domain into a few subdomains and solves the PDE on these subdomains
    • Additive Schwarz method
    • Abstract additive Schwarz method
      Abstract additive Schwarz method
      In mathematics, the abstract additive Schwarz method, named after Hermann Schwarz, is an abstract version of the additive Schwarz method, formulated only in terms of linear algebra without reference to domains, subdomains, etc...

       — abstract version of additive Schwarz without reference to geometric information
    • Balancing domain decomposition
      Balancing domain decomposition
      In numerical analysis, the balancing domain decomposition method is an iterative method to find the solution of a symmetric positive definite system of linear algebraic equations arising from the finite element method . In each iteration, it combines the solution of local problems on...

       (BDD) — preconditioner for symmetric positive definite matrices
    • Balancing domain decomposition by constraints
      BDDC
      In numerical analysis, BDDC is a domain decomposition method for solving large symmetric, positive definite systems of linear equations that arise from the finite element method. BDDC is used as a preconditioner to the conjugate gradient method...

       (BDDC) — further development of BDD
    • Finite element tearing and interconnect
      FETI
      In mathematics, in particular numerical analysis, the FETI method is an iterative substructuring method for solving systems of linear equations from the finite element method for the solution of elliptic partial differential equations, in particular in computational mechanics In each iteration,...

       (FETI)
    • FETI-DP
      FETI-DP
      The FETI-DP method is a domain decomposition method that enforces equality of the solution at subdomain interfaces by Lagrange multipliers except at subdomain corners, which remain primal variables.The first mathematical analysis of the method was provided by Mandel and Tezaur...

       — further development of FETI
    • Fictitious domain method
      Fictitious domain method
      In mathematics, the fictitious domain method is a method to find the solution of a partial differential equation on a complicated domain D, by substituting a given problem...

       — preconditioner constructed with a structured mesh on a fictitious domain of simple shape
    • Mortar methods
      Mortar methods
      Mortar methods are discretization methods for partial differential equations, which use separate finite element discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously...

        — meshes on the subdomains need not match on the interface; continuity is enforced by Lagrange multipliers
    • Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain
    • Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains
    • Poincaré–Steklov operator
      Poincaré–Steklov operator
      In mathematics, a Poincaré–Steklov operator maps the values of one boundary condition of the solution of an elliptic partial differential equation in a domain to the values of another boundary condition. Usually, either of the boundary conditions determines the solution...

        — maps the values of one boundary condition of an elliptic PDE to the values of another (e.g., the tangential electric field onto the equivalent electric current)
    • Schur complement method
      Schur complement method
      In numerical analysis, the Schur complement method, named after Issai Schur, is the basic and the earliest version of non-overlapping domain decomposition method, also called iterative substructuring. A finite element problem is split into non-overlapping subdomains, and the unknowns in the...

       — early and basic method on subdomains that do not overlap
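
        With the interior unknowns u_I of the subdomains ordered before the interface unknowns u_Γ, the discretized problem has the block form below; eliminating u_I leaves a smaller system for the interface values, governed by the Schur complement S. In LaTeX:

          \begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
          \begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix}
          =
          \begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix},
          \qquad
          S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma},
          \qquad
          S u_\Gamma = f_\Gamma - A_{\Gamma I} A_{II}^{-1} f_I .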
    • Schwarz alternating method
      Schwarz alternating method
      In mathematics, the Schwarz alternating method, named after Hermann Schwarz, is an iterative method to find the solution of a partial differential equation on a domain which is the union of two overlapping subdomains, by solving the equation on each of the two subdomains in turn, taking always the...

       — early and basic method on subdomains that overlap
  • Coarse space
    Coarse space (numerical analysis)
    In numerical analysis, a coarse problem is an auxiliary system of equations used in an iterative method for the solution of a given larger system of equations. A coarse problem is basically a version of the same problem at a lower resolution, retaining its essential characteristics, but with fewer...

     — variant of the problem which uses a discretization with fewer degrees of freedom
  • Adaptive mesh refinement
    Adaptive mesh refinement
    In numerical analysis, adaptive mesh refinement is a method of adaptive meshing. Central to any Eulerian method is the manner in which it discretizes the continuous domain of interest into a grid of many individual elements...

     — uses the computed solution to refine the mesh only where necessary
  • Fast multipole method
    Fast Multipole Method
    The fast multipole method is a mathematical technique that was developed to speed up the calculation of long-ranged forces in the n-body problem...

     — hierarchical method for evaluating particle-particle interactions

Miscellaneous

  • Analysis:
    • Lax equivalence theorem
      Lax equivalence theorem
      In numerical analysis, the Lax equivalence theorem is the fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations...

       — a consistent method is convergent if and only if it is stable
    • Courant–Friedrichs–Lewy condition
      Courant–Friedrichs–Lewy condition
      In mathematics, the Courant–Friedrichs–Lewy condition is a necessary condition for convergence while solving certain partial differential equations numerically by the method of finite differences. It arises when explicit time-marching schemes are used for the numerical solution...

       — stability condition for hyperbolic PDEs
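
        For the model advection equation u_t + a u_x = 0 discretized with time step Δt and mesh width Δx, the condition bounds the Courant number; in LaTeX:

          C \;=\; \frac{|a|\,\Delta t}{\Delta x} \;\le\; C_{\max},

        where C_max depends on the scheme (C_max = 1 for the first-order upwind scheme). Intuitively, information may not travel more than the scheme's stencil can see in one time step.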
    • Von Neumann stability analysis
      Von Neumann stability analysis
      In numerical analysis, von Neumann stability analysis is a procedure used to check the stability of finite difference schemes as applied to linear partial differential equations...

       — all Fourier components of the error should be stable
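
        A standard worked example: substituting a Fourier mode u_j^n = g^n e^{i k j Δx} into the FTCS discretization of the heat equation u_t = α u_xx gives the amplification factor, in LaTeX,

          g(k) \;=\; 1 - 4 r \sin^2\!\left(\tfrac{k\,\Delta x}{2}\right),
          \qquad r = \frac{\alpha\,\Delta t}{\Delta x^2},

        and requiring |g(k)| ≤ 1 for every mode yields the stability condition r ≤ 1/2.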
    • Numerical diffusion
      Numerical diffusion
      Numerical diffusion is a difficulty with computer simulations of continua wherein the simulated medium exhibits a higher diffusivity than the true medium...

        — diffusion introduced by the numerical method, over and above that which is naturally present
    • Numerical resistivity
      Numerical resistivity
      Numerical resistivity is a problem in computer simulations of ideal magnetohydrodynamics . It is a form of numerical diffusion. In near-ideal MHD systems, the magnetic field can diffuse only very slowly through the plasma or fluid of the system; it is rate-limited by the resistivity of the fluid...

        — the analogous effect in magnetohydrodynamics, with resistivity in place of diffusion
    • Weak formulation
      Weak formulation
      Weak formulations are an important tool for the analysis of mathematical equations that permit the transfer of concepts of linear algebra to solve problems in other fields such as partial differential equations...

       — a functional-analytic reformulation of the PDE necessary for some methods
    • Total variation diminishing
      Total variation diminishing
      In numerical methods, total variation diminishing is a property of certain discretization schemes used to solve hyperbolic partial differential equations...

       — property of schemes that do not introduce spurious oscillations
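
        Formally, a scheme is TVD if the total variation of the discrete solution never grows; in LaTeX:

          TV(u^{n+1}) \;\le\; TV(u^n),
          \qquad
          TV(u) \;=\; \sum_j |u_{j+1} - u_j| .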
    • Godunov's theorem
      Godunov's theorem
      In numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high resolution schemes for the numerical solution of partial differential equations.The theorem states...

       — linear monotone schemes can only be of first order
  • Grids and meshes:
    • Mesh generation
      Mesh generation
      Mesh generation is the practice of generating a polygonal or polyhedral mesh that approximates a geometric domain. The term "grid generation" is often used interchangeably. Typical uses are for rendering to a computer screen or for physical simulation such as finite element analysis or...

      • Parallel mesh generation
        Parallel mesh generation
        Parallel mesh generation in numerical analysis is a new research area between the boundaries of two scientific computing disciplines: computational geometry and parallel computing. Parallel mesh generation methods decompose the original mesh generation problem into smaller subproblems which are...

    • Geodesic grid
      Geodesic grid
      A geodesic grid is a technique used to model the surface of a sphere with a subdivided polyhedron, usually an icosahedron.-Introduction:...

       — isotropic grid on a sphere
    • Spatial twist continuum
      Spatial twist continuum
      The spatial twist continuum is a dual representation of an all hexahedral mesh that defines the global connectivity constraint.Discovered by Dr...

       — dual representation of a mesh consisting of hexahedra
  • Perfectly matched layer
    Perfectly matched layer
    A perfectly matched layer is an artificial absorbing layer for wave equations, commonly used to truncate computational regions in numerical methods to simulate problems with open boundaries, especially in the FDTD and FEM methods...

     — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions

Monte Carlo method
Monte Carlo method
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in computer simulations of physical and mathematical systems...

  • Variants of the Monte Carlo method:
    • Direct simulation Monte Carlo
      Direct simulation Monte Carlo
      The direct simulation Monte Carlo method uses probabilistic simulation to solve the Boltzmann equation for finite Knudsen number fluid flows....

    • Quasi-Monte Carlo method
      Quasi-Monte Carlo method
      In numerical analysis, a quasi-Monte Carlo method is a method for the computation of an integral that is based on low-discrepancy sequences...

    • Markov chain Monte Carlo
      Markov chain Monte Carlo
      Markov chain Monte Carlo methods are a class of algorithms for sampling from probability distributions based on constructing a Markov chain that has the desired distribution as its equilibrium distribution. The state of the chain after a large number of steps is then used as a sample of the...

      • Metropolis–Hastings algorithm (a random-walk sketch follows this sub-list)
        • Multiple-try Metropolis
          Multiple-try Metropolis
          In Markov chain Monte Carlo, the Metropolis–Hastings algorithm can be used to sample from a probability distribution which is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary...

           — modification which allows larger step sizes
        • Wang and Landau algorithm
          Wang and Landau algorithm
          The Wang and Landau algorithm proposed by Fugao Wang and David P. Landau is an extension of Metropolis Monte Carlo sampling. It is designed to calculate the density of states of a computer-simulated system, such as an Ising model of spin glasses, or model atoms in a molecular force field...

           — extension of Metropolis Monte Carlo
        • Equation of State Calculations by Fast Computing Machines
          Equation of State Calculations by Fast Computing Machines
          Equation of State Calculations by Fast Computing Machines is an article published by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller in the Journal of Chemical Physics in 1953...

           — 1953 article proposing the Metropolis Monte Carlo algorithm
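
        A minimal random-walk Metropolis sketch in Python (NumPy assumed): propose a Gaussian step and accept it with probability min(1, p(x')/p(x)); the chain's stationary distribution is the unnormalized target p, here taken to be a standard normal for illustration.

          import numpy as np

          def log_target(x):          # assumed target: standard normal
              return -0.5 * x**2

          rng = np.random.default_rng(0)
          x, chain = 0.0, []
          for _ in range(10000):
              prop = x + rng.normal(scale=1.0)            # symmetric proposal
              if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                  x = prop                                # accept the move
              chain.append(x)
          print(np.mean(chain), np.var(chain))            # approach 0 and 1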
      • Gibbs sampling
        Gibbs sampling
        In statistics and in statistical physics, Gibbs sampling or a Gibbs sampler is an algorithm to generate a sequence of samples from the joint probability distribution of two or more random variables...

      • Coupling from the past
        Coupling from the past
        Among Markov chain Monte Carlo algorithms, coupling from the past is a method for sampling from the stationary distribution of a Markov chain. Contrary to many MCMC algorithms, coupling from the past gives in principle a perfect sample from the stationary distribution...

    • Dynamic Monte Carlo method
      Dynamic Monte Carlo method
      In chemistry, dynamic Monte Carlo is a method for modeling the dynamic behaviors of molecules by comparing the rates of individual steps with random numbers...

      • Kinetic Monte Carlo
        Kinetic Monte Carlo
        The kinetic Monte Carlo method is a Monte Carlo method computer simulation intended to simulate the time evolution of some processes occurring in nature. Typically these are processes that occur with a given known rate...

      • Gillespie algorithm
        Gillespie algorithm
        In probability theory, the Gillespie algorithm generates a statistically correct trajectory of a stochastic equation. It was created by Joseph L...

    • Particle filter
      Particle filter
      In statistics, particle filters, also known as Sequential Monte Carlo methods , are sophisticated model estimation techniques based on simulation...

      • Auxiliary particle filter
        Auxiliary particle filter
        The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to improve some deficiencies of the sequential importance resampling algorithm when dealing with tailed observation densities....

    • Reverse Monte Carlo
      Reverse Monte Carlo
      -Introduction:The Reverse Monte Carlo modelling method is a variation of the standard Metropolis-Hastings algorithm to solve an inverse problem whereby a model is adjusted until its parameters have the greatest consistency with experimental data...

    • Demon algorithm
      Demon algorithm
      The demon algorithm is a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy. An additional degree of freedom, called 'the demon', is added to the system and is able to store and provide energy. If a drawn microscopic state has lower energy than the...

  • Sampling methods:
    • Inverse transform sampling — general and straightforward method; cheap when the inverse CDF is known in closed form, but computationally expensive when it must be inverted numerically (see the sketch below)
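
      A minimal sketch in Python (NumPy assumed): for U ~ Uniform(0, 1) and a CDF F, the variable F⁻¹(U) has distribution F. For the exponential distribution the inverse CDF is available in closed form:

        import numpy as np

        rng = np.random.default_rng(0)
        lam = 2.0
        u = rng.uniform(size=100000)
        samples = -np.log(1.0 - u) / lam   # inverse CDF of Exponential(lam)
        print(samples.mean())              # approaches 1/lam = 0.5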
    • Rejection sampling
      Rejection sampling
      In mathematics, rejection sampling is a basic pseudo-random number sampling technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm"....

       — sample from a simpler distribution but reject some of the samples (a minimal sketch follows the Ziggurat entry below)
      • Ziggurat algorithm
        Ziggurat algorithm
        The ziggurat algorithm is an algorithm for pseudo-random number sampling. Belonging to the class of rejection sampling algorithms, it relies on an underlying source of uniformly-distributed random numbers, typically from a pseudo-random number generator, as well as precomputed tables. The...

         — uses a pre-computed table covering the probability distribution with rectangular segments
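
      A minimal rejection-sampling sketch in Python (NumPy assumed), drawing from p(x) = (3/2)x² on [-1, 1] with a uniform proposal q and envelope constant M = 3, so that M·q(x) ≥ p(x) everywhere; the ziggurat algorithm above is a heavily optimized, table-driven instance of the same accept/reject principle.

        import numpy as np

        rng = np.random.default_rng(0)

        def p(x):
            return 1.5 * x**2        # target density on [-1, 1]

        samples = []
        while len(samples) < 10000:
            x = rng.uniform(-1.0, 1.0)        # draw from proposal q = 1/2
            if rng.uniform() < p(x) / 1.5:    # accept with prob p(x)/(M*q(x))
                samples.append(x)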
    • For sampling from a normal distribution (a Box–Muller sketch follows this group):
      • Box–Muller transform
      • Marsaglia polar method
        Marsaglia polar method
        The polar method is a pseudo-random number sampling method for generating a pair of independent standard normal random variables...
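
      A minimal Box–Muller sketch in Python (NumPy assumed): two independent uniforms are mapped through a polar change of variables to two independent standard normals.

        import numpy as np

        rng = np.random.default_rng(0)
        u1 = rng.uniform(size=10000)
        u2 = rng.uniform(size=10000)
        r = np.sqrt(-2.0 * np.log(1.0 - u1))   # radius from the first uniform
        z0 = r * np.cos(2.0 * np.pi * u2)      # angle from the second uniform
        z1 = r * np.sin(2.0 * np.pi * u2)      # z0, z1 ~ N(0, 1), independent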

    • Convolution random number generator — generates a random variable as a sum of other random variables
  • Low-discrepancy sequence
    Low-discrepancy sequence
    In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x1, ..., xN has a low discrepancy....
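
    A minimal sketch of the simplest low-discrepancy sequence, the van der Corput sequence in base 2 (reflect the digits of n about the radix point), used here for a quasi-Monte Carlo estimate of an integral:

      def van_der_corput(n, base=2):
          q, denom = 0.0, 1.0
          while n:
              n, rem = divmod(n, base)
              denom *= base
              q += rem / denom
          return q

      pts = [van_der_corput(i) for i in range(1, 1001)]
      # quasi-Monte Carlo estimate of the integral of x**2 over [0, 1]
      print(sum(t * t for t in pts) / len(pts))   # near 1/3; error decays
                                                  # faster than plain Monte Carlo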

  • Event generator
    Event generator
    Event generators are software libraries that generate simulated high-energy particle physics events. They randomly generate events like those produced in particle accelerators, collider experiments or during the initial phases of the creation of the Universe....

  • Parallel tempering
    Parallel tempering
    Parallel tempering, also known as replica exchange MCMC sampling, is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo sampling methods more generally...

  • Umbrella sampling
    Umbrella sampling
    Umbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977...

     — improves sampling in physical systems with significant energy barriers
  • Variance reduction
    Variance reduction
    In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates that can be obtained for a given number of iterations. Every output random variable from the simulation is associated with a variance which...

     techniques:
    • Control variate
      Control variate
      The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.-Underlying principle:...

    • Importance sampling
      Importance sampling
      In statistics, importance sampling is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution rather than the distribution of interest. It is related to Umbrella sampling in computational physics...
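
      A minimal importance-sampling sketch in Python (NumPy and SciPy assumed): the rare-event probability P(X > 4) for X ~ N(0, 1) is estimated with a proposal shifted into the tail, reweighting each sample by p(x)/q(x).

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        x = rng.normal(loc=4.0, size=100000)       # proposal q = N(4, 1)
        w = norm.pdf(x) / norm.pdf(x, loc=4.0)     # importance weights p/q
        est = np.mean((x > 4.0) * w)
        print(est, norm.sf(4.0))                   # both about 3.2e-5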

    • Stratified sampling
      Stratified sampling
      In statistics, stratified sampling is a method of sampling from a population.In statistical surveys, when subpopulations within an overall population vary, it is advantageous to sample each subpopulation independently. Stratification is the process of dividing members of the population into...

    • VEGAS algorithm
      VEGAS algorithm
      The VEGAS algorithm, due to G. P. Lepage, is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the graph that make the greatest contribution to the final integral.The VEGAS algorithm...

  • Ensemble Kalman filter
    Ensemble Kalman filter
    The ensemble Kalman filter is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models...

     — recursive filter suitable for problems with a large number of variables
  • Applications:
    • Bond fluctuation model
      Bond fluctuation model
      The BFM is a lattice model for simulating the conformation and dynamics of polymer systems. There are two versions of the BFM used: The earlier version was first introduced by Carmesin and Kremer in 1988, and the later version by Shaffer in 1994...

       — for simulating the conformation and dynamics of polymer systems
    • Metropolis light transport
      Metropolis light transport
      Metropolis light transport is a rendering algorithm described in a SIGGRAPH 1997 paper by Eric Veach and Leonidas J. Guibas; it applies a variant of the Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three...

    • Monte Carlo method for photon transport
      Monte Carlo method for photon transport
      Modeling photon propagation with Monte Carlo methods is a flexible yet rigorous approach to simulate photon transport. In the method, local rules of photon transport are expressed as probability distributions which describe the step size of photon movement between sites of photon-tissue interaction...

    • Monte Carlo methods in finance
      Monte Carlo methods in finance
      Monte Carlo methods are used in finance and mathematical finance to value and analyze instruments, portfolios and investments by simulating the various sources of uncertainty affecting their value, and then determining their average value over the range of resultant outcomes. This is usually done...

    • Monte Carlo molecular modeling
      Monte Carlo molecular modeling
      Monte Carlo molecular modeling is the application of Monte Carlo methods to molecular problems. These problems can also be modeled by the molecular dynamics method. The difference is that this approach relies on statistical mechanics rather than molecular dynamics. Instead of trying to reproduce...

    • Quantum Monte Carlo
      Quantum Monte Carlo
      Quantum Monte Carlo is a large class of computer algorithms that simulate quantum systems with the idea of solving the quantum many-body problem. They use, in one way or another, the Monte Carlo method to handle the many-dimensional integrals that arise...

      • Diffusion Monte Carlo
        Diffusion Monte Carlo
        Diffusion Monte Carlo is a quantum Monte Carlo method that uses a Green's function to solve the Schrödinger equation. DMC is potentially numerically exact, meaning that it can find the exact ground state energy within a given error for any quantum system...

          — uses a Green's function to solve the Schrödinger equation
      • Gaussian quantum Monte Carlo
        Gaussian quantum Monte Carlo
        Gaussian Quantum Monte Carlo is a quantum Monte Carlo method that shows a potential solution to the fermion sign problem without the deficiencies of alternative approaches. Instead of the Hilbert space, this method works in the space of density matrices that can be spanned by an over-complete basis...

      • Path integral Monte Carlo
        Path integral Monte Carlo
        Path integral Monte Carlo is a quantum Monte Carlo method in the path integral formulation of quantum mechanics.The equations often are applied assuming that quantum exchange does not matter...

      • Reptation Monte Carlo

      • Variational Monte Carlo
        Variational Monte Carlo
        In mathematical physics, variational Monte Carlo is a quantum Monte Carlo method that applies the variational method to approximate the ground state of the system.The expectation value necessary can be written in the x representation as...

    • Methods for simulating the Ising model:
      • Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters
      • Wolff algorithm
        Wolff algorithm
        The Wolff algorithm, named after Ulli Wolff, is an algorithm for Monte Carlo simulation of the Ising model in which an equal-spin cluster is formed around one spin. That cluster is then flipped. The Wolff algorithm is an improvement over the Swendsen–Wang algorithm because it tends to form bigger...

         — improvement of the Swendsen–Wang algorithm
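
        A hedged sketch of one Wolff update in Python (NumPy assumed), for the ferromagnetic 2D Ising model with periodic boundaries: grow a cluster of equal spins, adding each aligned neighbour with probability p = 1 - exp(-2βJ), then flip the whole cluster. The function name is illustrative.

          import numpy as np
          from collections import deque

          def wolff_step(spins, beta, J=1.0, rng=None):
              rng = np.random.default_rng() if rng is None else rng
              n = spins.shape[0]
              p_add = 1.0 - np.exp(-2.0 * beta * J)
              seed = tuple(rng.integers(n, size=2))
              s = spins[seed]
              cluster, queue = {seed}, deque([seed])
              while queue:
                  i, j = queue.popleft()
                  for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                      ni, nj = ni % n, nj % n            # periodic boundaries
                      if ((ni, nj) not in cluster and spins[ni, nj] == s
                              and rng.uniform() < p_add):
                          cluster.add((ni, nj))
                          queue.append((ni, nj))
              for site in cluster:
                  spins[site] = -spins[site]             # flip the cluster

          spins = np.ones((32, 32), dtype=int)
          rng = np.random.default_rng(0)
          for _ in range(200):
              wolff_step(spins, beta=0.44, rng=rng)      # near the critical point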
    • Auxiliary field Monte Carlo
      Auxiliary field Monte Carlo
      Auxiliary field Monte Carlo is a method that allows the calculation, by use of Monte Carlo techniques, of averages of operators in many-body quantum mechanical or classical problems .-Reweighting procedure and numerical sign problem:The distinctive ingredient of "auxiliary field Monte Carlo" is...

       — computes averages of operators in many-body quantum mechanical problems
    • Cross-entropy method
      Cross-entropy method
      The cross-entropy method attributed to Reuven Rubinstein is a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling. The method originated from the field of rare event simulation, where...

       — for multi-extremal optimization and importance sampling
  • See also the list of statistics topics

Applications

  • Computational physics
    Computational physics
    Computational physics is the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists...

    • Computational electromagnetics
      Computational electromagnetics
      Computational electromagnetics, computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment....

    • Computational fluid dynamics
      Computational fluid dynamics
      Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with...

       (CFD)
      • Large eddy simulation
        Large eddy simulation
        Large eddy simulation is a mathematical model for turbulence used in computational fluid dynamics. It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, and many of the issues unique to LES were first explored by Deardorff...

      • Smoothed-particle hydrodynamics
      • Acoustic analogy
        Acoustic analogy
        Acoustic analogies are applied mostly in numerical aeroacoustics to reduce aeroacoustic sound sources to simple emitter types. They are therefore often also referred to as aeroacoustic analogies....

         — used in numerical aeroacoustics to reduce sound sources to simple emitter types
    • Computational magnetohydrodynamics
      Computational Magnetohydrodynamics
      Computational magnetohydrodynamics is a rapidly developing branch of magnetohydrodynamics that uses numerical methods and algorithms to solve and analyze problems that involve electrically conducting fluids. Most of the methods used in CMHD are borrowed from the well established techniques...

       (CMHD) — studies electrically conducting fluids
    • Climate model
      Climate model
      Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes from study of the dynamics of the climate system to projections of future climate...

    • Numerical weather prediction
      Numerical weather prediction
      Numerical weather prediction uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic...

      • Geodesic grid
        Geodesic grid
        A geodesic grid is a technique used to model the surface of a sphere with a subdivided polyhedron, usually an icosahedron.-Introduction:...

    • Celestial mechanics
      Celestial mechanics
      Celestial mechanics is the branch of astronomy that deals with the motions of celestial objects. The field applies principles of physics, historically classical mechanics, to astronomical objects such as stars and planets to produce ephemeris data. Orbital mechanics is a subfield which focuses on...

      • Numerical model of solar system
        Numerical model of solar system
        A numerical model of the Solar System is a set of mathematical equations, which, when solved, give the approximate positions of the planets as a function of time. Attempts to create such a model established the more general field of celestial mechanics. The results of this simulation can be...

    • Multiphysics Methods Group
      Multiphysics Methods Group
      The Multiphysics Methods Group is a program at Idaho National Laboratory begun in 2004. It uses modeling software to simulate complex physical and chemical reactions inside nuclear reactors...

       — program at Idaho National Laboratory studying simulations of nuclear reactors
  • Computational chemistry
    Computational chemistry
    Computational chemistry is a branch of chemistry that uses principles of computer science to assist in solving chemical problems. It uses the results of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids...

    • Cell lists
      Cell lists
      Cell lists are a tool for finding all atom pairs within a given cut-off distance of each other in molecular dynamics simulations...

    • Coupled cluster
      Coupled cluster
      Coupled cluster is a numerical technique used for describing many-body systems. Its most common use is as one of several quantum chemical post-Hartree–Fock ab initio quantum chemistry methods in the field of computational chemistry...

    • Density functional theory
      Density functional theory
      Density functional theory is a quantum mechanical modelling method used in physics and chemistry to investigate the electronic structure of many-body systems, in particular atoms, molecules, and the condensed phases. With this theory, the properties of a many-electron system can be determined by...

    • Self-consistent field method
  • Computational statistics
    Computational statistics
    Computational statistics, or statistical computing, is the interface between statistics and computer science. It is the area of computational science specific to the mathematical science of statistics....

The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 