List of numerical analysis topics
General
- Iterative method — generates a sequence of improving approximate solutions for a class of problems
- Rate of convergence — the speed at which a convergent sequence approaches its limit
- Series acceleration — methods to accelerate the speed of convergence of a series
  - Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch after this list)
  - Minimum polynomial extrapolation — for vector sequences
  - Richardson extrapolation
  - Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
  - Van Wijngaarden transformation — for accelerating the convergence of an alternating series
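For illustration, a minimal Python sketch of Aitken's delta-squared process, applied here to the partial sums of the Leibniz series for π (the function name and the test series are illustrative choices, not taken from any particular reference):
```python
def aitken(seq):
    """Aitken's delta-squared transform: from x_n, x_{n+1}, x_{n+2}
    build a new sequence that converges faster when the input
    converges linearly."""
    return [x0 - (x1 - x0) ** 2 / (x2 - 2 * x1 + x0)
            for x0, x1, x2 in zip(seq, seq[1:], seq[2:])]

# Partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - ...) -> pi
s, partial = 0.0, []
for n in range(12):
    s += 4.0 * (-1) ** n / (2 * n + 1)
    partial.append(s)

print(partial[-1])          # ~3.058, still far from pi
print(aitken(partial)[-1])  # ~3.1416, far more accurate after one pass
```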
- Level set method — technique for tracking interfaces and shapes on a fixed Cartesian grid, without parameterizing them
  - Level set (data structures) — data structures for representing level sets
- Abramowitz and Stegun — book containing formulas and tables of many special functions
  - Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
- Curse of dimensionality — phenomena that arise in high-dimensional spaces but not in low-dimensional settings
- Local convergence and global convergence — whether you need a good initial guess to get convergence
- Superconvergence — convergence faster than generally expected
- Discretization — transferring continuous models and equations into discrete counterparts
  - Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
- Difference quotient
- Complexity:
  - Computational complexity of mathematical operations
  - Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
- Symbolic-numeric computation — combination of symbolic and numeric methods
- ABS methods — a class of algorithms named after Abaffy, Broyden and Spedicato
- Cultural aspects:
  - International Workshops on Lattice QCD and Numerical Analysis
  - Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
Error
Error analysis — the study of the kind and quantity of error that occurs in a calculation
- Approximation — a representation of something that is not exact, but still close enough to be useful
- Approximation error — the discrepancy between an exact value and some approximation to it
- Arithmetic precision
- Condition number — measures, in the worst case, how much the output of a function can change in proportion to small changes in its argument
- Discretization error — error from representing a function of a continuous variable by a finite number of evaluations, for example on a lattice
- Floating point number — represents real numbers to a fixed number of significant digits, scaled by an exponent
  - Guard digit — extra precision introduced during a computation to reduce round-off error
- Arbitrary-precision arithmetic — calculations on numbers whose precision is limited only by available memory
- Truncation — rounding a floating-point number by discarding all digits after a certain digit
- Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
  - See also: Interval boundary element method, Interval finite element
- Loss of significance — undesirable effect of operations such as subtracting two nearly equal floating-point numbers: relative error grows far more than absolute error (see the sketch at the end of this section)
- Numerical error — the combined effect of round-off and truncation error in a calculation
- Numerical stability — a desirable property of numerical algorithms; the precise definition depends on context, but it is related to the accuracy of the algorithm
- Error propagation:
  - Propagation of uncertainty — the effect of variables' uncertainties on the uncertainty of a function based on them
  - Significance arithmetic — rules for approximating the propagation of uncertainty in scientific or statistical calculations
  - Residual (numerical analysis)
- Relative difference — the relative difference between x and y is |x − y| / max(|x|, |y|)
- Round-off error — the difference between the computed approximation of a number and its exact mathematical value
- Stochastic rounding
- Significant figures — the digits of a number that carry meaning contributing to its precision
  - False precision — giving more significant figures than appropriate
- Truncation error — error committed by doing only a finite number of steps
- Well-posed problem — a problem whose solution exists, is unique, and depends continuously on the data
- Affine arithmetic — model for self-validated numerical analysis in which quantities are represented as affine combinations of primitive variables standing for sources of uncertainty
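To make the loss-of-significance entry above concrete, here is a minimal Python sketch; the function f(x) = √(x+1) − √x and the test value are illustrative choices:
```python
import math

# Naive evaluation subtracts two nearly equal square roots, so most
# leading digits cancel and only a few significant digits survive.
def f_naive(x):
    return math.sqrt(x + 1) - math.sqrt(x)

# Algebraically equivalent form that avoids the cancellation.
def f_stable(x):
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(f_naive(x))   # heavily contaminated by round-off
print(f_stable(x))  # accurate to nearly full double precision
```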
Elementary and special functions
- Summation:
  - Kahan summation algorithm — reduces the numerical error in a sum of finite-precision floating-point numbers (see the sketch after this list)
  - Binary splitting — speeds up the numerical evaluation of many series with rational terms
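A minimal sketch of Kahan (compensated) summation in Python; the test data are an illustrative choice:
```python
def kahan_sum(values):
    """Compensated summation: the variable c captures the low-order
    bits that plain addition to the running total would lose."""
    total = 0.0
    c = 0.0                  # running compensation
    for x in values:
        y = x - c            # apply the correction from the last step
        t = total + y        # low-order digits of y are lost here...
        c = (t - total) - y  # ...and recovered algebraically here
        total = t
    return total

# 1.0 followed by a million tiny terms: naive summation drops them all.
data = [1.0] + [1e-16] * 10**6
print(sum(data))        # 1.0 (every tiny term is rounded away)
print(kahan_sum(data))  # ~1.0000000001, close to the exact result
```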
- Multiplication:
  - Multiplication algorithm — general discussion, simple methods
  - Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication (see the sketch after this list)
  - Toom–Cook multiplication — generalization of Karatsuba multiplication
  - Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
  - Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
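A minimal sketch of Karatsuba multiplication in Python, splitting by decimal digits for readability (a practical implementation would split by machine words):
```python
def karatsuba(x, y):
    """Multiply non-negative integers with three recursive
    half-size multiplications instead of four."""
    if x < 10 or y < 10:               # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    p = 10 ** m
    a, b = divmod(x, p)                # x = a*p + b
    c, d = divmod(y, p)                # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd = ad + bc, saving one multiplication
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + mid * p + bd

print(karatsuba(12345678, 87654321) == 12345678 * 87654321)  # True
```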
- Exponentiation:
  - Exponentiation by squaring — fast computation of large integer powers using repeated squaring (see the sketch after this list)
  - Addition-chain exponentiation — exponentiation with a minimal number of multiplications, via a shortest addition chain
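A minimal sketch of exponentiation by squaring (binary exponentiation) in Python:
```python
def power(base, exp):
    """Compute base**exp for a non-negative integer exp using
    O(log exp) multiplications: square for each binary digit of
    exp, and multiply into the result when the digit is 1."""
    result = 1
    while exp > 0:
        if exp & 1:        # current binary digit of exp is 1
            result *= base
        base *= base       # square for the next binary digit
        exp >>= 1
    return result

print(power(3, 13))  # 1594323
```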
- Polynomials:
  - Horner scheme — efficient evaluation of polynomials in monomial form (see the sketch after this list)
  - Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
  - Clenshaw algorithm — recursive evaluation of a linear combination of Chebyshev polynomials
  - De Casteljau's algorithm — recursive evaluation of polynomials in Bernstein form or Bézier curves
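A minimal sketch of the Horner scheme in Python; the example polynomial is an illustrative choice:
```python
def horner(coeffs, x):
    """Evaluate a polynomial given its coefficients from the highest
    degree down (a_n, ..., a_0), using one multiplication and one
    addition per coefficient instead of forming each power of x."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3:  54 - 54 + 6 - 1 = 5
print(horner([2.0, -6.0, 2.0, -1.0], 3.0))  # 5.0
```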
- Square roots and other roots:
  - Integer square root — the greatest integer less than or equal to the square root of a positive integer
  - Methods of computing square roots (see the Newton-iteration sketch after this list)
  - nth root algorithm
  - Shifting nth root algorithm — similar to long division
  - Alpha max plus beta min algorithm — approximates √(x² + y²)
  - Fast inverse square root — calculates 1/√x using details of the IEEE floating-point system
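As an example of the square-root methods above, a minimal sketch of Heron's method (Newton's method applied to f(x) = x² − a) in Python; the tolerance is an illustrative choice:
```python
def heron_sqrt(a, tol=1e-12):
    """Iterate x <- (x + a/x) / 2, which converges quadratically
    to sqrt(a) from any positive starting guess."""
    if a < 0:
        raise ValueError("a must be non-negative")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0
    while abs(x * x - a) > tol * a:   # relative stopping test
        x = 0.5 * (x + a / x)
    return x

print(heron_sqrt(2.0))  # 1.4142135623...
```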
- Elementary functions (exponential, logarithm, trigonometric functions):
  - Generating trigonometric tables
  - CORDIC — shift-and-add algorithm using a table of arc tangents (see the sketch after this list)
  - BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
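A minimal sketch of rotation-mode CORDIC in Python. Real implementations use integer shifts and a hard-coded angle table; here the table is computed with math for brevity, and the domain is limited to |θ| ≤ π/2:
```python
import math

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]   # table of arc tangents
K = 1.0                                             # cumulative scale factor
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Drive the residual angle z to zero with micro-rotations whose
    tangents are powers of two (pure shifts in hardware)."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x        # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.5)
print(s, math.sin(0.5))  # both ~0.4794255386
```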
- Gamma function:
  - Lanczos approximation
  - Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
- AGM method — computes arithmetic–geometric mean; related methods compute special functions
- Gal's accurate tables — table of function values with unequal spacing to reduce round-off error
- Spigot algorithm — algorithms that can compute individual digits of a real number
- Approximations of π:
  - Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
  - Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean (see the sketch after this list)
  - Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
  - Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
  - Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
  - Bellard's formula — faster version of Bailey–Borwein–Plouffe formula
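A minimal sketch of the Gauss–Legendre iteration using Python's decimal module; the guard-digit margin and iteration count are illustrative choices:
```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits=50):
    """AGM-based iteration for pi; the number of correct digits
    roughly doubles with every pass."""
    getcontext().prec = digits + 10              # guard digits
    a, b = Decimal(1), 1 / Decimal(2).sqrt()
    t, p = Decimal("0.25"), Decimal(1)
    for _ in range(digits.bit_length()):         # ~log2(digits) passes suffice
        a, b, t, p = ((a + b) / 2, (a * b).sqrt(),
                      t - p * ((a - b) / 2) ** 2, 2 * p)
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))  # 3.14159265358979323846...
```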
Numerical linear algebra
Numerical linear algebra — study of numerical algorithms for linear algebra problems
Basic concepts
- Types of matrices appearing in numerical analysis:
  - Sparse matrix — a matrix populated primarily with zeros
  - Band matrix — non-zero entries confined to a diagonal band around the main diagonal
  - Tridiagonal matrix
  - Pentadiagonal matrix
  - Skyline matrix — storage scheme that keeps only the variable-length envelope of each row
  - Circulant matrix — each row is the preceding row rotated one element to the right; diagonalized by a discrete Fourier transform
  - Triangular matrix — all entries below (or above) the main diagonal are zero
  - Diagonally dominant matrix — in every row, the magnitude of the diagonal entry is at least the sum of the magnitudes of the other entries
  - Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
  - Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
  - Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
- Algorithms for matrix multiplication:
  - Strassen algorithm — multiplies two n × n matrices with fewer scalar multiplications than the straightforward method
  - Coppersmith–Winograd algorithm — asymptotically the fastest known algorithm for square matrix multiplication
  - Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid
  - Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication (see the sketch after this list)
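A minimal sketch of Freivalds' randomized verification in Python, using plain lists of lists; the trial count is an illustrative choice:
```python
import random

def freivalds(A, B, C, trials=20):
    """Check whether A*B == C for n x n matrices.  Each trial costs
    O(n^2); a wrong C survives all trials with probability <= 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]      # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False      # definitely not the product
    return True               # the product, with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]      # the true product A*B
print(freivalds(A, B, C))     # True
```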
- Matrix decompositions:
  - LU decomposition — lower triangular times upper triangular
  - QR decomposition — orthogonal matrix times triangular matrix
  - Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
- Decompositions by similarity:
  - Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
  - Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
  - Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
  - Schur decomposition — similarity transform bringing the matrix to a triangular matrix
Solving systems of linear equations
- Gaussian elimination — algorithm for solving systems of linear equations by row reduction
  - Row echelon form — matrix in which all entries below a nonzero entry are zero
  - Gauss–Jordan elimination — variant in which the entries above the pivot are also zeroed
  - Montante's method — variant which ensures that all entries remain integers if the initial matrix has integer entries
  - Tridiagonal matrix algorithm (Thomas algorithm) — simplified form of Gaussian elimination for tridiagonal matrices (see the sketch after this list)
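A minimal sketch of the tridiagonal (Thomas) algorithm in Python; it assumes no pivoting is needed (e.g. a diagonally dominant system), and the test system is an illustrative choice:
```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n): a is the sub-diagonal
    (a[0] unused), b the diagonal, c the super-diagonal (c[-1]
    unused), d the right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 2x1 - x2 = 1;  -x1 + 2x2 - x3 = 0;  -x2 + 2x3 = 1
print(thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 0, 1]))  # [1.0, 1.0, 1.0]
```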
- LU decomposition — write a matrix as a product of a lower-triangular and an upper-triangular matrix
  - Crout matrix decomposition
  - LU reduction — a special parallelized version of a LU decomposition algorithm, used in supercomputer benchmarking
- Block LU decomposition — decomposition of a block matrix into lower and upper block triangular factors
- Cholesky decomposition — for solving a system with a positive definite matrix (see the sketch after this block)
  - Minimum degree algorithm — permutes the rows and columns of a symmetric sparse matrix beforehand to reduce fill-in of the Cholesky factor
  - Symbolic Cholesky decomposition — determines the non-zero pattern of the factors of a symmetric sparse matrix
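A minimal sketch of the Cholesky–Banachiewicz factorization in Python; the example matrix is a standard illustrative choice:
```python
import math

def cholesky(A):
    """Factor a symmetric positive definite matrix A as L * L^T
    with L lower triangular, computing L row by row."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 12.0, -16.0],
     [12.0, 37.0, -43.0],
     [-16.0, -43.0, 98.0]]
for row in cholesky(A):
    print(row)   # L = [[2,0,0], [6,1,0], [-8,5,3]]
```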
- Frontal solver — for sparse matrices; used in finite element methods
- Levinson recursion — for Toeplitz matrices
- SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
- Iterative methods:
  - Jacobi method — solves each diagonal unknown in turn from the previous iterate (see the sketch after this list)
  - Gauss–Seidel method
    - Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
  - Modified Richardson iteration
  - Conjugate gradient method (CG) — assumes that the matrix is positive definite
    - Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
  - Biconjugate gradient method (BiCG)
  - Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
  - Stone's method (SIP, Strongly Implicit Procedure) — uses an incomplete LU decomposition
  - Kaczmarz method — projects the iterate onto the hyperplane of each equation in turn
  - Preconditioner — transforms a problem into a form more suitable for numerical solution, typically by reducing its condition number
    - Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
    - Incomplete LU factorization — sparse approximation to the LU factorization
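A minimal sketch of the Jacobi iteration in Python; the fixed iteration count and the diagonally dominant test system are illustrative choices:
```python
def jacobi(A, b, iterations=50):
    """Each sweep solves every equation for its diagonal unknown
    using only the previous iterate, so all updates could run in
    parallel.  Converges e.g. for strictly diagonally dominant A."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i] for i in range(n)]
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant
b = [9.0, 16.0]
print(jacobi(A, b))            # ~[1.6111, 2.5556]
```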
- Underdetermined and overdetermined systems (systems that have no solution or more than one solution):
  - Numerical computation of null space — find all solutions of an underdetermined system
  - Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual
  - Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
Eigenvalue algorithms
Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
- Power iteration — repeatedly applies the matrix and normalizes, converging to the dominant eigenpair (see the sketch after this list)
- Inverse iteration — finds an approximate eigenvector when an approximation to the corresponding eigenvalue is already known
- Rayleigh quotient iteration — extends inverse iteration with Rayleigh-quotient eigenvalue estimates
- Arnoldi iteration — based on Krylov subspaces
- Lanczos algorithm — Arnoldi, specialized for positive-definite matrices
- QR algorithm — computes eigenvalues and eigenvectors via repeated QR decompositions
- Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
  - Jacobi rotation — the building block, almost a Givens rotation
- Divide-and-conquer eigenvalue algorithm
- Folded spectrum method — finds the eigenvector with eigenvalue closest to a given search value
- LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
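A minimal sketch of power iteration in Python; the fixed iteration count and the test matrix are illustrative choices:
```python
def power_iteration(A, iterations=100):
    """Repeatedly apply A and normalize; the iterate converges to an
    eigenvector of the eigenvalue of largest magnitude (assuming one
    dominates), and the Rayleigh quotient estimates that eigenvalue."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iterations):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n)) / sum(c * c for c in v)
    return lam, v

lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
print(lam)                                          # ~3.0
```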
Other concepts and algorithms
- Orthogonalization algorithms:
  - Gram–Schmidt process — orthonormalizes a set of vectors in an inner product space (see the sketch after this list)
  - Householder transformation — reflection about a hyperplane through the origin
  - Givens rotation
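A minimal sketch of the (modified) Gram–Schmidt process in Python; the input vectors are an illustrative choice:
```python
import math

def modified_gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors.  The modified
    variant subtracts each projection immediately, which is more
    stable in floating point than the classical formulation."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            dot = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

Q = modified_gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(Q)   # two orthonormal vectors spanning the same plane
```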
- Krylov subspace — the subspace spanned by the images of a vector b under the first r powers of a matrix A
- Block matrix pseudoinverse — formula for the pseudoinverse of a partitioned matrix
- Bidiagonalization — unitary decomposition U* A V = B with B upper bidiagonal
- In-place matrix transposition — computing the transpose of a matrix without using much additional storage
- Pivot element — entry in a matrix on which the algorithm concentrates
- Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
Polynomial interpolation
Polynomial interpolation — interpolation by polynomials: given some points, find a polynomial which goes exactly through them
- Linear interpolation — curve fitting with linear polynomials between two known points
- Runge's phenomenon — oscillation at the edges of an interval when interpolating with high-degree polynomials
- Vandermonde matrix
- Chebyshev polynomials
- Chebyshev nodes — the roots of the Chebyshev polynomial of the first kind; as interpolation nodes they minimize Runge's phenomenon
- Lebesgue constant (interpolation) — how good the interpolant is compared with the best polynomial approximation
- Different forms for the interpolant:
  - Newton polynomial — the interpolation polynomial written in Newton form
    - Divided differences — recursive scheme for the coefficients in the Newton form
    - Neville's algorithm — for evaluating the interpolant; based on the Newton form (see the sketch after this list)
  - Lagrange polynomial — the interpolation polynomial written in Lagrange form
  - Bernstein polynomial — especially useful for approximation
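A minimal sketch of Neville's algorithm in Python; the sample points come from the illustrative polynomial x² + x + 1:
```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i])
    at x by repeatedly combining lower-degree interpolants; no
    explicit coefficients are ever formed."""
    p = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]

# Quadratic through (0,1), (1,3), (2,7) is x^2 + x + 1; at x = 3 it is 13.
print(neville([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 3.0))  # 13.0
```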
- Extensions to multiple dimensions:
  - Bilinear interpolation — linear interpolation extended to functions of two variables on a regular grid (see the sketch after this list)
  - Trilinear interpolation — the analogue on a 3-dimensional regular grid
  - Bicubic interpolation — cubic interpolation on a two-dimensional regular grid; smoother than bilinear
  - Tricubic interpolation — cubic interpolation on a 3-dimensional regular grid
  - Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
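A minimal sketch of bilinear interpolation in Python; the rectangle and corner values are illustrative choices:
```python
def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    """Interpolate at (x, y) in the rectangle [x0,x1] x [y0,y1] from
    the corner values f00 = f(x0,y0), f10 = f(x1,y0), f01 = f(x0,y1),
    f11 = f(x1,y1): linear in x along both edges, then linear in y."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    bottom = f00 * (1 - tx) + f10 * tx   # along y = y0
    top = f01 * (1 - tx) + f11 * tx      # along y = y1
    return bottom * (1 - ty) + top * ty

# Corner values 0, 1, 1, 2 on the unit square: the centre gives 1.0.
print(bilinear(0.5, 0.5, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 2.0))
```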
- Hermite interpolation — interpolation that matches derivative values as well as function values
- Birkhoff interpolation
Spline interpolation
Spline interpolation — interpolation by piecewise polynomials; the interpolation error can be made small even with low-degree polynomials
- Spline (mathematics) — the piecewise polynomials used as interpolants
- Perfect spline — polynomial spline of degree m whose mth derivative is ±1 between knots and changes sign at every knot
- Cubic Hermite spline — third-degree spline with each piece in Hermite form
  - Monotone cubic interpolation — variant of cubic interpolation that preserves monotonicity of the data
- Hermite spline — spline curve where each polynomial of the spline is in Hermite form
- Cardinal spline
- Bézier spline — spline curve where each polynomial is in Bézier form
  - Bézier curve — parametric curve defined by control points, widely used in computer graphics
  - De Casteljau's algorithm — recursive evaluation of Bézier curves (see the sketch below)
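A minimal sketch of De Casteljau's algorithm in Python; the quadratic control polygon is an illustrative choice:
```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation between consecutive control points; more
    stable than evaluating the Bernstein polynomials directly."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0):
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # (1.0, 1.0)
```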
- Smoothing spline — a method of smoothing using a spline function
- Generalizations to more dimensions:
  - Bézier triangle — maps a triangle to R³
  - Bézier surface — maps a square to R³
- B-spline — spline function with minimal support for a given degree, smoothness and domain partition
  - Truncated power function
  - De Boor's algorithm — generalizes De Casteljau's algorithm
  - Nonuniform rational B-spline (NURBS)
- Kochanek–Bartels spline
- Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline
- See also: List of numerical computational geometry topics
Trigonometric interpolation
Trigonometric interpolation — interpolation by trigonometric polynomials, i.e., sums of sines and cosines
- Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points
  - Relations between Fourier transforms and Fourier series
- Fast Fourier transform (FFT) — a fast method for computing the discrete Fourier transform
  - Bluestein's FFT algorithm — re-expresses a DFT of arbitrary size as a convolution
  - Bruun's FFT algorithm — based on a recursive polynomial-factorization approach
  - Cooley–Tukey FFT algorithm (see the sketch after this list)
  - Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4
  - Goertzel algorithm — identifies individual frequency components of a signal
  - Prime-factor FFT algorithm — re-expresses a DFT of size N = N₁N₂, with N₁ and N₂ coprime, as a two-dimensional DFT
  - Rader's FFT algorithm — computes a DFT of prime size as a cyclic convolution
  - Butterfly diagram — data-flow step that combines smaller DFTs into a larger one
  - Twiddle factor — the trigonometric constant coefficients that are multiplied by the data
  - Methods for computing discrete convolutions with finite impulse response filters using the FFT:
    - Overlap-add method
    - Overlap-save method
- Sigma approximationSigma approximationIn mathematics, σ-approximation adjusts a Fourier summation to eliminate the Gibbs phenomenon which would otherwise occur at discontinuities.A σ-approximated summation for a series of period T can be written as follows:...
- Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant
- Gibbs phenomenonGibbs phenomenonIn mathematics, the Gibbs phenomenon, named after the American physicist J. Willard Gibbs, is the peculiar manner in which the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity: the nth partial sum of the Fourier series has large...
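The Cooley–Tukey entry above points here: a minimal recursive radix-2 sketch, assuming the input length is a power of two (production FFTs are typically iterative and handle arbitrary sizes):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time Cooley–Tukey FFT of a power-of-two-length list."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])  # DFT of the even-indexed samples
    odd = fft(x[1::2])   # DFT of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor e^(-2*pi*i*k/n) applied in the butterfly step
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

The loop body is exactly the structure drawn in a butterfly diagram, and the complex exponential is the twiddle factor listed above.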
Other interpolants
- Simple rational approximation — interpolation with rational functions whose poles and zeros are simple
- Polynomial and rational function modeling — comparison of polynomial and rational interpolation
- Wavelet
  - Continuous wavelet
  - Transfer matrix
  - See also: List of functional analysis topics, List of wavelet-related transforms
- Inverse distance weighting
- Cascade algorithm — iterative algorithm to compute wavelets
- Radial basis function (RBF) — a function of the form f(x) = φ(|x − x0|); see the interpolation sketch after this section
  - Polyharmonic spline — a commonly used radial basis function
  - Thin plate spline — a specific polyharmonic spline: r² log r
  - Radial basis function network — neural network using radial basis functions as activation functions
  - Hierarchical RBF
- Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant
  - Catmull–Clark subdivision surface
  - Doo–Sabin subdivision surface
  - Loop subdivision surface
- Slerp — spherical linear interpolation; used for interpolating quaternions to animate 3D rotations
- Irrational base discrete weighted transform — variant of the fast Fourier transform using an irrational base
- Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound
  - Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
- Pareto interpolation — estimates the median and other properties of a population that follows a Pareto distribution
- Multivariate interpolation — the function being interpolated depends on more than one variable
  - Barnes interpolation — method for two-dimensional functions using Gaussians common in meteorology
  - Lanczos resampling — based on convolution with a sinc function
  - Natural neighbor interpolation
  - PDE surface
  - Methods based on polynomials are listed under Polynomial interpolation
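The RBF entry above refers to this sketch: scattered-data interpolation with a Gaussian radial basis function. The function names and the shape parameter `eps` are illustrative choices for the example, not a fixed convention:

```python
import numpy as np

def rbf_interpolant(centers, values, eps=1.0):
    """Build s(x) = sum_j w_j * phi(|x - x_j|) interpolating values at centers."""
    phi = lambda r: np.exp(-((eps * r) ** 2))              # Gaussian basis
    A = phi(np.abs(centers[:, None] - centers[None, :]))   # interpolation matrix
    w = np.linalg.solve(A, values)                         # enforce s(x_i) = f(x_i)
    return lambda x: phi(np.abs(np.asarray(x)[..., None] - centers)) @ w

# Usage: interpolate sin at five scattered points
xs = np.array([0.0, 0.7, 1.3, 2.2, 3.0])
s = rbf_interpolant(xs, np.sin(xs))
print(s(1.0))  # close to sin(1.0)
```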
Approximation theory
Approximation theory — how functions can best be approximated with simpler functions, and how to quantify the errors introduced thereby
- Orders of approximation
- Lebesgue's lemma — provides a bound for the projection error
- Curve fitting
  - Vector field reconstruction
- Modulus of continuity — measures smoothness of a function
- Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)
- Approximation by polynomials:
  - Linear approximation
  - Bernstein polynomial — basis of polynomials useful for approximating a function; see the sketch after this section
  - Remez algorithm — for constructing the best polynomial approximation in the L∞-norm
  - Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions
  - Bernstein's constant — error when approximating |x| by a polynomial
- Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
- Jackson's inequality — upper bound for best approximation by a trigonometric polynomial
- Different approximations:
  - Moving least squares
  - Padé approximant — "best" approximation of a function by a rational function of given order
    - Padé table — table of Padé approximants
  - Szász–Mirakyan operator — approximation by e^(−nx) x^k on a semi-infinite interval
  - Szász–Mirakjan–Kantorovich operator
  - Baskakov operator — generalizes Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
  - Favard operator — approximation by sums of Gaussians
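The Bernstein polynomial entry above refers to this sketch: evaluating the degree-n Bernstein approximation B_n(f)(x) = Σ_{k=0}^{n} f(k/n) C(n,k) x^k (1−x)^(n−k) of a function on [0, 1]:

```python
from math import comb, sin

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial approximation of f, evaluated at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1))

# B_n(f) converges uniformly to f for continuous f (Weierstrass approximation theorem)
print(bernstein(sin, 50, 0.5), sin(0.5))
```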
Miscellaneous
- Extrapolation
  - Linear predictive analysis — linear extrapolation; see the sketch after this section
- Unisolvent functions — functions for which the interpolation problem has a unique solution
- Regression analysis
- Curve-fitting compaction — data compaction by replacing the data with an analytical expression
- Interpolation (computer programming) — interpolation in the context of computer graphics
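Linear predictive analysis, as the entry above notes, is essentially first-order extrapolation: assume the series keeps changing at its most recent rate. A minimal sketch:

```python
def extrapolate_linear(samples, steps=1):
    """Predict future values by continuing the slope of the last two samples."""
    slope = samples[-1] - samples[-2]
    return [samples[-1] + slope * k for k in range(1, steps + 1)]

print(extrapolate_linear([1.0, 2.0, 3.1], steps=2))  # roughly [4.2, 5.3]
```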
Finding roots of nonlinear equations
- See #Numerical linear algebra for linear equations
Root-finding algorithm — algorithms for solving the equation f(x) = 0
- General methods:
  - Bisection method — simple and robust; linear convergence
    - Lehmer–Schur algorithm — variant for complex functions
  - Fixed point iteration
  - Newton's method — based on linear approximation around the current iterate; quadratic convergence (see the sketch after this section)
    - Newton fractal
    - Quasi-Newton method — uses an approximation of the Jacobian:
      - Broyden's method — uses a rank-one update for the Jacobian
      - SR1 formula — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
      - Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite
      - BFGS method — rank-two update of the Jacobian in which the matrix remains positive definite
      - L-BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems
    - Steffensen's method — uses divided differences instead of the derivative
  - Secant method — based on linear interpolation at last two iterates
  - False position method — secant method with ideas from the bisection method
  - Müller's method — based on quadratic interpolation at last three iterates
  - Inverse quadratic interpolation — similar to Müller's method, but interpolates the inverse
  - Brent's method — combines bisection method, secant method and inverse quadratic interpolation
  - Ridders' method — fits a linear function times an exponential to last two iterates and their midpoint
  - Halley's method — uses f, f′ and f″; achieves cubic convergence
  - Householder's method — uses first d derivatives to achieve order d + 1; generalizes Newton's and Halley's methods
- Methods for polynomials:
  - Aberth method
  - Bairstow's method
  - Durand–Kerner method
  - Graeffe's method
  - Jenkins–Traub algorithm — fast, reliable, and widely used
  - Laguerre's method
  - Splitting circle method
- Analysis:
  - Numerical continuation — tracking a root as one parameter in the equation changes
    - Piecewise linear continuation
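The Newton's method entry above refers to this sketch: linearize f at the current iterate and jump to the root of that linearization. Near a simple root the error roughly squares at each step:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 using Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)  # root of the local linearization
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; try a better start or a bracketing method")

# Usage: sqrt(2) as the positive root of x^2 - 2
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.4142135623...
```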
Optimization
Optimization (mathematics) — algorithms for finding maxima or minima of a given function
Basic concepts
- Active set
- Candidate solution
- Constraint (mathematics)
- Corner solution
- Fitness function — (esp. in genetic algorithms) an approximation to the objective function that is easier to evaluate
  - Fitness approximation
- Global optimum and Local optimum
- Maxima and minima
- Slack variable
- Continuous optimization
- Discrete optimization
Linear programming
Linear programming (also treats integer programming) — objective function and constraints are linear
- Algorithms for linear programming (see the usage sketch after this section):
  - Simplex algorithm
    - Bland's rule — rule to avoid cycling in the simplex method
  - Interior point method
    - Karmarkar's algorithm
    - Mehrotra predictor–corrector method
  - Delayed column generation
  - k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)
- Linear complementarity problem
- Decompositions:
  - Benders' decomposition
  - Dantzig–Wolfe decomposition
- Fourier–Motzkin elimination
- Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone
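As a usage illustration for the algorithms listed above, here is a small LP solved with SciPy's `linprog` (assuming SciPy is installed; the solver choice and defaults are SciPy's, not part of the algorithms themselves):

```python
# Maximize x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
from scipy.optimize import linprog

res = linprog(
    c=[-1, -2],                  # linprog minimizes, so negate the objective
    A_ub=[[1, 1], [1, 0]],       # coefficients of the <= constraints
    b_ub=[4, 2],
    bounds=[(0, None), (0, None)],
)
print(res.x, -res.fun)           # optimum (0, 4) with objective value 8
```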
Nonlinear programming
Nonlinear programming — the most general optimization problem in the usual framework
- Special cases of nonlinear programming:
  - Quadratic programming
    - Linear least squares
    - Frank–Wolfe algorithm
    - Sequential Minimal Optimization — breaks up large QP problems into a series of smallest possible QP problems
    - Bilinear program
  - Convex optimization
    - Linear matrix inequality
    - Conic optimization
      - Semidefinite programming
      - Second-order cone programming
      - Quadratic programming (see above)
    - Bregman method — row-action method for strictly convex optimization problems
    - Subgradient method — extension of steepest descent for problems with a nondifferentiable objective function
  - Geometric programming — problems involving signomials or posynomials
    - Signomial — similar to polynomials, but exponents need not be integers
    - Posynomial — a signomial with positive coefficients
  - Quadratically constrained quadratic program
  - Linear-fractional programming — objective is ratio of linear functions, constraints are linear
  - Least squares — the objective function is a sum of squares
    - Non-linear least squares
    - Gauss–Newton algorithm
      - Generalized Gauss–Newton method — for constrained nonlinear least-squares problems
    - Levenberg–Marquardt algorithm
  - Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities
  - Univariate optimization:
    - Golden section search
    - Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
- General algorithms:
  - Concepts:
    - Descent direction
    - Guess value — the initial guess for a solution with which an algorithm starts
    - Line search
      - Backtracking line search
      - Wolfe conditions
  - Gradient descent — see the sketch with backtracking line search after this section
    - Stochastic gradient descent
  - Successive linear programming (SLP) — replace problem by a linear programming problem, solve that, and repeat
  - Newton's method in optimization
    - See also under Newton algorithm in the section Finding roots of nonlinear equations
  - Nonlinear conjugate gradient method
  - Nelder–Mead method
  - Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence
  - Ternary search
  - Tabu search
  - Guided Local Search — modification of search algorithms which builds up penalties during a search
  - Reactive search optimization (RSO) — the algorithm adapts its parameters automatically
  - Least absolute deviations
  - Expectation–maximization algorithm
    - Ordered subset expectation maximization
  - Nearest neighbor search
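Pulling together several of the concepts above (descent direction, backtracking line search, Armijo-type sufficient decrease), here is a minimal gradient descent sketch:

```python
import numpy as np

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimize f from x0, choosing step sizes by backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds
        while f(x - t * g) > f(x) - 1e-4 * t * (g @ g):
            t *= 0.5
        x = x - t * g
    return x

# Usage: minimize f(x, y) = (x - 1)^2 + 4y^2, whose minimum is at (1, 0)
f = lambda v: (v[0] - 1) ** 2 + 4 * v[1] ** 2
grad = lambda v: np.array([2 * (v[0] - 1), 8 * v[1]])
print(gradient_descent(f, grad, [5.0, 3.0]))
```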
Uncertainty and randomness
- Approaches to deal with uncertainty:
  - Markov decision process
  - Partially observable Markov decision process
  - Robust optimization
  - Stochastic approximation
  - Stochastic optimization
  - Stochastic programming
  - Stochastic gradient descent
- Random optimization algorithms:
  - Simulated annealing — see the sketch after this section
    - Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation
    - Great Deluge algorithm
  - Evolutionary algorithm, Evolution strategy
    - Differential evolution
    - Evolutionary programming
    - Evolution window
    - Genetic algorithm, Genetic programming
      - Genetic algorithm in economics
      - Speciation (genetic algorithm)
    - Genetic representation
    - Gaussian adaptation
  - Memetic algorithm
  - Particle swarm optimization
  - Cooperative optimization
  - Repulsive particle swarm optimization
  - Stochastic tunneling
  - Harmony search — mimics the improvisation process of musicians
  - see also the section Monte Carlo method
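The simulated annealing entry above refers to this sketch, a minimal version with a Gaussian proposal and a geometric cooling schedule (both are illustrative tuning choices, not part of the method's definition):

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.995, steps=5000):
    """Minimize f by random perturbation, sometimes accepting uphill moves."""
    x, fx = x0, f(x0)
    for _ in range(steps):
        cand = x + random.gauss(0.0, 1.0)      # propose a random neighbour
        fc = f(cand)
        # Accept improvements always, worse moves with Boltzmann probability
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
        temp *= cooling                         # cool down gradually
    return x

# Usage: a multimodal objective with global minimum at x = 0
print(simulated_annealing(lambda x: x * x + 10 * math.sin(x) ** 2, x0=8.0))
```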
Theoretical aspects
- Convex analysisConvex analysisConvex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory....
- Quasiconvex functionQuasiconvex functionIn mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form is a convex set...
- SubderivativeSubderivativeIn mathematics, the concepts of subderivative, subgradient, and subdifferential arise in convex analysis, that is, in the study of convex functions, often in connection to convex optimization....
- Quasiconvex function
- Duality:
- Dual problemDual problemIn constrained optimization, it is often possible to convert the primal problem to a dual form, which is termed a dual problem. Usually dual problem refers to the Lagrangian dual problem but other dual problems are used, for example, the Wolfe dual problem and the Fenchel dual problem...
, Shadow priceShadow priceIn constrained optimization in economics, the shadow price is the instantaneous change per unit of the constraint in the objective value of the optimal solution of an optimization problem obtained by relaxing the constraint... - Dual cone and polar coneDual cone and polar coneDual cone and polar cone are closely related concepts in convex analysis, a branch of mathematics.-Dual cone:The dual cone C^* of a subset C in a Euclidean space \mathbb R^n is the set...
- Fenchel's duality theoremFenchel's duality theoremIn mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel.Let ƒ be a proper convex function on Rn and let g be a proper concave function on Rn...
— relates minimization problems with maximization problems of convex conjugates
- Dual problem
- Farkas' lemma
- Karush–Kuhn–Tucker conditions
- Lagrange multipliersLagrange multipliersIn mathematical optimization, the method of Lagrange multipliers provides a strategy for finding the maxima and minima of a function subject to constraints.For instance , consider the optimization problem...
- Lagrange multipliers on Banach spacesLagrange multipliers on Banach spacesIn the field of calculus of variations in mathematics, the method of Lagrange multipliers on Banach spaces can be used to solve certain infinite-dimensional constrained optimization problems...
- Lagrange multipliers on Banach spaces
- Complementarity theoryComplementarity theoryA complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing a function of two vector variables subject to certain requirements which include: that the inner product of the two variables must equal zero, i.e. = 0...
— study of problems with constraints of the form 〈u, v〉 = 0 - Mixed complementarity problemMixed complementarity problemMixed Complementarity Problem is a problem formulation in mathematical programming. Many well-known problem types are special cases of, or may be reduced to MCP...
- Mixed linear complementarity problemMixed linear complementarity problemIn mathematical optimization theory, the mixed linear complementarity problem, often abbreviated as MLCP or LMCP, is a generalization of the linear complementarity problem to include free variables.- References :*...
- Lemke's algorithmLemke's algorithmIn mathematical optimization, Lemke's algorithm is a procedure for solving linear complementarity problems, and more generally mixed linear complementarity problems. Lemke's algorithm is of pivoting or basis-exchange type...
— method for solving (mixed) linear complementarity problems
- Danskin's theorem — used in the analysis of minimax problems
- No free lunch in search and optimization
- Relaxation technique (mathematics)Relaxation technique (mathematics)In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve...
- Lagrangian relaxationLagrangian relaxationIn the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem...
- Linear programming relaxation — ignoring the integrality constraints in a linear programming problem
- Self-concordant function
- Reduced costReduced costIn linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve before it would be possible for a corresponding variable to assume a positive value in the optimal solution...
— cost for increasing a variable by a small amount - Hardness of approximationHardness of approximationIn computer science, hardness of approximation is a field that studies the algorithmic complexity of finding near-optimal solutions to optimization problems. It complements the study of approximation algorithms by proving, for certain problems, a limit on the factors with which their solution can...
— computational complexity of getting an approximate solution
Applications
- In geometry:
- Geometric medianGeometric medianThe geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances for one-dimensional data, and provides a central tendency in higher...
— the point minimizing the sum of distances to a given set of points (see the Weiszfeld sketch after this list) - Chebyshev centerChebyshev centerIn geometry, the Chebyshev center of a bounded set Q having non-empty interior is the center of the minimal-radius ball enclosing the entire set Q, or, alternatively, the center of largest inscribed ball of Q ....
— the centre of the smallest ball containing a given set of points
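A minimal sketch of the classical Weiszfeld fixed-point iteration for the geometric median; the centroid starting point and the small-denominator safeguard are illustrative choices.

```python
# Weiszfeld's iteration for the geometric median: repeatedly re-weight
# the points by inverse distance to the current estimate (assumes the
# iterate never lands exactly on a data point).
import numpy as np

def geometric_median(points, iters=100, tol=1e-9):
    y = points.mean(axis=0)  # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)      # inverse-distance weights
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))
```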
- Automatic label placementAutomatic label placementAutomatic label placement refers to the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels....
- Compressed sensingCompressed sensingCompressed sensing, also known as compressive sensing, compressive sampling and sparse sampling, is a technique for finding sparse solutions to underdetermined linear systems...
— reconstruct a signal from knowledge that it is sparse or compressible - Cutting stock problemCutting stock problemThe cutting-stock problem is an optimization problem, or more specifically, an integer linear programming problem. It arises from many applications in industry. Imagine that you work in a paper mill and you have a number of rolls of paper of fixed width waiting to be cut, yet different customers...
- Demand optimizationDemand OptimizationDemand optimization is the application of processes and tools to maximize return on sales. This usually involves the application of mathematical modeling techniques using computer software....
- Destination dispatchDestination dispatchDestination dispatch is an optimization technique for dispatching elevators to provide passengers with the shortest waiting times and the shortest time to destination. Instead of the classic two-button system, passengers enter the desired destination floor with a keypad or touch screen...
— an optimization technique for dispatching elevators - Energy minimizationEnergy minimizationIn computational chemistry, energy minimization methods are used to compute the equilibrium configuration of molecules and solids....
- Entropy maximization
- Inventory control problemInventory control problemThe inventory control problem is the problem faced by a firm that must decide how much to order in each time period to meet demand for its products. The problem can be modeled using mathematical techniques of optimal control, dynamic programming and network optimization. The study of such models...
- NewsvendorNewsvendorThe newsvendor model is a mathematical model in operations management and applied economics used to determine optimal inventory levels. It is characterized by fixed prices and uncertain demand for a perishable product. If the inventory level is q, each unit of demand above q is lost...
- Extended newsvendor modelsExtended Newsvendor modelsExtended newsvendor models are variations of the classic newsvendor model involving production capacity constraints, multiple products, multiple production cycles, demand dependent selling price, etc. that appear in the Operations management literature....
- Job-shop problemJob-shop problemThe job-shop problem is a problem in discrete or combinatorial optimization, and is a generalization of the famous travelling salesman problem...
- Linear programming decodingLinear programming decodingIn information theory and coding theory, linear programming decoding is a decoding method which uses concepts from LP theory to solve decoding problems. This approach was first used by Jon Feldman et al. They showed how the LP can be used to decodes block codes....
- Multidisciplinary design optimizationMultidisciplinary design optimizationMulti-disciplinary design optimization is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. As defined by Prof. Carlo Poloni, MDO is "the art of finding the best compromise"...
- Paper bag problemPaper bag problemIn geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can bend but not stretch.The problem is made even more...
- Process optimizationProcess optimizationProcess optimization is the discipline of adjusting a process so as to optimize some specified set of parameters without violating some constraint. The most common goals are minimizing cost, maximizing throughput, and/or efficiency...
- Stigler dietStigler dietThe Stigler diet is an optimization problem named for George Stigler, a 1982 Nobel Laureate in economics, who posed the following problem: For a moderately active man weighing 154 pounds, how much of each of 77 foods should be eaten on a daily basis so that the man’s intake of nine nutrients will...
- Stress majorizationStress majorizationStress majorization is an optimization strategy used in multidimensional scaling (MDS) where, for a set of n m-dimensional data items, a configuration X of n points in r-dimensional space is sought that minimizes the stress function...
- Trajectory optimizationTrajectory optimizationTrajectory optimization is the process of designing a trajectory that minimizes or maximizes some measure of performance within prescribed constraint boundaries...
- Wing shape optimizationWing shape optimizationWing-shape optimization is a software implementation of shape optimization primarily used for aircraft design. This allows for engineers to produce more efficient and cheaper aircraft designs.-History:...
Miscellaneous
- Combinatorial optimizationCombinatorial optimizationIn applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects. In many such problems, exhaustive search is not feasible...
- Dynamic programmingDynamic programmingIn mathematics and computer science, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems which are only slightly smaller and optimal substructure...
- Bellman equationBellman equationA Bellman equation , named after its discoverer, Richard Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming...
- Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation
- Backward inductionBackward inductionBackward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this...
— solving dynamic programming problems by reasoning backwards in time - Optimal stoppingOptimal stoppingIn mathematics, the theory of optimal stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance...
— choosing the optimal time to take a particular action (see the backward-induction sketch below) - Odds algorithmOdds algorithmThe odds-algorithm is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the odds-strategy, and the importance of the...
- Robbins' problem (of optimal stopping)Robbins' problem (of optimal stopping)In probability theory, Robbins' problem of optimal stopping, named after Herbert Robbins, is sometimes referred to as the fourth secretary problem or the problem of minimizing the expected rank with full information. Its statement is as follows....
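As a small illustration of the Bellman equation, backward induction, and optimal stopping entries above, consider a toy game (an illustrative choice, not drawn from the list): roll a fair die up to n times and keep the last roll; the recursion compares stopping now with the expected value of continuing.

```python
# Backward induction for an optimal-stopping problem: with one roll
# left the value is the expected face value 3.5; otherwise stop on a
# roll r exactly when r >= value of the remaining problem.
def die_game_value(n):
    v = 3.5  # value with one roll left
    for _ in range(n - 1):
        stop = [r for r in range(1, 7) if r >= v]
        v = (sum(stop) + (6 - len(stop)) * v) / 6.0
    return v

for n in (1, 2, 3):
    print(n, die_game_value(n))  # 3.5, 4.25, 4.666...
```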
- Global optimizationGlobal optimizationGlobal optimization is a branch of applied mathematics and numerical analysis that deals with the optimization of a function or a set of functions to some criteria.- General :The most common form is the minimization of one real-valued function...
- BRST algorithmBRST algorithmBoender-Rinnooy-Stougie-Timmer algorithm is an optimization algorithm suitable for finding the global optimum of black box functions. In their paper Boender et al. describe their method as a stochastic method involving a combination of sampling, clustering and local search, terminating with a range...
- MCS algorithmMCS algorithmMultilevel Coordinate Search is an algorithm for bound constrained global optimization using function values only.To do so, the n-dimensional search space is represented by a set of non-intersecting hypercubes . The boxes are then iteratively split along an axis plane according to the value of the...
- Bilevel programBilevel programIn mathematics, bilevel programs are optimization problems where one optimization problem is embedded in another one.Bilevel programs are multilevel programs with two levels.- Mathematical formulation of the problem :...
— problem in which one problem is embedded in another - Multilevel programmingMultilevel programmingThe "level" refers to sets of variables. A bilevel program has two sets:min f: x in X, y in Y, h=0, g=0.A reason for identifying levels is to apply a decomposition principle for algorithm design. One example is the bilinear program...
— problem in which a series of problems is embedded in each other - Infinite-dimensional optimizationInfinite-dimensional optimizationIn certain optimization problems the unknown optimal solution might not be a number or a vector, but rather a continuous quantity, for example a function or the shape of a body...
- Optimal controlOptimal controlOptimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and his collaborators in the Soviet Union and Richard Bellman in the United States.-General method:Optimal...
- Pontryagin's minimum principlePontryagin's minimum principlePontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It was formulated by the Russian mathematician Lev Semenovich...
— infinite-dimensional version of Lagrange multipliers - DNSS pointDNSS pointDNSS points arise in optimal control problems that exhibit multiple optimal solutions. A DNSS point-named alphabetically after Deckert and Nishimura, Sethi, and Skiba-is an indifference point in an optimal control problem such that starting from such a point, the problem has more than one different...
— initial state for certain optimal control problems with multiple optimal solutions
- Shape optimizationShape optimizationShape optimization is part of the field of optimal control theory. The typical problem is to find the shape which is optimal in that it minimizes a certain cost functional while satisfying given constraints...
, Topology optimizationTopology optimization- Introduction :Topology optimisation is a mathematical approach that optimises material layout within a given design space, for a given set of loads and boundary conditions such that the resulting layout meets a prescribed set of performance targets...
— optimization over a set of regions - Topological derivativeTopological derivativeIn the fields of shape optimization and topology optimization, a topological derivative is, conceptually, a derivative of a function of a region with respect to infinitesimal changes in its topology, such as adding an infinitesimal hole or crack....
— derivative with respect to changes in the shape
- Generalized semi-infinite programmingGeneralized semi-infinite programmingIn mathematics, a semi-infinite programming problem is an optimization problem with a finite number of variables and an infinite number of constraints. The constraints are typically parameterized...
— finite number of variables, infinite number of constraints
- Optimal substructureOptimal substructureIn computer science, a problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions to its subproblems...
- Algorithmic concepts:
- Barrier functionBarrier functionIn constrained optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the feasible region . It is used as a penalizing term for violations of constraints...
- Penalty methodPenalty methodPenalty methods are a certain class of algorithms for solving constrained optimization problems.A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution of the original constrained problem...
- Trust regionTrust regionTrust region is a term used in mathematical optimization to denote the subset of the region of the objective function to be optimized that is approximated using a model function . If an adequate model of the objective function is found within the trust region then the region is expanded;...
- Famous test functions for optimization:
- Rosenbrock functionRosenbrock functionIn mathematical optimization, the Rosenbrock function is a non-convex function used as a performance test problem for optimization algorithms introduced by Howard H. Rosenbrock in 1960. It is also known as Rosenbrock's valley or Rosenbrock's banana function.The global minimum is inside a long,...
— two-dimensional function with a banana-shaped valley - Himmelblau's functionHimmelblau's functionIn mathematical optimization, the Himmelblau's function is a multi-modal function, used to test the performance of optimization algorithms.The function is defined by:...
— two-dimensional with four local minima, defined by f(x, y) = (x² + y − 11)² + (x + y² − 7)² (see the sketch below) - Shekel functionShekel functionShekel function is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques.The mathematical form of a function in n dimensions with m maxima is:...
— multimodal and multidimensional
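A short sketch of the first two test functions above; the checks simply evaluate each function at a known minimizer.

```python
# Two classical optimization test functions; the minimizer checks below
# are standard facts about the functions.
def rosenbrock(x, y, a=1.0, b=100.0):
    # Global minimum f(a, a^2) = 0, inside a curved "banana" valley.
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

def himmelblau(x, y):
    # Four local minima, all with value 0, e.g. at (3, 2).
    return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2

print(rosenbrock(1.0, 1.0))   # 0.0
print(himmelblau(3.0, 2.0))   # 0.0
```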
- Mathematical Programming SocietyMathematical Programming SocietyKnown as the Mathematical Programming Society until 2010, the Mathematical Optimization Society is an international association of researchers active in optimization...
Numerical quadrature (integration)
Numerical integrationNumerical integration
In numerical analysis, numerical integration constitutes a broad family of algorithms for calculating the numerical value of a definite integral, and by extension, the term is also sometimes used to describe the numerical solution of differential equations. This article focuses on calculation of...
— the numerical evaluation of an integral
- Rectangle methodRectangle methodIn mathematics, specifically in integral calculus, the rectangle method computes an approximation to a definite integral, made by finding the area of a collection of rectangles whose heights are determined by the values of the function.Specifically, the interval over which the function is to be...
— first-order method, based on (piecewise) constant approximation - Trapezoidal rule — second-order method, based on (piecewise) linear approximation
- Simpson's ruleSimpson's ruleIn numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the following approximation:...
— fourth-order method, based on (piecewise) quadratic approximation (see the sketch below) - Adaptive Simpson's methodAdaptive Simpson's methodAdaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962. It is probably the first recursive adaptive algorithm for numerical integration to appear in print, although more modern adaptive methods based on Gauss–Kronrod...
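A minimal sketch of the composite trapezoidal and Simpson rules on a smooth integrand with known integral, showing the second-order versus fourth-order error behaviour claimed above.

```python
# Composite trapezoidal rule (second order) versus composite Simpson's
# rule (fourth order) on the integral of sin over [0, pi], which is 2.
import numpy as np

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):          # n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-2:2].sum() + y[-1])

exact = 2.0
for n in (8, 16, 32):
    print(n, trapezoid(np.sin, 0, np.pi, n) - exact,
             simpson(np.sin, 0, np.pi, n) - exact)
```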
- Newton–Cotes formulas
- Romberg's methodRomberg's methodIn numerical analysis, Romberg's method is used to estimate the definite integral \int_a^b f \, dx by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule . The estimates generate a triangular array...
— Richardson extrapolation applied to trapezium rule - Gaussian quadratureGaussian quadratureIn numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration....
— highest possible degree with given number of points (see the sketch below) - Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − x²)^(±1/2) on [−1, 1]
- Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−x²) on (−∞, ∞)
- Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − x)^α (1 + x)^β on [−1, 1]
- Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−x) on [0, ∞)
- Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature
- Tanh-sinh quadratureTanh-sinh quadratureTanh-sinh quadrature is a method for numerical integration introduced by Hidetosi Takahasi and Masatake Mori in 1974. It uses the change of variables x = tanh(½π sinh t)...
— variant of Gaussian quadrature which works well with singularities at the end points - Clenshaw–Curtis quadrature — based on expanding the integrand in terms of Chebyshev polynomials
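A small illustration of the defining property of Gaussian quadrature, using NumPy's Gauss–Legendre nodes and weights: an n-point rule integrates polynomials of degree up to 2n − 1 exactly.

```python
# Gauss-Legendre quadrature on [-1, 1]: a 3-point rule is exact for
# polynomials of degree up to 5, and very accurate for smooth functions.
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(3)  # 3-point rule

def gauss(f):
    return float(np.dot(weights, f(nodes)))

print(gauss(lambda x: x ** 4))        # exact value: 2/5 = 0.4
print(gauss(np.cos), 2 * np.sin(1))   # nearly matches the exact integral
```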
- Adaptive quadratureAdaptive quadratureIn applied mathematics, adaptive quadrature is a process in which the integral of a function f is approximated using static quadrature rules on adaptively refined subintervals of the integration domain...
— adapting the subintervals in which the integration interval is divided depending on the integrand - Monte Carlo integration — takes random samples of the integrand (see the sketch below)
- See also #Monte Carlo method
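A minimal Monte Carlo integration sketch: the sample mean of the integrand at uniform random points estimates the integral, with an error that decays only like 1/√N.

```python
# Monte Carlo integration of sin over [0, pi]; the estimate converges
# slowly (error ~ 1/sqrt(N)) to the exact value 2.
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, a, b, n):
    x = rng.uniform(a, b, n)
    return (b - a) * f(x).mean()

for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, mc_integrate(np.sin, 0, np.pi, n))
```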
- T-integration — a non-standard method
- Lebedev quadrature — uses a grid on a sphere with octahedral symmetry
- Sparse gridSparse gridSparse grids are numerical techniques to represent, integrate or interpolate high dimensional functions. They were originally found by the Russian mathematician Smolyak and are based on a sparse tensor product construction...
- Numerical differentiationNumerical differentiationIn numerical analysis, numerical differentiation describes algorithms for estimating the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function.-Finite difference formulae:...
- Numerical smoothing and differentiationNumerical smoothing and differentiationAn experimental datum value can be conceptually described as the sum of a signal and some noise, but in practice the two contributions cannot be separated. The purpose of smoothing is to increase the Signal-to-noise ratio without greatly distorting the signal...
- Euler–Maclaurin formula
Numerical ordinary differential equations
Numerical ordinary differential equationsNumerical ordinary differential equations
Numerical ordinary differential equations is the part of numerical analysis which studies the numerical solution of ordinary differential equations...
— the numerical solution of ordinary differential equations (ODEs)
- Euler method — the most basic method for solving an ODE (compared with RK4 in the sketch after this list)
- Explicit and implicit methodsExplicit and implicit methodsExplicit and implicit methods are approaches used in numerical analysis for obtaining numerical solutions of time-dependent ordinary and partial differential equations, as is required in computer simulations of physical processes....
— implicit methods need to solve an equation at every step - Runge–Kutta methodsRunge–Kutta methodsIn numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians C. Runge and M.W. Kutta.See the article...
— one of the two main classes of methods for initial-value problems- Midpoint methodMidpoint methodIn numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for solving the differential equation y' = f, \quad y = y_0...
— a second-order method with two stages - Heun's methodHeun's methodIn mathematics and computational science, Heun's method may refer to the improved or modified Euler's method , or a similar two-stage Runge–Kutta method. It is named after Karl L. W. M. Heun and is a numerical procedure for solving ordinary differential equations with a given initial value...
— either a second-order method with two stages, or a third-order method with three stages - Bogacki–Shampine methodBogacki–Shampine methodThe Bogacki–Shampine method is a method for the numerical solution of ordinary differential equations, that was proposed by Przemyslaw Bogacki and Lawrence F. Shampine in 1989 . The Bogacki–Shampine method is a Runge–Kutta method of order three with four stages with the First Same As Last ...
— a third-order method with four stages (FSAL) and an embedded fourth-order method - Cash–Karp methodCash–Karp methodIn numerical analysis, the Cash–Karp method is a method for solving ordinary differential equations . It was proposed by Professor Jeff R. Cash from Imperial College London and Alan H. Karp from IBM Scientific Center. The method is a member of the Runge–Kutta family of ODE solvers...
— a fifth-order method with six stages and an embedded fourth-order method - Dormand–Prince methodDormand–Prince methodIn numerical analysis, the Dormand–Prince method, or DOPRI method, is a method for solving ordinary differential equations . The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions...
— a fifth-order method with seven stages (FSAL) and an embedded fourth-order method - Runge–Kutta–Fehlberg methodRunge–Kutta–Fehlberg methodIn mathematics, the Runge–Kutta–Fehlberg method is an algorithm of numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the class of Runge–Kutta methods...
— a fifth-order method with six stages and an embedded fourth-order method - Butcher groupButcher groupIn mathematics, the Butcher group, named after the New Zealand mathematician John C. Butcher by , is an infinite-dimensional group first introduced in numerical analysis to study solutions of non-linear ordinary differential equations by the Runge–Kutta method...
— algebraic formalism involving rooted trees for analysing Runge–Kutta methods - List of Runge–Kutta methods
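A minimal sketch contrasting the Euler method with the classical fourth-order Runge–Kutta method (RK4) on the linear test equation y′ = −y; the step size and horizon are illustrative.

```python
# Forward Euler versus classical RK4 on y' = -y, y(0) = 1, whose exact
# solution is exp(-t); RK4's error is dramatically smaller per step.
import math

def euler(f, y, t, h):
    return y + h * f(t, y)

def rk4(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -y
h, steps = 0.1, 10
ye = yr = 1.0
for i in range(steps):
    ye = euler(f, ye, i * h, h)
    yr = rk4(f, yr, i * h, h)
print(ye - math.exp(-1.0), yr - math.exp(-1.0))  # errors at t = 1
```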
- Linear multistep method — the other main class of methods for initial-value problems
- Backward differentiation formulaBackward differentiation formulaThe backward differentiation formula is a family of implicit methods for the numerical integration of ordinary differential equations. They are linear multistep methods that, for a given function and time, approximate the derivative of that function using information from already computed times,...
— implicit methods of order 2 to 6; especially suitable for stiff equations - Numerov's methodNumerov's methodNumerov's method is a numerical method to solve ordinary differential equations of second order in which the first-order term does not appear. It is a fourth-order linear multistep method...
— fourth-order method for equations of the form y″ = f(t, y)
- Bulirsch–Stoer algorithmBulirsch–Stoer algorithmIn numerical analysis, the Bulirsch–Stoer algorithm is a method for the numerical solution of ordinary differential equations which combines three powerful ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson-type applications, and the modified midpoint method,...
— combines the midpoint method with Richardson extrapolation to attain arbitrary order - Methods designed for the solution of ODEs from classical physics:
- Newmark-beta methodNewmark-beta methodThe Newmark-beta method is a method of numerical integration used to solve differential equations. It is widely used in numerical evaluation of the dynamic response of structures and solids such as in finite element analysis to model dynamic systems. The method is named after Nathan M...
— based on the extended mean-value theorem - Verlet integrationVerlet integrationVerlet integration is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics...
— a popular second-order method (see the sketch below) - Leapfrog integrationLeapfrog integrationLeapfrog integration is a simple method for numerically integrating differential equations of the form ẍ = F(x), or equivalently of the form v̇ = F(x), ẋ ≡ v, particularly in the case of a dynamical system of classical mechanics...
— another name for Verlet integration - Beeman's algorithmBeeman's algorithmBeeman's algorithm is a method for numerically integrating ordinary differential equations of order 2, more specifically Newton's equations of motion \ddot x=A. It was designed to allow high numbers of particles in simulations of molecular dynamics. There is a direct or explicit and an implicit...
— a two-step method extending the Verlet method - Dynamic relaxationDynamic relaxationDynamic relaxation is a numerical method, which, among other things, can be used do "form-finding" for cable and fabric structures. The aim is to find a geometry where all forces are in equilibrium...
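A minimal sketch of position Verlet integration for the harmonic oscillator x″ = −x; seeding the second starting value from the exact solution is an illustrative shortcut.

```python
# Position Verlet: x_{n+1} = 2 x_n - x_{n-1} + h^2 a(x_n). The scheme
# is second order and time-reversible, so the oscillation amplitude
# stays bounded over long integrations.
import math

h, steps = 0.05, 400
x_prev = math.cos(0.0)   # x_0 from the exact solution cos(t)
x = math.cos(h)          # seed x_1 from the exact solution
for n in range(1, steps):
    x_next = 2 * x - x_prev + h * h * (-x)
    x_prev, x = x, x_next

t = steps * h
print(x, math.cos(t))  # stays close to the exact cosine solution
```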
- Geometric integratorGeometric integratorIn the mathematical field of numerical ordinary differential equations, a geometric integrator is a numerical method that preserves geometric properties of the exact flow of a differential equation.-Pendulum example:...
— a method that preserves some geometric structure of the equation- Symplectic integratorSymplectic integratorIn mathematics, a symplectic integrator is a numerical integration scheme for a specific group of differential equations related to classical mechanics and symplectic geometry. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations...
— a method for the solution of Hamilton's equations that preserves the symplectic structure- Variational integratorVariational integratorVariational integrators are numerical integrators for Hamiltonian systems derived from the Euler-Lagrange equations of a discretized Hamilton's principle...
— symplectic integrators derived using the underlying variational principle - Semi-implicit Euler methodSemi-implicit Euler methodIn mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet , is a modification of the Euler method for solving Hamilton's equations, a system of ordinary differential equations that arises in classical mechanics...
— variant of Euler method which is symplectic when applied to separable Hamiltonians
- Adaptive stepsizeAdaptive stepsizeAdaptive stepsize is a technique in numerical analysis used for many problems, but mainly for integration. It can be used for both normal integration , or the process of solving an ordinary differential equation. This article focuses on the latter...
— automatically changing the step size when that seems advantageous - Stiff equationStiff equationIn mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proved difficult to formulate a precise definition of stiffness, but the main idea is that...
— roughly, an ODE for which unstable methods need a very short step size, but stable methods do not - Methods for solving two-point boundary value problems (BVPs):
- Shooting methodShooting methodIn numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to the solution of an initial value problem...
- Direct multiple shooting methodDirect multiple shooting methodIn the area of mathematics known as numerical ordinary differential equations, the direct multiple shooting method is a numerical method for the solution of boundary value problems...
— divides the interval into several subintervals and applies the shooting method on each subinterval (a single-shot sketch follows this list)
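A minimal shooting-method sketch for the linear boundary value problem y″ = −y, y(0) = 0, y(π/4) = 1; the plain Euler integrator and the bisection bracket are illustrative assumptions.

```python
# Shooting method: guess the initial slope s, integrate the initial
# value problem, and bisect on the endpoint error. The exact solution
# is y = s sin(t), so the true slope is 1/sin(pi/4) = sqrt(2).
import math

def integrate(s, t_end, n=2000):
    # Simple Euler on the first-order system (y, y').
    h = t_end / n
    y, v = 0.0, s
    for _ in range(n):
        y, v = y + h * v, v - h * y
    return y

def shoot(target, t_end, lo=0.0, hi=5.0, iters=60):
    # The endpoint value grows monotonically with s, so bisection works.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if integrate(mid, t_end) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

s = shoot(1.0, math.pi / 4)
print(s, math.sqrt(2))  # the computed slope approaches sqrt(2)
```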
- Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints:
- Constraint algorithmConstraint algorithmIn mechanics, a constraint algorithm is a method for satisfying constraints for bodies that obey Newton's equations of motion. There are three basic approaches to satisfying such constraints: choosing novel unconstrained coordinates , introducing explicit constraint forces, and minimizing...
— for solving Newton's equations with constraints
- Methods for solving stochastic differential equations (SDEs):
- Euler–Maruyama method — generalization of the Euler method for SDEs (see the sketch after this list)
- Milstein methodMilstein methodIn mathematics, the Milstein method, named after Grigori N. Milstein, is a technique for the approximate numerical solution of a stochastic differential equation.Consider the Itō stochastic differential equation...
— a method with strong order one - Runge–Kutta method (SDE)Runge–Kutta method (SDE)In mathematics, the Runge–Kutta method is a technique for the approximate numerical solution of a stochastic differential equation. It is a generalization of the Runge–Kutta method for ordinary differential equations to stochastic differential equations....
— generalization of the family of Runge–Kutta methods for SDEs
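A minimal Euler–Maruyama sketch for geometric Brownian motion; the drift, volatility, and step size are illustrative parameters.

```python
# Euler-Maruyama for dX = mu*X dt + sigma*X dW: each step adds the
# deterministic drift plus a Gaussian increment with variance h.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, h, steps = 0.05, 0.2, 0.01, 1000
x = 1.0
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(h))
    x += mu * x * h + sigma * x * dw
print(x)  # one sample path value of X at t = 10
```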
- Methods for solving integral equations:
- Nyström method — replaces the integral with a quadrature rule
- Bi-directional delay lineBi-directional delay lineIn mathematics, a bi-directional delay line is a numerical analysis technique used in computer simulation for solving ordinary differential equations by converting them to hyperbolic equations. In this way an explicit solution scheme is obtained with highly robust numerical properties...
- Partial element equivalent circuit
- History of numerical solution of differential equations using computersHistory of numerical solution of differential equations using computersAccording to Mary Croarken in her paper "Computing in Britain During World War II," by 1945, the Cambridge Mathematical Laboratory created by John Lennard-Jones utilized the latest computing devices to perform the equations...
- Truncation error (numerical integration)Truncation error (numerical integration)Truncation errors in numerical integration are of two kinds: local truncation errors – the error caused by one iteration, and global truncation errors – the cumulative error caused by many iterations...
— local and global truncation errors, and their relationships
Numerical partial differential equations
Numerical partial differential equationsNumerical partial differential equations
Numerical partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations .Numerical techniques for solving PDEs include the following:...
— the numerical solution of partial differential equations (PDEs)
Finite difference methods
Finite difference methodFinite difference method
In mathematics, finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.- Derivation from Taylor's polynomial :...
— based on approximating differential operators with difference operators
- Finite differenceFinite differenceA finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient...
— the discrete analogue of a differential operator - Difference operator — the operator that maps a function to its finite difference
- Discrete Laplace operatorDiscrete Laplace operatorIn mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid...
— finite-difference approximation of the Laplace operator - Discrete Poisson equationDiscrete Poisson equationIn mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator...
— discrete analogue of the Poisson equation using the discrete Laplace operator
- Stencil (numerical analysis)Stencil (numerical analysis)In mathematics, especially the areas of numerical analysis concentrating on the numerical solution of partial differential equations, a stencil is a geometric arrangement of a nodal group that relate to the point of interest by using a numerical approximation routine. Stencils are the basis for...
— the geometric arrangement of grid points affected by a basic step of the algorithm - Compact stencilCompact stencilIn mathematics, especially in the areas of numerical analysis called numerical partial differential equations, a compact stencil is a type of stencil that uses only nine nodes for its discretization method in two dimensions. It uses only the center node and the adjacent nodes...
— stencil which only uses a few grid points, usually only the immediate and diagonal neighbours - Non-compact stencilNon-compact stencilIn numerical mathematics, a non-compact stencil is a type of discretization method, where any node surrounding the node of interest may be used in the calculation. A non-compact stencil's computational time increases with an increase of layers of nodes used. Non-compact stencils may be compared to...
— any stencil that is not compact - Five-point stencilFive-point stencilIn numerical analysis, given a square grid in one or two dimensions, the five-point stencil of a point in the grid is made up of the point itself together with its four "neighbors"...
— two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid (see the sketch below) - Stencil codesStencil codesStencil codes are a class of iterative kernels which update array elements according to some fixed pattern, called stencil. They are most commonly found in the codes of computer simulations, e.g...
— computer codes that update array elements according to some fixed pattern, called stencils
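A small sketch of the five-point stencil and the discrete Laplace operator mentioned above, checked against a function whose Laplacian is known.

```python
# Five-point stencil: (u_E + u_W + u_N + u_S - 4 u_C) / h^2 approximates
# the Laplacian at the centre point with O(h^2) error. The test function
# sin(pi x) sin(pi y) has Laplacian -2 pi^2 sin(pi x) sin(pi y).
import numpy as np

h = 0.01
x, y = np.meshgrid(np.arange(0, 1, h), np.arange(0, 1, h), indexing="ij")
u = np.sin(np.pi * x) * np.sin(np.pi * y)

lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4 * u[1:-1, 1:-1]) / h ** 2
exact = -2 * np.pi ** 2 * u[1:-1, 1:-1]
print(np.max(np.abs(lap - exact)))  # O(h^2) discretization error
```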
- Finite difference methods for heat equation and related PDEs:
- FTCS schemeFTCS schemeIn numerical analysis, the FTCS method is a finite difference method used for numerically solving the heat equation and similar parabolic partial differential equations. It is a first-order method in time, explicit in time, and is conditionally stable...
(forward-time central-space) — first-order explicit (see the sketch below) - Crank–Nicolson method — second-order implicit
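A minimal FTCS sketch for the heat equation u_t = u_xx with homogeneous Dirichlet boundary conditions; the time step is deliberately chosen to satisfy the stability bound r = k/h² ≤ 1/2.

```python
# FTCS for u_t = u_xx on [0, 1] with u(0) = u(1) = 0; the exact
# solution for u(x, 0) = sin(pi x) is exp(-pi^2 t) sin(pi x).
import numpy as np

n = 20
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]        # space step
k = 0.4 * h ** 2       # time step: r = k/h^2 = 0.4 <= 1/2 (stable)
u = np.sin(np.pi * x)  # initial condition, zero at both ends

r = k / h ** 2
for _ in range(200):
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])

t = 200 * k
print(np.max(np.abs(u - np.exp(-np.pi ** 2 * t) * np.sin(np.pi * x))))
```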
- Finite difference methods for hyperbolic PDEs like the wave equation:
- Lax–Friedrichs methodLax–Friedrichs methodThe Lax–Friedrichs method, named after Peter Lax and Kurt O. Friedrichs, is a numerical method for the solution of hyperbolic partial differential equations based on finite differences...
— first-order explicit - Lax–Wendroff methodLax–Wendroff methodThe Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite differences...
— second-order explicit - MacCormack methodMacCormack methodIn computational fluid dynamics, the MacCormack method is a widely used discretization scheme for the numerical solution of hyperbolic partial differential equations. This second-order finite difference method is introduced by Robert W. MacCormack in 1969...
— second-order explicit - Upwind schemeUpwind schemeIn computational fluid dynamics, upwind schemes denote a class of numerical discretization methods for solving hyperbolic partial differential equations. Upwind schemes use an adaptive or solution-sensitive finite difference stencil to numerically simulate more properly the direction of propagation...
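To illustrate the upwind idea above, a minimal first-order upwind sketch for the advection equation u_t + a u_x = 0; the Gaussian pulse and CFL number are illustrative choices.

```python
# First-order upwind for u_t + a u_x = 0 with a > 0: differences are
# taken on the side the wave comes from, subject to the CFL condition
# a*k/h <= 1.
import numpy as np

a, h = 1.0, 0.02
k = 0.8 * h / a                          # CFL number 0.8
x = np.arange(0, 2, h)
u = np.exp(-100 * (x - 0.5) ** 2)        # a Gaussian pulse

c = a * k / h
for _ in range(50):
    u[1:] = u[1:] - c * (u[1:] - u[:-1]) # backward (upwind) difference
    u[0] = 0.0                           # inflow boundary
print(x[np.argmax(u)])  # the pulse has moved right by about a * 50 * k
```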
- Alternating direction implicit methodAlternating direction implicit methodIn numerical analysis, the Alternating Direction Implicit method is a finite difference method for solving parabolic and elliptic partial differential equations. It is most notably used to solve the problem of heat conduction or solving the diffusion equation in two or more dimensions...
(ADI) — update using the flow in x-direction and then using flow in y-direction - Specific applications:
- Finite difference methods for option pricingFinite difference methods for option pricingFinite difference methods for option pricing are numerical methods used in mathematical finance for the valuation of options. Finite difference methods were first applied to option pricing by Eduardo Schwartz in 1977....
- Finite-difference time-domain methodFinite-difference time-domain methodFinite-difference time-domain is one of the primary available computational electrodynamics modeling techniques. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.The FDTD method...
— a finite-difference method for electrodynamics
Finite element methods
Finite element methodFinite element method
The finite element method is a numerical technique for finding approximate solutions of partial differential equations as well as integral equations...
— based on a discretization of the space of solutions
- Finite element method in structural mechanicsFinite element method in structural mechanicsThe Finite element method is a powerful technique originally developed for numerical solution of complex problems in structural mechanics, and it remains the method of choice for complex systems. In the FEM, the structural system is modeled by a set of appropriate finite elements interconnected at...
— a physical approach to finite element methods - Engineering treatment of the finite element method — an explanation of finite elements geared towards engineers
- Galerkin methodGalerkin methodIn mathematics, in the area of numerical analysis, Galerkin methods are a class of methods for converting a continuous operator problem to a discrete problem. In principle, it is the equivalent of applying the method of variation of parameters to a function space, by converting the equation to a...
— a finite element method in which the residual is orthogonal to the finite element space (see the sketch below) - Discontinuous Galerkin methodDiscontinuous Galerkin methodDiscontinuous Galerkin methods in mathematics form a class of numerical methods for solving partial differential equations. They combine features of the finite element and the finite volume framework and have been successfully applied to hyperbolic, elliptic and parabolic problems arising from a...
— a Galerkin method in which the approximate solution is not continuous
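A compact Galerkin finite-element sketch for −u″ = f on (0, 1) with homogeneous Dirichlet conditions, using piecewise-linear hat functions on a uniform mesh; the lumped load vector is an illustrative simplification that is adequate here.

```python
# 1-D Galerkin FEM with hat functions: on a uniform mesh the stiffness
# matrix is the classic tridiagonal (1/h) * [-1, 2, -1]. The source is
# chosen so the exact solution is sin(pi x).
import numpy as np

n = 50                          # number of elements
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = lambda t: np.pi ** 2 * np.sin(np.pi * t)

# Assemble stiffness matrix and (lumped) load vector for interior nodes.
main = 2.0 / h * np.ones(n - 1)
off = -1.0 / h * np.ones(n - 2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = h * f(x[1:-1])

u = np.zeros(n + 1)             # boundary values stay zero
u[1:-1] = np.linalg.solve(A, b)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small O(h^2) error
```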
- Rayleigh-Ritz methodRayleigh-Ritz methodIn applied mathematics and mechanical engineering, the Rayleigh–Ritz method is a widely used, classical method for the calculation of the natural vibration frequency of a structure in the second or higher order...
— a finite element method based on variational principles - Spectral element methodSpectral element methodIn mathematics, the spectral element method is a high order finite element method.Introduced in a 1984 paper by A. T. Patera, the abstract begins: "A spectral element method that combines the generality of the finite element method with the accuracy of spectral techniques..."The spectral element...
— high-order finite element methods - hp-FEMHp-FEMhp-FEM is a general version of the finite element method , a numerical method for solving partial differential equations based on piecewise-polynomial approximations that employs elements of variable size and polynomial degree ...
— variant in which both the size and the order of the elements are automatically adapted - Direct stiffness methodDirect stiffness methodAs one of the methods of structural analysis, the direct stiffness method , also known as the displacement method or matrix stiffness method, is particularly suited for computer-automated analysis of complex structures including the statically indeterminate type...
— a particular implementation of the finite element method, often used in structural analysis - Trefftz methodTrefftz methodIn mathematics, the Trefftz method is a method for the numerical solution of partial differential equations named after the German mathematician Erich Trefftz . It falls within the class of finite element methods.- Introduction :...
- Finite element updatingFinite element updatingFinite element model updating is the process of ensuring that finite element analysis results in models that better reflect the measured data than the initial models. It is part of verification and validation of numerical models.-The process:...
- Extended finite element methodExtended finite element methodThe extended finite element method , also known as generalized finite element method or partition of unity method is a numerical technique that extends the classical finite element method approach by enriching the solution space for solutions to differential equations with discontinuous...
— puts functions tailored to the problem in the approximation space - Functionally graded elementFunctionally graded elementIn materials science and mathematics, functionally graded elements are elements used in finite element analysis. They can be used to describe a functionally graded material....
s — elements for describing functionally graded materials - Interval finite elementInterval finite elementThe interval finite element method is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics of the structure. This is important in concrete structures, wood structures, geomechanics,...
method — combination of finite elements with interval arithmetic - Discrete exterior calculusDiscrete exterior calculusIn mathematics, the discrete exterior calculus is the extension of the exterior calculus to discrete spaces including graphs and finite element meshes. DEC methods have proved to be very powerful in improving and analyzing finite element methods: for instance, DEC-based methods allow the use of...
— discrete form of the exterior calculus of differential geometry - Modal analysis using FEMModal analysis using FEMThe goal of modal analysis in structural mechanics is to determine the natural mode shapes and frequencies of an object or structure during free vibration. It is common to use the finite element method to perform this analysis because, like other calculations using the FEM, the object being...
— solution of eigenvalue problems to find natural vibrations - Céa's lemmaCéa's lemmaCéa's lemma is a lemma in mathematics. It is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations.-Lemma statement:Let V be a real Hilbert space with the norm \|\cdot\|...
— the solution in the finite-element space is an almost-best approximation of the true solution in that space - Patch test (finite elements)Patch test (finite elements)The patch test in the finite element method is a simple indicator of the quality of a finite element, developed by Bruce Irons. The patch test uses a partial differential equation on a domain consisting of several elements set up so that the exact solution is known
— simple test for the quality of a finite element - Multiple-point constraint (MPC)
- List of finite element software packages
- NAFEMSNAFEMSNAFEMS is an independent, not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis and, specifically, finite element analysis .-History:...
— not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis - Multiphase topology optimisationMultiphase topology optimisationThe Multi Phase Topology Optimisation is a simulation technique based on the principle of the finite element method which is able to determine the optimal distribution of two or more different materials in combination under thermal and mechanical loads....
— technique based on finite elements for determining optimal composition of a mixture - Interval finite elementInterval finite elementThe interval finite element method is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics of the structure. This is important in concrete structures, wood structures, geomechanics,...
Other methods
- Spectral methodSpectral methodSpectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain Dynamical Systems, often involving the use of the Fast Fourier Transform. Where applicable, spectral methods have excellent error properties, with the so called "exponential...
— based on the Fourier transform - Pseudo-spectral methodPseudo-spectral methodPseudo-spectral methods are a class of numerical methods used in applied mathematics and scientific computing for the solution of PDEs, such as the direct simulation of a particle with an arbitrary wavefunction interacting with an arbitrary potential...
- Method of linesMethod of linesThe method of lines is a technique for solving partial differential equations in which all but one dimension is discretized. MOL allows standard, general-purpose methods and software, developed for the numerical integration of ODEs and DAEs, to be used...
— reduces the PDE to a large system of ordinary differential equations - Boundary element methodBoundary element methodThe boundary element method is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations . It can be applied in many areas of engineering and science including fluid mechanics, acoustics, electromagnetics, and fracture...
— based on transforming the PDE to an integral equation on the boundary of the domain- Interval boundary element methodInterval boundary element methodInterval boundary element method is classical boundary element method with the interval parameters.Boundary element method is based on the following integral equation...
— a version using interval arithmetic
- Analytic element methodAnalytic element methodThe analytic element method is a numerical method used for the solution of partial differential equations. It was initially developed by O.D.L. Strack at the University of Minnesota...
— similar to the boundary element method, but the integral equation is evaluated analytically - Finite volume methodFinite volume methodThe finite volume method is a method for representing and evaluating partial differential equations in the form of algebraic equations [LeVeque, 2002; Toro, 1999]....
— based on dividing the domain into many small domains; popular in computational fluid dynamics - Godunov's schemeGodunov's schemeIn numerical analysis and computational fluid dynamics, Godunov's scheme is a conservative numerical scheme, suggested by S. K. Godunov in 1959, for solving partial differential equations...
— first-order conservative scheme for fluid flow, based on piecewise constant approximation - MUSCL schemeMUSCL schemeIn the study of partial differential equations, the MUSCL scheme is a finite volume method that can provide highly accurate numerical solutions for a given system, even in cases where the solutions exhibit shocks, discontinuities, or large gradients...
— second-order variant of Godunov's scheme - AUSMAUSMAUSM stands for Advection Upstream Splitting Method. It is developed as a numerical inviscid flux function for solving a general system of conservation equations...
— advection upstream splitting method - Flux limiterFlux limiterFlux limiters are used in high resolution schemes – numerical schemes used to solve problems in science and engineering, particularly fluid dynamics, described by partial differential equations...
— limits spatial derivatives (fluxes) in order to avoid spurious oscillations - Riemann solverRiemann solverA Riemann solver is a numerical method used to solve a Riemann problem. They are heavily used in computational fluid dynamics and computational magnetohydrodynamics.-Exact solvers:...
— a solver for Riemann problems (a conservation law with piecewise constant data)
- Discrete element methodDiscrete element methodA discrete element method , also called a distinct element method is any of family of numerical methods for computing the motion of a large number of particles of micrometre-scale size and above...
— a method in which the elements can move freely relative to each other - Meshfree methodsMeshfree methodsMeshfree methods are a particular class of numerical simulation algorithms for the simulation of physical phenomena. Traditional simulation algorithms relied on a grid or a mesh, meshfree methods in contrast use the geometry of the simulated object directly for calculations. Meshfree methods exist...
— do not use a mesh, but use a particle view of the field - Diffuse element methodDiffuse element methodThe diffuse element method is a computer simulation technique used in engineering analysis. It is a meshfree method.The diffuse element method was developed by B. Nayroles, G. Touzot and Pierre Villon at the Universite de Technologie de Compiegne, in 1992.It is in concept rather similar to the...
- Methods designed for problems from electromagnetics:
- Finite-difference time-domain methodFinite-difference time-domain methodFinite-difference time-domain is one of the primary available computational electrodynamics modeling techniques. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.The FDTD method...
— a finite-difference method - Transmission line matrix methodTransmission line matrix methodThe transmission-line matrix method is a space and time discretising method for computation of electromagnetic fields. It is based on the analogy between the electromagnetic field and a mesh of transmission lines...
(TLM) — based on analogy between electromagnetic field and mesh of transmission lines - Uniform theory of diffractionUniform theory of diffractionIn numerical analysis, the uniform geometrical theory of diffraction is a high-frequency method for solving electromagnetic scattering problems from electrically small discontinuities or discontinuities in more than one dimension at the same point. UTD is an extension of Joseph Keller's...
— specifically designed for scattering problems
- Particle-in-cellParticle-in-cellThe Particle-in-Cell method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed...
— used especially in fluid dynamics - High-resolution scheme
- Shock capturing methodsShock capturing methodsIn computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. Computation of flow through shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables pressure, temperature,...
- Split-step methodSplit-step methodIn numerical analysis, the split-step method is a pseudo-spectral numerical method used to solve nonlinear partial differential equations like the nonlinear Schrödinger equation. The name arises for two reasons. First, the method relies on computing the solution in small steps, and treating the...
- Fast marching methodFast marching methodThe fast marching method is introduced by James A. Sethian as a numerical method for solving boundary value problems of the Eikonal equation:Typically, such a problem describes the evolution of a closed curve as a function of time T with speed F in the normal direction at a point x on the curve...
- Lattice Boltzmann methodsLattice Boltzmann methodsLattice Boltzmann methods is a class of computational fluid dynamics methods for fluid simulation. Instead of solving the Navier–Stokes equations, the discrete Boltzmann equation is solved to simulate the flow of a Newtonian fluid with collision models such as Bhatnagar-Gross-Krook...
— for the simulation of fluid flows, based on solving a discrete Boltzmann equation rather than the Navier–Stokes equations directly - Roe solver — approximate Riemann solver for the solution of the Euler equations
- Relaxation methodRelaxation methodIn numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems.Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations...
— a method for solving elliptic PDEs by converting them to evolution equations (a Jacobi relaxation sketch follows this list) - Broad classes of methods:
- Mimetic methods — methods that respect in some sense the structure of the original problem
- MultiphysicsMultiphysicsMultiphysics treats simulations that involve multiple physical models or multiple simultaneous physical phenomena. For example, combining chemical kinetics and fluid mechanics or combining finite elements with molecular dynamics...
— models consisting of various submodels with different physics - Immersed boundary methodImmersed boundary methodThe immersed boundary method is an approach – in computational fluid dynamics – to model and simulate mechanical systems in which elastic structures interact with fluid flows...
— for simulating elastic structures immersed within fluids
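A minimal sketch of the relaxation idea referenced above, assuming Python with numpy; the function name and grid setup are illustrative, not from any particular library. Jacobi iteration repeatedly replaces each interior value by the average of its four neighbours, driving the iterate toward a discrete solution of Laplace's equation:

```python
import numpy as np

def jacobi_laplace(u, tol=1e-6, max_iter=10_000):
    """Jacobi relaxation for Laplace's equation on a rectangular grid.

    Interior points are repeatedly replaced by the average of their four
    neighbours; the boundary values of u are held fixed (Dirichlet data)."""
    u = u.copy()
    for _ in range(max_iter):
        u_new = u.copy()
        u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# Illustrative problem: unit square, u = 1 on the top edge, 0 elsewhere.
n = 33
u0 = np.zeros((n, n))
u0[0, :] = 1.0
u = jacobi_laplace(u0)
```

Gauss–Seidel and successive over-relaxation are the classical refinements: they reuse freshly updated values within a sweep and converge considerably faster.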
Techniques for improving these methods
- Multigrid methodMultigrid methodMultigrid methods in numerical analysis are a group of algorithms for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior...
— uses a hierarchy of nested meshes to speed up the methods - Domain decomposition methodsDomain decomposition methods]In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains...
— divides the domain in a few subdomains and solves the PDE on these subdomains- Additive Schwarz method
- Abstract additive Schwarz methodAbstract additive Schwarz methodIn mathematics, the abstract additive Schwarz method, named after Hermann Schwarz, is an abstract version of the additive Schwarz method, formulated only in terms of linear algebra without reference to domains, subdomains, etc...
— abstract version of additive Schwarz without reference to geometric information - Balancing domain decompositionBalancing domain decompositionIn numerical analysis, the balancing domain decomposition method is an iterative method to find the solution of a symmetric positive definite system of linear algebraic equations arising from the finite element method . In each iteration, it combines the solution of local problems on...
(BDD) — preconditioner for symmetric positive definite matrices - Balancing domain decomposition by constraintsBDDCIn numerical analysis, BDDC is a domain decomposition method for solving large symmetric, positive definite systems of linear equations that arise from the finite element method. BDDC is used as a preconditioner to the conjugate gradient method...
(BDDC) — further development of BDD - Finite element tearing and interconnectFETIIn mathematics, in particular numerical analysis, the FETI method is an iterative substructuring method for solving systems of linear equations from the finite element method for the solution of elliptic partial differential equations, in particular in computational mechanics In each iteration,...
(FETI) - FETI-DPFETI-DPThe FETI-DP method is a domain decomposition method that enforces equality of the solution at subdomain interfaces by Lagrange multipliers except at subdomain corners, which remain primal variables.The first mathematical analysis of the method was provided by Mandel and Tezaur...
— further development of FETI - Fictitious domain methodFictitious domain methodIn mathematics, the Fictitious domain method is a method to find the solution of a partial differential equations on a complicated domain D, by substituting a given problem...
— preconditioner constructed with a structured mesh on a fictitious domain of simple shape - Mortar methodsMortar methodsMortar methods are discretization methods for partial differential equations, which use separate finite element discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously...
— the meshes on the subdomains do not match on the interface; equality of the solution is enforced by Lagrange multipliers - Neumann–Dirichlet method — combines a Neumann problem on one subdomain with a Dirichlet problem on the other subdomain
- Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains
- Poincaré–Steklov operatorPoincaré–Steklov operatorIn mathematics, a Poincaré–Steklov operator maps the values of one boundary condition of the solution of an elliptic partial differential equation in a domain to the values of another boundary condition. Usually, either of the boundary conditions determines the solution...
— maps the values of one boundary condition of an elliptic PDE's solution to the values of another boundary condition - Schur complement methodSchur complement methodIn numerical analysis, the Schur complement method, named after Issai Schur, is the basic and the earliest version of non-overlapping domain decomposition method, also called iterative substructuring. A finite element problem is split into non-overlapping subdomains, and the unknowns in the...
— early and basic method on subdomains that do not overlap - Schwarz alternating methodSchwarz alternating methodIn mathematics, the Schwarz alternating method, named after Hermann Schwarz, is an iterative method to find the solution of a partial differential equations on a domain which is the union of two overlapping subdomains, by solving the equation on each of the two subdomains in turn, taking always the...
— early and basic method on subdomains that overlap (see the sketch after this list)
- Coarse spaceCoarse space (numerical analysis)In numerical analysis, coarse problem is an auxiliary system of equations used in an iterative method for the solution of a given larger system of equations. A coarse problem is basically a version of the same problem at a lower resolution, retaining its essential characteristics, but with fewer...
— variant of the problem which uses a discretization with fewer degrees of freedom - Adaptive mesh refinementAdaptive mesh refinementIn numerical analysis, adaptive mesh refinement is a method of adaptive meshing. Central to any Eulerian method is the manner in which it discretizes the continuous domain of interest into a grid of many individual elements...
— uses the computed solution to refine the mesh only where necessary - Fast multipole methodFast Multipole MethodThe fast multipole method is a mathematical technique that was developed to speed up the calculation of long-ranged forces in the n-body problem...
— hierarchical method for evaluating particle-particle interactions
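A sketch of the Schwarz alternating method mentioned above, assuming Python with numpy and a 1D Poisson model problem (all names hypothetical): the domain is split into two overlapping index ranges, and the equation is re-solved on each subdomain in turn, taking boundary data from the current iterate.

```python
import numpy as np

def poisson_matrix(m, h):
    """Tridiagonal finite-difference matrix for -u'' on m interior points."""
    return (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

# Model problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, exact solution x(1-x)/2
# (the second-order difference scheme reproduces this quadratic exactly).
n = 49
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.ones(n)
u = np.zeros(n)

# Two overlapping subdomains, given as index ranges into the interior points.
subdomains = (np.arange(0, 30), np.arange(20, n))

for _ in range(20):                      # alternating Schwarz sweeps
    for idx in subdomains:
        m = len(idx)
        rhs = f[idx].copy()
        # Dirichlet data at the subdomain ends comes from the current iterate.
        if idx[0] > 0:
            rhs[0] += u[idx[0] - 1] / h**2
        if idx[-1] < n - 1:
            rhs[-1] += u[idx[-1] + 1] / h**2
        u[idx] = np.linalg.solve(poisson_matrix(m, h), rhs)

print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))   # error shrinks with each sweep
```

With generous overlap the sweeps converge geometrically; additive Schwarz and its relatives apply the same subdomain solves as a preconditioner inside a Krylov method instead of as a stand-alone iteration.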
Miscellaneous
- Analysis:
- Lax equivalence theoremLax equivalence theoremIn numerical analysis, the Lax equivalence theorem is the fundamental theorem in the analysis of finite difference methods for the numerical solution of partial differential equations...
— a consistent method is convergent if and only if it is stable - Courant–Friedrichs–Lewy conditionCourant–Friedrichs–Lewy conditionIn mathematics, the Courant–Friedrichs–Lewy condition is a necessary condition for convergence while solving certain partial differential equations numerically by the method of finite differences. It arises when explicit time-marching schemes are used for the numerical solution...
— stability condition for hyperbolic PDEs - Von Neumann stability analysisVon Neumann stability analysisIn numerical analysis, von Neumann stability analysis is a procedure used to check the stability of finite difference schemes as applied to linear partial differential equations...
— all Fourier components of the error should be stable (see the sketch after this list) - Numerical diffusionNumerical diffusionNumerical diffusion is a difficulty with computer simulations of continua wherein the simulated medium exhibits a higher diffusivity than the true medium...
— diffusion introduced by the numerical method, in addition to that which is naturally present - Numerical resistivityNumerical resistivityNumerical resistivity is a problem in computer simulations of ideal magnetohydrodynamics. It is a form of numerical diffusion. In near-ideal MHD systems, the magnetic field can diffuse only very slowly through the plasma or fluid of the system; it is rate-limited by the resistivity of the fluid...
— the same, with resistivity instead of diffusion - Weak formulationWeak formulationWeak formulations are an important tool for the analysis of mathematical equations that permit the transfer of concepts of linear algebra to solve problems in other fields such as partial differential equations...
— a functional-analytic reformulation of the PDE necessary for some methods - Total variation diminishingTotal variation diminishingIn numerical methods, total variation diminishing is a property of certain discretization schemes used to solve hyperbolic partial differential equations...
— property of schemes that do not introduce spurious oscillations - Godunov's theoremGodunov's theoremIn numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high resolution schemes for the numerical solution of partial differential equations.The theorem states...
— linear monotone schemes can only be of first order
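As an illustration of the CFL and von Neumann entries above, here is a small Python sketch (numpy assumed, names illustrative) for the first-order upwind discretization of the advection equation u_t + a u_x = 0 with a > 0. Its von Neumann amplification factor is g(θ) = 1 − c(1 − e^(−iθ)), where c = aΔt/Δx is the Courant number, and max |g(θ)| ≤ 1 holds exactly when c ≤ 1, which is the CFL condition for this scheme:

```python
import numpy as np

def max_amplification(c, n_modes=1000):
    """Largest |g(theta)| over Fourier modes for first-order upwind advection.

    g(theta) = 1 - c*(1 - exp(-i*theta)), with Courant number c = a*dt/dx.
    Von Neumann stability requires max |g| <= 1 for every mode."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_modes)
    g = 1.0 - c * (1.0 - np.exp(-1j * theta))
    return np.max(np.abs(g))

for c in (0.5, 1.0, 1.1):
    # <= 1 for c <= 1; > 1 once the CFL bound is violated
    print(c, max_amplification(c))
```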
- Grids and meshes:
- Mesh generationMesh generationMesh generation is the practice of generating a polygonal or polyhedral mesh that approximates a geometric domain. The term "grid generation" is often used interchangeably. Typical uses are for rendering to a computer screen or for physical simulation such as finite element analysis or...
- Parallel mesh generationParallel mesh generationParallel mesh generation in numerical analysis is a new research area between the boundaries of two scientific computing disciplines: computational geometry and parallel computing. Parallel mesh generation methods decompose the original mesh generation problem into smaller subproblems which are...
- Geodesic gridGeodesic gridA geodesic grid is a technique used to model the surface of a sphere with a subdivided polyhedron, usually an icosahedron...
— isotropic grid on a sphere - Spatial twist continuumSpatial twist continuumThe spatial twist continuum is a dual representation of an all hexahedral mesh that defines the global connectivity constraint.Discovered by Dr...
— dual representation of a mesh consisting of hexahedra
- Perfectly matched layerPerfectly matched layerA perfectly matched layer is an artificial absorbing layer for wave equations, commonly used to truncate computational regions in numerical methods to simulate problems with open boundaries, especially in the FDTD and FEM methods...
— artificial absorbing layer for wave equations, used to implement absorbing boundary conditions
Monte Carlo methodMonte Carlo methodMonte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in computer simulations of physical and mathematical systems...
- Variants of the Monte Carlo method:
- Direct simulation Monte CarloDirect simulation Monte CarloDirect Simulation Monte Carlo method uses probabilistic simulation to solve the Boltzmann equation for finite Knudsen number fluid flows....
- Quasi-Monte Carlo methodQuasi-Monte Carlo methodIn numerical analysis, a quasi-Monte Carlo method is a method for the computation of an integral that is based on low-discrepancy sequences...
- Markov chain Monte CarloMarkov chain Monte CarloMarkov chain Monte Carlo methods are a class of algorithms for sampling from probability distributions based on constructing a Markov chain that has the desired distribution as its equilibrium distribution. The state of the chain after a large number of steps is then used as a sample of the...
- Metropolis–Hastings algorithm (a minimal sketch follows this sub-list)
- Multiple-try MetropolisMultiple-try MetropolisIn Markov chain Monte Carlo, the Metropolis–Hastings algorithm can be used to sample from a probability distribution which is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary...
— modification which allows larger step sizes - Wang and Landau algorithmWang and Landau algorithmThe Wang and Landau algorithm proposed by Fugao Wang and David P. Landau is an extension of Metropolis Monte Carlo sampling. It is designed to calculate the density of states of a computer-simulated system, such as an Ising model of spin glasses, or model atoms in a molecular force field...
— extension of Metropolis Monte Carlo - Equation of State Calculations by Fast Computing MachinesEquation of State Calculations by Fast Computing MachinesEquation of State Calculations by Fast Computing Machines is an article published by Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller in the Journal of Chemical Physics in 1953...
— 1953 article proposing the Metropolis Monte Carlo algorithm
- Gibbs samplingGibbs samplingIn statistics and in statistical physics, Gibbs sampling or a Gibbs sampler is an algorithm to generate a sequence of samples from the joint probability distribution of two or more random variables...
- Coupling from the pastCoupling from the pastAmong Markov chain Monte Carlo algorithms, coupling from the past is a method for sampling from the stationary distribution of a Markov chain. Contrary to many MCMC algorithms, coupling from the past gives in principle a perfect sample from the stationary distribution...
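A minimal random-walk Metropolis sketch in Python (numpy assumed; the bimodal target is an arbitrary illustration). Because the Gaussian proposal is symmetric, the Hastings correction cancels and the acceptance probability reduces to min(1, p(x′)/p(x)):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-density of an arbitrary bimodal target (two Gaussians)."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    probability min(1, p(x')/p(x)); the symmetric proposal cancels in the ratio."""
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_target(x_prop) - log_target(x):
            x = x_prop
        samples[i] = x
    return samples

draws = metropolis_hastings(50_000)
```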
- Dynamic Monte Carlo methodDynamic Monte Carlo methodIn chemistry, dynamic Monte Carlo is a method for modeling the dynamic behaviors of molecules by comparing the rates of individual steps with random numbers...
- Kinetic Monte CarloKinetic Monte CarloThe kinetic Monte Carlo method is a Monte Carlo method computer simulation intended to simulate the time evolution of some processes occurring in nature. Typically these are processes that occur with a given known rate...
- Gillespie algorithmGillespie algorithmIn probability theory, the Gillespie algorithm generates a statistically correct trajectory of a stochastic equation. It was created by Joseph L...
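A sketch of Gillespie's direct method for the simplest possible system, the decay reaction A → ∅ with propensity k·n (Python, numpy assumed; parameters are arbitrary). The waiting time to the next event is exponential with the current total propensity:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_decay(n0=100, k=1.0, t_end=5.0):
    """Gillespie's direct method for the single reaction A -> 0 with rate k*n.

    The waiting time to the next event is exponential with the current total
    propensity a = k*n; the (only) reaction is then fired."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while n > 0 and t < t_end:
        a = k * n                       # total propensity
        t += rng.exponential(1.0 / a)   # time to next reaction
        n -= 1                          # fire A -> 0
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

times, counts = gillespie_decay()
```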
- Particle filterParticle filterIn statistics, particle filters, also known as Sequential Monte Carlo methods , are sophisticated model estimation techniques based on simulation...
- Auxiliary particle filterAuxiliary particle filterThe auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to improve some deficiencies of the sequential importance resampling algorithm when dealing with tailed observation densities....
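A bootstrap particle filter sketch (Python, numpy assumed; the random-walk state model and noise levels are purely illustrative): propagate particles through the dynamics, weight them by the observation likelihood, and resample to combat weight degeneracy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model: random-walk state x_t = x_{t-1} + N(0, q^2),
# observed through y_t = x_t + N(0, r^2).
T, n_particles = 50, 1000
q, r = 0.5, 1.0

x_true = np.cumsum(q * rng.standard_normal(T))
y = x_true + r * rng.standard_normal(T)

particles = rng.standard_normal(n_particles)    # draws from the initial prior
estimates = np.empty(T)
for t in range(T):
    # 1. Propagate each particle through the state dynamics.
    particles = particles + q * rng.standard_normal(n_particles)
    # 2. Weight by the observation likelihood N(y_t; particle, r^2).
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.dot(w, particles)
    # 3. Multinomial resampling to combat weight degeneracy.
    particles = rng.choice(particles, size=n_particles, p=w)
```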
- Reverse Monte CarloReverse Monte CarloThe Reverse Monte Carlo modelling method is a variation of the standard Metropolis-Hastings algorithm to solve an inverse problem whereby a model is adjusted until its parameters have the greatest consistency with experimental data...
- Demon algorithmDemon algorithmThe demon algorithm is a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy. An additional degree of freedom, called 'the demon', is added to the system and is able to store and provide energy. If a drawn microscopic state has lower energy than the...
- Sampling methods:
- Inverse transform sampling — general method that feeds uniform variates through the inverse CDF; straightforward, but the inverse is often expensive to compute (see the sketch after this sub-list)
- Rejection samplingRejection samplingIn mathematics, rejection sampling is a basic pseudo-random number sampling technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm"....
— sample from a simpler distribution but reject some of the samples - Ziggurat algorithmZiggurat algorithmThe ziggurat algorithm is an algorithm for pseudo-random number sampling. Belonging to the class of rejection sampling algorithms, it relies on an underlying source of uniformly-distributed random numbers, typically from a pseudo-random number generator, as well as precomputed tables. The...
— uses a pre-computed table covering the probability distribution with rectangular segments
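Sketches of the two sampling entries above (Python, numpy assumed; the distributions are chosen for illustration): inverse transform sampling pushes uniforms through the inverse CDF, while rejection sampling draws from a simple proposal and keeps a draw with probability proportional to the ratio of target to envelope:

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_inverse_transform(lam, n):
    """Inverse transform sampling for Exp(lam): F^{-1}(u) = -ln(1 - u) / lam."""
    u = rng.random(n)
    return -np.log1p(-u) / lam

def beta22_rejection(n):
    """Rejection sampling for p(x) = 6x(1-x) on [0, 1] (a Beta(2, 2) density),
    with a Uniform(0, 1) proposal and envelope constant M = 1.5 = max p."""
    out = []
    while len(out) < n:
        x = rng.random()
        if rng.random() * 1.5 < 6.0 * x * (1.0 - x):   # accept w.p. p(x) / (M q(x))
            out.append(x)
    return np.array(out)

samples_exp = exponential_inverse_transform(2.0, 10_000)
samples_beta = beta22_rejection(10_000)
```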
- For sampling from a normal distribution:
- Box–Muller transform
- Marsaglia polar methodMarsaglia polar methodThe polar method is a pseudo-random number sampling method for generating a pair of independent standard normal random variables...
- Convolution random number generator — generates a random variable as a sum of other random variables
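Sketches of both normal-distribution generators just listed (Python, numpy assumed). Box–Muller converts two uniforms into two independent standard normals via polar coordinates; Marsaglia's polar method is a rejection variant that avoids evaluating sine and cosine:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_muller(n):
    """Box-Muller: two uniforms give two independent standard normals."""
    u1, u2 = rng.random(n), rng.random(n)
    radius = np.sqrt(-2.0 * np.log(1.0 - u1))   # 1 - u1 in (0, 1] avoids log(0)
    return radius * np.cos(2.0 * np.pi * u2), radius * np.sin(2.0 * np.pi * u2)

def marsaglia_polar():
    """Marsaglia's polar method: rejection variant avoiding sin and cos."""
    while True:
        x = 2.0 * rng.random() - 1.0
        y = 2.0 * rng.random() - 1.0
        s = x * x + y * y
        if 0.0 < s < 1.0:                 # point fell inside the unit disc
            factor = np.sqrt(-2.0 * np.log(s) / s)
            return x * factor, y * factor
```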
- Low-discrepancy sequenceLow-discrepancy sequenceIn mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x1, ..., xN has a low discrepancy....
- Constructions of low-discrepancy sequences
- Illustration of a low-discrepancy sequence
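A sketch of the most basic low-discrepancy construction, the van der Corput sequence (plain Python; the Halton sequence is its multidimensional extension, using a different prime base per coordinate):

```python
def van_der_corput(n, base=2):
    """First n points of the van der Corput sequence: the radical inverse of
    1, 2, ..., n in the given base (digit reversal across the radix point)."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            k, digit = divmod(k, base)
            denom *= base
            x += digit / denom
        points.append(x)
    return points

print(van_der_corput(8))   # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```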
- Event generatorEvent generatorEvent generators are software libraries that generate simulated high-energy particle physics events.They randomly generate events as those produced in particle accelerators, collider experiments or during the initial phases of the Universe creation....
- Parallel temperingParallel temperingParallel tempering, also known as replica exchange MCMC sampling, is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo sampling methods more generally...
- Umbrella samplingUmbrella samplingUmbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977...
— improves sampling in physical systems with significant energy barriers - Variance reductionVariance reductionIn mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates that can be obtained for a given number of iterations. Every output random variable from the simulation is associated with a variance which...
techniques (a control-variate sketch follows this sub-list): - Control variateControl variateThe control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity...
- Importance samplingImportance samplingIn statistics, importance sampling is a general technique for estimating properties of a particular distribution, while only having samples generated from a different distribution rather than the distribution of interest. It is related to Umbrella sampling in computational physics...
- Stratified samplingStratified samplingIn statistics, stratified sampling is a method of sampling from a population.In statistical surveys, when subpopulations within an overall population vary, it is advantageous to sample each subpopulation independently. Stratification is the process of dividing members of the population into...
- VEGAS algorithmVEGAS algorithmThe VEGAS algorithm, due to G. P. Lepage, is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the graph that make the greatest contribution to the final integral.The VEGAS algorithm...
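The control-variate sketch referenced above (Python, numpy assumed; the integrand is a textbook illustration): to estimate E[e^U] with U uniform on (0, 1), subtract β(U − 1/2), where U has known mean 1/2 and β is chosen to minimize variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate E[exp(U)] for U ~ Uniform(0, 1); the exact value is e - 1.
n = 10_000
u = rng.random(n)
f = np.exp(u)

plain = f.mean()

# Control variate: U itself, with known mean 1/2. The optimal coefficient
# is beta = Cov(f, U) / Var(U), estimated from the same sample.
c = np.cov(f, u)
beta = c[0, 1] / c[1, 1]
controlled = (f - beta * (u - 0.5)).mean()

print(plain, controlled, np.e - 1.0)   # the controlled estimator has much lower variance
```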
- Ensemble Kalman filterEnsemble Kalman filterThe ensemble Kalman filter is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models...
— recursive filter suitable for problems with a large number of variables - Applications:
- Bond fluctuation modelBond fluctuation modelThe BFM is a lattice model for simulating the conformation and dynamics of polymer systems. There are two versions of the BFM used: The earlier version was first introduced by Carmesin and Kremer in 1988, and the later version by Shaffer in 1994...
— for simulating the conformation and dynamics of polymer systems - Metropolis light transportMetropolis light transportThe Metropolis light transport is a SIGGRAPH 1997 paper by Eric Veach and Leonidas J. Guibas, describing an application of a variant of the Monte Carlo method called the Metropolis-Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three...
- Monte Carlo method for photon transportMonte Carlo method for photon transportModeling photon propagation with Monte Carlo methods is a flexible yet rigorous approach to simulate photon transport. In the method, local rules of photon transport are expressed as probability distributions which describe the step size of photon movement between sites of photon-tissue interaction...
- Monte Carlo methods in financeMonte Carlo methods in financeMonte Carlo methods are used in finance and mathematical finance to value and analyze instruments, portfolios and investments by simulating the various sources of uncertainty affecting their value, and then determining their average value over the range of resultant outcomes. This is usually done...
- Monte Carlo methods for option pricing
- Quasi-Monte Carlo methods in finance
- Monte Carlo molecular modelingMonte Carlo molecular modelingMonte Carlo molecular modeling is the application of Monte Carlo methods to molecular problems. These problems can also be modeled by the molecular dynamics method. The difference is that this approach relies on statistical mechanics rather than molecular dynamics. Instead of trying to reproduce...
- Quantum Monte CarloQuantum Monte CarloQuantum Monte Carlo is a large class of computer algorithms that simulate quantum systems with the idea of solving the quantum many-body problem. They use, in one way or another, the Monte Carlo method to handle the many-dimensional integrals that arise...
- Diffusion Monte CarloDiffusion Monte CarloDiffusion Monte Carlo is a quantum Monte Carlo method that uses a Green's function to solve the Schrödinger equation. DMC is potentially numerically exact, meaning that it can find the exact ground state energy within a given error for any quantum system...
— uses a Green's function to solve the Schrödinger equation - Gaussian quantum Monte CarloGaussian quantum Monte CarloGaussian Quantum Monte Carlo is a quantum Monte Carlo method that shows a potential solution to the fermion sign problem without the deficiencies of alternative approaches. Instead of the Hilbert space, this method works in the space of density matrices that can be spanned by an over-complete basis...
- Path integral Monte CarloPath integral Monte CarloPath integral Monte Carlo is a quantum Monte Carlo method in the path integral formulation of quantum mechanics.The equations often are applied assuming that quantum exchange does not matter...
- Reptation Monte Carlo
- Variational Monte CarloVariational Monte CarloIn mathematical physics, variational Monte Carlo is a quantum Monte Carlo method that applies the variational method to approximate the ground state of the system.The expectation value necessary can be written in the x representation as...
- Methods for simulating the Ising model:
- Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters
- Wolff algorithmWolff algorithmThe Wolff algorithm, named after Ulli Wolff, is an algorithm for Monte Carlo simulation of the Ising model in which an equal-spin cluster is formed around one spin. That cluster is then flipped. The Wolff algorithm is an improvement over the Swendsen–Wang algorithm because it tends to form bigger...
— improvement of the Swendsen–Wang algorithm
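A sketch of the Wolff cluster update for the 2D Ising model (Python, numpy assumed; lattice size and temperature are illustrative). An equal-spin cluster is grown from a random seed, adding each aligned neighbour with probability 1 − exp(−2β), and the whole cluster is then flipped:

```python
import numpy as np

rng = np.random.default_rng(0)

def wolff_step(spins, beta):
    """One Wolff cluster update for the 2D Ising model (J = 1, periodic b.c.).

    Grow a single equal-spin cluster from a random seed, adding each aligned
    neighbour with probability p = 1 - exp(-2*beta), then flip the cluster."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = (int(rng.integers(L)), int(rng.integers(L)))
    seed_spin = spins[seed]
    cluster, frontier = {seed}, [seed]
    while frontier:
        x, y = frontier.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = ((x + dx) % L, (y + dy) % L)
            if nbr not in cluster and spins[nbr] == seed_spin and rng.random() < p_add:
                cluster.add(nbr)
                frontier.append(nbr)
    for site in cluster:
        spins[site] = -seed_spin

spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(1000):
    wolff_step(spins, beta=0.44)    # close to the critical coupling of the square lattice
```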
- Auxiliary field Monte CarloAuxiliary field Monte CarloAuxiliary field Monte Carlo is a method that allows the calculation, by use of Monte Carlo techniques, of averages of operators in many-body quantum mechanical or classical problems...
— computes averages of operators in many-body quantum mechanical problems - Cross-entropy methodCross-entropy methodThe cross-entropy method attributed to Reuven Rubinstein is a general Monte Carlo approach tocombinatorial and continuous multi-extremal optimization and importance sampling.The method originated from the field of rare event simulation, where...
— for multi-extremal optimization and importance sampling
- See also the list of statistics topics
Applications
- Computational physicsComputational physicsComputational physics is the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists...
- Computational electromagneticsComputational electromagneticsComputational electromagnetics, computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment....
- Computational fluid dynamicsComputational fluid dynamicsComputational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with...
(CFD) - Large eddy simulationLarge eddy simulationLarge eddy simulation is a mathematical model for turbulence used in computational fluid dynamics. It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, and many of the issues unique to LES were first explored by Deardorff...
- Smoothed-particle hydrodynamics
- Acoustic analogyAcoustic analogyAcoustic analogies are applied mostly in numerical aeroacoustics to reduce aeroacoustic sound sources to simple emitter types. They are therefore often also referred to as aeroacoustic analogies....
— used in numerical aeroacoustics to reduce sound sources to simple emitter types
- Computational magnetohydrodynamicsComputational MagnetohydrodynamicsComputational magnetohydrodynamics is a rapidly developing branch of magnetohydrodynamics that uses numerical methods and algorithms to solve and analyze problems that involve electrically conducting fluids. Most of the methods used in CMHD are borrowed from the well established techniques...
(CMHD) — studies electrically conducting fluids - Climate modelClimate modelClimate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes from study of the dynamics of the climate system to projections of future climate...
- Numerical weather predictionNumerical weather predictionNumerical weather prediction uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic...
- Geodesic gridGeodesic gridA geodesic grid is a technique used to model the surface of a sphere with a subdivided polyhedron, usually an icosahedron...
- Celestial mechanicsCelestial mechanicsCelestial mechanics is the branch of astronomy that deals with the motions of celestial objects. The field applies principles of physics, historically classical mechanics, to astronomical objects such as stars and planets to produce ephemeris data. Orbital mechanics is a subfield which focuses on...
- Numerical model of solar systemNumerical model of solar systemA numerical model of the Solar System is a set of mathematical equations, which, when solved, give the approximate positions of the planets as a function of time. Attempts to create such a model established the more general field of celestial mechanics. The results of this simulation can be...
- Multiphysics Methods GroupMultiphysics Methods GroupThe Multiphysics Methods Group is a program at Idaho National Laboratory begun in 2004. It uses modeling software to simulate complex physical and chemical reactions inside nuclear reactors...
— program at Idaho National Laboratory studying simulations of nuclear reactors
- Computational chemistryComputational chemistryComputational chemistry is a branch of chemistry that uses principles of computer science to assist in solving chemical problems. It uses the results of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids...
- Cell listsCell listsCell lists are a tool for finding all atom pairs within a given cut-off distance of each other in Molecular dynamics simulations...
- Coupled clusterCoupled clusterCoupled cluster is a numerical technique used for describing many-body systems. Its most common use is as one of several quantum chemical post-Hartree–Fock ab initio quantum chemistry methods in the field of computational chemistry...
- Density functional theoryDensity functional theoryDensity functional theory is a quantum mechanical modelling method used in physics and chemistry to investigate the electronic structure of many-body systems, in particular atoms, molecules, and the condensed phases. With this theory, the properties of a many-electron system can be determined by...
- Self-consistent field method
- Computational statisticsComputational statisticsComputational statistics, or statistical computing, is the interface between statistics and computer science. It is the area of computational science specific to the mathematical science of statistics....