Numerical analysis
Overview
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in carpentry and construction.
Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of √2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead. The interpolation algorithms nevertheless may be used as part of the software for solving differential equations.
The rest of this section outlines several important themes of numerical analysis.
History
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2,000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomials, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1,000-plus-page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.

Direct and iterative methods

Consider the problem of solving

3x^{3} + 4 = 28

for the unknown quantity x.

Direct method:
3x^{3} + 4 = 28.        (subtract 4)
3x^{3} = 24.            (divide by 3)
x^{3} = 8.              (take cube roots)
x = 2.

For the iterative method, apply the bisection method to f(x) = 3x^{3} − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.

Iterative method:
a       b       mid     f(mid)
0       3       1.5     −13.875
1.5     3       2.25    10.17...
1.5     2.25    1.875   −4.22...
1.875   2.25    2.0625  2.32...

We conclude from this table that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
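The bisection steps in the sidebar can be reproduced in a few lines of Python (an illustrative sketch, not part of the original article):

```python
def bisect(f, a, b, iterations):
    """Repeatedly halve [a, b], keeping the half where f changes sign."""
    for _ in range(iterations):
        mid = (a + b) / 2
        if f(a) * f(mid) <= 0:
            b = mid          # root lies in [a, mid]
        else:
            a = mid          # root lies in [mid, b]
    return a, b

f = lambda x: 3 * x**3 - 24
print(bisect(f, 0, 3, 4))    # four halvings of [0, 3]
```

After four steps the bracket has shrunk by a factor of 16, matching the table above.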
Discretization and numerical integration

In a two-hour race, we have measured the speed of the car at three instants and recorded them in the following table.

Time    0:20    1:00    1:40
km/h    140     150     180

A discretization would be to say that the speed of the car was constant from 0:00 to 0:40, then from 0:40 to 1:20, and finally from 1:20 to 2:00. For instance, the total distance traveled in the first 40 minutes is approximately (2/3 h × 140 km/h) = 93.3 km. This would allow us to estimate the total distance traveled as 93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical integration (see below) using a Riemann sum, because displacement is the integral of velocity.
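The Riemann-sum estimate above is one line of arithmetic; as a Python sketch:

```python
# Riemann sum for the race example: each speed sample is treated as
# constant over its 40-minute (2/3 h) interval.
speeds_kmh = [140, 150, 180]       # measured at 0:20, 1:00, 1:40
dt_hours = 2 / 3                   # each sample covers 40 minutes
distance = sum(v * dt_hours for v in speeds_kmh)
print(round(distance, 1))          # total distance estimate in km
```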
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, the same function is well-behaved away from x = 1, so evaluating it at points far from 1 is a well-conditioned problem.
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite-precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
Continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called discretization. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
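Round-off is easy to see directly: binary floating point cannot represent most decimal fractions exactly. For example, in Python:

```python
# 0.1 has no exact binary representation, so even simple sums are
# slightly off in floating-point arithmetic.
print(0.1 + 0.2)          # not exactly 0.3
print(0.1 + 0.2 == 0.3)   # False
print(sum([0.1] * 10))    # not exactly 1.0
```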
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated, and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. For instance, in the iteration in the sidebar to compute the solution of 3x^{3} + 4 = 28, after 10 or so iterations, we conclude that the root is roughly 1.99 (for example). We therefore have a truncation error of 0.01.
Once an error is generated, it will generally propagate through the calculation. For instance, we have already noted that the operation + on a calculator (or a computer) is inexact. It follows that a calculation of the type a+b+c+d+e is even more inexact.
What does it mean when we say that the truncation error is created when we approximate a mathematical procedure? We know that to integrate a function exactly requires one to find the sum of infinitely many trapezoids. But numerically one can find the sum of only finitely many trapezoids, and hence the approximation of the mathematical procedure. Similarly, to differentiate a function, the differential element approaches zero, but numerically we can only choose a finite value of the differential element.
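The shrinking truncation error of a difference quotient can be observed directly; a minimal Python sketch (illustrative only):

```python
import math

# Truncation error in numerical differentiation: the forward difference
# (f(x+h) - f(x)) / h only approaches f'(x) as h shrinks.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # derivative of sin at x = 1
for h in (0.1, 0.01, 0.001):
    approx = forward_diff(math.sin, 1.0, h)
    print(h, abs(approx - exact))  # error shrinks roughly in proportion to h
```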
Numerical stability is an important notion in numerical analysis. An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is ill-conditioned, then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x_{1} to √2, for instance x_{1} = 1.4, and then computing improved guesses x_{2}, x_{3}, etc. One such method is the famous Babylonian method, which is given by x_{k+1} = x_{k}/2 + 1/x_{k}. Another iteration, which we will call Method X, is given by x_{k+1} = (x_{k}^{2} − 2)^{2} + x_{k}. We have calculated a few iterations of each scheme in table form below, with initial guesses x_{1} = 1.4 and x_{1} = 1.42.
Babylonian (x_{1} = 1.4):   1.4, 1.4142857..., 1.4142136..., ...
Babylonian (x_{1} = 1.42):  1.42, 1.4142254..., 1.4142136..., ...
Method X (x_{1} = 1.4):     1.4, 1.4016, 1.4028614..., ... (creeps toward √2)
Method X (x_{1} = 1.42):    1.42, 1.4202690..., 1.4205636..., ... (eventually diverges)
Observe that the Babylonian method converges fast regardless of the initial guess, whereas Method X converges extremely slowly with initial guess 1.4 and diverges for initial guess 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
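The two iterations are easy to try numerically; here is a minimal Python sketch (illustrative, not part of the original article):

```python
import math

def babylonian(x, n):
    # x_{k+1} = x_k / 2 + 1 / x_k, converging rapidly to sqrt(2)
    for _ in range(n):
        x = x / 2 + 1 / x
    return x

def method_x(x, n, bound=1e6):
    # x_{k+1} = (x_k^2 - 2)^2 + x_k, the numerically unstable "Method X"
    for _ in range(n):
        x = (x * x - 2) ** 2 + x
        if abs(x) > bound:   # stop once the iterate has clearly blown up
            break
    return x

print(babylonian(1.4, 5))     # a few steps suffice for sqrt(2)
print(method_x(1.4, 100))     # creeps toward sqrt(2) extremely slowly
print(method_x(1.42, 100))    # blows past the divergence bound
```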

Interpolation: We have observed the temperature to vary from 20 degrees Celsius at 1:00 to 14 degrees at 3:00. A linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30.
Extrapolation: If the gross domestic product of a country has been growing an average of 5% per year and was 100 billion dollars last year, we might extrapolate that it will be 105 billion dollars this year.
Regression: In linear regression, given n points, we compute a line that passes as close as possible to those n points.
Optimization: Say you sell lemonade at a lemonade stand, and notice that at $1.00, you can sell 197 glasses of lemonade per day, and that for each increase of $0.01, you will sell one glass of lemonade less per day. If you could charge $1.485, you would maximize your profit, but due to the constraint of having to charge a whole-cent amount, charging $1.48 or $1.49 per glass will both yield the maximum income of $220.52 per day.
Differential equation: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and to advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the Euler method for solving an ordinary differential equation.
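Stripped of the feather and the fans, Euler's method just repeats "advance at the current rate, then re-measure the rate". A minimal Python sketch, applied here to the toy equation y′ = y (chosen because the exact answer y(1) = e is known):

```python
import math

# Euler's method: advance in a straight line at the current rate for one
# step, then re-evaluate the rate (like re-measuring the wind each second).
def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Toy problem: y' = y, y(0) = 1, so y(1) = e.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
print(approx, math.e)  # the Euler estimate approaches e as h shrinks
```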
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula, is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
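The Horner scheme itself is only a few lines; a small Python illustration (the example polynomial is arbitrary):

```python
# Horner's scheme evaluates a_n x^n + ... + a_0 with n multiplications
# instead of computing each power of x separately.
def horner(coeffs, x):
    """coeffs ordered from the highest degree down to the constant term."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))
```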
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now we want to find the value of the unknown function at a point which is outside the given points.
Regression is also similar, but it takes into account that the data is imprecise. Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function. The least squares method is one popular way to achieve this.
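For the simplest case, fitting a straight line, the least squares solution has a closed form (the normal equations). A small plain-Python sketch with made-up sample data:

```python
# Least-squares fit of a line y = a + b*x via the normal equations
# (a hypothetical small example, not library code).
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Noisy samples of y = 1 + 2x
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.9, 9.1]
a, b = fit_line(xs, ys)
print(a, b)  # close to intercept 1 and slope 2
```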
Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) and positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, the Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems.
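As a small illustration of the iterative family, here is a plain-Python sketch of the Jacobi method on a hypothetical 2×2 diagonally dominant system:

```python
# Jacobi iteration for A x = b: solve each equation for its diagonal
# unknown, using the previous iterate's values for the others.
def jacobi(A, b, iterations):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Diagonally dominant system: 4x + y = 9, x + 3y = 5 (solution x = 2, y = 1)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 5.0]
print(jacobi(A, b, 50))
```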
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
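A minimal Python sketch of Newton's method, reusing the cubic f(x) = 3x^{3} − 24 from the sidebar example:

```python
# Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k).
def newton(f, fprime, x, iterations):
    for _ in range(iterations):
        x = x - f(x) / fprime(x)
    return x

# Root of f(x) = 3x^3 - 24 is x = 2; convergence is quadratic.
root = newton(lambda x: 3 * x**3 - 24, lambda x: 9 * x**2, 3.0, 8)
print(root)
```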
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, linear programming deals with the case where both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
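Composite Simpson's rule, one of the Newton–Cotes formulas mentioned above, can be sketched briefly in Python (the test integral is an arbitrary choice with a known answer):

```python
import math

# Composite Simpson's rule on n subintervals (n must be even).
def simpson(f, a, b, n):
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# Integral of sin from 0 to pi is exactly 2.
print(simpson(math.sin, 0.0, math.pi, 100))
```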
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
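As a small illustration of the finite difference approach, here is a sketch for the one-dimensional boundary value problem −u″ = 1 on (0, 1) with u(0) = u(1) = 0 (a hypothetical example; the exact solution u(x) = x(1 − x)/2 lets us check the answer):

```python
# The second derivative is replaced by (u[i-1] - 2u[i] + u[i+1]) / h^2,
# giving a tridiagonal linear system, solved here by the Thomas algorithm.
def solve_poisson_1d(n):
    h = 1.0 / (n + 1)
    a = [-1.0] * n          # sub-diagonal
    bdiag = [2.0] * n       # main diagonal
    c = [-1.0] * n          # super-diagonal
    d = [h * h] * n         # right-hand side f = 1, scaled by h^2
    # forward elimination
    for i in range(1, n):
        m = a[i] / bdiag[i - 1]
        bdiag[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # back substitution
    u = [0.0] * n
    u[-1] = d[-1] / bdiag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / bdiag[i]
    return u

u = solve_poisson_1d(99)        # interior points x = 0.01, ..., 0.99
print(u[49], 0.5 * (1 - 0.5) / 2)  # value at the midpoint x = 0.5
```

For this particular right-hand side the exact solution is a quadratic, so the finite difference answer agrees with it up to round-off.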
The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU Scientific Library.
There are several popular numerical computing applications such as MATLAB, S-PLUS, LabVIEW, and IDL, as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to MATLAB), IT++ (a C++ library), R (similar to S-PLUS) and certain variants of Python. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic, which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
General introduction
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following. Advanced numerical methods are essential in making numerical weather prediction feasible. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving partial differential equations numerically. Hedge funds (private investment funds) use tools from all fields of numerical analysis to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs; this field is also called operations research. Insurance companies use numerical programs for actuarial analysis.
The rest of this section outlines several important themes of numerical analysis.
History
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolationLinear interpolation
Linear interpolation is a method of curve fitting using linear polynomials. Lerp is an abbreviation for linear interpolation, which can also be used as a verb .Linear interpolation between two known points:...
was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method
Newton's method
In numerical analysis, Newton's method , named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots of a realvalued function. The algorithm is first in the class of Householder's methods, succeeded by Halley's method...
, Lagrange interpolation polynomial
Lagrange polynomial
In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of distinct points x_j and numbers y_j, the Lagrange polynomial is the polynomial of the least degree that at each point x_j assumes the corresponding value y_j...
, Gaussian elimination
Gaussian elimination
In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations. It can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix...
, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun
Abramowitz and Stegun
Abramowitz and Stegun is the informal name of a mathematical reference work edited by Milton Abramowitz and Irene Stegun of the U.S. National Bureau of Standards...
, a 1000plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator
Mechanical calculator
A mechanical calculator is a device used to perform the basic operations of arithmetic. Mechanical calculators are comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator....
was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
Direct and iterative methods

Direct vs iterative methods
Consider the problem of solving
 3x^{3} + 4 = 28
for the unknown quantity x.

 >
x^{3} + 4 = 28. >  Subtract 4 
x^{3} = 24. >  Divide by 3 
x^{3} = 8. >  Take cube roots 
For the iterative method, apply the bisection method
Bisection method
The bisection method in mathematics is a rootfinding method which repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow...
to f(x) = 3x^{3} − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.
+ Iterative method

! a !! b !! mid !! f(mid)

 0 >
3  1.5 
>  1.5 
3  2.25 
>  1.5 
2.25  1.875 
>  1.875 
2.25  2.0625 
We conclude from this table that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
Discretization and numerical integration
In a two hour race, we have measured the speed of the car at three instants and recorded them in the following table.Time  0:20  1:00  1:40 

km/h  140  150  180 
A discretization would be to say that the speed of the car was constant from 0:00 to 0:40, then from 0:40 to 1:20 and finally from 1:20 to 2:00. For instance, the total distance traveled in the first 40 minutes is approximately (2/3h × 140 km/h) = 93.3 km. This would allow us to estimate the total distance traveled as 93.3 km + 100 km + 120 km = 313.3 km, which is an example of numerical integration (see below) using a Riemann sum
Riemann sum
In mathematics, a Riemann sum is a method for approximating the total area underneath a curve on a graph, otherwise known as an integral. It mayalso be used to define the integration operation. The method was named after German mathematician Bernhard Riemann....
, because displacement is the integral
Integral
Integration is an important concept in mathematics and, together with its inverse, differentiation, is one of the two main operations in calculus...
of velocity.
Ill posed problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an illconditioned problem.
Wellposed problem: By contrast, the function is continuous and so evaluating it is wellposed, at least for x being not close to zero.>
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite-precision floating-point arithmetic is used, and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite-precision arithmetic these methods would not, in general, reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
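A minimal sketch of one of these iterative schemes, the bisection method, assuming only that f changes sign on [a, b]; the tolerance plays the role of the convergence test mentioned above:

```python
def bisect(f, a, b, tol=1e-8):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # the sign change, and hence a root, is in [a, m]
            b = m
        else:              # otherwise the root is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Root of x^2 - 2 on [1, 2]: the square root of 2.
print(bisect(lambda x: x * x - 2, 1.0, 2.0))
```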
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
Discretization
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called discretization. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.
The generation and propagation of errors
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of a problem.
Round-off
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
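A one-line illustration: the decimal fractions 0.1, 0.2, and 0.3 have no exact binary representation, so even this trivial sum carries round-off error.

```python
# 0.1 and 0.2 are stored as the nearest binary doubles, so their sum
# is not exactly the double nearest to 0.3.
print(0.1 + 0.2 == 0.3)        # False
print(abs((0.1 + 0.2) - 0.3))  # a tiny but nonzero round-off error
```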
Truncation and discretization error
Truncation errors are committed when an iterative method is terminated, or a mathematical procedure is approximated, and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. For instance, if an iterative scheme for an equation whose exact root is 2 is stopped after 10 or so iterations with the conclusion that the root is roughly 1.99, the truncation error is 0.01.
Once an error is generated, it will generally propagate through the calculation. For instance, we have already noted that the operation + on a calculator (or a computer) is inexact. It follows that a calculation of the type a+b+c+d+e is even more inexact.
What does it mean to say that truncation error is created when we approximate a mathematical procedure? To integrate a function exactly, one must sum infinitely many trapezoids; numerically, only finitely many trapezoids can be summed, so the procedure itself is approximated. Similarly, to differentiate a function, the differential element must approach zero, but numerically only a finite value of the differential element can be chosen.
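The trapezoid discussion can be sketched directly; the function names here are ours. Truncating the infinite sum to n trapezoids leaves a truncation error that shrinks as n grows:

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return s * h

# The integral of sin over [0, pi] is exactly 2; watch the truncation
# error shrink as the number of trapezoids grows.
for n in (4, 16, 64):
    print(n, abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0))
```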
Numerical stability and well-posed problems
Numerical stability is an important notion in numerical analysis. An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. On the contrary, if a problem is ill-conditioned, then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly 1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial approximation x_{1} to √2, for instance x_{1} = 1.4, and then computing improved guesses x_{2}, x_{3}, etc. One such method is the famous Babylonian method, which is given by x_{k+1} = x_{k}/2 + 1/x_{k}. Another iteration, which we will call Method X, is given by x_{k+1} = (x_{k}^{2} − 2)^{2} + x_{k}. We have calculated a few iterations of each scheme in table form below, with initial guesses x_{1} = 1.4 and x_{1} = 1.42.
Observe that the Babylonian method converges fast regardless of the initial guess, whereas Method X converges extremely slowly with initial guess 1.4 and diverges for initial guess 1.42. Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
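Both schemes are short enough to try directly; a sketch in Python, with step counts chosen to match the table:

```python
def babylonian(x, steps):
    """x_{k+1} = x_k/2 + 1/x_k: converges rapidly to sqrt(2)."""
    for _ in range(steps):
        x = x / 2 + 1 / x
    return x

def method_x(x, steps):
    """x_{k+1} = (x_k^2 - 2)^2 + x_k: numerically unstable."""
    for _ in range(steps):
        x = (x * x - 2) ** 2 + x
    return x

print(babylonian(1.4, 5))    # very close to 1.41421356... already
print(method_x(1.42, 27))    # x_28: has blown up into the thousands
```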
Numerical stability is also affected by the number of significant digits the machine keeps. On a machine that keeps only the first four floating-point digits, loss of significance can be illustrated by two algebraically equivalent functions f(x) and g(x), where f subtracts two nearly equal quantities and g is the rationalized form that avoids the subtraction. Comparing the computed values of f(500) and g(500) shows that loss of significance (also called subtractive cancellation) has a huge effect on the results, even though the two functions are equivalent; their equivalence can be shown by starting from f(x) and transforming it algebraically into g(x). The true value of the result is 11.174755..., which is exactly g(500) = 11.1748 after rounding to four decimal digits. If many terms like these appear in a program, the error grows as the program proceeds unless the suitable form of the expression is used at each evaluation. The example is taken from Mathews, Numerical Methods Using MATLAB, 3rd ed.
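The two functions themselves are not reproduced above; a standard pair with exactly this behavior, consistent with the quoted value g(500) = 11.1748, is f(x) = x(√(x+1) − √x) and its rationalized form g(x) = x/(√(x+1) + √x). A sketch simulating four-significant-digit arithmetic:

```python
import math

def fl(v, digits=4):
    """Round v to the given number of significant digits, simulating
    a machine that keeps only that many."""
    if v == 0:
        return 0.0
    e = math.floor(math.log10(abs(v)))
    return round(v, digits - 1 - e)

def f(x):
    # Subtracts two nearly equal square roots: subtractive cancellation.
    return fl(fl(x) * fl(fl(math.sqrt(x + 1)) - fl(math.sqrt(x))))

def g(x):
    # Algebraically equivalent rationalized form: no cancellation.
    return fl(fl(x) / fl(fl(math.sqrt(x + 1)) + fl(math.sqrt(x))))

print(f(500))   # 10.0: cancellation has destroyed the answer
print(g(500))   # 11.18: close to the true value 11.174755...
```

With four significant digits, √501 and √500 both round to values agreeing in their first three digits, so their difference in f retains almost no information, while g never subtracts them.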
Areas of study
The field of numerical analysis is divided into different disciplines according to the problem that is to be solved.
Computing values of functions

Babylonian (x_{1} = 1.4)  | Babylonian (x_{1} = 1.42)  | Method X (x_{1} = 1.4)    | Method X (x_{1} = 1.42)
x_{2} = 1.4142857...      | x_{2} = 1.41422535...      | x_{2} = 1.4016            | x_{2} = 1.42026896
x_{3} = 1.414213564...    | x_{3} = 1.41421356242...   | x_{3} = 1.4028614...      | x_{3} = 1.42056...
...                       | ...                        | ...                       | ...
                          |                            | x_{1000000} = 1.41421...  | x_{28} = 7280.2284...

Interpolation: We have observed the temperature to vary from 20 degrees Celsius at 1:00 to 14 degrees at 3:00. A linear interpolation of this data would conclude that it was 17 degrees at 2:00 and 18.5 degrees at 1:30pm.
Extrapolation: If the gross domestic product
Gross domestic product
Gross domestic product refers to the market value of all final goods and services produced within a country in a given period. GDP per capita is often considered an indicator of a country's standard of living....
of a country has been growing an average of 5% per year and was 100 billion dollars last year, we might extrapolate that it will be 105 billion dollars this year.
Regression: In linear regression, given n points, we compute a line that passes as close as possible to those n points.
Optimization: Say you sell lemonade at a lemonade stand
Lemonade Stand
Lemonade Stand is a basic economics game created in 1973 by Bob Jamison of the Minnesota Educational Computing Consortium. Charlie Kellner ported the game to the Apple II platform in February 1979...
, and notice that at $1, you can sell 197 glasses of lemonade per day, and that for each increase of $0.01, you will sell one glass of lemonade less per day. If you could charge $1.485, you would maximize your profit, but due to the constraint of having to charge a whole cent amount, charging $1.49 per glass will yield the maximum income of $220.52 per day.
Differential equation: If you set up 100 fans to blow air from one end of the room to the other and then you drop a feather into the wind, what happens? The feather will follow the air currents, which may be very complex. One approximation is to measure the speed at which the air is blowing near the feather every second, and advance the simulated feather as if it were moving in a straight line at that same speed for one second, before measuring the wind speed again. This is called the Euler method for solving an ordinary differential equation.
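The Euler method from the last example can be sketched generically: advance in a straight line at the current slope for one step, then re-evaluate. The test equation y' = y is our choice, not from the text:

```python
def euler(f, t0, y0, h, steps):
    """Advance y' = f(t, y) from (t0, y0) using fixed steps of size h."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # move in a straight line at the current slope
        t += h
    return y

# y' = y with y(0) = 1 has exact solution e^t, so y(1) should be
# near e = 2.71828...; smaller steps give a better approximation.
print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))
```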
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, just plugging the number into the formula, is sometimes not very efficient. For polynomials, a better approach is the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control the round-off errors arising from the use of floating-point arithmetic.
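A sketch of the Horner scheme: a degree-n polynomial is evaluated with only n multiplications and n additions by nesting, e.g. 2x³ − 6x² + 2x − 1 = ((2x − 6)x + 2)x − 1.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, given coefficients from the highest
    degree down; one multiply and one add per coefficient."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3: 2*27 - 6*9 + 6 - 1 = 5
print(horner([2, -6, 2, -1], 3))
```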
Interpolation, extrapolation, and regression
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function is sought at a point which is outside the given points.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), we want to determine the unknown function. The least squares method is one popular way to achieve this.
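A minimal least-squares fit of a straight line, the simplest case of regression; the closed-form slope and intercept below are the standard formulas:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b through the points (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Noise-free points on y = 2x + 1 are recovered exactly.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))
```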
Solving equations and systems of equations
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not: an equation is linear if the unknown appears only to the first power, and nonlinear otherwise.
Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, the Gauss–Seidel method, successive over-relaxation, and the conjugate gradient method are usually preferred for large systems.
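A sketch of the Jacobi method on a small system: each unknown is solved for using the previous iterate's values of the others. It converges here because the matrix is strictly diagonally dominant, a standard sufficient condition not stated above:

```python
def jacobi(A, b, steps=50):
    """Jacobi iteration for A x = b, starting from the zero vector."""
    n = len(A)
    x = [0.0] * n
    for _ in range(steps):
        # Solve row i for x[i], plugging in the old values of x[j].
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x

# 4x + y = 9 and x + 3y = 7: solution x = 20/11, y = 19/11.
print(jacobi([[4.0, 1.0], [1.0, 3.0]], [9.0, 7.0]))
```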
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
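Newton's method in a few lines: the iteration x_{k+1} = x_k − f(x_k)/f'(x_k) repeatedly jumps to the root of the tangent line, and near a simple root it converges quadratically:

```python
def newton(f, df, x, steps=8):
    """Newton's method for f(x) = 0, given the derivative df."""
    for _ in range(steps):
        x -= f(x) / df(x)   # root of the tangent line at (x, f(x))
    return x

# Root of x^2 - 2 starting from 1.4: the square root of 2.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.4))
```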
Solving eigenvalue or singular value problems
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
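The flavor of eigenvalue computation can be shown with power iteration, the simplest scheme for the dominant eigenvalue; it is chosen here for brevity and is not one of the decompositions named above:

```python
def power_iteration(A, steps=100):
    """Estimate the dominant eigenvalue of A by repeatedly applying A
    to a vector and renormalizing."""
    n = len(A)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(c) for c in w)
        v = [c / norm for c in w]
    # Rayleigh-quotient estimate from one more application of A.
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(wi * vi for wi, vi in zip(w, v)) / sum(vi * vi for vi in v)

# Eigenvalues of this matrix are 2 and 1; the dominant one is 2.
print(power_iteration([[2.0, 0.0], [0.0, 1.0]]))
```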
Optimization
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, linear programming deals with the case in which both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
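The lemonade-stand example from earlier can be checked by brute force over the feasible whole-cent prices, the crudest possible constrained optimization:

```python
def revenue(cents):
    """Daily revenue at a whole-cent price: 197 glasses sell at $1.00,
    and each extra cent of price loses one sale per day."""
    glasses = 197 - (cents - 100)
    return cents * glasses / 100.0

best = max(range(100, 200), key=revenue)
print(best, revenue(best))
```

Note that $1.48 and $1.49 tie at $220.52, since the unconstrained optimum $1.485 lies exactly midway between them; the text's choice of $1.49 is one of the two maximizers.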
Evaluating integrals
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration) or, in modestly large dimensions, the method of sparse grids.
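A sketch of Monte Carlo integration in one dimension, where it is not actually needed, but the code has the same shape in any dimension: average the integrand at random sample points and scale by the volume of the domain.

```python
import random

def monte_carlo(f, a, b, n=100000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly random sample points; the seed makes runs repeatable."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 over [0, 1] is 1/3; the statistical error of
# the estimate shrinks like 1/sqrt(n), independent of dimension.
print(monte_carlo(lambda x: x * x, 0.0, 1.0))
```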
Differential equations
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
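As a taste of the finite difference approach, here is one explicit time step for the one-dimensional heat equation on a small grid; the grid size, step ratio, and initial data are our choices for illustration:

```python
def heat_step(u, alpha):
    """One explicit finite-difference step of the 1-D heat equation
    with fixed (Dirichlet) endpoint values; alpha = k*dt/dx^2 must be
    at most 0.5 for the explicit scheme to be stable."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# A hot spot in the middle of a cold rod diffuses outward.
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    u = heat_step(u, 0.25)
print(u)
```

The second difference u[i−1] − 2u[i] + u[i+1] is the discretized second derivative; replacing derivatives by such differences is exactly the discretization step described above.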
Software
Since the late twentieth century, most algorithms have been implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free alternative is the GNU Scientific Library.
There are several popular numerical computing applications such as MATLAB, S-PLUS, LabVIEW, and IDL, as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to MATLAB), IT++ (a C++ library), R (similar to S-PLUS), and certain variants of Python. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems, such as Mathematica, also benefit from the availability of arbitrary-precision arithmetic, which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
See also
 Scientific computing
 List of numerical analysis topics
 Gram–Schmidt process
 Numerical differentiation
 Symbolic-numeric computation
 Analysis of algorithms
 Numerical Recipes
External links
Journals
 Numerische Mathematik, volumes 1–66, Springer, 1959–1994 (searchable; pages are images)
 Numerische Mathematik at SpringerLink, volumes 1–112, Springer, 1959–2009
 SIAM Journal on Numerical Analysis, volumes 1–47, SIAM, 1964–2009
Software and Code
 Numerical methods for Fortran programmers
 Java Number Cruncher features free, downloadable code samples that graphically illustrate common numerical algorithms
 Excel Implementations
 Several Numerical Mathematical Utilities (in JavaScript)
Online Texts
 Numerical Recipes, William H. Press (free, downloadable previous editions)
 First Steps in Numerical Analysis, R. J. Hosking, S. Joe, D. C. Joyce, and J. C. Turner
 Numerical Analysis for Engineering, D. W. Harder
 CSEP (Computational Science Education Project), U.S. Department of Energy
Online Course Material
 Numerical Methods, Stuart Dalziel, University of Cambridge
 Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf, University of Pennsylvania
 Numerical methods, John D. Fenton, University of Karlsruhe
 Numerical Methods for Science, Technology, Engineering and Mathematics, Autar Kaw, University of South Florida
 Numerical Analysis Project, John H. Mathews, California State University, Fullerton
 Numerical Methods - Online Course, Aaron Naiman, Jerusalem College of Technology
 Numerical Methods for Physicists, Anthony O'Hare, Oxford University
 Lectures in Numerical Analysis, R. Radok, Mahidol University
 Introduction to Numerical Analysis for Engineering, Henrik Schmidt, Massachusetts Institute of Technology