Machine epsilon
Machine epsilon gives an upper bound on the relative error due to rounding in floating point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps or unit roundoff, and it has the symbols Greek epsilon (ε) or bold Roman u.
Values for standard hardware floating point arithmetics
The following values of machine epsilon are encountered in practice.

| IEEE 754-2008 | Common name | C++ data type | Base | Fractional digits | Machine epsilon | C++ or Python formula | Value |
|---|---|---|---|---|---|---|---|
| binary16 | half precision | not available | 2 | 10 | 2^−11 | pow(2, -11) | 4.88e-04 |
| binary32 | single precision | float | 2 | 23 | 2^−24 | pow(2, -24) | 5.96e-08 |
| binary64 | double precision | double | 2 | 52 | 2^−53 | pow(2, -53) | 1.11e-16 |
| binary128 | quad(ruple) precision | long double | 2 | 112 | 2^−113 | pow(2, -113) | 9.63e-35 |
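As a quick check, the formula column can be evaluated directly. A minimal C sketch (not from the original article; compile with any C compiler and link with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Evaluate the pow formulas from the table's formula column. */
    printf("binary16  : %.2e\n", pow(2, -11));   /* 4.88e-04 */
    printf("binary32  : %.2e\n", pow(2, -24));   /* 5.96e-08 */
    printf("binary64  : %.2e\n", pow(2, -53));   /* 1.11e-16 */
    printf("binary128 : %.2e\n", pow(2, -113));  /* 9.63e-35 */
    return 0;
}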
Machine epsilon depends on both the kind of floating point numbers and the kind of rounding. There are several possible rounding modes for use in floating point arithmetic; the four rounding modes defined by the 1985 IEEE standard and its 2008 revision are the most relevant because most general-purpose computer chips conform to those standards. Other schemes are of interest, but although the standard allows several methods of rounding, programming languages and systems vendors typically do not provide convenient methods to change the rounding mode from the default: round-to-nearest with the tie-breaking scheme round-to-even.
Formal definition
Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure.
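In symbols, with round(·) the chosen rounding procedure and the maximum taken over the nonzero representable range, this definition reads (a standard formulation, supplied here rather than recovered from the source):

\[
  \epsilon_{\text{mach}} \;=\; \max_{x} \frac{\lvert \operatorname{round}(x) - x \rvert}{\lvert x \rvert}
\]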
Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix, also called the base, b, and by the number of significant figures, p. (The whole-number digit out in front counts as one of the p digits.) All the numbers with the same exponent, e, have the spacing b^(e−(p−1)). The spacing changes at the numbers that are perfect powers of b; the spacing on the side of larger magnitude is b times larger than the spacing on the side of smaller magnitude.

Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent e = 0. It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or b^(−(p−1))/2. This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number being rounded, which should be as small as possible to make the relative error large. The worst relative error therefore happens when rounding is applied to numbers of the form 1 + a, where a is between 0 and b^(−(p−1))/2. All these numbers round to 1 with relative error a/(1 + a). The maximum occurs when a is at the upper end of its range. The 1 + a in the denominator hardly matters, so it is left off for expediency, and just b^(−(p−1))/2 is taken as machine epsilon. As has been shown here, the relative error is worst for numbers that round to 1, so machine epsilon also is called unit roundoff, meaning roughly "the maximum error that can occur when rounding to the unit value".
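Instantiating the result for IEEE binary64 (base b = 2, precision p = 53) recovers the value listed in the table above:

\[
  \epsilon_{\text{mach}} \;=\; \tfrac{1}{2}\, b^{-(p-1)} \;=\; \tfrac{1}{2}\, 2^{-52} \;=\; 2^{-53} \;\approx\; 1.11 \times 10^{-16}
\]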
Arithmetic model
Numerical analysis uses machine epsilon to study the effects of rounding error. The actual errors of machine arithmetic are far too complicated to be studied directly, so instead the following simple model is used. The IEEE arithmetic standard says all floating point operations are done as if it were possible to perform the infinite-precision operation and then round the result to a floating point number. Suppose (1) x, y are floating point numbers, (2) ⊕ is an arithmetic operation on floating point numbers such as addition or multiplication, and (3) ∘ is the infinite precision operation. According to the standard, the computer calculates x ⊕ y = round(x ∘ y). By the meaning of machine epsilon, the relative error of the rounding is at most machine epsilon in magnitude, so x ⊕ y = (x ∘ y)(1 + z), where z in absolute magnitude is at most ε or u. The books by Demmel and Higham in the references can be consulted to see how this model is used to analyze the errors of, say, Gaussian elimination.
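The model can be checked numerically for a single operation. The sketch below (an illustration, not from the source) computes a reference sum in long double, assuming that type carries more precision than double, as it does on x86:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* fl(x + y) = (x + y)(1 + z) with |z| <= u = 2^-53 for doubles. */
    double x = 1.0, y = 0.1;
    double computed = x + y;                          /* rounded sum */
    long double exact = (long double)x + (long double)y;
    long double z = ((long double)computed - exact) / exact;
    printf("|z| = %Le, u = %e\n", fabsl(z), pow(2, -53));
    return 0;
}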
Variant definitions
The IEEE standard does not define the terms machine epsilon and unit roundoff, so people are free to use different meanings, which can cause some confusion. The definition given here for machine epsilon is the one used by LAPACK and by James Demmel, a student and colleague of the primary architect of the IEEE standard, Prof. William Kahan. Most numerical analysts use the words machine epsilon and unit roundoff interchangeably and with this meaning.
Sometimes machine epsilon means the spacing of floating point numbers with zero exponent. By this definition, ε equals the value of the unit in the last place relative to 1, and for the round-to-nearest kind of rounding procedure, u = ε/2. Prof. Nicholas Higham uses this scheme of definitions. The MATLAB eps function returns this ε, not the unit roundoff.
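The two conventions differ only by a factor of two for binary formats, which a few lines of C make concrete (DBL_EPSILON is the spacing-at-1 value, matching MATLAB's eps):

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* Spacing of doubles at 1 (Higham / MATLAB convention): 2^-52. */
    printf("spacing at 1 : %e\n", DBL_EPSILON);
    /* Unit roundoff of the LAPACK / Demmel convention: 2^-53.      */
    printf("unit roundoff: %e\n", DBL_EPSILON / 2);
    return 0;
}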
How to determine machine epsilon
Where standard libraries do not provide precomputed values (as float.h does with FLT_EPSILON, DBL_EPSILON, and LDBL_EPSILON for C and C++), the best way to determine machine epsilon is to refer to the table above and use the appropriate pow formula. Computing machine epsilon is often given as a textbook exercise. The following examples compute machine epsilon in the sense of the spacing of the floating point numbers at 1 rather than in the sense of the unit roundoff. Note that results depend on the particular floating-point format used, such as float, double, long double, or similar as supported by the programming language, the compiler, and the runtime library for the actual platform.
Some formats supported by the processor might not be supported by the chosen compiler and operating system. Other formats might be emulated by the runtime library, including arbitrary-precision arithmetic available in some languages and libraries.
In a strict sense, the term machine epsilon means the 1 + eps accuracy directly supported by the processor (or coprocessor), not some 1 + eps accuracy supported by a specific compiler for a specific operating system, unless that combination is known to use the best available format.
A trivial example is the machine epsilon for integer arithmetic on processors without floating point formats; it is 1, because 1+1=2 is the smallest integer greater than 1.
IEEE 754 floating-point formats monotonically increase over positive values and monotonically decrease over negative values. They also have the property that, where f(x) is the reinterpretation of x from an unsigned or two's-complement integer format to a floating-point format of the same width, and 0 < |f(x)| < ∞, |f(x+1) − f(x)| ≥ |f(x) − f(x−1)|. In languages that allow type punning and always use IEEE 754-1985, we can exploit this to compute a machine epsilon in constant time. For example, in C:
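The original listing is not preserved here; the following is a minimal sketch of such a routine, assuming 64-bit doubles and using a union for the type punning:

#include <stdint.h>
#include <stdio.h>

typedef union {
    int64_t i64;
    double  d64;
} dbl_64;

double machine_eps(double value)
{
    dbl_64 s;
    s.d64 = value;
    s.i64++;               /* step to the adjacent representable double */
    return s.d64 - value;  /* the spacing (ULP) at value                */
}

int main(void)
{
    printf("%e\n", machine_eps(1.0));  /* 2.220446e-16 on IEEE doubles */
    return 0;
}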
This will give a result of the same sign as value. If a positive result is always desired, the return statement of machine_eps can be replaced with:
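One possibility, consistent with the sketch above:

return value > 0 ? s.d64 - value : value - s.d64;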
64-bit doubles give 2.220446e-16, which is 2^−52 as expected.
Approximation using C
The following C program does not actually determine the machine epsilon; rather, it determines a number within a factor of two (one binary order of magnitude) of the true machine epsilon, using a linear search.
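The program itself was lost in extraction; the following sketch, saved as machine_epsilon.c per the invocation below, reproduces the abridged output for single precision:

#include <stdio.h>

int main(void)
{
    float machEps = 1.0f;

    printf("current Epsilon, 1 + current Epsilon\n");
    do {
        printf("%G\t%.20f\n", machEps, 1.0f + machEps);
        machEps /= 2.0f;
        /* Keep halving while adding half the candidate to 1
           still changes 1.                                   */
    } while ((float)(1.0f + machEps / 2.0f) != 1.0f);

    printf("\nCalculated Machine epsilon: %G\n", machEps);
    return 0;
}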
Abridged Output
$ cc machine_epsilon.c; ./a.out
current Epsilon, 1 + current Epsilon
1 2.00000000000000000000
0.5 1.50000000000000000000
...
0.000244141 1.00024414062500000000
0.00012207 1.00012207031250000000
6.10352E-05 1.00006103515625000000
3.05176E-05 1.00003051757812500000
1.52588E-05 1.00001525878906250000
7.62939E-06 1.00000762939453125000
3.8147E-06 1.00000381469726562500
1.90735E-06 1.00000190734863281250
9.53674E-07 1.00000095367431640625
4.76837E-07 1.00000047683715820312
2.38419E-07 1.00000023841857910156
Calculated Machine epsilon: 1.19209E-07
Approximation using Java
A similar Java method is sketched below. Another Java implementation (together with a Smalltalk version) can be found in the appendix to Besset's (2000) numerical methods in Java & Smalltalk book; that appendix presents a complete implementation of MACHAR (Cody, 1988) in Smalltalk and Java.
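The method itself did not survive extraction; a minimal sketch consistent with the C program above (single precision, halving search):

public class MachineEpsilon {
    // Halve the candidate until adding half of it to 1 no longer
    // changes 1; what remains approximates the float machine epsilon.
    static float calculateMachineEpsilonFloat() {
        float machEps = 1.0f;
        do {
            machEps /= 2.0f;
        } while ((float) (1.0 + (machEps / 2.0)) != 1.0);
        return machEps;
    }

    public static void main(String[] args) {
        // Prints 1.1920929E-7, i.e. 2^-23.
        System.out.println("Calculated machine epsilon: "
                           + calculateMachineEpsilonFloat());
    }
}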
Approximation using Haskell
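No Haskell listing survives at this heading; a minimal sketch in the same spirit, assuming the Double type:

-- Halve a candidate until 1 + eps/2 is indistinguishable from 1;
-- the survivor is the spacing of Doubles at 1, i.e. 2^-52.
machineEpsilon :: Double
machineEpsilon = go 1.0
  where
    go eps
      | 1.0 + eps / 2 == 1.0 = eps
      | otherwise            = go (eps / 2)

main :: IO ()
main = print machineEpsilon  -- 2.220446049250313e-16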
Approximation using Python
The following Python function uses an approximation method to determine the value of machine epsilon for an arbitrary numerical data type.
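The function did not survive extraction; the following sketch is consistent with the transcripts below (conversion through func makes it work for int, float, and complex alike):

def machineEpsilon(func=float):
    # Halve the candidate, converting through func each time, until
    # adding it to one no longer changes one; return the last value
    # that still made a difference.
    machine_epsilon = func(1)
    while func(1) + func(machine_epsilon) != func(1):
        machine_epsilon_last = machine_epsilon
        machine_epsilon = func(machine_epsilon) / func(2)
    return machine_epsilon_last

Some examples of its output (using IPython):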
In [1]: machineEpsilon(int)
Out[1]: 1
In [2]: machineEpsilon(float)
Out[2]: 2.2204460492503131e-16
In [3]: machineEpsilon(complex)
Out[3]: (2.2204460492503131e-16+0j)
Using NumPy, the value of an inexact type's machine epsilon can be determined using the eps attribute of numpy.finfo as follows (again, in IPython):
In [1]: import numpy
In [2]: numpy.finfo(numpy.float).eps
Out[2]: 2.2204460492503131e-16
In [3]: numpy.finfo(numpy.complex).eps
Out[3]: 2.2204460492503131e-16
Approximation using Ada
In Ada, epsilon may be inferred from the Digits attribute.
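No Ada listing survives here; as a hedged alternative sketch, the Model_Epsilon attribute (Ada 95 and later) yields the value directly, without going through Digits:

with Ada.Text_IO;

procedure Print_Epsilon is
begin
   --  Model_Epsilon of a floating point type is its machine epsilon
   --  in the spacing sense.
   Ada.Text_IO.Put_Line (Float'Image (Float'Model_Epsilon));
end Print_Epsilon;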
Approximation using Prolog
An approximation using arithmetic and recursive predicates in Prolog is:
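The predicate did not survive extraction; a minimal sketch that reproduces the session below (it halves E while 1 + E still exceeds 1, then prints the first E that rounds away):

% Halve E while 1 + E > 1 in floating point; print the first E
% for which 1 + E rounds back to exactly 1 (the unit roundoff).
epsilon(E) :-
    X is 1 + E,
    X > 1, !,
    Half is E / 2,
    epsilon(Half).
epsilon(E) :-
    writeln(E).

An example execution in the SWI-Prolog interpreter: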
1 ?- epsilon(1.0).
1.1102230246251565e-16
true .
See also
- Floating point, general discussion of accuracy issues in floating point arithmetic
- ULP, unit in the last place
External links
- MACHAR, a routine (in C and Fortran) to "dynamically compute machine constants" (ACM algorithm 722)
- Diagnosing floating point calculations precision, an implementation of MACHAR in Component Pascal and Oberon, based on the Fortran 77 version of MACHAR published in Numerical Recipes (Press et al., 1992)