Division (digital)
Several algorithms exist to perform division in digital designs. These algorithms fall into two main categories: slow division and fast division. Slow division algorithms produce one digit of the final quotient per iteration. Examples of slow division include restoring, non-performing restoring, non-restoring, and SRT division. Fast division methods start with a close approximation to the final quotient and produce twice as many digits of the final quotient on each iteration. Newton–Raphson and Goldschmidt division fall into this category.

The following division methods are all based on the form Q = N / D, where
  • Q = Quotient
  • N = Numerator (dividend)
  • D = Denominator (divisor).

Slow division methods

Slow division methods are all based on a standard recurrence equation

  P_{j+1} = R × P_j − q_{n−(j+1)} × D

where:
  • P_j = the partial remainder of the division
  • R = the radix
  • q_{n−(j+1)} = the digit of the quotient in position n−(j+1), where the digit positions are numbered from least significant 0 to most significant n−1
  • n = number of digits in the quotient
  • D = the denominator.

Restoring division

Restoring division operates on fixed-point fractional numbers and depends on the following assumptions:
  • N < D
  • 0 < N, D < 1.


The quotient digits q are formed from the digit set {0,1}.

The basic algorithm for binary (radix 2) restoring division is:


P := N
D := D << n              -- P and D need twice the word width of N and Q
for i := n - 1 .. 0 do   -- for example 31..0 for 32 bits
  P := 2 * P - D         -- trial subtraction from shifted value
  if P >= 0 then
    q(i) := 1            -- result bit 1
  else
    q(i) := 0            -- result bit 0
    P := P + D           -- new partial remainder is (restored) shifted value
  end
end

where N=Numerator, D=Denominator, n=#bits, P=Partial remainder, q(i)=bit #i of quotient

The above restoring division algorithm can avoid the restoring step by saving the shifted value 2P before the subtraction in an additional register T (i.e., T=P<<1) and copying register T to P when the result of the subtraction 2P - D is negative.

Non-performing restoring division is similar to restoring division, except that the value 2P is saved before the trial subtraction, so D does not need to be added back in when the trial result 2P − D is negative.
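
As a concrete illustration of the pseudocode above, the following is a minimal Python sketch of restoring division for unsigned integers (the function name, the Python setting, and the returned remainder are our own choices, not part of the original presentation):

def restoring_divide(n, d, bits):
    """Radix-2 restoring division: one quotient bit per iteration."""
    assert d > 0 and 0 <= n < (d << bits)   # quotient must fit in 'bits' bits
    p = n                      # partial remainder
    d <<= bits                 # align the divisor with the top of p
    q = 0
    for i in range(bits - 1, -1, -1):
        p = 2 * p - d          # trial subtraction from the shifted value
        if p >= 0:
            q |= 1 << i        # result bit 1
        else:
            p = p + d          # restore the partial remainder
    return q, p >> bits        # quotient and remainder

# restoring_divide(100, 7, 8) returns (14, 2), i.e. 100 = 7 * 14 + 2.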

Non-restoring division

Non-restoring division uses the digit set {−1,1} for the quotient digits instead of {0,1}. The basic algorithm for binary (radix 2) non-restoring division is:

P[0] := N
i := 0
while i < n do
  if P[i] >= 0 then
    q[n-(i+1)] := 1
    P[i+1] := 2*P[i] - D
  else
    q[n-(i+1)] := -1
    P[i+1] := 2*P[i] + D
  end if
  i := i + 1
end while

Following this algorithm, the quotient is in a non-standard form consisting of digits of −1 and +1. This form needs to be converted to binary to form the final quotient. To convert a quotient from the digit set {−1, +1} to the digit set {0, 1}:

Steps:
1. Mask the negative term: form the word that has a 1 bit in each position holding a −1 quotient digit.
2. Form the two's complement of this word.
3. Form the positive term: the word that has a 1 bit in each position holding a +1 quotient digit.
4. Sum the positive term and the two's complement (within the word width) to obtain the quotient in binary.
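
The loop and the final conversion can be sketched in Python as follows (the integer framing, the helper name, and the final correction of a negative remainder are our additions for a self-contained example):

def nonrestoring_divide(n, d, bits):
    """Radix-2 non-restoring division with quotient digits -1/+1."""
    assert d > 0 and 0 <= n < (d << bits)
    p = n                          # partial remainder, may go negative
    d_shifted = d << bits
    pos = 0                        # mask of positions holding digit +1
    for i in range(bits - 1, -1, -1):
        if p >= 0:
            pos |= 1 << i          # digit +1
            p = 2 * p - d_shifted
        else:                      # digit -1
            p = 2 * p + d_shifted
    neg = ~pos & ((1 << bits) - 1) # mask of positions holding digit -1
    q = pos - neg                  # i.e. add the two's complement of neg
    if p < 0:                      # final correction when the remainder is negative
        q -= 1
        p += d_shifted
    return q, p >> bits

# nonrestoring_divide(100, 7, 8) returns (14, 2), matching restoring division.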

SRT division

Named for its creators (Sweeney, Robertson, and Tocher), SRT division is a popular method for division in many microprocessor implementations. SRT division is similar to non-restoring division, but it uses a lookup table based on the dividend and the divisor to determine each quotient digit. The Intel Pentium processor's infamous floating-point division bug was caused by an incorrectly coded lookup table: five entries that were believed to be theoretically unreachable had been omitted from more than one thousand table entries.
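
Production SRT dividers are radix-4 or higher and read each quotient digit from a table indexed by a few leading bits of the shifted partial remainder and of the divisor. As a much simplified, purely illustrative radix-2 sketch (our own construction, not the table used in any real processor), each digit in {−1, 0, +1} can be chosen by comparing the shifted partial remainder only against ±1/2 rather than against the full divisor:

from fractions import Fraction

def srt_divide(n, d, bits):
    """Toy radix-2 SRT for 0 <= n < d and 1/2 <= d < 1; returns the
    quotient truncated to 'bits' fractional bits."""
    p, q = n, 0
    for _ in range(bits):
        p = 2 * p
        if p >= Fraction(1, 2):      # digit selection looks only at the top of p
            digit, p = 1, p - d
        elif p < Fraction(-1, 2):
            digit, p = -1, p + d
        else:
            digit = 0                # the redundant digit 0 tolerates the crude estimate
        q = 2 * q + digit            # on-the-fly conversion of the signed digits
    if p < 0:                        # final correction for a negative remainder
        q -= 1
    return Fraction(q, 2 ** bits)

# srt_divide(Fraction(1, 3), Fraction(2, 3), 8) == Fraction(1, 2)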

Newton–Raphson division

Newton–Raphson uses Newton's method to find the reciprocal of the divisor D, and multiplies that reciprocal by the numerator N to find the final quotient Q.

The steps of Newton–Raphson are:
  1. Calculate an estimate X_0 for the reciprocal 1/D of the divisor D.
  2. Compute successively more accurate estimates X_1, X_2, ..., X_S of the reciprocal.
  3. Compute the quotient by multiplying the dividend by the reciprocal of the divisor: Q = N X_S.


In order to apply Newton's method to find the reciprocal of D, it is necessary to find a function f(X) which has a zero at X = 1/D. The obvious such function is f(X) = D X − 1, but the Newton–Raphson iteration for this is unhelpful, since it cannot be computed without already knowing the reciprocal of D. A function which does work is f(X) = 1/X − D, for which the Newton–Raphson iteration gives

  X_{i+1} = X_i − f(X_i)/f'(X_i) = X_i + X_i (1 − D X_i) = X_i (2 − D X_i),

which can be calculated from X_i using only multiplication and subtraction, or using two fused multiply–adds.

If the error is defined as E_i = 1 − D X_i, then

  E_{i+1} = 1 − D X_{i+1} = 1 − D X_i (2 − D X_i) = (1 − D X_i)^2 = E_i^2.

Apply a bit-shift to the divisor D to scale it so that 0.5 ≤ D ≤ 1. The same bit-shift should be applied to the numerator N so that the quotient does not change. Then one could use a linear approximation in the form

  X_0 = T_1 + T_2 D

to initialize Newton–Raphson. To minimize the maximum of the absolute value of the error of this approximation on the interval [0.5, 1], one should use

  X_0 = 48/17 − 32/17 D.

Using this approximation, the error of the initial value satisfies

  |E_0| ≤ 1/17 ≈ 0.059.

Since for this method the convergence is exactly quadratic, it follows that

  S = ⌈ log₂((P + 1) / log₂ 17) ⌉

steps are enough to calculate the value up to P binary places.
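
A minimal double-precision Python sketch of these steps might look as follows (the function name, the use of Python floats, and the fixed iteration count are our assumptions; hardware implementations work in fixed precision with fused multiply–adds):

def newton_divide(n, d, steps=4):
    """Estimate n/d via a Newton-Raphson reciprocal of d.
    Assumes d has already been scaled into [0.5, 1] and n by the same shift."""
    assert 0.5 <= d <= 1.0
    x = 48 / 17 - 32 / 17 * d        # linear initial estimate, |E_0| <= 1/17
    for _ in range(steps):
        x = x * (2 - d * x)          # each iteration squares the error E
    return n * x                     # Q = N * (1/D)

# In exact arithmetic 4 steps give an error below (1/17)**16 < 2**-53;
# here the result is limited by double rounding instead.
# e.g. newton_divide(0.3, 0.75) is 0.4 up to double-precision rounding.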

Goldschmidt division

Goldschmidt division (after Robert Elliott Goldschmidt) uses an iterative process to repeatedly multiply both the dividend and divisor by a common factor F_i to converge the divisor, D, to 1 as the dividend, N, converges to the quotient Q:

  Q = N/D = (N F_1)/(D F_1) = (N F_1 F_2)/(D F_1 F_2) = ...

The steps for Goldschmidt division are:
  1. Generate an estimate for the multiplication factor Fi .
  2. Multiply the dividend and divisor by Fi .
  3. If the divisor is sufficiently close to 1, return the dividend; otherwise, loop to step 1.


Assuming N/D has been scaled so that 0 < D < 1, each F_i is based on the current value of D:

  F_{i+1} = 2 − D_i.

Multiplying the dividend and divisor by the factor yields:

  N_{i+1} / D_{i+1} = (N_i F_{i+1}) / (D_i F_{i+1}).

After a sufficient number of iterations k:

  Q = N_k.
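
A minimal floating-point sketch of this loop (the stopping tolerance, the names, and the use of Python floats are our own choices; real implementations run a fixed, precision-determined number of rounds):

def goldschmidt_divide(n, d, eps=1e-15):
    """Goldschmidt division, assuming d has been scaled into (0, 1)."""
    assert 0 < d < 1
    while abs(1 - d) > eps:          # step 3: stop once the divisor is close to 1
        f = 2 - d                    # step 1: estimate the multiplication factor
        n, d = n * f, d * f          # step 2: multiply dividend and divisor by F
    return n                         # with D ~ 1, the dividend has become Q

# e.g. goldschmidt_divide(0.3, 0.75) is 0.4 up to double-precision rounding.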

The Goldschmidt method is used in AMD Athlon CPUs and later models.

Binomial theorem

The Goldschmidt method can be used with factors that allow simplifications by the binomial theorem.
Assume N/D has been scaled by a power of two such that D ∈ (1/2, 1].
We choose D = 1 − x and F_i = 1 + x^(2^i).
This yields

  N/D = N (1 + x) / (1 − x^2) = N (1 + x)(1 + x^2) / (1 − x^4) = ... = N (1 + x)(1 + x^2) ··· (1 + x^(2^(k−1))) / (1 − x^(2^k)).

Since x ≤ 1/2, after k steps the denominator 1 − x^(2^k) can be rounded to 1 with a relative error of at most x^(2^k) ≤ 2^(−2^k), and thus we obtain 2^k binary digits of precision.
This algorithm is sometimes referred to as the IBM method.
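
To make the repeated-squaring structure concrete, here is a small sketch under the scaling assumption above (the function name, float arithmetic, and step count are ours):

def binomial_divide(n, d, steps=6):
    """Goldschmidt with binomial-theorem factors: with D = 1 - x,
    multiplying N by (1 + x), (1 + x^2), (1 + x^4), ... leaves an implicit
    denominator of 1 - x**(2**steps), which is rounded to 1."""
    assert 0.5 < d <= 1
    x = 1 - d                        # x <= 1/2 by the scaling assumption
    for _ in range(steps):
        n = n * (1 + x)              # multiply the dividend by the next factor
        x = x * x                    # squaring x produces the next factor
    return n

# e.g. binomial_divide(0.3, 0.75) is 0.4 up to double rounding
# (x = 0.25, and x**(2**6) is far below double-precision epsilon).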

Large integer methods

Methods designed for hardware implementation generally do not scale to integers with thousands or millions of decimal digits; these frequently occur, for example, in modular reductions in cryptography. For these large integers, more efficient division algorithms transform the problem to use a small number of multiplications, which can then be done using an asymptotically efficient multiplication algorithm such as Toom–Cook multiplication or the Schönhage–Strassen algorithm. Examples include reduction to multiplication by Newton's method as described above, as well as the slightly faster Barrett reduction algorithm. Newton's method is particularly efficient in scenarios where one must divide by the same divisor many times, since after the initial Newton inversion only one (truncated) multiplication is needed for each division.
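
As a rough sketch of the Newton approach for arbitrary-precision integers (the precision and guard-bit choices, the names, and the final correction loop are illustrative assumptions of ours, not a tuned implementation):

import math

def big_divide(n, d, guard=8):
    """floor(n/d) for large non-negative integers via a fixed-point Newton reciprocal."""
    assert d > 0 and n >= 0
    b = d.bit_length()
    k = n.bit_length() + guard          # fractional bits kept in the reciprocal
    dk = (d << k) >> b                  # D = d / 2**b in [1/2, 1), as a k-bit fraction
    x = (48 << k) // 17 - (32 * dk) // 17   # linear seed X_0 = 48/17 - 32/17 * D
    steps = max(1, math.ceil(math.log2((k + 1) / math.log2(17))))
    for _ in range(steps):              # Newton step X <- X * (2 - D * X)
        t = (2 << k) - ((d * x) >> b)
        x = (x * t) >> k
    q = (n * x) >> (k + b)              # x ~ 2**(k+b) / d, so this is ~ n/d
    while (q + 1) * d <= n:             # fix the few units lost to truncation
        q += 1
    while q * d > n:
        q -= 1
    return q

# big_divide(10**1000 + 3, 10**400 - 1) agrees with (10**1000 + 3) // (10**400 - 1).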

Division by a constant

Division by a constant D is equivalent to multiplication by its reciprocal.
Since the denominator D is constant, so is its reciprocal (1/D). Thus it is possible to compute the value of (1/D) once at compile time, and at run time perform the multiplication N·(1/D) rather than the division N/D.

When doing floating point arithmetic, the use of (1/D) presents no problem.
But when doing integer arithmetic it is problematic, as (1/D) will always evaluate to zero (assuming D > 1), so it is necessary to do some manipulations to make it work.

Note that it is not necessary to use exactly (1/D). Any value (X/Y) will work as long as it reduces to (1/D). For example, for division by 3 the reciprocal is 1/3, so the division could be changed to a multiplication by 1/3, but it could also be a multiplication by 2/6, or 3/9, or 194/582.
So the desired operation N/D can be changed to N·(X/Y), where (X/Y) equals (1/D). Although the quotient X/Y would still evaluate to zero in integer arithmetic, it is possible to reorder the operations to produce (N·X)/Y.

This form appears to be less efficient because it involves both a multiplication and a division, but if Y is a power of two, then the division can be replaced by a fast bit shift. So the effect is to replace a division by a multiply and a shift.
There is one final obstacle to overcome: in general it is not possible to find values X and Y such that Y is a power of 2 and X/Y is exactly 1/D. But it turns out that it is not necessary for X/Y to be exactly equal to 1/D in order to get the correct final result. It is sufficient to find values for X and Y such that X/Y is "close enough" to 1/D.
Note that the shift operation loses information by throwing away bits. It is always possible to find values of X and Y (with Y being a power of 2) such that the error introduced by the fact that X/Y is only approximately equal to 1/D falls entirely within the bits that are discarded.

As a concrete example, for 32-bit unsigned integers, division by 3 can be replaced with a multiplication by 2863311531 (hexadecimal 0xAAAAAAAB) followed by a right shift of 33 bits. The denominator Y in this case is equal to 2^33, and the numerator X = 2863311531 is 2^33/3 rounded up.
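
A quick Python check of this particular example (no masking is needed here because Python integers are unbounded; on real hardware the multiply produces a 64-bit product whose upper half is then shifted):

MULT, SHIFT = 0xAAAAAAAB, 33     # ceil(2**33 / 3) and the matching shift

def div3_u32(n):
    """Unsigned 32-bit division by the constant 3 via multiply-and-shift."""
    assert 0 <= n < 2**32
    return (n * MULT) >> SHIFT   # one multiplication and one shift, no division

for n in (0, 1, 2, 3, 4, 5, 2**31, 2**32 - 1, 123456789):
    assert div3_u32(n) == n // 3
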
In some cases, division by a constant can be accomplished in even less time by converting the "multiply by a constant" into a series of shifts and adds or subtracts.

Rounding error

Round-off error can be introduced by division operations due to limited precision.
