Decimal floating point


Decimal floating-point arithmetic refers to both a representation and operations on decimal floating-point numbers. Working directly with decimal fractions can avoid the rounding errors that otherwise typically occur when converting between decimal fractions and binary fractions.
The advantage of decimal floating-point representation over decimal fixed-point and integer representation is that it supports a much wider range of values. For example, while a fixed-point representation that allocates 8 decimal digits and 2 decimal places can represent the numbers 123456.78, 8765.43, 123.00, and so on, a floating-point representation with 8 decimal digits could also represent 1.2345678, 1234567.8, 0.000012345678, 12345678000000000, and so on. This wider range can dramatically slow the accumulation of rounding errors during successive calculations; for example, the Kahan summation algorithm can be used in floating point to add many numbers with no asymptotic accumulation of rounding error.
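The compensated-summation idea can be sketched in a few lines of Python; the code below is an illustrative implementation of the Kahan algorithm for binary floats, not part of any decimal standard, and the function name is an assumption.

def kahan_sum(values):
    # Running sum plus a separate compensation term that holds the
    # low-order digits lost by previous additions.
    total = 0.0
    c = 0.0
    for x in values:
        y = x - c              # apply the stored correction first
        t = total + y          # low-order digits of y may be lost here
        c = (t - total) - y    # recover the lost part (with opposite sign)
        total = t
    return total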

Implementations

Early mechanical uses of decimal floating point are evident in the abacus, slide rule, the Smallwood calculator, and some other calculators that support entries in scientific notation. In the case of the mechanical calculators, the exponent is often treated as side information that is accounted for separately.
The IBM 650 computer supported an 8-digit decimal floating-point format in 1953. The otherwise binary Wang VS machine supported a 64-bit decimal floating-point format in 1977. The floating-point support library for the Motorola 68040 processor provided a 96-bit decimal floating-point storage format in 1990.
Some computer languages have implementations of decimal floating-point arithmetic, including PL/I, C#, Java with BigDecimal, Emacs with Calc, and Python's decimal module.
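As an illustration, Python's decimal module (listed above) avoids the decimal-to-binary conversion error that binary floats exhibit; a short sketch:

from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 with binary floating point
print(Decimal('0.1') + Decimal('0.2'))  # 0.3 with decimal floating point
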
In 1987, the IEEE released IEEE 854, a standard for computing with decimal floating point, which lacked a specification for how floating-point data should be encoded for interchange with other systems. This was subsequently addressed in IEEE 754-2008, which standardized the encoding of decimal floating-point data, albeit with two different alternative methods.
IBM POWER6 and newer POWER processors include DFP in hardware, as does the IBM System z9. SilMinds offers SilAx, a configurable vector DFP coprocessor, and Fujitsu offers 64-bit SPARC processors with DFP in hardware. IEEE 754-2008 defines DFP in more detail.
Microsoft C# (.NET) uses System.Decimal.

IEEE 754-2008 encoding

The IEEE 754-2008 standard defines 32-, 64- and 128-bit decimal floating-point representations. Like the binary floating-point formats, the number is divided into a sign, an exponent, and a significand. Unlike binary floating-point, numbers are not necessarily normalized; values with few significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. When the significand is zero, the exponent can be any value at all.
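Python's decimal module, while not an IEEE 754-2008 interchange encoding, follows the same unnormalized model and can be used to illustrate the point: numerically equal values may carry different coefficient/exponent pairs.

from decimal import Decimal

a = Decimal('1E2')    # coefficient 1, exponent 2
b = Decimal('100')    # coefficient 100, exponent 0
print(a == b)         # True -- the same value
print(a.as_tuple())   # DecimalTuple(sign=0, digits=(1,), exponent=2)
print(b.as_tuple())   # DecimalTuple(sign=0, digits=(1, 0, 0), exponent=0)
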
decimal32    decimal64    decimal128    decimal(32·k)               Format
1            1            1             1                           Sign field (bits)
5            5            5             5                           Combination field (bits)
6            8            12            w = 2×k + 4                 Exponent continuation field (bits)
20           50           110           t = 30×k − 10               Coefficient continuation field (bits)
32           64           128           32×k                        Total size (bits)
7            16           34            p = 3×t/10 + 1 = 9×k − 2    Coefficient size (decimal digits)
192          768          12288         3×2^w = 48×4^k              Exponent range
96           384          6144          Emax = 3×2^(w−1)            Largest value is 9.99...×10^Emax
−95          −383         −6143         Emin = 1 − Emax             Smallest normalized value is 1.00...×10^Emin
−101         −398         −6176         Etiny = 2 − p − Emax        Smallest non-zero value is 1×10^Etiny

The exponent ranges were chosen so that the range available to normalized values is approximately symmetrical. Since this cannot be done exactly with an even number of possible exponent values, the extra value was given to Emax.
Two different representations are defined, described in the subsections below: one with a binary integer significand field, which encodes the significand as a binary integer, and one with a densely packed decimal significand field, which encodes the significand as a sequence of decimal digits. Both alternatives provide exactly the same range of representable values.
The most significant two bits of the exponent are limited to the values 0 through 2 (binary 00, 01, or 10), and the most significant 4 bits of the significand are limited to the values 0 through 9. The resulting 30 possible combinations are encoded in a 5-bit field, along with special forms for infinity and NaN.
If the most significant 4 bits of the significand are between 0 and 7, the encoded value begins as follows:
s 00mmm xxx Exponent begins with 00, significand with 0mmm
s 01mmm xxx Exponent begins with 01, significand with 0mmm
s 10mmm xxx Exponent begins with 10, significand with 0mmm
If the leading 4 bits of the significand are binary 1000 or 1001, the number begins as follows:
s 1100m xxx Exponent begins with 00, significand with 100m
s 1101m xxx Exponent begins with 01, significand with 100m
s 1110m xxx Exponent begins with 10, significand with 100m
The leading bit is a sign bit, and the following bits encode the additional exponent bits and the remainder of the most significant digit, but the details vary depending on the encoding alternative used.
The final combinations are used for infinities and NaNs, and are the same for both alternative encodings:
s 11110 x ±Infinity
s 11111 0 quiet NaN
s 11111 1 signaling NaN
In the latter cases, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to NaNs by filling it with a single byte value.
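The listings above can be expressed as a small Python sketch that classifies a 5-bit combination field, returning the two leading exponent bits and the leading significand digit, or flagging a special value; the function name and return conventions are illustrative assumptions.

def classify_combination(comb):
    # comb is the 5-bit combination field as an integer (0-31).
    if comb >> 1 == 0b1111:                  # 11110 or 11111
        return 'infinity' if comb == 0b11110 else 'NaN'
    if comb >> 3 != 0b11:                    # leading significand digit 0-7
        exponent_msbs = comb >> 3            # 00, 01 or 10
        leading_digit = comb & 0b111         # 0mmm
    else:                                    # leading significand digit 8 or 9
        exponent_msbs = (comb >> 1) & 0b11   # 00, 01 or 10
        leading_digit = 0b1000 | (comb & 1)  # 100m
    return exponent_msbs, leading_digit

For example, classify_combination(0b10101) returns (0b10, 5): an exponent beginning with 10 and a leading significand digit of 5.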

Binary integer significand field

This format uses a binary significand from 0 to 10^p − 1. For example, the Decimal32 significand can be up to 10^7 − 1 = 9999999 = 98967F (hexadecimal) = 1001 1000 1001 0110 0111 1111 (binary). While the encoding can represent larger significands, they are illegal and the standard requires implementations to treat them as 0, if encountered on input.
As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher.
If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 8 bits following the sign bit, and the significand is the remaining 23 bits, with an implicit leading 0 bit, shown here in parentheses:

s 00eeeeee   (0)ttt tttttttttt tttttttttt
s 01eeeeee   (0)ttt tttttttttt tttttttttt
s 10eeeeee   (0)ttt tttttttttt tttttttttt

This includes subnormal numbers where the leading significand digit is 0.
If the 2 bits after the sign bit are "11", then the 8-bit exponent field is shifted 2 bits to the right, and the represented significand is in the remaining 21 bits. In this case there is an implicit leading 3-bit sequence "100" in the true significand:

s 11 00eeeeee   (100)t tttttttttt tttttttttt
s 11 01eeeeee   (100)t tttttttttt tttttttttt
s 11 10eeeeee   (100)t tttttttttt tttttttttt

The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand.
Note that the leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000 is encoded as binary 0111 1010 0001 0010 0000 0000, with the leading 4 bits encoding 7; the first significand that requires a 24th bit (and thus the second encoding form) is 2^23 = 8388608.
In the above cases, the value represented is (−1)^sign × significand × 10^(exponent − 101), where 101 is the exponent bias of the Decimal32 format.
Decimal64 and Decimal128 operate analogously, but with larger exponent continuation and significand fields. For Decimal128, the second encoding form is actually never used; the largest valid significand of 10^34 − 1 = 1ED09BEAD87C0378D8E63FFFFFFFF (hexadecimal) can be represented in 113 bits.
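The Decimal32 case described above can be sketched as a Python decoder for the binary integer significand encoding; the function name, the return format, and the use of the Decimal32 exponent bias of 101 are assumptions made for illustration.

def decode_decimal32_bid(x):
    # x is the 32-bit encoding as an unsigned integer.
    sign = (x >> 31) & 1
    comb = (x >> 26) & 0x1F                   # 5 bits after the sign bit
    if comb == 0b11110:
        return '-Infinity' if sign else '+Infinity'
    if comb == 0b11111:
        return 'sNaN' if (x >> 25) & 1 else 'qNaN'
    if (x >> 29) & 0b11 != 0b11:
        # First form: 8-bit exponent, 23-bit significand, implicit leading 0
        exponent = (x >> 23) & 0xFF
        significand = x & 0x7FFFFF
    else:
        # Second form: exponent field shifted right by 2, implicit '100' prefix
        exponent = (x >> 21) & 0xFF
        significand = (0b100 << 21) | (x & 0x1FFFFF)
    if significand > 10**7 - 1:               # non-canonical significand: treated as 0
        significand = 0
    return sign, significand, exponent - 101  # value = (-1)**sign * significand * 10**(exponent - 101)

For example, decode_decimal32_bid(0b0_01100101_00000000000000000000001) returns (0, 1, 0), that is, +1 × 10^0 = 1.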

Densely packed decimal significand field

In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding.
The leading 2 bits of the exponent and the leading digit of the significand are combined into the five bits that follow the sign bit. This is followed by a fixed-offset exponent continuation field.
Finally, the significand continuation field is made of 2, 5, or 11 10-bit declets, each encoding 3 decimal digits.
If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits (TTT) after that are interpreted as the leading decimal digit (0 to 7):

Comb.      Exponent      Leading significand digit
s 00 TTT   (00)eeeeee    0TTT
s 01 TTT   (01)eeeeee    0TTT
s 10 TTT   (10)eeeeee    0TTT

If the first two bits after the sign bit are "11", then the next two bits are the leading bits of the exponent, and the last bit (T) is prefixed with "100" to form the leading decimal digit (8 or 9):

Comb.      Exponent      Leading significand digit
s 11 00 T  (00)eeeeee    100T
s 11 01 T  (01)eeeeee    100T
s 11 10 T  (10)eeeeee    100T

The remaining two combinations of the 5-bit field are used to represent ±infinity and NaNs, respectively.
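A corresponding Python sketch for the densely packed decimal form splits a 32-bit encoding into the fields just described; decoding each 10-bit declet into its three digits requires the DPD table and is omitted here, and the assumed exponent bias of 101 follows the earlier format table.

def split_decimal32_dpd(x):
    sign = (x >> 31) & 1
    comb = (x >> 26) & 0x1F                            # 5-bit combination field
    if comb >> 1 == 0b1111:                            # 11110 = infinity, 11111 = NaN
        return sign, 'infinity' if comb == 0b11110 else 'NaN'
    if comb >> 3 != 0b11:                              # leading decimal digit 0-7
        exp_msbs, lead_digit = comb >> 3, comb & 0b111
    else:                                              # leading decimal digit 8 or 9
        exp_msbs, lead_digit = (comb >> 1) & 0b11, 8 + (comb & 1)
    exponent = (exp_msbs << 6) | ((x >> 20) & 0x3F)    # 2 + 6 exponent bits (biased)
    declets = [(x >> 10) & 0x3FF, x & 0x3FF]           # two 10-bit declets = 6 digits
    return sign, lead_digit, declets, exponent - 101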

Floating-point arithmetic operations

The usual rule for performing floating-point arithmetic is that the exact mathematical value is calculated, and the result is then rounded to the nearest representable value in the specified precision. This is in fact the behavior mandated for IEEE-compliant computer hardware, under normal rounding behavior and in the absence of exceptional conditions.
For ease of presentation and understanding, 7-digit precision will be used in the examples. The fundamental principles are the same in any precision.
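The examples can be reproduced with Python's decimal module by setting the arithmetic context to 7 significant digits (a sketch; the module's default precision is 28 digits):

from decimal import getcontext
getcontext().prec = 7   # round every arithmetic result to 7 significant digits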

Addition

A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number is shifted right by 3 digits. We proceed with the usual addition method:
The following example is decimal, which simply means the base is 10.
123456.7 = 1.234567 × 10^5
101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5
Hence:
123456.7 + 101.7654 = (1.234567 × 10^5) + (1.017654 × 10^2)
                    = (1.234567 × 10^5) + (0.001017654 × 10^5)
                    = 10^5 × (1.234567 + 0.001017654)
                    = 10^5 × 1.235584654
This is nothing other than converting to scientific notation.
In detail:
e=5; s=1.234567
+ e=2; s=1.017654
e=5; s=1.234567
+ e=5; s=0.001017654
--------------------
e=5; s=1.235584654
This is the true result, the exact sum of the operands. It will be rounded to 7 digits and then normalized if necessary. The final result is:
e=5; s=1.235585
Note that the low 3 digits of the second operand are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them:
e=5; s=1.234567
+ e=−3; s=9.876543
e=5; s=1.234567
+ e=5; s=0.00000009876543
----------------------
e=5; s=1.23456709876543
e=5; s=1.234567
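Both effects can be reproduced with the decimal module at 7-digit precision (a sketch; the results in the comments assume the default round-half-even rounding mode):

from decimal import Decimal, getcontext
getcontext().prec = 7

print(Decimal('123456.7') + Decimal('101.7654'))     # 123558.5 -- the low digits of the second operand are lost
print(Decimal('123456.7') + Decimal('0.009876543'))  # 123456.7 -- the second operand is absorbed entirely
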
Another problem of loss of significance occurs when two close numbers are subtracted.
For example, e=5; s=1.234571 and e=5; s=1.234567 are representations of the rationals 123457.1467 and 123456.659.
e=5; s=1.234571
− e=5; s=1.234567
----------------
e=5; s=0.000004
e=−1; s=4.000000 (after rounding and normalization)
The best representation of this difference is e=−1; s=4.877000, which differs more than 20% from e=−1; s=4.000000. In extreme cases, the final result may be zero even though an exact calculation may be several million. This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful.
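The cancellation can likewise be reproduced with the decimal module; the unary + operator rounds each input to the 7-digit working precision before the subtraction (a sketch):

from decimal import Decimal, getcontext
getcontext().prec = 7

a = +Decimal('123457.1467')   # rounds to 123457.1
b = +Decimal('123456.659')    # rounds to 123456.7
print(a - b)                  # 0.4, although the exact difference is 0.4877
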
Dealing with the consequences of these errors is a topic in numerical analysis.

Multiplication

To multiply, the significands are multiplied, while the exponents are added, and the result is rounded and normalized.
e=3; s=4.734612
× e=5; s=5.417242
-----------------------
e=8; s=25.648538980104 (true product)
e=8; s=25.64854 (after rounding)
e=9; s=2.564854 (after normalization)
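The same product can be checked with the decimal module at 7-digit precision (a sketch):

from decimal import Decimal, getcontext
getcontext().prec = 7

print(Decimal('4734.612') * Decimal('541724.2'))   # 2.564854E+9; the exact product is 2564853898.0104
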
Division is done similarly, but is more complicated.
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed repeatedly. In practice, the way these operations are carried out in digital logic can be quite complex.