Page 225 - Introduction to Microcontrollers Architecture, Programming, and Interfacing of The Motorola 68HC12
202 Chapter 7 Arithmetic Operations
We begin our discussion of floating-point representations by considering just
unsigned (nonnegative) numbers. Suppose that we use our 32 bits b31, ..., b0 to
represent the number

S * 2^E

where S, the significand, is of the form

b23 . b22 ... b0
and 2^E, the exponential part, has an exponent E, which is represented by the bits b31,
..., b24. If these bits are used as an 8-bit two's-complement representation of E, the
range of the numbers represented with these 32 bits goes from 2^-151 to 2^127, enclosing
the range for the 32-bit fixed-point numbers (13) by several orders of magnitude. (To get
the smallest nonzero value, 2^-151, put all of the significand bits equal to 0 except b0
and take the exponent -128; the value is then 2^-23 * 2^-128 = 2^-151.)
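As a brief sketch (not code from the book), the 32-bit layout just described can be decoded in a few lines of Python. The function name `decode` and the word layout are our own, assuming bits b31..b24 hold the two's-complement exponent and bits b23..b0 the significand:

```python
from fractions import Fraction

def decode(word):
    """Decode a 32-bit word in the unsigned format described above:
    bits b31..b24 are an 8-bit two's-complement exponent E, and
    bits b23..b0 are the significand S = b23 . b22 ... b0."""
    e = (word >> 24) & 0xFF
    if e >= 128:                              # two's-complement sign extension
        e -= 256
    s = Fraction(word & 0xFF_FFFF, 1 << 23)   # binary point after b23
    return s * Fraction(2) ** e

# Smallest nonzero value: exponent bits 0x80 (-128), only b0 set in the
# significand, giving 2**-23 * 2**-128 = 2**-151.
smallest = decode((0x80 << 24) | 0x00_0001)
assert smallest == Fraction(1, 2 ** 151)
```

Exact rational arithmetic (`Fraction`) is used here only so the tiny values can be checked without rounding; a real implementation would of course work on the raw bits.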
This type of representation is called a floating-point representation because the
binary point is allowed to vary from one number to another even though the total
number of bits representing each number stays the same. Although the range has
increased for this method of representation, the number of points represented per unit
interval with the floating-point representation is far less than the fixed-point
representation that has the same range. Furthermore, the density of numbers represented
per unit interval gets smaller as the numbers get larger. In fact, in our 32-bit floating-
point example, there are 2^23 + 1 uniformly spaced points represented in the interval from
2^n to 2^(n+1) as n varies between -128 and 127.
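The density claim can be checked directly: between 2^n and 2^(n+1) the representable points are the significands 1.b22...b0 scaled by 2^n, so consecutive points differ by 2^(n-23). A short sketch with our own helper names:

```python
from fractions import Fraction

def spacing(n):
    """Gap between consecutive representable points in [2**n, 2**(n+1)]."""
    # Significands there step by 2**-23; scaled by 2**n the gap is 2**(n-23).
    return Fraction(2) ** (n - 23)

def points_in_binade():
    """Number of uniformly spaced points from 2**n to 2**(n+1) inclusive."""
    return 2 ** 23 + 1

# The absolute spacing doubles from one such interval to the next, which is
# why the density of represented numbers per unit interval falls as the
# numbers grow.
assert spacing(1) == 2 * spacing(0)
assert (points_in_binade() - 1) * spacing(0) == 1   # 2**23 gaps cover [1, 2]
```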
Looking more closely at this same floating-point example, notice that some of the
numbers have several representations; for instance, a significand of 1.100 . . .0 with an
exponent of 6 also equals a significand of 0.1100 .. . 0 with an exponent of 7.
Additionally, a zero significand, which corresponds to the number zero, has 256 possible
exponents. To eliminate this multiplicity, some form of standard representation is
usually adopted. For example, with the bits b31, ..., b0 we could standardize our
representation as follows. For numbers greater than or equal to 2^-127 we could always
take the representation with b23 equal to 1. For the most negative exponent, in this case
-128, we could always take b23 equal to 0 so that the number zero is represented by a
significand of all zeros and an exponent of -128. Doing this, the bit b23 can always be
determined from the exponent. It is 1 for an exponent greater than -128 and 0 for an
exponent of -128. Because of this, b23 does not have to be stored, so that, in effect, this
standard representation has given us an additional bit of precision in the significand.
When b23 is not explicitly stored in memory but is determined from the exponent in this
way, it is termed a hidden bit.
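The hidden-bit rule can be sketched as follows (a hypothetical helper, not the book's code): b23 is reconstructed from the exponent rather than stored.

```python
def significand(e, stored_bits):
    """Reconstruct the significand under the hidden-bit convention above.

    stored_bits holds the explicitly stored bits b22..b0 as an integer;
    b23 itself is hidden: it is 1 whenever the exponent exceeds -128,
    and 0 when the exponent is exactly -128 (so an all-zero significand
    with exponent -128 represents the number zero)."""
    hidden = 0 if e == -128 else 1
    return hidden + stored_bits / 2 ** 23

assert significand(-128, 0) == 0.0     # the standard representation of zero
assert significand(0, 0) == 1.0        # hidden bit supplies the leading 1
assert significand(6, 1 << 22) == 1.5  # 1.100...0 with an exponent of 6
```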
Floating-point representations can obviously be extended to handle negative
numbers by putting the significand in, say, a two's-complement representation or a
signed-magnitude representation. For that matter, the exponent can also be represented in
any of the various ways that include representation of negative numbers. Although it
might seem natural to use a two's-complement representation for both the significand
and the exponent with the 6812, one would probably not do so, preferring instead to
adopt one of the standard floating-point representations.
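One such standard is IEEE 754 single precision, which combines the ideas above: a signed-magnitude significand (one sign bit plus a hidden leading bit) and a biased rather than two's-complement exponent. As a quick sketch, Python's struct module lets us pull a value apart along the IEEE 754 single-format field boundaries (the helper name `fields` is ours):

```python
import struct

def fields(x):
    """Split a normalized IEEE 754 single-precision value into its fields."""
    bits, = struct.unpack('>I', struct.pack('>f', x))
    sign = bits >> 31                  # signed-magnitude sign bit
    biased = (bits >> 23) & 0xFF       # exponent stored with a bias of 127
    frac = bits & 0x7F_FFFF            # 23 explicitly stored significand bits
    return sign, biased - 127, 1 + frac / 2 ** 23   # hidden bit restored

assert fields(-1.5) == (1, 0, 1.5)     # -1.5 = -(1.1)_2 * 2**0
assert fields(8.0) == (0, 3, 1.0)      #  8.0 =  (1.0)_2 * 2**3
```

Note the design trade-off relative to the two's-complement scheme in the text: the biased exponent makes the magnitude ordering of positive floats match the ordering of their bit patterns as unsigned integers, which simplifies comparison hardware.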