
different values. This is the concept behind floating-point numbers. A floating-point number
                   is typically represented in a computer in this format (16 bits shown here):
                     s eeeeee fffffffff

                     where s is a sign bit (0 = positive, 1 = negative)
                     eeeeee is the exponent (6 bits)
                     fffffffff is the mantissa (9 bits), always positive

                      We can represent all the eeeeee bits collectively as E. We can represent all the fffffffff
                    bits collectively as F. Then the value of the number is given by:

                                               (-1)^s x F x 2^E
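                      As a concrete illustration, here is a minimal C sketch (not from the text) that unpacks
                    the three fields from a 16-bit word packed sign-first as shown above. The shift amounts,
                    mask values, variable names, and the sample word are assumptions made for this example.

                        /* Unpack the assumed layout  s eeeeee fffffffff  (sign in the MSB). */
                        #include <stdio.h>
                        #include <stdint.h>

                        int main(void)
                        {
                            uint16_t word = 0x408A;            /* happens to be the 2.54 example
                                                                  worked out later on this page  */

                            unsigned s = (word >> 15) & 0x01;  /* sign bit              */
                            unsigned e = (word >> 9)  & 0x3F;  /* 6-bit biased exponent */
                            unsigned f = word & 0x1FF;         /* 9-bit mantissa        */

                            printf("s=%u  e=%u  f=0x%03X\n", s, e, f);
                            return 0;
                        }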
                      Now we can represent any number within the range of the exponent (-31 to +32). Note
                    that to represent fractional values, we must be able to use a negative exponent. The expo-
                    nent is biased so that a value of zero (in this example) represents an exponent of -31.
                    An exponent of all ones (111111) represents +32. In effect, you take the binary value of
                    the exponent field and subtract 31 from it to get the actual exponent. If we were using a
                    7-bit exponent, we could represent values from -63 to +64, and we would have a bias of
                    63. This allows representation of negative exponents without needing to resort to two's
                    complement math.
                     For our example, a zero in the exponent field represents an exponent of -31,  a value of
                   25 represents an exponent of -6  (25 - 31), and a value of 44 represents an exponent of 13.
                   Remember, these are exponents of 2, not of 10.
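                      The bias arithmetic is easy to see in code. This short C sketch, which assumes the 6-bit
                    field and bias of 31 from the example above, converts the raw field values 0, 25, and 44 into
                    actual exponents; the macro and variable names are illustrative only.

                        #include <stdio.h>

                        #define EXP_BIAS 31                    /* bias for the 6-bit exponent field */

                        int main(void)
                        {
                            unsigned raw[] = { 0, 25, 44 };    /* example field values from the text */

                            for (int i = 0; i < 3; i++) {
                                int actual = (int)raw[i] - EXP_BIAS;   /* biased field -> real exponent */
                                printf("field %2u -> exponent of 2 = %+d\n", raw[i], actual);
                            }
                            return 0;
                        }

                    Running it prints -31, -6, and +13, matching the values above.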
                      So using 9 bits for F, we can represent our 2.54₁₀ in fractional binary as:
                                    10.100010100011₂ = 10.1000101 (truncated to 9 bits)
                     When working in  bases  other than  decimal, the  decimal point is  called a  radix point.
                   We  shift the value to the right so we  always have a number of the form 1.xxxx and add an
                   exponent:

                                                 1.01000101 x 2^1
                      We always arrange binary numbers so that they take the form 1.xxxx. Because this is the
                    case, we can throw away the 1 and gain another bit of precision to the right of the radix
                    point:
                                                  .010001010
                    This is read as (1 + 2^-2 + 2^-6 + 2^-8) x 2^1, or 2.5390625₁₀; the small difference from
                    2.54₁₀ comes from truncating the mantissa to 9 bits. The leading 1 is implied.
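                      The whole conversion can be sketched in a few lines of C. The example below is
                    illustrative rather than code from the text: it normalizes 2.54 to the 1.xxxx form, drops
                    the implied leading 1, keeps 9 fraction bits, and decodes the result.

                        #include <stdio.h>
                        #include <math.h>

                        int main(void)
                        {
                            double value = 2.54;               /* zero would need special handling,
                                                                  as the text notes shortly        */

                            /* Normalize to 1.xxxx * 2^exp by halving or doubling. */
                            double m = value;
                            int exp = 0;
                            while (m >= 2.0) { m /= 2.0; exp++; }
                            while (m <  1.0) { m *= 2.0; exp--; }

                            /* Drop the implied leading 1 and collect 9 fraction bits (truncating). */
                            unsigned frac = 0;
                            double rest = m - 1.0;
                            for (int i = 0; i < 9; i++) {
                                rest *= 2.0;
                                frac <<= 1;
                                if (rest >= 1.0) { frac |= 1; rest -= 1.0; }
                            }

                            /* Decode: (1 + frac/2^9) * 2^exp. */
                            double decoded = ldexp(1.0 + frac / 512.0, exp);
                            printf("exp=%d  frac=0x%03X  decoded=%.7f\n", exp, frac, decoded);
                            return 0;
                        }

                    It prints exp=1, frac=0x08A, and decoded=2.5390625, matching the hand calculation above.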
                      The obvious question is: How can we guarantee that the number can always be repre-
                    sented by 1.xxxxx so we can drop that leading 1? If you think of scientific notation in decimal,
                    you can represent any decimal number as d.ddddd x 10^yy, where d stands for any decimal
                    digit and yy is an exponent (positive or negative) of 10. Even very small numbers can be
                   represented this way  by  using a large negative exponent. The same rules apply to binary.
                   The only difference is that, in decimal, we have no way of knowing what the digit to the left
                   of the decimal point is. In binary, we know it has to be 1. What if the entire number is zero?
                   We’ll get to that later.

