
The time required to access data from memory is a small fraction of the time required to access data from a hard disk. The primary performance measure for data storage systems is latency, which is the time between when a request is made for data from a storage device and when the data is delivered. For hard disk storage, typical latency is currently around 13 milliseconds; for memory, it is around 83 nanoseconds. To understand the significance of this speed differential, think of in-computer memory as an F-18 fighter jet that can travel at 1,190 miles per hour and disk storage as a banana slug with a top speed of 0.007 miles per hour. Given such a substantial difference in speed, the obvious question is why data warehouses would use disk storage at all. The answer is storage capacity. Hard disk storage is now measured in terabytes, while the maximum capacity of memory chips is still in the gigabytes, so hard disks can store roughly one thousand times more data than memory for a comparable cost. Still, while hard disks hold significantly more data than memory chips, the cost and capacity of in-computer memory have reached levels at which in-memory BI is becoming feasible.
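
To make the comparison concrete, here is a quick back-of-the-envelope calculation (a Python sketch using only the latency and speed figures quoted above) showing that the two ratios are of the same order of magnitude:

    # Back-of-the-envelope comparison using the figures quoted in the text;
    # real hardware latencies vary widely.
    disk_latency_s = 13e-3     # ~13 milliseconds for a hard disk access
    memory_latency_s = 83e-9   # ~83 nanoseconds for main memory

    print(f"Memory is about {disk_latency_s / memory_latency_s:,.0f} times faster")
    # -> about 156,627 times faster

    # The jet-versus-slug analogy gives a ratio of the same order of magnitude.
    jet_mph, slug_mph = 1190, 0.007
    print(f"F-18 vs. banana slug: {jet_mph / slug_mph:,.0f} times faster")
    # -> about 170,000 times faster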
    Data compression is another technology that makes in-memory BI possible. Figure 8-2 shows an SAP ERP table used to store material master data. This is a typical SAP ERP table; it consists of 223 data fields, which appear as the column headings in the figure.




[Figure: screenshot of the material master data table; many columns are blank or have zero values]

Source Line: SAP AG.
FIGURE 8-2  Material master data table


    As shown in Figure 8-2, many of the fields, or columns, in the data table are blank or contain values of zero. By storing the data as columns rather than rows, in-memory systems can greatly reduce the size of the data: instead of storing every blank or zero value, the system simply notes their positions, essentially recording "this entire column is zero" or "the next 100 items in this column are zero." Looking across the rows of the table, the runs of zero or blank values are much shorter, so the savings from noting them row by row would be far smaller.
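
As an illustration of the idea (a minimal sketch of run-length encoding, not SAP's actual in-memory storage format), the following Python code collapses the runs of blank or zero cells in a single sparse column into short count-and-value entries:

    # Minimal sketch of run-length encoding a sparse column; illustrative only,
    # not SAP's actual in-memory storage format.
    def compress_column(values):
        """Run-length encode a column, treating 0, '' and None as the same empty cell."""
        normalize = lambda v: None if v in (0, "", None) else v
        runs = []
        for v in map(normalize, values):
            if runs and runs[-1][1] == v:
                runs[-1][0] += 1           # extend the current run
            else:
                runs.append([1, v])        # start a new run of this value
        return runs

    # A sparse column like those in Figure 8-2: mostly zero or blank.
    column = [0, 0, 0, 0, 0, "widget", 0, 0, 0, 0]
    print(compress_column(column))         # [[5, None], [1, 'widget'], [4, None]]

Ten cells shrink to three run entries. Row-oriented storage gains far less from the same trick, because the blank values in any one row are scattered across different fields rather than forming long runs.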
    With the data compression provided by column storage, it is now feasible to store large volumes of data in memory without aggregation. This means that multidimensional cubes are not required. An end user can analyze BI data "on the fly" without needing an IT specialist to translate the data into multidimensional cubes.
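
To illustrate what "on the fly" analysis looks like in practice, here is a small sketch using hypothetical sales data and the open-source pandas library (not a specific SAP tool); the user picks the dimensions at query time and aggregates the detail records directly, with no pre-built cube:

    # Hypothetical example: ad hoc aggregation straight from the detail records,
    # with no pre-aggregated multidimensional cube (pandas, not an SAP tool).
    import pandas as pd

    sales = pd.DataFrame({
        "region": ["East", "East", "West", "West", "East"],
        "month":  ["Jan",  "Feb",  "Jan",  "Feb",  "Jan"],
        "amount": [100.0,  150.0,  200.0,  120.0,   80.0],
    })

    # Dimensions are chosen at query time; nothing was summarized in advance.
    report = sales.groupby(["region", "month"])["amount"].sum()
    print(report)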

