Page 295 - The Combined Finite-Discrete Element Method

278    COMPUTATIONAL ASPECTS

            9.1.1  Minimising RAM requirements

            Minimising RAM requirements usually involves the special design of an in-core database.
            A combined finite-discrete element simulation is in essence a process where certain sets
of data are repeatedly modified to arrive at an intermediate or final set of data, which is
            then given a physical interpretation such as position, orientation, velocity, etc. of discrete
            elements. Such sets of data are conveniently organised in what is termed a ‘database’.
  The purpose of this database is to store all the information about the combined finite-discrete element system. All the data in the database is accessed very frequently by the CPU; thus, access to the database must be very fast. To achieve this, it is imperative to place the whole database within the available RAM space of the given hardware. This is why such a database is called the 'in-core database'.
  Modern hardware and operating systems do allow the RAM requirements of a specific computer job to exceed the available RAM space. For instance, if a given computer has 1 gigabyte of RAM, with a UNIX operating system one could theoretically run a job with an in-core database of any size. This facility is termed 'virtual memory'. Thus, one can have an in-core database of, say, 1.5 gigabytes. However, at any given time while the job is running, only up to a maximum of 1 gigabyte will reside within the RAM space. The rest of the in-core database has to sit somewhere else, usually in specially allocated space on the hard disk.
  Virtual memory is accessed through what is termed 'paging'. The access speed is governed by the access speed of the physical storage device. As access to a hard disk is extremely slow in comparison to access to RAM, access to the in-core database will in general be very slow if even a small part of the in-core database has to be stored on the hard disk.
              The concept of virtual memory is now available across a wide range of operating
            systems. Detailed procedures explaining the paging process are outside the scope of
this book. However, to convey at least the basic idea behind virtual memory, one can imagine paging as a cut-and-paste operation: one page of data is cut from RAM and pasted to the hard disk, while another page is cut from the hard disk and pasted into RAM.
  This paging operation is very expensive. In a typical combined finite-discrete element simulation, at least 99% of the in-core database is accessed at least once every time step, and most often the access is required several times per time step. This can translate into more data than the size of the entire in-core database being copied to the hard disk every time step. Paging is therefore generally time consuming, even in cases when only a small proportion of the in-core database resides on the hard disk.
  In addition to virtual memory, present-day workstations enable what is termed 'multitasking'. Multitasking is described as two or more jobs running simultaneously on the same processor. In reality this is not the case: the jobs only appear to run simultaneously, while the processor juggles them. The processor works for a very short time on job A, then suspends job A and works for a very short time on job B, then returns to job A, and so on. In a similar fashion, if there are more than two jobs, at any given time only one job is running while the rest are waiting. Because waiting times and run times are very short, it appears to the user that all the jobs are running at the same time. Very often in combined finite-discrete element simulations one faces a dilemma: should jobs be run simultaneously or one after the other?