Page 444 - Mechanics of Asphalt Microstructure and Micromechanics
In addition, other characterization methods such as AFM, 3D digital optical microscopy, and environmental scanning electron microscopy (ESEM) can also characterize the microstructure. Figure 13.5 shows an AFM image of the aggregate-binder interface acquired in height (non-contact) mode after scanning. The image in Figure 13.6 was taken with a 3D digital optical microscope at 100× magnification. Figure 13.7 presents an interface image at 500× acquired with an ESEM. ESEM images can provide a useful chemistry map of the chemical components, which can be employed in the atomistic modeling stage. Figure 13.8 presents the micropores identified by focused ion beams. Figure 13.9 presents the asphalt-aggregate interface structure characterized by the transmission electron microscope (TEM). TEM images indicate that the interface transition zone is roughly 3 to 5 nm thick.
13.2.8 LAMMPS and Massively Parallel Computation
Large-scale molecular dynamics simulations require significant computing resources. Classical molecular dynamics can be implemented quite efficiently on supercomputers using parallel computing strategies. Such supercomputers, called clusters, consist of hundreds of individual computers (nodes). Concurrent computing on modern parallel machines enables these nodes to work simultaneously on different parts of the same problem. Information is shared among the nodes through communication, achieved by message-passing procedures implemented in software libraries such as the Message Passing Interface (MPI) (Gropp et al., 1994; Gropp et al., 1999).
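The message-passing idea can be sketched without an MPI installation by using Python's standard multiprocessing module as a stand-in: two processes, each owning one slice of a one-dimensional domain, exchange their boundary ("ghost") values over a pipe, which is the same communication pattern an MPI halo exchange performs between spatial subdomains. The two-rank layout and data values below are illustrative assumptions, not part of the original text.

```python
from multiprocessing import Process, Pipe

def worker(rank, local, nbr_conn, out_conn):
    """One 'rank' owning a slice of the domain. It ships its edge value
    to the neighboring rank and receives the neighbor's edge in return,
    mimicking the send/receive pair of an MPI halo exchange."""
    edge = local[-1] if rank == 0 else local[0]
    nbr_conn.send(edge)       # analogous to MPI_Send
    ghost = nbr_conn.recv()   # analogous to MPI_Recv
    out_conn.send((rank, ghost))
    out_conn.close()

def run_halo_exchange():
    data = [[1, 2, 3], [4, 5, 6]]   # two subdomains of a 1D array
    a_end, b_end = Pipe()           # duplex channel between the two ranks
    outs = [Pipe() for _ in range(2)]
    procs = [
        Process(target=worker, args=(0, data[0], a_end, outs[0][1])),
        Process(target=worker, args=(1, data[1], b_end, outs[1][1])),
    ]
    for p in procs:
        p.start()
    ghosts = dict(outs[i][0].recv() for i in range(2))
    for p in procs:
        p.join()
    return ghosts  # {0: 4, 1: 3}: each rank now holds its neighbor's edge
```

After the exchange, each rank can compute forces on its own particles using the ghost values it received, without ever seeing the neighbor's full subdomain.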
A large-scale parallel classical molecular dynamics code, LAMMPS (Plimpton and Hendrickson, 1994; Plimpton, 1995), is widely employed for MD simulations. The LAMMPS implementation is based on spatial domain decomposition. Its parallel MD mechanism achieves linear scaling: the total execution time scales linearly with the number of particles, ~N, and inversely with the number of processors used to solve the numerical problem, ~1/P, where P is the number of processors (Kadau et al., 2004).
With a parallel computer whose number of processors grows with the number of cells (the number of particles per cell staying constant), the computational burden per processor remains constant. To achieve this, the computational space is divided into cells such that, in searching for the neighbors interacting with a given particle, only the cell in which it is located and the adjacent cells need to be considered. Thus, the domain decomposition scheme can treat huge systems with several billion particles (Kadau et al., 2004).
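The cell idea can be sketched in one dimension (illustrative only; LAMMPS works in three dimensions and distributes the cells across MPI ranks): particles are binned into cells at least one cutoff wide, so the neighbor search for a particle visits only its own cell and the two adjacent ones instead of all N particles. The box size, cutoff, and particle counts below are arbitrary assumptions for the sketch.

```python
import random

def build_cells(positions, box, rcut):
    """Bin 1D particle positions into cells of width >= rcut."""
    ncells = max(1, int(box // rcut))
    size = box / ncells
    cells = {c: [] for c in range(ncells)}
    for idx, x in enumerate(positions):
        cells[int(x // size) % ncells].append(idx)
    return cells, ncells

def neighbors_via_cells(positions, box, rcut):
    """Neighbor pairs within rcut, checking only each cell and its
    two periodic neighbors -- the O(N) search the text describes."""
    cells, ncells = build_cells(positions, box, rcut)
    pairs = set()
    for c in range(ncells):
        candidates = cells[c] + cells[(c - 1) % ncells] + cells[(c + 1) % ncells]
        for i in cells[c]:
            for j in candidates:
                if j <= i:
                    continue
                dx = abs(positions[i] - positions[j])
                dx = min(dx, box - dx)  # periodic minimum image
                if dx < rcut:
                    pairs.add((i, j))
    return pairs

def neighbors_brute(positions, box, rcut):
    """O(N^2) reference search, for verifying the cell-based result."""
    pairs = set()
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = abs(positions[i] - positions[j])
            dx = min(dx, box - dx)
            if dx < rcut:
                pairs.add((i, j))
    return pairs
```

Because each cell holds a roughly constant number of particles, the work per particle does not grow with N, which is what lets the total time scale as ~N and, once cells are distributed across processors, as ~1/P.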
13.2.9 Mineral Crystal’s Elastic Modulus
The MD method can be used to calculate the elastic constants of a crystal structure such as quartz; more generally, it can be used to calculate the elastic constants of aggregates. Table 13.2 presents the elastic constants (the 6 × 6 elastic constant matrix) of a perfect quartz lattice structure, calculated using a static method. After an initial energy minimization, a very small strain (0.001, remaining within the elastic limit) is applied to the system and the energy is re-minimized. More details can be found in Lu (2010).
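The strain-and-re-minimize procedure can be illustrated with a toy one-dimensional harmonic chain (the spring constant k and bond length r0 below are arbitrary illustrative values, not quartz parameters): the elastic modulus is recovered as the second finite difference of the strain energy density, using the same small strain of 0.001 that the text mentions.

```python
def chain_energy(strain, n_bonds=100, k=5.0, r0=1.0):
    """Total energy of a harmonic chain under a uniform strain.
    For this toy potential no re-minimization is needed; in a real
    crystal the internal coordinates would be relaxed again after
    each applied strain (the 're-minimization' step in the text)."""
    r = r0 * (1.0 + strain)
    return n_bonds * 0.5 * k * (r - r0) ** 2

def elastic_modulus(eps=1e-3, n_bonds=100, k=5.0, r0=1.0):
    """Elastic constant as the second finite difference of the
    energy density with respect to strain: C = (1/V) d2E/d(eps)2."""
    length = n_bonds * r0  # the 1D analogue of the cell volume
    e_plus = chain_energy(+eps, n_bonds, k, r0)
    e_zero = chain_energy(0.0, n_bonds, k, r0)
    e_minus = chain_energy(-eps, n_bonds, k, r0)
    return (e_plus - 2.0 * e_zero + e_minus) / (length * eps ** 2)
```

For this chain the exact answer is C = k·r0, so `elastic_modulus()` returns 5.0; a full 6 × 6 matrix for quartz is obtained by applying each of the six independent strain components in turn.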
13.2.10 Modeling of Interface Behavior
The interface between aggregates and asphalt binder is a zone whose behavior and structure have not been well understood. The multiscale characterization would allow the

