[Figure: kinetic energy (× 0.1 MNm) versus time (0.001 s) for a 2 × 2 × 0.1 m block; curves for a coarse mesh with m = 0.01, 0.02, 0.04 and 0.08 kg of explosive, and a fine mesh with m = 0.01, 0.03 and 0.08 kg.]

Figure 7.25 Kinetic energy as a function of time for different meshes and amounts of explosive.
7.4 A NEED FOR MORE ROBUST FRACTURE SOLUTIONS
The fracture and fragmentation algorithms proposed in the context of the combined
finite-discrete element method are in general sensitive to both element size and element
orientation. This applies both to the smeared, strain-softening, localisation-based fracture
algorithms proposed in the early days of combined finite-discrete element development,
and to the single-crack based models proposed in recent years, including the most recent
combined single and smeared fracture algorithm.
Only for extremely fine meshes can one expect an accurate representation of the stress
and strain fields close to the crack tip. In such cases, these fields are a function of
neither the relative size nor the relative orientation of individual elements. The undulations
and errors in the local crack wall geometry scale with the size of the finite elements
employed, and diminish with decreasing element size, while the length of the plastic
zone stays constant. Thus the influence of the crack wall undulations on the stress field
within the plastic zone also diminishes with decreasing element size. The logical conclusion
is that with very fine meshes, neither the critical load nor the fracture pattern is sensitive
to mesh size or mesh orientation.
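The scaling argument behind this conclusion can be made explicit. As a sketch only (the text does not commit to a particular plastic zone estimate), take Irwin's classical estimate of the plastic zone length, which depends on material properties alone:

\[
l_p \approx \frac{1}{2\pi} \left( \frac{K_{Ic}}{\sigma_y} \right)^{2}
\]

where $K_{Ic}$ is the fracture toughness and $\sigma_y$ the yield stress. The amplitude of the crack wall undulations, by contrast, scales with the element size $h$, so their relative influence on the near-tip stress field behaves as $h/l_p$ and vanishes as $h \to 0$.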
However, extremely fine meshes are difficult to realise for complex fracture patterns
because of the extensive CPU requirements. A problem of this type therefore is, and
remains, a so-called 'grand challenge' problem, likely to be addressed by the hardware
of the future. Alternatively, an algorithmic breakthrough may occur.
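The CPU argument can be quantified with a back-of-envelope estimate. The sketch below is illustrative only; the domain size, wave speed and simulated time are assumptions, not values from the text. For explicit time integration in 3D, the element count grows as (L/h)^3 while the stable (CFL-limited) time step shrinks in proportion to h, so total work grows roughly as h^(-4):

# Back-of-envelope cost of an explicit 3D analysis as element size h shrinks.
# All parameter values are illustrative assumptions, not taken from the book.

def relative_cost(h, L=2.0, c=4000.0, T=2e-3):
    """Relative CPU cost for a domain of side L (m), wave speed c (m/s),
    simulated time T (s) and element size h (m).

    Elements scale as (L/h)**3 and the stable (CFL) time step as h/c,
    so the total work grows roughly as h**-4.
    """
    n_elements = (L / h) ** 3      # number of elements in the mesh
    n_steps = T / (h / c)          # number of explicit time steps
    return n_elements * n_steps    # work ~ elements x steps

# Halving the element size multiplies the cost by roughly 2**4 = 16.
print(relative_cost(0.01) / relative_cost(0.02))   # ~16.0

Halving the element size therefore multiplies the cost by roughly sixteen, which is why resolving a complex fracture pattern with an extremely fine mesh quickly becomes intractable on present hardware.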