
Figure 10.2 A two-dimensional piece of matter (white) in which dopants (black) have been distributed at random.
  10.5. Strategies to Overcome Component Failure
  Similar reasoning to that of Section 10.4 can be applied to the failure of individual components (transistors, etc.) on a processor chip, which then become "defects". Equation (10.10) can be used to estimate the likely number of failures, at least as a first approximation, assuming that they occur independently of one another. As the number of components on a "chip" increases, instead of applying ever more rigorous manufacturing standards to reduce the probability of a defective component occurring, it may become more cost-effective to build in functional redundancy, such that failures of some of the components do not affect the performance of the whole. One strategy for achieving this is to incorporate ways for the failure of a component to be detected by its congeners, which would then switch in substitutes. One of the reasons for the remarkable robustness of living systems is the exploitation of functional redundancy, in neural circuits and elsewhere, although the mechanisms for achieving it are still obscure. The oak tree producing millions of acorns each year, of which a mere handful may survive to maturity, provides another example of functional redundancy.
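
  As a rough illustration (an assumption for the purpose of the sketch, not the actual form of Equation (10.10)), one may take each of n components to fail independently with the same probability p, so that the number of defects is binomially distributed; the component count and defect rate below are purely hypothetical:

    # Minimal sketch (an assumption for illustration, not the book's Equation
    # (10.10)): each of n components fails independently with probability p,
    # so the number of defective components is binomially distributed.
    from math import comb

    def prob_at_most_k_defects(n: int, p: float, k: int) -> float:
        """P(number of defective components <= k) for n independent components."""
        return sum(comb(n, j) * (p ** j) * ((1 - p) ** (n - j)) for j in range(k + 1))

    n, p = 10**6, 1e-7                        # hypothetical chip: 10^6 components, defect rate 10^-7
    print(n * p)                              # expected number of defects: 0.1
    print(prob_at_most_k_defects(n, p, 0))    # probability the chip is defect-free, ~0.905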

  Standard techniques for achieving reliable overall performance when it is not possible to decrease the number of defects include the NAND multiplexing architecture proposed by John von Neumann [124] and simpler approaches reminiscent of overcoming errors in message transmission by repeating the message and instructing the receiver to apply a majority rule. Thus, instead of one logic gate, R gates work in parallel and feed their outputs into a device that selects the majority output. This is called R-fold modular redundancy (RMR), R being an odd number. For even higher reliability the redundancy can be cascaded; thus one might have three groups of three modules, each group feeding into a majority selector, with the three selector outputs feeding into a further majority selector. A more sophisticated approach is to arrange for the circuit to be reconfigurable [124]. In all these approaches, the higher the probability of a component being defective, the greater the redundancy required to achieve reliable computation.
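
  To make the majority-voting argument concrete, here is a minimal sketch of the reliability of R-fold modular redundancy, assuming independent module failures and a fault-free majority selector; the per-module failure probability p = 0.01 is purely illustrative:

    # Sketch of R-fold modular redundancy (RMR): R modules (R odd) compute in
    # parallel and a majority selector picks the output, so the ensemble fails
    # only if more than half the modules fail. Assumes independent module
    # failures and a fault-free selector.
    from math import comb

    def rmr_failure_prob(p: float, R: int) -> float:
        """Probability that a majority of the R modules are faulty."""
        majority = R // 2 + 1
        return sum(comb(R, j) * (p ** j) * ((1 - p) ** (R - j)) for j in range(majority, R + 1))

    p = 0.01
    print(rmr_failure_prob(p, 3))                         # triple modular redundancy: ~3.0e-4
    # Cascaded redundancy: treat each voted triple as a module and vote again.
    print(rmr_failure_prob(rmr_failure_prob(p, 3), 3))    # ~2.7e-7

  As the paragraph notes, the larger p becomes, the more levels of such redundancy are needed to reach the same overall reliability.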
  10.6. Computational Modeling
  One of the attractions of nanotechnology is that a nanodevice is small enough for it to be possible to explicitly simulate its operation with atomic
  resolution (molecular dynamics), using present-day computing resources. This has been done extensively for the Feynman–Drexler diamondoid
  devices being considered as a way to realize productive nanosystems. Whereas biological nanodevices (e.g., a ribosome) of a similar size are
  extremely difficult to simulate operationally because of relaxation modes at many different timescales, extending to tens of seconds or longer
  because the structures are “soft”, diamondoid devices such as a gear train typically operate in the GHz frequency domain.
  The main atomistic simulation technique is molecular dynamics: the physical system of N atoms is represented by their atomic coordinates, whose trajectories x_i(t) are computed by numerically integrating Newton's second law of motion

      m_i \frac{\mathrm{d}^2 x_i(t)}{\mathrm{d}t^2} = -\frac{\partial V}{\partial x_i}                                        (10.12)

  where m_i is the mass of the ith atom and V is the interatomic potential. A general weakness of such atomic simulations is that they use predefined
  empirical potentials, with parameters adjusted by comparing predictions of the model with available experimental data. It is not possible to give a
  general guarantee of the validity of this approach. Since no complete diamondoid nanodevices have as yet been constructed, the output of the
  simulations cannot be verified by comparison with experiment. One way round this difficulty is to develop first principles molecular dynamics
  simulation  code.  Density  functional  theory  (DFT),  which  defines  a  molecule's  energy  in  terms  of  its  electron  density,  is  a  current  favorite  for
  calculating the electronic structure of a system.
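
  The following is a minimal sketch of equation (10.12) in practice: velocity-Verlet integration with a generic Lennard-Jones pair potential in reduced units. The potential and parameters are illustrative assumptions, not a validated force field for any particular diamondoid device:

    # Minimal molecular-dynamics sketch of equation (10.12): velocity-Verlet
    # integration of m_i d^2x_i/dt^2 = -dV/dx_i for a Lennard-Jones pair
    # potential in reduced units. Illustrative only.
    import numpy as np

    def lj_forces(x, eps=1.0, sigma=1.0):
        """Forces -dV/dx_i for V = sum over pairs of 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
        n = len(x)
        f = np.zeros_like(x)
        for i in range(n):
            for j in range(i + 1, n):
                rij = x[i] - x[j]
                r2 = float(np.dot(rij, rij))
                sr6 = (sigma * sigma / r2) ** 3
                fij = 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r2 * rij   # pair force on atom i
                f[i] += fij
                f[j] -= fij
        return f

    def velocity_verlet(x, v, m, dt, steps):
        """Integrate Newton's second law, equation (10.12), for all atoms."""
        f = lj_forces(x)
        for _ in range(steps):
            v += 0.5 * dt * f / m[:, None]
            x += dt * v
            f = lj_forces(x)
            v += 0.5 * dt * f / m[:, None]
        return x, v

    # Two atoms started slightly away from the potential minimum (r = 2^(1/6)),
    # so that they oscillate about it.
    x = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
    v = np.zeros_like(x)
    m = np.ones(2)
    x, v = velocity_verlet(x, v, m, dt=0.002, steps=1000)
    print(x)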

  In many cases a mesoscale simulation may be adequate. For example, if multiatom nanoblocks are used as units of assembly, it may not be
  necessary to explicitly simulate each atom within the block.
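
  A possible first step in such a mesoscale treatment (a sketch, assuming the blocks are rigid enough to be represented by single beads) is to collapse each nanoblock to its center of mass and total mass; the resulting beads would then be propagated with an effective block-block potential rather than atom-atom terms:

    # Sketch of the first step of a coarse-grained (mesoscale) treatment,
    # assuming each nanoblock can be represented by a single bead at its
    # center of mass, carrying the block's total mass.
    import numpy as np

    def coarse_grain(atom_x, atom_m, block_ids):
        """Return center of mass and total mass for each nanoblock."""
        blocks = np.unique(block_ids)
        X = np.array([np.average(atom_x[block_ids == b], axis=0,
                                 weights=atom_m[block_ids == b]) for b in blocks])
        M = np.array([atom_m[block_ids == b].sum() for b in blocks])
        return X, M

    atom_x = np.random.rand(8, 3)                     # eight atoms (illustrative coordinates)
    atom_m = np.ones(8)
    block_ids = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # grouped into two nanoblocks
    X, M = coarse_grain(atom_x, atom_m, block_ids)
    print(X, M)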
  10.7. “Evolutionary” Design
  Although the most obvious consequence of nanotechnology is the creation of very small objects, an immediate corollary is that in most cases of
  useful devices, there must be a great many of these objects (vastification). If r is the relative device size, and R the number of devices, then usefulness may require that rR ~ 1, implying the need for 10^9 devices. This corresponds to the number of components (with a minimum feature
  length of about 100 nm) on a very large-scale integrated electronic chip, for example. At present, all these components are explicitly designed and
  fabricated. But will this still be practicable if the number of components increases by a further two or more orders of magnitude?
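
  For concreteness (a worked instance under the assumption of a device of characteristic size ~1 nm embedded in a product of metre-scale extent, so that r ~ 10^-9):

    \[
      r \sim \frac{10^{-9}\,\mathrm{m}}{1\,\mathrm{m}} = 10^{-9},
      \qquad
      r R \sim 1 \;\Rightarrow\; R \sim \frac{1}{r} = 10^{9} .
    \]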

  Because it is not possible to give a clear affirmative answer to this question, alternative routes to the design and fabrication of such vast numbers
  are being explored. One of the consequences of vastification is that explicit design of each component, as envisaged in the Feynman–Drexler
  vision of productive nanosystems, is not practicable. The human brain serves as an inspiration here. Its scale is far vaster than that of even the largest-scale integrated chips: it has ~10^11 neurons, and each neuron has hundreds or thousands of connexions to other neurons. There is insufficient information contained in our genes (considering that each base, chosen from four equiprobable possibilities, contributes −4 × 0.25 log_2 0.25 = 2 bits) to specify all these neurons (suppose that each neuron is chosen from two possibilities) and their interconnexions (each one of which needs at least to have its source and destination neurons specified). Rather, our genes cannot do more than specify an algorithm for generating the neurons and their