Page 361 - System on Package: Miniaturization of the Entire System
Integrated Chip-to-Chip Optoelectronic SOP 333
4. Network node-crossing delays: because of the hierarchical construct inherent in
copper interconnects, data must be synchronized and node-crossing protocols
executed at every node, adding data transfer delays that degrade the performance
of the entire system.
These four potential bottlenecks create both the opportunity and the motivation for a
mixed copper and optical interconnect architecture in which copper is used for very-
short-reach, low-speed wiring, while optics is used for high-speed wiring, data broadcast,
long-reach synchronization, and long-reach point-to-point links.
Limitations for Copper-Wire Interconnects
The design of copper interconnect density on a printed circuit board (PCB) depends
on distance, bit rate, and board dielectric properties. This is deduced directly from
Equation (6.1), which shows that the maximum bit-carrying capacity of a copper
line increases in proportion to its cross-sectional area and decreases with the square of its length.
The cross-sectional area has to be large enough to accommodate the design bit rate
over the design distance. This leads to many wiring layers with vias, via shielding,
and decoupling capacitors. While a polymer optical waveguide density of 500/cm
(20-μm pitch) on PCBs is practical, the actual optical channel density is limited by the
pitch of the laser and PD arrays, which is generally 250 μm but can decrease to 125 μm.
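The area-versus-length trade implied by Equation (6.1) can be sketched numerically. Note that the scaling constant B0 and the trace dimensions below are assumptions for illustration only, not values from the text:

```python
# Sketch of the copper-line capacity limit of Equation (6.1):
#   B ~ B0 * A / l^2
# where A is the cross-sectional area and l the line length.  B0 is an
# ASSUMED illustrative constant; no value is given in the text.

B0 = 1e15  # b/s; assumed order of magnitude for the scaling constant

def copper_capacity(area_um2: float, length_cm: float) -> float:
    """Maximum bit rate (b/s) of a copper line with cross-section
    `area_um2` (in um^2) and length `length_cm` (in cm)."""
    area_cm2 = area_um2 * 1e-8  # 1 um^2 = 1e-8 cm^2
    return B0 * area_cm2 / length_cm**2

# Doubling the length quarters the capacity ...
b_short = copper_capacity(100.0, 10.0)  # 100 um^2 trace over 10 cm
b_long = copper_capacity(100.0, 20.0)   # same trace over 20 cm
print(b_short / b_long)  # -> 4.0

# ... so holding the bit rate over twice the distance takes 4x the area,
# which is what drives the thick traces and many wiring layers above.
b_fat = copper_capacity(400.0, 20.0)    # 4x area over 20 cm
```

Whatever value B0 actually takes, the ratio is fixed by the A/l^2 form: halving the reach or quadrupling the cross-section are equivalent.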
Laser-array driver chips and PD-array amplifiers are millimeter sized. In an optimized
system, the optical I/O drivers and amplifiers become part of the processor
I/O, replacing copper bus drivers, multiplexers, and demultiplexers. The bare
die lasers and PDs will then become the principal limiting factor determining the
optical wiring density. However, since the bit-rate carrying capacity of optical
interconnects far surpasses that of copper and is independent of distance, the number
of high-speed processor I/Os can actually be reduced, decreasing the pressure on I/O
ports and wiring density below the levels predicted by the ITRS [22]. Hence, optical
interconnects offer a paradigm shift relative to the predictions of Rent’s rule [23].
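A toy sketch of this I/O-count relief: for a fixed aggregate off-chip bandwidth, the number of channels needed scales inversely with the per-channel bit rate, and optical channels keep their bit rate over board-scale distances. All bit-rate figures below are assumptions chosen for illustration, not values from the text:

```python
import math

# For a fixed aggregate off-chip bandwidth, the channel count scales
# inversely with the per-channel bit rate.  A long copper pair is
# capacity-limited by Equation (6.1); an optical channel is not, so it
# runs at the full source/detector rate regardless of reach.
# All bit-rate figures here are ASSUMED for illustration only.

def channels_needed(aggregate_bps: float, per_channel_bps: float) -> int:
    """Number of channels required to carry `aggregate_bps` of traffic."""
    return math.ceil(aggregate_bps / per_channel_bps)

aggregate = 1e12  # 1 Tb/s off the processor (assumed)

copper_pins = channels_needed(aggregate, 5e9)        # 5 Gb/s per pin pair (assumed)
optical_channels = channels_needed(aggregate, 25e9)  # 25 Gb/s per channel (assumed)

print(copper_pins, optical_channels)  # -> 200 40
```

The specific counts depend entirely on the assumed rates; the point is only that a higher, distance-independent per-channel rate divides the required I/O count.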
However, as discussed below, the most effective method for interboard or intraboard
optical wiring may be through the use of high-density, flexible optical interconnects
that are directly and electrically pluggable next to a processor.
Opportunities for Board-to-Board Optical Interconnects
In a typical blade server application [24] the edge connector (with hundreds of pins)
connects the system board with the backplane or the centerplane. The insertion force for
each pin may be in the range of 0.3 to 0.8 N; at roughly 0.72 N per pin, a system board
with 1000 pins requires a total insertion force of about 720 N (73 kgf) to connect to the backplane. Given the material
set, it will be difficult to increase the pin density. At the same time, state-of-the-art low-
profile optical transceivers can at best provide 12 channels [25]. These optical transceivers,
like telecom transceivers, are assembled from discrete components such as mirrors,
lenses, and possibly optical isolators. The optical assembly is done largely by hand,
making these devices expensive and impractical to scale to 100 channels [26]. With
the development at Georgia Tech of the flexible optical strap with embedded optical
actives, it is possible to foresee an optical interconnect density of 500/cm. This is because,
in the Georgia Tech flexible optical wiring, the lasers and photodetectors are
end-coupled and “self-aligned” to the waveguides during the waveguide fabrication
process, either in arrays or individually, and contain no lenses and no mirrors when