Page 40 - Introduction to Information Optics
1.4. Band-limited Analysis
Thus, we see that if f(t) is given at the Nyquist intervals (t = n/2ν_m), then
the corresponding Fourier coefficients K_n can be obtained. From Eq. (1.86),
however, we see that F(ν) can in turn be determined, and from Eq. (1.87) that
knowledge of F(ν) implies a knowledge of f(t). Therefore, if we substitute
Eq. (1.86) into Eq. (1.87), we have
f(t) = ∫_{−ν_m}^{ν_m} [ Σ_{n=−∞}^{∞} K_n e^{−iπnν/ν_m} ] e^{i2πνt} dν.   (1.88)

By interchanging the integration and summation in the preceding equation, we
obtain

f(t) = Σ_{n=−∞}^{∞} 2ν_m K_n · sin[2πν_m(t − n/2ν_m)] / [2πν_m(t − n/2ν_m)],   (1.89)
in which the weighting factor [(sin x)/x] is known as the sampling function. This
is, in fact, the output response of an ideal low-pass channel having a cutoff
frequency at ν_m, when the samples f(n/2ν_m) are applied at the input end of the
channel.
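To make the sampling function concrete, the following sketch reconstructs a band-limited signal from its Nyquist samples f(n/2ν_m) by summing sinc-weighted terms, in the spirit of Eq. (1.89). The particular signal, band limit ν_m, and finite sample window are illustrative choices, not from the text; the ideal sum runs over all n, so truncating it leaves a small error near the window center.

```python
import numpy as np

# Illustrative parameters (not from the text): band limit and test signal.
nu_m = 4.0                      # maximum frequency nu_m (Hz)
T_s = 1.0 / (2.0 * nu_m)        # Nyquist interval 1/(2 nu_m)

def f(t):
    # Band-limited test signal: components at 1 Hz and 3 Hz, both below nu_m.
    return np.cos(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

# Samples f(n/2 nu_m) taken over a finite window of indices n.
n = np.arange(-400, 401)
samples = f(n * T_s)

def reconstruct(t):
    # f(t) ~ sum_n f(n/2nu_m) sin[2 pi nu_m (t - n T_s)] / [2 pi nu_m (t - n T_s)].
    # Since np.sinc(x) = sin(pi x)/(pi x), the kernel is np.sinc(2 nu_m t - n).
    return np.sum(samples * np.sinc(2.0 * nu_m * t - n))

t0 = 0.13                       # an off-sample instant near the window center
err = abs(reconstruct(t0) - f(t0))
print(err)
```

At the sample instants themselves the sinc kernel reduces to a Kronecker delta, so the reconstruction passes through every sample exactly; between samples the truncated sum is accurate to roughly the reciprocal of the window length.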
1.4.2. GABOR'S INFORMATION CELL
Let us consider the frequency versus time coordinates shown in Fig. 1.7,
in which ν_m denotes the maximum frequency limit and T the finite time sample
of the signal f(t). This frequency-time space can be subdivided into elementary
information cells that Gabor called logons, such that

Δν Δt = 1.   (1.90)
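As a quick arithmetic sketch of Eq. (1.90) (the numbers below are illustrative, not from the text): because each logon occupies unit area Δν Δt = 1 in the frequency-time plane, the number of cells tiling a given region depends only on the region's area, not on the aspect ratio Δν/Δt chosen for the cells.

```python
# Illustrative numbers, not from the text: a band limit and observation time.
nu_m = 100.0   # maximum frequency (Hz)
T = 2.0        # finite time sample (s)

# Each logon has unit area dnu * dt = 1, so however the cells are shaped,
# the count of cells tiling the (nu_m, T) region is the same.
counts = []
for dnu in (1.0, 10.0, 50.0):
    dt = 1.0 / dnu                        # Gabor condition: dnu * dt = 1
    counts.append((nu_m / dnu) * (T / dt))

print(counts)   # the same count for every cell aspect ratio
```

This invariance is why the logon is a natural unit of information: the partition of the frequency-time space is arbitrary, but the cell count it yields is not.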
We quickly recognize that Eq. (1.90) is essentially the lower bound of the
uncertainty relation of Eq. (1.69). However, note that the signal in each of the
information cells has two possible elementary signals (symmetric and antisym-
metric) having the same bandwidth Δν and the same duration Δt. Notice that
the amplitudes of these signals should be given so that the signal function f(t)
can be uniquely described. In view of Fig. 1.7, we see that over the (ν_m, T) space
there is a total number of information cells; that is,