
insofar as direction is concerned. This is the law of the first wavefront. Identification of the direction to the sound source is accomplished within a small fraction of a millisecond.



  The Franssen Effect

The ear is relatively adept at identifying the locations of sound sources. However, it also employs an auditory memory that can sometimes confuse direction. The Franssen effect demonstrates this. Two loudspeakers are placed to the left and right of a listener in a live room. The loudspeakers are about 3 ft from the listener at about 45° angles. A sine wave is played through the left loudspeaker, and the signal is immediately faded out and simultaneously faded in at the right loudspeaker, so there is no appreciable change in overall level. Most listeners will continue to locate the signal in the left loudspeaker, even though it is silent and the sound has actually moved to the right loudspeaker. They are often surprised when the cable to the left loudspeaker is disconnected and they continue to "hear" the signal coming from the left loudspeaker. This demonstrates the role of auditory memory in sound localization.
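
The cross-fade in the Franssen demonstration is easy to prototype. The following Python sketch is only an illustration, not part of the text: it assumes NumPy and SciPy are available, and the 500-Hz tone, fade time, switch time, and file name are arbitrary choices. It writes a stereo file in which a steady sine tone is quickly faded out of the left channel and simultaneously faded into the right channel, keeping the overall level roughly constant.

    # Illustrative Franssen-style cross-fade signal (assumptions noted above).
    import numpy as np
    from scipy.io import wavfile  # assumes SciPy is installed

    fs = 44100                  # sample rate, Hz
    dur = 4.0                   # total duration, s
    f0 = 500.0                  # tone frequency, Hz (arbitrary choice)
    t_switch = 1.0              # time at which the tone moves to the right loudspeaker, s
    fade = 0.05                 # cross-fade time, s (fast enough to seem immediate)

    t = np.arange(int(fs * dur)) / fs
    tone = 0.5 * np.sin(2 * np.pi * f0 * t)

    # Left gain ramps from 1 to 0 while right gain ramps from 0 to 1,
    # so the combined level stays roughly constant during the switch.
    g_right = np.clip((t - t_switch) / fade, 0.0, 1.0)
    g_left = 1.0 - g_right

    stereo = np.stack([g_left * tone, g_right * tone], axis=1)
    wavfile.write("franssen_demo.wav", fs, (32767 * stereo).astype(np.int16))

Played over two loudspeakers arranged as described above, the steady tone continues in the right channel after the switch, yet many listeners keep localizing it to the left.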



  The Precedence Effect

Our hearing mechanism integrates spatially separated sounds over short intervals and, under certain conditions, tends to perceive them as coming from one location. For example, in an auditorium, the ear and brain have the ability to gather all reflections arriving within about 35 msec (milliseconds) after the direct sound and combine (integrate) them to give the impression that the entire sound field comes from the direction of the original source, even though reflections from other directions are involved. The sound that arrives first establishes the perceptual source location of later sounds. This is variously called the precedence effect, the Haas effect, or the law of the first wavefront. The sound energy integrated over this period also gives an impression of added loudness.
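
To relate the 35-msec window to room geometry, a reflection arriving 35 msec after the direct sound has traveled roughly 12 m farther. This conversion is not worked out in the text; the sketch below assumes a round speed of sound of 343 m/s.

    # Rough conversion between extra reflection path length and arrival delay.
    # The speed of sound value is an assumption, not taken from the text.
    SPEED_OF_SOUND = 343.0  # m/s, at ordinary room temperature

    def reflection_delay_ms(extra_path_m: float) -> float:
        """Delay (msec) of a reflection whose path is extra_path_m longer than the direct path."""
        return 1000.0 * extra_path_m / SPEED_OF_SOUND

    # A path about 12 m longer than the direct path corresponds to roughly 35 msec.
    print(round(reflection_delay_ms(12.0), 1))   # prints 35.0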
It is not too surprising that the human ear fuses sounds arriving during a certain time window. After all, at the cinema, our eyes fuse a series of still pictures, giving the impression of continuous movement. The rate of presentation of the still pictures is important; there must be at least 16 pictures per second (about a 62-msec interval) to avoid seeing a series of still pictures or a flicker. Hearing is similarly a process of temporal fusion. Auditory fusion works best during the first 35 msec after the onset of sound; beyond 50 to 80 msec, the integration breaks down, and with long delays, discrete echoes are heard.
Haas placed his subjects 3 m from two loudspeakers arranged so that they subtended an angle of 45°, the listener's line of symmetry splitting this angle (there is some ambiguity in the literature about the angle). The tests were conducted on a rooftop, where conditions were approximately anechoic. Both loudspeakers played the same speech signal at the same level, but one loudspeaker was delayed relative to the other. Clearly, sound from the undelayed loudspeaker arrived at the listening position slightly before the sound from the delayed loudspeaker. Haas studied the effects of varying the delay on speech signals. As shown in Fig. 4-18, he found that in the 5- to 35-msec delay range, the sound from the delayed loudspeaker was perceived as coming from the undelayed loudspeaker. In other words, listeners localized both sources to the location of the undelayed source.
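
A Haas-style delay signal can be generated in the same spirit as the Franssen sketch above. The following Python fragment is again an illustration rather than Haas's procedure: it assumes NumPy and SciPy, substitutes a noise burst for the speech material he used, and the 20-msec delay and file name are arbitrary. Played over two loudspeakers, delays in the 5- to 35-msec range should localize toward the undelayed channel.

    # Illustrative precedence-effect test signal (assumptions noted above).
    import numpy as np
    from scipy.io import wavfile  # assumes SciPy is installed

    fs = 44100                                   # sample rate, Hz
    delay_ms = 20.0                              # try values between 5 and 35 msec
    delay = int(round(fs * delay_ms / 1000.0))   # delay in samples

    # A short noise burst stands in for the speech signal Haas used.
    rng = np.random.default_rng(0)
    mono = 0.2 * rng.standard_normal(int(fs * 2.0))

    left = np.concatenate([mono, np.zeros(delay)])    # undelayed channel
    right = np.concatenate([np.zeros(delay), mono])   # delayed channel

    stereo = np.stack([left, right], axis=1)
    wavfile.write("precedence_demo.wav", fs,
                  (32767 * stereo).clip(-32768, 32767).astype(np.int16))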