Tune Around! Topics For Technicians!
Frequency Modulation vs. Single Sideband on the VHF Bands
- Paul Bock, K4MSG
Some time ago I was part of a
discussion concerning the relative advantages of single sideband (SSB)
over frequency modulation (FM) for VHF direct station-to-station
communications, commonly referred to as simplex operation. What follows is
an attempt to explain the differences between the two modulation types in
a manner that does not add to the confusion. Let's begin by recognizing that FM holds sway whenever VHF
repeaters are being used, primarily because of its immunity to amplitude
noise (more on this later). Conversely, SSB seems to be the mode of choice
for weak-signal station-to-station communications. But why does this
latter situation exist? To explain, let's start by supposing that you have
a multimode transceiver for the 144 MHz (2-meter) band that can operate
both SSB and FM. Let's also assume that you have been using this
transceiver to communicate through local 2-meter FM repeaters with a
vertically polarized antenna (such as a whip or ground plane), which is
the polarization used almost universally for ham FM. If you were to
decide to tune down to the low end of the 2-meter band and switch modes to
SSB to work "station-to-station" with your vertically polarized antenna,
your results would be somewhat limited, because the vast majority of VHF
SSBers use horizontally polarized antennas and you would suffer a
cross-polarization loss of as much as 20 dB. In a theoretically perfect world the loss
would be infinite, but since no received signal is ever a purely
horizontal or purely vertical wavefront a measurement of the loss on a
pair of real-world antennas, one horizontal and one vertical, would be
around 20 dB. This is still pretty serious, because 20 dB is a factor of
100, which means that if you're running 50 watts the cross-polarization
loss has the same effect as reducing your power output to 0.5
watt! Now, if you decide that you're just going to switch to SSB to
talk to your buddy some distance away because FM isn't making the grade,
and if you both have vertical antennas, you will see an improvement by
changing modes and you won't suffer the cross-polarization penalty because
you're both vertically polarized. There is a possible cost in slightly
increased path loss, because vertically polarized energy is attenuated
(absorbed) as much as 1-2 dB more than horizontally polarized energy
(trees, etc., are vertical), but this isn't a huge penalty. If you want to
talk to the other SSBers on the low end of the band you can install a
horizontally polarized antenna, perhaps a small Yagi, and benefit from
horizontal polarization plus whatever forward gain the antenna may have.
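The decibel arithmetic behind that 20 dB figure is easy to check. Here is a short Python sketch (the function name is mine, not from the article) that converts the loss to a linear power ratio:

```python
import math

# Convert a dB loss to a linear power ratio: ratio = 10^(dB/10)
def db_to_ratio(db):
    return 10 ** (db / 10)

cross_pol_loss_db = 20   # typical real-world cross-polarization loss
tx_power_w = 50          # transmitter output in watts

ratio = db_to_ratio(cross_pol_loss_db)   # 20 dB -> a factor of 100
effective_w = tx_power_w / ratio         # 50 W / 100 = 0.5 W

print(f"20 dB loss = factor of {ratio:.0f}")
print(f"50 W through a 20 dB loss behaves like {effective_w:.1f} W")
```

The same function confirms the article's other figures: a 1-2 dB polarization-absorption penalty is only a factor of about 1.3 to 1.6, which is why it is described as "not huge."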
Assuming the same antenna polarization, then, what accounts for
the really significant difference (as much as 12 dB) in required signal
strength to maintain communications on FM versus SSB? The real answer lies
not in the antenna, and not in the transmitter, but in the
receiver. First, the obvious: FM requires a wider IF bandwidth than SSB
so the receiver's internal noise floor will be higher, therefore requiring
more signal at the input in order to overcome it. At the bandwidths
typical for FM and SSB in ham equipment -- 12 to 15 kHz for FM and 2.5 kHz
for SSB -- the difference in ambient noise floor will be about 6 to 8
dB, or between 1 and 2 S-units, which is nothing to sneeze at.

But the problem is more complicated than that. FM reception requires that the incoming signal be "hard limited" by amplifying and clipping, then re-amplifying and re-clipping, over and over, to strip off the noise that rides on top of the signal. This removes the amplitude variations (noise) and leaves only the phase variations (voice or data), which are all that the FM detector, called a discriminator, will respond to. But as an FM signal grows weaker there is more and more noise (amplitude variation) than the limiter section can "limit out," and at some point the discriminator can no longer detect the phase variations and its audio output simply drops to zero. There is no slow fading, with the signal getting gradually harder to hear until it is right down at the noise floor; the signal disappears at some point before that, because the phase variations have become "buried" in the amplitude variations (noise) riding on the signal and the discriminator has nothing to respond to.

The point at which the FM receiver discriminator can detect phase variations and respond to
them -- yielding an audio output -- is sometimes called the
"capture point," and as the signal approaches the minimum usable
signal-to-noise ratio the audio output from the discriminator may "pop in
and out," making copy virtually impossible. For example, many of us have had the
experience of listening to an FM broadcast station while traveling in a
vehicle and having the desired station suddenly disappear and be replaced
by another station on the same frequency. Sometimes the radio receiver
will bounce back and forth between the two stations, making listening
a sheer annoyance! This phenomenon, called "capture effect",
occurs because the discriminator only responds to the phase variations of
the strongest FM signal present and is the reason why you never hear a
second FM signal on the same frequency "underneath" a stronger signal.
There are no hard and fast
numbers on when this FM dropout occurs because it varies depending on FM
receiver design, but a reasonable rule of thumb is that it will occur at
about the equivalent of a 7 dB signal-to-noise ratio, and it can be higher.
By contrast, an SSB signal should be completely copyable down to 3 dB above
the noise and sometimes even lower. It may experience fading for periods
of several seconds or longer that make it much more difficult to copy but
bits and pieces may still be audible enough to provide some understanding
of what is being said. An FM signal, by contrast, will start popping in
and out, and ultimately disappear completely, well before the noise floor
is reached, and you have no hope of copying what cannot even be
heard.

To summarize, then, if we ignore antenna polarization as an issue, we can say that a typical ham FM rig will suffer not only from the higher noise floor due to the additional required receiver bandwidth (a 6 to 8 dB penalty) but also from the increased signal-to-noise ratio required (at least 4 dB more than SSB) because of the way FM detectors work. The result is a penalty of roughly 10 to 12 dB for using FM instead of SSB, and it is the reason why SSB is preferred for weak-signal work: narrower bandwidth requirements (meaning less receiver noise and greater sensitivity) and the ability to copy signals much closer to the noise floor, because they don't just "drop out."
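As a rough sanity check on those numbers, both penalties can be worked out from the standard room-temperature thermal-noise density of -174 dBm/Hz. The bandwidths and SNR thresholds below are the article's figures; the helper function names are mine:

```python
import math

# Thermal noise floor in dBm: -174 dBm/Hz plus 10*log10(bandwidth in Hz)
def noise_floor_dbm(bandwidth_hz):
    return -174 + 10 * math.log10(bandwidth_hz)

fm_bw, ssb_bw = 15_000, 2_500   # Hz: typical ham FM and SSB IF bandwidths

# Wider bandwidth means a higher noise floor (about 7.8 dB for 15 kHz vs 2.5 kHz)
bw_penalty = noise_floor_dbm(fm_bw) - noise_floor_dbm(ssb_bw)

# FM dropout at roughly 7 dB SNR vs SSB copyable down to about 3 dB
detector_penalty = 7 - 3

print(f"Bandwidth penalty : {bw_penalty:.1f} dB")
print(f"Detector penalty  : {detector_penalty} dB")
print(f"Total FM penalty  : {bw_penalty + detector_penalty:.1f} dB")
```

With a 12 kHz FM bandwidth instead, the bandwidth term comes out near 6.8 dB, which is where the article's 6-to-8 dB range and the overall 10-to-12 dB figure come from.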