
The Nyquist–Shannon Sampling Theorem: Exceeding the Nyquist Rate

May 18, 2020 by Robert Keim

This article continues our series on sampling theory by explaining the importance of oversampling in real-life mixed-signal systems.

In the first article of this series, we explored the sampling theorem by thinking in the time domain, and in the second article, we approached it from a frequency-domain perspective.

Now, we need to consider this theorem’s role in guiding the decisions of electrical engineers whose goal is to design functional circuits and systems. 

Shannon’s sampling theorem states the following:

If a system uniformly samples an analog signal at a rate that exceeds the signal’s highest frequency by at least a factor of two, the original analog signal can be perfectly recovered from the discrete values produced by sampling.

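The theorem is straightforward to check numerically. Here is a minimal Python/NumPy sketch (the 10 Hz tone, the 25 Hz sampling rate, and the one-second record are illustrative values of mine, not from the article) that samples a sine above the Nyquist rate and rebuilds the waveform with the sinc-interpolation sum that underlies perfect reconstruction:

```python
import numpy as np

f_sig = 10.0   # highest (and only) signal frequency, in Hz
f_s = 25.0     # sampling rate: comfortably above 2 * f_sig = 20 Hz

# Take one second of uniform samples.
n = np.arange(int(f_s))                       # sample indices 0..24
samples = np.sin(2 * np.pi * f_sig * n / f_s)

# Whittaker-Shannon reconstruction: each point of the continuous-time
# waveform is a sinc-weighted sum of all the samples.
t = np.linspace(0, 1, 1000, endpoint=False)
recon = np.array([np.sum(samples * np.sinc(f_s * ti - n)) for ti in t])

# Away from the record edges (which truncate the sinc sum), the
# reconstruction closely matches the original waveform.
err = np.max(np.abs(recon - np.sin(2 * np.pi * f_sig * t))[200:800])
print(f"max interior reconstruction error: {err:.4f}")
```

With an infinitely long sample record, the interior error would shrink toward zero; the small residual here is purely an effect of truncating the sinc sum.
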
Theory informs practice but does not specify it. In other words, Shannon’s theorem doesn’t tell us how to design a sampled system; rather, it helps us to understand sampled systems and provides a framework that orients and supports the work of the engineer. Thus, it’s important to know where theory and practice diverge, and in the case of sampling theory, perhaps the most important divergence is that of the required sampling rate.

Sampling and Aliases

In the previous article, we saw that aliasing occurs when the sampling frequency (fS) is less than twice the maximum signal frequency (fMAX), such that the subspectra overlap. 

[Figure: spectrum of a sampled signal in which the subspectra (aliases) overlap because fS < 2fMAX]

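Aliasing is easy to demonstrate numerically. In the sketch below (the 100 Hz sampling rate and the 30/70 Hz tone pair are hypothetical values I chose for illustration), a 70 Hz cosine sampled at 100 Hz produces exactly the same sample sequence as a 30 Hz cosine, because 70 Hz folds down to 100 - 70 = 30 Hz:

```python
import numpy as np

f_s = 100.0                  # sampling rate in Hz
f_low, f_high = 30.0, 70.0   # f_high folds down to f_s - f_high = 30 Hz

n = np.arange(32)            # 32 sample instants
x_low = np.cos(2 * np.pi * f_low * n / f_s)
x_high = np.cos(2 * np.pi * f_high * n / f_s)

# Once sampled, the two tones are numerically indistinguishable.
print(np.allclose(x_low, x_high))   # True
```
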
I think that most of us naturally interpret the term “aliasing” as inherently negative, i.e., as a potential problem that must be avoided. However, aliasing in the broader sense is an integral part of converting a signal from a continuous waveform into a sequence of discrete values.

I’ve been using the word “subspectra” to refer to the spectral replicas created by sampling, but the standard name for them is simply aliases.

We create aliases—i.e., the original signal frequencies “disguised” as different frequencies—every time we perform analog-to-digital conversion, regardless of the sampling rate. When sampled data are converted from digital back to analog, these aliases become part of the analog signal, and consequently, D/A conversion produces an analog signal that is not identical to the original analog signal. Thus, if we wish to perfectly reconstruct the original analog signal, we must eliminate the effect of the aliases.
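
To make this concrete: for a tone at frequency f sampled at rate fS, the aliases appear at |k·fS ± f| for every integer k. A one-line check in Python (the 3 kHz tone and 10 kHz sampling rate are hypothetical example values):

```python
# Aliases of a 3 kHz tone sampled at 10 kHz: |k*f_s ± f| for integer k.
f, f_s = 3000, 10000
images = sorted({abs(k * f_s + s * f) for k in range(3) for s in (1, -1)})
print(images)   # [3000, 7000, 13000, 17000, 23000, 27000]
```

Those images at 7 kHz, 13 kHz, and so on are precisely what must be removed to recover the original 3 kHz tone.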

As we know, to prevent alias-induced signal corruption, we need to sample at or above the Nyquist rate. If we don’t comply with this fundamental requirement, we don’t stand a chance against the aliases—by the time we even so much as glance at our sampled data, the aliases have already permanently mingled with the original spectrum. There’s nothing we can do to separate the authentic frequencies from the impostors.

But signal reconstruction only begins with an adequate sampling frequency. A second fundamental requirement is low-pass filtering.

Reconstruction via Filtering

If we sample above the Nyquist rate, we still have aliases, but now there is a gap between the authentic spectrum and the aliased spectra:

[Figure: spectrum of a signal sampled above the Nyquist rate; a gap separates the original spectrum from the aliases]

This allows us to recover the original signal by converting the digitized waveform into an analog waveform and then applying a low-pass filter:

[Figure: a low-pass filter response overlaid on the reconstructed signal’s spectrum, passing the original spectrum and rejecting the aliases]

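In code, that recovery step can be sketched as follows (the 1 kHz tone, 48 kHz sampling rate, 16x fine grid, and fourth-order Butterworth filter are assumed example values of mine, and the zero-order hold is a simplified stand-in for a real DAC’s output):

```python
import numpy as np
from scipy import signal

f_sig, f_s = 1000.0, 48000.0              # 1 kHz tone sampled at 48 kHz
n = np.arange(480)                        # 10 ms of samples
samples = np.sin(2 * np.pi * f_sig * n / f_s)

# Model the DAC as a zero-order hold: each sample is held for one period,
# simulated here on a 16x finer time grid. The stair-steps carry the
# alias energy at f_s - f_sig, f_s + f_sig, and so on.
fine_rate = 16 * f_s
dac_out = np.repeat(samples, 16)

# Reconstruction filter: a low-pass whose cutoff sits above the signal
# (1 kHz) but far below the first alias (47 kHz).
sos = signal.butter(4, 5000.0, btype="low", fs=fine_rate, output="sos")
smooth = signal.sosfilt(sos, dac_out)     # stair-steps smoothed away
```

Comparing dac_out with smooth shows the stair-step artifacts, which carry the alias energy, being smoothed away; that is the job of the reconstruction filter described next.
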
The low-pass filter that is intended to eliminate (or more realistically, mitigate) the effect of aliases in the reconstructed analog signal is called a reconstruction filter. This is an analog filter that is applied after D/A conversion. Two aspects of that sentence deserve emphasis.

First, we can’t remove aliases by means of digital filtering. Aliases are inherent to the nature of sampled, quantized data and therefore can’t be eliminated in the digital realm (though oversampling and interpolation can make the analog filtering requirements less severe).

Second, a reconstruction filter is designed to remove aliases, but it is not an anti-aliasing filter! The term “anti-aliasing filter” refers to a low-pass filter that is applied before A/D conversion.

Ideal Filter vs. Real Filter

If you ponder the previous diagram for a moment, you may start to understand why Shannon’s theorem is not a “how-to” guide for designing mixed-signal electronic systems. If we bring the sampling rate down to the theoretical limit, the Fourier transform looks like this:

[Figure: spectrum with the sampling rate at the theoretical limit; the aliases are directly adjacent to the original spectrum, with no gap]

In the idealized mathematical realm, we can still separate the authentic spectrum from the aliases. However, physical components cannot create the “brick wall” type of frequency response that would be needed to slice straight down and thereby perfectly filter out the unwanted frequency content:

[Figure: an ideal “brick-wall” low-pass response slicing straight down between the original spectrum and the first alias]

Furthermore, we typically prefer to avoid the cost, complexity, and board space required for filters that come anywhere near the brick-wall response. Instead, we use oversampling.

By sampling a signal at a rate that is much higher than the Nyquist rate, we ensure that there will be a large frequency gap between the authentic spectrum and the nearest alias. This large gap makes it much easier for us to build an effective reconstruction filter because the magnitude response can roll off slowly and still produce significant attenuation at the alias frequencies. With generous oversampling, even a first-order RC low-pass filter can provide adequate alias suppression.
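
A few quick numbers back this up (the 1 kHz signal band, the 20x oversampling ratio, and the corner placement below are hypothetical choices of mine): with the RC corner one octave above the signal band, the filter costs only about 1 dB of in-band droop while attenuating the first alias by roughly 20 dB.

```python
import math

f_max = 1000.0          # highest signal frequency of interest, in Hz
f_s = 20 * f_max        # generous oversampling: 20 kHz sampling rate
f_alias = f_s - f_max   # lowest alias frequency: 19 kHz
f_c = 2 * f_max         # RC corner placed one octave above the signal band

def rc_gain_db(f, f_c):
    """Magnitude of a first-order RC low-pass: |H(f)| = 1/sqrt(1 + (f/f_c)^2)."""
    return -10 * math.log10(1 + (f / f_c) ** 2)

print(f"droop at f_max:        {rc_gain_db(f_max, f_c):6.1f} dB")   # about -1.0 dB
print(f"attenuation at alias:  {rc_gain_db(f_alias, f_c):6.1f} dB") # about -19.6 dB
```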

There’s no fixed rule for how much oversampling is needed in a given application, but I like to have a sampling rate that is at least five times higher than the highest signal frequency of interest. If your signal frequencies are close to the maximum sampling rate of your ADC, you may have to sample closer to the Nyquist rate and then devote more time and money to your reconstruction filter.

Conclusion

We’ve seen that Shannon’s sampling theorem needs to be adapted to the constraints of real-life circuit design. Though perfect reconstruction is mathematically possible when the sampling rate is equal to twice the highest signal frequency, this approach requires an idealized low-pass filter and is, therefore, not directly applicable to engineered systems.

Another important issue that I mentioned is the difference between a reconstruction filter and an anti-aliasing filter. We’ll discuss anti-aliasing filters in the next article.

5 Comments
  • Bernie Hutchins May 19, 2020

    Robert – you said:
    “Instead, we use oversampling.  . . . . . .”

    In the brief description that follows this comment it is very apparent that you have BARELY A CLUE as to what the 60-year art of “oversampling” (OS) entails. What you relate adds NOTHING to ordinary low-pass sampling. It is a disservice to the readers of your tutorial and to the developers of the brilliant OS art.


    [1] M. Hauser, “Principles of Oversampling A/D Conversion,” J. Audio Eng. Soc., Vol. 39, No. 1/2, Jan/Feb 1991, pp. 3-26.
    [2] S. Orfanidis, Introduction to Signal Processing, Prentice-Hall (1996).
    [3] K. Pohlmann, Principles of Digital Audio, Sams (2000).
    [4] J.G. Proakis & D.G. Manolakis, Digital Signal Processing, Macmillan (1992).


    Here are the basic points:


    (1) During audio recording (sampling), samples ARE taken at a very high rate (perhaps x128), but are quantized (using a “discrete-time filter”) usually to just one bit, then DIGITALLY FILTERED (a “pre-decimation” filter) to reduce to the audio bandwidth (you say impossible!), then decimated to the much lower sampling rate and stored (full bit-size on a CD at 44.1 kHz). This solves the anti-aliasing problem by substituting a simple, well-defined, cheap digital filter for an impractical analog filter. It does much more!


    (2) Because there is “quantization noise” (QN) inherent in the quantization that follows the usual sampling, a one-bit quantization would be extremely noisy were it not for the fact that the “required” noise is distributed uniformly over the much larger (OS) bandwidth, making MOST of it inaudible (gaining ½ bit per octave of OS – not much, but it’s free). By a simple manipulation of the sampling digital filter’s structure, we can “NOISE SHAPE” (NS) the QN so as to achieve 1.5 or even 2.5 bits/octave of OS. Soon enough, one bit is enough.


    (3) For playback we have perhaps a CD with 16-bit 44.1 kHz samples (obtained perhaps by “brute force”, perhaps by OS-NS). The process is much the inverse of recording: digital interpolation and reconstruction (with NS) to an OS rate, followed by a trivial one-bit D/A and an RC low-pass.


    For more details, visit:    http://electronotes.netfirms.com/EN204.pdf
    see pages 20-34.


    -Bernie    

    • RK37 May 20, 2020
      Thanks for the comment, Mr. Hutchins, and for compensating for the deficiencies in my article.
  • MikPDrake May 21, 2020

    “There’s nothing we can do to separate the authentic frequencies from the impostors.” Not strictly true in every case. You can find ‘impostors’ or ‘aliases’ by varying the sampling frequency slightly and noting the behaviour of the resultant sampled signals. Hewlett Packard used a technique in their very old microwave spectrum analysers (141T?). These used a harmonic mixer which on its own made it impossible to know what the frequency of the desired signal was. They added a little sprung switch to slightly shift the base local oscillator frequency. Signals moving right on the display were ‘real’ and those moving left were likely ‘impostors’. This technique may be similarly employed to ‘cheat’ using subsampling ADCs on higher frequency signals. I make this observation from memories that are nearly 50 years old so the detail may be lost. The idea remains.
