Professor Sampler’s Notes: Reconstruction

Topics Covered:

  • Reconstruction
  • Definition of Interpolation
  • The Nyquist-Raabe criterion
  • Definition of Extrapolation
  • The boxcar, rectangular and sinc functions
  • One typical reconstruction methodology

I have had my results for a long time: but I do not yet know how I am to arrive at them – Carl Gauss

Can we bring back our T-Rex? That cute Veggie-saur sitting on the porch isn’t what we bargained for, is it?

If you are like me, you might share the same feeling. How can a bunch of dots (samples) bring back the original signal? Is this really going to work? If there’s a smart high-schooler reading this, she might ask: “What about the dots between the dots?”

This chapter is all about the missing dots between the dots.

You already know the answer, right? Once we miss a piece, the only way to bring it back is to guess its value. This thankless job is called interpolation.

Our scientists and mathematicians have gotten so good at this that most of us can’t tell the difference between the guess and the real thing any more. In fact, we’ve automated the process, so that humans aren’t required (of course they are! If not, then who are we doing all this for?). This automated process is called an algorithm. There are rigorous mathematical algorithms (more complex than they sound, trust me) out there to create new dots in between the ones we have. It’s connect-the-dots, except this one’s within the backdrop of a nightmare slasher starring a killer who never quits, and producers who don’t want to either. The process of rebuilding a continuous signal from its discrete cousin is called reconstruction.

How is it done? In short, we use the same wizardry that got us into this whole mess in the first place. What’s that? Remember the message on the plaque on my wall?

If you know stuff about the original signal (which you must in order to sample it!), then you can use that knowledge to make educated (whatever that means) guesses when reconstructing. This is the gist of it. Something like: if you know what a full apple looks like, you can be shown a partial apple and your brain will correctly guess what it is.

Think about it. Our brain uses partial knowledge every day to make assumptions – assumptions about the weather, people, our wants and needs, the stock market, dress sizes, age, you name it. In fact, this chapter is a series of discrete paragraphs on a certain subject, and your brain is using its experience and power to fill in the gaps left by the author – that’s me! So small wonder we use the same methodology to reconstruct our continuous signals, eh?

One curious side effect of reconstruction is this: the resulting continuous signal will never contain any frequency higher than one-half the sampling rate. We used the Nyquist rate to sample, correct? Now it comes back to haunt us in another way. We can only hope to reconstruct the original signal if it doesn’t have frequencies higher than half the sampling rate. This condition is called the Nyquist criterion or the Raabe condition.
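A tiny numerical sketch makes the criterion concrete (the 4 Hz sampling rate and the 3 Hz sine are hypothetical choices for illustration). A 3 Hz sine sampled at 4 Hz produces exactly the same dots as a −1 Hz sine, so no reconstruction method could ever tell them apart:

```python
import math

fs = 4.0                 # hypothetical sampling rate (Hz); Nyquist limit is fs/2 = 2 Hz
f_high = 3.0             # a frequency above the Nyquist limit
f_alias = f_high - fs    # -1 Hz: the frequency it masquerades as

# At the sampling instants t = n / fs the two sines produce identical dots,
# so the samples alone cannot distinguish them.
for n in range(8):
    t = n / fs
    assert abs(math.sin(2 * math.pi * f_high * t)
               - math.sin(2 * math.pi * f_alias * t)) < 1e-9
```

This is the haunting in action: once the dots are taken, every frequency above half the sampling rate has an impostor below it.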

Before we proceed we’ll have to distinguish between interpolation and extrapolation. If joining the dots is interpolation, what is extrapolation? Extrapolation is also the process of creating new dots where none existed, but instead of creating them between any two dots, we use the same idea to create dots outside the scope of our current signal. E.g., in the case of Hertz’s path, extrapolation would involve finding his path before he begins or after he finishes duty, etc.

You will immediately realize that we don’t know much about Hertz’s life beyond his path, and this makes extrapolation a more uncertain process when compared to interpolation. Caveat emptor! The reason why we have many successful practical interpolation algorithms is because we already know quite a bit about the signals we are sampling. Extrapolation is going where your brain (or computer) hasn’t gone before.

Interpolation or Extrapolation?

So, is reconstruction like taking apart lego bricks and putting them back together again? No. It’s more like recycling paper. The end product is paper, and can be used as paper, but it’s not the same paper it was before it went into the recycling plant. But why can’t we just redo the math backwards to get our original signal? Simple. We don’t have the information we never sampled. Made a mistake in sampling? Too bad, that signal is lost forever. If you’re lucky, you might get another shot (thankfully a lot of natural phenomena are periodic!).

In our exercise of sampling postman Hertz’s path, we came across a signal that looked something like this:

This is what is called a boxcar function. A boxcar function, when normalized (just think of this term as ‘parking it mathematically at the zero mark’ for now), results in a rectangular function:

Kind of looks the same, doesn’t it? Except it’s normalized: the boxcar is now centered at the zero mark. Now we can remember where we parked our boxcar. The rectangular function is similar to what one would get after a typical exercise in simple sampling. The idea is, from this box-like function, we need to get a fluid wave that resembles a continuous signal. Mathematically, this can be done via something called the cardinal sine (cardinal sin?), ‘sinc’ for short. Such a function is called a sinc function. This is what it looks like:

A normalized sinc function (in blue) has the neat, graceful wavy lines of a continuous analog signal, doesn’t it? And how do we get from a rectangular function to a sinc function? The Fourier transform again (there’s one in every corner!). The Fourier transform of a normalized sinc function is a normalized rectangular function. Now we have a way back towards a continuous analog signal.
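The two functions are simple to write down. Here is a minimal Python sketch, using one common convention for the rectangle (value 1 inside the interval, 0 outside) and the normalized sinc:

```python
import math

def rect(x):
    """Normalized rectangular function: 1 inside (-1/2, 1/2), else 0."""
    return 1.0 if abs(x) < 0.5 else 0.0

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with the limit sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# The rectangle is our parked boxcar: centered at zero, unit width and height.
assert rect(0.0) == 1.0 and rect(0.6) == 0.0

# The sinc peaks at 1 and crosses zero at every nonzero integer --
# the very property that makes the reconstruction below work.
assert sinc(0.0) == 1.0
assert abs(sinc(1.0)) < 1e-12 and abs(sinc(2.0)) < 1e-12
```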

Here’s the thrilling climax on how a typical reconstruction works:

Each sample value is multiplied by a sinc function centered at its own sampling instant, so that the zero-crossings of that sinc fall exactly on all the other sampling instants (in other words, they are in sync, or in step). When every sample has been multiplied, we just add them all up to get the original signal back. All these sinc functions added together look like one continuous analog signal, and if you’ve done the math right, you shouldn’t see any ‘joins’. We have our T-Rex back in the house! But, is he the same?
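The climax above fits in a few lines of Python. This sketch implements the Whittaker–Shannon interpolation formula: each sample scales a sinc centered at its own instant, and the scaled sincs are summed (the 1 Hz sine, the 8 Hz rate, and the 64-sample window are all hypothetical choices):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) = 1."""
    return math.sin(math.pi * x) / (math.pi * x) if x != 0.0 else 1.0

def reconstruct(samples, fs, t):
    """Sum of sample-scaled sincs, each centered at its own instant n / fs."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

# A 1 Hz sine sampled at 8 Hz -- comfortably above the Nyquist rate.
fs = 8.0
samples = [math.sin(2 * math.pi * n / fs) for n in range(64)]

# At a sampling instant, every other sinc crosses zero: we get the dot back.
assert abs(reconstruct(samples, fs, 3 / fs) - samples[3]) < 1e-9

# Between the dots (away from the truncated edges of our finite window),
# the sum of sincs closely tracks the original continuous sine.
t = 31.5 / fs
assert abs(reconstruct(samples, fs, t) - math.sin(2 * math.pi * t)) < 1e-2
```

Note the small residual error between the dots: with a finite number of samples the infinite sum is truncated, which is one of the reasons our recycled T-Rex is never quite the original.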

One last word of caution. This type of reconstruction, via sinc functions, is not the only method of interpolation. There are others, too. Each scenario demands its own methodology, and one would do well not to take seriously the claims of pseudo-experts who bandy these terms about as if no others existed. You know better now.

Takeaways:

  • Reconstruction is the interpolation process that mathematically generates a continuous signal (hopefully the original one) from discrete samples.
  • Interpolation is the act of finding missing information between two points that we know much about.
  • Extrapolation is the act of finding missing information beyond our current knowledge-domain.
  • We can only hope to reconstruct the original signal if it doesn’t have frequencies higher than half the sampling rate. This condition is called the Nyquist criterion or the Raabe condition.
  • The Fourier transform of a normalized sinc function is a normalized rectangular function. It’s the equivalent of a hair straightener.

Next: Professor Sampler’s Notes: Interpolation
Previous: Professor Sampler’s Notes: The Shannon-Nyquist Sampling Theorem Part III