Tue 04 June 2013 - 11:00:32 CDT
You may have noticed in some recent posts that I’ve linked several times to Wikipedia. This raises two questions. First, is Wikipedia a legitimate resource for scientific citation? Second, is Wikipedia a learning tool?
To the first question, I would say “yes, but…” Certainly, you have to be careful of any source, and you have to review the standards of your particular medium if they exist. To wit: I have no issue referencing accurate info in Wikipedia for a blog post, but I may reference a standard text instead for an article in a refereed journal. (In the latter case, reluctance stems from the mutability of Wikipedia; it can change at any time and the reference may no longer make sense.)
To the second question, I say “no, but…” Unfortunately, the same can be said of most textbooks. They are less learning tool and more reference volume, primarily due to a lack of context. I use Wikipedia as an always-available reference for:
So in a way, I am learning things, but not always with context. Not in a way where I can see the application and the path forward. Adding some context is part of what I am trying to do with this blog; not just regurgitating facts from Wikipedia. Leave some comments if it isn’t–or is–working.
(And xkcd fans, don’t miss the citation gag in the recent what-if.)
Mon 20 May 2013 - 15:19:00 CDT
Two previous posts began the discussion of using the Fourier series to break down a periodic signal into constituent sine/cosine waves. Now, to finish that up, we can look at a general case that varies both the phase and the duty cycle of the example square wave. Recall from last time that shifting a square wave by an arbitrary amount would move energy between the constituent cosines and sines, but that we never saw any energy in the even harmonics. The reason for this becomes clear if instead of looking at the Fourier series, we look at the Fourier Transform and employ a couple useful identities.
We use the Fourier Transform to convert a signal between two domains, say time and frequency—although you also often use space and spatial frequency in optics. In this way, it is a generalized form of the Fourier series we have been discussing. The Fourier series breaks down periodic functions into sines and cosines of various frequencies and amplitudes. The Fourier Transform does the same thing for arbitrary signals resulting in amplitudes as a function of frequency.
The two processes also use the same basis although the Fourier transform is often written to use complex exponentials, and the resulting function represents complex amplitudes. But this is just the same thing as the Fourier series! A complex exponential can be written as:
$$ e^{j \theta} = \cos \theta + j \sin \theta $$
…so we can equivalently say that the a coefficients—on the cosine terms—of the Fourier series represent the same thing as the real part of the Fourier transform, and the b coefficients represent the imaginary part.
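To make that correspondence concrete, here’s a quick numerical sketch (using NumPy’s FFT purely as an illustration; the factors of 2 and the minus sign are details of the discrete transform’s convention): the cosine amplitude of a test signal shows up in the real part of the transform, and the sine amplitude in the imaginary part.

```python
import numpy as np

# A signal with known cosine and sine content:
# f(t) = 3*cos(t) + 2*sin(2t), period 2*pi, sampled N times.
N = 256
t = np.arange(N) * 2 * np.pi / N
f = 3 * np.cos(t) + 2 * np.sin(2 * t)

F = np.fft.fft(f) / N  # complex amplitude per harmonic

# With NumPy's sign convention: a_n = 2*Re(F[n]), b_n = -2*Im(F[n])
a1 = 2 * F[1].real   # cosine coefficient of harmonic 1 -> 3.0
b2 = -2 * F[2].imag  # sine coefficient of harmonic 2   -> 2.0
```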
Back to the square wave. But instead of a square wave, let’s think of this as a combination of three functions. Looking at one period, we have a square pulse, a.k.a. a rect function. A rect function typically ranges from 0 to 1, so to match our square wave (which swings from -1 to +1) we scale the rect by 2 and shift it down by 1. That constant shift is the second function. And finally, the pulse happens periodically, so we can write this as a convolution with a Dirac comb, a series of delta functions (infinitesimally thin spikes). Convolving a function with a comb essentially makes a copy of the function at each tooth of the comb.
The next figure illustrates this for our square wave. The square wave is in grey; two times the rect (with a width, w, of pi) is in blue; the Dirac comb (with a period, T, of 2 pi) is in red; and the constant offset of -1 is in green.
We can write the function as:
$$ y(t) = 2 \text{rect}\left(\frac{t}{w}\right) \ast \Delta_T(t) - 1 $$
and using some rules of Fourier transforms—and a table of transforms—we can easily write the function’s transform. First, the constant becomes a delta function—one component of a comb—at zero and pointing down because it is negative 1. The rect becomes a sinc function which is a sine divided by its argument, and the argument is given by the width, w, of the rect. The comb becomes a comb with a spacing of 1/T instead of T. For the operators, addition and scaling stay the same, but the convolution becomes multiplication. This—with the rest of the details filled in—gives:
$$ Y(f) = 2 \frac{w \sin(\pi w f)}{\pi w f} \times \frac{\Delta_{1/T}(f)}{T} - \delta(f) $$
These constituents are shown in the top half of the next figure. The sinc (multiplied by 2 and divided by T) is in blue; the comb—not divided by T because we divided the sinc by T for visibility—in red; and the solitary delta function in green. The lower half of the figure shows everything put together. Notice how the resulting function is only non-zero where it coincides with the comb and that the sinc (in grey) represents an amplitude envelope.
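The convolution-becomes-multiplication rule used above is also easy to verify numerically. Here’s a NumPy sketch using circular convolution, which is the flavor the discrete transform natively computes (the signals are just random test vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Convolution theorem: circular convolution in time equals
# multiplication in frequency.
conv_direct = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Brute-force circular convolution: y[k] = sum_m x[m] * h[(k-m) mod N]
conv_manual = np.array(
    [np.sum(x * np.roll(h[::-1], k + 1)) for k in range(N)]
)

assert np.allclose(conv_direct, conv_manual)
```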
Also notice that every other piece of the comb sits at the same frequency as the zero-crossings of the sinc. This explains why we only saw odd harmonics before. The zeros of the sinc sit where the argument of the sine is a multiple of pi or, in this example, with a spacing of 1/w. Because we have a 50% duty-cycle, w is half the period, T, so the zero spacing is 2/T. This is twice the spacing of the comb, so every other harmonic coincides with a zero of the envelope.
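You can check this zero-spacing argument numerically. In this NumPy sketch (np.sinc is the normalized sinc, sin(pi*x)/(pi*x)), the sinc envelope is evaluated at the harmonic frequencies n/T for a 50% duty cycle:

```python
import numpy as np

T = 2 * np.pi      # period
w = T / 2          # 50% duty cycle: pulse width is half the period

def envelope(f):
    # Sinc envelope from the transform of the rect: 2*(w/T)*sinc(w*f)
    return 2 * (w / T) * np.sinc(w * f)  # np.sinc(x) = sin(pi*x)/(pi*x)

harmonics = np.arange(1, 9)        # comb teeth sit at f = n/T
amps = envelope(harmonics / T)

# Even harmonics land exactly on the zeros of the sinc:
# amps ~ [0.637, 0, -0.212, 0, 0.127, 0, -0.091, 0]
```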
Finally, note that the combination of the sinc, comb, and solitary delta at frequency 0 is zero. f=0 represents a constant, or average value—a DC offset if you’re an electrical engineer. Because of the 50% duty cycle, the average value of our square wave is Y(0)=0.
Now let’s wrap up with a picture of what happens when the duty cycle is not 50% and we have an arbitrary shift in the square wave. We’ll have a non-zero average value, and we’ll have non-zero values for all of the a and b coefficients of the Fourier series.
The top half of the plot shows the square wave (black), the sum of the first ten harmonics (blue), and the sums of the first several odd and even harmonics (green and red). The lower half shows the frequency space representation with the amplitudes of the cosine and sine coefficients—or equivalently the real and imaginary parts of the Fourier transform. (The relative amplitudes of the real and imaginary parts are controlled by time-shifting the wave: a time shift is the same as convolving with a single delta function positioned away from t=0, and this is the same as multiplying by a complex exponential—a pure phase term—in the frequency domain.)
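That shift-equals-phase-term fact in the parenthetical is easy to demonstrate with a discrete transform. A NumPy sketch (using circular shifts, which is what the DFT assumes):

```python
import numpy as np

N = 512
t = np.arange(N)
x = np.sin(2 * np.pi * 5 * t / N)   # any signal works; a sine for simplicity
shift = 37                          # delay by 37 samples

X = np.fft.fft(x)
f = np.fft.fftfreq(N)               # frequencies in cycles/sample

# Shift theorem: delaying x by 'shift' samples multiplies the spectrum
# by a pure phase term exp(-2j*pi*f*shift).
x_shifted = np.fft.ifft(X * np.exp(-2j * np.pi * f * shift)).real

assert np.allclose(x_shifted, np.roll(x, shift))
```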
So that was kind of a long road over three posts, and it may not seem very practical yet. But understanding transitions between time and frequency and basic Fourier transform properties—especially the convolution-multiplication duality—are fundamental to countless engineering problems in analog and digital signal processing, wireless communication, and optics. (Lenses can do Fourier transforms in space). An upcoming post will look at how understanding these principles helped us find the source of unwanted emissions from a motor drive circuit, so stick around for that.
Thu 09 May 2013 - 11:07:00 CDT
In my earlier post about Mr. Gibbs I discussed using the Fourier series to break down a periodic signal into constituent sine waves. The example was a 50% duty-cycle square wave, and, as it happens, its Fourier series included only sine terms, and only every other one. Real-world signals tend not to be so uniform, and now I want to take a quick, closer look at what happens when you break the symmetry.
The first step is straightforward. The earlier square wave was defined as having a transition at t=0 (assuming the signal is a function of time, t). Likewise, sin(t) also transitions through zero at t=0, but cos(t) has a maximum at t=0. So the square wave is in-phase with a sine function, and only exhibits non-zero b coefficients—the coefficients of the sine harmonics—as shown in the following plot.
But if we shift the square wave by one-quarter period, the center of one of its lobes will be at t=0, similar to a cosine. Also, by definition, shifting a sine by a quarter period gives you a cosine. Together this suggests that the shifted square wave is in phase with a cosine, and its Fourier series should have only cosine terms—a coefficients instead of b coefficients. The next plot shows this. And also note that the magnitudes of the coefficients match between the two plots, but due to the mathematical details, the a coefficients alternate between positive and negative values.
Now what happens if the shift is by some amount other than a quarter period (or some multiple)? Remember odd and even functions from high-school math? An odd function follows:
$$ f(-x)=-f(x) $$
A sine function is an example. A cosine function, on the other hand, is even and follows:
$$ f(-x) = f(x) $$
So another way of looking at the two examples above is that the first square wave is an odd function and the second (shifted by a quarter period) is even. If you add even functions together, you end up with an even function. Likewise with adding odd functions to get an odd sum. So it makes sense that the odd square wave only has sine constituents, and the even square wave is built from cosines.
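This even/odd bookkeeping is easy to play with numerically. Any function splits into an even part, (f(x)+f(-x))/2, and an odd part, (f(x)-f(-x))/2, and the two always add back to the original. A NumPy sketch (the shift value is an arbitrary pick for illustration):

```python
import numpy as np

def square(t):
    # Sine-phased square wave: +1 on (0, pi), -1 on (pi, 2*pi)
    return np.where(np.mod(t, 2 * np.pi) < np.pi, 1.0, -1.0)

shift = 0.4  # arbitrary shift: the result is neither even nor odd
t = np.linspace(-np.pi, np.pi, 1001)

f = square(t + shift)
f_even = (square(t + shift) + square(-t + shift)) / 2
f_odd = (square(t + shift) - square(-t + shift)) / 2

# The even part is the piece built from cosines, the odd part the piece
# built from sines, and together they reconstruct the shifted wave.
assert np.allclose(f_even + f_odd, f)
```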
If we do shift the square wave by some arbitrary amount it will no longer be an odd or even function, and it will take some mixture of sines and cosines to build it up. If you shift the sine-phased square wave to the left by some amount t’ and crunch the calculus, you end up with these coefficients:
$$a_n = \frac{2 \sin(n t’)}{n \pi} \left( 1-\cos(n\pi)\right)$$
$$b_n = \frac{2 \cos(n t’)}{n \pi} \left( 1-\cos(n\pi)\right)$$
The next plot shows the result.
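These closed forms are easy to sanity-check. In this NumPy sketch (the shift of 0.7 is an arbitrary pick), the formulas are compared against brute-force numerical evaluation of the coefficient integrals:

```python
import numpy as np

tp = 0.7  # the arbitrary left shift t'

def square(t):
    # Sine-phased square wave: +1 on (0, pi), -1 on (pi, 2*pi)
    return np.where(np.mod(t, 2 * np.pi) < np.pi, 1.0, -1.0)

t = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
dt = t[1] - t[0]
g = square(t + tp)  # the square wave shifted left by t'

max_err = 0.0
for n in range(1, 8):
    # Numerical coefficient integrals (Riemann sums over one period)
    a_num = np.sum(g * np.cos(n * t)) * dt / np.pi
    b_num = np.sum(g * np.sin(n * t)) * dt / np.pi
    # Closed-form coefficients from above
    a_frm = 2 * np.sin(n * tp) / (n * np.pi) * (1 - np.cos(n * np.pi))
    b_frm = 2 * np.cos(n * tp) / (n * np.pi) * (1 - np.cos(n * np.pi))
    max_err = max(max_err, abs(a_num - a_frm), abs(b_num - b_frm))

# max_err comes out tiny (limited only by the sampling of the jumps),
# so the closed forms agree with the integrals.
```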
When we superimpose the sum of just the cosine terms—the first 10 harmonics—and the sum of just the sine terms on the shifted square wave, a pattern emerges. Note in the plot below that the cosine terms (red) fill in the portion of the shifted square wave that is even, and the sine terms (blue) fill in the portion that is odd.
Now you may have noticed one other detail: regardless of the shift, the coefficients for even harmonics—not talking about even functions, rather simply even indexes (2, 4, 6, …)—are always zero. We’ll look at that in a future post and also look at applying this stuff to some real-world problems. Stay tuned.
Sun 28 April 2013 - 15:55:00 CDT
The math here is driven by what is by far my favorite interview question. In all the interviews that I’ve conducted as an engineer and engineering manager at various places, not one person has ever gotten it right. (So by reading this, you’re giving yourself a leg up.) It’s not a great interview question really, but I just find it interesting that this tidbit stuck with me, and no one else seems to remember it from engineering school.
First, a little background. A great deal of signal processing—and analysis of linear systems in general—relies on basis decomposition. This essentially means taking a signal and breaking it down into a sum of much simpler functions that are each easy to deal with. In Fourier analysis, the basis consists of the set of sinusoids of different frequencies. When adding together the constituent parts, you can vary the amplitude and phase (equivalently the complex amplitude) from one frequency to the next. This is what defines linear analysis/synthesis: basically you can only scale and add.
Further, the individual sines/cosines are called orthogonal—a term borrowed from vector math—because you cannot decompose a sine wave into a sum of sine waves of different frequencies:
$$A \sin(f_1 x) + B \sin(f_2 x) \ne \sin(f_3 x)$$
(For unique frequencies and non-zero A and B, of course.) It’s just like having three mutually perpendicular (orthogonal) vectors: in 3D space, no linear combination of a vector in the x-direction and one in the y-direction can result in a vector in the z-direction.
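Orthogonality is easy to check numerically, too. In this NumPy sketch, the inner product (the integral of the product over one period) vanishes for distinct integer frequencies and equals pi for matching ones:

```python
import numpy as np

# Orthogonality of the sine basis over one period [-pi, pi)
x = np.linspace(-np.pi, np.pi, 100000, endpoint=False)
dx = x[1] - x[0]

def inner(m, n):
    # Numerical integral of sin(m*x)*sin(n*x) over one period
    return np.sum(np.sin(m * x) * np.sin(n * x)) * dx

print(inner(1, 2))  # ~ 0  (orthogonal)
print(inner(2, 3))  # ~ 0  (orthogonal)
print(inner(2, 2))  # ~ pi (self inner product, used for normalization)
```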
If, in addition, the sine waves forming the basis are scaled appropriately relative to each other, they are said to be normalized. All of this makes an orthonormal basis.
Now with that basis—pun intended—if you take a periodic signal, you can find the amplitude of each constituent sine wave making up the signal by calculating the Fourier series. Wikipedia and countless other sources provide the formulas for using sines and cosines to analyze the periodic function f(x):
$$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}a_n \cos(nx) + b_n \sin(nx) $$
where:
$$a_n= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx) dx, \quad n \ge 0 $$
and:
$$b_n= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx) dx, \quad n \ge 1 $$
The coefficients a and b provide the amplitudes of sines (and cosines) with a discrete set of frequencies that are all multiples of the repetition rate of the original periodic signal. These are the harmonics of the original signal. (For non-periodic signals, you calculate the continuous Fourier Transform of the signal to get the amplitudes of a continuum of frequencies of sine waves. Likewise, you can use other basis functions such as the set of Laguerre-Gaussians if you’re dealing in 2D and circular symmetry.)
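As a quick sanity check (a NumPy sketch, evaluating the coefficient integrals by brute force rather than by hand), we can compute the coefficients for a 50% duty-cycle square wave and watch the advertised pattern appear:

```python
import numpy as np

# Sine-phased square wave, 50% duty cycle: +1 on (0, pi), -1 on (pi, 2*pi)
x = np.linspace(-np.pi, np.pi, 200000, endpoint=False)
dx = x[1] - x[0]
f = np.where(np.mod(x, 2 * np.pi) < np.pi, 1.0, -1.0)

a, b = [], []
for n in range(1, 7):
    a.append(np.sum(f * np.cos(n * x)) * dx / np.pi)  # cosine coefficients
    b.append(np.sum(f * np.sin(n * x)) * dx / np.pi)  # sine coefficients

print(np.round(a, 3))  # all ~0: no cosine content
print(np.round(b, 3))  # ~[1.273, 0, 0.424, 0, 0.255, 0]: b_n = 4/(n*pi), odd n only
```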
Again, the good people editing Wikipedia have some lovely graphics that you should look at. I will only show a couple basic things here that get us back to the original point. Let’s look at the first three odd, sine harmonics of a square wave. (As it happens, if the square wave crosses zero at the origin and has a 50% duty cycle it only has odd, sine harmonics.) Superimposing these on the square wave, we see that the fundamental (red) has the same period as the square wave itself, and it “fills in” most of the square wave like painting a wall with a big roller. The next two odd harmonics have three (green) and five (blue) times the frequency of the fundamental, and they serve to fill in the corners of the square wave like painting with a trim brush and a detail brush.
Also note that the amplitudes of the harmonics decrease with increasing frequency. Think of it this way: only the sharp transitions benefit from the detail provided by higher and higher frequencies, and these transitions are a relatively small part of the whole signal. And note too that the harmonics and the square wave all cross y=0 together, and the phases of the harmonics alternate in the center of a square pulse (the red and blue traces have peaks where the green trace has a valley).
Now instead of looking at the harmonics individually, we will build the square wave by adding harmonics together. We don’t truly get a square wave until we have added in infinitely many sine waves of increasing frequency. Until infinity—which is never reached—we end up with varying degrees of overshoot and undershoot at the transitions and ripple in the “flats”. The next figure shows the square wave (black) and the sums of the first five (blue), ten (red), and twenty (green) non-zero harmonics.
Now the answer to the interview question: Interestingly, the amplitude of the over/undershoot is constant at roughly 9% of the square wave’s amplitude as you add more and more constituent frequencies. (Rigorously it is not constant, but rather it approaches this finite limit.) This is the Gibbs Phenomenon.
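If you’d like to see the answer fall out of a computation, here’s a NumPy sketch that sums more and more odd harmonics and measures the overshoot; the fraction stalls near 9% of the peak-to-peak swing instead of shrinking:

```python
import numpy as np

def partial_sum(t, n_terms):
    # Sum of the first n_terms non-zero (odd) harmonics of a +/-1 square wave
    s = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1
        s += 4 / (n * np.pi) * np.sin(n * t)
    return s

t = np.linspace(0, np.pi, 200001)
for n_terms in (5, 20, 100, 500):
    overshoot = partial_sum(t, n_terms).max() - 1.0
    print(n_terms, overshoot / 2)  # fraction of the peak-to-peak swing

# The printed fraction settles near 0.0895 (about 9%) no matter
# how many terms we add: the Gibbs Phenomenon.
```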
Gibbs Phenomenon. Remember that.
Sun 24 March 2013 - 18:19:50 CDT
So it’s the time of year when I spend some—ok, maybe a bit too much—time watching some interesting sports. My Wisconsin Badgers in basketball who did well in the B1G tournament but not the NCAA; the Badgers in hockey who won the WCHA tournament and are off to the NCAA; and the start of the Formula1 racing season where I’m intrigued by the personalities, and I love the technology.
But I question the advertising.
Watching the Malaysian Grand Prix this weekend, there was a recurring ad from a motorcycle/ATV company touting their “driver-centered design.” Now this may be novel because their competition ignores the driver or their own previous designs did. I’m not sure. But why would anyone designing a motorcycle not center the design on the driver? The driver is fairly important in the equation. (And before you mention aerodynamics, don’t forget that a fast bike is something the driver wants.)
The same is true for any user of any product, and you need to pay attention when capturing requirements and working through the initial design and engineering of your next product. Starting with the user allows you to form a logical tree of requirements that are all rooted in things that provide value to your users or address their concerns or give them enjoyment.
Now, user requirements are certainly top-level requirements, but they are not the only things on the top line. Certainly if users don’t want your widget, you won’t sell very many. But if governments won’t let you sell it, you won’t sell very many either. You also need to consider agency requirements and fulfillment of standards—for the FCC, FDA, USDA, UL, CSA, CE, and on and on—from the start. (I’ve seen too many people try to fill out a mountain of paperwork at the end when things are already running late and running out of money.)
Finally, there is your brand. In order to compete, you need to differentiate your product from the other guy, and often the products you offer will have similar features and traits. Ask your marketing experts for specific guidelines for colors, sizes, logos, icons, labels, etc. at the start of your next project. Don’t wait until they see the prototype and ask you to change, well, everything.
So to get off on the right foot, build your top-level requirements with three categories in mind: your users, the regulatory agencies, and your brand…
…and branch out from there.
Subscribe to the blog to get automatic updates. And post comments or contact us for more information.