Posts Tagged #optics

  • (Not) Posting My Slides

    Greg Ingersoll

    Mon 09 September 2013 - 11:34:31 CDT

    #presentations #optics

    Last week I presented a paper at SPIE Optics+Photonics in San Diego. It was a good conference, and I think the people who attended my session were happy with it. (Also, thanks to the Caltech and U of Arizona people for a good lunch discussion after the session.)

    Here I’m not going to get into the (ongoing) research, but I want to highlight a couple things I did with the slides that you might try in your own presentations.

    First, sorry (long-time friend of mine) Geoff, but I’m not going to post all of the slides. Remember, slides don’t stand alone as a document. In fact, the point of this presentation at the conference was to highlight salient sections of the paper I had already written. So, if anything, I would post the paper.

    Now, slide number 1: a plot of diffraction efficiency data generated in Scilab.

    A slide with a plot

    A few things to note here:

    • No bullets. (And yes, I understand the irony of including this point in a bulleted list.)
    • The plot is big on the page. No clutter around it. It uses all of the available space while leaving some margin so things don’t look too crowded.
    • There is a title on the page, but it’s short (and big). By itself, this probably doesn’t make sense, but in the context of the talk, the title serves as a reminder of where we are.
    • Lines have colors and textures. The colors have good contrast against white and show up well on a projector. (Hint: don’t use cyan or yellow.) Also, the colors are different enough from each other. Finally, the dashed lines help distinguish the curves if you have trouble seeing color or if the plot is printed in black and white (as it is in the paper published in the conference proceedings).
    • There is a clear, consistent font. The basic Keynote template I started with uses Gill Sans Light. The smallest non-subscript text is roughly 28pt. And, importantly, the plot axes and legend use the same font as the rest of the presentation.

    That last point is not difficult to achieve, but it is time-consuming. Scilab generates good plots, but doesn’t have a lot of flexibility in formatting. Also, I’ve found that if you play around with dashed lines in Scilab, they look fine in the application but don’t export well. Fortunately, Scilab will export SVG vector graphic files that can be tailored in Adobe Illustrator (or Inkscape or similar) to change fonts, add annotations, change colors, and change dashes. (Caution: The initial vector structure you get will be a mess. But with a little effort, you can split/combine paths and group/ungroup entities to get to something that makes sense.)

    Another trick is to keep consistent axis scaling in Scilab as much as makes sense:

    --> plot(...);
    --> a = gca(); // get the handle of the current axes
    --> a.data_bounds = [xmin, ymin; xmax, ymax]; // same bounds for every plot
    --> a.tight_limits = "on"; // use the bounds exactly; don't round them outward
    

    This way, you can reuse axis formatting (line weight, color, font) across multiple plots in your presentation. (It also helps the audience understand relative scales between data sets.) And finally, keeping everything in a vector format as long as possible maintains image fidelity when you resize in the presentation slide.

    Now slide number 2 (and 3): a full-bleed photograph of part of the lab setup.

    Full bleed photo with context

    Large photos are great, obviously, for illustrating things that are difficult to describe (like an experimental lab setup). The first photo has the room lights on, and when this slide is up, I can describe the position of the holographic plate on the rotation stage, the location of the laser source, etc. But it is difficult to see the two diffraction spots on the screen. However, when I include another photo taken from the same position with the room lights off, you can easily see the diffraction spots.

    ...and the same scene with contrast

    You can’t see the apparatus well anymore in the second picture, but that doesn’t matter because context was established in the previous slide. When the slideware (Keynote in this case) dissolves between the photos, the connection between the two is obvious, and that can be worked into the description during the fade: “The output spots are easier to see once we turn the room lights off.”

    (Also, don’t forget to practice with technology interruptions. My presenter display, showing the next slide and a timer, decided not to work during the presentation. My laptop display was just mirroring the projector, and there wasn’t time to fix it. Things still went reasonably well.)

  • Dietary Intake

    Greg Ingersoll

    Mon 11 March 2013 - 14:10:00 CDT

    #requirements #optics #physics

    I was reviewing customer requirements on a recent project, and we came across something unintentionally comical. There was a requirement for illumination intensity, given as power per unit area, in this case microwatts per square centimeter (uW/cm^2). The problem was that the m was missing, leaving uW/c^2.

    6 ounces of cake over 10 minutes is roughly 0.28g/s.

    Now, c is not a unit, but in the context of light it represents the speed of light, which has units of meters per second. Just for fun, I plugged this into Wolfram Alpha as “500uW/c^2”. This reduced to units of kilograms per second, which the tool interpreted as “recommended dietary intake”!
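
    The dimensional analysis is easy to check in a few lines of Python (using the 500 uW figure from the typo):

```python
# Dimensional analysis of the typo: a watt is kg*m^2/s^3, so dividing by
# c^2 (units of m^2/s^2) leaves kg/s -- a mass flow rate, i.e. "dietary intake".
c = 299_792_458.0        # speed of light, m/s
power = 500e-6           # 500 microwatts, in watts

mass_rate = power / c**2  # kg/s
print(f"{mass_rate:.3e} kg/s")  # a spectacularly strict diet
```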

    This instance was an obvious typo, but it does highlight the need to be careful when authoring and reviewing requirements. Missing a digit, for example, could have far-reaching consequences for your project. And completely wrong units can cause your Mars probe to crash. Better to catch these things early.

    And while we’re on the subject, what do you use for requirements management? I’m guessing Word/Excel is a popular answer. Leave a comment below. This is a topic I will be covering in future posts.

  • Shoreline Waves

    Greg Ingersoll

    Tue 12 February 2013 - 19:05:00 CST

    #optics #physics

    In a previous post I mentioned that an optical plane wave—or any plane wave for that matter—when viewed in the frequency domain has a spatial frequency that depends on its arrival angle. Oftentimes, the data you see in the frequency domain is referred to as a spectrum, and plane waves arriving from different angles form an Angular Plane Wave Spectrum.

    Waves near Jacksonville Beach, FL.

    Looking at unit conversion: going from the time domain to the corresponding frequency domain converts seconds to 1/seconds (or cycles per second, or Hertz). Space, measured in millimeters, goes to 1/mm (or sometimes line pairs per mm). But how do we go from an angle in radians to 1/mm?

    First, consider a shoreline with waves coming in. The waves consist of a series of peaks and valleys with some spacing—the wave period—and they are traveling along a line that intersects the shoreline at some angle. If you were to freeze time and walk down the shoreline, you would also see peaks and valleys, and the spacing here is what defines the spatial frequency of the wave in the plane of the shoreline. (In optical terms, the “shoreline” could be any plane of interest in the system: an aperture, a lens, a sensor, etc.)

    A traveling wave. Note how when the wave is sliced at an angle, the cross section is also a sine wave but with a lower frequency.

    The simplest case, of course, contradicts what I just said. If the waves are coming in directly at the shore, as you walk along, the amplitude does not have peaks and valleys. It is constant, or has zero frequency (or is DC if you are an electrical engineer). As the angle of incidence increases, peaks and valleys appear, but they are spaced very far apart from each other. That is, they have a low frequency, and the frequency increases as the angle increases. (In the limit, the angle is 90 degrees, the wave is traveling parallel to the shore, and the apparent frequency equals the wave’s own frequency.)

    Peaks and valleys of a plane wave crossing the “shore” at an angle. The double arrow denotes the period of the traveling wave. The spacing between two peaks along the shore gives the wave period—one over the frequency—in that plane.

    Finally, a little trigonometry will give us the frequency of the wave in the angular plane wave spectrum as a function of its angle and period. Look at the same figure with a superimposed triangle. The wave (with period lambda) arrives at an angle, theta, relative to the dashed line which is perpendicular to the plane of interest (i.e. theta is relative to the plane normal). The spacing between peaks in the plane of interest is the hypotenuse of the triangle, and the length of this is the inverse of the spatial frequency, f, in the plane.

    The same figure with a superimposed triangle. (Get that trig identity table out.)

    So f is given by:

    $$f=\frac{\sin(\theta)}{\lambda}$$


    This has units of 1/length, which is expected for spatial frequency, and the sine function takes care of the angle units.
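
    As a quick numeric check of the formula (the wavelength and angle below are illustrative values, not from a particular system):

```python
import math

# f = sin(theta) / lambda: spatial frequency of a plane wave crossing a
# plane of interest at angle theta measured from the plane normal.
wavelength = 0.5e-3         # wave period, in mm (e.g. 500 nm light)
theta = math.radians(30.0)  # arrival angle

f = math.sin(theta) / wavelength  # cycles per mm
print(round(f, 6), "cycles/mm")

# At theta = 0 (waves coming straight at the "shore") the frequency
# collapses to zero -- the DC case described above.
print(math.sin(0.0) / wavelength)  # 0.0
```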

  • Resolving Eye Shapes

    Greg Ingersoll

    Thu 31 January 2013 - 12:25:00 CST

    #optics

    If you have ever looked closely at the eyes of certain animals, you may have noticed something about the shapes of their pupils. Some animals, say humans, have round pupils while other animals, cats for instance, have tall thin pupils. And animals like deer and goats have pupils that are wider than they are tall. What are the advantages of each? One way of looking at the answer involves a quick diversion into spatial frequency analysis and properties of Fourier transforms.

    Domestic cats have tall narrow pupils.

    Consider light coming from two distant stars—pinpricks in the sky. The wavefront of the light expands spherically from each star, and by the time it reaches your eye, the sphere is so big that it seems basically flat, or planar, instead of curved. So what we have is a plane wave from each star, and because the stars are in different places, the waves are traveling at slightly different angles. Having different angles is equivalent to having different spatial frequencies (the reason is straightforward, but is a whole post by itself).

    The pupils of deer are wider than they are tall.

    When these two plane waves pass through an aperture like the anatomical pupil of your eye, the wavefronts are clipped. Immediately after the aperture, instead of being infinitely wide plane waves, they now stop at the edges of the aperture. These hard edges cause the light to diffract which results in additional frequency content after the aperture. Looking at this in the (spatial) frequency domain can shed some additional light.

    In the plane of the aperture, a plane wave arriving at some angle appears as a sine wave when plotting amplitude vs. location. A table of Fourier transforms shows that a sine wave transforms into two delta functions—infinitely thin spikes sitting at particular frequencies. One delta is at the positive frequency and one is at the negative frequency. So having two plane waves at different angles leads to four delta functions in the frequency domain.

    Left: two sinusoids of different frequencies (5 and 8 cycles per unit length) in the spatial domain. Right: the resulting delta functions in the spatial frequency domain.

    The simplified aperture is just a rectangle function. This function equals 1 over the width of the aperture and zero everywhere else. In the frequency domain, this becomes a sinc function

    $$\mathrm{sinc}(x)=\frac{\sin(\pi x)}{\pi x}$$

    which is infinitely wide in frequency. And importantly, the horizontal scaling of the sinc function in frequency is inversely proportional to the width of the rectangle function: wide rect, narrow sinc lobes and vice versa.

    Left: the rectangular aperture. Right: Its frequency content, the sinc function.
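
    The inverse scaling is easy to verify numerically. In this sketch (aperture widths are arbitrary examples), the transform of a rect of width w is w·sinc(w·f), so the first zero of the main lobe sits at f = 1/w:

```python
import numpy as np

# Fourier transform of a rect of width w, evaluated at frequency f.
# numpy's sinc already includes the pi: np.sinc(x) = sin(pi*x)/(pi*x).
def rect_spectrum(f, w):
    return w * np.sinc(w * f)

# Peak value equals the aperture width; the first zero is at f = 1/w,
# so a wider rect gives narrower sinc lobes and vice versa.
print(rect_spectrum(0.0, 1.5))        # 1.5, the main-lobe peak
print(rect_spectrum(2.0, 0.5))        # ~0: first zero for w = 0.5
print(rect_spectrum(1.0 / 1.5, 1.5))  # ~0: first zero for the wider w = 1.5
```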

    Now, passing the plane waves through the aperture is mathematically equivalent to multiplying the plane waves by the aperture function. While addition in space is also addition in frequency, multiplication in the spatial domain leads to convolution in the frequency domain and vice versa. (Convolution: so that’s where that name came from.) So what do we get when we convolve a sinc function with two delta functions?

    As it happens, convolving anything with a delta function simply gives you a copy of that anything, shifted to the location of the delta. So convolving the sinc with four deltas gives us four copies of the sinc in the frequency domain. Instead of two discrete frequencies (four if you count positive and negative), we still have a fair amount of energy at those original frequencies—the sinc functions are centered there—but we also have energy splashing into infinitely many other frequencies. This is the result of diffraction from the hard aperture.

    The frequency domain result of the aperture clipping the two plane waves. Red and blue are the individual waves; black is the sum.
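
    The delta shift property is easy to see with discrete sequences. A toy sketch (the sample values are arbitrary):

```python
import numpy as np

# Convolving a sequence with a unit impulse copies the sequence to the
# impulse's location -- the discrete analogue of the delta shift property.
signal = np.array([1.0, 2.0, 3.0])   # arbitrary "anything"
impulse = np.zeros(5)
impulse[2] = 1.0                     # delta located at index 2

shifted = np.convolve(signal, impulse)
print(shifted)  # [0. 0. 1. 2. 3. 0. 0.] -- the signal, shifted by 2
```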

    Now let’s move the original distant points of light closer together. The difference between the angles of the plane waves, and thus the frequency separation of the delta functions, decreases. But now, after the convolution, the sinc functions’ main lobes have collided and blended together to the point where you can no longer tell there are two contributing peaks. You can no longer distinguish, or resolve, the sources as separate points. This is essentially the definition of resolution in an optical system. (Often the points are considered minimally resolved when the peak of one sinc function coincides with the first zero of the other.)

    Frequencies (angles) are too close together to be resolved. (Frequencies are now 5 and 6 cycles per unit length.)

    But wait! What if the aperture gets wider? This causes the sinc functions’ lobes to get narrower, meaning that the light sources can be this close together and still be distinguishable. A larger aperture provides higher resolution.

    The same frequencies as the previous plot, but increasing the aperture width has narrowed the sinc function, and the spots are resolvable again. (Aperture width increased from 0.5 units to 1.5 units.)
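
    The whole argument can be sketched numerically with the numbers used here: frequencies of 5 and 6 cycles per unit, and aperture widths of 0.5 and 1.5 units. The resolvability test below (response midway between the peaks dipping below the response at a peak) is a crude stand-in for the peak-on-first-zero criterion:

```python
import numpy as np

# Each delta at frequency f0, clipped by a rect aperture of width w,
# becomes a sinc: w * sinc(w * (f - f0)). numpy's sinc includes the pi.
def resolvable(w, f1=5.0, f2=6.0):
    def response(f):
        return abs(w * np.sinc(w * (f - f1)) + w * np.sinc(w * (f - f2)))
    # Resolvable (loosely) if the combined response dips between the peaks.
    return response((f1 + f2) / 2) < response(f1)

print(resolvable(0.5))  # False: narrow aperture, the peaks blend together
print(resolvable(1.5))  # True: wider aperture, the peaks are distinct
```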

    So back to the animals. A cat’s eye is tall and narrow. The vertical dimension of the pupil is larger, giving the cat higher-resolution vision vertically than it has horizontally. This is useful for a cat climbing trees and looking for prey above and below. A goat or deer has the opposite orientation. Its pupil is wider than it is tall, giving it higher-resolution vision horizontally, useful for a grazing animal watching for predators on open plains. And humans—and many other animals—have round pupils, giving essentially equivalent resolution vertically and horizontally. (Note that light exclusion also factors strongly into the function of animals’ eyes. For instance, cats often operate in the dark, so their retinas are quite sensitive. Their slit pupils can close down quite far during the day, significantly reducing the amount of light getting in.)

    The fundamental mathematics here—space/time vs. frequency domains, Fourier transforms, convolution—can be applied to countless problems in science and engineering. Be sure to subscribe to the blog to get new items as they come out, and leave comments and questions.

Get Updates

Subscribe to the blog to get automatic updates. And post comments or contact us for more information.