• (Not) Posting My Slides

    Greg Ingersoll

    Mon 09 September 2013 - 11:34:31 CDT

    #presentations #optics

    Last week I presented a paper at SPIE Optics+Photonics in San Diego. It was a good conference, and I think the people who attended my session were happy with it. (Also thanks to the Caltech and U of Arizona people for a good lunch discussion after the session.)

    Here I’m not going to get into the (ongoing) research, but I want to highlight a couple things I did with the slides that you might try in your own presentations.

    First, sorry (long-time friend of mine) Geoff, but I’m not going to post all of the slides. Remember, slides don’t stand alone as a document. In fact, the point of this presentation at the conference was to highlight salient sections of the paper I had already written. So, if anything, I would post the paper.

    Now, slide number 1: a plot of diffraction efficiency data generated in Scilab.

    A slide with a plot

    A few things to note here:

    • No bullets. (And yes, I understand the irony of including this point in a bulleted list.)
    • The plot is big on the page. No clutter around it. It uses all of the available space while leaving some margin so things don’t look too crowded.
    • There is a title on the page, but it’s short (and big). By itself, this probably doesn’t make sense, but in the context of the talk, the title serves as a reminder of where we are.
    • Lines have colors and textures. The colors have good contrast vs. white, and show up well on a projector. (Hint: don’t use cyan or yellow.) Also the colors are different enough from each other. Finally, the dashed lines help distinguish the curves if you have issues seeing color or if the plot is printed in black and white (as it is in the paper published in the conference proceedings).
    • There is a clear, consistent font. The basic Keynote template I started with uses Gill Sans Light. The smallest non-subscript text is roughly 28pt. And, importantly, the plot axes and legend use the same font as the rest of the presentation.

    That last point is not difficult to achieve, but it is time-consuming. Scilab generates good plots, but doesn’t have a lot of flexibility in formatting. Also, I’ve found that if you play around with dashed lines in Scilab, they look fine in the application but don’t export well. Fortunately, Scilab will export SVG vector graphic files that can be tailored in Adobe Illustrator (or Inkscape or similar) to change fonts, add annotations, change colors, and change dashes. (Caution: The initial vector structure you get will be a mess. But with a little effort, you can split/combine paths and group/ungroup entities to get to something that makes sense.)
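    If you would rather script those SVG tweaks than make every change by hand, remember the file is just XML. Here is a minimal Python sketch of the font swap; the function and file names are hypothetical, and real Scilab output may carry its font settings inside style attributes rather than a font-family attribute, so treat this as a starting point, not a drop-in tool:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep output free of ns0: prefixes

def set_svg_font(in_path, out_path, font_family):
    """Set the font-family attribute on every <text> element of an SVG file."""
    tree = ET.parse(in_path)
    for text_el in tree.iter("{%s}text" % SVG_NS):
        text_el.set("font-family", font_family)
    tree.write(out_path)
```

    Colors and dash patterns are attributes too (stroke, stroke-dasharray), so the same approach extends to them once you know where your exporter puts things.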

    Another trick is to keep consistent axis scaling in Scilab as much as makes sense:

    --> plot(...);
    --> a = gca(); // get current axis
    --> a.data_bounds = [xmin, ymin; xmax, ymax];
    --> a.tight_limits = "on"; 
    

    This way, you can reuse axis formatting–line weight, color, font–across multiple plots in your presentation. (It also helps the audience understand relative scales between data sets.) And finally, keeping everything in a vector format as long as possible maintains image fidelity when you resize in the presentation slide.

    Now slide number 2 (and 3): a full-bleed photograph of part of the lab setup.

    Full bleed photo with context

    Large photos are great, obviously, for illustrating things that are difficult to describe (like an experimental lab setup). The first photo has the room lights on, and when this slide is up, I can describe the position of the holographic plate on the rotation stage, the location of the laser source, etc. But it is difficult to see the two diffraction spots on the screen. However, when I include another photo taken from the same position with the room lights off, you can easily see the diffraction spots.

    ...and the same scene with contrast

    You can’t see the apparatus well anymore in the second picture, but that doesn’t matter because context was established in the previous slide. When the slideware–Keynote in this case–dissolves between the photos, the connection between the two is obvious, and that can be worked into the description during the fade: “The output spots are easier to see once we turn the room lights off.”

    (Also, don’t forget to practice with technology interruptions. My presenter display–showing the next slide and a timer–decided not to work during the presentation. My laptop display was just mirroring the projector, and there wasn’t time to fix it. Things still went reasonably well.)

  • What Did You Break?

    Greg Ingersoll

    Tue 03 September 2013 - 14:11:10 CDT

    #system_engineering #requirements #software

    I recently worked on a consulting project where I was brought in to carry some embedded software across the finish line. There was more work than they originally thought–not an uncommon sentiment–and the timeline was growing increasingly critical. (When isn’t it?)

    The project seemed straightforward on the surface. “We have a legacy machine and legacy code base. We need to update the processor because we can’t get the old one anymore. So we also need to update the firmware because a lot of it is hardware dependent.” No sweat. No surprises here. The code is in C, so it’s reasonably portable. Find the hardware-dependent parts, extract and modularize, port, verify.

    Except…

    There were no requirements. And there was no version control implemented (unless you consider dated, initialed comments version control–hint: it’s not). But worst of all, another software engineer with the best of intentions decided that the legacy code was poorly organized and needed to be completely rearchitected.

    First, he was right. Those hardware-dependent parts: everywhere. No modularity. And what modularity there was depended heavily on Ctrl-C/Ctrl-V: copy and paste running rampant. If good code is DRY (Don’t Repeat Yourself–hint: it is), this code was a fire hose trained on the Pacific Ocean. Global variables ruled the day. Comments were needed because object naming was nigh unintelligible. But the existing comments–when not used as a poor substitute for version control–were not kept up to date, so they did more harm than good. (Pro-tip: please stop bothering with comments unless you’re writing in assembly; I want to read your code, not your comments.)

    But second, he was dead wrong. Remember, no requirements. The code base had no associated functional or unit tests. The whole system only had an end-of-line test plan. The legacy machine was the design documentation, and the fact that customers liked it and wanted the revision to act the same was the pass/fail criterion.

    You are not–repeat not–allowed to refactor code in this situation.

    You don’t know the impacts. You can’t test the changes. You have no idea what you just broke. This is a stopgap project to keep the assembly line going until the next gen is released. This is not your Mona Lisa or Empire State Building or Saturn V.

    There is one course of action: Click Compile. Fix hardware- or toolchain-related syntax error. Repeat.

    The end result isn’t pretty and it’s not maintainable. But neither was the starting point. Step away from the editor (i.e. vi), the schematic/layout suite, the solid modeling tool, the #2 pencil, the screwdriver…whatever. The goal is to ship an almost exact copy. It doesn’t have to be pretty; your customer doesn’t care. It has to ship on time and on budget (your customer definitely cares about this).

    All that being said, though, please don’t run your development this way. Don’t get into this situation. Write requirements and tests. Get early buy-in. Use automated testing for software–it’s just more software–so you can refactor and optimize to your heart’s content. Documentation costs money, sure–useful documentation, anyway–but it saves more than it costs.
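    And if you inherit a no-requirements code base, “more software” can begin as characterization tests: record what the legacy code does today, then refactor against that record. A rough Python sketch of the idea (legacy_scale is a made-up stand-in, not anyone’s real code):

```python
def legacy_scale(raw):
    """Stand-in for an undocumented legacy routine (hypothetical)."""
    return (raw * 3 + 7) // 2

def test_legacy_scale_characterization():
    # Expected values recorded by running the existing build, not derived
    # from requirements (there are none). If a refactor changes any of
    # these, you just learned exactly what you broke.
    recorded = {0: 3, 1: 5, 10: 18, 100: 153}
    for raw, expected in recorded.items():
        assert legacy_scale(raw) == expected
```

    The recorded values aren’t “right” in any requirements sense; they are simply what ships today, which is the only pass/fail criterion these projects actually have.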

    Know where you’re going before you start walking, and you’ll have a maintainable, efficient system that will be more profitable in the long run.

  • Do Be Relevant

    Greg Ingersoll

    Mon 29 July 2013 - 10:13:53 CDT

    #business

    Following up from my earlier post covering resources for building effective presentations, I thought I’d throw a few more out there. These are not about presentations so much as about general effectiveness. I’ve read all of these–most multiple times–and I find them eye-opening and engaging and helpful. So if you’re a manager, an employee, an academic, a freelancer, whatever, you’ll probably get something good out of this list.

    1. Linchpin by Seth Godin. Are you indispensable? It’s a good place to be, and you can get there. But also be careful. Being the only person in your organization who knows your product or technology doesn’t mean you can’t be replaced. Like most of Godin’s work, this is an interesting read about how you can be sustainably indispensable and relevant.
    2. Drive by Daniel Pink. Autonomy, Mastery, and Purpose. You want the freedom to be creative and the responsibility that comes with it. You want to be great at what you do. You want what you do to mean something. These are the things that get you out of bed in the morning. Are you getting them?
    3. The Dip by Seth Godin. A short and sweet gem from Godin—subscribe to his blog while you’re at it. How do you know when your idea is draining your resources? And how do you know if you should push through or abandon? (Over the course of my Ph.D. research, I’ve read this several times.)
    4. The Valve Handbook for New Employees. An interesting take on organizing a relatively small company of highly-motivated people. (Side note: why do you have people who are not highly-motivated?)

    …and a few more quick ones:

    1. The Non-Designer’s Design Book and The PC/Mac is not a Typewriter by Robin Williams.
    2. Business Model Generation by Alexander Osterwalder and Yves Pigneur.
    3. Escape from Cubicle Nation by Pam Slim.
    4. The Personal MBA by Josh Kaufman.

  • Swedish Origami

    Greg Ingersoll

    Thu 18 July 2013 - 15:58:13 CDT

    #signal_processing #mathematics

    I recently posted a few articles about Fourier decomposition, and I often get asked–and ask myself–“How do I calculate that again?” using a tool like Matlab or (my preference) its open-source cousin, Scilab. Just how we change a data set sampled in time to one sampled in frequency is straightforward. We run the Fast Fourier Transform algorithm:

    --> y=fft(x);
    

    Easy. But what if you want to actually understand the data and do something with it? The problems with not knowing how to tailor the output of this calculation are best illustrated with some plots. In Scilab, let’s generate a basic audio signal with two tones sampled at 44.1kHz. First we create an axis consisting of 1024 evenly spaced points in time. A sample rate of 44.1kHz–the CD audio rate–means the sample times are separated by a bit less than 23 microseconds. (Note: this example code works with Scilab 5.4.0 on OS X; your mileage may vary.)

    --> N = 1024;
    --> fs = 44100;
    --> t = (1/fs)*(0:1:(N-1));
    

    Then we create two sine waves with frequencies of 5000Hz and 10000Hz and different amplitudes (1 and 2), add them together, and plot the result.

    --> f1 = 10000;
    --> a1 = 1;
    --> y1 = a1*sin(2*%pi*f1*t);
    --> f2 = 5000;
    --> a2 = 2;
    --> y2 = a2*sin(2*%pi*f2*t);
    --> y = y1 + y2;
    --> plot(t, y);
    

    After zooming in on the plot, we get something that looks like this:

    The Input Signal

    Taking the Fourier transform and plotting:

    --> Y = fft(y);
    --> plot(Y);
    

    gives this:

    The FFT Samples

    which looks interesting, but it’s not terribly useful. First, we don’t have a frequency axis, just numbered samples. Second, we expect a delta function–a spike (technical term)–for each sine wave, but we see four spikes instead of two, plus some negative-going add-ons. And the amplitudes are all messed up; the spikes are too tall.

    Second thing first. The result of the FFT is a series of complex numbers, and if you give that to the plot function, Scilab plots the real part by default. In most applications, we’re not interested in real and imaginary parts but rather amplitude and phase. Antennas and image sensors (eyes) and microphones (ears) and speakers and… often only care about intensity, or amplitude-squared; even the phase information is ignored. So for now, let’s plot the amplitude of the FFT using the complex absolute value function.

    At the same time, let’s add a proper frequency axis to the plot. The nature of the FFT is such that the 0th sample is DC or a frequency of 0, and the highest frequency is just less than the sampling frequency. What does “just less” mean? Well, the sample spacing is such that if you extended the sequence by one sample, the Nth point would be the sampling frequency (but because we’re zero-referenced, the (N-1)th point is the last one). An easy way to handle this is:

    --> f = linspace(0, fs, N+1); // one sample too many
    --> f = f(1:$-1);             // drop the last sample
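    As an aside for anyone following along in Python rather than Scilab (an assumption on my part), NumPy builds the same axis, and it even ships a helper that does the bookkeeping for you:

```python
import numpy as np

N, fs = 1024, 44100
# Same construction as the Scilab linspace trick: 0, fs/N, 2*fs/N, ..., fs*(N-1)/N
f = np.arange(N) * fs / N

# np.fft.fftfreq builds the axis in FFT order, with the upper half
# already labeled as negative frequencies:
f_alt = np.fft.fftfreq(N, d=1/fs)
```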
    

    The final trick is to divide by N to scale the amplitude correctly–I always forget that bit, so now I’ve written it down. Now if we plot the scaled amplitude of the FFT vs. our frequency axis

    --> plot(f, abs(Y)/N);
    

    we get:

    FFT Amplitude vs. frequency

    Hmmm, still four spikes instead of two. And we started with amplitudes of 1 and 2, not 0.5 and 1, so the scaling is still wrong. Or is it?

    There are two ways to look at this. First, from a Fourier transform table, the transform of a sine function of frequency f is a delta function at f and one at -f each with half the amplitude. So if we move the right half of our plot to the left and shift the axis to include negative frequencies:

    --> plot(f-(fs/2), fftshift(abs(Y)/N));
    

    the plot looks like this:

    FFT Amplitude vs. frequency centered at DC

    Now we have spikes of amplitude 1–half the original 2–at 5000Hz and -5000Hz, and spikes of amplitude 0.5 at 10000Hz and -10000Hz, just like we expect. (Note that the amplitudes are slightly less than these values, and the spikes flare at their bases. This is a topic for another time, but it’s related to windowing with a rectangle…or convolving with a sinc.)
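    If you want to check those peak locations and amplitudes numerically, here is the two-tone example redone in Python/NumPy. This is my translation of the Scilab session above, not the original code:

```python
import numpy as np

N, fs = 1024, 44100
t = np.arange(N) / fs
y = 1 * np.sin(2 * np.pi * 10000 * t) + 2 * np.sin(2 * np.pi * 5000 * t)

amp = np.abs(np.fft.fft(y)) / N      # amplitude, scaled by N as in the text
f = np.arange(N) * fs / N            # unshifted frequency axis

# The tallest peak in the positive-frequency half should sit within one
# bin of 5000 Hz with amplitude near 1 (half the original 2); it comes up
# a little short because 5000 Hz does not land exactly on a bin.
peak_bin = int(np.argmax(amp[: N // 2]))
peak_freq = f[peak_bin]
```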

    The other way of looking at the result is to re-examine the unshifted plot and note that the high-frequency spikes are at 39100Hz and 34100Hz. Not coincidentally, these frequencies are the sampling frequency (44100Hz) minus the frequencies of interest (e.g. 44100-5000=39100). The FFT plot is a mirror image folded–origami, get it?–at one-half the sampling frequency. Half the sampling frequency is often called the Nyquist frequency (Harry Nyquist was Swedish, and with folding we’ve resolved this post’s title). The theory says that this is the highest frequency that can be reconstructed from the sampled signal without aliasing. This is also a topic for another time, but here’s a taste.

    Let’s keep the 44100Hz sampling rate, but feed it a 30000Hz signal.

    --> z = sin(2*%pi*30000*t);
    --> plot(f-(fs/2), fftshift(abs(fft(z))/N));
    

    Aliased Signal vs. frequency centered at DC

    Again we see the characteristic spikes at positive and negative frequencies, but the frequency is not 30000Hz. In fact, it is 44100-30000=14100Hz. So if you record a 30kHz tone at a 44.1kHz rate and play it back, the signal will manifest itself as a tone with a bit less than half the original frequency!
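    The folding arithmetic is easy to confirm numerically–again in Python/NumPy as a translation, not the original Scilab:

```python
import numpy as np

N, fs = 1024, 44100
t = np.arange(N) / fs
z = np.sin(2 * np.pi * 30000 * t)    # 30 kHz tone, above the 22.05 kHz Nyquist limit

amp = np.abs(np.fft.fft(z)) / N
f = np.arange(N) * fs / N

# The positive-frequency peak folds down to fs - 30000 = 14100 Hz.
alias_freq = f[int(np.argmax(amp[: N // 2]))]
```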

    This is fundamental theory that is applied regularly to countless systems: audio, video and imaging, communications, biometric sensors, … While the math and the tools are relatively simple, reality makes the engineering more difficult. Keep an eye on this blog for more.

  • Don't Be Boring

    Greg Ingersoll

    Tue 25 June 2013 - 14:45:00 CDT

    #meetings #presentations #business

    I have a conference coming up, so I’ve been thinking again about what makes a good presentation. This is a difficult question to answer, and it’s probably easier if you think about the converse at the same time: what makes a bad presentation? Here are a few concepts I’ve picked up over the years.

    1. Don’t Be Boring. I’m stealing this from Ben Orenstein of thoughtbot and host of their podcast. It’s pretty simple. To transfer ideas and information, the audience needs to be interested and engaged.
    2. A Presentation is not a Document. If you can convey what you need to convey in a document, then format the information as a document and email it to me; no scheduling necessary. Also, don’t hand out your slides beforehand…or ever. Supplementary materials should be sent after the presentation so people can pay attention during it. And contrary to popular belief, your slides should not stand alone. They should enhance what you are saying. If your slides do stand alone, just format them as a document and cancel the meeting (see also #4).
    3. Clipart. No. Just no.
    4. Bulleted Lists. What is the maximum number of bullets per slide? How about zero. I didn’t go to your presentation to watch you read, and I can’t listen to you and read the screen at the same time. If you’re just going to read, see #2 (and beware of infinite referential loops). Also regarding (not) reading: when you practice, have a friend randomly turn off the display. Can you keep going seamlessly?
    5. Slides are Cheap. Long, long gone are the days when “creating slides” literally meant creating semi-transparent film for an overhead or slide projector. You don’t need to try to cram everything onto 3 slides. You should not (and cannot) gauge the length of your presentation by counting slides; only by practicing delivering it (tip: use a video camera; you have one on your phone). And a bonus tip for managers: never say, “put together a slide for the company meeting.” Instead, describe what information needs to come across, and provide a target duration.

    Sometimes you can get over a creative block and ultimately create a more memorable presentation by enforcing limits on yourself. Consider these two formats. (Several cities have events organized around these.)

    1. Pecha Kucha. Twenty slides displayed for twenty seconds each.
    2. Ignite. Twenty slides displayed for fifteen seconds each, auto-advancing.

    Here are some valuable resources:

    1. Presentation Zen by Garr Reynolds. Also the author of Presentation Zen Design and The Naked Presenter. Reynolds is a strong proponent of using full-bleed photographs to help your audience remember key ideas. Beautiful imagery is memorable and enjoyable, and it’s not as expensive as you think it is. (Looking unprofessional and/or being ignored is expensive.) Also, design your presentations in analog; use whiteboards/post-its and get away from the laptop.
    2. slide:ology by Nancy Duarte. Also the author of Resonate. Both excellent books covering the design of a presentation in terms of effectively conveying an idea as well as designing its colors, photos, typography, and (tasteful) animation.
    3. Read This Before Our Next Meeting (a.k.a. The Modern Meeting Standard) by Al Pittampalli. In short, make your meetings effective, and don’t have a meeting if you don’t need one. This works for all kinds of meetings, not just for presentations.
    4. Made to Stick by Chip and Dan Heath. This includes a wealth of ideas about ideas. Specifically “why some ideas survive and others die.” A good read, and a good resource for developing any idea, not just for presentations.
    5. Enchantment by Guy Kawasaki, author of many other excellent books as well. The Art of the Start, in fact, is the source of another favorite tip: the 10-20-30 rule. To help your pitch land: ten slides, twenty minutes, thirty-point font.

    And for the scientists and researchers out there who need to present data, look into the excellent books by Edward R. Tufte. These are great, but dense and not inexpensive. Consider borrowing a copy to start or attend his one-day course; it’s really good.

    Finally, some comic relief:
