Discrete Fourier transforms are essential for simulating light propagation in telescopes and coronagraphs. To cut the computational burden while maintaining high spatial resolution, we can exploit well-established symmetry properties of Fourier transforms. For example, if we start with a real-valued electric field in the telescope pupil, with even symmetry about the y-axis, we should get an image-plane electric field with even symmetry about the y-axis. Or so I thought, until a sanity check failed…

Here’s where my assumptions broke down. The above plot is the real part of the image formed by an ideal shaped telescope pupil. I computed this with a 2-D FFT in MATLAB and shifted the result so that the zero-phase pivot is at the array center. Visually, it appears symmetric about the vertical meridian, so that $E(-x, y) = E(x, y)$. And that is indeed what we’d expect, transforming from a flat (zero-phase) pupil plane wavefront with the same even symmetry.

So when I take the right half of the image, flip it, and subtract it from the left half, I should get a wall of zeroes. Instead, disparities appear:

We have disagreements as high as 2000, corresponding to almost 1% of the peak field amplitude in the image plane. Initially, I was very puzzled by what might cause such large errors. Was it an error in my pupil definition, or some array offset creeping in? Did I misunderstand the rules of Fourier transforms after all? After some tests, I figured out that each image-plane pixel was accumulating a large number of tiny floating-point roundoff errors.
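The flip-and-subtract sanity check can be sketched in NumPy. This is a toy stand-in for my MATLAB setup, not the original computation: the array is smaller and the circular pupil here is a hypothetical example.

```python
import numpy as np

N = 512  # smaller than the 3000 x 3000 case discussed here, just for illustration
y, x = np.indices((N, N)) - N // 2
# Hypothetical circular pupil: real-valued, with even symmetry about both axes
pupil = ((x**2 + y**2) < (N // 4)**2).astype(float)

# 2-D FFT, shifted so the zero-phase pivot sits at the array center
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
re = field.real

# Mirror the right half about the center column and subtract from the left half
left = re[:, 1:N//2]                 # columns left of center
right = re[:, N//2+1:][:, ::-1]      # columns right of center, flipped
residual = np.abs(left - right)
print("max disparity:", residual.max(), "  peak amplitude:", np.abs(re).max())
```

Mathematically the residual should be a wall of zeroes; whatever is printed is entirely floating-point error.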

Consider the textbook formula for the two-dimensional discrete Fourier transform:

$$E(l, k) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} A(n, m)\, e^{-2\pi i\,(n l + m k)/N}$$

There are countless applications, but to an astronomer it describes the electric field diffraction pattern in the image plane of a telescope when the incoming starlight is monochromatic, unpolarized, and the optics are perfect. We can break this into inner and outer sums:

$$E(l, k) = \sum_{m=0}^{N-1} \left[ \sum_{n=0}^{N-1} A(n, m)\, e^{-2\pi i\, n l / N} \right] e^{-2\pi i\, m k / N} = \sum_{m=0}^{N-1} S(l, m)\, e^{-2\pi i\, m k / N}$$

In our case, the telescope pupil has even circular symmetry along the x-axis, so that $A(N-n, m) = A(n, m)$. When that is the case, and $A(n, m)$ is real, it can be shown that the inner sum, $S(l, m)$, has circular even symmetry: $S(N-l, m) = S(l, m)$. Furthermore, so does the image: $E(N-l, k) = E(l, k)$.
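These symmetry claims can be checked numerically with a small brute-force DFT in NumPy. The toy size, the random-but-symmetrized input, and all variable names are my own:

```python
import numpy as np

N = 16
rng = np.random.default_rng(0)
A = rng.normal(size=(N, N))
# Enforce circular even symmetry along the n-axis: A(N-n, m) = A(n, m)
A = 0.5 * (A + np.roll(A[::-1, :], 1, axis=0))

n = np.arange(N)[:, None]
l = 3  # arbitrary spatial frequency index
S_l  = (A * np.exp(-2j * np.pi * n * l / N)).sum(axis=0)        # inner sum S(l, m)
S_Nl = (A * np.exp(-2j * np.pi * n * (N - l) / N)).sum(axis=0)  # inner sum S(N-l, m)
print(np.allclose(S_l, S_Nl))       # True: S(N-l, m) = S(l, m)

F = np.fft.fft2(A)
print(np.allclose(F[l], F[N - l]))  # True: E(N-l, k) = E(l, k)
```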

The problem is that, in practice, there are several orders of magnitude of separation between the phase arguments at the mirrored spatial frequency pair $l$ and $N - l$. Before the complex exponential can be evaluated, its phase argument must be stored as a floating-point value, which entails some roundoff error. To investigate my case, I chose some sample points to test how the inner complex exponential term behaves for my array ($N \times N$, with $N = 3000$).

With $N = 3000$, I noticed a particularly high roundoff error at one mirrored pair of indices $(n, l)$. Since I only consider the real part for now, I simply compare $\cos(2\pi n l / N)$ with $\cos(2\pi n (N - l) / N)$ for this mirrored pair of spatial frequencies:

>> wrapToPi(2*pi*n*l/N)
ans = 0.9215
>> wrapToPi(2*pi*n*(N-l)/N)
ans = -0.9215
>> wrapToPi(2*pi*n*l/N) + wrapToPi(2*pi*n*(N-l)/N)
ans = -1.1866e-12
>> cos(2*pi*l*n/N) - cos(2*pi*(N-l)*n/N)
ans = 1.5709e-12

Because the two wrapped angles are symmetric about zero radians ($\pm 0.9215$ here), the difference of their cosines is mathematically zero. What we actually compute is the best approximation possible with 64-bit double-precision floating point. The residual comes out exactly the same in both MATLAB and Python/Numpy. By contrast, if I take the difference of another pair of cosine evaluations, but now with the higher angle wrapping around only one period beyond the principal range, the error is several orders of magnitude smaller:

>> wrapToPi(2*pi*n*l/N) - wrapToPi(2*pi*(n*l/N + 1))
ans = 7.1054e-15
>> cos(2*pi*n*l/N) - cos(2*pi*(n*l/N + 1))
ans = -2.2204e-16
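The same experiment translates directly to NumPy. The specific $(n, l)$ pair I used isn't reproduced here, so the values below are hypothetical stand-ins, and `wrap_to_pi` is my own substitute for MATLAB's `wrapToPi`:

```python
import numpy as np

def wrap_to_pi(theta):
    """Wrap an angle into (-pi, pi], mimicking MATLAB's wrapToPi."""
    return np.angle(np.exp(1j * theta))

N = 3000
n, l = 1234, 7  # hypothetical indices; not the pair from my original test

a = wrap_to_pi(2 * np.pi * n * l / N)
b = wrap_to_pi(2 * np.pi * n * (N - l) / N)
# Mathematically a + b = 0 and the cosine difference is 0;
# the printed values are pure double-precision residuals.
print(a + b)
print(np.cos(2 * np.pi * n * l / N) - np.cos(2 * np.pi * n * (N - l) / N))
```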

Although that 1e-12 above seems tiny, since we are adding up $N$ phasors in the inner sum, and then adding up $N$ of those inner sums, roundoff errors are accumulated $N^2 = 9$ million times for each image pixel. Just left of the image center (the pivot axis of the DFT), the phase argument $2\pi n (N - l)/N$ is at its highest, and so is the roundoff error fed into the trig function. Conversely, just right of the image center, that roundoff error will always be very low.
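One way to see the left-of-center penalty is to compare the naively computed phase against one reduced modulo $N$ in exact integer arithmetic before it ever touches a float. A sketch, with a hypothetical column index:

```python
import numpy as np

N = 3000
l = 7            # hypothetical column just right of center; its mirror N - l is just left
n = np.arange(N)

# Naive phase: the argument grows toward ~2*pi*n before any wrapping occurs
naive = np.cos(2 * np.pi * n * (N - l) / N)
# Phase reduced exactly in integer arithmetic, so the argument stays below 2*pi
reduced = np.cos(2 * np.pi * ((n * (N - l)) % N) / N)

gap = np.max(np.abs(naive - reduced))
print("max per-term discrepancy:", gap)
```

The two arrays are mathematically identical; any discrepancy is roundoff injected by the large unreduced arguments.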

Smaller array sizes would reduce this problem, but depending on what kind of measurement you’re making, that may not be an option. A better solution is to exploit the pupil’s symmetry and only keep the right-hand side of the FFT result (spatial frequencies $0 \le l \le N/2$). That would maintain a self-consistent symmetry between the pupil and the image. In fact, because the full complex image field of a real pupil is Hermitian, in this particular case the full image is determined by one quadrant; the imaginary part of the lower-half reflection just needs to be negated.
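NumPy does offer a partial version of this for the real-input case: `np.fft.rfft2` computes only the non-redundant half of the Hermitian spectrum, and the rest follows from $E(N-l, N-k) = E^*(l, k)$. A sketch with a hypothetical pupil:

```python
import numpy as np

N = 256
y, x = np.indices((N, N)) - N // 2
pupil = ((x**2 + y**2) < (N // 4)**2).astype(float)  # hypothetical real pupil

half = np.fft.rfft2(pupil)   # shape (N, N//2 + 1): the non-redundant half only
full = np.fft.fft2(pupil)

# The half array agrees with the corresponding columns of the full transform...
print(np.allclose(half, full[:, :N//2 + 1]))
# ...and determines everything else through Hermitian symmetry
print(np.allclose(np.fft.irfft2(half, s=(N, N)), pupil))
```

This only encodes realness of the input, though, not the extra even symmetry of the pupil.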

That leads to the question: is there an FFT library for MATLAB or Python/Numpy that either automatically detects, or allows you to specify, the symmetry properties of your input array? It seems like an obvious way to eliminate wasteful operations which are particularly prone to quantization noise.

Some useful references on DFTs around the web:

http://mathworld.wolfram.com/DiscreteFourierTransform.html

http://fourier.eng.hmc.edu/e161/lectures/fourier/node11.html

http://web.eecs.umich.edu/~fessler/course/451/l/pdf/c5.pdf