The Fourier Transform is a method of converting a signal of dimension X into a signal of dimension 1/X. It lets us analyze signals in terms of their frequencies and makes many mathematical manipulations of functions easier.

Here we discuss the basic implementation of the Fourier Transform in Scilab and its applications to images.

For a 2D function *f(x, y)*, its Fourier transform is expressed mathematically as

$$F(f_x, f_y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, e^{-i 2\pi (f_x x + f_y y)}\, dx\, dy$$

or, in a discrete sense, the integrals become summations:

$$F(u, v) = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1} f(x, y)\, e^{-i 2\pi \left( \frac{ux}{N} + \frac{vy}{M} \right)}$$

**Fourier Transform of a circular aperture**

Circular images of size 256 × 256 pixels were generated using Scilab. After loading an image, its 2D FFT was taken using the Scilab function **fft2d()**. The result of this function is a matrix of the same size as the original image whose elements are complex numbers. To view the result, we take the magnitude of the complex numbers. Another problem is that the result has its quadrants rearranged, with the zero-frequency term at the corners. To undo this, we use the **fftshift()** function to move the 'origin' to the center.
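The steps above can be sketched in NumPy, whose `fft2` and `fftshift` play the role of Scilab's **fft2d()** and **fftshift()**; the circle and its radius here are hypothetical stand-ins for the generated images:

```python
import numpy as np

# Hypothetical stand-in for one of the 256 x 256 circle images made in Scilab.
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
circle = (X**2 + Y**2 <= 0.3**2).astype(float)  # aperture of radius 0.3 (arbitrary)

# Scilab's fft2d() corresponds to np.fft.fft2; the result is complex,
# so we view its magnitude.
ft = np.fft.fft2(circle)
magnitude = np.abs(ft)

# The zero-frequency term sits at the corners; fftshift moves it to the center.
shifted = np.fft.fftshift(magnitude)
print(shifted.shape)  # (256, 256)
```

Since the circle is non-negative, the zero-frequency term is the brightest pixel, which after the shift lands at the center of the spectrum.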

Shown below are the image FT (2nd column) and the shifted image FT (3rd column).

If the image has dimensions of space, then the FT has dimensions of 1/space, or spatial frequency. In the FT image, the center corresponds to zero frequency, and points farther from the origin correspond to increasing frequency in either *x* or *y*. High intensity appears white and lower intensity appears as darker pixels. The resulting FT image thus shows that the given circular aperture is composed mainly of low frequencies. Increasing the radius of the circle decreases the diameter of the bright region in the image FT. This is expected since the dimension conversion is from X to f = 1/X: the larger X is, the smaller f is. Another interesting property of the FT of a circular aperture is that it is radially symmetric, which reflects the symmetry of the circular aperture.
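The radius-versus-width relation can be checked numerically; this sketch uses two hypothetical radii and the half-maximum width of the spectrum's central lobe as a rough measure:

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)

def central_lobe_width(radius):
    # FT magnitude of a circle of the given (normalized, hypothetical) radius.
    circle = (X**2 + Y**2 <= radius**2).astype(float)
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(circle)))
    row = spectrum[n // 2]  # intensity profile through the center
    # Width of the central lobe: pixels above half the peak value.
    return np.count_nonzero(row > row.max() / 2)

small_circle, large_circle = central_lobe_width(0.1), central_lobe_width(0.4)
print(small_circle > large_circle)  # the larger circle has the narrower FT pattern
```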

**Inverse Fourier Transform**

Given a Fourier transform with dimensions 1/X, we can convert it back to an image of dimension X. Its mathematical form is given by

$$f(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(f_x, f_y)\, e^{i 2\pi (f_x x + f_y y)}\, df_x\, df_y$$

To observe how inverse Fourier Transform works, we work with images of letters.

I created images of the letters *A* and *G* using Paint and took their image FTs. From the image FT, the inverse FT was obtained by applying the **fft2d()** function again, this time on the image FT.

Comparing the original image to the resulting image, we observe that the orientation was inverted through the origin.
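This inversion can be reproduced numerically: applying the forward transform twice returns the original image flipped through the origin, up to a scale factor. A NumPy sketch, with a hypothetical random array standing in for the letter image:

```python
import numpy as np

# Hypothetical 8x8 stand-in for the letter image; any real array works.
rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Applying the forward transform twice (as the post does with fft2d())
# gives the image flipped through the origin, scaled by the number of pixels.
twice = np.fft.fft2(np.fft.fft2(img)).real / img.size

# Flipped-through-origin copy: pixel (x, y) maps to (-x mod N, -y mod N).
flipped = np.roll(img[::-1, ::-1], shift=(1, 1), axis=(0, 1))
print(np.allclose(twice, flipped))  # True
```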

**Imaging device simulation using convolution**

Convolution of two signals in real space corresponds to multiplication in Fourier space. Let *f* and *g* be 2D functions. Their convolution *h* is

$$h(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x', y')\, g(x - x', y - y')\, dx'\, dy'$$

The Fourier transform *H* of *h(x,y)* is just the product of the FTs of *f(x,y)* and *g(x,y)*. That is, *H = FG*. *h(x,y)* takes a form that is partly that of *f* and partly that of *g*. This can therefore be used to model the effect of a sensor when measuring a signal, where *f* and *g* are the signal and the sensor respectively, and *h* is the result.
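The relation *H = FG* can be verified numerically. This small NumPy check compares a direct circular convolution against the Fourier-space product, using random test arrays rather than the images from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f = rng.random((N, N))
g = rng.random((N, N))

# Direct circular convolution: h[s] = sum over x of f[x] * g[(s - x) mod N].
h_direct = np.zeros((N, N))
for sx in range(N):
    for sy in range(N):
        for x in range(N):
            for y in range(N):
                h_direct[sx, sy] += f[x, y] * g[(sx - x) % N, (sy - y) % N]

# Convolution theorem: the same result via H = F * G in Fourier space.
h_fourier = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)).real

print(np.allclose(h_direct, h_fourier))  # True
```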

We use this concept to model an imaging device with the aperture size as the varying parameter. Let *f* be the function describing our object image and *g* the function describing our sensor (a simple circular aperture).

Our object image was made of the letters "VIP" and our sensor is a circular aperture of varying radius. We get their convolution by first taking the FTs of the object image and the circular aperture and multiplying the matrices element-wise. The inverse Fourier transform of the product matrix is then taken and the intensity image plotted.
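A NumPy sketch of this simulation, with a plain bright rectangle as a hypothetical stand-in for the "VIP" image. Here the shifted aperture itself multiplies the object's FT as the lens's transfer function, one common way to realize the low-pass filtering described above:

```python
import numpy as np

n = 256
# Hypothetical object: a bright rectangle standing in for the "VIP" text image.
obj = np.zeros((n, n))
obj[96:160, 64:192] = 1.0

x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)

def image_through_aperture(r):
    # Circular aperture of radius r in normalized frequency units.
    aperture = (X**2 + Y**2 <= r**2).astype(float)
    # fftshift moves the aperture's center onto fft2's zero-frequency corner;
    # a small r keeps only low frequencies, blurring the object.
    filtered = np.fft.fft2(obj) * np.fft.fftshift(aperture)
    return np.abs(np.fft.ifft2(filtered))

blurry = image_through_aperture(0.1)
sharp = image_through_aperture(0.99)
```

The wide aperture passes almost all frequencies, so `sharp` is much closer to the original object than `blurry`.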

Shown above are the results of the imaging device simulation for aperture radii r = 0.1, 0.3, 0.5, and 0.99. The resulting image becomes less blurry as the aperture size increases. Physically this is expected since a larger aperture admits more light into the device, resulting in a more detailed image.

**Template matching using correlation**

Correlation is another application of the Fourier Transform. It is used to measure the degree of similarity between two signals or objects. It is calculated as

$$p(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x + x', y + y')\, g^*(x', y')\, dx'\, dy'$$

In Fourier space, the Fourier transform *P* of *p(x,y)* is just the product of the FT of *f(x,y)* and the conjugate of the FT of *g(x,y)*. That is, *P = FG\**.

We can use this theory to match one image with another. Specifically, we can use it to match a letter against a phrase or sentence. Let our template be the sentence "THE RAIN IN SPAIN STAYS MAINLY IN THE PLAIN" and the letters to match be "A" and "S".

As with the convolution, we first take the FT of each image. We multiply the FT of the letter by the conjugate of the FT of the sentence, then take the inverse FT and plot the image.
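The recipe can be sketched as follows, with a small random patch pasted at known locations standing in for the letter and sentence images (all arrays and positions here are hypothetical):

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
template_patch = rng.random((8, 8))

# Hypothetical "sentence": the patch pasted at two known locations.
scene = np.zeros((n, n))
scene[10:18, 20:28] = template_patch
scene[40:48, 5:13] = template_patch

# Hypothetical "letter": the patch alone, zero-padded to the scene size.
template = np.zeros((n, n))
template[:8, :8] = template_patch

# P = F * conj(G): multiply the scene's FT by the conjugate of the template's FT,
# then invert to get the correlation image.
corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template))).real

# High-intensity peaks mark where the template matches the scene.
peak = np.unravel_index(np.argmax(corr), corr.shape)  # one of the two paste locations
```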

The results of the correlation are shown above. For the letter "A", we observe 5 peaks (high-intensity points) in the correlation image, while 3 peaks were observed for "S". These peaks tell how many matching letters there are in the sentence. Moreover, the locations of the peaks show exactly the positions of the matching letters in the sentence.

**Edge detection using the convolution integral**

Finding the edge of a region can be viewed as template matching, where we match a smaller contour image against the whole image. The smaller contour image is the pattern that we want to find in the whole image; an example is a small vertical-line image.

The patterns (small contours) we use are 3 × 3 matrices with a total sum of zero. The patterns used are shown below, corresponding to horizontal, vertical, and spot contours.

When each of the patterns is displayed as an image, the negative elements correspond to black pixels and the positive elements to white pixels. We use the same method as with the convolution and plot the resulting images.
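The edge-detection step can be sketched by reusing the FFT-based convolution with two of the 3 × 3 zero-sum patterns; the rectangle is a hypothetical stand-in for the "VIP" image:

```python
import numpy as np

n = 128
# Hypothetical stand-in for the "VIP" image: a filled rectangle with clean edges.
img = np.zeros((n, n))
img[40:88, 32:96] = 1.0

# Two of the 3x3 zero-sum patterns.
horizontal = np.array([[-1, -1, -1],
                       [ 2,  2,  2],
                       [-1, -1, -1]], float)
spot = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], float)

def fft_convolve(image, kernel):
    # Same FFT route as before: pad the kernel to the image size,
    # multiply the transforms, and invert.
    padded = np.zeros_like(image)
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)).real

edges_h = np.abs(fft_convolve(img, horizontal))
edges_spot = np.abs(fft_convolve(img, spot))
```

Because the kernels sum to zero, flat regions give no response; the horizontal pattern misses purely vertical edges, while the spot pattern responds to edges in every direction.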

The images above show the results of the convolution for the horizontal, vertical, and spot patterns. For the horizontal pattern, completely vertical edges in the word "VIP" were not recorded as edges (see the sides of the letter *I*). The same goes for the vertical pattern: completely horizontal edges were not recorded as edges (top and bottom of *I*). The spot pattern gives the best edge detection among all the patterns used.