
Filtering an Image

Image filtering is useful for many applications, including smoothing, sharpening, removing noise, and edge detection. A filter is defined by a kernel, which is a small array applied to each pixel and its neighbors within an image. In most applications, the kernel is a square array with an odd number of elements in each dimension (3, 5, 7, etc.), and its center is aligned with the current pixel. The process used to apply filters to an image is known as convolution, and it may be performed in either the spatial or the frequency domain. See Overview of Transforming Between Image Domains for more information on image domains.

Within the spatial domain, the first part of the convolution process multiplies the elements of the kernel by the matching pixel values when the kernel is centered over a pixel. The elements of the resulting array (which is the same size as the kernel) are summed, and the original pixel value is replaced with this sum. The CONVOL function performs this convolution process for an entire image.
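As a rough sketch of what this means at a single interior pixel (the test image, kernel values, and variable names below are only illustrative and are not part of the examples that follow), the filtered value is the sum of the kernel elements multiplied by the corresponding neighborhood values:

     ; Minimal sketch: compare CONVOL's result at one interior pixel with
     ; the sum of kernel-weighted neighbors computed by hand.
     image = FLOAT(DIST(5))               ; any small test image
     kernel = REPLICATE(1./9, 3, 3)       ; 3 by 3 averaging kernel
     result = CONVOL(image, kernel, /CENTER, /EDGE_TRUNCATE)

     ; Form the same value manually at the interior pixel [2, 2].
     neighborhood = image[1:3, 1:3]
     PRINT, TOTAL(neighborhood*kernel), result[2, 2]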

Within the frequency domain, convolution can be performed by multiplying the FFT (Fast Fourier Transform) of the image by the FFT of the kernel, and then transforming the product back into the spatial domain. Before the forward FFT is applied, the kernel is padded with zero values to enlarge it to the same size as the image. Some filters, however, are specified directly within the frequency domain and do not need to be transformed; IDL's DIST and HANNING functions produce examples of filters already defined in the frequency domain. See Windowing to Remove Noise for more information on these types of filters.
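The following lines sketch this process for a small smoothing kernel. The test image, the use of SHIFT to place the kernel center at element [0, 0], and the final scale factor (which accounts for the 1/N normalization in IDL's forward FFT) are assumptions of this sketch and are not part of the examples below:

     ; Minimal sketch of convolution in the frequency domain.
     image = FLOAT(DIST(256))
     kernel = REPLICATE(1./9, 3, 3)

     ; Pad the kernel with zeros to the size of the image, then shift it
     ; so that the kernel center lies at element [0, 0].
     paddedKernel = FLTARR(256, 256)
     paddedKernel[0, 0] = kernel
     paddedKernel = SHIFT(paddedKernel, -1, -1)

     ; Multiply the forward transforms, return to the spatial domain, and
     ; rescale (IDL's forward FFT divides by the number of elements).
     filtered = FFT(FFT(image, -1)*FFT(paddedKernel, -1), 1)
     filtered = FLOAT(filtered)*N_ELEMENTS(image)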

The following examples in this section focus on some of the basic filters applied within the spatial domain using the CONVOL function:

     Low Pass Filtering
     High Pass Filtering
     Directional Filtering
     Laplacian Filtering

Since filters are the building blocks of many image processing methods, these examples merely show how to apply filters, as opposed to showing how a specific filter may be used to enhance a specific image or extract a specific shape. This basic introduction provides the information necessary to accomplish more advanced image-specific processing.


Note
The filters mentioned in the following sections are not the only filters used in image processing. Most image processing textbooks describe many more varieties of filters.

Low Pass Filtering

A low pass filter is the basis for most smoothing methods. An image is smoothed by averaging nearby pixels, which decreases the disparity between neighboring pixel values (see Smoothing an Image for more information).

Using a low pass filter tends to retain the low frequency information within an image while reducing the high frequency information. An example is an array of ones divided by the number of elements within the kernel, such as the following 3 by 3 kernel:

     1/9  1/9  1/9
     1/9  1/9  1/9
     1/9  1/9  1/9


Note
The above array is an example of one possible kernel for a low pass filter. Other filters may include more weighting for the center point, or have different smoothing in each dimension.
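For instance, one common center-weighted alternative keeps the kernel elements summing to one so the overall image brightness is preserved. The kernel below is shown only as an illustration and is not used in the example that follows:

     ; A hypothetical center-weighted 3 by 3 smoothing kernel; the weights
     ; sum to 1, so the overall brightness of the image is preserved.
     kernel = [[1., 2., 1.], $
               [2., 4., 2.], $
               [1., 2., 1.]]/16.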

The following example shows how to use IDL's CONVOL function to smooth an aerial view of New York City within the nyny.dat file in the examples/data directory. Complete the following steps for a detailed description of the process.


Example Code
See lowpassfiltering.pro in the examples/doc/image subdirectory of the IDL installation directory for code that duplicates this example.

  1. Import the image from the nyny.dat file:

     file = FILEPATH('nyny.dat', $
        SUBDIRECTORY = ['examples', 'data'])
     imageSize = [768, 512]
     image = READ_BINARY(file, DATA_DIMS = imageSize)

     

  2. Crop the image to focus in on the bridges:

     croppedSize = [96, 96]
     croppedImage = image[200:(croppedSize[0] - 1) + 200, $
        180:(croppedSize[1] - 1) + 180]

     

  3. Initialize the display:

     DEVICE, DECOMPOSED = 0
     LOADCT, 0
     displaySize = [256, 256]

     

  4. Create a window and display the cropped image:

     WINDOW, 0, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Cropped New York Image'
     TVSCL, CONGRID(croppedImage, displaySize[0], $
        displaySize[1])

     

    The following figure shows the cropped section of the original image.

     

    Figure 8-12: Cropped New York Image

     

  5. Create a kernel for a low pass filter:

     kernelSize = [3, 3]
     kernel = REPLICATE((1./(kernelSize[0]*kernelSize[1])), $
        kernelSize[0], kernelSize[1])

     

  6. Apply the filter to the image:

     filteredImage = CONVOL(FLOAT(croppedImage), kernel, $
        /CENTER, /EDGE_TRUNCATE)

     

  7. Create another window and display the resulting filtered image:

     WINDOW, 1, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Low Pass Filtered New York Image'
     TVSCL, CONGRID(filteredImage, displaySize[0], $
        displaySize[1])

     

    The following figure shows the resulting display. The high frequency pixel values have been blurred as a result of the low pass filter.

     

    Figure 8-13: Low Pass Filtered New York Image

     

  8. Add the original and the filtered image together to show how the filter affects the image:

     WINDOW, 2, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Low Pass Combined New York Image'
     TVSCL, CONGRID(croppedImage + filteredImage, $
        displaySize[0], displaySize[1])

     

    The following figure shows the resulting display. In the resulting combined image, the structures within the city are not as pixelated as in the original image. The image is smoothed (blurred) to appear more continuous.

     

    Figure 8-14: Low Pass Combined New York Image
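Because this particular kernel is simply an equal-weight (boxcar) average, a similar result can also be obtained with IDL's SMOOTH function. The line below is offered only as a point of comparison and is not part of the example above:

     ; Boxcar smoothing of the same width with SMOOTH; for the 3 by 3
     ; kernel of ones divided by 9, interior pixels match the CONVOL result.
     smoothedImage = SMOOTH(FLOAT(croppedImage), 3, /EDGE_TRUNCATE)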

High Pass Filtering

A high pass filter is the basis for most sharpening methods. An image is sharpened when contrast is enhanced between adjoining areas with little variation in brightness or darkness (see Sharpening an Image for more detailed information).

A high pass filter tends to retain the high frequency information within an image while reducing the low frequency information. The kernel of the high pass filter is designed to increase the brightness of the center pixel relative to neighboring pixels. The kernel array usually contains a single positive value at its center, which is completely surrounded by negative values. The following array is an example of a 3 by 3 kernel for a high pass filter:

     -1  -1  -1
     -1   8  -1
     -1  -1  -1


Note
The above array is an example of one possible kernel for a high pass filter. Other filters may include more weighting for the center point.
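For instance, a commonly used variant sets the center element to 9 so that the kernel elements sum to one, which sharpens the image while roughly preserving its overall brightness. This kernel is shown only as an illustration; the example below uses a center value of 8:

     ; A hypothetical high pass (sharpening) kernel whose elements sum to 1.
     kernel = REPLICATE(-1., 3, 3)
     kernel[1, 1] = 9.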

The following example shows how to use IDL's CONVOL function with a 3 by 3 high pass filter to sharpen an aerial view of New York City within the nyny.dat file in the examples/data directory. Complete the following steps for a detailed description of the process.


Example Code
See highpassfiltering.pro in the examples/doc/image subdirectory of the IDL installation directory for code that duplicates this example.

  1. Import the image from the nyny.dat file:

     file = FILEPATH('nyny.dat', $
        SUBDIRECTORY = ['examples', 'data'])
     imageSize = [768, 512]
     image = READ_BINARY(file, DATA_DIMS = imageSize)

     

  2. Crop the image to focus in on the bridges:

     croppedSize = [96, 96]
     croppedImage = image[200:(croppedSize[0] - 1) + 200, $
        180:(croppedSize[1] - 1) + 180]

     

  3. Initialize the display:

     DEVICE, DECOMPOSED = 0
     LOADCT, 0
     displaySize = [256, 256]

     

  4. Create a window and display the cropped image:

     WINDOW, 0, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Cropped New York Image'
     TVSCL, CONGRID(croppedImage, displaySize[0], $
        displaySize[1])

     

    The following figure shows the cropped section of the original image.

     

    Figure 8-15: Cropped New York Image

     

  5. Create a kernel for a high pass filter:

     kernelSize = [3, 3]
     kernel = REPLICATE(-1., kernelSize[0], kernelSize[1])
     kernel[1, 1] = 8.

     

  6. Apply the filter to the image:

     filteredImage = CONVOL(FLOAT(croppedImage), kernel, $
        /CENTER, /EDGE_TRUNCATE)

     

  7. Create another window and display the resulting filtered image:

     WINDOW, 1, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'High Pass Filtered New York Image'
     TVSCL, CONGRID(filteredImage, displaySize[0], $
        displaySize[1])

     

    The following figure shows the results of applying the high pass filter. The high frequency information is retained.

     

    Figure 8-16: High Pass Filtered New York Image

     

  8. Add the original and the filtered image together to show how the filter affects the image:

     WINDOW, 2, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'High Pass Combined New York Image'
     TVSCL, CONGRID(croppedImage + filteredImage, $
        displaySize[0], displaySize[1])

     

    The following figure shows the resulting display. In the resulting combined image, the structures within the city are more pixelated than in the original image. The pixels are highlighted and appear more discontinuous, exposing the three-dimensional nature of the structures within the image.

     

    Figure 8-17: High Pass Combined New York Image

Directional Filtering

A directional filter forms the basis for some edge detection methods. An edge within an image is visible when a large change (a steep gradient) occurs between adjacent pixel values. This change in values is measured by the first derivatives (often referred to as slopes) of an image. Directional filters can be used to compute the first derivatives of an image (see Detecting Edges for more information on edge detection).
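As a simple illustration of the idea, the first derivative in the x-direction can be approximated by differencing the pixels on either side of each location. The test image and the SHIFT-based central difference below are assumptions of this sketch, not part of the example that follows:

     ; Central-difference approximation to the x-derivative of an image.
     image = FLOAT(DIST(64))
     dx = (SHIFT(image, -1, 0) - SHIFT(image, 1, 0))/2.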

Directional filters can be designed for any direction within a given space. For images, x- and y-directional filters are commonly used to compute derivatives in their respective directions. The following array is an example of a 3 by 3 kernel for an x-directional filter (the kernel for the y-direction is the transpose of this kernel):

     -1   0   1
     -1   0   1
     -1   0   1


Note
The above array is an example of one possible kernel for an x-directional filter. Other filters may include more weighting in the center of the nonzero columns.
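A Sobel-style kernel is one such weighted example. It is shown only as an illustration; the example below uses the unweighted kernel above:

     ; A hypothetical Sobel-style x-directional kernel with extra weight in
     ; the center of the nonzero columns. Each inner bracket is a row, so
     ; the first column holds -1, -2, -1 and the last column holds 1, 2, 1.
     kernel = [[-1., 0., 1.], $
               [-2., 0., 2.], $
               [-1., 0., 1.]]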

The following example shows how to use IDL's CONVOL function to determine the first derivatives of an image in the x-direction. The resulting derivatives are then scaled to just show negative, zero, and positive slopes. This example uses the aerial view of New York City within the nyny.dat file in the examples/data directory. Complete the following steps for a detailed description of the process.


Example Code
See directionfiltering.pro in the examples/doc/image subdirectory of the IDL installation directory for code that duplicates this example.

  1. Import the image from the nyny.dat file:

     file = FILEPATH('nyny.dat', $
        SUBDIRECTORY = ['examples', 'data'])
     imageSize = [768, 512]
     image = READ_BINARY(file, DATA_DIMS = imageSize)

     

  2. Crop the image to focus in on the bridges:

     croppedSize = [96, 96]
     croppedImage = image[200:(croppedSize[0] - 1) + 200, $
        180:(croppedSize[1] - 1) + 180]

     

  3. Initialize the display:

     DEVICE, DECOMPOSED = 0
     LOADCT, 0
     displaySize = [256, 256]

     

  4. Create a window and display the cropped image:

     WINDOW, 0, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Cropped New York Image'
     TVSCL, CONGRID(croppedImage, displaySize[0], $
        displaySize[1])

     

    The following figure shows the cropped section of the original image.

     

    Figure 8-18: Cropped New York Image

     

  5. Create a kernel for an x-directional filter:

     kernelSize = [3, 3]
     kernel = FLTARR(kernelSize[0], kernelSize[1])
     kernel[0, *] = -1.
     kernel[2, *] = 1.

     

  6. Apply the filter to the image:

     filteredImage = CONVOL(FLOAT(croppedImage), kernel, $
        /CENTER, /EDGE_TRUNCATE)

     

  7. Create another window and display the resulting filtered image:

     WINDOW, 1, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Direction Filtered New York Image'
     TVSCL, CONGRID(filteredImage, displaySize[0], $
        displaySize[1])

     

    The resulting image shows some edge information. The most noticeable edge is seen as a "shadow" for each bridge. This information represents the slopes in the x-direction of the image. The filtered image can then be scaled to highlight these slopes.

     

    Figure 8-19: Direction Filtered New York Image

     

  8. Create another window and display negative slopes as black, zero slopes as gray, and positive slopes as white:

     WINDOW, 2, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Slopes of Direction Filtered New York Image'
     TVSCL, CONGRID(-1 > FIX(filteredImage/50) < 1, $
        displaySize[0], displaySize[1])

     

    The following figure shows the negative slopes (black areas), zero slopes (gray areas), and positive slopes (white areas) produced by the x-directional filter. The adjacent black and white areas show edges in the x-direction, such as along the bridge closest to the right side of the image.

     

    Figure 8-20: Slopes of Direction Filtered New York Image
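Related directional information can also be obtained with IDL's SOBEL function, which combines x- and y-direction responses into a single edge-magnitude image. The line below is only a pointer to a related routine and is not part of the example above:

     ; Edge magnitude combining both directions with the SOBEL operator.
     edges = SOBEL(croppedImage)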

Laplacian Filtering

A Laplacian filter forms another basis for edge detection methods. A Laplacian filter can be used to compute the second derivatives of an image, which measure the rate at which the first derivatives change. This helps to determine if a change in adjacent pixel values is an edge or a continuous progression (see Detecting Edges for more information on edge detection).

Kernels of Laplacian filters usually contain negative values in a cross pattern (similar to a plus sign), which is centered within the array. The corners are either zero or positive values. The center value can be either negative or positive. The following array is an example of a 3 by 3 kernel for a Laplacian filter:

      0  -1   0
     -1   4  -1
      0  -1   0


Note
The above array is an example of one possible kernel for a Laplacian filter. Other filters may include positive, nonzero values in the corners and more weighting in the centered cross pattern.
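The cross-shaped kernel above is the negated sum of the second finite differences in x and y. The following sketch, which uses an arbitrary test image and SHIFT-based differences that are assumptions of this illustration, confirms the relationship at interior pixels:

     ; Minimal sketch: the Laplacian kernel response equals the negated sum
     ; of the second differences in x and y (away from the image edges).
     image = FLOAT(DIST(64))
     d2x = SHIFT(image, 1, 0) + SHIFT(image, -1, 0) - 2.*image
     d2y = SHIFT(image, 0, 1) + SHIFT(image, 0, -1) - 2.*image

     kernel = FLTARR(3, 3)
     kernel[1, *] = -1.
     kernel[*, 1] = -1.
     kernel[1, 1] = 4.
     laplace = CONVOL(image, kernel, /CENTER, /EDGE_TRUNCATE)

     ; Interior pixels agree; edges differ because SHIFT wraps around.
     sum2 = d2x + d2y
     PRINT, MAX(ABS(laplace[1:62, 1:62] + sum2[1:62, 1:62]))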

The following example shows how to use IDL's CONVOL function with a 3 by 3 Laplacian filter to determine the second derivatives of an image. This type of information is used within edge detection processes to find ridges. This example uses an aerial view of New York City within the nyny.dat file in the examples/data directory. Complete the following steps for a detailed description of the process.


Example Code
See laplacefiltering.pro in the examples/doc/image subdirectory of the IDL installation directory for code that duplicates this example.

  1. Import the image from the nyny.dat file:

     file = FILEPATH('nyny.dat', $
        SUBDIRECTORY = ['examples', 'data'])
     imageSize = [768, 512]
     image = READ_BINARY(file, DATA_DIMS = imageSize)

     

  2. Crop the image to focus in on the bridges:

     croppedSize = [96, 96]
     croppedImage = image[200:(croppedSize[0] - 1) + 200, $
        180:(croppedSize[1] - 1) + 180]

     

  3. Initialize the display:

     DEVICE, DECOMPOSED = 0
     LOADCT, 0
     displaySize = [256, 256]

     

  4. Create a window and display the cropped image:

     WINDOW, 0, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Cropped New York Image'
     TVSCL, CONGRID(croppedImage, displaySize[0], $
        displaySize[1])

     

    The following figure shows the cropped section of the original image.

     

    Figure 8-21: Cropped New York Image

     

  5. Create a kernel of a Laplacian filter:

     kernelSize = [3, 3]
     kernel = FLTARR(kernelSize[0], kernelSize[1])
     kernel[1, *] = -1.
     kernel[*, 1] = -1.
     kernel[1, 1] = 4.

     

  6. Apply the filter to the image:

     filteredImage = CONVOL(FLOAT(croppedImage), kernel, $
        /CENTER, /EDGE_TRUNCATE)

     

  7. Create another window and display the resulting filtered image:

     WINDOW, 1, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Laplace Filtered New York Image'
     TVSCL, CONGRID(filteredImage, displaySize[0], $
        displaySize[1])

     

    The following figure contains positive and negative second derivative information. The positive values represent depressions (valleys) and the negative values represent ridges.

     

    Figure 8-22: Laplacian Filtered New York Image

     

  8. Create another window and display only the negative values (ridges) within the image:

     WINDOW, 2, XSIZE = displaySize[0], YSIZE = displaySize[1], $
        TITLE = 'Negative Values of Laplace Filtered New York Image'
     TVSCL, CONGRID(filteredImage < 0, $
        displaySize[0], displaySize[1])

     

    The following figure shows the negative values produced by the Laplacian filter. The most noticeable ridges in this result are the medians within the wide boulevards of the city.

     

    Figure 8-23: Negative Values of Laplacian Filtered New York Image
