A Review of the Application of Optical and Radar Remote Sensing Data Fusion to Land Use Mapping and Monitoring

Radar and optical remote sensing data evaluation and fusion; a case study for Washington, DC, USA

The recent increase in the availability of spaceborne radar at different wavelengths and with multiple polarisations provides new opportunities for land surface analysis. This research effort explored how different radar data, and derived texture values, independently and in combination with optical imagery influence land cover/use classification accuracies for a study site in Washington, DC, USA. Two spaceborne radar images, Radarsat-2 C-band and PALSAR L-band quad-polarised radar, were registered with ASTER optical data for this study. Traditional methods of classification were applied to various components and combinations of this data set, and overall and class-specific thematic accuracies were obtained for comparison. The results for the two despeckled radar data sets were quite different, with Radarsat-2 achieving an overall accuracy of 59% and PALSAR 77%, compared with 90% for the optical ASTER data. Combining the original radar with a variance texture measure increased the accuracy of Radarsat-2 to 71% but that of PALSAR only to 78%. One of the sensor fusions of optical and radar obtained an accuracy of 93%. For this location, radar alone does not achieve classification accuracies as high as optical data, but its fusion with optical imagery provides better overall thematic accuracy than the optical data independently, and results in some useful improvements on a class-by-class basis. For regions with persistent cloud cover, quad-polarised radar can independently provide viable results, although this may be wavelength-dependent.
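The variance texture measure mentioned above can be illustrated with a minimal sketch: a local variance computed in a moving window over a (despeckled) radar backscatter band and stacked with the original band as an additional layer for a per-pixel classifier. The window size, the use of NumPy/SciPy, and the synthetic backscatter array are illustrative assumptions, not details taken from the study.

```python
# Sketch of a local-variance texture layer for a radar band (assumed workflow).
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(band: np.ndarray, window: int = 7) -> np.ndarray:
    """Local variance in a (window x window) neighbourhood:
    Var[x] = E[x^2] - (E[x])^2, computed with moving-average filters."""
    band = band.astype(np.float64)
    mean = uniform_filter(band, size=window)
    mean_sq = uniform_filter(band * band, size=window)
    return np.clip(mean_sq - mean * mean, 0.0, None)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic speckle-like backscatter stands in for a despeckled radar band.
    radar = rng.gamma(shape=4.0, scale=25.0, size=(256, 256))
    texture = variance_texture(radar, window=7)
    # Stack backscatter and texture (and, in practice, co-registered optical
    # bands) as input layers for a per-pixel classifier.
    stack = np.dstack([radar, texture])
    print(stack.shape)  # (256, 256, 2)
```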

Radar and optical data integration for land-use/land-cover mapping

PE&RS - Photogrammetric Engineering & Remote Sensing, 2000

This study evaluated the advantages of combining traditional spaceborne optical data from the visible and infrared wavelengths with the longer wavelengths of radar. East African landscapes, including areas of settlements, natural vegetation, and agriculture, were examined. For three study sites, multisensor data sets were digitally integrated with training data and ground-truth information derived from field visits. The primary methodology was standard image processing, including spectral signature extraction and the application of a statistical decision rule to classify the surface features. The relative accuracy of the classifications was established by comparison to ground-truth information. In all sites, the merger of optical and radar sensors improved the ability to map surface features over either sensor independently, although different manipulations of the radar data were necessary to obtain the most useful results. Those manipulations included measures of texture, spatial filtering, and despeckling prior to texture extraction.
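The "spectral signature extraction and statistical decision rule" workflow described above can be sketched as a per-class Gaussian maximum-likelihood classifier: class signatures (mean and covariance) are estimated from training pixels, and each pixel is assigned to the class with the highest log-likelihood. The class names, band count, and random training samples below are illustrative assumptions, not the study's actual signatures.

```python
# Sketch of signature extraction plus a maximum-likelihood decision rule.
import numpy as np

def fit_signatures(samples: dict[str, np.ndarray]):
    """samples maps class name -> (n_pixels, n_bands) training array."""
    sigs = {}
    for name, x in samples.items():
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # regularise
        sigs[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return sigs

def classify(pixels: np.ndarray, sigs) -> np.ndarray:
    """pixels: (n_pixels, n_bands). Returns the most likely class per pixel."""
    names = list(sigs)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mean, inv_cov, logdet = sigs[name]
        d = pixels - mean
        maha = np.einsum("ij,jk,ik->i", d, inv_cov, d)  # Mahalanobis distance
        scores[:, j] = -0.5 * (maha + logdet)            # Gaussian log-likelihood
    return np.array(names)[scores.argmax(axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Hypothetical three-band signatures for three surface-cover classes.
    train = {
        "urban":      rng.normal([120, 90, 60], 10, size=(200, 3)),
        "vegetation": rng.normal([60, 140, 80], 10, size=(200, 3)),
        "water":      rng.normal([30, 40, 120], 10, size=(200, 3)),
    }
    sigs = fit_signatures(train)
    print(classify(rng.normal([60, 140, 80], 10, size=(5, 3)), sigs))
```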