Chapter II 
Correcting brightness gradients in hyperspectral data from urban areas

Remote Sensing of Environment 101 (2006) 25–37. Sebastian Schiefer, Patrick Hostert and Alexander Damm. © 2005 Elsevier Inc. All rights reserved. doi:10.1016/j.rse.2005.12.003. Received 24 August 2005; revised 29 November 2005; accepted 1 December 2005.

Abstract

The analysis of airborne hyperspectral data is often affected by brightness gradients that are caused by directional surface reflectance. For line scanners these gradients occur in across-track direction and depend on the sensor’s view-angle. They are greatest whenever the flight path is perpendicular to the sun-target-observer plane. A common way to correct these gradients is to normalize the reflectance factors to nadir view. This is especially complicated for data from spatially and spectrally heterogeneous urban areas and requires surface type specific models. This paper presents a class-wise empirical approach that is adapted to meet the needs of such images.

Within this class-wise approach, empirical models are fitted to the brightness gradients of spectrally pure pixels that are identified per class by a spectral angle mapping (SAM). Compensation factors resulting from these models are then assigned to all pixels of the image, both in a discrete manner according to the SAM and in a weighted manner based on information from the SAM rule images. The latter scheme accounts for the large number of mixed pixels.

The method is tested on data from the Hyperspectral Mapper (HyMap) acquired over Berlin, Germany. In a thorough assessment using a second HyMap image as reference, it proves superior to a common global approach. The weighted assignment of compensation factors is adequate for the correction of areas that are characterized by mixed pixels.

No remainder of the original brightness gradient can be found in the corrected image, which can then be used for any subsequent qualitative and quantitative analyses. Thus, the proposed method enables the comparison and composition of airborne data sets with similar recording conditions and does not require additional field or laboratory measurements.

Chapter II:1 Introduction

In the field of imaging spectrometry, the number of urban applications has increased over the past five years (e.g. Ben-Dor et al., 2001; Herold et al., 2003; Benediktsson et al., 2005). Before that time, imaging spectrometry data was almost exclusively used in studies on vegetation or minerals in land cover applications. This can be explained to some extent by the great challenge posed by the complex structure of urban surfaces, an insufficient spatial resolution, and deficiencies in signal-to-noise ratio (SNR) and sensor calibration of early imaging spectrometers. Due to technical sensor development and increasing availability, hyperspectral data may be utilized for numerous urban remote sensing applications in the future. Especially those approaches that already proved successful with data of medium spatial and spectral resolution will further improve by including hyperspectral information, e.g. the modeling of urban sprawl (Wilson et al., 2003) or mapping of impervious surfaces (Wu and Murray, 2003).

In general, a spatial resolution of 5 m or finer is suggested for urban applications (Welch, 1982; Small, 2003). Airborne imaging spectrometers like the Hyperspectral Mapper (HyMap) can be flown at altitudes as low as 1900 m resulting in a spatial resolution of 4 m for the 128 spectral bands. The very high spectral resolution of such instruments allows for analyses that cannot be conducted with spaceborne, multispectral instruments of similar spatial resolution like IKONOS; most classification schemes in urban applications require spectral information beyond the bandwidths of multispectral sensors (Herold et al., 2003).

However, a low operating altitude requires a wide field-of-view (FOV) to cover an appropriate area; in the case of HyMap the FOV is 61.3°. Especially in urban areas, this wide FOV leads to severe image distortions like object displacement and obscured surfaces. In addition, brightness artifacts exist, which are exacerbated by large view-angles. They result from anisotropic, bidirectional reflectance and are greatest when the flight path is perpendicular to the sun-target-observer plane (e.g. Beisl, 2001). Generally speaking, the reflectance signal is higher in backscattering direction, i.e. when view and illumination direction are similar and shaded proportions of the viewed scene are hidden by sunlit proportions. This effect leads to an across-track brightness gradient that depends on the view-angle of the sensor and the illumination conditions during the over-flight. The brightness gradient “prevents precise intra- and intercomparison of images, affects spectral ratios and is adverse to proper mosaicking” (Beisl and Woodhouse, 2004) and it hinders the integration of information from spectral libraries that include laboratory and field measurements. For example, Ben-Dor et al. (2001) perform a Mixture Tuned Matched Filtering on urban imaging spectrometer data that was acquired perpendicular to the sun-target-observer plane and has not been corrected for bidirectional effects. They describe problems at large view-angles along the edges of the image using this quantitative method.

Thus, a complete preprocessing chain of imaging spectrometer data should include the correction of a possible across-track brightness gradient. This way, field and laboratory measurements can better be integrated for the design and training of subsequent classifications, quantitative models can be applied, and images that were acquired at different times or under varying conditions are easier to compare.

Existing approaches do not meet the requirements of data from urban areas. Approaches to model bidirectional effects and to assign derived compensation factors to individual pixels are not capable of describing the heterogeneous spectral and spatial structure of data from this environment. This paper extends and modifies an existing empirical approach and presents a simple, yet effective method for the removal of an across-track brightness gradient in HyMap data from urban areas.

Chapter II:2 Background

Chapter II:2.1 Bidirectional reflectance

The characteristics of bidirectional reflectance are determined by structural and optical properties of the viewed land surface (Lucht et al., 2000) and depend on illumination geometry, the sensor’s view-angle and -direction, as well as the wavelength. They are completely described by the bidirectional reflectance-distribution function (BRDF)

$$f_r(\theta_i,\varphi_i;\,\theta_r,\varphi_r;\,\lambda) = \frac{\mathrm{d}L_r(\theta_i,\varphi_i;\,\theta_r,\varphi_r;\,\lambda)}{L_i(\theta_i,\varphi_i;\,\lambda)\,\cos\theta_i\,\mathrm{d}\omega_i} \qquad (1)$$

according to Nicodemus et al. (1977), where f_r is the BRDF, L_i and L_r are the incident and reflected radiance, θ_i and θ_r are the zenith angles that describe the directions of incident and reflected flux, φ_i and φ_r are the respective azimuth angles, and ω_i is the solid angle element of irradiance in the given direction. In Eq. (1), the original, purely geometric BRDF is extended by a wavelength dependency, indicated by λ (compare Sandmeier et al., 1998).

Actually, the BRDF is a useful concept, but it can never be measured directly, since infinitesimal elements of solid angles do not include measurable amounts of radiant flux (Nicodemus et al., 1977). In practice, reflectance is measured over finite solid angles, i.e. biconical or hemispherical-conical. However, the term bidirectional reflectance factor (BRF) is most often used to describe the measured reflectance and the term bidirectional reflectance is rather loosely used for BRF(s) measured over targets from one or more nadir and off-nadir viewing angles. (Deering, 1989)

Lucht et al. (2000) mention several reasons for anisotropic bidirectional reflectance, e.g. mirror BRDF caused by specular reflectors, by sunglint or forward scattering leaves and soil elements; volume scattering BRDF of scatterers like leaves in closed canopies; and gap-driven BRDF in the case of geometric-optical surface scattering, e.g. in sparse forests, driven by shadow casting and mutual obscuration of 3-dimensional surface elements. Pinty et al. (2002) visualize and discuss how the distribution of vegetation within single pixels can influence the gap-driven BRF at different view-angles.

Chapter II:2.2 BRDF models

BRDF models have been developed for different purposes and are not exclusive to the correction of brightness gradients. Actually, the bidirectional properties of certain surfaces are often used to derive information that is difficult to deduce solely from the spectral signal. Several physical, semi-empirical and empirical BRDF models have been described and successfully applied over the past 25 years, especially for vegetation canopy reflectance, e.g. Goel (1988). A good overview of the different models in the context of the present paper is given by Beisl (2001).

Physical models are not suited for the correction of unwanted brightness gradients, but rather adapted for the interpretation of the BRDF information content. They are based on radiative transfer theory (Ross, 1981; Verhoef, 1984) or ray tracing methods (Li and Strahler, 1986; North, 1996) and describe the actual interactions between electro-magnetic radiation and surface materials. They can, for example, be used for the retrieval of biophysical parameters that are closely linked to the measured signal, like leaf area index and canopy water content (e.g. Kötz et al., 2004) or canopy architectural properties (e.g. White et al., 2002). Kimes et al. (2000) give an overview of physical models and their possible applications.

The most common semi-empirical models are so-called kernel-driven models, which already have been used for a correction of brightness gradients (e.g. Leroy and Roujean, 1994; Beisl, 2001). Kernel-driven models describe the BRDF as a linear superposition of a set of kernels, e.g. an isotropic, a volume and a geometric scattering kernel, all of which model basic BRDF shapes and are derived from approximations of more detailed physical models (Wanner et al., 1995). This way, they are simple and fast to invert (Hu et al., 1997; Chopping, 2000). The most common models have been developed for data from sensors like the Advanced Very High Resolution Radiometer (AVHRR) or the Moderate Resolution Imaging Spectroradiometer (MODIS). They are applied to derive various parameters at a global scale, e.g. the MODIS BRDF and Albedo Product (Lucht et al., 2000) but also for an improved differentiation between land cover classes using multi-angular data (Chopping et al., 2002).
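As an illustration of this linear-superposition idea, the following sketch inverts a toy kernel-driven model by ordinary least squares. The kernel shapes and all numbers are hypothetical placeholders, not the actual Ross-Thick or Li-Sparse kernels, which are functions of the full sun-view geometry:

```python
import numpy as np

# Sketch: inverting a kernel-driven BRDF model (cf. Wanner et al., 1995).
# Observed reflectance is modeled as a linear superposition
#   R(theta_v) = f_iso + f_vol * K_vol(theta_v) + f_geo * K_geo(theta_v),
# so the weights (f_iso, f_vol, f_geo) follow from ordinary least squares.
# The kernel values below are placeholders for illustration only.

def invert_kernel_model(refl, k_vol, k_geo):
    """Least-squares estimate of the kernel weights for one band."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    f, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return f  # (f_iso, f_vol, f_geo)

# Synthetic example: known weights, noise-free observations.
theta_v = np.linspace(-30, 30, 61)
k_vol = 0.01 * theta_v            # placeholder kernel shapes
k_geo = 0.001 * theta_v**2
true_f = np.array([0.3, 0.5, -0.2])
refl = true_f[0] + true_f[1] * k_vol + true_f[2] * k_geo

print(np.round(invert_kernel_model(refl, k_vol, k_geo), 3))
```

Because the model is linear in its weights, the inversion is a single matrix solve, which is why kernel-driven models are described as simple and fast to invert.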

In purely empirical approaches, mathematical functions are chosen to model the actually observed BRF without a physical basis, solely because of their shape. It is thus impossible to directly derive biophysical parameters from these models. Early empirical models for a description of the bidirectional reflectance from vegetated surfaces (e.g. Royer et al., 1985; Walthall et al., 1985) are frequently used for the correction of brightness gradients in airborne line scanner imagery. Based on the mentioned models, Kennedy et al. (1997) use a second degree polynomial to describe and remove brightness gradients in airborne data as a function of the view-angle only. They conclude that an intelligent use of these fairly simple methods can be an efficient preprocessing tool. This approach is implemented as the so-called “Cross Track Illumination Correction” in the Environment for Visualizing Images (ENVI) software package (RSI, 2004). Neglecting the azimuth dependence is feasible in this context, because the illumination geometry does not change between pixels from different scan lines that are viewed at the same angle. Thus, all differences induced by directional reflectance occur in across-track direction, i.e. within the scan lines, and are sufficiently modeled as a function of view-angle (Kennedy et al., 1997).

Beisl (2001) compares various kernel-driven approaches and the mentioned empirical model for the correction of HyMap data from South-east Spain. He concludes that some kernel-driven models definitely perform better than the empirical approach for low solar zenith angles, especially due to their ability to model the so-called hotspot effect on vegetated surfaces. For medium solar zenith angles their performance is slightly better, for high solar zenith angles it is similar. He cannot identify one kernel-driven model that performs best for all situations.

However, image data from urban environments is very complex. Besides vegetation canopies in park areas, urban areas are to a great extent characterized by man-made surface materials with geometric structures that can hardly be described by physical approaches, i.e. roofs of various inclinations or facades at large view-angles. Against this background and for the following reasons, an empirical model was chosen to compensate for brightness gradients in the HyMap data in this study: (1) it takes into account the influence of surface geometry at small scales; (2) according to Beisl (2001), the chosen empirical approach is robust and performs equally well in all illumination situations; he explicitly recommends it for “stiff problems”; (3) an empirical approach is completely independent of the viewed surface types and the spatial resolution of the image, whereas most semi-empirical kernels have been developed to model structures of vegetated surfaces at moderate resolution scales; (4) the main drawback of this approach, the lack of a term to model the hotspot effect, does not matter here, since this phenomenon does not occur in the data of this work; (5) the approach is very simple, computationally fast, and requires no ground measurements.

Chapter II:2.3 Correction of surface type dependent brightness gradients

Several authors mention variations in the bidirectional properties of different surface types: Richter and Schläpfer (2002) note the need for a class-wise correction of bidirectional effects and implement an index-based pre-classification in their software for the atmospheric correction of airborne scanner data. They do not suggest a specific method for the correction of possible brightness gradients, though. Schlerf et al. (2005) eliminate the brightness gradient in a coniferous forest by masking all other pixels prior to the fit of an empirical model. Kennedy et al. (1997) classify an airborne image prior to the correction and fit individual models to those classes. They describe great differences in the bidirectional properties of soil and vegetation and recommend a class-wise approach. Beisl (2001) describes the significant differences between classes like bright sand/soil, bright vegetation and dry vegetation and tests a class-wise correction. Feingersh et al. (2005) suggest empirical models for different surfaces based on laboratory measurements with a goniometer for the correction of BRDF effects.

The spatial and spectral heterogeneity of urban areas by far exceeds that of other environments, and a high number of mixed pixels exists. Great differences in the bidirectional reflectance of urban surfaces in HyMap data have previously been described by the authors (Schiefer et al., 2005). Considering all mentioned aspects, the compensation of the brightness gradient is conducted in a class-wise empirical approach with a focus on two questions: (1) how to best fit an empirical model to the brightness gradient in data from urban areas, and (2) how to correct all pixels of an image of such spectral and spatial heterogeneity.

Chapter II:3 Data

Chapter II:3.1 HyMap imagery

The HyMap sensor acquires data in the visible (VIS), near-infrared (NIR) and short-wave infrared (SWIR) between 0.4 and 2.5 µm. The data are stored in 128 spectral bands with an average sampling interval of approximately 15 nm. For each scan line, 512 pixels are recorded with an instantaneous field of view (IFOV) of 2.0 and 2.5 mrad in across- and along-track direction, respectively. Given the FOV of 61.3°, the maximum view-angle is slightly greater than 30° (in this paper, negative view-angles indicate a view-direction towards the sun). Limited by the operating altitude and minimum speed of the aircraft, HyMap is typically flown between 1900 and 5000 m above ground; the spatial resolution is thus between 3.9 and 10 m, the swath width between 2 and 5 km.
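For orientation, the view-angle of each of the 512 samples in a scan line can be approximated from the FOV. The sketch below assumes equal angular spacing across the swath, which idealizes the actual pointing geometry of the instrument:

```python
import numpy as np

# Sketch: approximate per-column view-angle for a 512-pixel HyMap scan line,
# assuming equal angular spacing across the 61.3 deg FOV (an idealization;
# the true geometry comes from the sensor model). Negative angles denote
# the sun-facing side, as in the text.

FOV_DEG = 61.3
N_SAMPLES = 512

def view_angle(col):
    """View-angle in degrees for column index 0..511 (nadir falls between 255 and 256)."""
    step = FOV_DEG / N_SAMPLES
    return (col - (N_SAMPLES - 1) / 2.0) * step

print(round(view_angle(0), 2), round(view_angle(511), 2))
```

Under this assumption the outermost columns are viewed at about ±30.6°, consistent with the statement that the maximum view-angle is slightly greater than 30°.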

The HyMap imagery for this study was acquired over Berlin, Germany, during the HyEurope 2003 campaign of the German Aerospace Centre (DLR), on July 30, 2003, around noon local time. Sun elevation was 56° (θ_i = 34°) and the solar azimuth φ_i was 148°.

One of four flight lines of Berlin is corrected for its across-track brightness gradient in the present work. The flight direction was at 78°, leading to a sun-target-observer geometry that causes a severe brightness gradient (Fig. II-1). The scene is located around E 397397 and N 5821900 in UTM zone 33 and covers a great variety of urban structures in an area of 16 by 2.59 kilometers at a ground instantaneous field of view (GIFOV) of 4.6 m. It extends from the central governmental district eastwards to suburban areas and includes densely built-up areas with orthogonal street patterns in the city center, park areas and allotments, industrial grounds, and railway areas. In addition, structures are viewed that are characteristic of Berlin, e.g. trees along most of the streets, or – typical for formerly socialist cities – wide boulevards in the center and large apartment complexes in suburban areas. The across-track brightness gradient is present on all surfaces but most obvious for vegetation.

Figure II-1: Illumination and viewing geometry of the corrected image and the reference image.

A second flight line was acquired from South to North with a heading of 12° and a spatial resolution of 3.9 m, overlapping the first image in the city center (Fig. II-1). The brightness gradient in this flight line is negligible due to its sun-target-observer geometry and it is used for validation of the correction results.

Chapter II:3.2 Preprocessing

Prior to the removal of the brightness gradient, the data sets were corrected for atmospheric influences and transformed to reflectance values (ρ). One of the most prominent atmospheric effects is the so-called path radiance, which is also view-angle dependent and might cause an additional brightness gradient (Beisl and Woodhouse, 2004). The atmospheric correction was performed using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm version 1.7 as implemented in ENVI 4.0 (RSI, 2004). Because this approach removes the path radiance, it was carried out first; otherwise, the path radiance might be modeled by both the empirical correction of the bidirectional effects and the subsequent parametric atmospheric correction and would thus be overcorrected. FLAASH incorporates the MODTRAN-4 radiation transfer code (Matthew et al., 2000). The amount of water vapor is calculated for each pixel individually from the 1.135 µm water feature and adjacency effects are considered. In the present case, images were corrected using the mid-latitude summer atmosphere. The ISAACS method was used to model multiple scattering of solar irradiance. Since there is no significant industry in Berlin, a rural aerosol model without soot-like aerosols was chosen. Best results were achieved with an initial visibility of 30 km for a retrieval of the aerosol optical thickness. Finally, FLAASH’s spectral polishing – a linear renormalization similar to the method introduced by Boardman (1998) – was applied to smooth the spectra.

Following this correction, the signal-to-noise ratio was estimated for the very homogeneous area of an artificial soccer field, and bands with insufficient SNR values were deleted. The remaining 116 bands are used in the present work.

Chapter II:4 Methods

A class-wise correction requires a preliminary classification of the data, an empirical modeling of the brightness gradients for all bands of the individual classes, the calculation of compensation factors that result from these models, and their application to the entire image. The individual steps will be explained in the following with a focus on the classification as well as the class-wise application of the models to the data.

Chapter II:4.1 Preliminary classification

Brightness gradients are removed in order to guarantee better results for subsequent analyses, including classifications. It thus seems paradoxical to classify an image for the removal of brightness gradients. In this context, the preliminary classification must not be seen as a classification into land use classes; areas are rather delineated by their BRF characteristics. A combination of band ratios and results from a spectral angle mapping (Kruse et al., 1993), which minimizes the influence of a brightness gradient, is used for this purpose. Being a supervised classification, the spectral angle mapping (SAM) requires training spectra; training areas were therefore manually selected for each surface type that appears relevant for the urban environment.

The results of the classification are needed for two objectives: (1) to identify pure pixels of specific surfaces in order to fit representative empirical models to the present brightness gradients, and (2) to assign class membership values to all pixels in order to apply the surface-type specific compensation factors to the entire image.

In a first step, the classes water, cast shadow, and specular reflector are masked based on thresholds in single bands or band ratios; these classes are often either very bright or very dark and hinder the work with empirical models. Afterwards, spectrally pure classes are determined using SAM with restrictive angular thresholds. In this case, a large number of unclassified pixels is accepted in favor of a high user accuracy. These classes are expected to exclude pixels in transition zones to different surface types, and statistically robust models are fit to the brightness gradients of the spectrally pure classes. The fit requires a sufficient representation of each class at all view-angles to compensate the influence of bright or dark outliers. This condition is important for the decision on the final number of classes to be used.
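The restrictive SAM step can be sketched as follows; the reference spectra, the threshold value and the function name `sam_classify` are illustrative assumptions, not the exact implementation used in the study:

```python
import numpy as np

# Sketch: spectral angle mapping (SAM) with a restrictive angular threshold,
# as used to isolate spectrally pure pixels. `refs` holds one reference
# spectrum per class; pixels whose smallest angle exceeds the threshold
# remain unclassified (-1). All numbers are illustrative.

def sam_classify(pixels, refs, max_angle_rad):
    """pixels: (n, bands); refs: (k, bands). Returns labels (n,), -1 = unclassified."""
    # Angle between each pixel vector and each reference spectrum in data space.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)
    angles = np.arccos(cos)                  # (n, k): the "rule images"
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle_rad] = -1
    return labels, angles

# Tiny example: two reference spectra, three pixels.
refs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pixels = np.array([[0.9, 0.1, 0.0],    # close to class 0
                   [0.1, 0.9, 0.0],    # close to class 1
                   [0.6, 0.6, 0.6]])   # mixed -> unclassified at 0.2 rad
labels, _ = sam_classify(pixels, refs, max_angle_rad=0.2)
print(labels)
```

Because the spectral angle ignores vector length, the measure is largely insensitive to the brightness gradient itself, which is the reason SAM is suitable for this preliminary classification.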

In a second classification the SAM is conducted without prior masking and with less restrictive thresholds for the angular values in order to classify the entire image into the previously defined classes.

Chapter II:4.2 The empirical correction of brightness gradients

The empirical correction is based upon approaches by Kennedy et al. (1997) and Walthall et al. (1985). It comprises a normalization of bidirectional reflectance factors to nadir view: existing brightness gradients are modeled by calculating the mean reflectance for each view-angle and fitting a quadratic curve to these values. This is done for each spectral band individually. The modeled reflectance ρ* is described by the view-angle θ_v (equivalent to θ_r), the quadratic and linear coefficients q and l, and the constant c:

$$\rho^{*} = q\,\theta_v^{2} + l\,\theta_v + c \qquad (2)$$

Assuming bidirectional effects to be zero at nadir position, brightness gradients can be removed via the multiplicative or additive compensation factors k_m and k_a,

$$k_m = \frac{c}{\rho^{*}} = \frac{c}{q\,\theta_v^{2} + l\,\theta_v + c} \qquad (3)$$

$$k_a = c - \rho^{*} = -\,q\,\theta_v^{2} - l\,\theta_v \qquad (4)$$

respectively. The application of Eq. (3) or (4) to all pixels from the image results in a compensation layer for each spectral band. Using this compensation layer, the normalized reflectance values ρ’ of all pixels are calculated as

$$\rho'(\theta_v, r) = \rho(\theta_v, r) \cdot k_m(\theta_v) \qquad (5)$$

$$\rho'(\theta_v, r) = \rho(\theta_v, r) + k_a(\theta_v) \qquad (6)$$

where ρ is the atmospherically corrected BRF, and r the pixel’s position in along-track direction, i.e. row.
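A minimal sketch of this normalization for a single band, using synthetic data: a quadratic model as in Eq. (2) is fit to reflectance as a function of view-angle, and the gradient is removed with a multiplicative factor as in Eqs. (3)/(5) or an additive factor as in Eqs. (4)/(6). The coefficient values are invented for the example:

```python
import numpy as np

# Sketch of the empirical nadir normalization (Eqs. (2)-(6)) for one band:
# fit rho* = q t^2 + l t + c to reflectance vs. view-angle, then remove the
# gradient multiplicatively (k_m = c / rho*) or additively (k_a = c - rho*),
# where c = rho*(0) is the modeled nadir reflectance. Data are synthetic.

def fit_gradient(theta_v, rho):
    """Return (q, l, c) of the quadratic model rho* = q t^2 + l t + c."""
    q, l, c = np.polyfit(theta_v, rho, 2)   # highest degree first
    return q, l, c

def correct(rho, theta_v, q, l, c, mode="multiplicative"):
    rho_star = q * theta_v**2 + l * theta_v + c
    if mode == "multiplicative":
        return rho * (c / rho_star)         # Eqs. (3) and (5)
    return rho + (c - rho_star)             # Eqs. (4) and (6)

# Synthetic brightness gradient: nadir reflectance 0.25, across-track trend.
theta = np.linspace(-30, 30, 512)
rho = 0.25 + 0.002 * theta + 1e-4 * theta**2

q, l, c = fit_gradient(theta, rho)
rho_corr = correct(rho, theta, q, l, c)
print(round(rho_corr.std(), 6))   # spread across the swath after correction
```

On this noise-free example the fitted curve matches the data exactly, so the corrected reflectance is constant at the nadir value; on real data, per-class fits as described in Section 4.3 replace this single global fit.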

Chapter II:4.3 Class-wise correction

As in many analyses of remotely sensed data, the mixed pixel phenomenon and false classifications play a crucial role in the present approach and interfere with the correction in two ways:

First, mixed or falsely classified pixels included in the fit would render the models less representative. Such pixels are avoided by a classification with restrictive thresholds as outlined in Section 4.1. The pixels of the resulting pure classes are used to fit the surface specific models (Eq. (2)).

Second, the entire image needs to be corrected, including pixels of rare materials or mixed composition. This requires a useful way to assign the compensation layers of the pure classes that result from Eq. (3) or (4) to every pixel of the image, including those pixels that were not classified during the first restrictive classification. Two different approaches are tested for this processing step with a focus on the correction of mixed pixels and compared to a global approach. Thus, three different corrections are discussed in this paper:

  1. a global correction, using one compensation layer for all pixels that was generated with an empirical model fit to the entire image,
  2. a class-wise correction, which applies surface specific models based on classes from the restrictive SAM to all pixels according to classification results using non-restrictive angular thresholds (Fig. II-2),
  3. a weighted class-wise correction, which applies a mixture of the compensation layers from (2) to all pixels, according to class membership values of the individual pixels that are derived from the SAM rule images (Fig. II-2).

This last approach attempts to take into account the mixed bidirectional properties of spectrally mixed pixels. It requires information on the abundance of the previously modeled surface types for every pixel. A spectral mixture analysis is not appropriate in this context for several reasons: its results are influenced by the brightness gradient, a complete set of spectral endmembers is required to produce reliable results, and the effort needed contradicts the idea of a simple empirical approach.

Instead, the existing rule images of the SAM and their inherent information on class membership are utilized to describe possible mixed pixels. These rule images contain the pixel-wise angles between the vector that describes a reference spectrum in data space, i.e. the average spectrum from a previously selected training area, and the vector of the respective pixel. This way, one rule image exists for every reference spectrum. Within these, small angles indicate great similarity. Histograms from these rule images typically show two peaks (Fig. II-3). At small angles, to the left, a narrow peak relates to pixels that are spectrally very similar to the reference spectrum, i.e. belong to the same class. A second peak further to the right represents the pixels of all other classes at once, since the angular difference is independent from a direction in spectral space. The slopes to the sides of this second peak are less steep and sometimes contain relative maxima.

Figure II-2: Class-wise and weighted class-wise correction of brightness gradients in individual bands of HyMap data. Results from a SAM with restrictive angular thresholds are used to model the brightness gradients and to generate the compensation layers of classified surface types. Rule images are then used to assign these compensation layers to individual pixels in a discrete and weighted manner. The numbers in brackets indicate the corresponding equations in the text.

During a regular SAM classification, thresholds for the angular values are defined to generate discrete classes. To describe the abundance of surface types in mixed pixels in terms of class membership values, a transition zone between pure and mixed pixels is derived interactively from the rule image of every final class (Fig. II-3). The values of the corresponding angular interval are inverse-linearly normalized between 1 and 0 for each pixel; values left and right of the transition zone are set to 1 and 0, respectively. In a second step, the transformed values of all rule images are pixel-wise divided by their sum to guarantee that they add up to unity. During the weighted class-wise correction, the surface specific compensation layers are combined for every pixel individually according to these transformed rule images, i.e. class weights.
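The transformation of rule images into class weights and the subsequent blending of class-specific compensation layers can be sketched as follows; the transition-zone bounds and all numbers are illustrative assumptions:

```python
import numpy as np

# Sketch: turning SAM rule images into per-pixel class weights and blending
# class-specific compensation layers (the weighted class-wise correction).
# Angles inside the transition zone [lo, hi] are inverse-linearly mapped
# from 1 to 0; below lo -> 1 (pure), above hi -> 0. Weights are then
# normalized to sum to one per pixel. Zone bounds are illustrative.

def rule_to_weights(rule_angles, lo, hi):
    """rule_angles: (n, k) SAM angles; returns (n, k) weights summing to 1."""
    w = np.clip((hi - rule_angles) / (hi - lo), 0.0, 1.0)
    s = w.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0                       # guard against all-zero rows
    return w / s

def weighted_correction(rho, comp_layers, weights):
    """Multiplicative case: blend class compensation factors per pixel."""
    k = (weights * comp_layers).sum(axis=1)   # per-pixel factor
    return rho * k

# Two classes, three pixels: pure class 0, pure class 1, a 50/50 mixture.
angles = np.array([[0.05, 0.80],
                   [0.80, 0.05],
                   [0.20, 0.20]])
w = rule_to_weights(angles, lo=0.10, hi=0.30)
print(np.round(w, 2))
```

A pixel well inside one class receives that class's compensation factor unchanged, while a pixel in the transition zone receives a proportional blend, which is the intended behavior for mixed pixels.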

Figure II-3: Histogram of a rule image from the class vegetation during SAM classification. The vertical lines indicate angular thresholds of the restrictive (left) and non-restrictive (right) classifications. The grey area shows the transition zone as used to transform the rule image for the weighted class-wise correction.

All three correction approaches are performed twice, using multiplicative and additive compensation factors. Within the class-wise and weighted class-wise approach the fit of the empirical models is based upon all pixels of the corresponding pure classes, not upon angular means as in the global approach. This way, unequal distributions are better taken into account and the influence of outliers at angles with only few pixels is reduced.

Chapter II:5 Results and discussion

Chapter II:5.1 Preliminary classification

A thorough analysis of the SAM results leads to four spectral classes that are used for the class-wise correction of the brightness gradient. The class vegetation is actually a combination of five sub-classes with reference spectra from different tree stands and well irrigated, photosynthetically active grass surfaces. For this purpose a minimum image was produced from the five original rule images to obtain the maximum class membership values for vegetation. Besides classification, this combined minimum image was also used for the transformation into class weights as described in Section 4.3. A differentiated treatment of these surface types would certainly be useful, but the number of well irrigated grass surfaces in the image is too small for the reliable fit of an empirical model, due to the very dry and hot summer of 2003 in central Europe. By combining different types of vegetation, a statistically robust model can be fit and the spectral heterogeneity of vegetation is represented. Given the dry weather conditions, the second class dry vegetation turns out to be appropriate and sufficiently large. This class includes non-irrigated surfaces with dry grass and varying fractions of background soil signal. The transition between the classes vegetation and dry vegetation is smooth and the number of pixels that are a mixture of the two is high. Dark roof represents a great part of the various roof types. All other roof types, i.e. red or metal roofs, lead to very small classes that are spectrally too different to be combined with the dark roofs. The class street includes non-built-up impervious surfaces. Attempts to add more classes, e.g. open soil of construction areas, yield unsatisfactory results: the classes are too small or not present in all angular intervals, making a good model fit impossible.

Whereas only 31.4% of the pixels are assigned to one of the four spectral classes in the first classification with restrictive thresholds, 98.8% are classified in the second (Table II-1). The remaining 1.2% are spectrally extreme pixels, like specular reflecting roofs or water bodies, that differ greatly from the four spectral classes. The high number of pixels classified as vegetation or dry vegetation is explained by several park areas in the city and more rural areas in the eastern part of the image.

Table II-1: Results from the restrictive and non-restrictive classification. The size of the four classes vegetation, dry vegetation, dark roof and street enables the fit of empirical correction models to the brightness gradients of the respective surface types. Specular reflectors, water and cast shadows are masked prior to the classification.

Class             Restrictive classification    Non-restrictive classification
                  No. pixels      % pixels      No. pixels      % pixels
Unclassified      1,003,039       64.3          19,312          1.2
Vegetation        279,237         17.9          615,091         39.4
Dry vegetation    81,044          5.2           311,213         20.0
Dark roof         46,951          3.0           90,922          5.8
Street            83,285          5.3           523,526         33.6
Mask              66,508          4.3           –               –

The decision on these four final classes and on the angular thresholds for the restrictive classification was driven by the class size and the quality of the model fit. Obviously, not every urban surface type is represented by one of the four classes.


During the second classification, previously unclassified or masked pixels were assigned to one of the four final classes. This way the entire image can be corrected with the class-wise approach using the compensation layers that are based on the pure classes. However, the areal increase of the four classes is unequal. The class street, for example, increases significantly more than the class dark roof. This is in part explained by the spectral heterogeneity of non-built-up areas. At the same time, several pixels from different roof types were added to the class street during the second classification. Misclassifications of materials from built-up and non-built-up areas in urban environments are a well-known problem (e.g. Herold et al., 2004). Nevertheless, a more sophisticated and hence complex classification was not intended, and a wrong assignment does not necessarily worsen the results of the brightness correction, i.e. roofs that are spectrally similar to ground materials might have similar bidirectional properties.

Chapter II:5.2 Empirical models

Empirical models are fit to the pixels from the four spectral classes in each spectral band based on the results of the restrictive classification (Fig. II-4). The distinct brightness gradients of the four classes can all be modeled well using a second-degree polynomial. The different shapes of the models underline the need for a class-wise approach. The models appear to capture different phenomena and are reminiscent of the kernels used in semi-empirical models (e.g. Wanner et al., 1995).
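The model fit and the derivation of compensation factors can be sketched with NumPy. This is a minimal single-band illustration under stated assumptions, not the authors' implementation: a second-degree polynomial is fit to class brightness versus view-angle, and multiplicative compensation factors normalize the modeled brightness to nadir view (θv = 0°); function names are hypothetical.

```python
import numpy as np

def fit_gradient_model(view_angles, brightness, degree=2):
    """Fit a second-degree polynomial to class brightness vs. view-angle.

    Returns polynomial coefficients, highest order first (np.polyfit order).
    """
    return np.polyfit(view_angles, brightness, degree)

def multiplicative_compensation(coeffs, view_angles):
    """Compensation factors that normalize modeled brightness to nadir.

    Pixels at view-angle theta are multiplied by model(0) / model(theta).
    """
    nadir = np.polyval(coeffs, 0.0)
    return nadir / np.polyval(coeffs, view_angles)
```

In a full implementation this fit would be repeated per class and per spectral band, yielding one compensation layer per class.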

The brightness gradient of vegetation is dominated by a steady, concave increase towards backscatter direction, i.e. positive view-angles. This corresponds well with findings by Jacquemoud et al. (2000). They simulate the bidirectional reflectance factor of vegetation as a function of view-angle with four physical models. Their results agree with the gradients for vegetation in the present work for the HyMap view-angle interval. Kimes (1983) explains this shape by the sensor viewing forelit surfaces when looking with the sun and backlit surfaces when looking towards the sun.


Figure II-4: Brightness gradients and empirical models of four spectral classes. Gradients are illustrated by average brightness of 4° view-angle intervals for three spectral bands at 661.6 (diamonds), 828.5 (triangles) and 1647.8 nm (squares); fitted models are displayed as solid lines. The sun incident angle θi is 34°.

The gradient for dry vegetation differs from this shape: it is almost constant in forwardscatter direction and then increases similarly to that of vegetation in backscatter direction. The dry grass surfaces lack the three-dimensional structure that causes a great part of the typical directional effects of vegetation mentioned above. The effects are thus weakened, and the brightness gradient is also expected to be influenced by the bidirectional properties of the background soil signal.

The increase towards backscatter direction in the brightness gradient for street is less distinct. The anomalous feature around θv = -22° might be explained by the structure of alternating houses and streets; the sensor views differently illuminated surfaces depending on the width of streets and the height of houses.


The influence of surface geometry is also very obvious in the gradient of dark roofs: the brightness is almost constant for all view-angles. This was expected, since the high-frequency changes in roof inclination override the view-angle's influence on the target-observer geometry.

The standard deviations of the classes are compared to their standard deviations of 4° view-angle intervals, following Beisl (2001). The mean standard deviations of the intervals are significantly lower than the overall standard deviation values for all four classes (Table II-2). This underlines the existence of the brightness gradients. The difference between the two values is most obvious for vegetation and dry vegetation, i.e. classes with very distinct gradients. The standard deviation’s decrease is less obvious for street and small for dark roof.
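The comparison of overall and per-interval standard deviations, following Beisl (2001), can be sketched as follows. This is a simplified single-band illustration and the function name is hypothetical: brightness values are binned into 4° view-angle intervals and the interval standard deviations are averaged.

```python
import numpy as np

def interval_std(view_angles, brightness, width=4.0):
    """Mean standard deviation of brightness within view-angle intervals.

    A value well below brightness.std() indicates that much of the overall
    variance stems from a view-angle dependent brightness gradient.
    """
    edges = np.arange(view_angles.min(), view_angles.max() + width, width)
    stds = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (view_angles >= lo) & (view_angles < hi)
        if sel.sum() > 1:  # skip empty or single-pixel intervals
            stds.append(brightness[sel].std())
    return float(np.mean(stds))
```

For a class with a distinct gradient, such as vegetation, the interval value drops clearly below the overall standard deviation, matching the pattern in Table II-2.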

Chapter II:5.3 Evaluation of the class-wise correction of brightness gradients

Assessing a correction of brightness gradients is complicated, because reliable references often do not exist for all angles and surface types. Besides an inspection of the brightness gradient in the corrected image and a visual analysis of results, the aforementioned overlapping HyMap image is utilized as a reference. At first, the class-wise correction will be discussed based on the results from the multiplicative approach. Afterwards the additive approach will be examined in comparison.


Table II-2: Standard deviations of classes and unclassified pixels after restrictive classification for all view-angles (SD all) and mean standard deviation of 4° view-angle intervals (SD angles). Values are averaged over all bands.

                 SD all    SD angles
Unclassified     857.18      798.24
Vegetation       569.75      432.51
Dry vegetation   546.45      472.72
Dark roof        318.66      294.34
Street           735.39      680.94

The empirical models are based on the pixels classified during the restrictive classification, i.e. only 31.4% of all pixels. However, the remaining pixels show a brightness gradient, too. This gradient is completely removed during the class-wise correction (Fig. II-5). It thus seems feasible to apply empirical models derived from pure classes to the entire image.

Figure II-5: Comparison of the brightness gradients before and after multiplicative class-wise correction of pixels that were not classified during the restrictive classification in spectral bands at 661.6 (diamonds/thick solid), 828.5 (triangles/solid) and 1647.8 nm (squares/dashed). The sun incident angle θi is 34°.


The visually observable brightness gradient is removed by all three approaches that are compared in this study (Fig. II-6b). In particular, the most dominant gradients of areas with vegetation and dry vegetation cannot be observed in the corrected images.

The differences between the class-wise and weighted class-wise approach become obvious in areas with many mixed pixels, e.g. transition zones between the classes vegetation and dry vegetation: the class-wise approach is characterized by abrupt brightness differences caused by discrete class boundaries (Fig. II-6f). A similar phenomenon is described by Beisl (2001) for agricultural areas with varying vegetation cover. At this point the advantage of the weighted class-wise approach comes into play: differences between neighboring mixed pixels are smoother (Fig. II-6e). This way, abrupt changes that might hamper subsequent quantitative analyses can be avoided. The overall quality and image statistics are almost identical. The idea of generating weights based on SAM rule images thus proved useful. Nevertheless, the authors do not consider the rule images a generally appropriate quantitative measure; especially the assumption of a linear relation between the spectral angle and surface abundance is critical.
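The weighted assignment can be sketched as a weight-normalized combination of the class compensation factors. This is a minimal illustration under stated assumptions: the weights are presumed to have already been derived from the inverted SAM rule images, and the function name is hypothetical.

```python
import numpy as np

def weighted_compensation(class_factors, weights):
    """Per-pixel compensation as a weight-normalized sum over classes.

    class_factors: (n_classes, n_pixels) compensation factors per class.
    weights: (n_classes, n_pixels) non-negative class weights, e.g. from
    inverted SAM angles; they need not sum to one and are normalized here.
    """
    w = weights / weights.sum(axis=0, keepdims=True)
    return (w * class_factors).sum(axis=0)
```

For a spectrally pure pixel the weight vector approaches one-hot and the result reduces to the discrete class-wise factor, while mixed pixels receive a smooth blend of the neighboring class models.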

Figure II-6: Subsets of the corrected image before and after the class-wise correction (R = 828.5 nm; G = 1647.8 nm; B = 661.6 nm): In the uncorrected data (a), the bright surfaces to the right (backscatter direction) lead to obvious gradients over the entire FOV. These gradients do not exist after the multiplicative class-wise correction (b); the performance of other approaches appears similar at this scale. (c) illustrates the classification of the entire image; (d) shows the restrictive SAM classification (including previously masked areas) that was used to fit the empirical models. The advantages of the weighted class-wise approach (e) over the multiplicative (f) and additive (g) class-wise approaches are obvious in transition zones with mixed pixels; the original subset (h) is shown for comparison. The full image is displayed left. Note the rotated northing of all images.


Results from the different correction approaches are compared to data from the reference image. A direct areal comparison of the entire overlapping area is not possible, because of the view-angle dependent object-displacement and the different spatial resolutions of the two images. Instead, pixels from the same distinct surfaces were manually selected in both images and the average spectra are compared. All surfaces are located in the nadir area of the reference image and at large view-angles in the corrected image. This way, areas with high compensation factors are assessed. However, different proportions of sub-pixel scale objects might be compared, due to the respective nadir and off-nadir view. This is a source of uncertainty during the assessment that is difficult to quantify, e.g. for three-dimensional structures like tree crowns.

Spectra from the image before and after the different corrections are compared to spectra from the reference image (Fig. II-7). The class-wise correction leads to better results in all cases except for irrigated lawn, which is overcorrected by all multiplicative approaches. In the case of non-irrigated vegetation, a roof and a school yard, the class-wise approach performs markedly better. Results from the weighted class-wise method are almost identical to those of the class-wise approach, and differences cannot be displayed.

Figure II-7: Spectra from six selected surfaces at large view-angles in HyMap data before and after correction with multiplicative global and class-wise approach. Spectra from the same surfaces in the nadir area of the reference image are shown for comparison.


The class-wise correction of the soil surface is only slightly better than the global correction. However, its advantage over the global correction is shown by an interesting feature: between 700 and 800 nm, the globally corrected spectrum shows a decrease in reflectance that cannot be found in the other three spectra. This is an overcorrection, caused by the influence of vegetated surfaces and their spectral characteristics in the red and NIR region on the compensation factors of the global approach. The same phenomenon was also observed for a concrete surface and a second roof surface (both not shown).

Vegetated surfaces are hardest to correct. The influence of bidirectional effects is high and brightness gradients are severe, especially in the NIR. The class-wise approach performs very well in the VIS and SWIR regions of a spectrum extracted from a tree group (Fig. II-7). In the NIR, reflectance is slightly overcorrected, but results are better than using the global approach. Similar patterns can be observed for additional vegetated surfaces (not shown). Differences might to some extent be explained by the different view-angles of corrected and reference image.

The class vegetation combines vegetation canopies of different structures and thus different bidirectional effects. Park areas with trees dominate and the correction of an extremely bright irrigated lawn surface (Fig. II-7) does not perform adequately with either method. The empirical model is not well suited for this surface type.


At this point a general problem needs to be mentioned: in a single image, surfaces are always classified by spectral values, not by bidirectional properties. This drawback has to be considered when applying a class-wise approach. Classes like vegetation that are spectrally heterogeneous might require several models. Especially for non-urban environments, more than one class for vegetated surfaces is recommendable.

However, the four correction functions and possible combinations of these prove capable of correcting most bidirectional behaviors. An increase in the number of classes – if statistically possible – should only be considered when an additional bidirectional effect is observed and can be modeled.

Overall, results from the additive approaches are not as good as results from the multiplicative corrections. Constant compensation factors cause very low and often negative values for dark surfaces, like streets (Fig. II-6g). They appear overcorrected. The original standard deviation is maintained even when average brightness is clearly reduced. Hence, images exhibit artificial noise in low albedo areas. This phenomenon is reduced by the class-wise approaches, but still exists in all classes.
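The behavior described above can be illustrated with a minimal sketch: a constant additive offset shifts dark pixels below zero and leaves their standard deviation unchanged, whereas a multiplicative factor scales values and preserves their sign. This is a schematic illustration with hypothetical function names, not the paper's implementation.

```python
import numpy as np

def correct_multiplicative(brightness, factor):
    """Relative correction: scale each pixel by a view-angle dependent factor."""
    return brightness * factor

def correct_additive(brightness, offset):
    """Absolute correction: shift each pixel by a constant offset."""
    return brightness + offset
```

Because the additive shift does not rescale the spread of values, low-albedo areas keep their original standard deviation even after their mean brightness is strongly reduced, which appears as artificial noise.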


As for the multiplicative approach, spectra from additively corrected images were compared to spectra from the reference image (not shown). Again, the class-wise approach performed better than the global. In two cases, the class-wise additive correction actually performed better than the multiplicative: the irrigated lawn was less overcorrected in the NIR and the very bright soil surface is corrected even better. In both cases, surface brightness differs significantly from the modeled gradient. The constant compensation factors of the additive approach thus generate smaller errors for extremely bright areas than the relative compensation factors of the multiplicative method.

Chapter II:6 Conclusions

The suggested class-wise and weighted class-wise empirical approaches are capable of correcting the brightness gradient in HyMap data from an urban area. They always perform better than a global approach and are not limited to certain surface types, unlike most semi-empirical models that are designed to model vegetation canopies. This is especially advantageous for the correction of urban data. At the same time, the approach taken does not require any directional field measurement with a goniometer or additional images at different viewing conditions. This makes it superior to other approaches described in the literature and applicable independently of additional information.

The multiplicative normalization performs better than the additive for the present data set. Its advantages are expected to be generally valid.


The correction of brightness gradients from vegetated surfaces might be further enhanced by a differentiation between vegetation types. In case the focus of subsequent analyses is put on vegetation issues, working with more than one vegetation class is useful. This should be tested using images that represent different vegetation types equally well.

The weighted class-wise approach overcomes problems of the discrete transitions in the class-wise approach. It is probably close to the ideal correction of brightness gradients in complex environments: one correction model is determined for each relevant surface type and individual pixels are corrected with a weighted term according to the abundance of the basic surface types.

The concept of the spectral angle mapper proved appropriate. It requires only one interactive step; it would be desirable to automate the extraction of spectral classes for all environments during preprocessing.


A brightness gradient cannot be observed for the corrected image, not even for the pixels that were discarded when fitting the brightness models. Spectra from the corrected image fit those from a reference image, even at large view-angles. Thus, the entire area of the processed image and its full hyperspectral information can be used in both qualitative and quantitative analyses, including the work with spectral libraries or in multi-temporal studies. The suggested class-wise approach can easily be transferred to other hyperspectral data sets that show similar brightness gradients.

Acknowledgments

The authors would like to thank the German Aerospace Center (DLR) for the HyMap data. Sebastian Schiefer is funded by the scholarship programme of the German Federal Environmental Foundation (DBU), Alexander Damm by the Young Scientists Programme of the State Berlin. This research is partly funded by the German Research Council (DFG) under project no. HO 2568/2-1 and 2-2. The authors also thank the three anonymous reviewers who provided helpful comments to improve this manuscript.

