A Russian proverb states that “it takes all kinds of trees to make a forest”, but what kind of trees are in our forests? I am frequently asked whether it is possible to identify individual tree species with eCognition, and oftentimes I am forced to answer with, “it depends”… Yes, our Trimble eCognition software does provide a number of powerful and flexible image analysis tools, but the input data plays a key role.
I recently read an interesting paper, “Spectral-Spatial Dimensionality Reduction of APEX Hyperspectral Imagery for Tree Species Classification; a Case Study of Salzach Riparian Mixed Forest”, from Zahra Dabiri and Stefan Lang of the Department of Geoinformatics (Z_GIS) at the University of Salzburg where the authors investigate the use of airborne hyperspectral data for the classification of 6 tree species: Norway Spruce (Picea abies), Canadian Poplar (Populus canadensis), Balsam Poplar (Populus balsamifera), European Ash (Fraxinus excelsior), Grey Alder (Alnus incana) and White Willow (Salix alba).
For this purpose, the study used data from the ESA-APEX (airborne prism experiment) – hyperspectral imagery with 288 bands and a spatial resolution of 2.5 m. Working with such hyperspectral data presents 2 primary challenges, as the authors point out: a) the Hughes phenomenon and b) the spectral complexity posed by a riparian mixed forest environment – the “spectral variability per pixel”.
To account for the Hughes phenomenon challenge, Dabiri and Lang implemented the widely used minimum noise fraction (MNF) transformation technique to segregate noise and reduce the dimensionality. This led to a reduction in bands used from 288 to 18.
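To give a feel for what the MNF transformation does, here is a minimal numpy sketch (not the authors' actual pipeline): the noise covariance is estimated from differences between adjacent pixels, the data is whitened with respect to that noise, and a PCA on the whitened data then ranks components by signal-to-noise ratio. The synthetic cube and the band counts below simply mirror the 288-to-18 reduction described in the paper.

```python
import numpy as np

def mnf_reduce(cube, n_components):
    """Minimum Noise Fraction: noise-whitening followed by PCA.
    cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)

    # Estimate noise as differences between horizontally adjacent pixels.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2)
    cov_noise = np.cov(noise, rowvar=False)

    # Whiten the data with respect to the noise covariance.
    evals, evecs = np.linalg.eigh(cov_noise)
    whiten = evecs / np.sqrt(np.maximum(evals, 1e-12))
    Xw = (X - X.mean(axis=0)) @ whiten

    # PCA on the noise-whitened data; leading components have the highest SNR.
    cov_signal = np.cov(Xw, rowvar=False)
    evals_s, evecs_s = np.linalg.eigh(cov_signal)
    order = np.argsort(evals_s)[::-1]
    components = evecs_s[:, order[:n_components]]
    return (Xw @ components).reshape(rows, cols, n_components)

# Synthetic 288-band cube reduced to 18 MNF bands, as in the study.
rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 288))
reduced = mnf_reduce(cube, 18)
print(reduced.shape)  # (20, 20, 18)
```

In practice the number of retained components is chosen by inspecting the eigenvalues (or the imagery itself) and keeping only the components dominated by signal.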
To address the challenge of spectral variability, the authors chose to use Trimble eCognition to apply object-based image analysis (OBIA) techniques. Working with image objects, rather than individual pixels, allows for the reduction of spatial complexity within an image. Once image objects were created, a machine learning approach using a random forest (RF) classifier was applied within eCognition (for details on the number of samples used for the analysis, please refer to Table 2 in the publication).
The results of the OBIA classification with eCognition were compared to a traditional pixel-based approach. The OBIA method yielded an overall accuracy of 85%, which was 22% higher than the pixel-based analysis. The figure below shows examples comparing the 2 classification approaches:
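The “overall accuracy” reported in such comparisons is the proportion of correctly classified samples, i.e. the trace of the confusion matrix divided by its total. A tiny example with hypothetical counts (not the study's actual confusion matrix):

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows: reference, cols: predicted).
cm = np.array([
    [50,  3,  2],
    [ 4, 45,  6],
    [ 1,  5, 40],
])

# Overall accuracy = correctly classified samples / all samples.
overall_accuracy = np.trace(cm) / cm.sum()
print(f"overall accuracy: {overall_accuracy:.2%}")  # 86.54%
```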
The study concludes that “spectral-spatial dimensionality reduction of airborne APEX hyperspectral imagery, using MNF transformation and object-level classification showed a better performance in comparison to classification of the only spectrally-reduced APEX hyperspectral imagery. Still, both classification results (i.e., object- and pixel-levels) have misclassifications, which might be caused by variation of spectral signature due to age, health and bidirectional reflectance of tree species”.
Mapping individual tree species is possible with eCognition, in combination with hyperspectral data, as the authors demonstrate, but one does need to account for the natural variability within the forest.