
Typically, when we think of change detection in a remote sensing context, we consider the difference between a set of images captured over a fixed area during a relatively short period of time, such as imagery from 2005 compared to imagery from 2010. In some cases, the historical depth of the data set is greater, allowing for a continuous 30+ year analysis as demonstrated in one of my earlier posts – Automating Complex Multi-Temporal Analyses with eCognition.

And then there are the results of this fascinating paper recently published in the MDPI Remote Sensing journal entitled Over 150 Years of Change: Object-Oriented Analysis of Historical Land Cover in the Main River Catchment, Bavaria/Germany. In their study, the authors use the Trimble eCognition software to examine land cover change in the Main River Catchment region based on historical maps spanning a 150-year period and current GIS data.

The input data used to determine historical land cover came from so-called “primary sheets”, or Uraufnahmen in German, created between 1808 and 1864. Position sheets were then derived from the primary sheets and cover nearly the entire extent of the former Kingdom of Bavaria – these were typically produced between 1817 and 1872. The authors used a combination of 26 position sheets and 456 primary sheets in GeoTiff format.

2015 ATKIS (Amtliches Topographisch-Kartographisches Informationssystem) data was used to represent current land cover conditions.

Because the maps from the 1800s used land cover designations that differ from those in use today, a common land cover scheme was established to allow direct comparison of the different data sets. The authors used the following classes (a simple mapping sketch follows the list):

  • Cropland: Arable land for crop production (Ackerland, Streuobstacker, Hopfen, and Gartenland)
  • Meadow: Pastures and grasslands (Grünland and Streuobstwiese)
  • Forest: Standing trees that serve for timber production (Wald)
  • Urban: Human settlements, e.g., cities, houses, historical buildings, transportation lines, and recreational areas (Siedlung)
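
To make the mapping explicit, here is a minimal sketch (my own illustration, not code from the paper) of how the German designations listed above could be folded into the four common classes:

```python
# Illustrative mapping of the source designations named above onto the
# common four-class scheme; the German terms are quoted from the paper,
# but the dictionary structure itself is just a sketch.
CLASS_SCHEME = {
    "Cropland": ["Ackerland", "Streuobstacker", "Hopfen", "Gartenland"],
    "Meadow":   ["Grünland", "Streuobstwiese"],
    "Forest":   ["Wald"],
    "Urban":    ["Siedlung"],
}

# Invert the mapping to look up the common class for a given designation.
TO_COMMON = {src: common for common, sources in CLASS_SCHEME.items() for src in sources}

assert TO_COMMON["Hopfen"] == "Cropland"
```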

The analysis and classification of the historical maps was done in eCognition and covered an area of 2268 km². “OBIA methods are preferred over pixel-based approaches for this task… During the analysis of individual pixel information, the context is missing, which is important for historical maps where the meaning of symbols depicted play a more important role than the colors on the map”.

The eCognition-based analysis began with a normalization of the input images – the 3-band historical maps. The authors applied both a linear stretch and a histogram normalization, and “the nine bands together (3 RGB, 3 linearly normalized RGB, and 3 histogram normalized RGB) were used and combined to increase the accuracy of land classes detection”.
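
As a rough illustration of that preprocessing step, the following sketch builds the nine-band stack from the original, linearly stretched, and histogram-equalized bands. It assumes an (H, W, 3) RGB array scaled to [0, 1]; the percentile cut-offs are my assumption, not values reported in the paper:

```python
import numpy as np
from skimage import exposure

def normalize_and_stack(rgb):
    """Sketch of the 9-band stack described in the paper: original RGB,
    linearly stretched RGB, and histogram-equalized RGB.
    `rgb` is assumed to be an (H, W, 3) float array in [0, 1]."""
    linear = np.empty_like(rgb)
    hist = np.empty_like(rgb)
    for b in range(3):
        band = rgb[..., b]
        # Linear stretch between the 2nd and 98th percentiles (the cut-off
        # values are an assumption; the paper does not report them).
        lo, hi = np.percentile(band, (2, 98))
        linear[..., b] = exposure.rescale_intensity(band, in_range=(lo, hi))
        # Histogram equalization as the second normalization.
        hist[..., b] = exposure.equalize_hist(band)
    # Combine into a single 9-band image of shape (H, W, 9).
    return np.dstack([rgb, linear, hist])
```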

Once the data preparation was complete, a segmentation was performed to generate initial image objects, after which “many classification-based segmentations follow that improve the extent and identification of objects, describing an iterative process. In each step, more information is extracted from the image and used to improve the aggregation of objects”. To generate their initial image objects, the authors ran a quadtree segmentation followed by a multiresolution segmentation.
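
eCognition's quadtree and multiresolution segmentations are proprietary algorithms, so the sketch below only mimics the first step in spirit: it recursively quarters a single band (assumed 8-bit) until each square is homogeneous within a user-defined scale. The function name and homogeneity criterion are my own simplifications; something like scikit-image's felzenszwalb segmentation could loosely stand in for the subsequent multiresolution step in an experiment of this kind:

```python
import numpy as np

def quadtree_segments(img, scale=20.0, min_size=4):
    """Minimal quadtree-style split, analogous in spirit to eCognition's
    quadtree segmentation: recursively quarter an (H, W) band (e.g. 8-bit
    values) until the value range inside each square falls below `scale`.
    Returns a label image; this is an illustrative re-implementation, not
    the eCognition algorithm itself."""
    labels = np.zeros(img.shape, dtype=np.int32)
    next_label = [1]

    def split(r0, r1, c0, c1):
        tile = img[r0:r1, c0:c1]
        homogeneous = (tile.max() - tile.min()) <= scale
        if homogeneous or (r1 - r0) <= min_size or (c1 - c0) <= min_size:
            labels[r0:r1, c0:c1] = next_label[0]
            next_label[0] += 1
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for rs, re, cs, ce in [(r0, rm, c0, cm), (r0, rm, cm, c1),
                               (rm, r1, c0, cm), (rm, r1, cm, c1)]:
            split(rs, re, cs, ce)

    split(0, img.shape[0], 0, img.shape[1])
    return labels
```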

With image objects in place, Ulloa-Torrealba et al. then defined a number of additional image object features (what they refer to as “variables”). Among these was a Customized Feature (i.e. a user-defined feature) for Normalized Brightness. Additionally, Hue, Saturation, and Intensity (HSI) features were calculated for the entire RGB color space, including the normalized layers (note that in eCognition 10 the HSI calculation is now automated via the rule set and supports the creation of image layers via the color space transformation algorithm). In total, 12 HSI features were calculated for use in the classification.
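
For readers who want to reproduce such colour features outside of eCognition, one common HSI formulation looks roughly like the sketch below (the exact transform eCognition applies internally may differ); running it on the original and on each normalized RGB stack yields the corresponding hue, saturation, and intensity layers:

```python
import numpy as np

def rgb_to_hsi(rgb, eps=1e-8):
    """Hue, Saturation, Intensity from an (H, W, 3) RGB array in [0, 1],
    using one common HSI formulation (an assumption, not necessarily the
    colour space transformation used in eCognition)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    # Hue scaled to [0, 1]; the usual convention flips the angle when B > G.
    hue = np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)
    return hue, saturation, intensity
```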

Once all input layers and potential features for the image analysis were established, the authors began the rule set development phase of their project. It is nice to see this discussed in their paper as it is often a part of the project lifecycle that is swept under the rug. Creating a rule set can be compared to creating a computer program or app, and this typically goes through several iterations of testing. The process of finding the best fitting features for the establishment of class descriptions “was repeated approximately 312 times, for an average of two variables and algorithms per class”. In the end, the authors note that “during the class identification, mean, brightness, standard deviation, and shape indexes were the most used”. In addition, valuable “spatial features, such as area, relative border with a specific class, and distance to scene border, were used in the reshaping and border improvement phase”.
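
To give a feel for what such class descriptions boil down to, here is a purely hypothetical threshold rule operating on per-object features – the feature names and threshold values are invented for illustration and are not taken from the paper:

```python
def classify_object(obj):
    """Toy class description: `obj` is a dict of per-object features
    (mean band values, brightness, standard deviation, shape indexes,
    relational features, ...). All names and thresholds are invented."""
    if obj["brightness"] < 0.25 and obj["std_red"] > 0.10:
        return "letters_lines"
    if obj["mean_green"] > 0.55 and obj["shape_index"] < 2.0:
        return "meadow"
    if obj["mean_green"] < 0.35 and obj["brightness"] < 0.45:
        return "forest"
    if obj["rel_border_to_letters_lines"] > 0.5:
        return "urban"
    return "cropland"  # fall-through class in this toy example
```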

The characteristics of each historical map (image) added complexity to the project as these were unique and hampered transferability – “for every set of variables, bands, and parameters used, threshold values were chosen and used in the classification”. Ulloa-Torrealba et al. provide a nice summary of the different algorithms they employed during the classification and object refinement phase.
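
A simple way to picture that per-map parameterization is a lookup of threshold overrides per sheet – again a hypothetical sketch, with invented sheet identifiers and values:

```python
# Purely hypothetical illustration: the same rule logic runs on every sheet,
# but threshold values are looked up per map, since each historical sheet
# has its own colours and drawing style.
DEFAULT_THRESHOLDS = {"forest_max_green": 0.35, "meadow_min_green": 0.55}
PER_SHEET_OVERRIDES = {
    "sheet_042": {"forest_max_green": 0.30},   # darker print
    "sheet_113": {"meadow_min_green": 0.60},   # faded colours
}

def thresholds_for(sheet_id):
    """Merge the defaults with any sheet-specific overrides."""
    return {**DEFAULT_THRESHOLDS, **PER_SHEET_OVERRIDES.get(sheet_id, {})}
```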

| Category | Algorithm | Description |
|---|---|---|
| Basic Classification | Assign Class | Assigns a class to all objects of another class with a membership of 1 |
| | Classification | Classifies objects according to their membership in a list of selected classes |
| Advanced Classification | Find Enclosed by Class | Finds and classifies objects that are totally enclosed or surrounded by the target class |
| Basic Object Reshaping | Merge Region | Merges all the objects of the indicated domain or class |
| | Grow Region | Expands image objects by incorporating neighboring objects |
| | Convert to Sub-Objects | Splits an object into the smaller objects in the level underneath |
| Advanced Object Reshaping | Border Optimization | Changes the shape of objects by either adding (Dilatation) or removing (Erosion) sub-objects from the outer or inner border, respectively |
| | Morphology | Smooths the border of objects using a user-created mask that either removes (Opening) or adds (Closing) pixels |
| Pixel-Based Object Reshaping | Pixel-Based Object Resizing | Grows or shrinks objects based on a relative area defined by the user; operates at the pixel level |
| Interactive Operation | Manual Classification | Allows object classification with a click |
| Export | Export Vector Layers | Exports the selected classes and specifies format and attributes |

A hierarchical classification approach was taken, meaning that not all objects were assigned to a class at once. This approach has advantages not only in terms of performance but also in regards to building more flexible rule sets – the rule set designer can take advantage of relational features and be more general in the use of thresholds within class descriptions. In this case, letters and lines were classified first, followed by cropland, meadow, and forest. Then, urban objects needed to be carved out of the letters and lines classification due to the similarity of color and shape in the historical maps. In a final phase, objects with the desired land cover classes were put into a new image object level.
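
In pseudocode-like Python, the sequential logic might be sketched as follows – the ordering mirrors the description above, while every individual rule and feature name is a placeholder rather than the authors' actual rule set:

```python
def hierarchical_classify(objects):
    """Hypothetical hierarchical pass order: each pass only touches objects
    that are still unclassified, so later rules can rely on earlier classes.
    All rules, features, and thresholds are placeholders."""
    passes = [
        ("letters_lines", lambda o: o["brightness"] < 0.25),
        ("forest",        lambda o: o["mean_green"] < 0.35),
        ("meadow",        lambda o: o["mean_green"] > 0.55),
        ("cropland",      lambda o: True),  # remaining objects in this toy example
    ]
    for obj in objects:
        obj["class"] = None
    for name, rule in passes:
        for obj in objects:
            if obj["class"] is None and rule(obj):
                obj["class"] = name
    # Final step: carve urban objects out of the letters & lines class using
    # shape features (placeholder criterion).
    for obj in objects:
        if obj["class"] == "letters_lines" and obj["area"] > 500 and obj["compactness"] > 0.4:
            obj["class"] = "urban"
    return objects
```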

In addition to the analysis, the authors also chose to do the accuracy assessment in eCognition with the Accuracy Assessment tool. An error matrix was generated based on a TTA (Training and Test Area) Mask, comparing samples from each input image to the classification results. This tool was particularly helpful to get an indication of rule set stability and reliability. “According to the automatic classification, 40% of the land was classified as cropland, 35% as forest, 22% as meadows, and only 0.5% as urban areas”. The authors noted a “remarkable” correspondence between the results of the automated analysis of the historical maps and the cadastral statistics from 1853 where “42% of the land was classified as cropland, 32% as forest, 20% as meadows, and 0.7% as urban areas”.
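
Outside of eCognition, an equivalent error matrix, overall accuracy, and Kappa could be computed along these lines (a sketch assuming co-registered label arrays and scikit-learn; the convention that 0 means "no reference sample" in the TTA mask is my assumption):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def error_matrix(classified, tta_mask, labels):
    """Error-matrix style accuracy check analogous to eCognition's Accuracy
    Assessment tool: compare classified pixels against reference (TTA mask)
    pixels wherever the mask is defined. Both inputs are integer label
    arrays of the same shape; 0 is assumed to mean 'no reference sample'."""
    valid = tta_mask > 0
    y_true = tta_mask[valid].ravel()
    y_pred = classified[valid].ravel()
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    overall = np.trace(cm) / cm.sum()
    kappa = cohen_kappa_score(y_true, y_pred, labels=labels)
    return cm, overall, kappa
```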

The final part of the analysis was the change detection between 1850 and 2015. The comparison with the modern ATKIS data demonstrated a significant increase in urban area, from 0.3% to 8.8% (2600% growth), a 24% loss in cropland as well as a 4% loss in meadows and a 4% growth in forested areas. The expansion of urban area does not come as a surprise given the population increase since 1850 – the majority of the land cover conversion logically took place on former cropland, followed by meadow.
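
The per-class percentages and the "where did it go" question behind that last statement amount to a cross-tabulation of the two classifications, roughly like this sketch (assuming co-registered label rasters that share the same class codes):

```python
import numpy as np

def class_shares(label_img, class_names):
    """Percentage of mapped area per class for one classification.
    `class_names` maps integer class codes to names."""
    total = label_img.size
    return {name: 100.0 * np.count_nonzero(label_img == code) / total
            for code, name in class_names.items()}

def change_matrix(historic, modern, class_names):
    """Cross-tabulate the 1850 vs 2015 labels to see where each historical
    class went; assumes co-registered rasters with identical class codes."""
    codes = list(class_names)
    table = np.zeros((len(codes), len(codes)), dtype=np.int64)
    for i, c_from in enumerate(codes):
        for j, c_to in enumerate(codes):
            table[i, j] = np.count_nonzero((historic == c_from) & (modern == c_to))
    return table  # rows: 1850 class, columns: 2015 class
```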

The authors’ approach resulted in a “high accuracy: 98 ± 0.4% (Overall accuracy) and 96 ± 0.06% (Kappa)”. The urban class yielded the lowest accuracy, which was expected “due to the difficulty of separating letters & lines and urban classes”.

When comparing the automated classification results to a manual classification, the automated approach in eCognition performed well: “maps with very good and good quality could be automatically classified with higher accuracy than those with poor quality”. The overall accuracy was 82% – a detailed confusion matrix can be found in the publication. The authors conclude that “accuracy is between high and moderate depending on the input quality of the maps”.

Image analysis, especially automated image analysis, often depends on the quality of the input data, and the computer science GIGO (garbage in, garbage out) concept applies. I really enjoyed the authors’ presentation and discussion of their study in this paper – it was informative and did not shy away from the realities of rule set development and input data.
