Field Boundary Delineation with Multi-Temporal Sentinel-2 Imagery
We are often asked how Trimble eCognition can be used to detect agricultural fields from imagery, and we typically tell our users to consider a multi-temporal analysis approach, as fields change through time depending on the growing season and crop type. Our Trimble Innovation Program partner at the University of Stellenbosch in South Africa recently published a fascinating paper on this topic: Barry Watkins and Adriaan Van Niekerk's A Comparison of Object-Based Image Analysis Approaches for Field Boundary Delineation using Multi-Temporal Sentinel-2 Imagery, which appeared in Computers and Electronics in Agriculture.
Although remote sensing techniques have been used to extract field boundaries in the past, most of these studies focused on Very High Resolution (VHR) data. As implied in the paper’s title, Barry’s study utilized Sentinel-2 data with a comparatively lower spatial resolution of 10 m, but with the great advantage of being freely available worldwide with a revisit time of 5 days. The use of lower-resolution data makes this type of analysis more challenging.
The goal of the research was to evaluate several EO methodologies for automatically delineating agricultural field boundaries. In the developed method, the authors use a novel multi-temporal edge detection approach to segment crop fields, orchards and vineyards.
For this purpose, a study area on the border of the Northern Cape, North West and Free State provinces of South Africa was chosen. The region is characterized by dry winters and long, warm summers with crops of maize, barley, groundnuts, pecan nuts and lucerne.
Seven Sentinel-2 scenes were acquired for the region during the summer growing season to best capture crop diversity. The OBIA workflow consisted of 5 general steps: 1) generation of edge detection layers for the individual input images, 2) aggregation of the edge detection layers, 3) image segmentation based on the aggregated edge layer, 4) separation of uncultivated image objects and finally, 5) the removal of noise.
Two different edge detection algorithms were applied: the Canny edge detector and the Scharr operator. In step 2 of the analysis, the edge detection results, 28 multi-temporal edge layers (4 bands x 7 acquisition dates), were grouped via “a simple equal weight summation” into a single composite layer. The edge composite layers were used as input for segmentation – three segmentation algorithms within eCognition Developer were selected for analysis: the well-known multiresolution segmentation (MRS), the multi-temporal segmentation (MTS) and the watershed segmentation (WS).
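The authors implemented steps 1–3 in eCognition, but the core idea can be sketched with open-source tools. The following Python sketch (using NumPy, scikit-image and SciPy on synthetic data; the array sizes, noise model and marker-free watershed call are my own illustrative assumptions, not the paper's implementation) shows how per-date, per-band Scharr edge layers can be summed with equal weights so that a boundary persisting across dates is reinforced while date-specific noise is not:

```python
import numpy as np
from skimage import filters
from skimage.segmentation import watershed

# Synthetic stand-in for a multi-temporal Sentinel-2 stack:
# 7 acquisition dates x 4 bands x 64 x 64 pixels (illustrative only).
rng = np.random.default_rng(0)
stack = rng.random((7, 4, 64, 64))
# Add a vertical "field boundary" that persists across all dates/bands.
stack[:, :, :, 32:] += 1.0

# Step 1: per-date, per-band edge layers (Scharr gradient magnitude).
edge_layers = np.array([
    [filters.scharr(stack[t, b]) for b in range(stack.shape[1])]
    for t in range(stack.shape[0])
])  # 7 x 4 = 28 edge layers

# Step 2: equal-weight summation into a single composite edge layer.
composite = edge_layers.sum(axis=(0, 1))  # shape: (64, 64)

# Step 3 (one option): watershed segmentation treating the composite as a
# topographic surface, so strongly reinforced edges become region boundaries.
segments = watershed(composite)

# The persistent boundary column carries far more aggregated edge response
# than a column in the noisy interior.
print(composite[:, 31].mean(), composite[:, 10].mean())
```

The design point is the summation itself: random noise produces edge responses at different locations on different dates, so its contributions stay small in the composite, while real field boundaries accumulate across all 28 layers.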
The 4th step of the analysis combined two different classification approaches within eCognition. A machine learning classifier, a classification and regression tree (CART), was trained on various NDVI-based features to differentiate between cultivated and uncultivated areas. In addition, a threshold-based (or knowledge-based) classification was applied to address sliver polygons along the field boundaries that the CART had incorrectly classified as crop fields.
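The paper's CART classifier was trained inside eCognition; as a rough open-source analogue, scikit-learn's DecisionTreeClassifier implements the CART algorithm. In the sketch below, the two NDVI features, the synthetic class distributions and the minimum-area rule for slivers are all hypothetical stand-ins, not the paper's actual feature set or thresholds:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 200
# Hypothetical per-object NDVI features: cultivated objects get a higher
# seasonal mean NDVI and a larger NDVI range across acquisition dates.
cultivated = np.column_stack([
    rng.normal(0.6, 0.1, n),    # mean NDVI over the season
    rng.normal(0.4, 0.1, n),    # NDVI range (max - min) across dates
])
uncultivated = np.column_stack([
    rng.normal(0.2, 0.1, n),
    rng.normal(0.1, 0.05, n),
])
X = np.vstack([cultivated, uncultivated])
y = np.array([1] * n + [0] * n)  # 1 = cultivated, 0 = uncultivated

# CART classifier for the cultivated/uncultivated split.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# A knowledge-based rule layered on top: reject sliver objects below a
# (hypothetical) minimum area regardless of the CART label.
MIN_AREA_PX = 25

def classify(obj_features, area_px):
    if area_px < MIN_AREA_PX:
        return 0  # treat slivers as uncultivated
    return int(clf.predict([obj_features])[0])

print(classify([0.65, 0.45], area_px=400))
```

Combining the two stages this way mirrors the paper's idea: the learned tree handles the general separation, while the explicit rule catches a known failure mode (slivers) that the tree gets wrong.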
In a final step, noise objects were removed from the classification. These objects were related to agricultural infrastructure and areas affected by soil conditions.
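This noise-removal step was likewise expressed as eCognition rules; a generic way to capture the same idea is a minimum-mapping-unit filter on connected components. In this sketch the toy mask, the blob sizes and the 10-pixel threshold are all invented for illustration:

```python
import numpy as np
from scipy import ndimage

# Toy classified mask: 1 = crop field, 0 = background. The small blob
# stands in for noise such as farm infrastructure (illustrative only).
mask = np.zeros((20, 20), dtype=int)
mask[2:12, 2:12] = 1     # large field object (100 px)
mask[15:17, 15:17] = 1   # small noise object (4 px)

# Label connected components and keep only those above a minimum size.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
MIN_SIZE = 10  # hypothetical minimum mapping unit, in pixels
keep = [i + 1 for i, s in enumerate(sizes) if s >= MIN_SIZE]
cleaned = np.isin(labels, keep).astype(int)

print(cleaned.sum())  # 100 -- only the large field survives
```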
Of the various segmentation workflows tested, the WS approach interestingly outperformed both the MRS and MTS results, yielding an impressive overall accuracy of 92.9% based on Canny-derived edges and 92.7% based on the Scharr-derived edges. A likely cause for this is the multi-temporal approach – “Employing multi-temporal imagery reinforces object edges (in this case field boundaries), while potentially reducing the noise within a field as noise is unlikely to be present in the same location over multiple dates”.
The work done by the University of Stellenbosch not only demonstrates a novel approach to analyzing multi-temporal EO data as it pertains to delineating crop fields, it also highlights two wonderful aspects of the eCognition software.
First, there is the ability to create flexible rule sets that combine multiple classification approaches, machine learning and knowledge-based classification, into a single workflow. So much of what we read today pits classification approaches against one another; it is great to see that the authors have taken the best of each and combined them here.
Secondly, there is the ability to look beyond the initial input data, using layer operation tools to apply edge extraction algorithms and create data from data. It is always important to look beyond the initial (optical) data, since sometimes what we are looking for is hidden between the cracks, or in this case, in the edges!
If you are interested in this work, please see the link to the publication provided by the authors and also check out the blog Making Sense of Remote Sensing by Adriaan Van Niekerk which examines a variety of remote sensing topics.