Under the big data paradigm, many national mapping programs feel pressure to increase the frequency and reduce the latency of national-scale map products. Conterminous United States (CONUS)-scale land cover programs are especially challenged, given the efficiencies required to produce classified maps at this scale, let alone annual products built from rulesets or imagery representing near-current conditions. Data users expect accurate, near-real-time data that they can use to update their near-real-time models and processes. That is the goal, and the game is on! New satellite and aerial platforms, computing technologies, and models are speeding production and increasing accuracy, aided by cloud computing, state-of-the-art compositing and change detection algorithms, artificial intelligence, machine learning, new sensors, and space-, aerial-, and ground-based lidar. In this special session, representatives from LANDFIRE, NLCD, and LCMAP will showcase processes and technology being applied to get mapped products out the door faster and better.
Presentation Abstracts
NLCD: Balancing accuracy and methodology innovation with increasing production frequency
Jon Dewitz - USGS
Abstract
The National Land Cover Database (NLCD) has increased its release frequency to every other year while dramatically expanding its product suite. NLCD has also reduced latency to roughly a year after imagery acquisition. Together, these changes amount to a more than fifteenfold increase in scope compared with ten years ago. With continued interest in yearly releases, this presentation will walk through the trade-offs that come with a potential increase in frequency and possible solutions to meet user needs.
Mapping Disturbance for the Conterminous United States in Less than Six Months: Exploring Improvements in Processing Power, Image Compositing, and Improved Change Detection Algorithms
Brian Tolk - KBR contractor to USGS EROS Center
Abstract | Presentation
Mapping disturbance across national-scale landscapes is a significant undertaking involving several processes and dependencies. The LANDFIRE team has a long history of mapping disturbance dating back to 2001. Computing power, data access, and change-detection algorithms have been improving since our first disturbance products were produced; however, substantial advances have been made over the past few years in image processing and change detection development. Image processing advances include a shift away from the best-pixel algorithm to percentile-based image composites, which predominantly results in less blemished and more complete composite imagery, reducing the editing time of the mapping analyst. Additionally, using percentile-based change indices (e.g., dNDVI and dNBR) has improved the ability of analysts to detect and confirm more varied and subtle types of disturbance. As a result, we have been able to make more accurate annual disturbance maps in less time. These improvements are apparent when comparing past and current disturbance products, and they reduce the time needed to complete mapping for a full disturbance year. Annual disturbance products and associated attributes are the first spatial datasets required in a workflow that feeds into LANDFIRE’s other mapping efforts, such as vegetation type, cover, height, and fuels characteristics. High-quality products delivered in a timeframe supportive of these efforts are essential to LANDFIRE.
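The two techniques named above can be sketched compactly. This is an illustrative NumPy toy, not the LANDFIRE production code: a per-pixel percentile composite (the median suppresses residual cloud and shadow blemishes better than a single "best pixel") and a differenced Normalized Burn Ratio (dNBR) computed between two composites. Function names and the array layout are assumptions for the example.

```python
import numpy as np

def percentile_composite(stack, q=50):
    """Per-pixel percentile composite of a seasonal image stack.

    stack: (n_scenes, rows, cols) array with np.nan marking masked
    clouds/shadows. Taking a percentile (e.g., the median, q=50)
    across scenes yields a less blemished composite than choosing
    a single 'best' pixel.
    """
    return np.nanpercentile(stack, q, axis=0)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Differenced Normalized Burn Ratio between two composites.

    NBR = (NIR - SWIR) / (NIR + SWIR); dNBR = NBR_pre - NBR_post.
    Larger positive dNBR indicates a stronger vegetation loss signal.
    """
    nbr_pre = (pre_nir - pre_swir) / (pre_nir + pre_swir)
    nbr_post = (post_nir - post_swir) / (post_nir + post_swir)
    return nbr_pre - nbr_post
```

In practice the same percentile logic is applied band by band, and thresholds on dNBR (or dNDVI) flag candidate disturbance pixels for analyst review.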
A Random Forest-Based Commission Error Filter for LANDFIRE Disturbance Mapping
Sanath Sathyachandran Kumar ASRC Federal Data Solutions, contractor to USGS EROS Center
Abstract | Presentation
The Landscape Fire and Resource Management Planning Tools (LANDFIRE) Program is an interagency collaboration that produces vegetation, wildland fuel, and fire regime data products for CONUS and OCONUS. The accurate mapping of annual disturbance (change) is crucial for LANDFIRE (LF) because this data set informs subsequent downstream products (i.e., vegetation and fuels) based upon the type and magnitude of change. LF disturbance is mapped using established change detection algorithms applied to seasonal Landsat observation-based composites and other processing steps, as well as detailed review by human analysts for quality assurance purposes. Field-mapped events and satellite-derived fire program data also contribute significantly to LF disturbance products. Here we present, for the first time, annual and regional accuracy assessments of LF disturbance data sets. The accuracies were computed by comparison against the Land Change Monitoring, Assessment, and Projection (LCMAP) reference data set, which was developed to quantify accuracy metrics for LCMAP products and was independently analyzed by a set of expert interpreters to detect change between 1984 and 2018 using high-resolution imagery. Results indicate that CONUS-wide accuracy for LANDFIRE disturbance maps generally increased over the years 2013-2016. The overall accuracy across CONUS and the study period was approximately 97%. The undisturbed class user’s and producer’s accuracies were 99% and 97%, respectively, while the disturbed class user’s and producer’s accuracies were 48% and 70%, respectively. The percentage of pixels that changed at least once in the reference data within this time period was about 4%. Regional (LF tile-wise) variations in accuracy metrics are discussed, along with the challenges of using the reference data, the algorithms, and variations in the type of change across regions. Results of this work are expected to be useful to end users and to improve the application of our algorithms.
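The metrics reported above come from a standard confusion-matrix assessment. As a minimal sketch (the function name and NumPy approach are illustrative, not the assessment code used in the study): overall accuracy is the diagonal fraction of the matrix, user's accuracy (1 − commission error) is read along the mapped rows, and producer's accuracy (1 − omission error) along the reference columns.

```python
import numpy as np

def accuracy_metrics(reference, mapped, classes=(0, 1)):
    """Confusion-matrix accuracy metrics for a thematic map.

    reference, mapped: 1-D arrays of class labels (e.g., 0 =
    undisturbed, 1 = disturbed). Returns overall accuracy plus
    per-class user's accuracy (correct / all pixels mapped as the
    class) and producer's accuracy (correct / all reference pixels
    of the class).
    """
    reference = np.asarray(reference)
    mapped = np.asarray(mapped)
    # cm[i, j]: pixels mapped as classes[i] that are classes[j] in reference
    cm = np.array([[np.sum((mapped == mi) & (reference == rj))
                    for rj in classes] for mi in classes], dtype=float)
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)      # row-wise: commission
    producers = np.diag(cm) / cm.sum(axis=0)  # column-wise: omission
    return overall, users, producers
```

With a rare disturbed class (about 4% of reference pixels here), overall accuracy can be very high even when disturbed-class user's accuracy is modest, which is why the per-class metrics are reported separately.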
The LANDFIRE image-based annual prototype: Detailed annual updates to vegetation maps for the United States using machine learning
Daryn Dockter - LANDFIRE Vegetation Specialist, KBR Contractor to USGS EROS
Abstract | Presentation
Background/Question/Methods
LANDFIRE is a national program that produces over 20 spatial products including over 700 vegetation types, as well as vegetation cover and height, for all lands at a 30-meter resolution. Historically, LANDFIRE has produced these maps using two methods: a base-layer method consisting of machine learning classification and regression tree (CART) models based on field plots and Landsat image composites (versions LF 2001 and LF 2016), and intervening update layers developed by combining mapped disturbances with rulesets representing changes to cover and height. However, with the growth of cloud-based image and lidar processing, LANDFIRE anticipates moving towards an annual mapping approach similar to the base layer method. Changes to vegetation lifeform, cover, and height would be modeled using satellite imagery rather than rulesets. Such a feat would compress what was once a multi-year mapping process into a few months, thus increasing frequency and reducing latency of map products. A prototype is in development that uses the most recent image composites available along with established machine learning models to update vegetation lifeform, cover, and height where change has been detected from LANDFIRE annual disturbance products. In this presentation we will describe the methods we have investigated and evaluate the results for multiple regions.
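The update logic described above, keep last year's values where nothing changed and re-model only where the annual disturbance product flags change, can be sketched as follows. This is a toy illustration, not the prototype itself: a k-nearest-neighbour regressor stands in for the program's CART/machine-learning models, and all function names and array layouts are assumptions.

```python
import numpy as np

def knn_predict(train_X, train_y, X, k=1):
    """Toy k-NN regressor standing in for a trained CART/ML model."""
    d = np.linalg.norm(X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return train_y[nearest].mean(axis=1)

def update_cover(prev_cover, composite, disturbed, train_X, train_y, k=1):
    """Re-model vegetation cover only where disturbance was detected.

    prev_cover: last year's percent-cover raster (rows, cols).
    composite:  current-year image composite (rows, cols, bands).
    disturbed:  boolean mask from the annual disturbance product.
    Undisturbed pixels keep their previous value; disturbed pixels
    are re-predicted from current imagery, turning a full remap into
    a targeted annual update.
    """
    out = prev_cover.astype(float).copy()
    if disturbed.any():
        out[disturbed] = knn_predict(train_X, train_y,
                                     composite[disturbed], k=k)
    return out
```

The same masked-update pattern would apply per product (lifeform, cover, height), each with its own trained model.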
Results/Conclusions
Preliminary results show that the modeled prototype outputs consistently reflect the pattern and amount of vegetation. This prototype is the first step in our planned scaled deployment of these methods for the entire country. Prioritization of vegetation lifeform, cover, and height in areas that have experienced significant disturbance and regrowth may help operationalize these methods for vegetation updates across the full landscape, which will also lead to better fuel mapping for fire behavior modeling.
Painting the landscape by number: The use of image segmentation to improve geospatial vegetation classification
Joshua J. Picotte - ASRC Federal Data Solutions, contractor to the USGS EROS Center
Abstract | Presentation
It is often easy to see the spatial patterns of vegetation variation on the landscape in moderate-resolution satellite data (e.g., Landsat). When attempting to model these patterns using machine learning (ML), however, what the eye can see often cannot be clearly replicated by the model. ML is highly dependent on the quality of the input data used for model development and of the remotely sensed data to which the models are applied; even state-of-the-art ML algorithms cannot overcome data limitations. Vegetation modelers are therefore often faced with the choice of either collecting additional high-quality training data or developing processes/rulesets that exploit satellite sensors’ spectral characteristics to fix poorly mapped vegetation classes. Because both options can be time consuming, resulting in production latency, we have developed open-source automated image segmentation tools. These tools were written in Python to enable automation and automatic scaling from desktop to cloud computing systems. Using the LANDFIRE lifeform products (i.e., tree, shrub, and herbaceous classes) as a test case, we will demonstrate how these segmentation tools better characterized the spatial information within satellite data and improved vegetation classification by 1) overlapping plots with segments to increase training data size and heterogeneity and 2) using segmentations as a majority filter. The resulting increase in classification accuracy, coupled with the decrease in development time for the LANDFIRE lifeform product, will help facilitate LANDFIRE’s goal of annual data releases. We envision that these tools could be incorporated into any thematic classification framework that uses geospatially referenced training data.
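The second use of segmentation named above, a majority filter, can be sketched in a few lines. This is an illustrative NumPy toy, not the program's open-source tools: given a per-pixel classification and a segment-label raster from any image segmentation, each pixel is reassigned the most frequent class within its segment, which removes salt-and-pepper noise while respecting landscape boundaries.

```python
import numpy as np

def segment_majority_filter(classes, segments):
    """Replace each pixel's class with the majority class of its segment.

    classes:  integer class raster (e.g., lifeform codes for tree,
              shrub, herbaceous).
    segments: integer segment-label raster of the same shape, produced
              by an image segmentation of the satellite composite.
    """
    out = np.empty_like(classes)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        vals, counts = np.unique(classes[mask], return_counts=True)
        out[mask] = vals[np.argmax(counts)]  # most frequent class wins
    return out
```

The first use, overlapping field plots with segments so that every pixel in a plot's segment inherits the plot label, follows the same mask-per-segment pattern on the training side.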
Annual Monitoring of Land Cover Change: The Benefits and Challenges of Lowering Latency
Jesslyn Brown - USGS
Abstract
Advancements in sensor technologies, algorithms, and computing resources are fueling a revolution in monitoring the Earth. In parallel, there is increasing demand for up-to-date, detailed land change information for studies related to climate change, carbon dynamics, Earth systems, and ecosystem sustainability. The USGS has implemented a monitoring capability called Land Change Monitoring, Assessment, and Projection (LCMAP) to provide regularly updated land cover and land cover change at 30-m spatial resolution covering multiple decades. LCMAP involves an integrated framework of data infrastructure, processing, products, and analysis to support Earth science investigation, assessment of current and past change, and projection of future land cover state. Several Collections have been released for the conterminous U.S. (CONUS), with Collection 1.3 (1985–2021) planned for mid-2022.
Processing involves ingesting Landsat Analysis Ready Data, processing time-series data, producing and distributing science products, and performing analysis to support advanced modeling and assessments. The annual product suite has been updated with a lower latency than seen with other national land cover products. The current goal for annual delivery of science products is within six months, which is well supported by an on-premises automated system. The USGS is migrating from the current architecture to cloud computing to support evolution toward additional input sources and lower monitoring latency. We anticipate that monitoring will take three months or less after migrating to cloud computing, where large processing efficiencies enable faster production of land cover change products.
We validate products using independent, randomly sampled reference plots that provide quantitative measures of product accuracy, uncertainty, and statistical area estimation. Collection 1 land cover has an overall accuracy (± standard error) of 82.5% (±0.2%) across all years for eight land cover classes; accuracy is consistent year to year but varies regionally, with higher accuracy in the western U.S. Updating reference data is a significant challenge for continuous monitoring.