A deep transfer learning framework for mapping high spatiotemporal resolution LAI

Junxiong Zhou, Qi Yang, Licheng Liu, Yanghui Kang, Xiaowei Jia, Min Chen, Rahul Ghosh, Shaomin Xu, Chongya Jiang, Kaiyu Guan, Vipin Kumar, Zhenong Jin

Research output: Contribution to journal › Article › peer-review


Abstract

Leaf area index (LAI) is an important variable for characterizing vegetation structure. Contemporary satellite-based LAI products with moderate spatial resolution, such as those derived from MODIS observations, offer unique opportunities for large-scale monitoring but are insufficient for resolving heterogeneous landscapes. Although detailed LAI maps can be derived from high-resolution satellite observations, the low revisit frequency and the presence of cloud cover disrupt the temporal continuity of these high-resolution data; as a result, most current LAI inversion models for high spatial resolution data operate on single pixels and dates without fully utilizing temporal information. Moreover, LAI estimation models trained solely on satellite products or simulations are impeded by inconsistencies between field and satellite LAI that arise from local atmospheric, soil, and canopy conditions, while in-situ LAI measurements are too sparse to support large-scale missions. To address these challenges, this study proposes a new framework based on deep transfer learning, which includes three key features that contribute to its high performance. First, a Bi-directional Long Short-Term Memory (Bi-LSTM) model is pre-trained using MODIS reflectance and MODIS LAI products to capture the general non-linear relationship between reflectance and LAI and to incorporate temporal dependencies as prior information, reducing the uncertainties associated with ill-posed inversion problems and noise. Second, the pre-trained Bi-LSTM is transferred from the satellite to the field scale by fine-tuning with sparse in-situ LAI measurements, overcoming issues arising from local inconsistencies. Third, Landsat time-series images reconstructed by fusing MODIS and Landsat reflectance are used as inputs to the Bi-LSTM to generate high-quality Landsat LAI products at both high spatial and temporal resolutions. To validate the proposed approach, field LAI measurements were collected at nine locations across the contiguous U.S. from 2000 to 2018, covering three land cover types: croplands, grasslands, and forests. Quantitative assessments demonstrate that the Bi-LSTM outperforms three benchmarks, namely a PROSAIL-based Look-Up Table (LUT) method, a random forest-based LAI retrieval, and the MODIS LAI product (MCD15A3H), exhibiting lower RMSE and higher R² in most cases. Additionally, the Bi-LSTM predictions show lower random fluctuations than estimates from the LUT, random forest, and MODIS LAI, indicating the higher robustness of the proposed framework. The findings of this study highlight the value of transfer learning in the estimation of vegetation biophysical parameters: pre-training on abundant existing satellite products produces a generalized model, and transferring knowledge from in-situ measurements bridges the gap between satellite and field scales. By leveraging advanced transfer learning techniques and multi-source, multi-scale data, the proposed framework enables the production of long-term LAI maps at fine resolutions, facilitating downstream applications in regions characterized by high spatial heterogeneity.
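To make the two-stage transfer learning idea in the abstract concrete, the following is a minimal PyTorch sketch (not the authors' code): a Bi-LSTM maps a reflectance time series to an LAI time series, is first pre-trained on MODIS reflectance/LAI pairs, and is then fine-tuned on sparse in-situ LAI. The layer sizes, the placeholder loaders modis_loader and insitu_loader, and the choice to freeze the recurrent layers during fine-tuning are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the pre-train / fine-tune workflow described in the abstract.
# Shapes, hyperparameters, and data loaders are illustrative assumptions.

import torch
import torch.nn as nn


class BiLSTMLAI(nn.Module):
    """Maps a reflectance time series (T timesteps x n_bands) to an LAI time series."""

    def __init__(self, n_bands: int = 7, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # one LAI value per timestep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, n_bands) surface reflectance; returns (batch, T) LAI
        h, _ = self.lstm(x)
        return self.head(h).squeeze(-1)


def train(model, loader, epochs, lr):
    # Generic supervised regression loop over (reflectance, LAI) pairs.
    opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for refl, lai in loader:  # refl: (batch, T, n_bands), lai: (batch, T)
            opt.zero_grad()
            loss = loss_fn(model(refl), lai)
            loss.backward()
            opt.step()


model = BiLSTMLAI()

# Stage 1: pre-train on abundant MODIS reflectance / MODIS LAI pairs
# (modis_loader is a hypothetical DataLoader of such samples).
# train(model, modis_loader, epochs=50, lr=1e-3)

# Stage 2: transfer to the field scale by fine-tuning on sparse in-situ LAI,
# e.g. freezing the recurrent layers and updating only the regression head.
# for p in model.lstm.parameters():
#     p.requires_grad = False
# train(model, insitu_loader, epochs=20, lr=1e-4)
```

At inference time, the fine-tuned model would be applied to fused MODIS-Landsat reflectance time series to produce LAI maps at Landsat resolution, per the third component of the framework.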

Original language: English (US)
Pages (from-to): 30-48
Number of pages: 19
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 206
DOIs
State: Published - Dec 2023

Bibliographical note

Publisher Copyright:
© 2023 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS)

Keywords

  • Deep transfer learning
  • High spatial resolution
  • Leaf area index
  • Long short-term memory

