PLFM: Pixel-Level Merging of Intermediate Feature Maps by Disentangling and Fusing Spatial and Temporal Data for Cloud Removal

Alessandro Sebastianelli; Maria Pia Del Rosso; Silvia Liberata Ullo
2022-01-01

Abstract

Cloud removal is a relevant topic in remote sensing, fostering medium- and high-resolution optical (OPT) image usability for Earth monitoring and study. Recent applications of deep generative models and sequence-to-sequence-based models have proved their capability to advance the field significantly. Nevertheless, some gaps remain: the amount of cloud coverage, temporal changes in the landscape, and the density and thickness of clouds need further investigation. In this work, we fill some of these gaps by introducing an innovative deep model. The proposed model is multimodal, relying on both spatial and temporal sources of information to restore the whole optical scene of interest. We use the outcomes of both temporal-sequence blending and direct translation from synthetic aperture radar (SAR) to optical images to obtain a pixel-wise restoration of the whole scene. The reconstructed images preserve scene details without resorting to a considerable portion of a clean image. The advantage of our approach is demonstrated under various atmospheric conditions, tested on different datasets. Quantitative and qualitative results prove that the proposed method obtains cloud-free images while coping with landscape changes.
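The record contains only the abstract, so the exact PLFM architecture is not specified here. As a rough illustration of the pixel-level merging idea the abstract describes, the sketch below fuses the outputs of two hypothetical branches (a temporal-sequence reconstruction and a SAR-to-optical translation) with a learned per-pixel weight map. The class name PixelLevelFusion, the 1x1-convolution weight head, and the 13-band Sentinel-2-like tensor shapes are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PixelLevelFusion(nn.Module):
    """Hypothetical sketch of pixel-level merging of two branch outputs:
    a temporal-sequence reconstruction and a SAR-to-optical translation.
    The actual PLFM architecture is not detailed in this record."""

    def __init__(self, channels: int = 13):
        super().__init__()
        # A 1x1 convolution produces a per-pixel, per-band weight map that
        # decides, at each location, how much to trust each branch.
        self.weight_head = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, temporal_out: torch.Tensor,
                sar2opt_out: torch.Tensor) -> torch.Tensor:
        # Stack the two intermediate reconstructions along the channel axis.
        stacked = torch.cat([temporal_out, sar2opt_out], dim=1)
        # Mixing weights squashed into [0, 1].
        alpha = torch.sigmoid(self.weight_head(stacked))
        # Convex combination: alpha favours the temporal branch,
        # (1 - alpha) favours the SAR-to-optical translation.
        return alpha * temporal_out + (1.0 - alpha) * sar2opt_out


# Usage on dummy tensors shaped (batch, bands, height, width).
fusion = PixelLevelFusion(channels=13)
temporal = torch.rand(1, 13, 256, 256)    # output of a temporal-blending branch
translated = torch.rand(1, 13, 256, 256)  # output of a SAR-to-optical branch
cloud_free = fusion(temporal, translated)
print(cloud_free.shape)  # torch.Size([1, 13, 256, 256])
```

A per-pixel convex combination of this kind lets cloud-covered pixels lean on the SAR-derived reconstruction while clear or thinly veiled pixels retain the temporally blended optical signal; the real model may use a different fusion rule.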

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12070/55277
Citations
  • PMC: ND
  • Scopus: 25
  • Web of Science: 24