Multi-Modal Deep Learning for 2D/3D Mapping with Satellite Time Series Images: From Floods to Forests to Cities
Time: Tue 2025-11-04 14.00
Location: Kollegiesalen, Brinellvägen 8, Stockholm
Video link: https://kth-se.zoom.us/j/68698558153
Language: English
Subject area: Geodesy and Geoinformatics, Geoinformatics
Doctoral student: Ritu Yadav, Geoinformatics
Opponent: Professor Jocelyn Chanussot, Grenoble Institute of Technology, France
Supervisors: Professor Yifang Ban, Geoinformatics; Associate Professor Andrea Nascetti, Geoinformatics
Abstract
Driven by climate change and rapid urbanization, there is an urgent need for reliable large-scale Earth observation (EO) products that capture both two-dimensional (2D) and three-dimensional (3D) characteristics of the Earth’s surface. Modern satellite missions, particularly Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 MultiSpectral Instrument (MSI), provide freely accessible global-scale imagery with frequent revisits, offering new opportunities for large-scale mapping of floods, urban growth, and forest dynamics. Concurrently, deep learning (DL) has become the state of the art for EO analysis. However, challenges remain in ensuring generalization across regions, reducing reliance on labeled data, extracting 3D features from mid-resolution imagery, and enhancing reliability through uncertainty estimation. This thesis addresses these challenges by proposing novel DL models for 2D and 3D applications, improving model generalizability, curating benchmark datasets, and integrating uncertainty estimation into EO tasks.
For 2D mapping, this thesis focuses on flood mapping as the primary application. Two supervised segmentation networks were developed for the task. The first, Attentive U-Net, enhances Sentinel-1 VV, VH, and VV/VH ratio inputs using spatial and channel-wise self-attention. The second, a dual-stream Fusion Network, integrates Sentinel-1 data with a DEM and permanent water masks for improved contextual learning. Both outperformed supervised baselines on the Sen1Floods11 dataset, achieving 3–5% higher IoU. To further improve generalizability and reduce dependence on labels, an unsupervised model (CLVAE) was developed that learns spatiotemporal features from Sentinel-1 SAR time series using reconstruction and contrastive learning. Flood maps are derived by detecting changes in the latent feature distributions of pre- and post-flood time series images. CLVAE achieved 70% IoU, surpassing unsupervised baselines by at least 15% IoU and outperforming supervised models when tested on unseen flood sites, demonstrating stronger generalization.
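As a rough illustration of the attention-enhanced segmentation blocks described above, the sketch below applies channel-wise and spatial attention to a 3-channel Sentinel-1 input (VV, VH, VV/VH ratio). This is a minimal PyTorch sketch, not the thesis implementation: it uses squeeze-and-excitation style channel attention and a CBAM-style spatial gate as stand-ins for the self-attention modules, and all module names, layer sizes, and the reduction ratio are illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation): one conv block with channel-wise
# and spatial attention on a 3-channel Sentinel-1 input (VV, VH, VV/VH ratio).
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """Spatial gate computed from pooled channel statistics (illustrative)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class AttentiveBlock(nn.Module):
    """Conv block followed by channel and spatial attention."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(self.conv(x)))


# Example: a batch of 256x256 Sentinel-1 patches with VV, VH, and VV/VH channels.
x = torch.randn(2, 3, 256, 256)
features = AttentiveBlock(in_ch=3, out_ch=64)(x)
print(features.shape)  # torch.Size([2, 64, 256, 256])
```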
For 3D mapping, multiple advances were made. A hybrid CNN-transformer architecture (T-SwinUNet) was proposed for large-scale building height estimation from 12-month Sentinel-1 and Sentinel-2 time series. Leveraging multi-modal spatio-temporal features and multitask learning, it achieved 1.89 m RMSE at 10 m resolution and generalized across diverse European cities, outperforming the existing global height product GHSL-Built-H.
To further improve building height estimation accuracy, the M4Heights benchmark dataset was released, covering sites in Estonia, Switzerland, and the Netherlands. The dataset combines 10 m Sentinel-1&2 time series with 1 m aerial orthophotos, enabling multi-scale and multitask learning for super-resolution building height estimation. Baseline evaluations confirmed its benefits, and the open dataset supports fair model comparisons and encourages further innovation in the field.
Extending 3D mapping from the built environment to natural ecosystems, the BioMassters benchmark dataset for above-ground forest biomass estimation was curated and released. It covers 8.5 million hectares of Finnish forests, with labels derived from high-resolution LiDAR data and inputs from Sentinel-1&2 time series. The dataset was released alongside a global challenge that drew over 1,000 model submissions; the results demonstrated the superiority of DL methods over the coarse 100 m ESA CCI Biomass product, enabling biomass mapping at 10 m resolution and underscoring the importance of open, DL-ready datasets.
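The sketch below illustrates, in heavily simplified form, the multitask spatio-temporal learning idea used above for building height estimation: monthly Sentinel-1/2 composites are aggregated and the network jointly predicts a per-pixel height map and a building-footprint mask. It is an assumption-laden sketch, not T-SwinUNet; the simple temporal mean-pooling, channel counts, and loss weighting are illustrative choices only.

```python
# Minimal sketch (illustrative, not T-SwinUNet): multitask learning on a monthly
# Sentinel-1/2 time series, jointly predicting building height (regression) and
# building footprints (segmentation).
import torch
import torch.nn as nn


class MultiTaskHeightNet(nn.Module):
    def __init__(self, in_ch: int = 6, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.height_head = nn.Conv2d(hidden, 1, 1)     # metres per pixel
        self.footprint_head = nn.Conv2d(hidden, 1, 1)  # building / background logit

    def forward(self, x):
        # x: (batch, time, channels, H, W); aggregate the 12 monthly composites
        feats = self.encoder(x.mean(dim=1))
        return self.height_head(feats), self.footprint_head(feats)


def multitask_loss(h_pred, h_true, f_logit, f_true, w: float = 0.5):
    """Weighted sum of height regression and footprint segmentation losses."""
    reg = nn.functional.smooth_l1_loss(h_pred, h_true)
    seg = nn.functional.binary_cross_entropy_with_logits(f_logit, f_true)
    return reg + w * seg


# Example: 2 patches, 12 monthly composites, 6 stacked S1/S2 channels, 128x128 pixels.
x = torch.randn(2, 12, 6, 128, 128)
model = MultiTaskHeightNet()
h, f = model(x)
loss = multitask_loss(h, torch.rand_like(h) * 30, f, (torch.rand_like(f) > 0.7).float())
print(h.shape, f.shape, loss.item())
```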
The thesis further advances 3D mapping by integrating uncertainty quantification into large-scale regression tasks for building height, canopy height, and biomass estimation at 10 m resolution. Two uncertainty quantification approaches were investigated: (i) a Gaussian uncertainty model, which assumes symmetric error distributions, and (ii) a quantile uncertainty model, which provides asymmetric intervals and captures the direction of uncertainty. Both methods achieved accuracy comparable to deterministic baselines while additionally providing calibrated confidence intervals. Importantly, they outperformed existing global canopy and biomass products that include uncertainty information. The Gaussian model performed best for canopy height and biomass, while the quantile model proved more robust for building height, where the data follow strongly skewed, non-Gaussian distributions. Together, these advances establish uncertainty-aware regression as a critical step toward making EO-derived 3D products more trustworthy for real-world applications.
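A minimal sketch of the two uncertainty formulations, under common assumptions: a Gaussian head predicts a per-pixel mean and variance and is trained with the negative log-likelihood, yielding symmetric intervals, while a quantile head is trained with the pinball loss and yields asymmetric intervals. The quantile levels, tensor shapes, and value ranges below are illustrative, not values taken from the thesis.

```python
# Minimal sketch of the two loss functions behind the Gaussian and quantile
# uncertainty models; illustrative quantile levels and shapes.
import torch


def gaussian_nll(mean, log_var, target):
    """Per-pixel Gaussian negative log-likelihood with a predicted variance."""
    var = torch.exp(log_var)
    return 0.5 * (log_var + (target - mean) ** 2 / var).mean()


def pinball_loss(pred, target, quantiles=(0.05, 0.5, 0.95)):
    """Pinball (quantile) loss; pred has one channel per quantile level."""
    losses = []
    for i, q in enumerate(quantiles):
        err = target - pred[:, i : i + 1]
        losses.append(torch.maximum(q * err, (q - 1) * err).mean())
    return sum(losses)


# Example with random per-pixel height predictions on a 64x64 patch.
target = torch.rand(2, 1, 64, 64) * 30
mean, log_var = torch.rand_like(target) * 30, torch.zeros_like(target)
print(gaussian_nll(mean, log_var, target).item())

q_pred = torch.rand(2, 3, 64, 64) * 30  # 5th, 50th, 95th percentile channels
print(pinball_loss(q_pred, target).item())
```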
In conclusion, this thesis addresses key challenges in large-scale 2D and 3D EO tasks, spanning flood detection, building height estimation, biomass estimation, and canopy height estimation. By advancing DL models that leverage Sentinel-1&2 time series imagery, integrating uncertainty quantification into the models, and releasing benchmark datasets, this thesis makes major contributions to scalable, reliable, and reproducible EO data products. These advances enhance the trustworthiness of EO-derived products for real-world applications, supporting sustainable urban planning, climate resilience, and the monitoring of the Sustainable Development Goals.