Previous studies have employed various methods, such as experimental and hydrodynamic models, to assess and estimate the extent of flood inundation (Teng et al. 2017). In a different approach, Azizian and Brocca (2020) utilized a topographic map with a scale of 1:1000 and an interpolation method to generate a Digital Elevation Model (DEM) and channel geometry. This allowed them to simulate floods and create an inundation map.
In order to prepare a flood sensitivity map, Entezari et al. (2020) split their flood inventory into two groups: training and validation. They considered 11 factors that influence flood occurrence, including geology, land use, distance from rivers, drainage density, slope, aspect, land curvature, topographic wetness index (TWI), altitude classes, and average rainfall. Using the frequency ratio model, the weight of evidence method, and the flood point density in each layer, they calculated a weight for each layer. Finally, by overlaying the weighted layers, they obtained the final flood sensitivity map for Kermanshah province in Iran, divided into five categories ranging from very low to very high sensitivity.
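As a rough illustration of the frequency-ratio weighting mentioned above, the minimal sketch below computes a frequency ratio for each class of a single conditioning factor; the rasters, class layout, and values are hypothetical stand-ins, not data from the cited study.

```python
import numpy as np

def frequency_ratio(factor_classes, flood_mask):
    """Frequency ratio per class: (share of flood cells falling in the class)
    divided by (share of all cells belonging to the class)."""
    ratios = {}
    total_cells = factor_classes.size
    total_flood = flood_mask.sum()
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        flood_share = flood_mask[in_class].sum() / total_flood
        area_share = in_class.sum() / total_cells
        ratios[cls] = flood_share / area_share
    return ratios

# Hypothetical example: a slope raster reclassified into three classes and a
# binary mask of observed flood points rasterized to the same grid.
rng = np.random.default_rng(42)
slope_classes = rng.integers(1, 4, size=(100, 100))
flood_mask = rng.random((100, 100)) < 0.05
print(frequency_ratio(slope_classes, flood_mask))
```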
River gauge data and simulation models can be used to predict flooded areas, although they may not pinpoint the exact locations of flooding. Conversely, conducting land surveys of flooded areas is a daunting task, especially for vast regions. Satellite flood maps, on the other hand, serve as a fundamental tool for determining the spatial distribution and extent of floods. Consequently, recent work has shifted towards remote sensing methods that use optical and radar images to gather the information needed for flood inundation mapping.
There are two main types of satellite observations used for flood monitoring: optical and radar images. Optical sensors such as MODIS, AVHRR, Landsat, SPOT, IKONOS, GeoEye, Sentinel-2, and WorldView offer medium- to high-resolution images for monitoring flooding events both before and after a crisis, aiding in the extraction of flood maps.
Mehmood et al. (2021) employed Landsat 5, 7, and 8 to map flood areas, evaluating the results with an error (confusion) matrix for nine flood events worldwide; overall accuracy ranged between 74 and 89 percent. Landsat 8, in particular, plays a crucial role in preparing flood inundation maps, and the Landsat archive has been freely accessible to the public since 2008.
Goyal et al. (2023) analyzed the inundation patterns of 64 Ramsar wetland sites in China, using Landsat images from 1991 to 2020 to build a time series. To do so, they drew on imagery from three missions: Landsat 5, 7, and 8. Landsat 5 and 7 provide four visible and near-infrared bands, two shortwave-infrared bands, and one thermal band, whereas Landsat 8 provides 11 bands.
Water indices such as the Normalized Difference Water Index (NDWI) are often used to extract water from optical images (Vavassori et al., 2022). The index exploits the fact that water strongly absorbs radiation in the near-infrared and longer wavelengths (Mehmood et al., 2021). However, according to Tarpanelli et al. (2022), although NDWI, which uses the green and NIR bands, is a well-known water index and effective in most cases, it tends to overestimate the water class in residential areas (Tavus et al., 2020).
To address this limitation, a more sensitive index, the Modified Normalized Difference Water Index (MNDWI), was introduced in 2006 to improve the detection of water bodies. MNDWI is computed from the green and SWIR bands and is used to highlight water bodies while suppressing features, such as built-up land, that are commonly confused with water in other water-mapping indices (Mehmood et al., 2021; Tavus et al., 2020).
NDWI and MNDWI are widely used band ratios for generating surface water maps. NDWI relies on the green and near-infrared bands of the electromagnetic spectrum, while MNDWI replaces the near-infrared band with the SWIR band. Both indices range between -1 and +1, with values above zero indicating water (Sivanpillai et al., 2021).
After a flood event, the NDWI and MNDWI values in the image are higher than before the flood. By applying a threshold, pixels corresponding to flooded areas can be separated from non-flooded areas (Sivanpillai et al., 2021).
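A minimal sketch of this index-and-threshold approach is given below, assuming the green, NIR, and SWIR reflectance bands of the pre- and post-flood scenes have already been read into NumPy arrays; the array names and random values are placeholders, not data from the cited studies.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: (Green - NIR) / (Green + NIR), in the range -1 to +1."""
    return (green - nir) / (green + nir + 1e-10)

def mndwi(green, swir):
    """Xu (2006) MNDWI: (Green - SWIR) / (Green + SWIR), in the range -1 to +1."""
    return (green - swir) / (green + swir + 1e-10)

# Placeholder reflectance arrays standing in for the pre- and post-flood scenes.
rng = np.random.default_rng(0)
green_pre, nir_pre, swir_pre = rng.random((3, 100, 100))
green_post, nir_post, swir_post = rng.random((3, 100, 100))

# Index values above zero are treated as water; differencing the two binary
# masks isolates pixels that became water after the event.
water_pre = ndwi(green_pre, nir_pre) > 0
water_post = ndwi(green_post, nir_post) > 0
newly_flooded = water_post & ~water_pre
print("Newly flooded pixels:", int(newly_flooded.sum()))
```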
In some studies, the Normalized Difference Vegetation Index (NDVI) is used to estimate the extent of vegetation damage. For instance, Shrestha et al. (2013) examined the impact of the 2006, 2008, and 2011 floods on agricultural products in the Missouri and Iowa regions using MODIS satellite images.
Solaimani et al. (2020) introduced a method to identify the extent of flood damage in Golestan province in April 2018. They used the NDVI derived from Sentinel-2 data for two time frames: early March 2017 and late April 2018. Sentinel-2 offers high-resolution optical imagery globally, with 13 spectral bands spanning the visible, near-infrared, and shortwave-infrared, spatial resolutions of 10, 20, and 60 m, and a swath width of 290 km. However, Sentinel-2 can only capture floods in daylight and under favorable weather conditions, because sunlight in the visible spectrum cannot penetrate clouds (Sivanpillai et al. 2021; Tarpanelli et al. 2022; Vavassori et al. 2022). Consequently, the chance of detecting the maximum flood extent is reduced when cloud cover coincides with satellite revisit times (Tarpanelli et al. 2022).
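The pre/post NDVI differencing used in such damage assessments can be sketched as follows; the reflectance arrays and the 0.2 damage threshold are illustrative assumptions rather than values from the cited studies.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-10)

# Hypothetical red and NIR reflectance arrays for the pre- and post-flood dates.
rng = np.random.default_rng(1)
red_pre, nir_pre, red_post, nir_post = rng.random((4, 200, 200))

# A drop in NDVI between the two dates is read as vegetation loss;
# the 0.2 threshold is only illustrative.
ndvi_drop = ndvi(red_pre, nir_pre) - ndvi(red_post, nir_post)
damaged = ndvi_drop > 0.2
print("Potentially damaged pixels:", int(damaged.sum()))
```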
Active radar sensors have the capability to penetrate clouds, dust, and rain commonly found in flood scenarios, enabling observations both day and night. Hence, they are the preferred choice for generating flood inundation maps (Singh and Pandey, 2021; Vanama et al. 2020; Kiran et al. 2019; Solaimani et al. 2020; Singha et al. 2020; Nghia et al. 2022; Tavus et al. 2020; Ghosh et al. 2022; Teng et al. 2017).
Singh and Pandey (2021) employed Sentinel-1 data to delineate flood-prone areas in Punjab before and after the August 2019 event. They filtered the Sentinel-1 Synthetic Aperture Radar (SAR) dataset to scenes acquired in Interferometric Wide (IW) swath mode with VV and VH polarizations in descending orbit, focusing on dates surrounding the flood event. Pre-flood analysis used SAR images from 13 March 2019 to 13 June 2019, while post-flood analysis used images from 21 August 2019 to 31 August 2019. The flood-affected areas were delineated by importing shapefiles into the Google Earth Engine (GEE) platform, and an area of 205.2 square kilometers was mapped as flood zones.
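On GEE, a scene selection of this kind (IW mode, VV/VH polarization, descending orbit, pre- and post-flood date windows) can be expressed roughly as in the sketch below, using the Python API; the bounding box is a hypothetical stand-in for the imported study-area shapefile.

```python
import ee

ee.Initialize()

# Hypothetical rectangle standing in for the imported study-area shapefile.
aoi = ee.Geometry.Rectangle([74.5, 30.5, 76.5, 31.8])

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
        .filterBounds(aoi)
        .filter(ee.Filter.eq('instrumentMode', 'IW'))
        .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))
        .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
        .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH')))

# Date windows taken from the study: pre- and post-flood composites.
pre_flood = s1.filterDate('2019-03-13', '2019-06-14').select('VV').median().clip(aoi)
post_flood = s1.filterDate('2019-08-21', '2019-09-01').select('VV').median().clip(aoi)
```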
Kiran et al. (2019) utilized Sentinel-1 Level-1 Ground Range Detected (GRD) products in VV polarization. The processing steps involved applying the precise satellite orbit file, thermal noise removal, radiometric calibration, speckle (salt-and-pepper) filtering, terrain correction, and conversion from linear to decibel scale.
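The final step of that chain, converting calibrated backscatter from linear power to decibels, is a simple logarithmic transform; a minimal NumPy sketch with hypothetical sigma-naught values:

```python
import numpy as np

def to_decibel(sigma0_linear):
    """Convert linear backscatter (sigma0) to decibels: 10 * log10(x)."""
    # Clip to a small positive value to avoid log of zero.
    return 10.0 * np.log10(np.clip(sigma0_linear, 1e-10, None))

sigma0 = np.array([0.5, 0.05, 0.005])   # hypothetical calibrated backscatter values
print(to_decibel(sigma0))               # approximately [-3.0, -13.0, -23.0] dB
```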
Some studies have employed a combination of optical and radar data to monitor floods and map flooded areas (Tavus et al. 2020; Ghosh et al. 2022; Tarpanelli et al. 2022). Tavus et al. (2020) combined Sentinel-2 optical data and Sentinel-1 SAR data to create a flood map for the August 2018 flood event in Ordu Province, Turkey. Tarpanelli et al. (2022) examined the effectiveness of Sentinel-1 and Sentinel-2 for flood detection in Europe, analyzing river discharge data at around 2,000 sites across Europe over a ten-year period. The findings revealed that, on average, 58% of flood events are detectable with Sentinel-1, but due to cloud cover only 28% can be observed with Sentinel-2.
Since the introduction of the GEE platform in 2010, many flood monitoring researchers have gravitated towards it. GEE is a web-based cloud computing platform developed by Google to tackle big-data analysis and enhance satellite image processing for large-scale applications, enabling the processing of remote sensing data over extensive areas and long-term environmental monitoring (Ghosh et al. 2022). This cloud-based platform provides users with access to a vast amount of data and facilitates its processing (Vanama et al. 2020; Nghia et al. 2022; Ghosh et al. 2022; Wu et al. 2019). The GEE data catalog encompasses diverse data from Landsat, Sentinel, MODIS, and NAIP (Wu et al. 2019).
Vanama et al. (2020) introduced the GEE4FLOOD framework to generate flood maps on the GEE cloud platform. GEE4FLOOD efficiently manages extensive data by utilizing multi-time SAR images in GEE along with an automated algorithm for flood map creation.
Moharrami et al. (2021) monitored the March 2019 flood event in Aqqala, a city in northern Iran, using Sentinel-1 images. Level-1 GRD products were acquired, and Otsu's thresholding algorithm was applied to distinguish flooded areas from other land covers in the region. A total of eight scenes were chosen for flood monitoring: one pre-flood, six during the flood, and one post-flood. The Shuttle Radar Topography Mission (SRTM) digital elevation model was used for terrain correction of the images, and the resulting flood map was overlaid on DEM and slope maps to analyze the distribution of flooded areas.
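Otsu's method selects the threshold that best separates the dark (water) and brighter (land) backscatter populations in the image histogram; a minimal sketch using scikit-image on a synthetic backscatter array in decibels (the values are illustrative, not the study's data):

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
# Hypothetical dB backscatter: open water clustered around -20 dB, land around -8 dB.
backscatter_db = np.concatenate([
    rng.normal(-20, 1.5, 5000),
    rng.normal(-8, 2.0, 15000),
]).reshape(100, 200)

threshold = threshold_otsu(backscatter_db)
water_mask = backscatter_db < threshold   # water is darker than the threshold
print(f"Otsu threshold: {threshold:.1f} dB, water pixels: {int(water_mask.sum())}")
```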
Singha et al. (2020) examined the temporal and spatial patterns of floods in Bangladesh from 2014 to 2018 using SAR images on the GEE platform. Wu et al. (2019) developed an automated approach for flood detection through the integration of lidar data and multi-temporal aerial images on Google Earth Engine. The aerial images were classified using a machine learning algorithm to identify inundated areas. Following the extraction of water clusters from unsupervised classification, misclassified pixels, such as tree shadows, buildings, and other topographic features, were eliminated.
Pourghasemi et al. (2020) utilized machine learning techniques, including boosted regression tree (BRT) and generalized linear model, to assess flood risk and susceptibility in specific areas using Sentinel3 images on Google Earth Engine. The study focused on four prominent districts in Fars province, revealing high to very high flood risk levels for critical infrastructures like hospitals, pharmacies, fire stations, ATMs, gas stations, and mosques in Shiraz. The research methodology involved four key steps: 1) gathering flood data from Sentinel 3 images on the GEE platform, 2) identifying and evaluating influential factors such as elevation, slope, surface curvature, Topographic Wetness Index (TWI), proximity to rivers and roads, drainage density, lithology, precipitation, land cover, and soil characteristics, 3) generating a flood susceptibility map validated with performance indicators, and 4) assessing flood risk on key natural units and vital infrastructures.
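As a rough illustration of the boosted regression tree step in such susceptibility workflows, the sketch below fits a gradient-boosting classifier to a table of conditioning factors; the column names, synthetic data, and model settings are assumptions, not the inputs of Pourghasemi et al. (2020).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a table of conditioning factors at sampled points.
rng = np.random.default_rng(3)
n = 2000
X = pd.DataFrame({
    'elevation': rng.normal(1200, 300, n),
    'slope': rng.uniform(0, 40, n),
    'twi': rng.normal(8, 2, n),
    'dist_to_river': rng.exponential(500, n),
    'drainage_density': rng.uniform(0, 5, n),
})
# Hypothetical rule for the synthetic labels: low, flat, wet cells near rivers flood more often.
logit = (0.002 * (1500 - X['elevation']) - 0.05 * X['slope']
         + 0.3 * X['twi'] - 0.002 * X['dist_to_river'])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X_train, y_train)

# Predicted probabilities serve as the susceptibility score for mapping.
print("AUC:", round(roc_auc_score(y_test, brt.predict_proba(X_test)[:, 1]), 3))
```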
DeVries et al. (2020) introduced an algorithm that leverages all accessible Sentinel-1 images along with historical Landsat images and supplementary data sources within Google Earth Engine to generate flood inundation maps. Singh and Pandey (2021) conducted an analysis of Sentinel-1 SAR data using Google Earth Engine to produce a flood inundation map.
Scheip and Wegmann (2021) used HazMapper, an open-source application built on Google Earth Engine, to create hazard maps, enabling users to derive and analyze maps based on Sentinel or Landsat data. The NDVI index in HazMapper was employed to detect areas where vegetation was damaged after a natural disaster.
Nghia et al. (2022) established a logical model in GEE using Sentinel-1 data and observed data from the Tan Chau hydrological station to estimate floods in the downstream Mekong River Basin.
Ghosh et al. (2022) conducted a web-based analysis to demonstrate the spatial analytical capabilities of GEE in flood-affected regions, while also examining socio-demographic impacts. The results were validated using Sentinel-2 data.
In recent times, there has been a growing trend towards using volunteered and public data in flood mapping. Vavassori et al. (2022) employed optical multispectral images and volunteered geographic information (VGI) to develop a flood inundation map using a semi-automated approach. To overcome the limitations of social media images, they used QField to gather the metadata needed for image classification; QField is an open-source, user-friendly application that integrates seamlessly with QGIS, making it convenient for collecting spatial data. For posts without an explicit location, the analysis focused on the content of the post itself, including comments, hashtags, likes, and shares, and Google Street View was used to retrieve the location information. The NDWI was then calculated to differentiate between water features, vegetation, and urban areas.