2 University of Chinese Academy of Sciences, Beijing 100049, China;
3 State Key Laboratory of Desert and Oasis Ecology, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China;
4 Jiangsu Centre for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China
Coastal zones are transitional areas that are sensitive and vulnerable to the interactions of sea, land, and air, including typhoons, storm surges, and other marine disasters. Irrational land development in coastal areas may increase the risk of marine disasters, and changes in impervious surfaces directly affect their vulnerability. In addition, using land cover to assess the vulnerability of coastal areas to marine disasters can show disaster-prone areas clearly and intuitively, thus guiding urban development planning and marine disaster prevention (Liu et al., 2018). Therefore, the efficient detection of land cover is important.
Remote sensing technology offers advantages for analyzing and monitoring land use/cover information because of its wide coverage, accuracy, and short data-collection revisit period. However, in tropical, subtropical, and other coastal areas, cloud and rain often make it difficult to obtain high-quality optical imagery, which seriously affects the accuracy of extracted land use/cover information (Wang et al., 1999; Watmough et al., 2011; Zhu and Woodcock, 2012). To solve this problem, synthetic aperture radar (SAR) data have been increasingly used for land use/cover classification (Ding and Li, 2011, 2014; Nunziata et al., 2014; Ding et al., 2015; Gou et al., 2016; Buono et al., 2017; Wang et al., 2017; Chen et al., 2018). SAR data are nearly free of cloud interference and can reflect the geometric structure, water content, canopy roughness, and other properties of ground objects (Greatbatch, 2012; Huang et al., 2017). In addition, SAR imagery allows researchers to distinguish urbanized land and bare land more accurately than optical imagery (Xu et al., 2017). However, the imaging mechanism employed makes SAR data susceptible to the influence of atmospheric and soil moisture, which generates relatively more noise. To achieve higher accuracy of land use/cover mapping in coastal areas with ample atmospheric water vapor (Gong, 2018; Gong et al., 2018a, b), methods that exploit the complementary advantages of different data sources by combining optical and radar data are widely used (Zhu et al., 2012; Lehmann et al., 2015).
In object-based image analysis (OBIA), setting a segmentation scale is a crucial step, and the quality of image segmentation directly affects the final classification accuracy (Wang et al., 2004). Over- and under-segmentation can cause pixels that belong to a single object to not be segmented into the same homogeneous patch. This will affect the subsequent series of classification steps. Thus, different segmentation scales may yield different classification results (Kim et al., 2011; Zhang et al., 2013; Witharana and Civco, 2014). In addition, no widely accepted rules have been established for setting an optimal segmentation scale, and most researchers obtain ideal homogeneous classification objects by repeated experimentation and constant comparison (Cho et al., 2012; Malahlela et al., 2014).
Although many scholars have combined optical and SAR data to achieve land use/cover classification using OBIA (Ban et al., 2010; Peters et al., 2011; Nascimento et al., 2013; Jiao et al., 2015; Gibril et al., 2017), many problems remain in the classification strategy, such as the influence of different segmentation scales on classification accuracy, the effects of different classification features on the classification result, the contribution of SAR data to improving the accuracy of land cover classification in tropical coastal areas with persistent cloud cover, the effect of combining optical and radar imagery on the cloud-penetration capability of radar images, and other topics worthy of in-depth discussion and analysis.
In this paper, to obtain high-precision land cover of coastal, frequently cloudy regions for the vulnerability assessment of marine disasters, a framework consisting of three groups of comparative experiments is designed to address the above problems. The Random Forest method is used to classify the land cover of Singapore in 2017 at different segmentation scales using Landsat Operational Land Imager (OLI) spectral features, Advanced Land Observing Satellite 2 (ALOS-2) Phased Array type L-band Synthetic Aperture Radar (PALSAR) image features, and the combination of Landsat OLI spectral and ALOS-2 PALSAR image features. Based on the classification results, the effects of different segmentation scales and classification features on the classification accuracy are evaluated quantitatively. Then, the optimal segmentation scale and feature combination are used to acquire land cover in 2008 and 2017 for vulnerability assessments. Finally, vulnerability changes and the level of disaster prevention in newly increased high-vulnerability areas are discussed. We hope that the derived conclusions will help researchers and government managers make efficient use of optical and SAR data when planning urban development and preventing marine disasters in cloudy coastal areas.
2 MATERIAL AND METHOD
2.1 Study area
The study area covers all of Singapore and includes parts of Malaysia and Indonesia, with an experimental area of 2 054 km² (Fig. 1). Singapore is a typical coastal city-state surrounded by the sea, located at the southernmost tip of the Malay Peninsula, and is extremely vulnerable to marine disasters. The terrain is generally flat, with hills in the western and central areas and plains elsewhere. With a developed economy and a high rate of urbanization, Singapore has expanded dramatically, resulting in main land-cover categories of urban built-up area, forestland, waters, and urban green land (Sidhu et al., 2017). Therefore, the land cover in this study is divided into vegetation, waters, impervious surface, and bare land. In addition, because the study area is located near the equator in a tropical rainforest climate that is hot, rainy, humid, and cloudy all year round, it is difficult to obtain high-quality optical remote sensing data. Hence, it is necessary to introduce SAR data into the land cover classification of this area.
2.2 Dataset and preprocessing
2.2.1 Landsat OLI data and preprocessing
The Landsat OLI imagery is provided by the Operational Land Imager of the Landsat-8 satellite of the National Aeronautics and Space Administration. A Landsat OLI image (Fig. 2a) with partial cloud cover, acquired on April 19, 2017, was selected to compare the effect of cloud on land use/cover extraction after adding PALSAR imagery. The Landsat OLI data used are standard Level 1 Terrain-corrected (L1T) products, which have been rectified through standard terrain correction. To improve the classification accuracy, the Landsat OLI images used in this study were radiometrically corrected using ENVI 5.3 software.
2.2.2 PALSAR mosaic data and preprocessing
The 2008 and 2017 PALSAR mosaic images used in this study are Advanced Land Observing Satellite 1 (ALOS-1) and ALOS-2 PALSAR images, respectively, both generated by the Japan Aerospace Exploration Agency. The spatial resolution of the data is 25 m, and each mosaic includes HH and HV polarization bands: HH uses horizontal transmission and horizontal reception, while HV uses horizontal transmission and vertical reception. Orthorectification, slope correction, and radiometric calibration were applied to the dataset. For the 2017 data, we converted the digital numbers of the HH and HV bands to gamma-naught backscattering coefficients using Eq.1 (Shimada et al., 2009):
γ0=10×log10(DN^2)+CF,        (1)
where γ0 is the backscattering coefficient (in dB), DN is the digital number of the amplitude image, and CF is the calibration factor (−83.0 dB for the PALSAR mosaics).
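A minimal sketch of this conversion and of the band preparation described in the next paragraph (3×3 median filtering; derived HH/HV and HH-HV bands), assuming NumPy/SciPy, the published mosaic calibration factor CF=−83.0 dB, and — our assumption, since the text does not specify — that the ratio and difference are computed on the dB values:

```python
import numpy as np
from scipy.ndimage import median_filter

CF = -83.0  # PALSAR mosaic calibration factor in dB (Shimada et al., 2009)

def dn_to_gamma0(dn):
    """Eq.1: gamma0 = 10*log10(DN^2) + CF, in dB."""
    dn = np.where(np.asarray(dn, dtype=float) > 0, dn, np.nan)  # mask no-data
    return 10.0 * np.log10(dn ** 2) + CF

def palsar_bands(hh_dn, hv_dn, eps=1e-6):
    """Build the four classification bands (HH, HV, HH/HV, HH-HV) from the
    converted backscatter and despeckle each with a 3x3 median filter."""
    hh, hv = dn_to_gamma0(hh_dn), dn_to_gamma0(hv_dn)
    bands = [hh, hv, hh / (hv + eps), hh - hv]  # ratio/difference on dB values
    return [median_filter(b, size=3) for b in bands]
```

For example, a DN of 5 000 converts to roughly −9.0 dB.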
Previous studies have shown that the HH and HV bands, their ratio (HH/HV), and their difference (HH-HV) can reflect land cover types, and these bands have been widely used in land cover classification (Dong et al., 2012, 2013). Therefore, we used the HH, HV, HH/HV, and HH-HV bands of the PALSAR images for classification. A median filter with a 3×3 window was applied to the PALSAR images to reduce speckle. For subsequent classification analysis, the four bands (HH, HV, HH/HV, and HH-HV) were resampled to 30 m using the nearest-neighbor method. At the same time, all of the PALSAR bands were re-projected to the Universal Transverse Mercator projection and World Geodetic System 84 datum to be consistent with the Landsat OLI image. Figure 2 shows the preprocessed results of each image.
2.3 Methods
The overall framework of this research is shown in Fig. 3. First, the images are preprocessed; then, with the help of the Global Urban Footprint (GUF) raster map, the Global Human Settlement Layer (GHSL) raster map, and high-resolution Google satellite imagery, training and validation samples are collected. To explore the effects of different image classification features, the images are divided into three groups: a Landsat OLI group, an ALOS PALSAR group, and a combined Landsat OLI and ALOS PALSAR group. Each group represents a specific segmentation-classification strategy. For each group of images, multi-scale segmentation and image feature extraction are performed, and Random Forest is used to classify the land cover. After that, the effects of different segmentation scales and image features on the classification results are compared. Finally, the optimal segmentation scale and classification features are used to obtain the land cover in 2008 and 2017 and to assess the vulnerability of Singapore to marine disasters.
2.3.1 Sampling
Consistent with the classification scheme, we selected four land cover categories: vegetation, waters, impervious surface, and bare land. To improve the accuracy and efficiency of sample collection, we used the GUF, the GHSL, and high-resolution Google satellite imagery to facilitate sample labeling. The GUF is a high-accuracy thematic map of the global urban footprint at 12-m spatial resolution, generated by the German Aerospace Center from TanDEM-X radar data acquired from 2011 to 2012 (Esch et al., 2013). The GHSL is a thematic map of the global human settlement layer generated by the Joint Research Centre of the European Union and several research institutions from high- to very-high-resolution optical images. This dataset combines the advantages of multi-sensor and multispectral data and has a high spatial resolution of 10 m. In addition, its overall accuracy (OA) is greater than 90%, making it the most detailed mapping of global human settlements currently available (Pesaresi et al., 2013). Because of the high spatial resolution and accuracy of the GUF and GHSL, samples of impervious surfaces in this study were selected with their aid.
First, 2 000 vector blocks of 50 m×50 m size are randomly generated and overlain with the GUF and GHSL. If a block falls completely within both the GUF and GHSL regions, it is labeled as impervious surface. Subsequently, the 2 000 vector blocks are converted into Keyhole Markup Language format and imported into Google Earth. Using contemporaneous high-resolution Google imagery and Google Street View, we visually discriminated the categories of the blocks and re-checked the accuracy of the previously labeled impervious surfaces. Then, each block occupied by only a single category is selected as a training sample. Note that, to improve the quality of classification training, the selection process must ensure no fewer than 300 samples per category. Since bare land is sparsely distributed, the sample size for this type could be reduced appropriately. For any category with an insufficient sample size, we added samples manually using high-resolution Google images. Ultimately, we selected 1 500 training samples for classification.
2.3.2 Multiresolution segmentation
The present study uses the multi-resolution segmentation algorithm of eCognition Developer 9.0 to segment three groups of images for the three classification strategies: Landsat OLI (bands 1–11), ALOS PALSAR (HH, HV, HH/HV, HH-HV), and stacked Landsat OLI and ALOS PALSAR imagery (bands 1–11, HH, HV, HH/HV, and HH-HV). The multi-resolution segmentation algorithm continuously merges image pixels or existing image objects. It is a bottom-up segmentation algorithm based on the region-merging technique. Under the premise of ensuring the minimum average heterogeneity between objects and the maximum homogeneity of pixels within objects, the algorithm merges pixels into objects or merges small objects into larger ones. The algorithm is controlled mainly by three criteria: the scale parameter, the image layer weights, and the composition of homogeneity criteria (Ma et al., 2015; Li et al., 2016). The scale parameter defines the maximum allowed heterogeneity for the resulting image objects; a larger scale parameter generates larger image objects. The image layer weights set the weights of the image bands involved in the segmentation, so bands carrying more image information can be assigned higher weights. The composition of homogeneity criteria refers to minimum heterogeneity and homogeneity, which are composed of color and shape whose weights sum to 1.0. Shape is represented by smoothness and compactness, whose weights also sum to 1.0. The proportion of color to shape needs to be tuned by the user to obtain the best parameter setting; in practice, the proportion of the two is usually set to (0.8–0.9):(0.2–0.1).
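To make the homogeneity criterion concrete, the following is a conceptual sketch of the merge decision (not eCognition's actual implementation): the weighted increase in heterogeneity from a candidate merge is commonly compared against the square of the scale parameter, and the default weights below match the settings used in this study (color 0.9, smoothness = compactness = 0.5):

```python
def merge_allowed(dh_color, dh_smooth, dh_compact, scale,
                  w_color=0.9, w_smooth=0.5):
    """Illustrative multi-resolution merge test: two neighbouring objects
    merge only while the weighted heterogeneity increase stays below the
    squared scale parameter. Color vs shape weights sum to 1.0, and shape
    splits into smoothness vs compactness (also summing to 1.0)."""
    dh_shape = w_smooth * dh_smooth + (1.0 - w_smooth) * dh_compact
    f = w_color * dh_color + (1.0 - w_color) * dh_shape
    return f < scale ** 2
```

A larger scale parameter therefore tolerates larger heterogeneity increases, producing larger objects, consistent with the description above.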
To identify the key scale, we used the exponential sampling method and performed segmentation on 17 different scales, starting at 5 and ending at 1280 (Wang et al., 2016, 2018a, b). Each segmentation scale is calculated by Eq.2 (Wang et al., 2018a) below, and each scale is rounded to the nearest integer:
Si=S0×2^(i/2), i=0, 1, …, 16,        (2)
where Si is the segmentation scale and S0 is the initial segmentation scale, whose value is 5.
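With S0=5, the exponential sampling Si=S0×2^(i/2) reproduces the 17 scales used here (5 to 1280); a minimal sketch:

```python
def segmentation_scales(s0=5, n=17):
    """Exponentially sampled segmentation scales: S_i = S_0 * 2**(i/2),
    i = 0..n-1, each rounded to the nearest integer."""
    return [round(s0 * 2 ** (i / 2)) for i in range(n)]

print(segmentation_scales())
# [5, 7, 10, 14, 20, 28, 40, 57, 80, 113, 160, 226, 320, 453, 640, 905, 1280]
```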
To avoid losing image information, the weights of all image bands involved in segmentation are set to 1 for all scales. Previous studies have shown that a higher color weight produces better segmentation results (Laliberte and Rango, 2009; Ma et al., 2015). Therefore, the parameters of color and shape were set to 0.9 and 0.1, respectively. The parameters of smoothness and compactness were both set to 0.5 because smoothness and compactness are considered of equal importance.
2.3.3 Feature extraction
In terms of classification features, in addition to the spectral mean and standard deviation of Landsat OLI and the scattering mean and standard deviation of ALOS PALSAR, several spectral indices and ALOS PALSAR texture features are added. Previous studies used the normalized difference vegetation index (NDVI; Huete et al., 1997), the modified normalized difference water index (MNDWI; Xu, 2006), and the normalized difference building index (NDBI; Zha et al., 2003) to assist in the extraction of vegetation, waters, and impervious surface, respectively. Therefore, we extracted and used the NDVI, MNDWI, and NDBI spectral indices. Based on the pixels within each segmented object, eCognition 9.0 software is used to calculate the texture features of the HH and HV scattering bands of ALOS PALSAR. The texture features comprise Gray-Level Co-occurrence Matrix (GLCM) homogeneity, GLCM contrast, GLCM dissimilarity, GLCM entropy, GLCM angular second moment, GLCM mean, GLCM std. dev., and GLCM correlation. Additionally, rivers, roads, and some buildings have shape features that differ obviously from other ground objects, so length/width, shape index, and border index are introduced.
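The three spectral indices can be computed directly from the OLI reflectance bands using the standard formulas from the cited papers; a minimal sketch (Landsat-8 band assignments: green=B3, red=B4, NIR=B5, SWIR1=B6):

```python
def spectral_indices(green, red, nir, swir1, eps=1e-10):
    """NDVI, MNDWI, and NDBI from Landsat OLI reflectance values.
    `eps` guards against zero denominators."""
    ndvi = (nir - red) / (nir + red + eps)      # vegetation
    mndwi = (green - swir1) / (green + swir1 + eps)  # waters
    ndbi = (swir1 - nir) / (swir1 + nir + eps)  # built-up / impervious
    return ndvi, mndwi, ndbi
```

The same functions apply per-pixel or to whole NumPy reflectance arrays, after which per-object means can be taken over each segment.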
The features of the three segmentation-classification strategies are shown in Table 1. The first strategy uses seven bands (visible, NIR, and SWIR) of the Landsat OLI. The classification features are the mean and standard deviation of each spectral band, NDVI, MNDWI, NDBI, and the length/width, shape, and border indices of each segmented object. The second segmentation-classification strategy uses ALOS PALSAR's HH and HV bands, their ratio (HH/HV), and their difference (HH-HV). In this strategy, the mean and standard deviation of each scattering band, the GLCM texture features of HH and HV, and the length/width, shape, and border indices of each segmented object are selected as classification features. The last strategy combines the seven spectral bands of the Landsat OLI and the four scattering bands of ALOS PALSAR and applies all of the classification features used in the previous two strategies.
2.3.4 Random Forest classifier
The Random Forest classifier is used to generate the land cover map of the study area. Random Forest is a machine-learning algorithm composed of multiple decision trees (Breiman, 2001) and has been widely applied in remote sensing image classification; many application cases have also proven that the method is quite robust (Hayes et al., 2014; Isaac et al., 2017). Constructing a Random Forest in eCognition 9.0 software requires setting several parameters, such as the maximum tree depth, the minimum number of samples per node, and the maximum number of trees. In this study, considering the accuracy and efficiency of classification after multiple tests, the maximum tree depth and the number of trees are set to 10 and 100, respectively, and the other parameters keep their default values in all classification experiments.
2.3.5 Accuracy assessment
In this study, 2 500 vector blocks of 30 m×30 m size are randomly generated as candidate accuracy-validation samples. Consistent with the labeling process for training samples described in Section 2.3.1, a verification dataset consisting of 600, 600, 600, and 200 samples of vegetation, waters, impervious surface, and bare land, respectively, is obtained. Figure 4 shows the distribution.
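As a point of comparison outside eCognition, the Random Forest step of Section 2.3.4 can be sketched with scikit-learn using the same key settings (100 trees, maximum depth 10); the feature matrix and labels below are synthetic stand-ins for the per-object features of Table 1, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))      # stand-in per-object feature vectors
y = rng.integers(0, 4, size=400)   # 0..3: vegetation, waters,
                                   # impervious surface, bare land

clf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
clf.fit(X, y)
labels = clf.predict(X)
```

In practice `X` would hold one row per segmented object, with the spectral, scattering, texture, and shape features of the chosen strategy as columns.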
The error matrix is considered the primary method to verify the accuracy of remote sensing image classification (Congalton, 1991). Therefore, we used the 2 000 verification blocks to evaluate the accuracy of each classification result by calculating an error matrix, from which the OA, Kappa coefficient (Kappa), per-class Kappa coefficients, user's accuracies, and producer's accuracies can be obtained. By comparing these assessment criteria comprehensively, we can deduce the influence of different segmentation scales on the classification results and the contribution of different classification features to the results.
2.3.6 Values of vulnerability indicators
Land use type is regarded as the indicator in the vulnerability assessment. The land cover information for 2008 is obtained with the optimal classification strategy and segmentation scale using Landsat Thematic Mapper (TM) and ALOS-1 PALSAR data, and for 2017 the result with the highest classification accuracy is used. With the help of high-resolution Google imagery, we subdivided the classification results into 12 categories. According to the technical guidelines for marine disaster risk assessment issued by the State Oceanic Administration of the People's Republic of China (2015), we determined the vulnerability values and levels of the different land use types (Table 2).
3 RESULT
3.1 Responses of classification accuracy to segmentation scale
Figure 5 shows the OA and Kappa of the three segmentation-classification strategies on the 17 different segmentation scales. As seen, the OA and Kappa of the three strategies exhibit the same basic trend: all of them increase at first and then decrease as the segmentation scale increases. On each segmentation scale, the OA and Kappa of the PALSAR strategy are significantly lower than those of the other two strategies; nevertheless, the difference between the classification results of the OLI+PALSAR strategy and the OLI strategy is relatively small.
For the OLI+PALSAR strategy, when the segmentation scale is less than 320, the OA is greater than 0.9; in addition, the optimal classification result is obtained on a scale of 40, with an OA of 0.94 and a Kappa of 0.92 (Fig. 5). Similarly, the OLI strategy has a high OA that is greater than 0.9 when the segmentation scale is less than 226. In addition, the optimal classification result is generated on a scale of 40, with an OA of 0.93 and a Kappa of 0.90 (Fig. 5). The results of the PALSAR strategy indicate that the classification accuracy rapidly decreases with the increase in segmentation scale when the scale is greater than 40. Moreover, this strategy produces an optimal result at a scale of 10, with an OA of 0.79, and a Kappa of 0.71 (Fig. 5). The optimal classification results for each segmentation-classification strategy are shown in Fig. 6. From the above analyses, one can conclude that the classification accuracies of the OLI+PALSAR and the OLI are not sensitive to the change of segmentation scale when the scale is lower than 113. In contrast, the PALSAR's accuracy is sensitive to a change in segmentation scale. In other words, the segmentation scale has a significant effect on the classification result when only PALSAR data are used.
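The OA and Kappa values compared above derive from the error matrix of Section 2.3.5; a minimal sketch of that computation:

```python
import numpy as np

def error_matrix_stats(reference, predicted, n_classes):
    """Error matrix (rows: predicted, columns: reference) with overall
    accuracy, Cohen's Kappa, and per-class user's/producer's accuracies."""
    cm = np.zeros((n_classes, n_classes))
    for r, p in zip(reference, predicted):
        cm[p, r] += 1
    n = cm.sum()
    oa = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    users = np.diag(cm) / cm.sum(axis=1)       # 1 - commission error
    producers = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error
    return cm, oa, kappa, users, producers
```

Each classification result at each scale is scored this way against the same validation blocks, making the curves in Fig. 5 directly comparable.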
In addition, the Kappa coefficients of each category at different segmentation scales for the three segmentation-classification strategies are also calculated (Fig. 7). Consistent with the overall Kappa, the Kappa coefficients of the four land cover categories tend to increase first and then decrease with increasing segmentation scale in each strategy.
3.2 Responses of classification accuracy to classification feature
To illustrate the effects of different classification features on the results, the present study compares the optimal classification results of each segmentation-classification strategy (Fig. 8). The OA, Kappa, and Kappa coefficients of both impervious surface and bare land of the OLI+PALSAR strategy increased by 1.3%, 1.7%, 5.5%, and 1.7%, respectively, compared with the OLI strategy, whereas the Kappa coefficient of waters decreased by 0.2%. This indicates that the addition of PALSAR imagery could effectively improve the OA, especially the accuracies of impervious surface and bare land, but it does not help in the classification of vegetation and waters. It can also be seen in Fig. 8 that the accuracy of bare land identification is significantly lower than that of the other categories, probably because some coastal land areas of Singapore have been filled with sand. The spectral reflection characteristics of these sands are very similar to those of impervious surface, which results in bare land being misclassified as impervious surface. Moreover, Singapore has an extremely high green coverage rate in urban areas, so a small amount of bare land inside the city could be misclassified as vegetation under the influence of the vegetation spectra of surrounding pixels. This interpretation is confirmed by the confusion matrix in Table 3.
According to the confusion matrix of the three strategies (Table 3), the user's accuracies of the four categories generated from the OLI+PALSAR strategy are very high, indicating that the synergistic use of the spectral and scattering features can effectively reduce the misclassification errors for various ground objects. The producer's accuracies of impervious surface and bare land of the OLI+PALSAR strategy are respectively 3.84% and 1.5% higher than those of the OLI strategy, indicating that adding the PALSAR imagery features can significantly reduce omissions for these two categories.
According to the above analyses, we believe that the spectral features of OLI can effectively extract vegetation and waters, and that after adding the scattering and texture features of PALSAR, better extraction results can be obtained for both impervious surface and bare land.
3.3 Responses of different segmentation-classification strategies to cloud and shadow
It is well known that using a SAR image can eliminate the influence of clouds on classification results. However, it is unknown whether the synergistic use of SAR and optical imagery retains this capability, and whether different segmentation-classification strategies respond differently to cloud and shadow. Therefore, to gain a good understanding of this situation, we selected and compared regions of partial cloud cover. Clouds are classified into two categories: clouds covering impervious (Fig. 9a) and non-impervious surfaces (Fig. 9e). We took the optimal classification results of each strategy and used the corresponding regions in Fig. 9 for comparison.
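The remedy examined in this comparison — substituting the PALSAR-only labels wherever cloud or shadow overlies a non-impervious surface (cf. Fig. 10) — reduces to a masked replacement of label maps; a sketch, with the cloud/shadow mask assumed to be available:

```python
import numpy as np

def patch_clouds(combined_map, palsar_map, cloud_mask):
    """Replace labels of the OLI+PALSAR result under cloud/shadow over
    non-impervious surfaces with the PALSAR-only labels. `cloud_mask` is a
    boolean array, True where the replacement should apply; how that mask
    is derived is assumed here."""
    out = combined_map.copy()
    out[cloud_mask] = palsar_map[cloud_mask]
    return out
```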
Figure 9c & g shows that, regardless of the type of surface cover, the OLI strategy cannot avoid the interference of cloud and shadow, because these areas are misclassified as bare land and waters, respectively. In contrast, Fig. 9d & h shows that the PALSAR strategy is completely unaffected by cloud and shadow, and this advantage is stable for any type of surface cover. The OLI+PALSAR strategy can also avoid the interference of cloud and shadow; nevertheless, this ability performs differently for different types of surface cover. Figure 9b & f shows that cloud and shadow do not affect the results only when the surface type is impervious, which reveals that the addition of PALSAR scattering features enables cloud penetration only conditionally. Therefore, for a cloud-covered area of non-impervious surface, we suggest using the classification results of PALSAR instead. Figure 10 shows how Fig. 9f could be modified by Fig. 9h. One can see that this method allows researchers to obtain more accurate results without complex cloud processing.
3.4 Vulnerability assessment and analysis of change areas
Figure 11 shows the spatial distribution of the vulnerability of Singapore in 2008 and 2017. Over the past decade, vulnerability in the south has been higher than in the north. The most vulnerable area is in the southeast, which has a high concentration of residents. Compared with 2008, vulnerable areas of levels Ⅰ through Ⅲ expanded significantly in 2017, and the area of levels Ⅰ and Ⅱ increased to 46% of the total area (Fig. 12), which indicates that, with economic development, artificial surface expansion could increase the vulnerability of Singapore to marine disasters.
From 2008 to 2017, a total area of 118.97 km² in Singapore changed from low vulnerability to high vulnerability, mainly in the coastal reclamation areas and the expansion areas for inland residents, which are protected areas that need particular attention. Usually, the geographical environment of a disaster-bearing body modulates the impact of a disaster. For marine disasters, the farther from the coast and the higher the altitude, the less affected an area is, and the lower its risk. Therefore, to develop disaster prevention measures rationally, the risk of these newly increased high-vulnerability areas is graded by coastal distance and altitude. The distance from shore is divided into nine grades in units of 1 km. According to the natural breakpoint method, the altitude is divided into six grades: 0–5, 5–17, 17–31, 31–51, 51–87, and 87–198 m. The risk values of the new high-vulnerability regions are obtained via spatial overlay analysis, and those regions are divided into five risk levels by the natural breakpoint method (Fig. 13). According to the results, low-altitude areas along the southern coast, covering 58.80 km², have the highest risk level. They contain many ports, docks, and factories that constitute trade and freight centers; therefore, once a disaster occurs, the economic losses would be serious. The level-Ⅱ risk areas are mainly residential areas and commercial offices, which are densely populated and highly vulnerable to casualties after any disaster. According to the natural geographical attributes of these two high-risk levels, it is necessary to strengthen disaster prevention measures for newly increased high-vulnerability regions within 4 km offshore and below 30 m above sea level. For the Ⅲ, Ⅳ, and Ⅴ risk zones, the alertness can be reduced gradually.
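The grading step can be sketched as follows; combining the two grades by simple addition into one risk score is an illustrative assumption (the study overlays them spatially and then classifies the result with natural breaks):

```python
import numpy as np

def risk_score(dist_km, altitude_m):
    """Grade distance from shore (nine 1-km classes) and altitude (the six
    natural-break classes from the text, in metres), then combine them.
    Lower grades (nearer, lower) mean higher marine-disaster exposure, so
    the summed score is smallest where risk is greatest."""
    dist_grade = np.clip(np.ceil(np.asarray(dist_km)), 1, 9).astype(int)
    alt_edges = [5, 17, 31, 51, 87]  # upper bounds of altitude classes 1-5
    alt_grade = np.digitize(altitude_m, alt_edges) + 1
    return dist_grade + alt_grade
```

For example, a pixel 0.5 km from shore at 3 m elevation scores 2 (highest exposure), while one 8.2 km inland at 100 m scores 15.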
In this way, the different risk zones can be equipped with different human and property resources to avoid any waste of resources, and the government can achieve targeted disaster prevention.
4 DISCUSSION
4.1 Effects of various segmentation scales and the optimal segmentation scale
In this study, we found that the scale effect varies with the imagery used for segmentation. Compared with the synergistic use of OLI and PALSAR, or the use of OLI alone, classification is more sensitive to changes in segmentation scale when only PALSAR is used. We believe this occurs because Landsat OLI and PALSAR lack high resolution, so the segmentation results at small and medium scales may be similar. However, when PALSAR imagery alone is used for segmentation, very little band information can be referenced, resulting in a large gap between the obtained homogeneous objects when the scale varies slightly. In addition, larger objects containing a large number of mixed pixels are generated as the scale increases, which leads to the sensitivity of the OA to scale.
Studying the effects of different scales on the classification results is usually done to determine an optimal segmentation scale (Yue et al., 2012). The optimal segmentation scale is the segmentation scale that provides the highest classification accuracy (Wang et al., 2004), and it is usually a single scale. By comparing the classification results of different scales, the present study found that the various strategies could provide a high OA within a certain scale range; this differs from the case of very high-resolution imagery, in which only one optimal segmentation scale exists. Therefore, when using integrated Landsat OLI and PALSAR data, there is no need for much concern about the size of the segmentation scale. In addition, according to Fig. 5, good results could be achieved when the scale is between 5 and 80.
4.2 Effects of PALSAR image features on classification results
The synergistic use of PALSAR and OLI image features achieves the highest overall classification accuracy, and the scattering and textural features of PALSAR have great advantages in extracting impervious surface and bare land. However, when PALSAR image features are used alone, the OA is much lower than that obtained with the OLI spectral features alone. One reason for this may be that resampling the PALSAR imagery to the same resolution as OLI weakened its ability to identify ground objects. Another possible reason is that the information content of the PALSAR data also decreases when speckle noise is removed (Xu et al., 2017).
We also found that adding PALSAR image features to spectral images could significantly reduce the interference of cloud and shadow. However, as indicated in Section 3.3, only cloud and shadow covering an impervious surface could be handled correctly. This may occur because the special scattering mechanism of PALSAR images enables them to accurately extract impervious surfaces covered with buildings of a certain height based on altitude information. Meanwhile, in a non-impervious area, because the altitude information of ground objects is not obvious and because of the influence of the spectral information of cloud and shadow, PALSAR cannot successfully penetrate clouds. Of course, this may also occur because the weights of the various image features used in this paper are the same; perhaps increasing the weight of the radar image features in cloud-covered areas could achieve better results. This represents a problem worth discussing in the future.
It is well known that identifying and preprocessing clouds and shadows has always been a difficult problem in image mapping. One significant finding of this research is that it reveals the effects of radar and optical imagery on classification in small cloud-covered areas under different classification strategies. Additionally, we proposed combining the results of two strategies in areas covered by nonimpervious surfaces, which could not only achieve high OA values but also quickly and easily remove the interference of clouds and shadows without any preprocessing for them. This proposal offers an approach for rapid mapping in cloudy coastal areas.

4.3 Indications of remote sensing for vulnerability assessment of marine disasters
Remote sensing provides a large amount of data for marine disaster vulnerability assessments. Because different data sets have their own advantages, choosing the appropriate data and methods is the key to successful application. Optical images provide rich spectral and textural features for accurately identifying the types of ground objects. However, optical images are greatly affected by clouds, especially at low latitudes, where very few cloudless images are available. The ability of radar imagery to penetrate clouds and fog can effectively compensate for the optical image's loss of information in cloud and cloud-shadow areas; the experiments in this paper prove this point. In terms of resolution, medium-resolution data cannot provide near-real-time coverage like low-resolution data, but they offer shorter revisit periods, smaller data volumes, and faster processing speeds than high-resolution satellite data. It should also be noted that medium-resolution images still fall short of providing fine categories, so fine-category mapping for vulnerability assessment must be performed with the help of high-resolution images. However, medium-resolution images can quickly locate key vulnerability changes over a wide area, saving cost and time. Therefore, considering both efficiency and accuracy, we recommend the synergistic use of medium-resolution optical and SAR images for the vulnerability assessment of marine disasters over wide, cloudy coastal areas.

5 CONCLUSION
To better support the vulnerability assessment of coastal areas with persistent cloud cover, we systematically analyzed the effects of segmentation scale and classification features on the accuracy of land cover classification in Singapore. The best classification results were derived from the combination of Landsat OLI and PALSAR imagery using all of the selected features, and considering accuracy and efficiency, a segmentation scale between 40 and 80 is recommended. Adding the scattering and textural features of PALSAR to optical imagery could significantly improve the OA, especially the accuracy of identifying impervious surface, bare land, cloud, and shadow. In other words, the combination of PALSAR and Landsat OLI data can not only overcome the shortage of high-quality cloudless optical imagery in cloudy coastal areas but also integrate the advantages of each type of imagery, significantly improving the efficiency and accuracy of land cover classification. In addition, an area of 118.97 km² became highly vulnerable in Singapore during the past 10 years. Of these regions, areas within 4 km of the coast and below 30 m in elevation are high-risk areas that require major attention.
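The recommended scale range can be illustrated with a toy selection rule: rather than a single optimal scale, any scale whose OA stays within a small tolerance of the best OA is treated as acceptable. The OA values below are hypothetical, not the study's measured results; in practice each value would come from classifying the segments produced at that scale.

```python
import numpy as np

# Hypothetical overall accuracies (OA) measured at each segmentation scale.
scales = np.array([5, 10, 20, 40, 60, 80, 100])
oa = np.array([0.86, 0.87, 0.87, 0.89, 0.885, 0.882, 0.83])

def acceptable_scales(scales, oa, tolerance=0.01):
    """Return every scale whose OA is within `tolerance` of the best OA.

    This yields a scale range over which classification accuracy is
    effectively unchanged, instead of a single 'optimal' scale.
    """
    best = oa.max()
    return scales[oa >= best - tolerance]

print(acceptable_scales(scales, oa))  # the near-optimal scale range
```

With these hypothetical values, the rule returns the scales 40, 60, and 80, i.e., a range rather than a point, mirroring the behavior observed for the integrated Landsat OLI and PALSAR data.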
The conclusions of this study provide a scientific basis for the synergistic use of medium-resolution Landsat OLI and PALSAR data in updating land cover in cloudy coastal areas, which can supply timely and rich information for the vulnerability assessment of marine disasters. Therefore, we believe that this classification strategy has strong application prospects for marine disaster prevention and mitigation.

6 DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. The Landsat OLI and PALSAR data can be obtained from https://glovis.usgs.gov/ and http://www.eorc.jaxa.jp/ALOS/en/palsar_fnf/data/index.htm, respectively.
References
Ban Y F, Hu H T, Rangel I M. 2010. Fusion of quickbird MS and RADARSAT SAR data for urban land-cover mapping:object-based and knowledge-based approach. International Journal of Remote Sensing, 31(6): 1391-1410. DOI:10.1080/01431160903475415
Breiman L. 2001. Random forests. Machine Learning, 45(1): 5-32. DOI:10.1023/A:1010933404324
Buono A, Nunziata F, Migliaccio M, Yang X, Li X. 2017. Classification of the Yellow River delta area using fully polarimetric SAR measurements. International Journal of Remote Sensing, 38(23): 6714-6734. DOI:10.1080/01431161.2017.1363437
Chen W S, Gou S P, Wang X L, Li X F, Jiao L C. 2018. Classification of PolSAR images using multilayer autoencoders and a self-paced learning approach. Remote Sensing, 10(1): 110. DOI:10.3390/rs10010110
Cho M A, Mathieu R, Asner G P, Naidoo L, Van Aardt J, Ramoelo A, Debba P, Wessels K, Main R, Smit I P J, Erasmus B. 2012. Mapping tree species composition in South African savannas using an integrated airborne spectral and LiDAR system. Remote Sensing of Environment, 125: 214-226. DOI:10.1016/j.rse.2012.07.010
Congalton R G. 1991. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sensing of Environment, 37(1): 35-46. DOI:10.1016/0034-4257(91)90048-B
Ding X W, Li X F. 2011. Monitoring of the water-area variations of Lake Dongting in China with ENVISAT ASAR images. International Journal of Applied Earth Observation and Geoinformation, 13(6): 894-901. DOI:10.1016/j.jag.2011.06.009
Ding X W, Li X F. 2014. Shoreline movement monitoring based on SAR images in Shanghai, China. International Journal of Remote Sensing, 35(11-12): 3994-4008. DOI:10.1080/01431161.2014.916480
Ding X W, Nunziata F, Li X F, Migliaccio M. 2015. Performance analysis and validation of waterline extraction approaches using single- and dual-polarimetric SAR data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(3): 1019-1027. DOI:10.1109/JSTARS.2014.2362511
Dong J W, Xiao X M, Chen B Q, Torbick N, Jin C, Zhang G L, Biradar C. 2013. Mapping deciduous rubber plantations through integration of PALSAR and multi-temporal Landsat imagery. Remote Sensing of Environment, 134: 392-402. DOI:10.1016/j.rse.2013.03.014
Dong J W, Xiao X M, Sheldon S, Biradar C, Duong N D, Hazarika M. 2012. A comparison of forest cover maps in Mainland Southeast Asia from multiple sources:PALSAR, MERIS, MODIS and FRA. Remote Sensing of Environment, 127: 60-73. DOI:10.1016/j.rse.2012.08.022
Esch T, Marconcini M, Felbier A, Roth A, Heldens W, Huber M, Schwinger M, Taubenböck H, Müller A, Dech S. 2013. Urban footprint processor-fully automated processing chain generating settlement masks from global data of the TanDEM-X mission. IEEE Geoscience and Remote Sensing Letters, 10(6): 1617-1621. DOI:10.1109/LGRS.2013.2272953
Gibril M B A, Bakar S A, Yao K, Idrees M O, Pradhan B. 2017. Fusion of RADARSAT-2 and multispectral optical remote sensing data for LULC extraction in a tropical agricultural area. Geocarto International, 32(7): 735-748. DOI:10.1080/10106049.2016.1170893
Gong S Q, Hagan D F, Lu J, Wang G J. 2018a. Validation on MERSI/FY-3A precipitable water vapor product. Advances in Space Research, 61(1): 413-425. DOI:10.1016/j.asr.2017.10.005
Gong S Q, Hagan D F T, Wu X Y, Wang G J. 2018b. Spatiotemporal analysis of precipitable water vapour over northwest China utilizing MERSI/FY-3A products. International Journal of Remote Sensing, 39(10): 3094-3110. DOI:10.1080/01431161.2018.1437298
Gong S Q. 2018. Evaluation of maritime aerosol optical depth and precipitable water vapor content from the Microtops Ⅱ Sun photometer. Optik, 169: 1-7. DOI:10.1016/j.ijleo.2018.05.025
Gou S P, Li X F, Yang X F. 2016. Coastal zone classification with fully polarimetric SAR imagery. IEEE Geoscience and Remote Sensing Letters, 13(11): 1616-1620. DOI:10.1109/LGRS.2016.2597965
Greatbatch I. 2012. Polarimetric radar imaging:from basics to applications, by Jong-Sen Lee and Eric Pottier. International Journal of Remote Sensing, 33(2): 661-662. DOI:10.1080/01431161.2010.535193
Hayes M M, Miller S N, Murphy M A. 2014. High-resolution landcover classification using Random Forest. Remote Sensing Letters, 5(2): 112-121. DOI:10.1080/2150704X.2014.882526
Huang X D, Wang J F, Shang J L, Liao C H, Liu J G. 2017. Application of polarization signature to land cover scattering mechanism analysis and classification using multi-temporal C-band polarimetric RADARSAT-2 imagery. Remote Sensing of Environment, 193: 11-28. DOI:10.1016/j.rse.2017.02.014
Huete A R, Liu H Q, Batchily K, Van Leeuwen W. 1997. A comparison of vegetation indices over a global set of TM images for EOS-MODIS. Remote Sensing of Environment, 59(3): 440-451. DOI:10.1016/S0034-4257(96)00112-5
Isaac E, Easwarakumar K S, Isaac J. 2017. Urban landcover classification from multispectral image data using optimized AdaBoosted random forests. Remote Sensing Letters, 8(4): 350-359. DOI:10.1080/2150704X.2016.1274443
Jiao X F, Zhang Y, Guindon B. 2015. Synergistic use of RADARSAT-2 ultra fine and fine quad-pol data to map oil sands infrastructure land:object-based approach. International Journal of Applied Earth Observation and Geoinformation, 38: 193-203. DOI:10.1016/j.jag.2015.01.007
Kim M, Warner T A, Madden M, Atkinson D S. 2011. Multiscale GEOBIA with very high spatial resolution digital aerial imagery:scale, texture and image objects. International Journal of Remote Sensing, 32(10): 2825-2850. DOI:10.1080/01431161003745608
Laliberte A S, Rango A. 2009. Texture and scale in objectbased analysis of subdecimeter resolution unmanned aerial vehicle (UAV) imagery. IEEE Transactions on Geoscience and Remote Sensing, 47(3): 761-770. DOI:10.1109/TGRS.2008.2009355
Lehmann E A, Caccetta P, Lowell K, Mitchell A, Zhou Z S, Held A, Milne T, Tapley I. 2015. SAR and optical remote sensing:assessment of complementarity and interoperability in the context of a large-scale operational forest monitoring system. Remote Sensing of Environment, 156: 335-348. DOI:10.1016/j.rse.2014.09.034
Li M C, Ma L, Blaschke T, Cheng L, Tiede D. 2016. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments. International Journal of Applied Earth Observation and Geoinformation, 49: 87-98. DOI:10.1016/j.jag.2016.01.011
Liu Q R, Ruan C Q, Zhong S, Li J, Yin Z H, Lian X H. 2018. Risk assessment of storm surge disaster based on numerical models and remote sensing. International Journal of Applied Earth Observation and Geoinformation, 68: 20-30. DOI:10.1016/j.jag.2018.01.016
Ma L, Cheng L, Li M C, Liu Y X, Ma X X. 2015. Training set size, scale, and features in geographic object-based image analysis of very high resolution unmanned aerial vehicle imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 102: 14-27. DOI:10.1016/j.isprsjprs.2014.12.026
Malahlela O, Cho M A, Mutanga O. 2014. Mapping canopy gaps in an indigenous subtropical coastal forest using high-resolution WorldView-2 data. International Journal of Remote Sensing, 35(17): 6397-6417. DOI:10.1080/01431161.2014.954061
Nascimento Jr W R, Souza-Filho P W M, Proisy C, Lucas R M, Rosenqvist A. 2013. Mapping changes in the largest continuous Amazonian mangrove belt using object-based classification of multisensor satellite imagery. Estuarine, Coastal and Shelf Science, 117: 83-93. DOI:10.1016/j.ecss.2012.10.005
Nunziata F, Migliaccio M, Li X F, Ding X W. 2014. Coastline extraction using dual-polarimetric COSMO-SkyMed PingPong mode SAR data. IEEE Geoscience and Remote Sensing Letters, 11(1): 104-108. DOI:10.1109/LGRS.2013.2247561
Pesaresi M, Guo H D, Blaes X, Ehrlich D, Ferri S, Gueguen L, Halkia M, Kauffmann M, Kemper T, Lu L L, MarinHerrera M A, Ouzounis G K, Scavazzon M, Soille P, Syrris V, Zanchetta L. 2013. A global human settlement layer from optical HR/VHR RS data:concept and first results. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(5): 2102-2131. DOI:10.1109/JSTARS.2013.2271445
Peters J, Van Coillie F, Westra T, De Wulf R. 2011. Synergy of very high resolution optical and radar data for objectbased olive grove mapping. International Journal of Geographical Information Science, 25(6): 971-989. DOI:10.1080/13658816.2010.515946
Shimada M, Isoguchi O, Tadono T, Isono K. 2009. PALSAR radiometric and geometric calibration. IEEE Transactions on Geoscience and Remote Sensing, 47(12): 3915-3932. DOI:10.1109/TGRS.2009.2023909
Sidhu N, Pebesma E, Wang Y C. 2017. Usability study to assess the IGBP land cover classification for Singapore. Remote Sensing, 9(10): 1075. DOI:10.3390/rs9101075
State Oceanic Administration People's Republic of China. 2015. Guideline for risk assessment and zoning of storm surge disaster. State Oceanic Administration People's Republic of China. http://www.gzagri.gov.cn/gzsnyj/bzgf/201807/69dc480f97824378a8183203810038c8/files/0e7788e1cffd4ca69668fea125b0a3a1.pdf. Accessed on 2018-05-25.
Wang B, Ono A, Muramatsu K, Fujiwara N. 1999. Automated detection and removal of clouds and their shadows from Landsat TM images. IEICE Transactions on Information and Systems, E82-D(2): 453-460.
Wang L, Sousa W P, Gong P. 2004. Integration of object-based and pixel-based classification for mapping mangroves with IKONOS imagery. International Journal of Remote Sensing, 25(24): 5655-5668. DOI:10.1080/014311602331291215
Wang W S, Yang X F, Li X F, Chen K S, Liu G H, Li Z W, Gade M. 2017. A fully polarimetric SAR imagery classification scheme for mud and sand flats in intertidal zones. IEEE Transactions on Geoscience and Remote Sensing, 55(3): 1734-1742. DOI:10.1109/TGRS.2016.2631632
Wang Z H, Lu C, Yang X M. 2018a. Exponentially sampling scale parameters for the efficient segmentation of remotesensing images. International Journal of Remote Sensing, 39(6): 1628-1654. DOI:10.1080/01431161.2017.1410297
Wang Z H, Meng F, Yang X M, Yang F S, Fang Y. 2016. Study on the automatic selection of segmentation scale parameters for high spatial resolution remote sensing images. Journal of Geo-information Science, 18(5): 639-648. (in Chinese with English abstract) DOI:10.3724/SP.J.1047.2016.00639
Wang Z H, Yang X M, Lu C, Yang F S. 2018b. A scale selfadapting segmentation approach and knowledge transfer for automatically updating land use/cover change databases using high spatial resolution images. International Journal of Applied Earth Observation and Geoinformation, 69: 88-98. DOI:10.1016/j.jag.2018.03.001
Watmough G R, Atkinson P M, Hutton C W. 2011. A combined spectral and object-based approach to transparent cloud removal in an operational setting for Landsat ETM+. International Journal of Applied Earth Observation and Geoinformation, 13(2): 220-227. DOI:10.1016/j.jag.2010.11.006
Witharana C, Civco D L. 2014. Optimizing multi-resolution segmentation scale using empirical methods:exploring the sensitivity of the supervised discrepancy measure euclidean distance 2 (ED2). ISPRS Journal of Photogrammetry and Remote Sensing, 87: 108-121. DOI:10.1016/j.isprsjprs.2013.11.006
Xu H Q. 2006. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. International Journal of Remote Sensing, 27(14): 3025-3033. DOI:10.1080/01431160600589179
Xu R, Zhang H S, Lin H. 2017. Urban impervious surfaces estimation from optical and SAR imagery:a comprehensive comparison. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(9): 4010-4021. DOI:10.1109/JSTARS.2017.2706747
Yue A Z, Yang J Y, Zhang C, Su W, Yun W J, Zhu D H, Liu S X, Wang Z W. 2012. The optimal segmentation scale identification using multispectral WorldView-2 images. Sensor Letters, 10(1-2): 285-291. DOI:10.1166/sl.2012.1860
Zha Y, Gao J, Ni S. 2003. Use of normalized difference builtup index in automatically mapping urban areas from TM imagery. International Journal of Remote Sensing, 24(3): 583-594. DOI:10.1080/01431160304987
Zhang X L, Xiao P F, Song X Q, She J F. 2013. Boundaryconstrained multi-scale segmentation method for remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing, 78: 15-25. DOI:10.1016/j.isprsjprs.2013.01.002
Zhu Z, Woodcock C E, Rogan J, Kellndorfer J. 2012. Assessment of spectral, polarimetric, temporal, and spatial dimensions for urban and peri-urban land cover classification using Landsat and SAR data. Remote Sensing of Environment, 117: 72-82. DOI:10.1016/j.rse.2011.07.020
Zhu Z, Woodcock C E. 2012. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sensing of Environment, 118: 83-94. DOI:10.1016/j.rse.2011.10.028