Change detection is essential for understanding changes occurring on the land surface. We propose a change detection approach for multispectral (MS) images based on a semantic segmentation network. Unlike traditional approaches that learn deep features from a change index or establish mapping relations between patches, the proposed approach employs the semantic segmentation network UNet++ for end-to-end change detection. However, in UNet++, each deep feature is directly upsampled from the node at the lower level and incorporates little information from nodes at other levels. To address this problem and further enhance robustness, the zigzag UNet++ (ZUNet++) is developed. In ZUNet++, zigzag connections are introduced between nodes, so the inputs of a node include not only the upsampled deep feature but also the downsampled shallow feature; that is, the network fuses information from multiple feature levels. In addition, because few MS training datasets are available, we design a strategy in which each MS image is converted into several pseudo-RGB images, so the network can be trained on available RGB training sets and then applied to MS test datasets. In the experiments, three real MS test datasets reflecting different types of changes in Xi'an City are used. Experimental results show that, with appropriate parameter settings, the proposed ZUNet++ outperforms other state-of-the-art approaches, demonstrating its feasibility and effectiveness.
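The following is a minimal sketch (PyTorch assumed; channel sizes, module names, and the fusion layout are illustrative, not the authors' implementation) of the zigzag fusion idea: in addition to the same-level skip features and the upsampled deep feature from the level below, a node also receives a downsampled shallow feature from the level above, and the three sources are fused along the channel axis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZigzagNode(nn.Module):
    """One hypothetical ZUNet++-style node fusing skip, deep, and shallow features."""
    def __init__(self, skip_ch, deep_ch, shallow_ch, out_ch):
        super().__init__()
        in_ch = skip_ch + deep_ch + shallow_ch
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, skip, deep, shallow):
        # deep: feature from the lower (coarser) level -> upsample to this resolution
        deep_up = F.interpolate(deep, size=skip.shape[2:], mode="bilinear",
                                align_corners=False)
        # shallow: feature from the higher (finer) level -> downsample to this resolution
        shallow_down = F.interpolate(shallow, size=skip.shape[2:], mode="bilinear",
                                     align_corners=False)
        # fuse the three feature sources along the channel axis
        return self.conv(torch.cat([skip, deep_up, shallow_down], dim=1))

# toy usage: a 64x64 node fed by a 32x32 deep feature and a 128x128 shallow feature
node = ZigzagNode(skip_ch=32, deep_ch=64, shallow_ch=16, out_ch=32)
out = node(torch.randn(1, 32, 64, 64),
           torch.randn(1, 64, 32, 32),
           torch.randn(1, 16, 128, 128))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```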
KEYWORDS: 3D modeling, Tin, 3D image processing, Data modeling, Clouds, Cultural heritage, Gallium nitride, Network architectures, Volume rendering, Visualization
The Terracotta Warriors are regarded as one of the eight wonders of the world, and their virtual restoration is of great significance to archaeology. However, parts of the fragments have corroded over thousands of years, leaving holes in most restored cultural heritage artifacts. We present a framework for filling these holes based on structural and textural information. First, a Poisson-equation-based method is employed to fill holes in the triangular mesh model. Then, to complete the surface color and texture of the three-dimensional (3D) model and to achieve a natural transition between the hole patch and the original surface texture, the 3D problem is converted into a two-dimensional (2D) image inpainting problem, and a refinement network is added to EdgeConnect to generate higher-resolution results. A set of experiments is performed to evaluate the performance of the proposed framework. We hope it can serve as a useful tool to guide the virtual restoration of other cultural heritage artifacts.
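Below is a minimal sketch (NumPy/SciPy assumed) of the Poisson-style geometric step: new patch vertices are obtained by solving a discrete Poisson/Laplace system in which the hole boundary ring is fixed and each interior vertex of the patch is placed according to a uniform graph Laplacian. The data structures and the zero-RHS (harmonic) default are illustrative; the paper's exact discretization and guidance field are not shown here.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def fill_hole_positions(vertices, neighbors, interior, rhs=None):
    """vertices : (n, 3) vertex positions (boundary ring already in place)
    neighbors: dict mapping vertex index -> list of adjacent vertex indices
    interior : indices of the new patch vertices to be solved for
    rhs      : optional (len(interior), 3) Poisson right-hand side; None => harmonic fill
    """
    idx = {v: i for i, v in enumerate(interior)}
    n = len(interior)
    L = lil_matrix((n, n))
    b = np.zeros((n, 3)) if rhs is None else np.asarray(rhs, dtype=float).copy()

    for v in interior:
        i = idx[v]
        nbrs = neighbors[v]
        L[i, i] = len(nbrs)                      # uniform graph Laplacian diagonal
        for u in nbrs:
            if u in idx:
                L[i, idx[u]] = -1.0              # unknown neighbor stays on the left side
            else:
                b[i] += vertices[u]              # fixed boundary neighbor moves to the RHS

    A = L.tocsr()
    new_pos = np.column_stack([spsolve(A, b[:, k]) for k in range(3)])
    out = vertices.copy()
    out[interior] = new_pos
    return out
```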
Bioluminescence tomography (BLT) can reconstruct internal bioluminescent sources from surface measurements. However, resolving multiple sources in BLT remains a challenge. In this work, a comparative study of the hybrid clustering algorithm, the synchronization-based clustering algorithm, and the iterative self-organizing data analysis technique (ISODATA) algorithm for multiple-source recognition in BLT is conducted. Simulation experiments on two- and three-source reconstructions demonstrate the performance of these three algorithms. The results show that ISODATA is more suitable for closely spaced targets, whereas the other two algorithms are more suitable for distant targets. Moreover, ISODATA requires the least computing time.
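As a rough illustration of the ISODATA-style grouping step used to resolve multiple sources, the sketch below (NumPy assumed; the thresholds, iteration counts, and synthetic points are illustrative, not the paper's settings) clusters reconstructed source positions, splits clusters whose spread exceeds a threshold, and merges centers that fall within a merge distance, so the number of recovered sources is not fixed in advance.

```python
import numpy as np

def isodata_like(points, k_init=2, split_std=1.5, merge_dist=1.0, n_iter=10):
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), k_init, replace=False)]
    for _ in range(n_iter):
        # assignment and update (plain k-means step), dropping empty clusters
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(len(centers)) if np.any(labels == j)])
        # split clusters that are too spread out, along their widest axis
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = []
        for j, c in enumerate(centers):
            pts = points[labels == j]
            if len(pts) > 2 and pts.std(axis=0).max() > split_std:
                std = pts.std(axis=0)
                shift = np.zeros_like(c)
                shift[std.argmax()] = std.max()
                new_centers += [c + shift, c - shift]
            else:
                new_centers.append(c)
        centers = np.array(new_centers)
        # merge centers that are closer than the merge threshold
        merged, used = [], np.zeros(len(centers), bool)
        for j in range(len(centers)):
            if used[j]:
                continue
            group = [centers[j]]
            used[j] = True
            for l in range(j + 1, len(centers)):
                if not used[l] and np.linalg.norm(centers[j] - centers[l]) < merge_dist:
                    group.append(centers[l])
                    used[l] = True
            merged.append(np.mean(group, axis=0))
        centers = np.array(merged)
    return centers

# toy usage: two synthetic "sources" in 3-D space
pts = np.vstack([np.random.default_rng(1).normal([0, 0, 0], 0.3, (50, 3)),
                 np.random.default_rng(2).normal([2, 0, 0], 0.3, (50, 3))])
print(isodata_like(pts))  # should recover roughly two centers near (0,0,0) and (2,0,0)
```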
This paper establishes a fuzzy autoencoder (FAE) to detect multiple changes between two one-dimensional multitemporal images. Unlike traditional approaches based on pixel intensity, the FAE uses a multilayer structure trained by self-reconstruction to extract features from an image. Because of noise in the images, the raw data tend to be corrupted, causing real changes to be missed. Therefore, fuzzy numbers are introduced into the autoencoder to establish the FAE, which is able to suppress noise and learn robust features. In this way, information from the fuzzy domain is incorporated into the input; in practice, the fuzzy domain is discretized to facilitate computation. In addition, a weighted Frobenius norm is used to define the loss function, which is minimized to obtain the optimal parameters. The framework is distinguished by the newly designed FAE: because fuzzy numbers are introduced into the autoencoder, more information from the fuzzy domain is taken into account, and the impact of noise is largely mitigated. Hence, the FAE can generate robust features, enhancing its performance in deep feature representation learning. Several tests on three datasets indicate the proper parameter settings, and experimental results comparing the FAE framework with other approaches demonstrate its effectiveness and robustness in terms of accuracy and elapsed time.
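The following is a minimal sketch (PyTorch assumed; the triangular membership functions, layer sizes, weight vector, and toy data are illustrative, not the paper's exact formulation) of the FAE idea: each raw intensity is expanded over a discretized fuzzy domain via membership values, an autoencoder reconstructs this fuzzified input, and training minimizes a weighted Frobenius-norm reconstruction loss.

```python
import torch
import torch.nn as nn

def fuzzify(x, centers, width=0.25):
    # x: (n, d) intensities in [0, 1]; returns (n, d * len(centers)) triangular memberships
    m = torch.clamp(1.0 - torch.abs(x.unsqueeze(-1) - centers) / width, min=0.0)
    return m.flatten(1)

centers = torch.linspace(0.0, 1.0, 5)          # discretized fuzzy domain
d, hidden = 9, 16                              # e.g., a 3x3 patch of intensities per sample
enc = nn.Sequential(nn.Linear(d * len(centers), hidden), nn.Sigmoid())
dec = nn.Sequential(nn.Linear(hidden, d * len(centers)), nn.Sigmoid())
W = torch.ones(d * len(centers))               # per-component weights of the norm (illustrative)

opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
x = torch.rand(256, d)                         # toy multitemporal patch intensities
target = fuzzify(x, centers)
for _ in range(200):
    recon = dec(enc(target))
    loss = torch.sum(W * (recon - target) ** 2)   # weighted (squared) Frobenius norm
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```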