With the increasing popularity of using autonomous underwater vehicles (AUVs) to gather large quantities of Synthetic Aperture Sonar (SAS) seafloor imagery, the burden on human operators to identify targets in these seafloor images has increased significantly. Existing methods of automated target detection can achieve near-perfect detection rates, but often produce a high ratio of false positives. Thus, it is desirable to find features that discriminate between targets and high-confidence false alarms. In this paper, we examine several classical feature extraction methods and how well their generated features separate two classes of image tiles: those containing targets and those containing no targets. To quantify the ability of a set of features to separate these classes, we measure the region-based cross validation accuracy of a linear SVM trained on the features in question, extracted from SAS imagery provided to us by the U.S. Navy. We find that these general feature extraction methods show potential for the ATR problem, suggesting that further research is warranted.
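The region-based cross validation mentioned above can be sketched as leave-one-region-out splitting, so that tiles from the same survey area never appear in both the training and testing sets. The sketch below is a minimal, dependency-free illustration: the region names and toy feature vectors are invented, and a nearest-centroid rule stands in for the linear SVM (in practice one would train an actual SVM, e.g. scikit-learn's `LinearSVC`, on the extracted features).

```python
def nearest_centroid(train, test_x):
    # Per-class mean feature vectors stand in for a trained linear SVM so
    # the sketch stays dependency-free; the paper trains an actual SVM.
    sums, counts = {}, {}
    for x, y in train:
        if y not in sums:
            sums[y] = [0.0] * len(x)
            counts[y] = 0
        sums[y] = [s + v for s, v in zip(sums[y], x)]
        counts[y] += 1
    centroids = {y: [s / counts[y] for s in sums[y]] for y in sums}
    return [min(centroids, key=lambda y: sum((a - b) ** 2
                for a, b in zip(x, centroids[y]))) for x in test_x]

def region_cv_accuracy(samples):
    # samples: list of (features, label, region). Hold out one region at a
    # time so tiles from the same area never straddle the train/test split.
    regions = sorted({r for _, _, r in samples})
    correct = total = 0
    for held_out in regions:
        train = [(x, y) for x, y, r in samples if r != held_out]
        test = [(x, y) for x, y, r in samples if r == held_out]
        preds = nearest_centroid(train, [x for x, _ in test])
        correct += sum(p == y for p, (_, y) in zip(preds, test))
        total += len(test)
    return correct / total

# Toy features (invented): target tiles cluster near (1, 1), clutter near (0, 0).
data = [([1.0, 1.1], 1, "A"), ([0.9, 1.0], 1, "B"), ([1.1, 0.9], 1, "C"),
        ([0.1, 0.0], 0, "A"), ([0.0, 0.2], 0, "B"), ([0.2, 0.1], 0, "C")]
print(region_cv_accuracy(data))
```

The key design point is that the split is by region rather than by random tile, which prevents nearly identical tiles from one seafloor area from inflating the reported accuracy.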
In underwater synthetic aperture sonar (SAS) imagery, there is a need for accurate target recognition algorithms. Automated detection of underwater objects has many applications, not least of which is the safe removal of dangerous explosives. In this paper, we discuss experiments on a deep learning approach to binary classification of target and non-target SAS image tiles. Using a fused anomaly detector, the pixels in each SAS image are narrowed down into regions of interest (ROIs), from which small target-sized tiles are extracted; this tile data set was created prior to the work described in this paper. Our objective is to carry out extensive tests of the classification accuracy of deep convolutional neural networks (CNNs) using location-based cross validation. We discuss the results of varying network architectures, hyperparameters, and loss and activation functions, in conjunction with an analysis of training and testing set configuration. We also analyze these network setups in depth rather than comparing classification accuracy alone. The approach is tested on a collection of SAS imagery.
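The building block behind the CNNs discussed above is a convolution followed by an activation function. The sketch below is not the paper's architecture; it is a minimal pure-Python illustration of a single-channel "valid" convolution (really cross-correlation, as in most deep learning frameworks) followed by a ReLU activation, with an invented edge kernel and toy tile.

```python
def conv2d_valid(image, kernel):
    # Single-channel "valid" 2D cross-correlation: slide the kernel over
    # the tile and accumulate elementwise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    # ReLU, one common choice among the activation functions compared.
    return [[max(0.0, v) for v in row] for row in feature_map]

# Invented example: a vertical-edge kernel responds at the boundary
# between the dark left half and the bright right half of the tile.
tile = [[0, 0, 1, 1]] * 4
edge = [[-1, 1], [-1, 1]]
fmap = relu(conv2d_valid(tile, edge))
print(fmap)
```

A real framework (e.g. PyTorch's `nn.Conv2d`) adds learned multi-channel kernels, biases, and pooling, but each layer reduces to this same sliding-window operation.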
Fractal analysis of an image is a mathematical approach to generating surface-related features from an image or image tile that can be applied to image segmentation and object recognition. In undersea target countermeasures, the targets of interest can appear as anomalies in a variety of contexts, i.e., visually distinct textures on the seafloor. In this paper, we evaluate the use of fractal dimension as a primary feature, and related characteristics as secondary features, extracted from synthetic aperture sonar (SAS) imagery for the purpose of target detection. We develop three separate methods for computing fractal dimension. Tiles containing targets are compared to tiles drawn from the same background textures without targets. The fractal dimension methods are tested on how well their features distinguish targets from false alarms within the same contexts. These features are evaluated using a set of image tiles extracted from a SAS data set generated by the U.S. Navy in conjunction with the Office of Naval Research. We find that all three methods perform well in the classification task, with a fractional Brownian motion model performing best among the individual methods. We also find that the secondary features are just as useful, if not more so, for classifying false alarms vs. targets. The best overall classification accuracy in our experiments is obtained when the features from all three methods are combined into a single feature vector.
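Box counting is one standard way to estimate fractal dimension and gives a feel for what a fractal-dimension feature measures; the paper's three specific methods (including the fractional Brownian motion model) are not reproduced here, and the grid size and test pattern below are purely illustrative.

```python
import math

def box_count_dimension(points, size):
    # Estimate the box-counting dimension of a set of occupied pixels in a
    # size x size grid (size a power of two). For each box side s, count
    # N(s), the number of s x s boxes containing at least one occupied
    # pixel; the dimension is the slope of log N(s) against log(1/s).
    logs = []
    s = 1
    while s < size:
        boxes = {(x // s, y // s) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
        s *= 2
    # Least-squares slope of the log-log points.
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x, _ in logs))

# Sanity check: a completely filled 64 x 64 tile behaves as a 2-D
# surface, so its box-counting dimension is 2.
filled = [(x, y) for x in range(64) for y in range(64)]
print(round(box_count_dimension(filled, 64), 3))
```

For real sonar tiles the occupied set would come from thresholding or level sets of the intensity image, and rough seafloor textures yield non-integer dimensions between 2 and 3 under surface-based variants of this estimator.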
Automated anomaly and target detection are commonly used as a prescreening step within a larger target detection and classification framework to find regions of interest for further analysis. A number of anomaly and target detection algorithms have been developed in the literature for target detection in Synthetic Aperture Sonar (SAS) imagery. In this paper, a comparison of two anomaly detection algorithms and one target detection algorithm for SAS target detection is presented. In the comparison, each method is tested on a large set of real sonar imagery, and results are evaluated using receiver operating characteristic (ROC) curves. The results are compiled and quantitatively presented to highlight the strengths and weaknesses of the various approaches across seafloor environments and on particular target shapes and types.
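The ROC evaluation used above reduces, for a single summary number, to the area under the curve (AUC), which can be computed directly from detector scores via the rank (Mann-Whitney) formulation: the probability that a randomly chosen target outscores a randomly chosen non-target. A minimal sketch with invented scores:

```python
def roc_auc(scores, labels):
    # AUC via the rank formulation: the fraction of (positive, negative)
    # pairs in which the positive scores higher, counting ties as one half.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented detector confidences for six ROIs; 1 = target, 0 = clutter.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(roc_auc(scores, labels))
```

This pairwise form is equivalent to integrating the ROC curve itself and is convenient when comparing several detectors on the same set of scored detections.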
The ability to discern the characteristics of the seafloor has many applications. Because optical visibility underwater is minimal, synthetic aperture sonar (SAS) is used to produce a texture map of the seabed below. In this paper, we discuss an approach to detecting targets across varying seafloor contexts. The approach begins with one or more anomaly-detecting prescreeners that use minimal information about targets and that can be applied under various seafloor conditions. The outputs of these anomaly detectors are then fused and manipulated in multiple experiments to bolster and account for unique target characteristics. Suppressed hits, or peaks in the resultant confidence surface, are further processed for scoring. Receiver operating characteristic (ROC) curves and the areas under them make detection effectiveness straightforward to compare. Particular attention is paid to performance with respect to seafloor type across various locations. The approach is tested on a SAS data collection conducted by the U.S. Navy.
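The pipeline above (fuse the prescreeners' confidence surfaces, then extract peaks as candidate hits for scoring) can be sketched as follows. The abstract does not specify the fusion rule; pixel-wise averaging and a simple local-maximum peak test are illustrative stand-ins, and the toy surfaces are invented.

```python
def fuse_average(surfaces):
    # Illustrative fusion rule: average several detectors' confidence
    # surfaces pixel by pixel (one of many possible combination rules).
    h, w = len(surfaces[0]), len(surfaces[0][0])
    return [[sum(s[i][j] for s in surfaces) / len(surfaces)
             for j in range(w)] for i in range(h)]

def find_peaks(surface, threshold):
    # Report strict local maxima above a confidence threshold; these
    # "hits" are the candidate detections passed on for scoring.
    h, w = len(surface), len(surface[0])
    peaks = []
    for i in range(h):
        for j in range(w):
            v = surface[i][j]
            if v < threshold:
                continue
            neighbours = [surface[a][b]
                          for a in range(max(0, i - 1), min(h, i + 2))
                          for b in range(max(0, j - 1), min(w, j + 2))
                          if (a, b) != (i, j)]
            if all(v > n for n in neighbours):
                peaks.append((i, j))
    return peaks

# Two toy 3 x 3 confidence surfaces that agree on one strong hit.
a = [[0.1, 0.2, 0.1],
     [0.2, 0.9, 0.2],
     [0.1, 0.2, 0.1]]
b = [[0.0, 0.1, 0.0],
     [0.1, 0.7, 0.1],
     [0.0, 0.1, 0.0]]
print(find_peaks(fuse_average([a, b]), 0.5))
```

Thresholding before the neighbourhood test keeps weak background fluctuations from ever being considered, which is the role the suppression step plays before scoring.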