**Publications** (147)


*a priori*. Unfortunately, in real-world applications with no ground truth available, its effectiveness is generally assessed by visual inspection, which is the only means of evaluating performance qualitatively; in this case background information is an important aid in helping image analysts interpret anomaly detection results. Interestingly, this issue has never been explored in anomaly detection. This paper investigates the effect of background on anomaly detection via various degrees of background suppression. It decomposes anomaly detection into a two-stage process, where the first stage is background suppression, which enhances anomaly contrast against the background, followed by a matched filter, which increases anomaly detectability by intensity. In order to see how background suppression changes progressively with data samples, causal anomaly detection is further developed to show how an anomaly detector performs background suppression sample by sample with sample-varying spectral correlation. Finally, a 3D ROC analysis is used to evaluate the effect of background suppression on anomaly detection.
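The two-stage view described here (background suppression followed by matched filtering) can be sketched with the classic global RX detector; the cube shape, regularizer, and implanted anomaly below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def rx_detector(cube):
    """Global RX: background suppression (mean removal) + matched filtering."""
    H, W, L = cube.shape
    X = cube.reshape(-1, L).astype(float)          # pixels as rows
    Z = X - X.mean(axis=0)                         # stage 1: suppress background
    K = np.cov(Z, rowvar=False)                    # sample covariance
    K_inv = np.linalg.inv(K + 1e-6 * np.eye(L))    # regularized inverse (assumption)
    # stage 2: matched filtering -> Mahalanobis distance per pixel
    scores = np.einsum('ij,jk,ik->i', Z, K_inv, Z)
    return scores.reshape(H, W)

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 5))
cube[4, 4] += 10.0                                 # implanted anomaly
scores = rx_detector(cube)
print(np.unravel_index(scores.argmax(), scores.shape))
```

The detector score is large exactly where a pixel deviates from the suppressed background, which is how the anomaly contrast enhancement discussed above shows up in practice.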

*p*. So far, all techniques are eigen-based approaches which use eigenvalues or eigenvectors to estimate the value of *p*. However, when eigenvalues are used to estimate VD, as in the Harsanyi-Farrand-Chang method or hyperspectral signal subspace identification by minimum error (HySime), there is no way to find what the spectrally distinct signatures are. On the other hand, if eigenvectors/singular vectors are used to estimate VD, as in the maximal orthogonal complement algorithm (MOCA), the eigenvectors/singular vectors do not represent real signal sources. Most importantly, currently available methods used to estimate VD run into two major issues. One is that the value of VD is fixed at a constant. The other is the lack of a means of finding signal sources of interest. As a matter of fact, the number of spectrally distinct signatures defined by VD should adapt its value to the various target signal sources of interest. For example, the number of endmembers should be different from the number of anomalies. In this paper we develop a second-order statistics approach to determining the value of VD and the virtual endmember basis.

*p* endmember classes, where the value of *p* can be determined by virtual dimensionality (VD). We further develop an endmember identification algorithm to select true endmembers from these *p* endmembers. Our proposed technique thus comprises three stages: it first uses PPI to produce a set of endmember candidates, then develops a clustering algorithm to group the PPI-generated endmember candidates into *p* endmember classes, and finally concludes by designing an algorithm to extract the true endmembers from the *p* endmember classes.

2nd-order statistics, compared to R-RXD, which is specified by statistics of the first two orders (including the sample mean as the first-order statistics), the values determined by K-RXD and R-RXD will be different. Experiments are conducted in comparison with widely used eigen-based approaches.

ℓ1 norm of the anomaly subspace subject to a decomposition of the data space into background and anomaly subspaces. By virtue of such a background-anomaly decomposition, the commonly used RX detector can be implemented so that anomalies can be separated in the anomaly subspace specified by a sparse matrix. Experimental results demonstrate that the background-anomaly subspace decomposition can indeed improve and enhance RXD performance.

**R** by suppressing the background, thus enhancing detection of targets of interest. In many real-world problems, implementing target detection on a timely basis is crucial, specifically for moving targets. However, since calculation of the sample correlation matrix **R** needs the complete data set prior to its use in detection, CEM cannot be implemented as a real-time processing algorithm. To resolve this dilemma, the sample correlation matrix **R** must be replaced with a causal sample correlation matrix formed by only those data samples that have been visited and the data sample currently being processed. This causality is a prerequisite to real-time processing. By virtue of such causality, designing and developing a real-time processing version of CEM becomes feasible. This paper presents a progressive CEM (PCEM) where the causal sample correlation matrix can be updated sample by sample. Accordingly, PCEM allows CEM to be implemented as a causal CEM (C-CEM) as well as a real-time (RT) CEM via a recursive update equation in real time.
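The CEM filter summarized above can be sketched in a few lines: the filter w = R⁻¹d / (dᵀR⁻¹d) minimizes output energy subject to the unity constraint wᵀd = 1. The data and target signature below are synthetic stand-ins.

```python
import numpy as np

def cem(X, d):
    """Constrained energy minimization: w = R^{-1} d / (d^T R^{-1} d)."""
    N = X.shape[0]
    R = (X.T @ X) / N                       # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    w = Rinv_d / (d @ Rinv_d)               # filter satisfying w^T d = 1
    return X @ w                            # detector output per pixel

rng = np.random.default_rng(0)
d = np.array([1.0, 2.0, 3.0])               # hypothetical target signature
background = rng.normal(0.0, 1.0, size=(200, 3))
X = np.vstack([background, d])              # last pixel is a pure target
y = cem(X, d)
print(y[-1])                                # constrained to 1 on the target
```

Because of the unity constraint, a pure target pixel always yields output exactly 1, while background energy is minimized; the causal/progressive versions discussed above differ only in building R from the samples visited so far.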

**K** used in RXD was replaced by the sample correlation matrix **R**(*n*), which can be updated up to the data sample currently being visited, **r**_{n}. However, such a C-RXD is not a real-time processing algorithm, since the inverse of the matrix **R**(*n*), **R**^{-1}(*n*), is recalculated from all data samples up to **r**_{n}. In order to implement C-RXD in real time, the matrix **R**(*n*) must be carried out in such a fashion that **R**^{-1}(*n*) can be updated only through the previously calculated **R**^{-1}(*n*-1) and the data sample currently being processed, **r**_{n}. This paper develops a real-time processing version of C-RXD, called the real-time causal anomaly detector (RT-C-RXD), which is derived from the concept of Kalman filtering via a causal update equation using only the innovations information provided by the pixel currently being processed, without re-processing previous pixels.
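The recursive inverse update described here can be sketched with the Sherman-Morrison identity, which updates the inverse of a rank-one-modified matrix without re-inversion; the seeding regularizer and toy data are illustrative assumptions (the paper's derivation proceeds via Kalman-filter innovations, which this sketch does not reproduce).

```python
import numpy as np

def sherman_morrison_update(A_inv, r):
    """Inverse of (A + r r^T), given A^{-1} and the new sample r."""
    Ar = A_inv @ r
    return A_inv - np.outer(Ar, Ar) / (1.0 + r @ Ar)

rng = np.random.default_rng(0)
pixels = rng.normal(size=(50, 4))

# Track G(n) = (sum_{i<=n} r_i r_i^T)^{-1}; seed with a tiny regularizer eps*I.
eps = 1e-8
G = np.eye(4) / eps
for r in pixels:
    G = sherman_morrison_update(G, r)

# Causal R^{-1}(n) = n * G(n); compare against direct batch inversion.
n = len(pixels)
R_inv_recursive = n * G
R_inv_direct = np.linalg.inv(pixels.T @ pixels / n)
print(np.allclose(R_inv_recursive, R_inv_direct, atol=1e-4))
```

Each update costs O(L²) per pixel instead of the O(L³) of a full re-inversion, which is what makes the sample-by-sample causal processing feasible.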

*p*, and then finding a technique to reduce the original data space to a low-dimensional data space with dimensionality specified by *p*. This paper introduces a new concept, dynamic dimensionality reduction (DDR), which considers the parameter *p* as a variable, varying its value to make *p* adaptive, in contrast to the commonly used DR, referred to as static dimensionality reduction (SDR), in which the parameter *p* is fixed at a constant value. In order to materialize DDR, another new concept, referred to as progressive DR (PDR), is also developed so that DR can be performed progressively to adapt to the variable data dimensionality determined by the varying value of *p*. The advantages of DDR over SDR are demonstrated through experiments conducted on hyperspectral image classification.

*a priori*. A second is the use of random initial endmembers to initialize N-FINDR, which results in inconsistent final sets of extracted endmembers. A third is its very expensive computational cost, caused by an exhaustive search. While the first two issues can be resolved by a recently developed concept, virtual dimensionality (VD), and by custom-designed initialization algorithms, respectively, the third issue remains challenging. This paper addresses the latter issue by re-designing N-FINDR to generate one endmember at a time sequentially, in a successive fashion, to ease computational complexity. The resulting algorithm is called SeQuential N-FINDR (SQ N-FINDR), as opposed to the original N-FINDR, referred to as SiMultaneous N-FINDR (SM N-FINDR), which generates all endmembers simultaneously. Two variants of SQ N-FINDR can be further derived to reduce computational complexity. Interestingly, experimental results show that SQ N-FINDR can perform as well as SM N-FINDR if the initial endmembers are appropriately selected.
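The simplex-volume criterion N-FINDR maximizes, and a single sequential replacement pass in the spirit of SQ N-FINDR, can be sketched on toy two-band data; the pass structure below is a simplification for illustration, not the paper's exact variants.

```python
import numpy as np
from math import factorial

def simplex_volume(E):
    """Volume of the simplex with vertices E: p vectors of dimension p-1."""
    p = E.shape[0]
    M = (E[1:] - E[0]).T                     # (p-1) x (p-1) edge matrix
    return abs(np.linalg.det(M)) / factorial(p - 1)

def sq_nfindr_pass(X, E):
    """One sequential pass: grow each endmember in turn to maximize volume."""
    E = E.copy()
    for j in range(E.shape[0]):
        for x in X:
            trial = E.copy()
            trial[j] = x
            if simplex_volume(trial) > simplex_volume(E):
                E = trial
    return E

# Toy scene: 2-band pixels inside a triangle with known vertices.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(3), size=200)      # random abundances
X = np.vstack([w @ verts, verts])            # include the pure vertices
E = sq_nfindr_pass(X, X[:3])                 # arbitrary initial endmembers
print(simplex_volume(E))                     # recovers the full triangle, 0.5
```

Replacing one vertex at a time is what lets the sequential variants avoid the simultaneous exhaustive search over all endmember combinations.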

*p* must be sufficient for data analysis. Unfortunately, in MultiSpectral Imagery (MSI) *p* seems to be too small, while *p* in HyperSpectral Imagery (HSI) seems too large. Interestingly, very little has been reported on how to deal with this issue when *p* is too small or too large. This paper investigates this issue. When *p* is too small, two approaches are developed to mitigate the problem. One is the Band Expansion Process (BEP), which augments the original band dimensionality by producing additional bands via a set of nonlinear functions. The other is a kernel-based approach, referred to as Kernel-based PCA (K-PCA), which maps features in the original data space to a higher-dimensional feature space via a set of nonlinear kernels. While both approaches attempt to resolve the issue of a small *p* using nonlinear functions, their design rationales are completely different; in particular, they are not correlated. For a large *p*, as in HSI, the recently developed Virtual Dimensionality (VD) can be used, where VD was originally developed to estimate the number of spectrally distinct signatures. If we assume one spectrally distinct signature can be accommodated by one component, the value of *p* can actually be determined by VD. Finally, experiments are conducted to explore and evaluate the utility of component analyses, specifically PCA and ICA, using BEP and K-PCA for MSI and VD for HSI.
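A minimal sketch of a Band Expansion Process is shown below; the particular nonlinear functions used (squares and pairwise band products) are one common choice and an assumption here, not necessarily the paper's exact set.

```python
import numpy as np
from itertools import combinations

def band_expansion(X):
    """X: (N, L) pixels. Returns (N, 2L + C(L,2)) nonlinearly expanded pixels."""
    squares = X ** 2                          # auto-correlated bands
    crosses = np.stack([X[:, i] * X[:, j]     # cross-correlated bands
                        for i, j in combinations(range(X.shape[1]), 2)], axis=1)
    return np.hstack([X, squares, crosses])

X = np.arange(6.0).reshape(2, 3)              # 2 pixels, 3 original bands
Xe = band_expansion(X)
print(Xe.shape)                               # (2, 3 + 3 + 3) = (2, 9)
```

The expanded bands give component analyses such as PCA or ICA more dimensions to work with when the original *p* is too small.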

*k*th moment, and statistical independency, specified by statistics of infinite order and measured by mutual information. In order to substantiate the proposed statistics-based EEAs, experiments using synthetic and real images are conducted for demonstration.

2nd-order statistics, least squares error (LSE), also specified by 2nd-order statistics (variance), 3rd-order statistics (skewness), 4th-order statistics (kurtosis), the *k*th moment, entropy specified by statistics of infinite order, and statistical independency measured by mutual information. Of particular interest are Independent Component Analysis-based EEAs, which use statistics of various orders such as variance, skewness, kurtosis, the *k*th moment, and infinite orders including entropy and divergence. In order to substantiate the proposed statistics-based EEAs, experiments using synthetic and real images are conducted in comparison with several popular and well-known EEAs such as the Pixel Purity Index (PPI) and the N-finder algorithm (N-FINDR).

*p* required for an endmember extraction algorithm (EEA) to generate. Unfortunately, this issue has been overlooked and avoided by making an empirical assumption without justification. However, it has been shown that an appropriate selection of *p* is critical to success in extracting desired endmembers from image data. This paper explores methods available in the literature that can be used to estimate the value of *p*. These include the commonly used eigenvalue-based energy method, An Information Criterion (AIC), Minimum Description Length (MDL), the Gershgorin-radii-based method, Signal Subspace Estimation (SSE), and the Neyman-Pearson detection method from detection theory. In order to evaluate the effectiveness of these methods, two sets of experiments are conducted for performance analysis. The first set consists of synthetic image-based simulations, which allow us to evaluate performance with *a priori* knowledge, while the second set comprises real hyperspectral image experiments, which demonstrate the utility of these methods in real applications.
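The eigenvalue-based energy idea listed first can be sketched as follows; the threshold rule used here (a fixed multiple of a known noise variance) is an illustrative assumption, which AIC, MDL, and the Neyman-Pearson method replace with principled model-order criteria.

```python
import numpy as np

def estimate_p(X, noise_var):
    """Count covariance eigenvalues standing above the noise floor."""
    Xc = X - X.mean(axis=0)
    lam = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]
    return int(np.sum(lam > 10.0 * noise_var))     # threshold rule: assumption

# Synthetic scene: 3 signatures mixed into 6 bands plus weak white noise.
S = np.array([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
abund = rng.uniform(size=(400, 3))
X = abund @ S + np.sqrt(1e-4) * rng.normal(size=(400, 6))
print(estimate_p(X, 1e-4))
```

On this toy scene the three signal eigenvalues sit orders of magnitude above the noise eigenvalues, so the count recovers the true number of signatures.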

*p*, required to be generated, and the other is the generation of initial endmembers. Since most endmember extraction algorithms (EEAs) use randomly generated vectors as their initial endmembers, their final generated endmembers are generally determined by these random initial endmembers. As a result, a different set of random initial endmembers may well produce a different final set of desired endmembers. This paper converts this disadvantage into an advantage and further resolves the above-mentioned two issues. Due to the random nature of initial endmembers, the proposed idea is to implement an EEA as a random algorithm, so that a single run using a random set of initial endmembers is considered one realization of the random algorithm. As a result, if an EEA is run several times with different sets of random initial endmembers, the intersection of the endmembers generated across all runs should contain the desired endmembers. The EEA is then terminated when the produced intersections converge to the same set of desired endmembers. In this case, there is no need to determine *p*. An EEA implemented in such a manner is called an automatic EEA (AEEA). Two commonly used EEAs, the pixel purity index (PPI) and the N-finder algorithm (N-FINDR), are extended to AEEAs, along with a new automatic ICA-based EEA. Experimental results demonstrate that the AEEAs perform at least as well as their counterparts.

*p*, a recently developed concept called virtual dimensionality (VD) is used to estimate *p*. Once *p* is determined, a set of *p* desired bands can be selected by LCBS. Finally, experiments are conducted to substantiate the proposed LCBS.

*a priori* sample spectral correlation (PR-SSC) and *a posteriori* SSC (PS-SSC) are developed to account for spectral variability within real data to achieve better target discrimination and identification. While the former can be used to derive a family of *a priori* hyperspectral measures via orthogonal subspace projection (OSP) to eliminate interfering effects caused by undesired signatures, the latter results in a family of *a posteriori* hyperspectral measures that include the sample covariance/correlation matrix as *a posteriori* information to increase discrimination and identification ability. Interestingly, some well-known measures such as Euclidean distance (ED) and the spectral angle mapper (SAM) can be shown to be special cases of the proposed PR-SSC and PS-SSC hyperspectral measures.

2nd-order statistics-based method, the ICA-AQA is a high-order statistics-based technique. Second, due to its use of statistical independence, it is generally thought that ICA cannot be implemented as a constrained method; the ICA-AQA shows otherwise. Third, in order for the ACLSMA to perform abundance quantification, it requires an algorithm to find image endmembers first, followed by an abundance-constrained algorithm for quantification. As opposed to such a two-stage process, the ICA-AQA can accomplish endmember extraction and abundance quantification simultaneously in a one-shot operation. Experimental results demonstrate that the ICA-AQA performs at least comparably to abundance-constrained methods.

P_{D} versus the false alarm probability, P_{F}. Unfortunately, such a two-dimensional (2D) (P_{D}, P_{F})-based ROC curve does not factor in the detected concentration of an agent signal, which is a crucial parameter in chemical/biological agent detection. The proposed 3D ROC analysis is developed from such a need. It includes an additional parameter, referred to as the threshold *t*, which is used to threshold the detected agent signal concentration. Consequently, a different value of *t* results in a different 2D ROC curve. In order to take the thresholding factor *t* into account, a 3D ROC curve is derived and plotted based on three parameters, (P_{D}, P_{F}, *t*). From the 3D ROC curve, three 2D ROC curves can also be derived. One is the conventional 2D (P_{D}, P_{F})-ROC curve. Another is a 2D (P_{D}, *t*)-ROC curve, which describes the relationship between P_{D} and the threshold value *t*. A third is a 2D (P_{F}, *t*)-ROC curve, which shows the effect of the threshold value *t* on P_{F}. The utility of the proposed 3D ROC analysis is demonstrated by detection software developed by UMBC for the tickets used in HHA for water monitoring.
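The 3D ROC construction can be sketched by sweeping the threshold *t* over the detector output and recording (P_D, P_F, t) triplets, from which the three 2D curves follow by projection; the scores and labels below are synthetic stand-ins for detected concentrations.

```python
import numpy as np

def roc_3d(scores, labels, thresholds):
    """Return arrays (P_D, P_F) evaluated at each threshold t."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=bool)
    pd = np.array([(scores[labels] >= t).mean() for t in thresholds])
    pf = np.array([(scores[~labels] >= t).mean() for t in thresholds])
    return pd, pf

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2.0, 1.0, 100),   # target pixels
                         rng.normal(0.0, 1.0, 900)])  # background pixels
labels = np.concatenate([np.ones(100), np.zeros(900)])
t = np.linspace(scores.min(), scores.max(), 50)
P_D, P_F = roc_3d(scores, labels, t)
# The 3D ROC curve is the set of triplets (P_D[i], P_F[i], t[i]); dropping one
# coordinate yields the (P_D, P_F), (P_D, t), and (P_F, t) 2D curves.
print(P_D[0], P_F[0])
```

At the lowest threshold everything is declared a detection (P_D = P_F = 1), and both probabilities decrease monotonically as *t* grows, which is the behavior the two threshold-indexed 2D curves capture.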

*L* spectral dimensions, the *L*-dimensional volume formed by a simplex with vertices specified by the purest pixels is always larger than that formed by any other combination of pixels. Although the algorithm has been successfully used in various applications, it does not provide a mechanism to determine how many endmembers are needed. In this work, we use the recently developed concept of virtual dimensionality (VD) to determine how many endmembers need to be generated by N-FINDR. Another issue in implementing the algorithm is that N-FINDR starts with a random set of pixels from the data as the initial endmember set, which cannot be selected by users at their discretion. Since the algorithm does not perform an exhaustive search, it is very sensitive to the selection of initial endmembers, which can affect not only the algorithm's convergence rate but also the final results. In order to resolve this dilemma, we use an endmember initialization algorithm (EIA) to select an appropriate set of endmembers for the initialization of N-FINDR. Experiments show that, when N-FINDR is implemented in conjunction with such EIA-generated initial endmembers, the number of replacements during the search process can be substantially reduced.

*a priori*, and interferers, which are unknown interfering sources. By virtue of such signal decomposition, we can show that the TCIMF is actually a generalization of the OSP and CEM. In particular, we investigate the assumptions made for the OSP and CEM in terms of these three types of signal sources and exploit insights into their filter designs. As shown in this paper, the OSP and the CEM perform the same tasks by operating on different levels of information, and both can be viewed as special cases of the TCIMF.

x_{i} and x_{j} denote two hyperspectral image pixel vectors with their corresponding spectra specified by s_{i} and s_{j}. SAM is the spectral angle of x_{i} and x_{j} and is defined by SAM(s_{i}, s_{j}). Similarly, SID measures the information divergence between x_{i} and x_{j} and is defined by SID(s_{i}, s_{j}). The new measure, referred to as the (SID, SAM)-mixed measure, has two variants, defined by SID(s_{i}, s_{j}) × tan[SAM(s_{i}, s_{j})] and SID(s_{i}, s_{j}) × sin[SAM(s_{i}, s_{j})], where tan[SAM(s_{i}, s_{j})] and sin[SAM(s_{i}, s_{j})] are the tangent and the sine of the angle between the vectors x_{i} and x_{j}. The developed (SID, SAM)-mixed measure combines the strengths of both SID and SAM in spectral discriminability. In order to demonstrate its utility, a comparative study is conducted among the new measure, SID, and SAM, in which the discriminatory power of the (SID, SAM)-mixed measure is shown to be significantly improved over that of SID and SAM.
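The measures compared above can be sketched directly from their definitions; normalizing spectra to probability vectors for SID follows the standard formulation, and the sample spectra below are illustrative.

```python
import numpy as np

def sam(si, sj):
    """Spectral angle mapper: angle between two spectra."""
    cos = si @ sj / (np.linalg.norm(si) * np.linalg.norm(sj))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sid(si, sj):
    """Spectral information divergence: symmetric KL between normalized spectra."""
    p, q = si / si.sum(), sj / sj.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sid_sam_tan(si, sj):
    """(SID, SAM)-mixed measure, tangent variant."""
    return sid(si, sj) * np.tan(sam(si, sj))

def sid_sam_sin(si, sj):
    """(SID, SAM)-mixed measure, sine variant."""
    return sid(si, sj) * np.sin(sam(si, sj))

s1 = np.array([0.2, 0.4, 0.4])                 # illustrative spectra
s2 = np.array([0.3, 0.3, 0.4])
print(sam(s1, s1), sid(s1, s1))                # both ~0 for identical spectra
print(sid_sam_tan(s1, s2), sid_sam_sin(s1, s2))
```

Multiplying SID by a trigonometric function of SAM makes the mixed measure shrink toward zero for similar spectra and grow for dissimilar ones faster than either measure alone, which is the source of the improved discriminatory power claimed above.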

Generalized constrained energy minimization approach to subpixel detection for multispectral imagery


**Proceedings Volume Editor** (4)
