KEYWORDS: Signal to noise ratio, Sensors, Data compression, Matrices, Distortion, Interference (communication), Doppler effect, Wavelets, Data modeling, Quantization
The Complex Ambiguity Function (CAF) used in emitter location measurement is a 2-dimensional complex-valued
function of time-difference-of-arrival (TDOA) and frequency-difference-of-arrival (FDOA). In classical TDOA/FDOA
systems, pairs of sensors share data (using compression) to compute the CAF, which is then used to estimate the
TDOA/FDOA for each pair; the sets of TDOA/FDOA measurements are then transmitted to a common site where they
are fused into an emitter location. However, some recently published methods for improved emitter location propose that, after each pair of sensors computes the CAF, the entire CAF should be shared rather than just the extracted TDOA/FDOA estimates. This creates a need for methods to compress the CAFs. Because a CAF is a 2-D function, it can be thought of as a form of image, albeit a complex-valued one. We apply, and appropriately modify, the Embedded Zerotree Wavelet (EZW) algorithm to compress the ambiguity function. Several techniques for exploiting the correlation between the real and imaginary parts of the ambiguity function are analyzed, and the approaches are compared. The impact of such compression on the overall location accuracy is assessed via simulations.
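As a concrete reference point for what is being compressed, the CAF for a sensor pair can be computed by correlating one intercepted signal against time- and frequency-shifted versions of the other. The following Python sketch is a brute-force illustration only; the function name, the circular time shift, and the FDOA search band are our own simplifying choices, not the paper's implementation.

```python
import numpy as np

def caf(s1, s2, fs, max_lag, n_freqs):
    """Brute-force discrete cross-ambiguity function (CAF) surface:
    CAF(tau, nu) = sum_t s1[t] * conj(s2[t + tau]) * exp(-j*2*pi*nu*t)."""
    n = len(s1)
    t = np.arange(n) / fs
    lags = np.arange(-max_lag, max_lag + 1)        # integer-sample TDOAs
    freqs = np.linspace(-fs / 4, fs / 4, n_freqs)  # assumed FDOA search band
    surface = np.zeros((len(lags), n_freqs), dtype=complex)
    for i, lag in enumerate(lags):
        # circular shift used for brevity; real systems window the overlap
        prod = s1 * np.conj(np.roll(s2, -lag))
        for k, nu in enumerate(freqs):
            surface[i, k] = np.sum(prod * np.exp(-2j * np.pi * nu * t))
    return lags / fs, freqs, surface
```

The real and imaginary parts of the returned surface are the two correlated "images" that a modified EZW coder would operate on; practical implementations replace the inner loop with an FFT across the lag products.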
Early work in source location using time-difference-of-arrival/frequency-difference-of-arrival (TDOA/FDOA) focused on locating acoustic sources, while later work focused on locating electromagnetic sources. The key difference lies in the signal model assumptions: a wide-sense-stationary (WSS) Gaussian process model is widely used in the acoustic case but is not appropriate in the electromagnetic case. The Fisher information (FI) is fundamentally different for the two scenarios and leads to different distortion metrics for data compression algorithms that seek to maximize the FI for a given data rate. We discuss the philosophical impact of this, relevant to the following question: having collected a single set of data and wanting to do the best "job" for that data, should it matter whether the data is viewed as coming from a WSS random process?
This work shows that one must be careful when using a random signal model. If one takes the operational rate-distortion view, the goal of compression is to adapt the algorithm to the specific data observed. This modern view contrasts with classical rate-distortion theory, in which the distortion measure includes an averaging over the ensemble. We assert that for the operational rate-distortion approach with FI as the distortion measure, one should not use a random signal model.
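To make the contrast concrete, compare the two textbook Fisher information expressions for a scalar parameter θ (stated here as standard results, not the paper's derivation): one for a known deterministic signal in additive white Gaussian noise, the other for a zero-mean WSS Gaussian process observed over a long record T (the Whittle approximation).

```latex
% Deterministic signal s(t;\theta) in AWGN with two-sided PSD N_0/2:
I_{\mathrm{det}}(\theta) = \frac{2}{N_0}\int
  \left(\frac{\partial s(t;\theta)}{\partial \theta}\right)^{2} dt

% Zero-mean WSS Gaussian process with power spectrum S(f;\theta),
% long observation time T (Whittle approximation):
I_{\mathrm{wss}}(\theta) \approx \frac{T}{2}\int
  \left(\frac{\partial \ln S(f;\theta)}{\partial \theta}\right)^{2} df
```

The first depends on the realized waveform itself, the second only on the model's spectrum; an FI-based distortion measure built from one will generally differ from one built from the other, which is the crux of the question posed above.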
Data compression ideas can be extended to assess the data quality across multiple sensors in order to manage the network of sensors and optimize the location accuracy subject to communication constraints. From an unconstrained-resources viewpoint it is desirable to use the complete set of deployed sensors; however, that generally results in an excessive data volume. We have previously presented results on selecting pre-paired sensors; we now extend those results to enable optimal joint pairing/selection of sensors.
Pairing and selecting sensors to participate in sensing is crucial to satisfying trade-offs between accuracy and time-line requirements. We propose two methods that use Fisher information to determine sensor pairing/selection. The first method optimally determines pairings as well as selections of pairs, under the constraint that no sensors are shared between pairs. The second method allows sensors to be shared between pairs. In the first method it is simple to evaluate the Fisher information but challenging to make the optimal selection of sensors; in the second method the opposite is true: evaluating the Fisher information is more challenging, but making the optimal selection is simple.
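For intuition about the first method's search space, a minimal exhaustive sketch in Python follows; the scalar per-pair Fisher information values, the function name, and the brute-force enumeration are illustrative assumptions, not the paper's optimal algorithm.

```python
import itertools

def best_disjoint_pairing(fi_pairs, n_sensors, n_pairs):
    """Exhaustively choose n_pairs sensor pairs with no sensor shared,
    maximizing total (scalar) Fisher information.

    fi_pairs: dict mapping (i, j), i < j, to a scalar FI value
              (hypothetical inputs for illustration).
    """
    best_score, best_sel = float("-inf"), None
    all_pairs = list(itertools.combinations(range(n_sensors), 2))
    for sel in itertools.combinations(all_pairs, n_pairs):
        used = [s for p in sel for s in p]
        if len(set(used)) < len(used):   # a sensor is shared: skip
            continue
        score = sum(fi_pairs[p] for p in sel)
        if score > best_score:
            best_score, best_sel = score, sel
    return best_sel, best_score
```

Even this toy version makes the abstract's point visible: evaluating the per-pair FI is trivial, but the number of disjoint pairings grows combinatorially, so the selection step is where the difficulty lies.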
Data compression ideas can be extended to assess the data quality across multiple sensors in order to manage the network of sensors and optimize the location accuracy subject to communication constraints. From an unconstrained-resources viewpoint it is desirable to use the complete set of deployed sensors; however, that generally results in an excessive data volume. Selecting a subset of sensors to participate in a sensing task is crucial to satisfying trade-offs between accuracy and time-line requirements. For emitter location it is well known that the geometry between the sensors and the target plays a key role in determining the location accuracy. Furthermore, the deployed sensors have differing data quality. Given these two factors, selecting the optimal subset of sensors is no trivial matter. We attack this problem through the use of a data quality measure based on Fisher information for a set of sensors, which we optimize via sensor selection and data compression.
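As a generic illustration of optimizing an FI-based quality measure by sensor selection (a sketch under assumed 2x2 per-sensor location FIMs; the authors' actual measure and optimizer are not reproduced here), a greedy D-optimality rule might look like:

```python
import numpy as np

def greedy_sensor_selection(fims, k):
    """Greedily pick k sensors maximizing the log-determinant of the
    summed 2x2 location FIMs (a common D-optimality surrogate)."""
    chosen, total = [], np.zeros((2, 2))
    remaining = set(range(len(fims)))
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for i in remaining:
            cand = total + fims[i]
            # tiny ridge keeps slogdet finite before the FIM is full rank
            _, logdet = np.linalg.slogdet(cand + 1e-12 * np.eye(2))
            if logdet > best_gain:
                best, best_gain = i, logdet
        chosen.append(best)
        total = total + fims[best]
        remaining.remove(best)
    return chosen, total
```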
KEYWORDS: Sensors, Sensor networks, Data compression, Data centers, Data communications, Data fusion, Algorithm development, Data analysis, Head, Quantization
Data compression methods have mostly focused on achieving a desired perceptual quality for multimedia data at a given number of bits. However, there has been interest over the last several decades in compression for communicating data to a remote location where the data is used to compute estimates. This paper traces the perspectives in the research literature on compression-for-estimation. We discuss how these perspectives can all be cast in the following form: the source emits a signal, possibly dependent on some unknown parameter(s); the i-th sensor receives the signal and compresses it for transmission to a central processing center, where it is used to make the estimate(s). The previous perspectives can be grouped as optimizing compression for the purpose of estimating either (i) the source signal or (ii) the source parameter(s). Early results focused on restricting the encoder to a scalar quantizer designed according to some optimization criterion. Later results considered more general compression structures, although most of those focus on establishing information-theoretic results and bounds. Recent results by the authors use operational rate-distortion methods to develop task-driven compression algorithms that allow trade-offs between the multiple estimation tasks for a given rate.
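As a baseline for the scalar-quantizer era of this literature, the classic Lloyd design is sketched below; it uses the MSE cost, whereas the estimation-oriented designs surveyed here replace that cost with an FI-based criterion. The code is illustrative and not drawn from any specific paper discussed.

```python
import numpy as np

def lloyd_quantizer(samples, levels, iters=50):
    """Design a scalar quantizer with Lloyd's algorithm (MSE cost).

    Estimation-driven designs swap the squared-error cost for a
    Fisher-information-based criterion; this is the MSE baseline.
    """
    # spread initial codepoints across the sample distribution
    codebook = np.quantile(samples, np.linspace(0.05, 0.95, levels))
    for _ in range(iters):
        # nearest-codepoint assignment, then centroid update
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        for k in range(levels):
            if np.any(idx == k):
                codebook[k] = samples[idx == k].mean()
    return np.sort(codebook)
```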
KEYWORDS: Sensor networks, Data compression, Sensors, Distortion, Error analysis, Data communications, Signal to noise ratio, Algorithm development, Energy efficiency, Image compression
This paper first discusses the need for data compression within sensor networks and argues that data compression is a fundamental tool for achieving trade-offs among three important sensor network parameters: energy efficiency, accuracy, and latency. Next, it discusses how to use Fisher information to design data compression algorithms that address the trade-offs inherent in accomplishing multiple estimation tasks within sensor networks. Results for specific examples demonstrate that such trade-offs can be made using optimization frameworks for the data compression algorithms.
We show that standard image compression algorithms are not suitable for compressing images used in correlation pattern recognition, since they aim at retaining image fidelity in terms of perceptual quality rather than preserving the spectrally significant information needed for pattern recognition. New compression algorithms for pattern recognition are therefore developed, based on modifying the standard compression algorithms to achieve higher compression ratios while simultaneously enhancing pattern recognition performance. This is done by emphasizing middle- and high-frequency components and discarding low-frequency components according to a newly developed distortion measure for compression. The operations of denoising, edge enhancement, and compression can be integrated into the encoding process of the proposed compression algorithms. Simulation results show the effectiveness of the proposed compression algorithms.
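A crude sketch of the frequency-emphasis idea (illustrative only; the paper's distortion measure and its integration into the encoder are more involved) is to band-pass the image spectrum before standard compression:

```python
import numpy as np

def emphasize_spectrum(img, f_lo=0.05, f_hi=0.45):
    """Zero out low spatial frequencies and keep a mid/high band,
    as a crude stand-in for a recognition-oriented distortion measure.
    The band edges are arbitrary illustrative values."""
    F = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    r = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)  # radial frequency
    mask = (r >= f_lo) & (r <= f_hi)                  # band-pass emphasis
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```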
KEYWORDS: Signal to noise ratio, Radar, Data compression, Distortion, Signal processing, Monte Carlo methods, Error analysis, Radar signal processing, Signal detection, Computer simulations
This paper ties together and extends several recent results we have presented. We previously showed: (i) the usefulness of non-MSE distortion criteria in data compression for time-difference-of-arrival (TDOA) emitter location (SPIE 2001 & 2002), and (ii) the ability to exploit redundancy between radar pulses in a joint TDOA/FDOA (frequency-difference-of-arrival) location scheme (SPIE 2001 & 2002). In (ii) we showed how to compress radar signals by gating around the detected pulses and then putting the pulses into the rows of a matrix which is then compressed through use of the SVD; this approach employed a purely MSE distortion criterion. An open question in this approach was: Is it possible to eliminate some of the pulses from the pulse matrix to increase the compression ratio without significantly sacrificing location accuracy?
We resolve this question by applying our proposed non-MSE distortion measure to the FDOA accuracy and finding the optimal set of pulses to remove from the pulse matrix. The removal of pulses is shown to have negligible impact on the FDOA accuracy, but it does degrade the TDOA accuracy relative to that achievable using SVD-based compression without pulse elimination. However, we demonstrate that the SVD method includes an inherent de-noising effect (common in SVD-based signal processing) that improves TDOA accuracy over the case of no compression processing; thus, the overall impact on TDOA/FDOA accuracy is negligible while providing compression ratios on the order of 100:1 for typical radar signals.
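One simple way to connect pulse removal to FDOA accuracy (a hedged sketch, not the paper's optimal selection): since FDOA accuracy scales inversely with the pulse train's RMS duration, greedily drop the pulses whose removal least reduces that duration.

```python
import numpy as np

def rms_duration(times, energies):
    """RMS (Gabor) duration of a pulse train, treating pulses as point
    masses at their arrival times (a simplifying assumption)."""
    w = energies / energies.sum()
    t0 = np.sum(w * times)                  # temporal centroid
    return np.sqrt(np.sum(w * (times - t0) ** 2))

def drop_pulses(times, energies, n_drop):
    """Greedily drop n_drop pulses whose removal least reduces the
    RMS duration of the remaining train."""
    keep = list(range(len(times)))
    for _ in range(n_drop):
        best, best_d = None, float("-inf")
        for i in keep:
            trial = [j for j in keep if j != i]
            d = rms_duration(times[trial], energies[trial])
            if d > best_d:
                best, best_d = i, d
        keep.remove(best)
    return keep
```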
KEYWORDS: Radar, Data compression, Computer programming, Signal to noise ratio, MATLAB, Transmitters, Time metrology, Standards development, Computer engineering, Electromagnetism
A high-performance compression method has previously been proposed, designed expressly for compressing intercepted radar pulse trains for the purpose of locating the transmitter. The method relies on gating the pulses, placing them into a pulse matrix, and then using the singular value decomposition (SVD) to compress the signal data.
This paper reformulates the pulse gating method and shows that it requires the solution of an integer linear programming problem; several standard methods for this problem are first considered. It is shown that the large number of constraints in the original formulation can be significantly reduced by replacing the constraint set with its convex hull; simple rules for identifying the convex hull are given. However, even with these reductions the execution time for these methods can be prohibitive at very large pulse counts; furthermore, these methods exhibit numerical precision and convergence problems as the number of pulses increases. Therefore, an efficient non-standard method for solving this integer optimization problem is developed by exploiting characteristics of the objective function. This method solves the pulse gating problem with short execution times that grow negligibly with increasing pulse counts.
The location of an emitter is estimated by intercepting its signal and sharing the data among several platforms to measure the time-difference-of-arrival (TDOA) and the frequency-difference-of-arrival (FDOA). A common compression approach is to use a rate-distortion criterion where distortion is taken to be the mean-square error (MSE) between the original and compressed versions of the signal. However, we show that this MSE-only approach is inappropriate for TDOA/FDOA estimation and then define a more appropriate, non-MSE distortion measure. This measure is based on the fact that in addition to the dependence on MSE, the TDOA accuracy also depends inversely on the signal's RMS (or Gabor) bandwidth and the FDOA accuracy also depends inversely on the signal's RMS (or Gabor) duration.
The form of this new measure must be optimized under the constraint of a specified budget on the total number of bits available for coding. We show that this optimization requires selecting which DFT cells to retain jointly with allocating bits to the selected cells. This joint selection/allocation is a challenging integer optimization problem that has not yet been solved exactly; we therefore consider three possible sub-optimal approaches and compare their performance.
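To illustrate the flavor of such sub-optimal approaches (this greedy rule is our own generic example, not one of the paper's three methods): repeatedly grant a bit to the DFT cell with the largest remaining weighted quantization error, using the rule of thumb that each added bit reduces quantization error variance by a factor of about four.

```python
import numpy as np

def allocate_bits(dft_mag, weights, total_bits):
    """Greedy joint selection/allocation over DFT cells.

    dft_mag: per-cell DFT magnitudes; weights: per-cell importance
    encoding a non-MSE criterion (both hypothetical inputs).
    Cells that never receive a bit are implicitly discarded.
    """
    gain = weights * dft_mag ** 2     # weighted error if cell uncoded
    bits = np.zeros(len(dft_mag), dtype=int)
    for _ in range(total_bits):
        k = np.argmax(gain)           # neediest cell gets the next bit
        bits[k] += 1
        gain[k] /= 4.0                # ~6 dB error reduction per bit
    return bits
```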
KEYWORDS: Signal to noise ratio, Radar, Prototyping, Signal processing, Chromium, Data compression, Interference (communication), Monte Carlo methods, Image compression, Fermium
An effective method for geolocation of a radar emitter is to intercept its signal at multiple platforms and share the data to allow measurement of the time-difference-of-arrival (TDOA) and the frequency-difference-of-arrival (FDOA). This requires effective data compression. For radar location we show that it is possible to exploit pulse-to-pulse redundancy. A compression method is developed that exploits the singular value decomposition (SVD) to compress the intercepted radar pulse train. This method consists of five steps: (i) pulse gating, (ii) pulse alignment, (iii) matrix formation, (iv) SVD-based rank reduction, and (v) encoding. Matrix formation places the aligned pulses into rows to form a matrix whose rank is close to one, and SVD truncation gives a low-rank approximation of that matrix. We show that (i) compression is maximized if the matrix is made to have two-thirds as many rows as columns and (ii) truncation to a rank-one matrix is feasible. We interpret this as extracting a prototype pulse trainlet. We express the maximum compression ratio in terms of the number of pulses and the number of samples per pulse and point out a particularly interesting and important characteristic: the compression ratio increases as the total number of signal samples increases. Theoretical and simulation results show that this approach provides compression ratios up to about 30:1 in practical signal scenarios.
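A minimal sketch of the rank-reduction core (steps iii and iv) follows; the compression-ratio count here covers raw sample storage only and ignores the alignment metadata and the encoding step (v), so it is an upper bound, not the paper's exact figure.

```python
import numpy as np

def rank1_compress(pulse_matrix):
    """Rank-1 SVD truncation of an aligned pulse matrix
    (M pulses x N samples per pulse)."""
    U, s, Vt = np.linalg.svd(pulse_matrix, full_matrices=False)
    u, sigma, v = U[:, 0], s[0], Vt[0, :]   # v acts as a prototype pulse
    approx = sigma * np.outer(u, v)         # rank-1 reconstruction
    M, N = pulse_matrix.shape
    cr = (M * N) / (M + N + 1)              # store only u, v, and sigma
    return approx, cr
```

The ratio MN/(M+N+1) makes the abstract's closing observation visible: for a fixed matrix shape, more total samples mean a larger numerator relative to the stored vectors, so the compression ratio grows with the signal size.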
KEYWORDS: Signal to noise ratio, Wavelets, Wavelet transforms, Quantization, Distortion, Signal processing, Data compression, Time-frequency analysis, Radar, Image compression
The location of an emitter is estimated by intercepting its signal and sharing the data among several platforms to measure the time-difference-of-arrival (TDOA) and the frequency-difference-of-arrival (FDOA). Doing this in a timely fashion requires effective data compression. A common compression approach is to use a rate-distortion criterion where distortion is taken to be the mean-square error (MSE) between the original and compressed versions of the signal. However, in this paper we show that this MSE-only approach is inappropriate for TDOA/FDOA estimation and then define a more appropriate, non-MSE distortion measure. This measure is based on the fact that in addition to the dependence on MSE, the TDOA accuracy also depends inversely on the signal's RMS (or Gabor) bandwidth and the FDOA accuracy also depends inversely on the signal's RMS (or Gabor) duration. We discuss how the wavelet transform is a natural choice to exploit this non-MSE criterion. These ideas are shown to be natural generalizations of our previously presented results showing how to determine the correct balance between quantization and decimation. We develop an MSE-based wavelet method and then incorporate the non-MSE error criterion. Simulations show the wavelet method provides significant compression ratios with negligible accuracy reduction. We also compare against methods that do not exploit time-frequency structure and find that the wavelet methods far outperform them.
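For reference, an MSE-style wavelet compression baseline using the PyWavelets package is sketched below; the paper's non-MSE criterion would instead weight coefficients by their contribution to the RMS bandwidth and duration, which this sketch does not do.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_compress(x, keep_frac=0.1, wavelet='db4', level=4):
    """Keep only the largest-magnitude wavelet coefficients and
    reconstruct (an MSE-style baseline; wavelet and level are
    arbitrary illustrative choices)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    k = max(1, int(keep_frac * flat.size))
    thresh = np.sort(flat)[-k]              # magnitude cutoff for top-k
    kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
    return pywt.waverec(kept, wavelet)
```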
KEYWORDS: Signal to noise ratio, Quantization, Data compression, Electronic filtering, Interference (communication), Optical filters, Image compression, Distortion, Electromagnetism, Signal processing
The location of an electromagnetic emitter is commonly estimated by intercepting its signal and then sharing the data among several platforms. Doing this in a timely fashion requires effective data compression. Previous data compression efforts have focused on minimizing the mean-square error (MSE) due to compression. However, this criterion is likely to fall short because it fails to exploit how the signal's structure impacts the parameter estimates. Because TDOA accuracy depends on the signal's RMS bandwidth, compression techniques that significantly reduce the amount of data while negligibly impacting the RMS bandwidth have great potential. We show that it is possible to exploit this idea by balancing the impacts of simple filtering/decimation and quantization, and we derive a criterion that determines an optimal balance between the amount of decimation and the level of quantization. This criterion is then used to show that a combination of decimation and quantization can meet requirements on data transfer time that cannot be met through quantization alone. Furthermore, when quantization-alone approaches can meet the data transfer time requirement, we demonstrate that the decimation/quantization approach can lead to better TDOA accuracies. Rate-distortion curves are plotted to show the effectiveness of the approach.
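The RMS (Gabor) bandwidth that drives this criterion can be estimated directly from the DFT of the sampled signal; a minimal sketch (with the spectral centroid removed, per the Gabor definition) is:

```python
import numpy as np

def rms_bandwidth(x, fs):
    """RMS (Gabor) bandwidth of a sampled signal, estimated via the DFT."""
    X = np.fft.fftshift(np.fft.fft(x))
    f = np.fft.fftshift(np.fft.fftfreq(len(x), 1 / fs))
    p = np.abs(X) ** 2
    p = p / p.sum()                       # normalize to a spectral pmf
    f0 = np.sum(f * p)                    # spectral centroid
    return np.sqrt(np.sum((f - f0) ** 2 * p))
```

A matching RMS duration, relevant to FDOA accuracy, follows by swapping the roles of time and frequency.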