The concept of algorithmic engineering in the context of parallel signal processing is introduced and discussed. The main points are illustrated by means of some fairly simple worked examples, most of which relate to the use of QR decomposition by square-root-free Givens rotations as applied to adaptive beamforming.
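As a minimal illustration of the computation these worked examples revolve around, the sketch below builds a QR decomposition from explicit Givens rotations. It uses the standard (square-root) form for clarity; the square-root-free variant referred to in the abstract removes the `hypot` normalization at the cost of carrying per-row scale factors.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qr_givens(A):
    """QR decomposition by plane (Givens) rotations; returns Q, R with A = Q @ R."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    Q = np.eye(m)
    for j in range(n):                    # zero out column j below the diagonal
        for i in range(m - 1, j, -1):     # eliminate from the bottom up
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
    return Q, R
```

Each rotation touches only two rows, which is what makes the method attractive for the triangular systolic arrays discussed throughout these papers.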
In this paper, we present new results that support the presence of a sea clutter attractor. In particular, we show that the sea clutter waveform is deterministic, in that it can be matched almost exactly by the output of a nonlinear neural network model whose number of degrees of freedom is fairly close to the dimension of the attractor. The results presented herein are based on real-life radar data collected with an instrumentation-quality research facility.
Adaptive filters for broadband beamforming are two-dimensional filters with one dimension being space and the other dimension being time. The filtering in the time dimension is a simple convolution, hence fast algorithms can exploit computational redundancy in this dimension. The filtering in the space dimension is an arbitrary linear combiner, and for reasons arising from various implementation considerations, it is desirable to use factorized estimation techniques in this dimension. In this paper, we present scalar implementations of multichannel fast Recursive Least-Squares algorithms in transversal filter form (so-called FTF algorithms). The point is that by processing the different channels sequentially, i.e. one at a time, the processing of any channel reduces to that of the single-channel algorithm. This sequential processing decomposes the multichannel algorithm into a set of intertwined single-channel algorithms. Geometrically, this corresponds to a modified Gram-Schmidt orthogonalization of multichannel error vectors. Algebraically, this technique corresponds to matrix triangularization of multichannel error covariance matrices and converts matrix operations into a regular set of scalar operations. Algorithm structures that are amenable to VLSI implementation on arrays of parallel processors naturally follow from our approach. Numerically, the resulting algorithm benefits from the advantages of triangularization techniques in block-processing, which are a well-known part of Kalman filtering expertise. Furthermore, recently introduced stabilization techniques for proper control of the propagation of numerical errors in the update recursions are also incorporated.
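The modified Gram-Schmidt orthogonalization invoked here can be sketched generically: each new vector is orthogonalized against the already-processed ones sequentially, the same one-channel-at-a-time pattern the abstract describes. This is the textbook procedure, not the paper's FTF recursions.

```python
import numpy as np

def mgs(V):
    """Modified Gram-Schmidt: orthonormalize the columns of V one at a time."""
    V = np.array(V, dtype=float)
    m, n = V.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = V[:, k].copy()
        for j in range(k):            # subtract projections one "channel" at a time
            R[j, k] = Q[:, j] @ v     # uses the *updated* v (modified, not classical, GS)
            v -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R
```

The algebraic counterpart is the triangular factor R: orthogonalizing the vectors is the same computation as triangularizing their Gram (covariance) matrix.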
The paper examines techniques whereby nonlinear adaptive processors can be introduced into the application of adaptive communications equalisation. This has traditionally been seen as an area restricted to the application of basically linear systems. It is shown that data equalisation may be formulated as an inherently nonlinear problem. This leads us to propose a nonlinear equaliser structure based on multi-layer perceptrons. Simulations are presented which verify the superior performance of this equaliser compared to linear and decision feedback equalisers.
Two new systolic architectures for square root covariance Kalman filtering are presented. Both utilise a triangular array for QR decomposition with Givens rotations but achieve the state update in different ways. The new architectures compare favourably with those recently published in the literature.
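A hedged sketch of the square-root-covariance idea the architectures implement: the Kalman time update can propagate a Cholesky factor of the covariance directly via a QR step, without ever forming the covariance itself. This is the generic array form, not either of the paper's specific architectures.

```python
import numpy as np

def sqrt_time_update(S, F, Gq):
    """Square-root covariance time update via QR.

    With P = S @ S.T and process noise Q = Gq @ Gq.T, the returned factor S1
    satisfies S1 @ S1.T = F @ P @ F.T + Q.  The pre-array is stacked and its
    triangular QR factor read off -- in hardware this QR is done with a
    triangular array of Givens rotation cells.
    """
    n = S.shape[0]
    pre = np.vstack([(F @ S).T, Gq.T])    # (2n x n) pre-array
    _, R = np.linalg.qr(pre)              # R.T @ R == pre.T @ pre
    return R[:n, :].T                     # lower-triangular factor of P_pred
```

Squaring the factor only at the end is what gives square-root filters their improved numerical behaviour over direct covariance recursions.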
A new lattice filter algorithm for adaptive filtering is presented. In common with other lattice algorithms for adaptive filtering, this algorithm only requires O(p) operations for the solution of a p-th order problem. The algorithm is derived from the QR-decomposition (QRD) based recursive least squares minimisation algorithm and hence is expected to have superior numerical properties compared with other fast algorithms. This algorithm contains within it a new algorithm for solving the least squares linear prediction problem. The algorithms are presented in two forms: one that involves taking square-roots and one that does not. Some preliminary computer simulation results are presented that indicate that the output residuals produced by the new, fast adaptive filtering algorithm are in good agreement with those from the more established, O(p^2) QRD recursive least squares minimisation algorithm.
The numerical accuracy and robustness of a systolic QR decomposition post-processor system which computes least squares combination coefficients are investigated. Two versions are compared, one using a Kalman closed-loop feedback arrangement, the other a theoretically equivalent open-loop system. The build-up of round-off error in the post-processor and the ability of the systems to compensate weight vector errors are assessed.
In this paper, we give an overview of a few recently obtained results regarding algorithms and systolic arrays for updating singular value decompositions. The Ordinary SVD as well as the Product SVD and the Quotient SVD will be discussed. The updating algorithms consist of an interlacing of QR-updatings and a Jacobi-type SVD algorithm applied to the triangular factor(s). At any time step an approximate decomposition is computed from a previous approximation, with a limited number of operations (O(n^2)). When combined with exponential weighting, these algorithms are seen to be highly applicable to tracking problems. Furthermore, they can elegantly be mapped onto systolic arrays, making use of slight modifications of well-known systolic implementations for the matrix-vector product, the QR-updating and the SVD.
A new, efficient two plane rotations (TPR) method for computing two-sided rotations involved in singular value decomposition (SVD) is presented. By exploiting the commutative properties of some special types of 2x2 matrices, we show that a two-sided rotation can be computed by only two plane rotations and a few additions. Moreover, if we use coordinate rotation digital computer (CORDIC) processors to implement the processing elements (PEs) of the SVD array given by Brent and Luk, the computational overhead of the diagonal PEs due to angle calculations can be avoided. The resulting SVD array has a homogeneous structure with identical diagonal and off-diagonal PEs.
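The commutativity argument can be illustrated directly for a single 2x2 matrix. The sketch below (variable names illustrative) splits A into a rotation-like part rho1*R(phi1) and a reflection-like part rho2*R(phi2)*diag(1,-1); the left and right rotation angles of the two-sided rotation are then just half-sums and half-differences of phi1 and phi2, with no iterative angle search.

```python
import numpy as np

def tpr_svd2(A):
    """Two-plane-rotations form of the 2x2 two-sided rotation (TPR sketch)."""
    a, b = A[0]
    c, d = A[1]
    p, q = (a + d) / 2, (c - b) / 2      # rotation-like component
    r, s = (a - d) / 2, (b + c) / 2      # reflection-like component
    phi1 = np.arctan2(q, p)
    phi2 = np.arctan2(s, r)
    theta_l = (phi2 + phi1) / 2          # left rotation angle
    theta_r = (phi2 - phi1) / 2          # right rotation angle
    rho1 = np.hypot(p, q)
    rho2 = np.hypot(r, s)
    # R(theta_l).T @ A @ R(theta_r) == diag(rho1 + rho2, rho1 - rho2)
    return theta_l, theta_r, rho1 + rho2, rho1 - rho2

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])
```

The singular values are the absolute values of the two diagonal entries; only two angle computations and a few additions are needed, which is the saving the abstract describes for the diagonal processors of the Brent-Luk array.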
We consider a problem pertaining to bearing estimation in unknown noise using the covariance differencing approach, and propose a linear array of processors which exhibits a linear speed-up with respect to a uniprocessor system. Our solution hinges on a new canonic matrix factorization which we term the hyperbolic singular value decomposition. The parallel algorithm for hyperbolic SVD based bearing estimation is an adaptation of a well known biorthogonalization technique developed by Hestenes. Parallel implementations of the algorithm are based on earlier works on one-sided Jacobi methods. It turns out that strategies for parallelization of Jacobi methods are equally well applicable for computing the hyperbolic singular value decomposition.
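The Hestenes biorthogonalization that the parallel algorithm adapts can be sketched serially (ordinary SVD, singular values only, for brevity): pairs of columns are rotated until all columns are mutually orthogonal, after which the column norms are the singular values.

```python
import numpy as np

def hestenes_svd(A, sweeps=12, tol=1e-14):
    """One-sided Jacobi (Hestenes) SVD sketch: returns the singular values."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    for _ in range(sweeps):
        rotated = False
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = U[:, p] @ U[:, q]
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue            # this pair is already orthogonal
                rotated = True
                # classic Jacobi angle that makes the column pair orthogonal
                tau = (app - aqq) / (2.0 * apq)
                t = np.sign(tau) / (abs(tau) + np.hypot(1.0, tau)) if tau else 1.0
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                U[:, [p, q]] = U[:, [p, q]] @ np.array([[c, -s], [s, c]])
        if not rotated:
            break
    return np.linalg.norm(U, axis=0)
```

Because each rotation touches only one pair of columns, disjoint pairs can be processed concurrently, which is exactly the property the parallelization strategies mentioned above exploit (and which carries over to the hyperbolic case).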
Current work on the implementation of system identification and control methods using singular value decompositions (SVD) on a systolic array of INMOS Transputer chips is discussed. The central computation required for both the system identification and control is a canonical variate analysis involving the computation of a generalized singular value decomposition (GSVD). Algorithms are developed for efficient computation of the GSVD on the Transputer systolic array. The GSVD algorithm is developed with particular attention given to the numerical stability, accuracy, and computational efficiency on the systolic array. The implementation of the software in Occam for an array of INMOS Transputers is discussed in terms of granularity, required memory, and computation time.
We develop a Jacobi-like scheme for computing the generalized Schur form of a regular pencil of matrices λB - A. The method starts with a preliminary triangularization of the matrix B and iteratively reduces A to triangular form, while maintaining B triangular. The scheme relies heavily on the technique of Stewart for computing the Schur form of an arbitrary matrix A. Like Stewart's algorithm, this one can be implemented efficiently in parallel on a square array of processors. A quantitative analysis of the convergence of the method is also presented. This explains some of its peculiarities, and at the same time yields further insight into Stewart's algorithm.
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.
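A minimal sketch of the classical circular CORDIC rotation that this paper generalizes: the rotation is decomposed into micro-rotations with tan(a_i) = 2^-i, so that each step needs only shifts and adds, and the accumulated scale factor is removed once at the end.

```python
import math

def cordic_rotate(x, y, theta, n=32):
    """Rotate (x, y) by theta using shift-and-add CORDIC micro-rotations.

    Valid for |theta| up to about 1.74 rad (the sum of the micro-angles).
    """
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for a in angles:
        K *= math.cos(a)               # scale of the un-normalized micro-rotations
    z = theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0    # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K
```

Run in "vectoring" mode (steering y toward zero instead of z), the same hardware evaluates the angle of a vector, which is why evaluation and application of a rotation take the same time on identical processors, as noted above.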
This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
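A noiseless, least-squares sketch of the ESPRIT idea for a uniform linear array (the paper analyzes the TLS variant; the array size, spacing, and angles below are illustrative): the signal subspace is estimated from the data, the rotational invariance between the two overlapping subarrays is fitted, and the DOAs are read off the eigenvalue phases.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 8, 0.5                        # sensors, spacing in wavelengths
angles = np.deg2rad([-20.0, 10.0])   # true DOAs
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(angles)))
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
X = A @ S                            # noiseless snapshots

Es = np.linalg.svd(X)[0][:, :2]      # signal subspace from the data SVD
# rotational invariance between the two (overlapping) subarrays, LS fit
Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
phases = np.angle(np.linalg.eigvals(Psi))
doas = np.rad2deg(np.arcsin(phases / (2 * np.pi * d)))
print(np.sort(doas))                 # ~ [-20.  10.]
```

No search over the parameter space and no stored array manifold is needed, which is the computational advantage the abstract emphasizes.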
The Constrained Total Least Squares (CTLS) method is a generalized least squares technique used to solve an overdetermined set of linear equations whose coefficients are noisy. The CTLS method is a natural extension of the Total Least Squares method to the case where the noise components of the coefficients are algebraically related. This paper presents a number of analytical properties of the CTLS method and solution, and sets forth their application to harmonic superresolution. The CTLS solution is formulated and derived as a constrained minimization problem. It is shown that the CTLS problem can be reduced to an unconstrained minimization problem over a smaller set of variables. A perturbation analysis of the CTLS solution, valid for small noise levels, is derived, and from it the root mean square error of the CTLS solution is obtained in closed form. It is shown that the CTLS problem is equivalent to a constrained parameter maximum-likelihood problem. A complex version of the Newton method for finding the minimum of a real function of several complex variables is derived and applied to find the CTLS solution. Finally, the CTLS technique is applied to frequency estimation of sinusoids and direction of arrival estimation of wavefronts impinging on a uniform linear array. Simulation results show that the CTLS method is more accurate than the Modified Forward Backward Linear Prediction technique of Tufts and Kumaresan. Also, the CTLS, MUSIC, and ROOT-MUSIC techniques are compared for angle-of-arrival estimation.
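For orientation, the unconstrained TLS baseline that CTLS generalizes has a closed-form SVD solution; the sketch below shows it (this is plain TLS, not the CTLS algorithm of the paper).

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ≈ b via the SVD of [A | b].

    The solution comes from the right singular vector of the smallest
    singular value of the augmented matrix, normalized so its last
    component is -1.
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                      # singular vector of the smallest sigma
    return -v[:n] / v[n]
```

CTLS replaces this unstructured perturbation of [A | b] with one respecting the algebraic relations among the noise components, which no longer has a one-shot SVD solution and leads to the Newton iteration described above.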
The problem is to recover stochastic processes from an unknown stationary linear transform. Our contribution is two-fold. First we focus on instantaneous mixtures: the observation e(t) is assumed to be expressible as a regular linear transform of the sources x(t), as e(t) = B_o x(t). The only assumption required is that the sources x_i(t) are mutually independent; no additional knowledge of their statistics is necessary, provided they are not normal. Extensions to convolutional mixing are then pointed out, namely cases where e(t) = A(t) * x(t), where A(t) has a rational transfer function. Significant improvements to the algorithm of Giannakis et al. for MA identification are included. Multivariate ARMA identification can be split into three successive estimation problems: AR identification, monic MA identification, and, lastly, estimation of B_o.
Many efficient signal subspace algorithms have been published for direction finding of narrow-band sources, e.g. MUSIC, ESPRIT. The difficulty of extending these methods to the wide-band case lies in the fact that the signal vectors from each source do not span a one-dimensional subspace. Therefore, all the signal-subspace based algorithms will fail in the case of wide-band sources. Recently, several approaches have been suggested to resolve this problem, e.g., the spectral-spatial approach, the coherent signal-subspace (CSS) method, and the modal decomposition algorithm. Each of these algorithms has one or several of the following shortcomings: high computational cost, impractical signal model assumptions, the requirement of an initial DOA estimate, complete knowledge of the array manifold, etc. In this paper, we present a new and more efficient method for estimating the DOA's of multiple wide-band sources via spectral smoothing. The proposed algorithm requires much less computational cost than the existing approaches and does not need an initial DOA estimate or a specific signal model (ARMA, identical spectrum). If the array of sensors satisfies the displacement invariance condition, ESPRIT can be used to eliminate the need for complete knowledge of the array manifold and to reduce the computational load and memory requirement. Under certain scenarios, analytical analysis and computer simulation show better performance of the proposed algorithm than that of the existing approaches mentioned above.
ESPRIT is a recently introduced algorithm for direction-of-arrival (DOA) and spectral estimation. Its principal advantage is that parameter estimates are obtained directly, without knowledge (and hence storage) of the parameter manifold and without computation or search of some spectral measure. This advantage is achieved by exploiting a certain invariance structure for the spatial or temporal signal samples; for example, in DOA estimation, it is assumed that the sensor array is composed of two identical subarrays separated by a known displacement vector. In many applications, arrays are constructed with invariances in more than one spatial direction. These arrays are typically sampled uniformly, so an additional invariance in time may also be present. In this paper, it is shown how to generalize the geometric concepts of ESPRIT to accommodate arrays with multidimensional invariance structure, and consequently to simultaneously estimate multiple parameters (e.g., azimuth, elevation, frequency, etc.) per source. The framework for this generalization is provided by the fact that ESPRIT is equivalent to a least-squares (LS) fit of the single invariance structure to a set of vectors which span the signal subspace (hence the term subspace-fitting). For ESPRIT, the LS fit requires no constraints. In the multi-dimensional case, however, constraints must be applied in order to obtain an "optimal" solution. We will primarily be concerned with suboptimal methods which approximate the optimal solution. Suboptimal here refers to the fact that the constraints are satisfied by a two-stage procedure rather than simultaneously with the LS fit; i.e., an unconstrained fit is performed first, after which the solutions are "projected" in some sense back onto the constraints.
An iterative parametric nonlinear programming procedure for estimating the azimuth and elevation angles of multiple sources incident on an array of sensors is developed. The sources can be perfectly coherent and the array's geometry is unrestricted. Parametric least squares error modeling plays a pivotal role in the estimation procedure.
The article addresses the application of the Hough and fast Hough (FHT) transforms for finding lines in sets of coordinate pairs. The backprojection of the Hough transform is known to be a strip. The backprojection of the FHT is shown to be a flared strip. The flare can be made insignificant by adjusting a scale factor. Overlapping the strips is useful to both algorithms. The Hough transform is shown to require overlap to guarantee finding a solution. The tradeoff between overlap and sampling is stated as a theorem. Though not required for the FHT, variable overlap can remove the variations of strip width with slope.
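A minimal sketch of the standard Hough transform for lines (the accumulator resolution here is illustrative, and the FHT's recursive subdivision is not shown): each point votes for every line through it in the (theta, rho) parameterization rho = x cos(theta) + y sin(theta), and collinear points pile their votes into one cell.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=101, rho_max=20.0):
    """Vote points into a (theta, rho) accumulator; peak = dominant line."""
    thetas = np.deg2rad(np.arange(n_theta))      # one bin per degree
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    return acc, thetas, rhos

# points on the line y = x  (theta = 135 deg, rho = 0)
pts = [(i, i) for i in range(10)]
acc, thetas, rhos = hough_lines(pts)
t, r = np.unravel_index(np.argmax(acc), acc.shape)
# peak lands at theta ≈ 135 deg, rho ≈ 0
```

The rounding step is where the backprojection strip of the abstract comes from: every (theta, rho) cell corresponds to a strip of points in the plane whose width depends on the bin sizes.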
This paper examines the effect of array geometry on signal detection and signal estimation using the Schmidt MUSIC algorithm and related techniques. Upper and lower bounds are found for the separation between signal and noise eigenvalues as a function of the eigenvalues of the beam response matrix and the source correlation matrix. It is shown that the condition number of the steering phase shift matrix is determined almost entirely by its smallest singular value. Limited numerical evaluations for planar arrays suggest that the smallest singular value is determined by arrival spacing relative to array aperture, and is insensitive to the details of the array geometry. For the special case when all arrivals are closely spaced, it is shown that the steering phase shift matrix is the product of a Vandermonde matrix depending on the arrival spacing with a second matrix determined by the array geometry.
In this paper, a unified implementation model for adaptive linearly-constrained beamformers is presented and several implementation structures which employ this model are compared. It is shown that adaptive linearly-constrained beamformers can be implemented efficiently using matrix decomposition methods.
This paper reviews state space model-based methods for signal processing applications. A state space framework is shown to provide a convenient tool for exposing and exploiting structure inherent in many model-based methods. It is also shown that there exist state space methods which are robust to noise in data and to numerical errors. From a computational point of view, the methods are often less complex than existing competing methods. Furthermore, they only involve matrix operations which are suitable for systolic/wavefront implementation.
The Wigner-Ville (W-V) distribution is a time-frequency representation that yields a highly accurate estimate of instantaneous frequency. It is related to the narrowband ambiguity function by an integral transform, and it can be used in a variety of detection and estimation problems. Convolution of signal and filter W-V distributions yields a spectrogram that could also be constructed with a bank of constant bandwidth filters. The wideband ambiguity function represents the Doppler effect with dilation or compression rather than with frequency shift as in the narrowband approximation. The "Q-distribution" is a modified W-V representation that is related to the wideband ambiguity function by an integral transform and can be used to construct a proportional bandwidth spectrogram corresponding to a bank of constant-Q filters. The Q-distribution is thus a wideband version of the W-V distribution. Properties of the Q-distribution indicate that it may prove useful for detection and parameter estimation as well as measurement of wideband scattering functions.
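One time-slice of a discrete W-V distribution can be sketched as an FFT over lag of the instantaneous autocorrelation kernel (this is one common discretization; note the factor-of-two lag scaling it introduces, which is divided out when reading off frequency):

```python
import numpy as np

def wvd_slice(x, n, L):
    """|FFT over lag m| of the Wigner kernel x[n+m] * conj(x[n-m]) at time n."""
    m = np.arange(L)
    k = x[n + m] * np.conj(x[n - m])
    return np.abs(np.fft.fft(k))

# analytic tone at normalized frequency 0.125
f0, N = 0.125, 64
x = np.exp(2j * np.pi * f0 * np.arange(N))
W = wvd_slice(x, n=32, L=32)
peak = np.argmax(W)
print(peak / (2 * 32))   # instantaneous frequency estimate: prints 0.125
```

For a pure tone the kernel oscillates at twice the signal frequency, so the peak bin divided by 2L recovers the instantaneous frequency, illustrating the accuracy claim in the abstract.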
We here develop a beamspace domain based Maximum Likelihood (BDML) estimation scheme for low-angle radar tracking. In the low-angle radar tracking scenario, echoes return from the target via a specular path, as well as by a direct path. The angular separation between the direct and specular arrivals is a fraction of a beamwidth, negating the use of classical monopulse bearing estimation based on sum and difference beams. In the new scheme, the element space snapshot vectors are first operated on by a Butler matrix composed of three orthogonal beamforming vectors. The conversion to 3-D beamspace domain via the Butler matrix beamformer is shown to facilitate a simple, closed-form BDML estimator for the direct and specular path angles. To avoid track breaking in cases where the two signals arrive at the center of the array either perfectly in-phase or 180° out-of-phase, the use of frequency diversity is incorporated. The coherent signal subspace concept of Wang and Kaveh is invoked as a means for retaining the computational simplicity of single frequency operation. With proper selection of the auxiliary frequencies, it is shown that perfect focusing at the reference frequency may be simply achieved at the outset, i.e., without iterating. Simulations are presented demonstrating the performance of the BDML estimation scheme employing frequency diversity in a low-angle radar tracking environment.
In this paper, a method of precise motion compensation for Inverse Synthetic Aperture Radar (ISAR) is presented. The overall target, rather than any particular target scatterer, is tracked to perform the motion compensation, so track loss arising from target scintillation or shadowing effects can be greatly reduced. It is therefore possible to process a large segment of data in azimuth so as to improve resolution. The results shown in this paper demonstrate that this method of motion compensation is effective and feasible for maneuvering targets.
This talk will survey the role of algebraic fields in general and of the Fourier transform in particular in the engineering problems of digital signal processing and of error control codes. The premise of the talk is that there are close ties between the subjects of digital signal processing and of error control codes. For a variety of reasons, the subject of error control codes has been highly algebraic. The use of algebraic methods has developed more slowly in digital signal processing. By surveying the computational procedures, we hope to stimulate new methods and applications. The plan of the talk is to survey the structure of useful algebraic fields, then examine the Fourier transform in an arbitrary field. Finally, we shall discuss the role of algebraic fields and of Fourier transforms in a variety of applications. From a computational point of view, the algorithms used in digital signal processors and in error correcting decoders are often quite similar. From an applications point of view it may be inefficient to separate these two tasks into distinct subsystems of an implementation. The future may very well see a blurring of the line between the traditional tasks of filtering and the traditional tasks of error control. Indeed, both of these tasks, broadly stated, involve the removal of noise from a received signal.
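As a small worked example of a Fourier transform in a finite field, the sketch below implements a length-8 number theoretic transform over GF(17); the modulus and root of unity are chosen purely for illustration (2 has order 8 mod 17, since 2^8 = 256 ≡ 1).

```python
p, w, N = 17, 2, 8          # prime modulus, N-th root of unity, length

def ntt(a, root=w):
    """Fourier transform over GF(p): the usual DFT formula, arithmetic mod p."""
    return [sum(a[n] * pow(root, k * n, p) for n in range(N)) % p
            for k in range(N)]

def intt(A):
    """Inverse transform: use w^-1 and 1/N, both via Fermat's little theorem."""
    inv_w = pow(w, p - 2, p)
    inv_N = pow(N, p - 2, p)
    return [(inv_N * x) % p for x in ntt(A, root=inv_w)]
```

The convolution theorem holds exactly (no round-off), which is the bridge between filtering and decoding the talk describes: pointwise multiplication of transforms computes a cyclic convolution mod p.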
Exact computations, performed with residues, occur in Number Theoretic Transforms and Residue Number System implementations. Once thought awkward to implement with standard logic circuits, the application of efficient small lookup tables, constructed with pipelined dynamic ROM's, allows very efficient construction of hardware ideally suited to residue operations. Linear DSP operations that are compute bound (require many arithmetic computations per input/output sample) are best suited for implementation with systolic arrays. For very high throughput operations, bit-level systolic arrays appear to be ideally suited to their implementation. The dynamic ROM's used for the residue computations are a perfect vehicle for implementing such operations at the bit level in a systolic architecture. This paper discusses VLSI architectures based on finite ring computations, using linear systolic arrays of small look-up tables. The advantage of this approach is that fixed-coefficient multiplication requires no increase in hardware over that required for general addition, and the resulting structure is homogeneous in that only one cell type is required to implement all of the processing functions. Several advantages accrue in the VLSI setting, particularly clock rate increase, ease of testability, and natural fault tolerance. Three different approaches to implementing the finite ring/field residue calculations are discussed. The first uses a bit-level steering mechanism around small dynamic ROM's; two applications of this technique to digital signal processing are outlined. The other two techniques are based on a redundant binary representation implemented with a pipelined adder configuration, and an iterative solution technique based on neural-like networks.
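A minimal software analogue of the lookup-table approach: residue arithmetic in which each modulus channel uses a precomputed addition and multiplication table, standing in for the small ROM's described above. The moduli (5, 7, 9) are an assumed illustrative choice, giving a dynamic range of 315.

```python
# Residue Number System arithmetic with one add table and one multiply table
# per modulus -- the software analogue of a small ROM per residue channel.

MODULI = (5, 7, 9)   # pairwise coprime (illustrative); range = 5*7*9 = 315

ADD = {m: [[(a + b) % m for b in range(m)] for a in range(m)] for m in MODULI}
MUL = {m: [[(a * b) % m for b in range(m)] for a in range(m)] for m in MODULI}

def to_rns(x):
    """Encode an integer as its tuple of residues."""
    return tuple(x % m for m in MODULI)

def rns_add(u, v):
    """Carry-free addition: each channel is an independent table lookup."""
    return tuple(ADD[m][a][b] for m, a, b in zip(MODULI, u, v))

def rns_mul(u, v):
    """Multiplication costs the same as addition: one lookup per channel."""
    return tuple(MUL[m][a][b] for m, a, b in zip(MODULI, u, v))

def from_rns(r):
    """Chinese remainder reconstruction (brute-force form, fine at this scale)."""
    M = 1
    for m in MODULI:
        M *= m
    return next(x for x in range(M) if to_rns(x) == r)

assert from_rns(rns_mul(to_rns(12), to_rns(23))) == 12 * 23   # 276 < 315
```

Note how `rns_mul` and `rns_add` have identical structure and cost, which is the homogeneity advantage the abstract emphasizes.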
A new method for estimating directions of arrival of plane waves onto arrays of sensors is proposed. The method is particularly well-suited to the case where the background noise field is non-isotropic, with arbitrary covariance matrix. The joint posterior probability density function of the signal parameters and the noise covariance matrix Σ is formed after a suitable non-informative prior p(Σ) is defined, and then the dependence on Σ is integrated out. The resulting estimator structure is then modified to substantially reduce the computational requirements. A geometric interpretation of the final objective function is given. Significantly improved performance over the MUSIC algorithm, particularly with regard to threshold, is observed.
We investigate the estimation of signal parameters, such as source locations, from sensor array measurements in the presence of partly unknown noise fields, and also the estimation of spectral parameters of the signals and of the noise. This problem has drawn much interest, and many parameter estimation methods for array data have been discussed in the literature. We concentrate on approximate maximum likelihood estimation in the frequency domain, assuming stationary array measurements. We review several concepts in which different asymptotic distributional properties of Fourier-transformed array data are applied. We investigate narrowband as well as wideband data. Asymptotic distributional results for the estimates are presented. Numerical procedures, approximations, and estimates from different model fits having, in some cases, the same asymptotic behavior as maximum likelihood estimates are discussed. Finally, we summarize the results from numerical experiments that show the behavior of different estimates in the single-frequency case for a small number of data snapshots.
This paper presents a hardware structure for a parallel implementation of an efficient least squares algorithm for the linear prediction problem presented in [11]. Experimental results on a Warp systolic multiprocessor computer show the actual speedup of the parallel implementation.
A new method for detecting the number of signals incident upon an array of sensors is described. This new method is based on finding upper thresholds for the observed eigenvalues of the covariance matrix of the array output. Theoretical analysis shows that the performance of the new method is flexible and can be controlled by a parameter t_α. By using a suitable value of t_α, the performance of the new method can be made superior to MDL, in that the threshold occurs at a lower value of SNR. Also, it can be made superior to the AIC, in that a lower error rate can be achieved at high SNR. Simulation results are included to confirm the analysis.
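The paper's t_α thresholds are not reproduced here, but the eigenvalue-based detection setting can be sketched with the standard MDL criterion as a baseline. The scenario parameters below (8 sensors, 500 snapshots, two well-separated 20 dB sources) are assumptions for illustration only.

```python
import numpy as np

# Detect the number of sources from the eigenvalues of the sample covariance,
# using the standard MDL criterion (the baseline the new method is compared to).

rng = np.random.default_rng(0)
m, N, snr = 8, 500, 100.0                      # sensors, snapshots, 20 dB SNR
angles = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))  # ULA steering
S = np.sqrt(snr / 2) * (rng.standard_normal((2, N))
                        + 1j * rng.standard_normal((2, N)))
noise = (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N))) / np.sqrt(2)
X = A @ S + noise

lam = np.sort(np.linalg.eigvalsh((X @ X.conj().T) / N))[::-1]    # descending

def mdl(k):
    """MDL cost for the hypothesis of k sources."""
    tail = lam[k:]
    # log(geometric mean / arithmetic mean) of the m-k smallest eigenvalues
    ratio = np.sum(np.log(tail)) / (m - k) - np.log(np.mean(tail))
    return -N * (m - k) * ratio + 0.5 * k * (2 * m - k) * np.log(N)

k_hat = min(range(m), key=mdl)
assert k_hat == 2   # the two sources are detected
```

The noise eigenvalues cluster near the noise power, so the data term is small exactly when k equals the true source count; the eigenvalue-threshold method in the paper refines the trade-off this penalty term makes.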
A procedure for estimating the number and direction of arrival of uncorrelated plane waves in the presence of an unknown colored background noise using a uniform linear array is developed, under the assumption that the response of the array to the noise has a rational power spectrum. The number and direction of arrival of the plane waves are estimated by rooting a polynomial formed from elements of the vectors spanning the null space of a Hankel matrix whose entries are the spatial correlation sequence of the sensor outputs. A method for separating the roots of the polynomial that are due to the plane waves from those that are due to the noise is presented.
ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g., gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant-rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omnidirectional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
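A minimal least-squares ESPRIT sketch for a uniform linear array, using the overlap between the first and last m-1 sensors as the single invariance. All scenario parameters are illustrative assumptions, the number of sources is taken as known, and the TLS refinement discussed in the paper is omitted.

```python
import numpy as np

# Least-squares ESPRIT on a half-wavelength ULA: the two (overlapping)
# subarrays are rows 0..m-2 and 1..m-1 of the same array.

rng = np.random.default_rng(1)
m, N = 10, 400                                     # sensors, snapshots (assumed)
true_deg = np.array([-5.0, 25.0])
mu = np.pi * np.sin(np.deg2rad(true_deg))          # spatial frequencies
A = np.exp(1j * np.outer(np.arange(m), mu))        # ULA steering matrix
S = 10 * (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N)))
X = A @ S + (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))

# Signal subspace: the d = 2 dominant eigenvectors of the sample covariance
R = X @ X.conj().T / N
_, V = np.linalg.eigh(R)
Es = V[:, -2:]

# Invariance equation Es[1:] ~= Es[:-1] @ Psi; eigenvalues of Psi are exp(j*mu)
Psi, *_ = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)
est_deg = np.sort(np.rad2deg(np.arcsin(np.angle(np.linalg.eigvals(Psi)) / np.pi)))

assert np.allclose(est_deg, np.sort(true_deg), atol=0.5)
```

The DOAs come from the eigenvalues of a small d-by-d matrix rather than from a search over a spatial spectrum, which is the source of the computational savings over MUSIC.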
The instantaneous frequency (IF) of a signal is a parameter which is of significant practical importance, since in many situations it corresponds to some physical phenomenon. This paper considers the definition of the IF, describes a number of ways of estimating it (along with a consideration of how closely the estimates are likely to correspond to physical reality), and presents two applications where IF estimation is used.
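One of the simplest IF estimators considered in such surveys is the derivative of the phase of the analytic signal. A sketch on a synthetic linear FM chirp, with all parameters assumed for illustration:

```python
import numpy as np

# Phase-difference IF estimate on a chirp whose true IF is f0 + k*t.

fs = 1000.0                        # sample rate (assumed)
t = np.arange(0, 1.0, 1 / fs)
f0, k = 50.0, 100.0                # chirp parameters: IF sweeps 50 -> 150 Hz
phase = 2 * np.pi * (f0 * t + 0.5 * k * t ** 2)
z = np.exp(1j * phase)             # analytic signal; for real data one would
                                   # first apply a Hilbert transform

# IF = (1 / 2*pi) * d(phase)/dt, estimated by differencing the unwrapped phase
inst_phase = np.unwrap(np.angle(z))
f_inst = np.gradient(inst_phase, t) / (2 * np.pi)

mid = slice(100, 900)              # ignore edge effects of the difference
assert np.allclose(f_inst[mid], f0 + k * t[mid], atol=1.0)
```

For a noise-free chirp this estimate tracks the true IF almost exactly; in noise, how well such estimates correspond to a physical frequency is exactly the question the paper examines.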
The standard deviation of instantaneous frequency (local bandwidth) is derived for the short-time Fourier transform. This is done by calculating the local moments of frequency for given time instants, using the spectrogram as a joint time-frequency distribution. By minimizing the local bandwidth, optimal windows are obtained. We show that amplitude modulation has a very significant effect on the optimum window. We also show that to obtain the highest possible resolution, divergent windows, which nonetheless lead to convergent short-time Fourier transforms, must sometimes be used. Series expansions for the estimated instantaneous frequency and local bandwidth are derived in terms of the derivatives of the phase. The theorem of Ville, Mandel and Fink, relating the global bandwidth to the excursions of the instantaneous frequency, is generalized to the short-time Fourier transform. The bandwidth and duration of the spectrogram are related to those of the signal and window, and a local uncertainty relationship for the spectrogram is derived. Also, the concept of local duration for a particular frequency is introduced and explicit formulas are given.
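The local moments in question can be sketched directly: treat one spectrogram column as a conditional density over frequency and take its mean (local mean frequency) and standard deviation (local bandwidth). The test signal, sample rate, and Hanning window below are illustrative assumptions.

```python
import numpy as np

# Local frequency moments from a single spectrogram column.

fs, n_win = 1000, 128
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * 200 * t)               # stationary 200 Hz tone

center = 1024
seg = x[center - n_win // 2: center + n_win // 2] * np.hanning(n_win)
spec = np.abs(np.fft.rfft(seg)) ** 2          # one column of the spectrogram
f = np.fft.rfftfreq(n_win, 1 / fs)

p = spec / spec.sum()                         # normalize to a density over f
f_mean = np.sum(f * p)                        # local mean frequency
bw = np.sqrt(np.sum((f - f_mean) ** 2 * p))   # local bandwidth (std deviation)

assert abs(f_mean - 200) < 5                  # mean frequency near the tone
assert bw > 0                                 # finite window -> nonzero spread
```

Even for a pure tone the local bandwidth is nonzero, because the window spreads the energy over several bins; minimizing that spread over window choice is the optimization the abstract describes.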
We suggest a methodology for the design of time-variant filters based on time-frequency representations such as the short-time Fourier transform and the Wigner distribution function.
Smoothing of the Wigner distribution introduces two-dimensional sequences, which brings the theory of two-dimensional filter analysis and design to bear on one-dimensional time-varying spectrum estimation. In this paper, the regions of support of the 2-D filters associated with the commonly used periodogram, averaged periodogram, and Wigner estimators are defined and used to express, through the singular value decomposition, the periodogram-based estimators as a linear combination of Pseudo Wigner estimators (PWE). The PWE associated with the maximum singular value of the eigenvector expansion of the periodogram is viewed as the closest approximation between the two estimators. Error bounds are derived and simulations are performed to demonstrate the effects of limiting the expansion to the dominant singular values, i.e., using a reduced-rank periodogram.
A Zak transform is defined which plays a role in wavelet analysis based on the affine group completely analogous to the role played by the Zak transform in wavelet analysis based on the Weyl-Heisenberg group. It is shown that the Zak transform is a major tool in the analysis and synthesis of non-stationary signals.
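For reference, the Weyl-Heisenberg-group (discrete) Zak transform can be sketched as an FFT over translates. The definition and factorization N = L*K below follow one common convention, which is not necessarily the paper's.

```python
import numpy as np

# Finite Zak transform of a length-N signal with N = L*K:
# Z[k, n] = sum_l x[n + l*K] * exp(-2j*pi*l*k / L).

def zak(x, K):
    N = len(x)
    L = N // K
    assert L * K == N
    # Row l of the reshape holds x[l*K : (l+1)*K], so column n collects the
    # translates x[n], x[n+K], ...; the FFT then runs over the l index.
    return np.fft.fft(x.reshape(L, K), axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
Z = zak(x, K=8)                                 # shape (L, K) = (8, 8)

# The Zak transform is unitary up to a constant: energy is preserved.
assert np.isclose(np.sum(np.abs(Z) ** 2) / 8, np.sum(x ** 2))
```

This energy-preservation property (a Parseval relation inherited from the FFT over translates) is what makes the transform usable for both analysis and synthesis.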
This paper provides a unifying perspective for several narrowband and wideband signal processing techniques. It considers narrowband ambiguity functions and Wigner-Ville distributions, together with the wideband ambiguity function and several proposed approaches to a wideband version of the Wigner-Ville distribution (WVD). A unifying perspective is provided by the methodology of unitary representations and ray representations of transformation groups.
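For reference, the discrete narrowband ambiguity function can be sketched directly from its definition, A(τ, ν) = Σ_t x[t+τ] x*[t] e^{-2πiνt}. The tone and the (τ, ν) grid below are illustrative.

```python
import numpy as np

# Discrete narrowband ambiguity function for non-negative lags.

N = 256
t = np.arange(N)
x = np.exp(2j * np.pi * 0.1 * t)       # complex tone, normalized frequency 0.1

def ambiguity(x, tau, nu):
    """A(tau, nu) for lag tau >= 0 (samples) and Doppler nu (cycles/sample)."""
    ts = np.arange(len(x) - tau)
    return np.sum(x[ts + tau] * np.conj(x[ts]) * np.exp(-2j * np.pi * nu * ts))

taus = range(0, 4)
nus = [0.0, 0.05, 0.1]
grid = np.array([[abs(ambiguity(x, tau, nu)) for nu in nus] for tau in taus])

# A single tone concentrates along the lag axis at Doppler nu = 0
assert all(np.argmax(row) == 0 for row in grid)
```

For a tone, x[t+τ]x*[t] is a constant in t, so the Doppler sum peaks sharply at ν = 0 for every lag; a Doppler-shifted copy would move that ridge, which is why the ambiguity function underlies radar delay-Doppler processing.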
We define the modulus squared of the wavelet transform to be the wavelet estimate, and express it in terms of any bilinear joint time-frequency distribution characterized by a product kernel.
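A direct-convolution sketch of this wavelet estimate (the scalogram), using complex Morlet wavelets. The scale grid, wavelet parameter w0 = 6, and test signal are assumed choices, and no joint-distribution kernel machinery is involved here.

```python
import numpy as np

# Scalogram: squared modulus of a continuous wavelet transform, computed by
# convolving the signal with scaled Morlet wavelets.

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
x = np.cos(2 * np.pi * 40 * t)                      # 40 Hz tone

def morlet(scale, w0=6.0):
    """Complex Morlet wavelet sampled at the signal rate, at one scale."""
    tau = np.arange(-4 * scale, 4 * scale, 1 / fs)
    return np.exp(1j * w0 * tau / scale) * np.exp(-0.5 * (tau / scale) ** 2)

# Scales chosen so the wavelet center frequencies are 20, 40, and 80 Hz
scales = np.array([6.0 / (2 * np.pi * f) for f in (20.0, 40.0, 80.0)])
scalogram = np.array([np.abs(np.convolve(x, morlet(s), mode='same')) ** 2
                      for s in scales])

# The row whose wavelet oscillates at the tone frequency should dominate
mean_energy = scalogram.mean(axis=1)
assert np.argmax(mean_energy) == 1
```

Expressing this quantity through a bilinear distribution with a product kernel, as the paper does, recovers the same surface without forming the wavelet coefficients explicitly.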
A new conceptualization of time-frequency (t-f) energy distributions is discussed in this paper. Many new t-f distributions with desirable properties may now be designed with relative ease by approaching the problem in terms of the ambiguity plane representation of the kernel. Careful attention to the design principles yields kernels which result in high resolution t-f distributions with a considerable reduction of the sometimes troublesome cross terms observed when using other distributions such as the Wigner Distribution (WD). When these new t-f distributions are applied to some common signals, fascinating new details emerge. Examples are provided for two acoustic transients: joint clicking and marine mammal sounds.
The proposed adaptive filter approach for cross-term elimination relies on the fact that the Wigner distribution (WD), when windowed in time (as computational and data-availability constraints require), does not change its spectral characteristics at the signal frequencies. The only time-varying components, as the window slides over stationary data, are those corresponding to the signal cross-terms, which exhibit a phase change. The constant and the time-varying behavior in the Pseudo Wigner Distribution (PWD) can thus be distinguished by adaptive filtering. In this paper, a frequency-domain least mean squares (LMS) algorithm is used to both track and suppress the cross-terms, and therefore allows a much better reading of the time-varying distribution of power over frequency. In the adaptive filtering approach, the desired input (primary) takes the PWD's for different data blocks, while the filter input (reference) is assigned unit values. Due to the nature of the primary and reference data, the frequency-domain filter adapts to the constant sinusoidal peak values. However, because of the uncorrelatedness, it fails to track the time-varying components in the desired signal, leaving the cross-terms to propagate to the output filter error. The cross-correlation between the constant values in the reference and the time-varying components in the desired signal can be controlled by the mechanism by which the PWD's are generated. Random sliding over the data may show improved performance compared to regular sliding with disjoint or overlapping blocks.
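The tracking mechanism can be sketched in a single spectral bin: an LMS weight driven by a constant unit reference converges to the constant auto-term level and leaves the fast-oscillating cross-term-like component in the error. The signal levels and step size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# One-bin LMS with a unit reference: the weight tracks the constant part of
# the PWD bin value across blocks; the oscillating part stays in the error.

mu = 0.05                                      # step size (assumed)
auto_term = 4.0                                # constant auto-term level
blocks = np.arange(400)
cross = 2.0 * np.cos(0.9 * np.pi * blocks)     # fast-oscillating cross-term

d = auto_term + cross                          # "desired": PWD bin per block
w = 0.0                                        # single LMS weight
out = np.empty_like(d)
for n, dn in enumerate(d):
    y = w * 1.0                                # filter output (unit reference)
    e = dn - y                                 # error carries what w can't track
    out[n] = y
    w += mu * e * 1.0                          # LMS update

# After convergence, the filter output hovers near the auto-term level,
# so the cross-term has been routed into the error signal.
assert abs(np.mean(out[200:]) - auto_term) < 0.5
```

With a constant reference, the LMS recursion reduces to a leaky average of the desired input, which is exactly why slow (constant) components are tracked and fast (cross-term) components are rejected.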