Synthetic aperture radar (SAR) collects samples of the 3D spatial Fourier transform on a two-dimensional manifold, corresponding to the backscatter data of wideband pulses launched from different look angles along an aperture. Traditional 3D reconstruction techniques aggregate and index phase history data collected through a set of sparse apertures in the spatial Fourier domain and apply an inverse 3D Fourier transform. We present a coordinate-based multi-layer perceptron (MLP) that enforces a smooth-surface prior. The 3D geometry is represented using a signed distance function. Since estimating a smooth surface from a sparse and noisy point cloud is an ill-posed problem, we regularize the surface estimate by sampling points from the implicit surface representation during the training step. We validate the model's reconstruction ability using the Civilian Vehicle Data Domes.
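The traditional pipeline that this abstract contrasts against can be sketched in a few lines: phase history samples in the spatial Fourier domain, followed by an inverse 3D FFT. The toy scene, grid size, and scatterer location below are hypothetical, and the sampling is taken as complete for simplicity:

```python
import numpy as np

# Toy scene: one point scatterer on a 16^3 voxel grid.
N = 16
scene = np.zeros((N, N, N), dtype=complex)
scene[5, 9, 3] = 1.0

# "Phase history": samples of the 3D spatial Fourier transform of the scene.
phase_history = np.fft.fftn(scene)

# Conventional reconstruction: inverse 3D FFT of the (here, fully sampled) data.
recon = np.fft.ifftn(phase_history)

# The scatterer is recovered at its original voxel.
peak = np.unravel_index(np.argmax(np.abs(recon)), recon.shape)
print(peak)  # (5, 9, 3)
```

In the sparse-aperture setting the abstract addresses, most of `phase_history` is unobserved, which is what makes the direct inverse FFT ill-posed and motivates the learned smooth-surface prior.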
We consider the task of estimating the scattering coefficients and locations of the scattering centers that exhibit limited azimuthal persistence for a wide-angle synthetic aperture radar (SAR) sensor operating in spotlight mode. We exploit the sparsity of the scattering centers in the spatial domain as well as the slow-varying structure of the scattering coefficients in the azimuth domain to solve the ill-posed linear inverse problem. Furthermore, we utilize this recovered model as a template for the task of target recognition and pose estimation. We also investigate the effects of missing pulses in the initial recovery step of the model on the performance of the proposed method for target recognition. We empirically establish that the recovered model can be used to estimate the target class and pose simultaneously for the case of missing measurements.
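The spatial-sparsity side of the recovery problem can be illustrated with a generic proximal-gradient (ISTA) solver for l1-regularized least squares; this is a stand-in sketch, not the paper's full method (it omits the azimuth-domain slowly varying structure), and the problem sizes, support, and regularization weight below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A @ x with sparse x (few scattering centers).
m, n = 40, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[7, 23, 51]] = [1.5, -2.0, 1.0]
y = A @ x_true

# ISTA: gradient step on the data fit, then soft thresholding for sparsity.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)
    x = x - step * g
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

support = np.flatnonzero(np.abs(x) > 0.5)
print(support)
```

The recovered support matches the true scattering-center locations; the paper's formulation additionally couples the coefficients across azimuth.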
We study the problem of target identification from Synthetic Aperture Radar (SAR) imagery. Target classification using SAR imagery is a challenging problem due to large variations of the target signature as the target aspect angle changes. Previous work on modeling wide-angle SAR imagery has shown that point features, extracted from scattering center locations, result in a high-dimensional feature vector that lies on a low-dimensional manifold. In this paper we use rich probabilistic models for these target manifolds to analyze classification performance as a function of signal-to-noise ratio (SNR) and bandwidth. We employ Mixture of Factor Analyzers (MoFA) models to approximate the target manifold locally, and use error bounds for the estimation and analysis of classification error performance. We compare our performance predictions with the empirical performance of practical classifiers using simulated wideband SAR signatures of civilian vehicles.
Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when the Doppler shift places moving-target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space-time adaptive processing (STAP) while constraining the down-link data rate to that of a single-antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the
cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing
are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.
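The core idea of the frequency division design, that a single receiver can separate several transmitters occupying disjoint sub-bands, can be sketched with a toy FFT demodulation; the signal length, number of transmitters, and tone placements below are hypothetical:

```python
import numpy as np

# Toy MISO separation: Nt transmitters share one receive channel by occupying
# disjoint frequency sub-bands; the receiver splits them with an FFT.
N, Nt = 256, 4
n = np.arange(N)
sub = N // Nt

# Each transmitter sends a tone inside its own sub-band (hypothetical bins).
tx_bins = [10, sub + 20, 2 * sub + 5, 3 * sub + 30]
rx = sum(np.exp(2j * np.pi * b * n / N) for b in tx_bins)

# Receiver: one FFT, then each sub-band is assigned to its transmitter.
spec = np.fft.fft(rx)
recovered = [int(np.argmax(np.abs(spec[t * sub:(t + 1) * sub]))) + t * sub
             for t in range(Nt)]
print(recovered)  # [10, 84, 133, 222]
```

Because the separation is a sub-band assignment on the single receive channel, the receiver data rate is unchanged relative to a single-antenna SAR system, which is the constraint the abstract emphasizes.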
We consider the problem of distributed sensing and detection using a network of sensor nodes, and the challenges
that arise in fusing disparate data. Multiple sensors make local inferences on the state of nature (e.g., the presence
of a signal), and those observations are then transmitted to a regional fusion center. The fusion center is tasked
to make improved decisions. We develop methods to optimize those decisions.
Interoperability between disparate sensor nodes can be addressed by combining similar types of parameters
(e.g., direction of arrival and location estimates to better infer location), albeit with varying qualities. As an
initial problem, we consider the case where each sensor makes a binary decision on the presence of a signal source
and the fusion node combines these to make a more accurate decision. We consider a lossy medium in which
signals undergo a range-dependent propagation loss. We determine local thresholds that optimize a performance
metric, including both constrained global detection performance and asymptotic error performance. We study
the effect of sensor node density on detection performance under a network load constraint. The asymptotic
performance metrics provide indicators of the amount of value that each sensor contributes to the fusion task.
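The fusion step for binary local decisions has a classical closed form (the Chair-Varshney rule): weight each sensor's vote by a log-likelihood ratio built from its detection and false-alarm probabilities. A minimal sketch, with hypothetical sensor qualities and equal priors:

```python
import numpy as np

# Chair-Varshney-style fusion of binary local decisions u_i, weighted by each
# sensor's (Pd, Pfa). All numeric values below are hypothetical illustrations.
pd = np.array([0.9, 0.8, 0.6])    # local detection probabilities
pfa = np.array([0.05, 0.1, 0.2])  # local false-alarm probabilities
u = np.array([1, 1, 0])           # decisions received at the fusion center

# Log-likelihood ratio of the decision vector under H1 vs H0.
llr = np.sum(u * np.log(pd / pfa) + (1 - u) * np.log((1 - pd) / (1 - pfa)))
decision = int(llr > 0.0)  # threshold 0 corresponds to equal priors
print(decision)  # 1
```

In the range-dependent-loss setting of the abstract, `pd` would itself vary with sensor-to-source distance, which is what couples the local thresholds to node density.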
We propose a single-receive, multiple-transmit channel imaging radar system that limits received data rate while
also providing spatial processing for improved detection of moving targets. A multi-input, single-output (MISO)
system uses orthogonal waveforms to separate spatial channels at the single receiver. The use of orthogonal
waveforms necessitates several modifications to both synthetic aperture radar imaging and adaptive space-time
beamforming. An orthogonal frequency-division transmit waveform scheme is proposed, and we derive the
attendant extensions to the standard backprojection and space-time beamforming algorithms. We demonstrate
imaging and moving target detection results using data from an airborne X-band system. We conclude with a
discussion of the clutter covariance matrix of the resulting space-time beamformer and a suggested waveform
scheduling scheme to minimize the rank of the observed clutter subspace.
The emergence of 3D imaging from multipass radar collections motivates the need for 3D autofocus. While
several effective methods exist to coherently align radar pulses for 2D image formation from a single elevation
pass, further methods are needed to appropriately align radar collection surfaces from pass to pass. We propose
one such method of 3D autofocus involving the optimization of a coherence factor metric for the dominant
scatterers in an image scene. This method is demonstrated using a diffuse target from a multipass collection of
circular SAR data.
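The coherence factor metric at the heart of the proposed autofocus can be sketched for a single dominant scatterer: the ratio of coherent-sum power to incoherent-sum power across passes is maximized when pass-to-pass phase errors are removed. The number of passes and the error model below are hypothetical, and phase conjugation is used as the closed-form maximizer for this single-scatterer toy case:

```python
import numpy as np

rng = np.random.default_rng(1)

# K elevation passes observe the same dominant scatterer, each with an
# unknown pass-to-pass phase error.
K = 8
true_err = rng.uniform(-np.pi, np.pi, K)
s = np.exp(1j * true_err)  # unit-amplitude samples of the dominant scatterer

def coherence_factor(x):
    # Coherent-sum power over incoherent-sum power; equals 1 when aligned.
    return np.abs(x.sum()) ** 2 / (len(x) * np.sum(np.abs(x) ** 2))

# For one scatterer, removing each pass's phase maximizes the coherence factor.
aligned = s * np.conj(s / np.abs(s))

print(coherence_factor(s), coherence_factor(aligned))
```

The full method optimizes this metric jointly over many dominant scatterers in the scene, where no closed form exists and numerical optimization is required.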
Typically in SAR imaging, there is insufficient data to form well-resolved three-dimensional (3D) images using
traditional Fourier image reconstruction; furthermore, scattering centers do not persist over wide-angles. In
this work, we examine 3D non-coherent wide-angle imaging on the GOTCHA Air Force Research Laboratory
(AFRL) data set; this data set consists of multipass complete circular aperture radar data from a scene at AFRL,
with each pass varying in elevation as a result of aircraft flight dynamics. We compare two algorithms capable
of forming well-resolved 3D images over this data set: regularized lp least-squares inversion, and non-uniform
multipass interferometric SAR (IFSAR).
We consider three-dimensional target reconstruction from SAR data collected on multiple complete circular apertures
at different elevation angles. The 3-D resolution of circular SAR systems is constrained by two factors: the
sparse sampling in elevation and the limited azimuthal persistence of the reflectors in the scene. Three dimensional
target reconstruction with multipass circular SAR data is further complicated by nonuniform elevation
spacing in real flight paths and non-constant elevation angle throughout the circular pass. In this paper we first
develop parametric spectral estimation methods that extend the standard IFSAR method of height estimation to
apertures at more than two elevation angles. Next, we show that linear interpolation of the phase history data
leads to unsatisfactory performance in 3-D reconstruction from nonuniformly sampled elevation passes. We then
present a new sparsity regularized interpolation algorithm to preprocess nonuniform elevation samples to create
a virtual uniform linear array geometry. We illustrate the performance of the proposed method using simulated
backscatter data.
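The multi-baseline extension of IFSAR height estimation can be illustrated with a toy spectral estimate: for a fixed image pixel, samples across uniformly spaced elevation passes form a complex sinusoid whose frequency encodes scatterer height. The wavenumber step, number of passes, and height below are hypothetical, and a zero-padded FFT stands in for the paper's parametric estimators:

```python
import numpy as np

# For one (x, y) pixel, M uniform elevation passes sample a sinusoid in the
# elevation wavenumber whose frequency is the scatterer height z.
M = 16
dk = 0.1          # elevation wavenumber step between passes (rad/m), hypothetical
z_true = 3.0      # scatterer height (m), hypothetical
k = dk * np.arange(M)
samples = np.exp(1j * k * z_true)

# Simple nonparametric spectral estimate: zero-padded FFT peak over heights.
Nfft = 1024
spec = np.abs(np.fft.fft(samples, Nfft))
z_hat = np.argmax(spec) * 2 * np.pi / (Nfft * dk)
print(round(z_hat, 2))
```

Standard two-pass IFSAR corresponds to M = 2, where the phase difference gives a single-frequency estimate; the nonuniform elevation spacing of real flight paths is what motivates the sparsity-regularized interpolation onto a virtual uniform array before a step like this one.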
KEYWORDS: Sensors, Sensor networks, Data modeling, Process modeling, Transmitters, Receivers, Radio propagation, Autoregressive models, Received signal strength, Statistical modeling
Sensor data generation is a key component of high fidelity design and testing of applications at scale. In
addition to its utility in validation of applications and network services, it provides a theoretical basis for the
design of algorithms for efficient sampling, compression and exfiltration of the sensor readings. Modeling of
the environmental processes that give rise to sensor readings is the core problem in physical sciences. Sensor
modeling for wireless sensor networks combines the physics of signal generation and propagation with models of
transducer saturation and fault models for hardware. In this paper we introduce a novel modeling technique
for constructing probabilistic models for censored sensor readings. The model is an extension of the Gaussian
process regression and applies to continuous valued readings subject to censoring. We illustrate the performance
of the proposed technique in modeling wireless propagation between nodes of a wireless sensor network. The
model can capture the non-isotropic nature of the propagation characteristics and utilizes the information from
the packet reception failures. We use a measured data set from the Kansei sensor network testbed using 802.15.4
radios.
In this paper we consider the problem of joint enhancement of multichannel Synthetic Aperture Radar (SAR)
data. Previous work by Cetin and Karl introduced nonquadratic regularization methods for image enhancement
using sparsity enforcing penalty terms. For multichannel data, independent enhancement of each channel is
shown to degrade the relative phase information across channels that is useful for 3D reconstruction. We thus
propose a method for joint enhancement of multichannel SAR data with joint sparsity constraints. We develop
both a gradient-based and a Lagrange-Newton-based method for solving the joint reconstruction problem, and
demonstrate the performance of the proposed methods on the IFSAR height extraction problem from multi-elevation
data.
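The contrast between independent and joint channel enhancement can be sketched with a group (l2,1) shrinkage operator: scaling all channels of a pixel by one real factor preserves the cross-channel relative phase that 3D reconstruction relies on, while zeroing weak pixels jointly. The values below are hypothetical, and this isolates just the shrinkage step of the larger optimization:

```python
import numpy as np

def group_shrink(X, lam):
    # X: (pixels, channels) complex. Shrink each pixel's channel vector
    # jointly: one real scale factor per pixel preserves relative phase.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * X

X = np.array([[1.0 + 1.0j, 1.0 - 1.0j],   # strong pixel, two channels
              [0.1 + 0.0j, 0.0 + 0.1j]])  # weak pixel, two channels
Y = group_shrink(X, 0.5)

# The surviving pixel keeps its cross-channel phase; the weak pixel is
# zeroed in both channels at once.
print(np.isclose(np.angle(Y[0, 0] / Y[0, 1]), np.angle(X[0, 0] / X[0, 1])),
      np.abs(Y[1]).max())
```

Independent per-channel soft thresholding, by contrast, can shrink the channels of one pixel by different amounts (or zero only one of them), which is the phase-degradation effect the joint formulation avoids.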
We study circular synthetic aperture radar (CSAR) systems collecting radar backscatter measurements over a
complete circular aperture of 360 degrees. This study is motivated by the GOTCHA CSAR data collection experiment
conducted by the Air Force Research Laboratory (AFRL). Circular SAR provides wide-angle information
about the anisotropic reflectivity of the scattering centers in the scene, and also provides three dimensional information
about the location of the scattering centers due to a non-planar collection geometry. Three-dimensional
imaging results with single-pass circular SAR data reveal that the 3D resolution of the system is poor due to
the limited persistence of the reflectors in the scene. We present results on polarimetric processing of CSAR
data and illustrate reasoning of three dimensional shape from multi-view layover using prior information about
target scattering mechanisms. Next, we discuss processing of multipass CSAR data and present volumetric
imaging results with IFSAR and three dimensional backprojection techniques on the GOTCHA data set. We
observe that the volumetric imaging with GOTCHA data is degraded by aliasing and high sidelobes due to
nonlinear flightpaths and sparse and unequal sampling in elevation. We conclude with a model based technique
that resolves target features and enhances the volumetric imagery by extrapolating the phase history data using
the estimated model.
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problems. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret, and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution, transforming each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
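The fusion-by-likelihood-map idea reduces to a pixelwise sum of per-sensor log-likelihood surfaces over a common spatial grid. A minimal sketch with hypothetical range-only sensors and noiseless readings:

```python
import numpy as np

# Each sensor converts its reading into a log-likelihood over a shared grid;
# fusion is a pixelwise sum, and the fused map peaks at the target.
# Sensor positions, target, and noise scale are hypothetical.
grid = np.stack(np.meshgrid(np.arange(20), np.arange(20), indexing="ij"), -1)
target = np.array([12.0, 7.0])
sensors = np.array([[0.0, 0.0], [19.0, 0.0], [0.0, 19.0]])
sigma = 0.5

fused = np.zeros((20, 20))
for s in sensors:
    r_meas = np.linalg.norm(target - s)          # noiseless range reading
    r_grid = np.linalg.norm(grid - s, axis=-1)   # range from s to every cell
    fused += -((r_meas - r_grid) ** 2) / (2 * sigma ** 2)  # Gaussian log-lik

peak = np.unravel_index(np.argmax(fused), fused.shape)
print(peak)  # (12, 7)
```

Because each modality contributes only an additive log-likelihood layer, heterogeneous sensors, imagery, and geographic priors all fuse through the same summation, which is the interoperability argument made above.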
Support Vector Regression is a well-established robust method for function estimation. The Support Vector Machine uses inner-product kernels between support vectors and the input vectors to transform nonlinear classification and regression problems into linear ones; the estimated function is a linear combination of the kernel function evaluated at the support vectors. In many applications, the number of these support vectors can be quite large, which increases the length of the prediction phase for large data sets. Here we study a technique for reducing the number of support vectors while achieving comparable function estimation accuracy. The method identifies support vectors that are close to the ε-tube and uses them to approximate the function estimate of the original algorithm.
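The reduction idea can be sketched generically: given a kernel expansion over many support vectors, keep a subset and refit its weights so the cheap expansion matches the full one. This sketch uses an every-kth subset as an illustrative stand-in for the paper's ε-tube-based selection rule; the kernel, sizes, and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Z, gamma=2.0):
    # Gaussian (RBF) kernel matrix between row-sets X and Z.
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

sv = rng.uniform(-1, 1, (40, 1))       # original support vectors
alpha = rng.standard_normal(40) * 0.5  # their expansion weights
Xq = np.linspace(-1, 1, 200)[:, None]  # evaluation points
f_full = rbf(Xq, sv) @ alpha           # full SVR-style kernel expansion

keep = sv[::4]                         # reduced set: 10 of 40 vectors
beta, *_ = np.linalg.lstsq(rbf(Xq, keep), f_full, rcond=None)
f_red = rbf(Xq, keep) @ beta           # reduced expansion, refit weights

err = np.max(np.abs(f_full - f_red))
print(err)
```

Prediction cost scales with the number of retained vectors, so a 4x reduction here directly cuts the per-query kernel evaluations by the same factor.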
Recently there has been a renewed interest in the notion of deploying large numbers of networked sensors for applications ranging from environmental monitoring to surveillance. In a typical scenario a number of sensors are distributed in a region of interest. Each sensor is equipped with sensing, processing and communication capabilities. The information gathered from the sensors can be used to detect, track and classify objects of interest. For many applications the sensors' locations are crucial in interpreting the data collected from them. Scalability requirements dictate sensor nodes that are inexpensive devices without dedicated localization hardware such as GPS. Therefore the network has to rely on information collected within the network to self-localize. In the literature a number of algorithms have been proposed for network localization that use measurements informative of range, angle, or proximity between nodes. Recent work by Patwari and Hero relies on sensor data without explicit range estimates; the assumption is that the correlation structure in the data is a monotone function of the intersensor distances. In this paper we propose a new method based on unsupervised learning techniques to extract location information from the sensor data itself. We consider a grid consisting of virtual nodes and fit the grid to the actual sensor network data using the method of self-organizing maps. Known sensor network geometry can then be used to rotate and scale the grid to a global coordinate system. Finally, we illustrate how the virtual nodes' location information can be used to track a target.
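A one-dimensional toy version of the self-organizing map fit conveys the mechanism: a chain of virtual nodes is repeatedly pulled toward data samples, with a neighborhood function keeping adjacent nodes close, so the chain spreads to cover the data. The data distribution, chain length, learning rate, and radius below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1D chain of virtual nodes fit to unlabeled scalar observations.
data = rng.uniform(0.0, 10.0, 500)   # stand-in for sensor-data features
nodes = rng.uniform(0.0, 10.0, 8)    # virtual node positions (the "grid")

lr, radius = 0.5, 2.0
for t, x in enumerate(data):
    w = np.argmin(np.abs(nodes - x))  # best-matching ("winner") node
    dist = np.abs(np.arange(8) - w)   # grid distance to the winner
    h = np.exp(-(dist ** 2) / (2 * radius ** 2))  # neighborhood weights
    decay = 1.0 - t / len(data)                   # decaying learning rate
    nodes += lr * decay * h * (x - nodes)

# After training the chain should spread over the data's support.
print(nodes.min(), nodes.max())
```

In the paper's setting the data live in a higher-dimensional feature space and the grid is two-dimensional; the subsequent rotation and scaling to a global coordinate system uses whatever absolute geometry is known.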
In this paper we discuss the design of sequential detection networks for nonparametric sequential analysis. We present a general probabilistic model for sequential detection problems where the sample size as well as the statistics of the sample can be varied. A general sequential detection network handles three decisions. First, the network decides whether to continue sampling or stop and make a final decision. Second, in the case of continued sampling the network chooses the source for the next sample. Third, once the sampling is concluded the network makes the final classification decision. We present a Q-learning method to train sequential detection networks through reinforcement learning and cross-entropy minimization on labeled data. As a special case we obtain networks that approximate the optimal parametric sequential probability ratio test. The performance of the proposed detection networks is compared to optimal tests using simulations.
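The parametric benchmark mentioned above, the sequential probability ratio test (SPRT), is easy to state concretely: accumulate the log-likelihood ratio one sample at a time and stop when it crosses either threshold. The Gaussian hypotheses and error rates below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# SPRT for H0: N(0,1) vs H1: N(1,1), with target error rates alpha, beta.
mu0, mu1, sigma = 0.0, 1.0, 1.0
alpha, beta = 0.01, 0.01
A = np.log((1 - beta) / alpha)   # upper (accept H1) threshold
B = np.log(beta / (1 - alpha))   # lower (accept H0) threshold

llr, n = 0.0, 0
while B < llr < A:
    x = rng.normal(mu1, sigma)   # truth: H1 generates the data
    llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
    n += 1

decision = "H1" if llr >= A else "H0"
print(decision, n)
```

The detection networks described above generalize this recursion: instead of a known likelihood ratio, a trained network decides whether to stop, which source to sample next, and which class to declare.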
KEYWORDS: Computer programming, Monte Carlo methods, Control systems, Device simulation, Optimization (mathematics), Systems modeling, Dynamical systems, Energy efficiency, Control systems design, Scientific programming
In this paper we consider the design of intelligent control policies for water distribution systems. The controller presented in this paper is based upon a hybrid system that utilizes dynamic programming, with rules as design constraints, to minimize average costs over a long time horizon under constraints on operation parameters. The method is very general and is reported here as a controller for a water distribution system. In the example presented we obtain a 12.5 percent reduction in energy usage over the optimal level-based control design. We present the guiding principles used in the design and the results for a simulated system that is representative of a typical water pumping station. The design is fully adaptable to changing operating conditions and has applicability to a wide range of scheduling problems.
Ultra wideband (UWB) radar is an emerging technology with potential for all-weather, remote sensing of objects obscured by foliage or buried underground. Multiple octaves of frequency coverage and 90 degrees or more of viewing angles across a synthesized aperture are used to obtain high spatial resolution mapping of scattering behavior. Additionally, fully polarimetric responses can be measured, providing a multichannel characterization of objects in a scene. However, the diversity in wavelength and viewing angle presents significant challenges for system engineering and data interpretation. In particular, the multichannel UWB system poses unique imaging challenges arising from the variation of the UWB antenna response. We present an overview of calibration techniques for polarimetric wideband imagery, and introduce an image domain calibration technique using calibration targets.
In this paper we introduce multi-channel techniques to compensate for effects of antenna shading and crosstalk in wideband, wide-angle full polarization radar imaging. We model the systems as a 2D integral operator that includes the transmit pulse function, receive and transmit antenna transfer functions, and response from scattering objects. Existing imaging algorithms provide an approximate inversion of this integral operator, without compensation for the effect of antenna transfer functions. Thus, standard processing results in image quality diminished by the inherent variation of the antenna response--in magnitude, phase and polarization--across a large band of frequencies and wide range of aspect angles. We propose three inversion techniques for this integral operator, to improve polarization purity and to achieve localized point spread functions. The first technique uses a local approximation to the system model, and propose a conceptually simple method for the inversion. The other two techniques propose inversion methods for the exact system model in different transform domains. The result is imagery with improved polarization purity and a more localized point spread function.
Polarimetric diversity can be exploited in synthetic aperture radar (SAR) for enhanced target detection and target description. Detection statistics and target features can be computed from either polarimetric imagery or parametric processing of SAR phase histories. We adopt an M-ary Bayes classification approach and derive Bayes-optimal decision rules for detection and description of scattering centers. Scattering centers are modeled as one of M canonical geometric types with unknown amplitude, phase and orientation angle; clutter is modeled as a spherically invariant random vector. For the Bayes-optimal decision rules, we provide a simple geometric interpretation and an efficient computational implementation. Moreover, we characterize the certainty of decisions by deriving an approximate posterior probability.
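The M-ary Bayes structure can be sketched in its simplest form: with equal priors and additive Gaussian noise, the decision rule is minimum distance to the M canonical templates, and normalized likelihoods give the posterior used to characterize decision certainty. The templates, noise level, and dimensions below are hypothetical (the paper's templates additionally carry unknown amplitude, phase, and orientation):

```python
import numpy as np

rng = np.random.default_rng(0)

# M = 4 hypothetical canonical responses; classify a noisy measurement.
templates = np.eye(4)
truth = 2
noise_std = 0.1
y = templates[truth] + noise_std * rng.standard_normal(4)

# Gaussian likelihoods -> equal-prior Bayes rule is minimum distance;
# normalizing the likelihoods gives approximate posterior probabilities.
d2 = ((y - templates) ** 2).sum(axis=1)
post = np.exp(-d2 / (2 * noise_std ** 2))
post /= post.sum()

label = int(np.argmax(post))
print(label, post[label])
```

The posterior vector `post` plays the role of the decision-certainty measure: a peaked posterior indicates a confident type assignment, a flat one flags ambiguous scattering.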