A hybrid weighted/interacting particle filter, the selectively resampling particle (SERP) filter, is used to detect and track an unknown number of independent targets on a one-dimensional "racetrack" domain. The targets evolve in a nonlinear manner. The observations model a sensor positioned above the racetrack. The observation data takes the form of a discretized image of the racetrack, in which each discrete segment has a value depending both upon the presence or absence of targets in the corresponding portion of the domain, and upon lognormal noise. The SERP filter provides a conditional distribution approximated by particle simulations. After each observation is processed, the SERP filter selectively resamples its particles in a pairwise fashion, based on their relative likelihood. We consider a reinforcement learning approach to controlling this resampling. We compare two different ways of applying the filter to the problem: the signal measure approach and the model selection approach. We present quantitative results on the ability of the filter to detect and track the targets under each technique. Comparisons are made between the signal measure and model selection approaches, and between the dynamic (reinforcement learning) and static resampling control techniques.
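The pairwise selective resampling idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the racetrack dynamics, the cell-intensity observation model, the resampling threshold `RATIO_THRESH`, and all parameter values are assumptions chosen for the example. Each particle carries a fixed number of hypothesized target positions; after reweighting, particles are matched in random pairs and the lower-likelihood particle of a pair is replaced by a copy of the higher one only when their weight ratio exceeds the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

TRACK_LEN = 1.0      # circumference of the 1-D "racetrack" (assumed)
N_CELLS = 20         # discretized image segments (assumed)
N_PARTICLES = 200
RATIO_THRESH = 4.0   # pairwise resampling trigger (assumed value)
OBS_SIGMA = 0.3      # lognormal noise scale (assumed)

def evolve(x, dt=0.1):
    """Nonlinear target motion on the circular track (illustrative dynamics)."""
    drift = 0.05 * (1.0 + np.sin(2 * np.pi * x))
    return (x + drift * dt + 0.02 * rng.standard_normal(x.shape)) % TRACK_LEN

def observe(x):
    """Discretized image: each cell's value reflects target presence,
    corrupted by multiplicative lognormal noise."""
    counts = np.histogram(x, bins=N_CELLS, range=(0, TRACK_LEN))[0]
    return (1.0 + counts) * rng.lognormal(0.0, OBS_SIGMA, N_CELLS)

def log_likelihood(x, y):
    """Lognormal log-likelihood of image y given one particle's targets x."""
    counts = np.histogram(x, bins=N_CELLS, range=(0, TRACK_LEN))[0]
    z = (np.log(y) - np.log(1.0 + counts)) / OBS_SIGMA
    return -0.5 * np.sum(z ** 2)

def serp_step(particles, weights, y):
    """One filter step: propagate, reweight, then selectively resample
    in randomly matched pairs based on relative likelihood."""
    particles = evolve(particles)
    logw = np.log(weights) + np.array([log_likelihood(p, y) for p in particles])
    logw -= logw.max()                      # stabilize before exponentiating
    weights = np.exp(logw)
    weights /= weights.sum()
    idx = rng.permutation(len(particles))   # random pairing
    for i, j in zip(idx[::2], idx[1::2]):
        hi, lo = (i, j) if weights[i] >= weights[j] else (j, i)
        if weights[hi] > RATIO_THRESH * weights[lo]:
            particles[lo] = particles[hi]   # duplicate the likelier particle
            weights[hi] = weights[lo] = 0.5 * (weights[hi] + weights[lo])
    weights /= weights.sum()
    return particles, weights
```

A dynamic control scheme would adjust `RATIO_THRESH` online (e.g. via reinforcement learning) instead of holding it fixed, which is the static case.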
In this paper, we discuss multi-target tracking for a submarine model based on incomplete observations. The submarine model is a weakly interacting stochastic dynamic system with several submarines in the underlying region. Observations are obtained at discrete times from a number of sonobuoys equipped with hydrophones, and consist of a nonlinear function of the current submarine locations corrupted by additive noise. We use filtering methods to find the best estimate of the submarine locations. Our signal is a measure-valued process, resulting in filtering equations that cannot be readily implemented. We develop a Markov chain approximation approach to solve the filtering equation for our model. The Markov chains are constructed by dividing the multi-target state space into cells, evolving particles within these cells, and employing a random time change. These approximations converge to the unnormalized conditional distribution of the signal process based on the back observations. Finally, we present simulation results obtained with the refining stochastic grid (REST) filter, which was developed from our Markov chain approximation method.
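The cell-and-random-time-change construction can be illustrated with a toy continuous-time Markov chain. This is a simplified sketch, not the REST filter itself: the domain, jump rates, and nearest-neighbor dynamics are assumptions for the example. Per-cell particle counts evolve by exponentially clocked jumps between neighboring cells, simulated event by event (a random-time-change / Gillespie-style simulation).

```python
import numpy as np

rng = np.random.default_rng(1)

N_CELLS = 10   # cells partitioning a periodic 1-D state space (assumed)
RATE = 1.0     # per-particle jump rate to a neighboring cell (assumed)

def evolve_counts(counts, t_end):
    """Evolve per-cell particle counts as a continuous-time Markov chain:
    each particle jumps to a random neighbor cell at rate RATE, simulated
    via exponential holding times (random time change)."""
    counts = counts.copy()
    t = 0.0
    while True:
        total_rate = RATE * counts.sum()       # superposition of all clocks
        if total_rate == 0:
            break
        t += rng.exponential(1.0 / total_rate) # time to the next jump
        if t >= t_end:
            break
        # choose the jumping particle's cell proportionally to its count
        src = rng.choice(N_CELLS, p=counts / counts.sum())
        dst = (src + rng.choice([-1, 1])) % N_CELLS  # periodic boundary
        counts[src] -= 1
        counts[dst] += 1
    return counts
```

In the full filter, the cell counts would also be reweighted by the observation likelihood at each observation time, and the grid would be refined where particle mass concentrates.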