Kobus Barnard, Andrew Connolly, Larry Denneau, Alon Efrat, Tommy Grav, Jim Heasley, Robert Jedicke, Jeremy Kubica, Bongki Moon, Scott Morris, Praveen Rao
We describe a proposed architecture for the Large Synoptic Survey Telescope (LSST) moving object processing pipeline based on a similar system under development for the Pan-STARRS project. This pipeline is responsible for discovering fast-moving objects such as asteroids, updating information about them, generating appropriate alerts, and supporting queries about moving objects. Of particular interest are potentially hazardous asteroids (PHAs).
We consider the system as being composed of two interacting components. First, candidate linkages corresponding to moving objects are found by linking short sequences of detections ("tracklets"). To achieve this in reasonable time we have developed specialized data structures and algorithms that efficiently evaluate the possibilities using quadratic fits to the detections over modest time scales.
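The quadratic-fit linking step can be illustrated with a minimal sketch. This is not the pipeline's actual implementation; the motion model, the exact three-point fit, and the `tol` gating threshold are all illustrative assumptions:

```python
# Hypothetical sketch of quadratic tracklet linking: fit x(t) = a + b*t + c*t^2
# to three detections in one coordinate, then accept a candidate fourth
# detection only if it lands near the prediction. Threshold is a placeholder.

def quad_coeffs(ts, xs):
    """Fit an exact quadratic through three (time, position) detections,
    using the Lagrange basis collected into polynomial coefficients."""
    t0, t1, t2 = ts
    x0, x1, x2 = xs

    def basis(ta, tb, tc, xa):
        d = (ta - tb) * (ta - tc)
        return (xa * tb * tc / d, -xa * (tb + tc) / d, xa / d)

    a = b = c = 0.0
    for (ai, bi, ci) in (basis(t0, t1, t2, x0),
                         basis(t1, t0, t2, x1),
                         basis(t2, t0, t1, x2)):
        a += ai
        b += bi
        c += ci
    return a, b, c

def predict(coeffs, t):
    a, b, c = coeffs
    return a + b * t + c * t * t

def gate(coeffs, t, x, tol=0.01):
    """Accept a new detection if it lies within tol of the prediction."""
    return abs(predict(coeffs, t) - x) <= tol
```

In a real survey the fit would be done in both sky coordinates, with a tolerance derived from astrometric uncertainty rather than a fixed constant.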
For the second component we take a Bayesian approach to validating, refining, and merging linkages over time. Thus new detections increase our belief that an orbit is correct and contribute to better orbital parameters. Conversely, missed expected detections reduce the probability that the orbit exists. Finally, new candidate linkages are confirmed or refuted based on previous images.
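The belief-update logic described above can be sketched as a simple odds-ratio update. The likelihood ratios below are made-up placeholders, not values from the pipeline; in practice they would come from detection efficiency and false-alarm models:

```python
# Illustrative Bayesian update of the belief that a candidate orbit is real.
# A found detection multiplies the prior odds by a ratio > 1; a missed
# expected detection multiplies them by a ratio < 1.

def update_belief(p, detected, lr_hit=5.0, lr_miss=0.3):
    """Return the posterior probability that the orbit exists.

    p        -- prior probability the orbit is real
    detected -- True if the expected detection was found in the new image
    lr_hit   -- assumed likelihood ratio for a confirmed detection
    lr_miss  -- assumed likelihood ratio for a missed detection
    """
    odds = p / (1.0 - p)
    odds *= lr_hit if detected else lr_miss
    return odds / (1.0 + odds)
```

Repeated application over a sequence of images then accumulates evidence for or against each candidate linkage, matching the confirm-or-refute behavior the abstract describes.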
In order to assign new detections to existing orbits we propose bipartite graph matching to find a maximum-likelihood assignment subject to the constraint that each detection matches at most one orbit and vice versa. We describe how to construct this matching process to properly handle false detections and missed detections.
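A toy version of this assignment problem, with an explicit "false detection" option, can be solved by brute force at sketch scale. The scores and the `log_false` penalty are hypothetical; a real system would use a proper bipartite matching algorithm rather than recursion over all assignments:

```python
# Toy maximum-likelihood assignment of detections to orbits. Each detection
# matches at most one orbit (and vice versa) or is declared a false
# detection. scores[i][j] is the log-likelihood that detection i belongs
# to orbit j; log_false is the log-likelihood of a spurious detection.

def best_assignment(scores, log_false):
    """Return (total log-likelihood, assignment), where assignment[i] is
    the orbit index for detection i or None for a false detection."""
    n_det = len(scores)

    def recurse(i, used):
        if i == n_det:
            return 0.0, []
        # Option 1: detection i is a false detection.
        sub, sub_asn = recurse(i + 1, used)
        best, asn = sub + log_false, [None] + sub_asn
        # Option 2: assign detection i to some unused orbit j.
        for j in range(len(scores[i])):
            if j in used:
                continue
            sub, sub_asn = recurse(i + 1, used | {j})
            if scores[i][j] + sub > best:
                best, asn = scores[i][j] + sub, [j] + sub_asn
        return best, asn

    return recurse(0, frozenset())
```

Missed detections fall out of the same structure: an orbit left unmatched simply contributes no detection term, and its belief would be downweighted by the update step above.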
In this paper we examine new data structures and algorithms for efficient and accurate gating and identification of potential track/observation associations. Specifically, we focus on the problem of continuous timed data, where observations arrive over a range of time and each observation may have a unique time stamp. For example, the data may be a continuous stream of observations or consist of many small observed subregions. This contrasts with previous work in accelerating this task, which largely assumes that observations can be treated as arriving in batches at discrete time steps. We show that it is possible to adapt established techniques to this modified task and introduce a novel data structure for tractably dealing with very large sets of tracks. Empirically we show that these data structures provide a significant benefit in both decreased computational cost and increased accuracy when contrasted with treating the observations as if they occurred at discrete time steps.
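The distinction between batched and continuously timed observations can be made concrete with a small sketch. The linear motion model here is a deliberately simplified stand-in for a full orbit model, and the tolerance is an assumption:

```python
# Sketch of gating with continuous timed data: rather than snapping
# observations to a shared batch epoch, evaluate each track's motion model
# at every observation's own timestamp.

def gate_continuous(tracks, observations, tol=0.05):
    """tracks: list of (a, b) toy linear models x(t) = a + b*t.
    observations: list of (t, x) pairs, each with its own timestamp.
    Returns candidate (track index, observation index) associations."""
    pairs = []
    for ti, (a, b) in enumerate(tracks):
        for oi, (t, x) in enumerate(observations):
            if abs((a + b * t) - x) <= tol:
                pairs.append((ti, oi))
    return pairs
```

The data structures in the paper exist precisely to avoid this all-pairs loop; the sketch only shows why per-observation timestamps change the gating computation relative to the discrete-time-step assumption.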