Open Access
Re-establish the time-order across sensors of different modalities
Abstract
Modern cameras can crop passengers' faces into boxes in 0.04 s per frame, in parallel, without time stamps. Unfortunately, this produces randomly stored boxes with no tracking capability, and one can no longer meet the 5 W's challenge: "who speaks what, where, and when." We develop a time-order reconstruction methodology that sorts the boxes as follows. (i) Morphological image preprocessing overcomes facial changes by exploiting the peripheral invariance of the human visual system when it focuses on a maximally overlapping central region. (ii) Replacing the desired output of the Wiener matched filter with an averaged but blurred long exposure, one can select the best-matched sharp short exposures, called the anchor faces β. (iii) Time-order neighborhood chaining is performed by an iterative self-affirmation logic that demands a mutually agreed-upon minimum distance: the two nearest neighbors of β, namely face A and face C, must also count β among their own two nearest neighbors. The reconstruction procedure mathematically amounts to a product of two triple correlation functions sharing an intermediate state. We have thus demonstrated that the recovered time order helps us associate a video submanifold with the acoustic manifold, solving the 5 W's challenge.
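
As a rough illustration of steps (ii) and (iii) above, the following Python sketch (not the authors' implementation) selects an anchor face by correlating sharp crops against a blurred ensemble average, which stands in for the abstract's matched-filter "desired output," and then applies the mutual nearest-neighbor check used for neighborhood chaining. All function names, array shapes, and the use of Euclidean distance are assumptions for illustration only.

# A minimal sketch (not the authors' implementation) of two steps from the abstract:
# (ii) anchor-face selection by matching sharp crops against a blurred average, and
# (iii) the mutual nearest-neighbor ("self-affirmation") check used for chaining.
# Function names, shapes, and the Euclidean metric are illustrative assumptions.

import numpy as np

def select_anchor(crops):
    # Normalize each face crop and correlate it with the ensemble average,
    # which plays the role of the blurred "long exposure" desired output.
    stack = np.stack([c / (np.linalg.norm(c) + 1e-12) for c in crops])
    average = stack.mean(axis=0)
    scores = stack.reshape(len(crops), -1) @ average.ravel()
    return int(np.argmax(scores))  # index of the anchor face beta

def mutual_nearest_neighbors(crops, beta):
    # Keep a candidate neighbor of beta only if beta is also one of the
    # candidate's own two nearest neighbors (mutually agreed minimum distance).
    flat = np.stack([c.ravel() for c in crops])
    d_beta = np.linalg.norm(flat - flat[beta], axis=1)
    d_beta[beta] = np.inf
    candidates = np.argsort(d_beta)[:2]  # faces A and C
    confirmed = []
    for idx in candidates:
        d_idx = np.linalg.norm(flat - flat[idx], axis=1)
        d_idx[idx] = np.inf
        if beta in np.argsort(d_idx)[:2]:
            confirmed.append(int(idx))
    return confirmed

# Usage with random stand-in crops (real inputs would be preprocessed face boxes):
crops = [np.random.rand(32, 32) for _ in range(20)]
beta = select_anchor(crops)
print(beta, mutual_nearest_neighbors(crops, beta))

Correlation with the blurred average favors sharp, well-centered crops, while the mutual-agreement test avoids chaining faces whose proximity is only one-sided.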
©(2011) Society of Photo-Optical Instrumentation Engineers (SPIE)
Ming-Kai Hsu, Ting N. Lee, and Harold Szu "Re-establish the time-order across sensors of different modalities," Optical Engineering 50(4), 047002 (1 April 2011). https://doi.org/10.1117/1.3562322
Published: 1 April 2011
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS: Video, Video surveillance, Sensors, Surveillance, Cameras, Reconstruction algorithms, Algorithm development
