Presentation + Paper
30 April 2018
Vehicle tracking in full motion video using the progressively expanded neural network (PENNet) tracker
Evan Krieger, Theus Aspiras, Vijayan K. Asari, Kevin Krucki, Bryce Wauligman, Yakov Diskin, Karl Salva
Abstract
Object trackers for full-motion video (FMV) need to handle object occlusions (partial and short-term full), rotation, scaling, illumination changes, complex background variations, and perspective variations. Unlike traditional deep learning trackers that require extensive training time, the proposed Progressively Expanded Neural Network (PENNet) tracker utilizes a modified variant of the extreme learning machine that incorporates polynomial expansion and state-preserving methodologies. This significantly reduces the training time required for online training on the target object. The proposed algorithm is evaluated on the DARPA Video Verification of Identity (VIVID) dataset, in which the selected high-value targets (HVTs) are vehicles.
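As context for the abstract, the sketch below illustrates the general extreme learning machine (ELM) idea that PENNet builds on: the input is expanded with polynomial terms, the hidden-layer weights are random and fixed, and only the output weights are solved for in closed form, which is why this style of training is fast enough for online use in a tracker. This is a minimal illustration under stated assumptions, not the authors' PENNet implementation; the class SimpleELM, the function polynomial_expand, the toy patch features, and all parameter values are hypothetical, and the paper's progressive expansion and state-preservation details are not reproduced here.

```python
# Illustrative sketch only: a basic extreme learning machine (ELM) with a
# polynomial input expansion. NOT the authors' PENNet tracker; shown only to
# convey why ELM-style training avoids iterative back-propagation.
import numpy as np

def polynomial_expand(X, degree=2):
    """Append element-wise polynomial terms x, x^2, ..., x^degree."""
    return np.hstack([X ** d for d in range(1, degree + 1)])

class SimpleELM:
    def __init__(self, n_hidden=256, degree=2, reg=1e-3, seed=0):
        self.n_hidden = n_hidden
        self.degree = degree
        self.reg = reg              # ridge regularization for numerical stability
        self.rng = np.random.default_rng(seed)
        self.W_in = None            # random, fixed input weights
        self.b = None
        self.W_out = None           # learned output weights

    def _hidden(self, X):
        Z = polynomial_expand(X, self.degree)
        if self.W_in is None:
            self.W_in = self.rng.standard_normal((Z.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
        return np.tanh(Z @ self.W_in + self.b)

    def fit(self, X, y):
        # Output weights come from a single regularized least-squares solve,
        # which is what makes online (re)training of a target model fast.
        H = self._hidden(X)
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.W_out = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.W_out

# Toy usage: score image-patch features as target (1) vs. background (0).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    patches = rng.standard_normal((200, 64))    # hypothetical patch features
    labels = (patches[:, 0] > 0).astype(float)  # hypothetical labels
    elm = SimpleELM().fit(patches, labels)
    print(elm.predict(patches[:5]))
```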
Conference Presentation
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Evan Krieger, Theus Aspiras, Vijayan K. Asari, Kevin Krucki, Bryce Wauligman, Yakov Diskin, and Karl Salva "Vehicle tracking in full motion video using the progressively expanded neural network (PENNet) tracker", Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490I (30 April 2018); https://doi.org/10.1117/12.2305391
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Neural networks
RGB color model
Unmanned aerial vehicles
Video
Detection and tracking algorithms
Optical tracking
Electronic filtering