Maneuverability hazard detection and localization in low-altitude UAS imagery
Presentation + Paper, 21 April 2020
Abstract
Object detection and localization is an important problem in computer vision and remote sensing. Although many techniques have been presented in recent years, the You Only Look Once (YOLO) family of architectures has gained popularity for its ability to perform real-time object localization while achieving remarkable detection scores in ground-based applications. Here, we present methods and results for maneuverability hazard detection and localization in low-altitude unmanned aerial systems (UAS) imagery. Imagery is captured over a variety of flight routes and altitudes, then analyzed with modern deep learning techniques to discover objects such as civilian and military vehicles, barriers, and related hindrances to navigating cluttered semi-urban environments. We report findings for the deep learning architectures under a variety of training and validation parameters, including pre-trained weights from benchmark public datasets as well as training on a custom, mission-relevant dataset provided by the U.S. Army ERDC.
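The detection and localization quality discussed in the abstract is conventionally scored with intersection-over-union (IoU) between a predicted bounding box and ground truth; a detection typically counts as correct when IoU exceeds a threshold such as 0.5. A minimal sketch of the metric (illustrative only, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection rectangle; area is zero when the boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two identical boxes score 1.0, disjoint boxes score 0.0, and a box shifted by half its width against the original scores 1/3.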
Conference Presentation
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
J. Alex Hurt, David Huangal, Jeffrey Dale, Trevor M. Bajkowski, James M. Keller, Grant J. Scott, and Stanton R. Price "Maneuverability hazard detection and localization in low-altitude UAS imagery", Proc. SPIE 11413, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 114131K (21 April 2020); https://doi.org/10.1117/12.2557609
PROCEEDINGS
10 PAGES + PRESENTATION
