7 October 2019 Augmentation techniques for video surveillance in the visible and thermal spectral range
In intelligent video surveillance, cameras record image sequences during day and night. This commonly demands different sensors, and it is not unusual to combine them to achieve better performance. We focus on the case in which a long-wave infrared camera records continuously, a second camera additionally records in the visible spectral range during daytime, and an intelligent algorithm supervises the captured imagery. More precisely, our task is multispectral CNN-based object detection. At first glance, images from the visible spectral range differ from thermal infrared ones in that they contain color and distinct texture information on the one hand, but no information about the thermal radiation emitted by objects on the other. Although color can provide valuable information for classification tasks, effects such as varying illumination and the specific characteristics of different sensors remain significant problems. Moreover, obtaining sufficient and practical thermal infrared datasets for training a deep neural network still poses a challenge. For this reason, training with the help of data from the visible spectral range could be advantageous, particularly if the data to be evaluated contains both visible and infrared imagery. However, there is no clear evidence of how strongly variations in thermal radiation, shape, or color information influence classification accuracy. To gain deeper insight into how Convolutional Neural Networks make decisions and what they learn from different sensor input data, we investigate the suitability and robustness of different augmentation techniques. We use the publicly available large-scale multispectral ThermalWorld dataset, consisting of images in the long-wave infrared and visible spectral range showing persons, vehicles, buildings, and pets, and train a Convolutional Neural Network for image classification.
The training data is augmented with several modifications based on the different properties of the two spectral ranges, in order to determine which modifications have which impact and which lead to the best classification performance.
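The abstract does not specify the individual augmentations; as an illustration only, a modality-aware augmentation policy of the kind described could be sketched as below. The function names and the concrete policy (geometric flips for both spectral ranges, illumination jitter and color removal only for visible imagery) are hypothetical choices, not the authors' method.

```python
import numpy as np

def hflip(img):
    """Mirror the image horizontally; geometry-only, so it is
    plausible for both visible and thermal imagery."""
    return img[:, ::-1, ...]

def jitter_brightness(img, factor):
    """Scale intensities to mimic varying illumination (visible range).
    Assumes images are normalized to [0, 1]."""
    return np.clip(img * factor, 0.0, 1.0)

def drop_color(img):
    """Convert an RGB image to grayscale, removing color cues
    (ITU-R BT.601 luma weights)."""
    return img @ np.array([0.299, 0.587, 0.114])

def augment(img, modality, rng):
    """Apply a hypothetical modality-specific augmentation policy."""
    if rng.random() < 0.5:
        img = hflip(img)
    if modality == "visible":
        img = jitter_brightness(img, rng.uniform(0.7, 1.3))
        if rng.random() < 0.2:
            img = drop_color(img)  # probe reliance on color information
    return img

# Usage: augment(image, "visible", np.random.default_rng(0))
```

Applying color-removal only to visible images while restricting thermal images to geometric transforms would be one way to probe how much the network relies on color versus shape cues, which is the kind of question the paper investigates.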
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Vanessa Buhrmester, Ann-Kristin Grosselfinger, David Münch, and Michael Arens "Augmentation techniques for video surveillance in the visible and thermal spectral range", Proc. SPIE 11166, Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III, 111660N (7 October 2019);


