10 February 2009 Texture preservation in de-noising UAV surveillance video through multi-frame sampling
Proceedings Volume 7245, Image Processing: Algorithms and Systems VII; 72450A (2009)
Event: IS&T/SPIE Electronic Imaging, 2009, San Jose, California, United States
Image de-noising is widely used in modern real-world surveillance systems. Without direct knowledge of the noise model, few methods perform both de-noising and texture preservation well. Most neighborhood fusion-based de-noising methods tend to over-smooth the images, causing a significant loss of detail. Recently, a non-local means method has been developed that exploits the similarities among different pixels. This technique preserves textures well; however, it also introduces some artifacts. In this paper, we use the scale-invariant feature transform (SIFT) [1] to find corresponding regions between different frames, and then reconstruct the de-noised images as a weighted sum of these corresponding regions. Both hard and soft criteria are applied in order to minimize the artifacts. Experiments on real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.
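The fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the SIFT matching has already produced a stack of candidate patches corresponding to a noisy reference patch, and it combines them with a hard distance cutoff (discard dissimilar candidates) and a soft exponential weighting in the style of non-local means. The function name, the parameter `h`, and the `hard_cutoff` threshold are all illustrative choices, not taken from the paper.

```python
import numpy as np

def fuse_corresponding_patches(ref_patch, candidate_patches, h=10.0, hard_cutoff=None):
    """Weighted-sum fusion of corresponding patches (illustrative sketch).

    ref_patch:         (H, W) noisy reference patch.
    candidate_patches: (N, H, W) patches from other frames, assumed already
                       matched to ref_patch (e.g. via SIFT correspondences).
    h:                 soft criterion: decay parameter for the exponential
                       weights (larger h -> more uniform averaging).
    hard_cutoff:       hard criterion: candidates whose mean squared distance
                       to the reference exceeds this value are discarded.
    """
    ref = np.asarray(ref_patch, dtype=np.float64)
    cands = np.asarray(candidate_patches, dtype=np.float64)

    # Mean squared distance between the reference and each candidate patch.
    d2 = np.mean((cands - ref) ** 2, axis=(1, 2))

    # Hard criterion: drop candidates that are too far from the reference.
    if hard_cutoff is not None:
        keep = d2 <= hard_cutoff
        cands, d2 = cands[keep], d2[keep]

    # Soft criterion: exponentially decaying similarity weights.
    w = np.exp(-d2 / (h ** 2))

    # Always include the reference patch itself with unit weight.
    weights = np.concatenate(([1.0], w))
    stack = np.concatenate((ref[None], cands), axis=0)

    # Normalized weighted sum of all surviving patches.
    return np.tensordot(weights, stack, axes=1) / weights.sum()
```

With many well-matched candidates, the weighted average suppresses noise roughly in proportion to the square root of the number of patches fused, while the hard cutoff prevents mismatched regions from bleeding into the result.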
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Yi Wang, Ronald A. Fevig, and Richard R. Schultz "Texture preservation in de-noising UAV surveillance video through multi-frame sampling", Proc. SPIE 7245, Image Processing: Algorithms and Systems VII, 72450A (10 February 2009);

