NeTra-V: toward an object-based video representation (23 December 1997)
Proceedings Volume 3312, Storage and Retrieval for Image and Video Databases VI; (1997)
Event: Photonics West '98 Electronic Imaging, 1998, San Jose, CA, United States
There is a growing need for new representations of video that allow not only compact storage of data but also content-based functionalities such as search and manipulation of objects. We present here a prototype system, called NeTra-V, that is currently being developed to address some of these content-related issues. The system has a two-stage video processing structure: a global feature extraction and clustering stage, and a local feature extraction and object-based representation stage. Key aspects of the system include a new spatio-temporal segmentation and object-tracking scheme, and a hierarchical object-based video representation model. The spatio-temporal segmentation scheme combines color/texture image segmentation with affine motion estimation techniques. Experimental results show that the proposed approach can handle large motion. The output of the segmentation, the alpha plane as it is referred to in MPEG-4 terminology, can be used to compute local image properties. This local information forms the low-level content description module in our video representation. Experimental results illustrating spatio-temporal segmentation and tracking are provided.
© (1997) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yining Deng, Debargha Mukherjee, and B. S. Manjunath "NeTra-V: toward an object-based video representation", Proc. SPIE 3312, Storage and Retrieval for Image and Video Databases VI, (23 December 1997);
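The affine motion estimation mentioned in the abstract models the motion of each segmented region with a 6-parameter affine transform. The paper's exact estimation procedure is not given here; the following is a minimal illustrative sketch, assuming point correspondences between frames are available (e.g., from block matching), of fitting the 6-parameter model by least squares. The function name and setup are hypothetical, not taken from the paper.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares fit of a 6-parameter affine motion model mapping
    src points (N, 2) in one frame to dst points (N, 2) in the next:
        x' = a1*x + a2*y + a3
        y' = a4*x + a5*y + a6
    Returns the parameter vector [a1, a2, a3, a4, a5, a6].
    (Illustrative sketch; not the paper's actual estimation procedure.)
    """
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    # Interleave the x' and y' equations row by row.
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)  # [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```

With at least three non-collinear correspondences the system is determined; more points give a least-squares fit that is more robust to matching noise, which is why region-level (rather than pixel-level) motion models of this kind can tolerate large motion.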

