Because video sequences consist of dynamic video objects, object motion is an effective feature for describing the content of video sequences. We propose a method that converts the motion vectors (MVs) in an MPEG-coded sequence into a uniform MV set, independent of frame type and prediction direction, and uses these normalized MVs (N-MVs) as motion descriptors for motion analysis and representation of video content. To obtain such a uniform MV set, we propose a new motion analysis method based on a bidirectional prediction-independent framework. In general, an I frame, which carries no MVs, cannot be compared directly with other frame types such as B or P frames; our approach enables a frame-type-independent representation that normalizes temporal features including frame type, macroblock (MB) encoding, and MVs. Experimental results show that the proposed method performs well and outperforms the conventional approach. Compared with a full-decoding method, the average processing time is reduced by about 55%, because our method operates directly on the MPEG bit stream after variable length code (VLC) decoding, and the average effective number of normalized MVs is about 25% higher than that of the conventional method.
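The paper does not spell out its normalization formula in this abstract, but the general idea of mapping P- and B-frame MVs to a uniform, frame-type-independent set can be sketched as follows. The snippet below is a minimal illustration, assuming a common compressed-domain scheme: each MV is scaled by the temporal distance to its reference frame and backward-predicted vectors are sign-flipped so that every MV expresses forward motion per frame interval. All names and the exact scaling rule here are hypothetical, not the authors' implementation.

```python
# Hedged sketch: normalizing macroblock MVs from P and B frames to a
# uniform forward-motion representation. This is an assumed scheme for
# illustration only, not the paper's exact algorithm.

from dataclasses import dataclass


@dataclass
class MacroblockMV:
    dx: float            # horizontal displacement in pixels
    dy: float            # vertical displacement in pixels
    direction: str       # "forward" or "backward" prediction
    ref_distance: int    # frames between this frame and its reference (> 0)


def normalize_mv(mv: MacroblockMV) -> tuple[float, float]:
    """Map an encoded MV to a normalized forward MV per frame interval."""
    # Scale to a one-frame displacement, assuming roughly linear motion.
    dx = mv.dx / mv.ref_distance
    dy = mv.dy / mv.ref_distance
    # A backward-predicted MV points toward a future reference, so its
    # sign is inverted to express the same motion in the forward sense.
    if mv.direction == "backward":
        dx, dy = -dx, -dy
    return dx, dy


# Example: a B-frame macroblock predicted backward from a reference two
# frames ahead becomes a forward motion of (+2, -1) pixels per frame.
print(normalize_mv(MacroblockMV(dx=-4, dy=2, direction="backward", ref_distance=2)))
```

I-frame macroblocks carry no MVs at all; a frame-type-independent representation therefore also needs a rule for assigning motion to them (e.g., interpolating from the MVs of neighboring B frames), which is one of the issues the proposed framework addresses.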