Burst packet loss causes significant quality degradation in streaming applications. Interleaving, which reduces the probability of losing adjacent packets, is considered an effective method for mitigating burst errors. Most current research on wavelet image/video streaming focuses on maximizing the interleaving effect in the spatial or spatial-frequency domain. However, to achieve the best video quality, optimizing temporal interleaving is also very important, especially when error concealment is present in the streaming system, because an inappropriate interleaving method may have an adverse effect on error concealment. Optimization of temporal interleaving for wavelet-compressed image/video streaming has not been previously studied. In this paper, a novel optimal packet interleaving method is proposed for streaming applications on burst-loss channels. The objective is to achieve the best video quality at the receiver given an error-concealment algorithm and the channel traffic conditions. The proposed method consists of two steps: 1) spatial interleaving is conducted during packetization to disperse the damage resulting from packet loss; 2) temporal interleaving is applied during transmission to maximize the effect of error concealment at the receiver. In addition, a new concept that addresses the needs of error concealment, namely "temporal neighbor packet distance", is defined in order to facilitate the optimization. A low-complexity algorithm is developed to satisfy the requirement of real-time transmission. Experimental results show that our proposed method consistently improves the effectiveness of error concealment.
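To illustrate why interleaving disperses burst damage, the following is a minimal sketch of a conventional row/column block interleaver; it is not the optimized spatial/temporal scheme proposed in the paper, and the depth and packet counts are arbitrary illustrative choices.

```python
# Minimal block interleaver sketch (illustrative only, not the paper's method).
# Packets are written row by row into a depth-row matrix and read out column
# by column, so a burst of consecutive channel losses becomes a set of
# isolated losses after de-interleaving, which error concealment handles better.

def block_interleave(packets, depth):
    """Return the transmission order and the permutation used to produce it."""
    n = len(packets)
    width = -(-n // depth)  # ceiling division
    order = [r * width + c for c in range(width) for r in range(depth)
             if r * width + c < n]
    return [packets[i] for i in order], order

def block_deinterleave(received, order):
    """Restore the original packet order given the transmission permutation."""
    restored = [None] * len(order)
    for pos, original_index in enumerate(order):
        restored[original_index] = received[pos]
    return restored

packets = list(range(12))
sent, order = block_interleave(packets, depth=4)
# Simulate a burst loss of three consecutive packets on the channel.
received = [p if i not in (3, 4, 5) else None for i, p in enumerate(sent)]
print(block_deinterleave(received, order))  # the losses land at scattered positions
```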
The term Content-Based appears often in applications for which MPEG-7 is expected to play a significant role. MPEG-7 standardizes descriptors of multimedia content, and while compression is not the primary focus of MPEG-7, the descriptors defined by MPEG-7 can be used to reconstruct a rough representation of an original multimedia source. In contrast, current image and video compression standards such as JPEG and MPEG are not designed to encode at the very low bit-rates that could be accomplished with MPEG-7 using descriptors. In this paper we show that content-based mechanisms can be introduced into compression algorithms to improve the scalability and functionality of current compression methods such as JPEG and MPEG. This is the fundamental idea behind Content-Based Compression (CBC). Our definition of CBC is a compression method that effectively encodes a sufficient description of the content of an image or a video in order to ensure that the recipient is able to reconstruct the image or video to some degree of accuracy. The degree of accuracy can be, for example, the classification error rate of the encoded objects, since in MPEG-7 the classification error rate measures the performance of the content descriptors. We argue that the major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier, or with a quantizer which minimizes classification error. Compared to conventional image and video compression methods such as JPEG and MPEG, our results show that content-based compression is able to achieve more efficient image and video coding by suppressing the background while leaving the objects of interest nearly intact.
Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can explore only the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training samples during the feature extraction phase, which can incur significant computation when applied to very large datasets. Finally, KFE is prone to overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points generated by data-editing techniques and centroid points determined by the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets while also reducing computational complexity. Furthermore, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
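As a point of reference for the centroid-selection step, the snippet below is a minimal sketch of Frequency Sensitive Competitive Learning in its usual textbook form, where each unit's distance is scaled by its win count; the learning rate, epoch count, and data handling are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal FSCL sketch (standard formulation, illustrative parameters).
# Each centroid's distance to a sample is weighted by how often that centroid
# has already won, so rarely used centroids are pulled into the competition.

def fscl_centroids(data, num_centroids, epochs=20, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), num_centroids, replace=False)].astype(float)
    wins = np.ones(num_centroids)                    # win counts (frequencies)
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            dists = np.linalg.norm(centroids - x, axis=1)
            winner = np.argmin(wins * dists)         # frequency-sensitive competition
            centroids[winner] += lr * (x - centroids[winner])
            wins[winner] += 1
    return centroids
```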
Kernel-based Feature Extraction (KFE) is an emerging nonlinear discriminant feature extraction technique. In many classification scenarios, using KFE allows the dimensionality of the raw data to be reduced while class separability is preserved or even improved. KFE offers better performance than alternative linear algorithms because it exploits nonlinear discriminating information among the classes. In this paper, we explore the potential application of KFE to radar signatures, as might be used for Automatic Target Recognition (ATR). Radar signatures can be problematic for many traditional ATR algorithms because of their unique characteristics. For example, some unprocessed radar signatures are high dimensional, linearly inseparable, and extremely sensitive to aspect changes. Applying KFE to High Range Resolution (HRR) radar signatures, we observe that KFE is quite effective on HRR data in terms of preserving or improving separability and reducing the dimensionality of the original data. Furthermore, our experiments indicate how many extracted features are needed for HRR radar signatures.
Feature Extraction (FE) algorithms have attracted great attention in recent years. To improve the performance of FE algorithms, nonlinear kernel transformations (e.g., the kernel trick) and scatter-matrix-based class separability criteria have been introduced in Kernel-based Feature Extraction (KFE). However, for any L-class problem, at most L-1 nonlinear kernel features can be extracted by KFE, which is undesirable for many applications. To address this problem, a Modified Kernel-based Feature Extraction (MKFE) algorithm based on nonparametric scatter matrices was proposed, but with the limitation of only being able to extract multiple features for 2-class problems. In this paper, we present a general MKFE algorithm for multi-class problems. The core of our algorithm is a novel expression of the nonparametric between-class matrix, which is shown to be consistent with the definition of the parametric between-class matrix in the sense of the scatter-matrix-based class separability criteria. Based on this expression of the between-class matrix, our algorithm is able to extract multiple kernel features in multi-class problems. To speed up the computation, we also propose a simplified formula. Experimental results on synthetic data are provided to demonstrate the effectiveness of the proposed algorithm.
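For context, the standard parametric scatter matrices and the trace-form separability criterion are recalled below; these are textbook definitions rather than the paper's nonparametric expression, and the rank argument explains the L-1 limit mentioned above.

```latex
% Parametric scatter matrices and a common scatter-matrix-based separability
% criterion (textbook definitions, stated here only for context).
S_b = \sum_{i=1}^{L} P_i\, (\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^{\mathsf{T}},
\qquad
S_w = \sum_{i=1}^{L} P_i\, \mathbb{E}\!\left[(\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^{\mathsf{T}} \,\middle|\, \omega_i\right],
\qquad
J = \operatorname{tr}\!\left(S_w^{-1} S_b\right).
% Because S_b is a sum of L rank-one terms whose deviations from the global
% mean are linearly dependent, rank(S_b) <= L - 1, which is why at most
% L - 1 features can be extracted with the parametric between-class matrix.
```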
The Support Vector Machine (SVM) is an emerging machine learning technique that has found widespread application in various areas during the past four years. The success of SVMs is mainly due to a number of attractive features, including a) applicability to the processing of high dimensional data, b) the ability to achieve a global optimum, and c) the ability to deal with nonlinear data. One potential application of SVMs is to High Range Resolution (HRR) radar signatures, typically in the context of HRR-based Automatic Target Recognition (ATR). HRR signatures are problematic for many traditional ATR algorithms because of their unique characteristics. For example, HRR signatures are generally high dimensional, linearly inseparable, and extremely sensitive to aspect changes. In this paper we demonstrate that SVMs are a promising alternative for dealing with the challenges of HRR signatures. The studies presented in this paper represent an initial attempt at applying SVMs to HRR data. The most straightforward application of SVMs to HRR-based ATR is to use them as classifiers. We experimentally compare the performance of SVM-based classifiers with several conventional classifiers, such as k-Nearest-Neighbor (kNN) classifiers and Artificial Neural Network (ANN) classifiers. Experimental results suggest that SVM classifiers possess a number of advantages. For example, a) applying SVM classifiers to HRR data requires little prior knowledge of the target data, b) SVM classifiers require much less computation than kNN classifiers during testing, and c) the structure of a trained SVM classifier can reveal a number of important properties of the target data.
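As a hypothetical illustration of the classifier comparison described above, the sketch below trains an RBF-kernel SVM and a kNN classifier with scikit-learn; the synthetic data merely stands in for HRR signatures, and all parameter values are assumptions rather than the paper's settings.

```python
# Hypothetical SVM-vs-kNN comparison (synthetic stand-in for HRR data).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=64, n_informative=10,
                           n_classes=3, n_clusters_per_class=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# At test time the SVM evaluates only its support vectors, whereas kNN must
# compare each test sample against the entire training set.
print("SVM accuracy:", svm.score(X_te, y_te),
      "| support vectors:", int(svm.n_support_.sum()))
print("kNN accuracy:", knn.score(X_te, y_te))
```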
First-generation image compression methods using block-based DCT or wavelet transforms compress all image blocks with a uniform compression ratio. Consequently, any regions of special interest are degraded along with the remainder of the image. Second-generation image compression methods apply object-based compression techniques, in which each object is first segmented and then encoded separately. Content-based compression further improves on object-based compression by applying image understanding techniques. First, each object is recognized or classified, and then different objects are compressed at different compression rates according to their priorities. Regions with higher priorities (such as objects of interest) receive more encoding bits than less important regions, such as the background. The major difference between a content-based compression algorithm and conventional block-based or object-based compression algorithms is that content-based compression replaces the quantizer with a more sophisticated classifier. In this paper we describe a technique in which the image is first segmented into regions by texture and color. These regions are then classified and merged into different objects by a classifier operating on their color, texture, and shape features. Each object is then transformed by either the DCT or wavelets. The resulting coefficients are encoded to an accuracy that minimizes recognition error and satisfies other requirements. We employ the Chernoff bound to compute the cost function of the recognition error. Compared with conventional image compression methods, our results show that content-based compression achieves more efficient image coding by suppressing the background while leaving the objects of interest virtually intact.
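For reference, the standard two-class Gaussian form of the Chernoff bound on classification error is recalled below; this is the textbook expression, and the exact cost function used in the paper may differ in detail.

```latex
% Chernoff bound on the two-class error for Gaussian class-conditional
% densities N(mu_i, Sigma_i) with priors P_1, P_2 (textbook form).
P(\text{error}) \le P_1^{\,s} P_2^{\,1-s}\, e^{-k(s)}, \qquad 0 \le s \le 1,
\quad\text{where}\quad
k(s) = \frac{s(1-s)}{2}
       (\boldsymbol{\mu}_2 - \boldsymbol{\mu}_1)^{\mathsf{T}}
       \left[s\Sigma_1 + (1-s)\Sigma_2\right]^{-1}
       (\boldsymbol{\mu}_2 - \boldsymbol{\mu}_1)
     + \frac{1}{2}\ln\frac{\left|s\Sigma_1 + (1-s)\Sigma_2\right|}
                          {\left|\Sigma_1\right|^{s}\left|\Sigma_2\right|^{1-s}}.
% Minimizing the bound over s (s = 1/2 gives the Bhattacharyya bound) yields
% a tractable surrogate for the recognition-error cost.
```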