Specific emitter identification has long been an important research topic for radio-related departments in many countries. However, the sharp increase in the variety of radiation sources and the growing complexity of electromagnetic space have made identifying individual radiation sources more challenging. To provide a convenient and effective identification method, this paper proposes an individual emitter identification method based on the characteristics of synchronization signal rising edges. By combining these rising-edge characteristics with deep neural networks, the proposed method achieves a very high individual identification rate for radiation sources.
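A minimal sketch of the general idea, assuming the synchronization pulse is available as a sampled envelope; the 10%–90% edge definition, the fixed-length resampling, and the MLP classifier are illustrative assumptions rather than the authors' exact pipeline.

```python
# Hypothetical rising-edge feature extraction and emitter classification sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rising_edge_features(signal, n_points=64):
    """Locate the 10%-90% rising edge of a synchronization pulse and
    resample it to a fixed-length, amplitude-normalized feature vector."""
    env = np.abs(signal)                        # amplitude envelope
    peak = env.max()
    lo = np.argmax(env > 0.1 * peak)            # first sample above 10% of peak
    hi = lo + np.argmax(env[lo:] > 0.9 * peak)  # first sample above 90% of peak
    edge = env[lo:hi + 1]
    t_old = np.linspace(0.0, 1.0, len(edge))
    t_new = np.linspace(0.0, 1.0, n_points)     # common length for all emitters
    return np.interp(t_new, t_old, edge) / peak

# Assumed data: pulses is a list of recorded sync pulses, labels their emitters.
# feats = np.stack([rising_edge_features(p) for p in pulses])
# clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(feats, labels)
```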
Center-surrounded receptive fields, which can be well simulated by the Laplacian of Gaussian (LOG) filter, have been found in the cells of the retina and lateral geniculate nucleus (LGN). With center-surrounded receptive fields, the human visual system (HVS) can reduce visual redundancy by extracting the edges and contours of objects. Furthermore, current research on image quality assessment (IQA) has shown that human perception of image quality can be estimated by the degree of correlation between the extracted perceptual-aware features of the reference and test images. Thus, this paper assesses the quality of a video by measuring the similarity of perceptual-aware features from LOG filtering between the test video and the reference video.
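A minimal sketch of this idea for a single pair of frames, assuming grayscale frames as 2D NumPy arrays; the sigma, the stabilizing constant, and the average pooling are illustrative choices, not the paper's exact formulation.

```python
# Similarity between LOG (center-surround) responses of a reference and test frame.
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_similarity(ref_frame, test_frame, sigma=1.5, c=1e-3):
    r = gaussian_laplace(ref_frame.astype(np.float64), sigma)   # perceptual-aware map (reference)
    t = gaussian_laplace(test_frame.astype(np.float64), sigma)  # perceptual-aware map (test)
    sim = (2.0 * r * t + c) / (r ** 2 + t ** 2 + c)             # pixel-wise similarity
    return sim.mean()                                           # pooled frame-level score
```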
Considering that the spatial and temporal channels of the human visual system both include the second derivative of a Gaussian function, we first construct a three-dimensional LOG (3D LOG) filter to simulate the human visual filter and to extract perceptual-aware features for the design of VQA algorithms. Moreover, since correlation measurement based on 2D LOG filtering of video spatiotemporal slice (STS) images can capture the distortion of spatiotemporal motion structure accurately and effectively, we also apply 2D LOG filtering to video STS images and use maximum pooling over the distortions of the vertical and horizontal STS images to improve prediction accuracy.
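An illustrative construction of a 3D LOG kernel, assuming an isotropic Gaussian with a single scale; the authors' filter may use different spatial and temporal scales.

```python
# Analytic 3D Laplacian-of-Gaussian kernel over (x, y, t).
import numpy as np

def log3d_kernel(sigma=1.5, radius=4):
    ax = np.arange(-radius, radius + 1)
    x, y, t = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = x ** 2 + y ** 2 + t ** 2
    g = np.exp(-r2 / (2.0 * sigma ** 2))            # 3D Gaussian (unnormalized)
    log = (r2 / sigma ** 4 - 3.0 / sigma ** 2) * g  # Laplacian of the Gaussian
    return log - log.mean()                         # remove DC so flat regions give zero response

# The kernel can be applied to a video volume (frames x height x width), e.g. with
# scipy.ndimage.convolve, to obtain spatiotemporal perceptual-aware feature maps.
```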
The performance of the proposed algorithms is validated on the LIVE VQA database. The Spearman's rank correlation coefficients of the proposed algorithms are all above 0.82, which shows that our methods outperform most mainstream VQA methods.
Image quality assessment (IQA) has been an active research topic since the birth of the digital image, and the arrival of deep learning has made IQA even more promising. However, most state-of-the-art no-reference (NR) IQA methods require regression training on distorted images or extracted features with subjective image scores, so they suffer from insufficient reference image content and a shortage of subjectively scored training samples, because subjective testing is time-consuming and laborious. Furthermore, most convolutional neural network (CNN)-based methods transform original images into patches to accommodate the fixed-size input of a CNN, which often alters the image data and introduces noise into the neural network. This paper aims to solve the above problems by adopting new strategies and proposes a novel NR-IQA method based on a deep CNN. Specifically, we first obtain image data with diverse content, multiple sizes, and reasonable distortions by crawling, filtering, and degrading numerous publicly licensed high-quality images from the Internet. Then, we score all the images using an excellent full-reference (FR) IQA algorithm, thereby artificially constructing a large objective IQA database. Next, we design a deep CNN that accepts input images at their original sizes from our database instead of patches, and we train the model with the FR-IQA index as the training objective, thus proposing an opinion-unaware (OU) NR-IQA method. Finally, the experimental results show that our method achieves excellent performance: it outperforms state-of-the-art OU-NR-IQA models and is comparable to most traditional opinion-aware NR-IQA methods, and even to some FR-IQA methods, on standard subjective IQA databases.
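A sketch of how a CNN can accept images at their original sizes by ending in global (adaptive) pooling, trained against FR-IQA pseudo-labels; the layer counts, channel widths, and the batch-size-1 training assumption are illustrative, not the paper's exact architecture.

```python
# Size-agnostic NR-IQA regressor sketch (PyTorch).
import torch
import torch.nn as nn

class NRIQANet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # makes the output independent of input size
        self.head = nn.Linear(128, 1)         # regresses the FR-IQA pseudo-score

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.head(x)

# Assumed training objective: minimize MSE between predictions and FR-IQA scores,
# feeding one original-size image at a time (or size-bucketed batches).
# loss = nn.functional.mse_loss(NRIQANet()(image), fr_iqa_score)
```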
Most learning-based no-reference (NR) video quality assessment (VQA) methods need to be trained with a large number of subjective quality scores. However, it is currently difficult to obtain a large volume of subjective scores for videos. Inspired by the success of full-reference VQA methods based on spatiotemporal slice (STS) images in extracting perceptual features and evaluating video quality, this paper adopts multi-directional video STS images, which are images composed of multi-directional sections of video data, to deal with the lack of subjective quality scores. By sampling the STS images of a video into image patches and adding noise to the quality labels of the patches, a successful NR VQA model based on multi-directional STS images and neural network training is proposed. Specifically, first, we select the subjective database that currently contains the largest number of real distorted videos as the test set. Second, we perform multi-directional STS extraction on the videos and sample local patches from the multi-directional STS images to augment the training sample set. Besides, we add some noise to the quality labels of the local patches. Third, a reasonable deep neural network is constructed and trained to obtain a local quality prediction model for each patch in the STS image, and then the quality of an entire video is obtained by averaging the model prediction results over the multi-directional STS images. Finally, the experimental results indicate that the proposed method tackles the insufficiency of training samples in small subjective VQA datasets and obtains a high correlation with subjective evaluation.
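A sketch of STS extraction and patch sampling under simple assumptions: only the two canonical slicing directions (central row and central column) are shown, and the patch size, stride, and label-noise level are illustrative.

```python
# Spatiotemporal slice extraction and patch sampling sketch.
import numpy as np

def sts_images(video):
    """video: array of shape (T, H, W). Returns horizontal and vertical STS images."""
    T, H, W = video.shape
    horizontal = video[:, H // 2, :]   # time x width slice at the central row
    vertical = video[:, :, W // 2]     # time x height slice at the central column
    return horizontal, vertical

def sample_patches(sts, size=32, stride=32):
    patches = []
    for i in range(0, sts.shape[0] - size + 1, stride):
        for j in range(0, sts.shape[1] - size + 1, stride):
            patches.append(sts[i:i + size, j:j + size])
    return np.stack(patches)

# Each patch inherits the video-level quality score plus a small perturbation,
# e.g. label = mos + np.random.normal(0.0, 0.1), to augment the training set.
```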
As video applications become more popular, no-reference video quality assessment (NR-VQA) has become a focus of research. In many existing NR-VQA methods, perceptual feature extraction is often the key to success. Therefore, in this paper we design methods to extract perceptual features that contain a wider range of spatiotemporal information from multi-directional video spatiotemporal slice (STS) images (the images generated by cutting video data parallel to the temporal dimension in multiple directions) and use a support vector machine (SVM) to perform successful NR video quality evaluation. In the proposed NR-VQA design, we first extract the multi-directional video STS images to obtain as complete a representation of the overall video motion as possible. Second, the perceptual features of the multi-directional video STS images, such as the moments of feature maps, joint distribution features from the gradient magnitude and the Laplacian of Gaussian filtering response, and motion energy characteristics, are extracted to characterize the motion statistics of videos. Finally, the extracted perceptual features are fed into an SVM or multilayer perceptron (MLP) for training and testing. The experimental results show that the proposed method achieves state-of-the-art quality prediction performance on the largest existing annotated video database.
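A sketch of the regression stage, assuming the perceptual features have already been extracted from the multi-directional STS images; the SVR kernel and hyperparameters here are illustrative assumptions.

```python
# Training a quality regressor on STS perceptual features (scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def train_quality_regressor(features, scores):
    """features: (n_videos, n_features) STS statistics (e.g. feature-map moments,
    gradient/LOG joint-distribution features, motion energy); scores: MOS values."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
    return model.fit(features, scores)

# predicted = train_quality_regressor(train_feats, train_mos).predict(test_feats)
```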
Video quality assessment (VQA) is becoming increasingly important as a comprehensive measure of video quality. This paper proposes a full-reference VQA (FR-VQA) algorithm based on the motion structure partition similarity of spatiotemporal slice (STS) images. To achieve this objective, a number of FR image quality assessment algorithms were applied slice by slice to video STS images to compare their performance in detecting the structural similarity of STS images. The algorithm that performed best was selected to detect the similarity between motion-partitioning STS images. Next, as moving objects in the video sequence were found to influence prediction performance differently depending on their speed and trajectory, the STS images were divided into simple and complex motion regions, and their contributions to the VQA task were determined. Consequently, a promising, effective, and efficient VQA model, called STS-MSPS, is also proposed. Experimental evaluations conducted on various annotated VQA databases indicate that the proposed STS-MSPS achieves state-of-the-art prediction performance in terms of correlation with subjective evaluation and statistical significance tests. This paper also shows that STS images by themselves provide sufficient information for VQA tasks and that the proposed complex motion region of an STS image is predominantly responsible for yielding a high-precision model.
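A hypothetical sketch of partitioning an STS image into simple and complex motion regions by thresholding local gradient activity; the paper's actual partition criterion, window size, and threshold may differ.

```python
# Illustrative simple/complex motion-region partition of a spatiotemporal slice.
import numpy as np
from scipy.ndimage import uniform_filter

def partition_motion_regions(sts, win=8, thresh=None):
    """sts: 2D spatiotemporal slice (time x space). Returns a boolean mask that is
    True where local activity suggests complex motion, False for simple motion."""
    gy, gx = np.gradient(sts.astype(np.float64))
    activity = uniform_filter(np.hypot(gx, gy), size=win)  # local mean gradient magnitude
    if thresh is None:
        thresh = np.median(activity)                        # data-driven split (assumed)
    return activity > thresh
```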
Video quality assessment (VQA) has been a hot research topic because of the rapidly increasing demand for video communications. From the earliest PSNR metric to advanced perceptually aware models, researchers have made great progress in this field by introducing properties of the human visual system (HVS) into VQA model design. Among the various algorithms that model how the HVS perceives motion, the spatiotemporal energy model has been validated to be highly consistent with psychophysical experiments. In this paper, we take the spatiotemporal energy model into VQA model design by the following steps. 1) Following the original spatiotemporal energy model proposed by Adelson et al., we apply linear filters, which are oriented in space-time and tuned in spatial frequency, to the reference and test videos respectively. The outputs of quadrature pairs of the above filters are then squared and summed to give two measures of motion energy, named the rightward and leftward energy responses. 2) Based on the original model, we compute the sum of the rightward and leftward energy responses as spatiotemporal features representing perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is computed from statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparison with existing FR-VQA models. Experimental results show that STME achieves excellent prediction accuracy and ranks among state-of-the-art VQA models.
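A simplified sketch of the Adelson-Bergen style motion energy computation on an (x, t) slice of a video; the filter orientations, scale, spatial frequency, and the reduction to a single slice are assumptions for illustration only.

```python
# Total spatiotemporal motion energy from quadrature pairs of oriented filters.
import numpy as np
from scipy.signal import fftconvolve

def st_gabor(theta, sigma=3.0, freq=0.15, radius=10):
    """Quadrature pair of space-time oriented Gabor filters (even, odd)."""
    ax = np.arange(-radius, radius + 1)
    x, t = np.meshgrid(ax, ax, indexing="ij")
    u = x * np.cos(theta) + t * np.sin(theta)          # axis along the tuned orientation
    g = np.exp(-(x ** 2 + t ** 2) / (2.0 * sigma ** 2))
    return g * np.cos(2 * np.pi * freq * u), g * np.sin(2 * np.pi * freq * u)

def total_motion_energy(xt_slice):
    """Sum of rightward and leftward motion energy over an (x, t) slice."""
    total = np.zeros_like(xt_slice, dtype=np.float64)
    for theta in (np.pi / 4, -np.pi / 4):              # rightward- and leftward-tuned filters
        even, odd = st_gabor(theta)
        total += fftconvolve(xt_slice, even, mode="same") ** 2 \
               + fftconvolve(xt_slice, odd, mode="same") ** 2  # squared quadrature outputs
    return total

# The STME score is then derived from pixel-wise correlation statistics between the
# total energy maps of the reference and distorted videos.
```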
Video quality assessment (VQA) has been a hot topic due to the rapidly increasing demand for related video applications. The existing state-of-the-art full-reference (FR) VQA metric ViS3 adapts the Most Apparent Distortion (MAD) algorithm to capture spatial distortion first, and then quantifies the spatiotemporal distortion using spatiotemporal correlation and an HVS-based model applied to spatiotemporal slice (STS) images. In this paper we argue that STS images can provide enough information for measuring video distortion. Taking advantage of an effective and easily applied FR image quality model, GMSD, we propose to measure video quality by analyzing the structural changes between the STS images of the reference videos and their distorted counterparts. This new VQA model is denoted STS-GMSD. To further investigate the influence of spatial dissimilarity, we also combine the frame-by-frame spatial GMSD factor with STS-GMSD and propose another VQA model, named SSTS-GMSD. Extensive experimental evaluations on two benchmark video quality databases demonstrate that the proposed STS-GMSD outperforms existing state-of-the-art FR-VQA methods, while STS-GMSD performs on par with SSTS-GMSD, which validates that STS images contain enough information for FR-VQA model design.
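A sketch of gradient magnitude similarity deviation (GMSD) computed on a pair of STS images; the Prewitt kernels and stabilizing constant follow the standard GMSD formulation, while the pooling over all slices of a video is an assumption here.

```python
# GMSD on a reference/distorted STS image pair.
import numpy as np
from scipy.ndimage import convolve

_PREWITT_X = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0
_PREWITT_Y = _PREWITT_X.T

def gmsd(ref_sts, dist_sts, c=170.0):
    def grad_mag(img):
        gx = convolve(img.astype(np.float64), _PREWITT_X)
        gy = convolve(img.astype(np.float64), _PREWITT_Y)
        return np.hypot(gx, gy)
    gr, gd = grad_mag(ref_sts), grad_mag(dist_sts)
    gms = (2.0 * gr * gd + c) / (gr ** 2 + gd ** 2 + c)   # gradient magnitude similarity map
    return gms.std()                                       # deviation = distortion level

# STS-GMSD (assumed pooling): average gmsd() over all horizontal and vertical STS
# images of the video; SSTS-GMSD additionally combines the frame-wise spatial GMSD.
```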