Relative attributes offer a more detailed and accurate description than conventional binary attributes. We propose to exploit attribute-correlated local image regions for learning deep relative attributes. Unlike previous works, which usually discover the spatial extent of an attribute from the ranking list of all images in the image set, we first classify the images according to the presence or absence of each provided attribute. We then sort the images within each classified set using a semi-supervised method and learn the regions most relevant to a specific attribute. The local regions learned from the two classified sets are integrated to obtain the final result. The images and localized regions are then fed into a pretrained convolutional neural network for feature extraction, and the concatenation of the high-level global feature and the intermediate local feature is used to predict the relative attributes. The proposed method achieves competitive performance against the state of the art in relative attribute prediction on three public benchmarks.
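As a rough illustration of the fusion step described above (not the paper's implementation), the global and local CNN descriptors can be normalized and concatenated before being passed to the relative-attribute ranker; the L2 normalization is an assumption made here for the sketch:

```python
import numpy as np

def fuse_features(global_feat, local_feat):
    # Sketch: L2-normalize the high-level global descriptor and the
    # intermediate local-region descriptor, then concatenate them.
    # The normalization is an illustrative choice, not specified above.
    g = global_feat / (np.linalg.norm(global_feat) + 1e-12)
    l = local_feat / (np.linalg.norm(local_feat) + 1e-12)
    return np.concatenate([g, l])
```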
Document forgeries are often accomplished by copy-pasting or add-printing, tampering operations that introduce character distortion mutations into the document. We present a method for exposing such forgeries using the distortion mutation of geometric parameters. We estimate distortion parameters, consisting of translation and rotation distortions, through image matching for each character. Tampered characters are then detected based on a distortion probability computed from the character distortion parameters, and a visualized probability map describes the degree of distortion mutation over a full page. The proposed method exposes forgeries at the level of individual characters and applies to both English and Chinese document examination. Experimental results demonstrate the effectiveness of our method under low JPEG compression quality and low resolution.
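The mapping from per-character distortion parameters to a distortion probability could take many forms; a minimal sketch, assuming a diagonal-covariance outlier score against the page-wide statistics (the exact model is not specified in the abstract):

```python
import numpy as np

def distortion_probability(params):
    # params: (n, 3) array of per-character (tx, ty, theta) distortion
    # estimates. Score each character by its squared Mahalanobis-style
    # distance from the document-wide mean (diagonal covariance
    # assumed) and squash to (0, 1); characters whose distortion
    # mutates away from the page norm score high.
    p = np.asarray(params, float)
    z = (p - p.mean(0)) / (p.std(0) + 1e-9)
    d2 = (z ** 2).sum(1)
    return 1.0 - np.exp(-d2 / 2.0)
```

A character pasted in from another source typically carries translation/rotation parameters inconsistent with its neighbors, so its probability approaches 1 while untampered characters stay low.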
Cover source mismatch is a common problem in steganalysis and may degrade detection accuracy. In this paper, we present a novel method, Robust Discriminative Feature Transformation (RDFT), to mitigate JPEG quantization table mismatch. RDFT maps original features to new feature representations through a non-linear transformation matrix. It improves the statistical consistency of the training and testing samples, learning matched feature representations from the original features by minimizing the feature distribution difference while preserving the classification ability of the training data. Comparison with prior art shows that the proposed RDFT algorithm significantly outperforms traditional steganalyzers under mismatched conditions and approaches the accuracy of the matched scenario. RDFT has several appealing advantages: 1) it improves the statistical consistency of the training and testing data; 2) it reduces the distribution difference between training and testing features; 3) it preserves the classification ability of the training data; 4) it is robust to its parameters, achieving good performance over a wide range of parameter values.
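RDFT itself learns a non-linear transformation; as a much simpler stand-in that only illustrates the goal of reducing the train/test feature distribution gap, one can match the first two moments of each feature dimension:

```python
import numpy as np

def moment_match(train, test):
    # Simple distribution-alignment stand-in (NOT the RDFT algorithm):
    # shift and rescale each test-feature dimension so its mean and
    # standard deviation match the training features.
    mu_tr, sd_tr = train.mean(0), train.std(0) + 1e-9
    mu_te, sd_te = test.mean(0), test.std(0) + 1e-9
    return (test - mu_te) / sd_te * sd_tr + mu_tr
```

After the transform, a classifier trained on `train` sees test features with training-like first- and second-order statistics, which is the same consistency goal RDFT pursues with a learned transformation.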
In this paper, we present an efficient method to locate the forged parts of a tampered JPEG image. In JPEG image forgeries, the forged region usually undergoes a different JPEG compression from the background region. When a region from one JPEG image is pasted into another host JPEG image and the result is resaved in JPEG format, the JPEG block grid of the tampered region often mismatches that of the host image by a certain shift, a phenomenon known as non-aligned double JPEG compression (NA-DJPEG). We identify different JPEG compression forms by estimating the shift of NA-DJPEG compression, based on the percentage of nonzero JPEG coefficients in different situations. Compared to previous work, our tampering localization method (i) performs better on small images, (ii) is robust to common tampering operations such as resizing, rotation, and blurring, and (iii) needs neither an image dataset to train a machine-learning classifier nor a tuned threshold.
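The grid-shift idea can be sketched as follows, under assumptions made here for illustration (a single flat quantization step `q` and an exhaustive search over the 64 possible offsets; the paper's actual statistic differs in detail): the true compression grid is the offset at which blockwise DCT coefficients quantize to the fewest nonzeros.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def nonzero_fraction(img, dx, dy, q=16):
    # Fraction of nonzero quantized DCT coefficients for an 8x8 grid
    # starting at offset (dy, dx).
    D = dct_matrix()
    h, w = img.shape
    sub = img[dy:dy + ((h - dy) // 8) * 8, dx:dx + ((w - dx) // 8) * 8]
    blocks = sub.reshape(sub.shape[0] // 8, 8,
                         sub.shape[1] // 8, 8).swapaxes(1, 2)
    coeffs = np.einsum('ki,mnij,lj->mnkl', D, blocks - 128.0, D)
    return np.count_nonzero(np.round(coeffs / q)) / coeffs.size

def estimate_grid_shift(img):
    # The aligned grid minimizes the nonzero fraction: quantization
    # zeroes high-frequency coefficients only when the analysis blocks
    # coincide with the compression blocks; misaligned blocks straddle
    # block boundaries, whose discontinuities spread energy across
    # many coefficients.
    scores = {(dx, dy): nonzero_fraction(img, dx, dy)
              for dx in range(8) for dy in range(8)}
    return min(scores, key=scores.get)
```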
A photocopied document exposes characteristics of the photocopier that produced it, so extracting the optimal intrinsic features is critical for photocopier forensics. In this paper, we propose a photocopier forensics method based on texture feature analysis of arbitrary characters, with these texture features serving as the intrinsic features. First, an image preprocessing step extracts individual character images. Second, three sets of features are extracted from each character image: gray level features, gradient differential matrix (GDM) features, and gray level gradient co-occurrence matrix (GLGCM) features. Finally, each character in a document is classified with a Fisher classifier, and a majority vote over the character-level results identifies the source photocopier. Experiments on seven photocopiers demonstrate the effectiveness of the proposed method, achieving an average character classification accuracy of 88.47%.
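Two of the steps above can be sketched briefly: a GLGCM is the joint histogram of quantized intensity and gradient magnitude (the bin counts below are illustrative choices, not the paper's settings), and the final decision is a plain majority vote over per-character predictions.

```python
import numpy as np
from collections import Counter

def glgcm(img, gray_levels=16, grad_levels=16):
    # Gray level-gradient co-occurrence matrix: joint histogram of
    # quantized gray level and gradient magnitude, normalized to sum
    # to 1. Bin counts here are illustrative.
    gy, gx = np.gradient(img.astype(float))
    grad = np.sqrt(gx ** 2 + gy ** 2)
    g = (img / 256.0 * gray_levels).astype(int).clip(0, gray_levels - 1)
    s = (grad / (grad.max() + 1e-9) * grad_levels).astype(int).clip(0, grad_levels - 1)
    h = np.zeros((gray_levels, grad_levels))
    np.add.at(h, (g.ravel(), s.ravel()), 1)
    return h / h.sum()

def vote_source(per_char_predictions):
    # Majority vote over the individual character classifications.
    return Counter(per_char_predictions).most_common(1)[0][0]
```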
KEYWORDS: Digital watermarking, Detection and tracking algorithms, Fourier transforms, Digital imaging, Image analysis, Information technology, Information security, Security technologies, Video surveillance, Video
The paper presents an improved watermarking algorithm robust to rotation, scaling, and translation (RST). The watermark is embedded in the DFT magnitudes and detected based on the principal axis; the detection algorithm does not require the original image. The results demonstrate that the method is robust to arbitrary rotation angles, a wide range of scales, JPEG compression, and some collusion attacks, and that it is less time-consuming and more practical in detection than other algorithms.
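A minimal sketch of magnitude-domain embedding (the ring radius, strength, and bit layout below are assumptions for illustration; the paper's principal-axis detector is not reproduced here): each watermark bit strengthens one mid-frequency DFT magnitude, with the conjugate-symmetric position mirrored so the inverse transform stays real.

```python
import numpy as np

def embed_dft(img, bits, strength=50.0, radius=20):
    # Strengthen DFT magnitudes at mid-frequency positions on a circle
    # of the given radius, one position per watermark bit. Adding
    # along the existing phase direction raises the magnitude by
    # exactly `strength`; mirroring keeps the spectrum Hermitian.
    f = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = np.array(f.shape) // 2
    for k, b in enumerate(bits):
        ang = np.pi * k / len(bits)
        y = int(cy + radius * np.sin(ang))
        x = int(cx + radius * np.cos(ang))
        if b:
            for yy, xx in ((y, x), (2 * cy - y, 2 * cx - x)):
                f[yy, xx] += strength * np.exp(1j * np.angle(f[yy, xx]))
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

Because only magnitudes are raised, the mark survives translations (which change only phase), which is the property RST-invariant schemes build on.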
In this paper, a robust wavelet-domain watermarking technique for still images is presented. The watermark message is encoded with iterative error-correction codes of reasonable decoder complexity, and the resulting codeword is spread over the whole image. Unlike traditional techniques, the proposed method exploits the statistical properties of local areas of the image in the DWT domain for watermark embedding and extraction. To minimize perceptual degradation of the watermarked image, we propose an image compensation strategy (ICS) that keeps the watermark perceptually invisible. Experimental results demonstrate the robustness of the algorithm to many attacks, such as A/D and D/A processing, rescaling, and lossy compression.
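The codeword-spreading idea is classic spread spectrum; a minimal sketch, shown on a generic coefficient array standing in for the DWT subband (the strength `alpha` and key-seeded PN sequence are illustrative assumptions):

```python
import numpy as np

def spread_embed(coeffs, bit, key, alpha=2.0):
    # Spread one codeword bit over all coefficients with a key-seeded
    # pseudo-random +/-1 sequence; alpha is an illustrative strength.
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.shape)
    return coeffs + alpha * (1 if bit else -1) * pn

def spread_detect(coeffs, key):
    # Correlate with the same PN sequence; the correlation sign
    # recovers the bit because the host coefficients are roughly
    # uncorrelated with the PN sequence.
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.shape)
    return int((coeffs * pn).sum() > 0)
```

Spreading each bit over many coefficients is what gives robustness to attacks that perturb individual coefficients, such as lossy compression or D/A-A/D conversion.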
This paper presents a new image segmentation algorithm for infrared images based on the pulse coupled neural network (PCNN) and histogram analysis. The proposed algorithm entirely abandons the time-exponentially-decaying threshold mechanism and instead uses the results of gray-level histogram analysis as the internal thresholds of the PCNN, while retaining the PCNN's ability to bridge small spatial gaps and minor intensity variations. Experimental results demonstrate that the proposed algorithm recovers more complete region and edge information in infrared images, with much lower complexity and higher speed than the original algorithm.
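The histogram-analysis step can be sketched as follows, assuming (for illustration only) that the candidate thresholds are taken at valleys of a smoothed gray-level histogram; the smoothing width is an arbitrary choice here:

```python
import numpy as np

def histogram_thresholds(img, bins=256, smooth=9):
    # Smooth the gray-level histogram with a moving average and take
    # its local minima (valleys) as candidate segmentation thresholds,
    # replacing PCNN's exponentially decaying dynamic threshold.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    h = np.convolve(hist, np.ones(smooth) / smooth, mode='same')
    return [i for i in range(1, bins - 1)
            if h[i] < h[i - 1] and h[i] <= h[i + 1]]
```

For a bimodal infrared histogram (target vs. background), the valley between the two modes is the natural threshold separating them.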