In this paper we present methods for scene understanding, localization, and classification of complex, visually heterogeneous objects in overhead imagery. Key features of this work include: determining the boundaries of objects within large field-of-view images, classification of increasingly complex object classes through hierarchical descriptions, and exploiting automatically extracted hypotheses about the surrounding region to improve classification of a more localized region. Our system uses a principled probabilistic approach to classify progressively larger and more complex regions, and then iteratively feeds this automatically determined contextual information back to reduce false alarms and misclassifications.
We present further extensions of yet another steganographic scheme (YASS), a method that embeds data in randomized locations so as to resist blind steganalysis. YASS is a JPEG steganographic technique that hides data in the discrete cosine transform (DCT) coefficients of randomly chosen image blocks. In this paper we study YASS further with the goal of improving the embedding rate. The two main improvements presented are: (i) a method that randomizes the quantization matrix used on the transform-domain coefficients, and (ii) an iterative hiding method that exploits the fact that the JPEG "attack" that causes errors in the hidden bits is known to the encoder. We show that with both these approaches the embedding rate can be increased while maintaining the same level of undetectability as the original YASS scheme. Moreover, for the same embedding rate, the proposed steganographic schemes are less detectable than the popular matrix-embedding-based F5 scheme under the blind steganalysis features proposed by Pevny and Fridrich.
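The iterative hiding idea in (ii) can be illustrated with a toy sketch: because the encoder can simulate the JPEG "attack" itself, it can decode its own stego signal after the attack and re-embed at exactly the positions that flipped. The sketch below uses plain quantization index modulation (QIM) on scalar coefficients and a simple coarse re-quantization as a stand-in for JPEG; the function names, step sizes, and the upward re-embedding nudge are all illustrative assumptions, not details from the paper.

```python
def qim_embed(x, bit, delta=8.0):
    """Snap coefficient x to the QIM sub-lattice whose parity carries `bit`."""
    return 2 * delta * round((x - bit * delta) / (2 * delta)) + bit * delta

def qim_decode(y, delta=8.0):
    """Recover the bit from the parity of the nearest lattice point."""
    return int(round(y / delta)) % 2

def jpeg_attack(y, q=10.0):
    """Stand-in for JPEG re-quantization: round to a coarser grid."""
    return q * round(y / q)

def iterative_embed(coeffs, bits, delta=8.0, q=10.0, max_iter=10):
    """Re-embed at positions where the (known) attack flips the hidden bit."""
    y = [qim_embed(c, b, delta) for c, b in zip(coeffs, bits)]
    for _ in range(max_iter):
        attacked = [jpeg_attack(v, q) for v in y]
        errs = [i for i, (v, b) in enumerate(zip(attacked, bits))
                if qim_decode(v, delta) != b]
        if not errs:
            break
        for i in errs:
            # move to the next lattice point with the same parity
            y[i] += 2 * delta
    return y
```

In this toy setting the loop trades a little extra embedding distortion for error-free decoding after the simulated attack, which is the mechanism that lets the real scheme raise its embedding rate.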
Print-scan resilient data hiding finds important applications in document security and image copyright protection. In this paper, we build upon our previous work on print-scan resilient data hiding with the goal of providing a mathematical foundation for computing information-theoretic limits and guiding the design of more sophisticated hiding schemes that allow a higher volume of embedded data. A model for the print-scan process is proposed, with three main components: (a) effects due to mild cropping, (b) colored high-frequency noise, and (c) non-linear effects. We show that cropping introduces an unknown but smoothly varying phase shift in the image spectrum. A new hiding method called Differential Quantization Index Modulation (DQIM) is proposed, in which information is hidden in the phase spectrum of images by quantizing the difference in phase between adjacent frequency locations; the unknown phase shift cancels when the difference is taken. Using the proposed DQIM hiding in phase, we are able to survive the print-scan process with several hundred information bits hidden in the images.
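The cancellation property behind DQIM can be sketched in a few lines: quantize the phase difference of two adjacent frequency bins with QIM, so that any constant phase shift added to both bins (as mild cropping approximately induces) drops out of the difference. The names and the step size `delta` below are illustrative assumptions, not the paper's parameters.

```python
def dqim_embed(phase_a, phase_b, bit, delta=0.5):
    """Quantize the phase *difference* (phase_b - phase_a) to the QIM
    sub-lattice carrying `bit`; return the adjusted second phase."""
    d = phase_b - phase_a
    d_q = 2 * delta * round((d - bit * delta) / (2 * delta)) + bit * delta
    return phase_a + d_q

def dqim_decode(phase_a, phase_b, delta=0.5):
    """Recover the bit from the parity of the quantized phase difference."""
    return int(round((phase_b - phase_a) / delta)) % 2
```

Because decoding looks only at the difference, adding the same unknown shift to both phases leaves the recovered bit unchanged, which is exactly why the scheme survives the cropping-induced phase shift.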