Grammars have been used for the formal specification of programming languages, and a number of commercial products now use grammars. However, these have tended to focus mainly on flow-control-type applications. In this paper, we consider the potential use of picture grammars and inductive logic programming in generic image understanding applications, such as object recognition. A number of issues are considered: what type of grammar to use, how to construct the grammar with its associated attributes, the difficulties encountered in parsing such grammars, and the problems of automatically learning grammars using a genetic algorithm. The concept of inductive logic programming is then introduced as a method that can overcome some of the earlier difficulties.
Telepathology is a means of practising pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques is concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique which exploits knowledge of the slide's diagnostic content. This 'content-based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but applying them in a context-sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression, the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression, it can provide more information for a given amount of compression. The precise gain in compression performance depends on the application (e.g. database archive or second-opinion consultation) and the diagnostic content of the images.
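The context-sensitive idea above can be sketched in a few lines. This is a minimal illustration only, assuming a precomputed binary mask of diagnostically important regions; the `content_based_quantise` helper and the quantisation step sizes are hypothetical stand-ins for the actual visually lossless and lossy coders described in the paper:

```python
import numpy as np

def content_based_quantise(image, roi_mask, fine_step=1, coarse_step=32):
    """Quantise a greyscale image with a fine step inside the diagnostic
    ROI and a coarse step elsewhere -- an illustrative stand-in for
    mixing visually lossless and lossy coders region by region."""
    image = image.astype(np.int32)
    # per-pixel quantisation step chosen from the diagnostic mask
    step = np.where(roi_mask, fine_step, coarse_step)
    return (image // step) * step

# toy 4x4 image: left half is "diagnostic", right half is background
img = np.arange(16, dtype=np.int32).reshape(4, 4) * 10
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
out = content_based_quantise(img, mask)
```

With `fine_step=1` the diagnostic region is returned bit-exact, while the background collapses onto multiples of the coarse step and therefore compresses much better under any entropy coder.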
The conventional approach to performance evaluation for image compression in telemedicine is simply to measure compression ratio, signal-to-noise ratio and computational load. Evaluation of performance is, however, a much more complex and many-sided issue, and it is necessary to consider more deeply the requirements of the application. In telemedicine, the preservation of clinical information must be taken into account when assessing the suitability of any particular compression algorithm, and the measurement of this characteristic is subjective because human judgement must be brought in to identify what is of clinical importance. The assessment must therefore take into account subjective user evaluation criteria as well as objective criteria. This paper develops the concept of user-based assessment techniques for image compression used in telepathology. A novel visualization approach has been developed to show and explore the highly complex performance space, taking into account both types of measure. The application considered is within a general histopathology image management system; the particular component is a store-and-forward facility for second-opinion elicitation. Images of histopathology slides are transmitted to the workstations of consultants working remotely to enable them to provide second opinions.
In previous work a novel approach was described which used automatic target detection together with compression techniques to achieve intelligent compression by exploiting knowledge of the image content. In this paper an extension to this work is presented in which a set of standard feature detectors, such as HV-quadtrees, approximate entropy and phase congruency, are used as target discriminators. These detectors all attempt to find potential areas of interest within an image but will inevitably differ slightly in their estimates. A probabilistic (Bayesian belief) network is then used to fuse this information into a single hypothesis of interesting areas within an image. A wavelet-based decomposition can then be applied to the image in which selective destruction of wavelet coefficients is performed outside the cued areas of interest (in effect concentrating the wavelet information into the required areas) prior to encoding with a version of the progressive SPIHT encoder. One difficulty with this approach is that when large quantities of wavelet coefficients are discarded, abrupt changes can occur at a mask boundary, resulting in (visually) undesirable effects in the reconstructed image. An improvement is to modify the fused feature image using morphology in order to arrive at a multi-level fuzzy mask, which can then be used to gradually reduce the significance of coefficients as the distance from the mask increases. Results illustrate how this approach can be used for the detection and compression of airborne reconnaissance imagery.
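The multi-level fuzzy mask can be sketched as follows. This is a minimal sketch, assuming 4-neighbour binary dilation as the morphological operation and illustrative weight levels; the morphology and attenuation schedule actually used in the paper may differ:

```python
import numpy as np

def dilate(mask):
    """One step of 4-neighbour binary dilation (no SciPy needed)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def fuzzy_mask(binary_mask, levels=(1.0, 0.5, 0.25)):
    """Grow the fused feature mask morphologically into a multi-level
    mask whose weight decays with distance from the original region,
    so coefficient significance falls off gradually at the boundary."""
    weights = np.zeros(binary_mask.shape)
    current = binary_mask.astype(bool)
    covered = np.zeros_like(current)
    for w in levels:
        weights[current & ~covered] = w   # newly reached ring gets weight w
        covered |= current
        current = dilate(current)
    return weights

# attenuate wavelet coefficients by the fuzzy mask before encoding
coeffs = np.random.randn(8, 8)
roi = np.zeros((8, 8), dtype=bool)
roi[3:5, 3:5] = True
attenuated = coeffs * fuzzy_mask(roi)
```

Because the weights step down (1.0, 0.5, 0.25, then 0) rather than cutting off at the mask edge, the reconstructed image avoids the abrupt boundary artefacts that hard masking produces.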
This novel approach uses automatic target detection together with compression techniques to achieve intelligent compression by exploiting knowledge of the image content. Two techniques have been experimented with: one uses horizontal-vertical (HV) partitioned quadtrees, the other a variant of entropy called approximate entropy. The object masks generated using either technique (or indeed other feature detectors) effectively cue potential areas of interest for subsequent encoding using two 'intelligent' image compression techniques. In the first approach, lossless compression algorithms can be applied to regions of interest within the images so that their statistical properties are preserved to allow detailed analysis or further processing, while the remainder of the image can be compressed with lossy algorithms. The degree of lossy compression depends on both the information content and the bandwidth requirement. In the second approach a wavelet-based decomposition is applied in which selective destruction of wavelet coefficients is performed outside the cued areas of interest (in effect concentrating the wavelet information into the required areas) prior to encoding with a version of the progressive SPIHT encoder. Results illustrate how both these approaches can be used for the detection and compression of airborne reconnaissance imagery.
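Approximate entropy is normally defined for 1-D sequences (Pincus's statistic); how the paper adapts it to images is not specified here, so the sketch below shows only the standard 1-D definition, which could plausibly be applied to pixel rows or scan windows:

```python
import numpy as np

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy of a 1-D sequence: low values indicate
    regularity, high values indicate irregularity (useful as a
    busyness/texture cue)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(m):
        # all overlapping length-m templates
        templ = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        C = (d <= r).mean(axis=1)        # fraction of templates within r
        return np.log(C).mean()

    return phi(m) - phi(m + 1)

regular = approx_entropy(np.tile([0.0, 1.0], 50))   # alternating: near 0
rng = np.random.default_rng(0)
irregular = approx_entropy(rng.random(100))         # noise: much larger
```

A perfectly periodic signal scores near zero, while noise scores high; thresholding such a score over local windows is one plausible way to cue 'busy' regions as potential targets.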
One of the difficulties that has been apparent in applying image processing algorithms, not just for automatic target recognition but also for associated tasks in image processing and understanding, is the optimal choice of parameters and algorithms. Firstly we must select an algorithm to use, and secondly the actual parameters required by that algorithm. It is also the case that using a chosen algorithm on a different image class yields results of a totally different quality; here we consider three image classes, namely infra-red linescan, dd5-Russian satellite and SPOT imagery. We are now exploring the use of genetic algorithms for the purpose of parameter and algorithm selection and will show how the approach can successfully obtain results which in the past have tended to be obtained somewhat heuristically.
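The parameter-selection idea can be sketched with a tiny real-coded genetic algorithm. The fitness function below is a hypothetical stand-in (a smooth score peaking at a threshold of 0.4 and a window size of 7); in practice it would be a detection-quality score computed on the chosen image class, and the operators shown (tournament-style selection, uniform crossover, Gaussian mutation) are one common configuration, not necessarily the paper's:

```python
import random

def ga_select(fitness, bounds, pop_size=30, generations=60,
              mutation=0.1, seed=1):
    """Tiny real-coded GA: keep the two elites, breed children from the
    top half by uniform crossover, and apply Gaussian mutation clipped
    to the parameter bounds."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                                  # elitism
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)  # select from top half
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# hypothetical detector-quality score: peak at threshold=0.4, window=7
best = ga_select(lambda p: -((p[0] - 0.4) ** 2 + (p[1] - 7) ** 2),
                 bounds=[(0.0, 1.0), (1.0, 15.0)])
```

The same loop extends to algorithm selection by encoding an algorithm index as an extra (discretised) gene alongside its parameters.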
Progress is reviewed on the development of an all-source image interpretation system which exploits complementary evidence from a range of experts. This co-operation may occur between feature detectors in different bands, between detectors searching for different types of feature, or between different types of detector of the same feature. Algorithms for detecting vehicles in infrared linescan imagery give a low missed-detection rate but have been found to respond falsely to roads fragmented by trees, to structures such as cylindrical storage tanks, and to corners of man-made objects such as buildings. False alarms are reduced by applying algorithms which detect subclasses of false alarms reliably, i.e. buildings and storage tanks. In addition, both are features of interest in themselves, and are useful primitives in the identification of sites. The integration of depth (in the form of disparity maps) is examined as a means of reducing false building detections. Outputs from the feature detectors are combined using a simple rule-based approach. A surface-based model matching technique is examined as a means of classifying the remaining vehicle candidates.
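The rule-based combination of detector outputs can be sketched with one such rule. The box representation and the specific overlap rule below are assumptions for illustration; the paper's actual rule set is not specified here:

```python
def overlaps(a, b):
    """Do two axis-aligned boxes (x0, y0, x1, y1) intersect?"""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def suppress_false_alarms(vehicle_boxes, building_boxes, tank_boxes):
    """Rule-based fusion: a vehicle candidate that overlaps a reliably
    detected building or storage tank is rejected as a false alarm."""
    distractors = building_boxes + tank_boxes
    return [v for v in vehicle_boxes
            if not any(overlaps(v, d) for d in distractors)]

vehicles = [(0, 0, 2, 2), (10, 10, 12, 12)]
buildings = [(1, 1, 5, 5)]
kept = suppress_false_alarms(vehicles, buildings, [])
```

Here the first candidate is discarded because it falls on a detected building, while the isolated candidate survives for the subsequent model-matching stage.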