This paper presents a framework for mammogram enhancement based on a selective enhancement technique. Several enhancement algorithms are developed under this framework, including weighted mean gray value- and fuzzy cross-over point-based thresholding methods, algorithm fusion, an iterative enhancement method, and statistical decision theory-based techniques. Evaluated on a variety of abnormal mammograms, the presented algorithms prove more robust and yield superior performance compared with six representative enhancement approaches from the literature.
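As a rough illustration of the general idea behind selective enhancement via mean-based thresholding (not the paper's actual algorithm), the sketch below enhances contrast only in regions above a weighted-mean gray-value threshold. The `gain` and `weight` parameters are hypothetical, introduced here for illustration:

```python
import numpy as np

def selective_enhance(img, gain=2.0, weight=1.0):
    """Selectively stretch contrast above a weighted-mean threshold.

    Illustrative sketch only; `gain` and `weight` are assumed parameters,
    not values from the paper.
    """
    img = img.astype(np.float64)
    thresh = weight * img.mean()        # weighted mean gray value as threshold
    mask = img > thresh                 # candidate (bright) regions to enhance
    out = img.copy()
    # Stretch intensities above the threshold; leave the background untouched.
    out[mask] = thresh + gain * (img[mask] - thresh)
    return np.clip(out, 0, 255).astype(np.uint8)
```

The key property of any selective scheme is that enhancement is applied only where the decision rule (here, a simple global threshold) fires, so background tissue is not amplified along with suspected lesions.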
The effect of assuming and exploiting non-Gaussian attributes of underlying source signals for separating and encoding patterns is investigated, for application to terrain categorization (TERCAT) problems. Our analysis provides transformed data, denoted "independent components," which can be used and interpreted in different ways. The basis vectors of the resulting transformed data are statistically independent and tend to align themselves with the source signals. In this effort, we investigate the basic formulation, designed to transform signals for subsequent processing or analysis, as well as a more sophisticated model designed specifically for unsupervised classification. Mixes of single-band images are used, as well as simulated color-infrared and Landsat imagery. A number of experiments are performed. We first validate the basic formulation through a straightforward application of the method to unmix signal data in image space. We next show the advantage of using the transformed data, compared with the original data, for visually detecting TERCAT targets of interest. Subsequently, we test two methods of performing unsupervised classification on a scene that contains a diverse range of terrain features, showing the benefit of these methods against a control method for TERCAT applications.
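The basic unmixing formulation described above can be sketched with a minimal symmetric FastICA (one standard ICA estimator; the paper's exact algorithm and nonlinearity are not specified, so this is an assumed stand-in). Two known source signals are linearly mixed and then recovered, up to the usual sign and permutation ambiguity:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh (logcosh) nonlinearity.

    X has shape (n_components, n_samples). Illustrative sketch only.
    """
    # Center and whiten the mixed observations.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d ** -0.5) @ E.T        # whitening matrix
    Z = K @ X
    n, m = Z.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # FastICA fixed-point update: E[Z g(w'Z)] - E[g'(w'Z)] w
        W = G @ Z.T / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Z                            # recovered independent components

# Demo: unmix a linear mixture of a sine and a square wave.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # assumed mixing matrix
S_hat = fastica(A @ S)
```

In the imagery setting, each pixel's band values play the role of the mixed observations, and the recovered components tend to align with the underlying terrain source signals.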
KEYWORDS: Image fusion, Image quality, Signal to noise ratio, Wavelets, Quality measurement, Image filtering, Optical inspection, Inspection, Image analysis, Human vision and color perception
Comparative evaluation of fused images is a critical step in assessing the relative performance of different image fusion algorithms. Human visual inspection is often used to judge the quality of fused images. In this paper, we propose several variants of a new image quality metric based on the human visual system (HVS). The proposed measures evaluate the quality of a fused image by comparing its visual differences with the source images, and they require no knowledge of the ground truth. First, the images are transformed to the frequency domain. Second, the difference between the images in the frequency domain is weighted with a human contrast sensitivity function (CSF). Finally, the quality of a fused image is obtained by computing the mean squared error (MSE) of the weighted difference images obtained from the fused image and the source images. Our experimental results show that these metrics are consistent with perceptually obtained results.
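The three steps above (frequency transform, CSF weighting, MSE of the weighted differences) can be sketched as follows. The Mannos–Sakrison model is used here as a common CSF choice, and the radial-frequency scaling is an assumption; the paper's exact CSF and normalization may differ:

```python
import numpy as np

def csf_mannos(f):
    """Mannos-Sakrison contrast sensitivity function (one common CSF model)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_weighted_mse(fused, src):
    """MSE between CSF-weighted spectra of a fused image and one source image."""
    F = np.fft.fftshift(np.fft.fft2(fused.astype(float)))
    S = np.fft.fftshift(np.fft.fft2(src.astype(float)))
    h, w = fused.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radial = np.hypot(fx, fy) * max(h, w)   # radial frequency (illustrative scaling)
    W = csf_mannos(radial)                  # weight perceptually important bands more
    return np.mean(np.abs(W * (F - S)) ** 2)

def fusion_quality(fused, src_a, src_b):
    """Overall score vs. both source images; lower means fewer visible differences."""
    return 0.5 * (csf_weighted_mse(fused, src_a) + csf_weighted_mse(fused, src_b))
```

Because the metric compares the fused image against the source images themselves, it needs no ideal reference image, matching the no-ground-truth property claimed above.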