Medical images are characterized by complex overlap of organs and tissues and are affected by noise, partial volume effects, and artifacts, so traditional segmentation methods give unsatisfactory results. To address this problem, a medical image segmentation algorithm based on a tree-structured MRF in the wavelet domain (WTS-MRF) is proposed. To represent medical image information, the WTS-MRF model defines the same tree structure at every scale of the wavelet decomposition. The wavelet transform offers good directional selectivity, non-redundancy, and multi-scale properties; the multi-scale, multi-direction representation obtained from the wavelet decomposition improves the ability of the TS-MRF to describe the non-stationary characteristics of images, so the model can describe image statistics more accurately and extract the feature information of medical images more effectively. The WTS-MRF model contains two structures: the intra-scale TS-MRF structure and the inter-scale quadtree structure of the wavelet coefficients. The TS-MRF model is built within each scale, the node potential function is modeled with the Potts model, and a Gaussian model is used for the observed features sharing the same label. The inter-scale wavelet coefficients satisfy a first-order Markov property. The maximum a posteriori estimate is obtained by recursion, and the hierarchical tree labels are assigned to realize medical image segmentation. Experimental results indicate that the algorithm can effectively extract details as well as relatively completely extract the target area of a medical image, with higher segmentation accuracy and robustness.
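A minimal sketch of the two ingredients the abstract above combines, wavelet-domain features and a Potts-prior MRF: it is not the authors' full WTS-MRF tree model, only a single-scale, flat label field with Gaussian class likelihoods refined by ICM. Parameter values (wavelet, number of classes, beta) are illustrative assumptions.

```python
# Hedged sketch, NOT the paper's WTS-MRF: wavelet features + Potts-MRF via ICM.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_features(img, wavelet="db2"):
    """Per-pixel feature vector: approximation + upsampled detail magnitudes."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    h, w = img.shape
    feats = []
    for band in (cA, np.abs(cH), np.abs(cV), np.abs(cD)):
        # nearest-neighbour upsample back to the image size
        ys = (np.arange(h) * band.shape[0] / h).astype(int)
        xs = (np.arange(w) * band.shape[1] / w).astype(int)
        feats.append(band[np.ix_(ys, xs)])
    return np.stack(feats, axis=-1)                      # (H, W, 4)

def potts_icm(feats, k=3, beta=1.5, iters=10):
    """K-means init, then ICM with Gaussian likelihood and Potts smoothness prior."""
    h, w, d = feats.shape
    X = feats.reshape(-1, d)
    labels = KMeans(n_clusters=k, n_init=5).fit_predict(X).reshape(h, w)
    for _ in range(iters):
        mus, sig = [], []
        for c in range(k):
            Xc = X[labels.ravel() == c]
            Xc = Xc if len(Xc) else X                    # guard against an emptied class
            mus.append(Xc.mean(0)); sig.append(Xc.var(0) + 1e-6)
        mus, sig = np.array(mus), np.array(sig)
        # data term: negative log-likelihood of every class at every pixel
        data = ((X[:, None, :] - mus) ** 2 / sig + np.log(sig)).sum(-1).reshape(h, w, k)
        # smoothness term: number of disagreeing 4-neighbours for each class
        pad = np.pad(labels, 1, mode="edge")
        nbrs = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]], -1)
        smooth = (nbrs[..., None, :] != np.arange(k)[None, None, :, None]).sum(-1)
        labels = (data + beta * smooth).argmin(-1)
    return labels
```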
Image segmentation is a very important step in low-level visual computing. Although image segmentation has been studied for many years, many problems remain. The pulse coupled neural network (PCNN) has a biological background; when applied to image segmentation it can be viewed as a region-based method, but because of the dynamics of the PCNN, many unconnected neurons fire at the same time, so different regions must be identified for further processing. The existing region-growing PCNN segmentation algorithm was designed for grayscale images and cannot be used directly for color images. In addition, super-pixels better preserve image edges while reducing the influence of differences between individual pixels on segmentation. Therefore, this paper improves the original region-growing PCNN algorithm on the basis of super-pixels. First, the color super-pixel image is converted to a grayscale super-pixel image, which is used to select seeds among the neurons that have not yet fired. Then, whether to stop growing is decided by comparing the mean of each color channel over all pixels in the corresponding regions of the color super-pixel image. Experimental results show that the proposed algorithm is fast and effective for color image segmentation and achieves reasonable accuracy.
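A hedged sketch of the superpixel-level region growing described above. The PCNN dynamics are not reproduced; firing order is approximated by a "brightest unlabeled superpixel fires first" rule, while the stopping test on per-channel colour means follows the abstract's idea. The tolerance value is an assumption.

```python
# Hedged sketch, not the paper's PCNN: superpixel region growing with a
# grayscale seed selection and a per-colour-channel mean test.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2gray
from skimage.util import img_as_float

def superpixel_region_grow(rgb, n_segments=300, color_tol=0.08):
    rgb = img_as_float(rgb)
    sp = slic(rgb, n_segments=n_segments, start_label=0)
    n = sp.max() + 1
    gray = rgb2gray(rgb)
    # per-superpixel mean gray value and mean colour per channel
    mean_gray = np.array([gray[sp == i].mean() for i in range(n)])
    mean_rgb = np.array([rgb[sp == i].mean(0) for i in range(n)])
    # superpixel adjacency from horizontally/vertically touching labels
    adj = [set() for _ in range(n)]
    for a, b in zip(sp[:, :-1].ravel(), sp[:, 1:].ravel()):
        if a != b: adj[a].add(b); adj[b].add(a)
    for a, b in zip(sp[:-1, :].ravel(), sp[1:, :].ravel()):
        if a != b: adj[a].add(b); adj[b].add(a)
    region = -np.ones(n, dtype=int)
    cur = 0
    while (region < 0).any():
        # "firing" proxy: the brightest not-yet-fired superpixel becomes the seed
        seed = np.where(region < 0, mean_gray, -np.inf).argmax()
        region[seed] = cur
        frontier = [seed]
        while frontier:
            s = frontier.pop()
            for t in adj[s]:
                # stop growing when any colour channel deviates too much
                if region[t] < 0 and np.all(np.abs(mean_rgb[t] - mean_rgb[seed]) < color_tol):
                    region[t] = cur
                    frontier.append(t)
        cur += 1
    return region[sp]                                   # per-pixel region labels
```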
Super-pixel extraction techniques group pixels into over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation and is easier to perceive, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model that stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel's feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of the same size; then, for each image block, the pixels adjacent to each seed with similar color are grouped together as a super-pixel; finally, post-processing is applied to the pixels or pixel blocks that have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision through its parameters, and it shows good potential for super-pixel extraction.
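A minimal sketch of the SLIC-style block seeding that the abstract says MPCNN adopts; the multi-channel PCNN firing itself is not reproduced. Seeds are placed on a regular grid and pixels are grouped by a joint colour and spatial distance; step size, weighting, and iteration count are assumed values, and the post-processing of stray pixels is omitted.

```python
# Hedged sketch of grid seeding + colour/spatial grouping, not the MPCNN itself.
import numpy as np
from skimage.color import rgb2lab
from skimage.util import img_as_float

def grid_superpixels(rgb, step=20, m=10.0, iters=5):
    lab = rgb2lab(img_as_float(rgb))
    h, w, _ = lab.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # seed centres on a regular grid: (L, a, b, y, x) per seed
    cy, cx = np.mgrid[step // 2:h:step, step // 2:w:step]
    centers = np.column_stack([lab[cy.ravel(), cx.ravel()], cy.ravel(), cx.ravel()]).astype(float)
    labels = np.zeros((h, w), dtype=int)
    for _ in range(iters):
        dist = np.full((h, w), np.inf)
        for k, (L, a, b, sy, sx) in enumerate(centers):
            y0, y1 = int(max(sy - step, 0)), int(min(sy + step, h))
            x0, x1 = int(max(sx - step, 0)), int(min(sx + step, w))
            patch = lab[y0:y1, x0:x1]
            dc = np.sqrt(((patch - (L, a, b)) ** 2).sum(-1))                 # colour distance
            ds = np.sqrt((yy[y0:y1, x0:x1] - sy) ** 2 + (xx[y0:y1, x0:x1] - sx) ** 2)
            d = dc + (m / step) * ds                                         # joint distance
            win = dist[y0:y1, x0:x1]
            better = d < win
            win[better] = d[better]
            labels[y0:y1, x0:x1][better] = k
        # recompute each centre from its assigned pixels
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                centers[k, :3] = lab[mask].mean(0)
                centers[k, 3], centers[k, 4] = yy[mask].mean(), xx[mask].mean()
    return labels
```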
This letter presents a novel enhancement method for fog-degraded images based on dominant brightness level analysis, motivated by the characteristics of images captured by the daylight sensor of a photoelectric radar surveillance system. We first apply the discrete wavelet transform (DWT) to the input image and perform contrast limited adaptive histogram equalization (CLAHE) on the LL sub-band, and then decompose the LL sub-band into low-, middle-, and high-intensity layers using a Gaussian filter. After the intensity transformation and the inverse DWT, the final enhanced image is obtained with a guided filter.
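A hedged sketch of the pipeline sketched above, with simplified layer handling: CLAHE on the LL sub-band, a soft split of LL into low-/middle-/high-intensity layers via Gaussian weights, per-layer gamma as the intensity transformation, inverse DWT, then guided filtering. The paper's dominant-brightness analysis is not reproduced, and the layer centres and gammas are assumed values. Requires opencv-contrib-python for cv2.ximgproc.guidedFilter.

```python
# Hedged sketch of the DWT + CLAHE + layered intensity transform + guided-filter pipeline.
import cv2
import numpy as np
import pywt

def enhance_foggy(gray):
    g = gray.astype(np.float32) / 255.0
    LL, (LH, HL, HH) = pywt.dwt2(g, "haar")
    # CLAHE expects 8-bit input, so normalise the LL sub-band first
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    LLn = LL / LL.max()
    LLc = clahe.apply((LLn * 255).astype(np.uint8)).astype(np.float32) / 255.0
    # soft decomposition into low-/middle-/high-intensity layers (Gaussian weights)
    centers, gammas, sigma = (0.25, 0.5, 0.75), (0.8, 1.0, 1.3), 0.15
    weights = [np.exp(-((LLc - c) ** 2) / (2 * sigma ** 2)) for c in centers]
    wsum = np.sum(weights, axis=0) + 1e-6
    LLt = sum(w * (LLc ** gma) for w, gma in zip(weights, gammas)) / wsum
    # inverse DWT with the transformed LL and the original detail sub-bands
    out = pywt.idwt2((LLt * LL.max(), (LH, HL, HH)), "haar")
    out = np.clip(out[:g.shape[0], :g.shape[1]], 0, 1).astype(np.float32)
    # edge-preserving refinement of the result, guided by the input image
    out = cv2.ximgproc.guidedFilter(g, out, 8, 1e-3)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```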
Traditional sonar image segmentation methods suffer from poor robustness and low accuracy. We therefore propose a sonar image segmentation method based on the tree-structured Markov random field (TS-MRF), which makes better use of spatial information. First, a sequence of binary MRFs constrained by a tree structure is used to model the sonar image: nodes describe local image information and hierarchy information is connected through the relationships between nodes, so the hierarchical structure of the image is described while its local information is preserved effectively. Then, a split gain coefficient is defined to measure the ratio of the label posterior probabilities before and after a split, given the observed image features; this gain coefficient serves as the criterion for deciding whether a binary-tree node is split, which reduces the complexity of computing the posterior probability. Finally, during segmentation the leaf node with the maximum split gain is split repeatedly to obtain the splitting result, and a merging step is added to the segmentation process; region splitting and merging together reduce mis-segmentation and yield the final result. Experimental results show that this approach has high segmentation accuracy and robustness.
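A hedged sketch of the split-gain idea only: each leaf region is tentatively split into two classes with a 2-component k-means, the gain is taken as the improvement in Gaussian log-likelihood, and the leaf with the largest gain is split. The MRF spatial prior and the merging step of the method above are not modelled here, and the gain definition is an assumed stand-in for the paper's posterior ratio.

```python
# Hedged sketch of leaf splitting by maximum gain, not the full TS-MRF.
import numpy as np
from sklearn.cluster import KMeans

def gaussian_loglik(x):
    var = x.var() + 1e-6
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)

def split_gain(pixels):
    """Gain of splitting one leaf's pixel intensities into two classes."""
    labels = KMeans(n_clusters=2, n_init=5).fit_predict(pixels.reshape(-1, 1))
    before = gaussian_loglik(pixels)
    after = gaussian_loglik(pixels[labels == 0]) + gaussian_loglik(pixels[labels == 1])
    return after - before, labels

def tsmrf_like_segmentation(img, n_leaves=4):
    seg = np.zeros(img.shape, dtype=int)                 # every pixel starts in leaf 0
    while seg.max() + 1 < n_leaves:
        # evaluate the tentative split of every current leaf, keep the best one
        candidates = [(split_gain(img[seg == l]), l) for l in range(seg.max() + 1)]
        (_, sub), best = max(candidates, key=lambda c: c[0][0])
        if not (sub == 1).any():                         # degenerate split, stop
            break
        mask = seg == best
        seg[mask] = np.where(sub == 1, seg.max() + 1, best)
    return seg
```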
This paper proposes a new image thresholding method that integrates the multi-scale gradient multiplication (MGM) transformation and the adjusted Rand index (ARI). The proposed method finds the optimal threshold by computing the accumulated similarity between two image collections, from the perspective of the global spatial attributes of the image. One collection is obtained by binarizing the original gray-level image at each possible gray level. The other contains the reference images, produced by binarizing the MGM image, which is the result of applying the MGM transformation to the original image. ARI is a similarity measure from statistics, in particular from data clustering, that can be computed directly from two image matrices. More precisely, the optimal threshold is determined by maximizing the accumulated ARI similarity. Comparisons with three well-established thresholding methods are reported for a number of real-world images. Experimental results demonstrate the effectiveness and robustness of the proposed method.
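A hedged sketch of the MGM + ARI idea described above: the multi-scale gradient multiplication is approximated by multiplying Sobel gradient magnitudes computed after Gaussian smoothing at several scales, and the reference set is built by binarizing the MGM image at a few percentile levels (the paper's exact reference construction may differ). The threshold maximising the accumulated adjusted Rand index against the references is returned.

```python
# Hedged sketch: MGM-like transformation + threshold search by accumulated ARI.
import numpy as np
from scipy import ndimage
from sklearn.metrics import adjusted_rand_score

def mgm_image(gray, scales=(1, 2, 4)):
    """Multiply gradient magnitudes computed at several Gaussian smoothing scales."""
    g = gray.astype(float)
    prod = np.ones_like(g)
    for s in scales:
        sm = ndimage.gaussian_filter(g, s)
        gx, gy = ndimage.sobel(sm, axis=1), ndimage.sobel(sm, axis=0)
        prod *= np.hypot(gx, gy)
    return prod

def ari_threshold(gray, ref_percentiles=(70, 80, 90), step=4):
    mgm = mgm_image(gray)
    # reference binarizations of the MGM image (assumed percentile levels)
    refs = [(mgm > np.percentile(mgm, p)).ravel() for p in ref_percentiles]
    best_t, best_score = 0, -np.inf
    for t in range(int(gray.min()) + 1, int(gray.max()), step):   # candidate gray levels
        b = (gray > t).ravel()
        score = sum(adjusted_rand_score(r, b) for r in refs)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```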
A popular histogram-based thresholding method is minimum error thresholding (MET), proposed by Kittler and Illingworth [Minimum error thresholding, Pattern Recognition 19 (1) (1986) 41-47], whereas Xue and Titterington recently proposed median-based thresholding (MBT) [Median-based image thresholding, Image and Vision Computing 29 (9) (2011) 631-637]. Both MET and MBT can be derived from the maximization of a log-likelihood. In this paper, we present a different theoretical interpretation of MBT and MET from the perspective of minimizing the Kullback-Leibler (KL) divergence. Since the KL divergence is a measure of the difference between two probability distributions, it is reasonable to regard MET and MBT as special applications of histogram-based image similarity (HBIS) to image thresholding. Further, it is natural to suggest a more general image thresholding framework based on the image similarity concept, since HBIS is just one of many image similarity methodologies. This framework directly transforms the threshold determination problem into an image comparison problem. Its significance is that it provides a concise and clear theoretical framework for developing new thresholding methods from the plentiful image similarity theories.
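A sketch of the classical Kittler-Illingworth MET criterion referenced above (not of the proposed HBIS framework itself): the threshold minimises J(T) = 1 + 2[P1 ln s1 + P2 ln s2] - 2[P1 ln P1 + P2 ln P2], the criterion the paper reinterprets as minimising a KL divergence between histogram models.

```python
# Classical minimum error thresholding (Kittler & Illingworth, 1986) on a gray histogram.
import numpy as np

def met_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_j = 0, np.inf
    for t in range(1, 255):
        P1, P2 = p[:t + 1].sum(), p[t + 1:].sum()
        if P1 < 1e-9 or P2 < 1e-9:
            continue
        m1 = (levels[:t + 1] * p[:t + 1]).sum() / P1
        m2 = (levels[t + 1:] * p[t + 1:]).sum() / P2
        s1 = np.sqrt(((levels[:t + 1] - m1) ** 2 * p[:t + 1]).sum() / P1) + 1e-9
        s2 = np.sqrt(((levels[t + 1:] - m2) ** 2 * p[t + 1:]).sum() / P2) + 1e-9
        J = 1 + 2 * (P1 * np.log(s1) + P2 * np.log(s2)) - 2 * (P1 * np.log(P1) + P2 * np.log(P2))
        if J < best_j:
            best_t, best_j = t, J
    return best_t
```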
Based on brightness-preserving bi-histogram equalization (BBHE), an adaptive image histogram equalization algorithm for contrast enhancement is proposed. The threshold is obtained through adaptive iterative steps and used to divide the original image into two sub-images. The proposed iterative brightness bi-histogram equalization overcomes the over-enhancement of conventional histogram equalization. Simulation results show that the algorithm not only preserves the mean brightness but also keeps the enhanced image visually informative and yields better edge detection results.
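A hedged sketch of the idea above: the adaptive threshold is found with the classic iterative mean-of-class-means rule (the paper's exact iteration may differ), and the two sub-images are then equalized independently within their own gray ranges, following the BBHE idea of keeping the split point, and hence the overall brightness, in place.

```python
# Hedged sketch: iteratively chosen threshold + bi-histogram equalization.
import numpy as np

def iterative_threshold(gray, eps=0.5):
    """Classic iterative threshold: midpoint of the two class means until convergence."""
    t = gray.mean()
    while True:
        lo, hi = gray[gray <= t], gray[gray > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def bi_histogram_equalize(gray):
    t = iterative_threshold(gray.astype(float))
    out = np.empty_like(gray, dtype=np.uint8)
    for mask, lo, hi in ((gray <= t, gray.min(), t), (gray > t, t, gray.max())):
        vals = gray[mask].astype(float)
        hist, edges = np.histogram(vals, bins=256, range=(lo, hi))
        cdf = hist.cumsum() / hist.sum()
        # map each sub-image onto its own gray range, keeping the split point fixed
        idx = np.clip(np.searchsorted(edges[1:], vals), 0, 255)
        out[mask] = (lo + cdf[idx] * (hi - lo)).astype(np.uint8)
    return out
```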
In this paper, we concentrate on passive geometric camera calibration, which calculates the geometric parameters of all cameras in a camera system from a predefined object. The parameters to be calculated include intrinsic parameters (e.g., focal length and aspect ratio) and extrinsic parameters (the camera's 3D position and orientation). Geometric camera calibration is the first and fundamental step in, for example, 3D measurement and 3D reconstruction systems, and its accuracy largely determines the quality of those systems. Based on a deliberately built calibration object, passive camera calibration can produce highly accurate estimates of the camera system's parameters. However, past research has mainly focused on single or stereo camera systems; few studies have addressed the calibration of multi-camera and special camera systems (e.g., mirror-based cameras). In this paper, we introduce a coherent, systematic calibration system that is fully automatic, works under difficult conditions, and is applicable to single, stereo, and multiple normal or special camera systems.
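A hedged sketch of the basic building block only: single-camera calibration from a predefined chessboard object with OpenCV, recovering intrinsics (focal length, principal point) and per-view extrinsics (rotation and translation). The automatic multi-camera and special-camera pipeline described above is not reproduced; the board size and square size are assumed values.

```python
# Single-camera calibration from chessboard views with OpenCV (illustrative sketch).
import cv2
import numpy as np

def calibrate_from_chessboards(image_paths, board_size=(9, 6), square_mm=25.0):
    # 3D coordinates of the board corners in the board's own frame
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm
    obj_points, img_points, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        ok, corners = cv2.findChessboardCorners(gray, board_size)
        if not ok:
            continue
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]
    # intrinsic matrix K, distortion coefficients, and one (R, t) pair per accepted view
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    return rms, K, dist, rvecs, tvecs
```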