Image matching has been a topic of intense research in recent years, and it largely determines the quality of image mosaicking. A stable, accurate, and fast image matching algorithm is therefore essential for the subsequent processing of image information. Most traditional image matching methods suffer from a high false-match rate and difficulty in eliminating mismatches. To address these shortcomings, an improved algorithm is proposed in which a combined measure of Hamming distance and Tanimoto similarity is adopted to handle binary feature vectors. In addition, as an important step in the mosaicking process, computing the optimal seam line can eliminate the ghosting that may appear in the mosaic. However, the traditional dynamic programming method tends to cut through obstacles and leave visible stitching traces when searching for seam lines in images containing large obstacles. In view of this, we propose a fast and robust method that searches the image difference matrix and structure difference matrix of the overlapping region of the input images according to a search strategy to determine the pixel coordinates of the seam line, and finally obtains an optimal seam line. The experimental results show that our algorithm performs well in eliminating mismatched points; compared with traditional algorithms, matching accuracy is improved by 25%. In terms of computing the optimal seam line, the proposed method is faster than the traditional method and shows good robustness.
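As a minimal sketch of how such a combined binary-descriptor measure could be computed (the abstract does not give the exact fusion rule, so the weighting parameter `alpha` below is an assumption):

```python
import numpy as np

def hamming_distance(a, b):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(a != b))

def tanimoto_similarity(a, b):
    """Tanimoto (Jaccard) similarity for binary vectors: |a AND b| / |a OR b|."""
    intersection = np.count_nonzero(np.logical_and(a, b))
    union = np.count_nonzero(np.logical_or(a, b))
    return intersection / union if union > 0 else 1.0

def combined_score(a, b, alpha=0.5):
    """Fuse normalized Hamming similarity with Tanimoto similarity.
    alpha is a hypothetical weighting parameter, not taken from the paper."""
    hamming_sim = 1.0 - hamming_distance(a, b) / a.size
    return alpha * hamming_sim + (1.0 - alpha) * tanimoto_similarity(a, b)

# Example: two 256-bit binary descriptors, the second with 20 flipped bits
rng = np.random.default_rng(0)
d1 = rng.integers(0, 2, 256, dtype=np.uint8)
d2 = d1.copy(); d2[:20] ^= 1
print(combined_score(d1, d2))
```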
Many applications require real-time rendering while preserving the accuracy of the terrain model, and a great deal of research has been devoted to this problem. Building on previous studies, this paper studies the quad-tree-based LOD representation of regular terrain scenes and proposes an adaptive LOD representation method. The method has two advantages: (1) when the quad-tree is used to represent the terrain scene, whether a quad-tree node needs further subdivision is determined by the number of triangles in the patch, and parameterizing this indicator yields an adaptive terrain storage structure; (2) LOD is performed only for the scene inside the view frustum, the subdivision is driven mainly by the viewpoint distance, and the relationship between the subdivision level and the rendered area is quantified. The experimental results show that when the rendered scene is large, the algorithm greatly reduces the amount of computation, improves the rendering efficiency of the terrain model, and maintains good visual quality across the entire scene.
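A minimal sketch of such a subdivision decision, assuming a node stores its patch size, centre, and triangle count (the thresholds `max_triangles` and `k` are illustrative, not from the paper):

```python
import math
from dataclasses import dataclass

@dataclass
class QuadNode:
    center: tuple      # (x, y, z) centre of the terrain patch
    size: float        # edge length of the patch
    triangles: int     # triangle count of the patch at its current resolution

def needs_subdivision(node, viewpoint, max_triangles=128, k=2.0):
    """Split a quad-tree node only when its patch is still too dense and the
    viewpoint is close enough to deserve extra detail (assumed thresholds)."""
    d = math.dist(node.center, viewpoint)   # distance from viewpoint to patch
    return node.triangles > max_triangles and d < k * node.size

node = QuadNode(center=(100.0, 0.0, 100.0), size=64.0, triangles=512)
print(needs_subdivision(node, viewpoint=(80.0, 30.0, 90.0)))   # True: near and dense
```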
Computer-Aided Diagnosis (CAD) benefits the early diagnosis and accurate treatment of lung diseases. Accurate segmentation of lung fields is an important component of CAD for lung health and facilitates subsequent analysis. However, most existing lung field segmentation algorithms cannot ensure appearance and spatial consistency because of varied boundaries and poor contrast. In this paper, we propose a novel hybrid method for lung field segmentation that integrates a Dense-U-Net network with a fully connected conditional random field (CRF). To enable the reuse of image features, densely connected structures are added to the decoder, which ensures that objects of varied shapes and sizes can be extracted without adding more parameters. To make full use of the mutual information among pixels of the original image, a fully connected CRF is adopted to further refine the preliminary segmentation results according to the intensity and position of each pixel. Compared with several previous popular methods on the JSRT dataset, the proposed method achieves a higher Jaccard index and Dice coefficient.
Lumbar vertebral fracture seriously endangers people's health and carries a high mortality rate. Because the differences among various fracture features in CT images are subtle, classifying multiple types of vertebral fractures poses a great challenge for computer-aided diagnosis systems. To solve this problem, this paper proposes a multiclass PSVM ensemble method with multi-feature selection to recognize lumbar vertebral fractures in spine CT images. In the proposed method, the active contour model is first used to segment the lumbar vertebral bodies, which aids subsequent feature extraction. Second, different image features are extracted, including 3 geometric shape features, 3 texture features, and 5 height ratios. The importance of these features is analyzed and ranked using the infinite feature selection method, from which different feature subsets are selected. Finally, three multiclass probabilistic SVMs with a binary tree structure are trained on three datasets, and a weighted voting strategy is used for the final decision fusion. To validate the effectiveness of the proposed method, probabilistic SVM, K-nearest neighbor, and decision tree base classifiers are compared with and without feature selection. Experimental results on 25 spine CT volumes demonstrate the advantage of the proposed method over the other classifiers in terms of both classification accuracy and Cohen's kappa coefficient.
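A minimal sketch of weighted-voting fusion over probabilistic classifier outputs; the weight values and class counts below are made up for illustration, and the paper's exact weighting scheme is not specified in the abstract:

```python
import numpy as np

def weighted_vote(prob_matrices, weights):
    """Fuse class-probability outputs of several classifiers by weighted voting.

    prob_matrices : list of (n_samples, n_classes) arrays, one per classifier
    weights       : per-classifier weights, e.g. validation accuracies
    Returns the predicted class index for each sample.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                  # normalize the weights
    fused = sum(w * p for w, p in zip(weights, prob_matrices))
    return fused.argmax(axis=1)

# Example with three hypothetical probabilistic SVMs and 4 fracture classes
p1 = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]])
p2 = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1]])
p3 = np.array([[0.5, 0.3, 0.1, 0.1], [0.3, 0.4, 0.2, 0.1]])
print(weighted_vote([p1, p2, p3], weights=[0.9, 0.85, 0.8]))   # -> [0 1]
```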
Virtual binocular stereoscopic camera models that can be used to generate stereoscopic images or videos of virtual 3D scenes with computer graphics techniques are presented and analyzed in detail. The parallel-optical-axis model is found to be the most appropriate for synthesizing stereoscopic images or videos of virtual 3D scenes. Mathematical formulae for calculating the camera position and look-at position of both the left and right virtual cameras are developed. Light-source visibility filtering with a depth-variation constraint is proposed to speed up ray-tracing-based soft shadow rendering of virtual 3D scenes while eliminating the fake shadows produced by the original light-source visibility filtering method, and a dual light-source visibility filtering scheme is suggested so that shadows reflected in a mirror surface correctly feature penumbra. A stereoscopic video generated experimentally with our method shows pleasing visual realism and impressive stereoscopic effects.
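The paper's exact formulae are not reproduced in the abstract; the following sketch shows one standard way to place a parallel-optical-axis camera pair by shifting both the position and the look-at point half a baseline along the rig's right vector:

```python
import numpy as np

def stereo_cameras(eye, lookat, up, baseline):
    """Positions and look-at points of the left/right cameras in a
    parallel-optical-axis rig: both cameras are offset by half the baseline
    along the rig's right vector, which keeps the optical axes parallel."""
    eye, lookat, up = map(np.asarray, (eye, lookat, up))
    forward = lookat - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    offset = 0.5 * baseline * right
    left_cam  = (eye - offset, lookat - offset)    # (position, look-at)
    right_cam = (eye + offset, lookat + offset)
    return left_cam, right_cam

# Example: a 65 mm baseline rig looking down the -z axis
left, right = stereo_cameras(eye=(0, 0, 5), lookat=(0, 0, 0),
                             up=(0, 1, 0), baseline=0.065)
```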
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. The method first generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as the shadow boundaries. These areas are then encoded as binary values in a texture called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm smooths the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time, with more boundary detail than previous works.
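A CPU-side sketch of the box-filter step, assuming a binary light-visibility map as input (the paper performs this convolution in parallel on the GPU; the kernel size here is an assumed parameter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soften_visibility(binary_visibility, kernel_size=7):
    """Smooth a binary light-visibility map with a box filter so that hard
    0/1 shadow boundaries become fractional penumbra values."""
    v = binary_visibility.astype(np.float32)
    return uniform_filter(v, size=kernel_size, mode='nearest')

# Example: a hard shadow edge running down the middle of a 16x16 map
vis = np.zeros((16, 16), dtype=np.uint8)
vis[:, 8:] = 1                        # lit on the right, shadowed on the left
soft = soften_visibility(vis)         # values near the edge fall between 0 and 1
```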
Masses in mammograms are an important indicator of breast cancer in Computer-Aided Diagnosis, and the use of retrieval systems in breast examination is increasing. In this respect, methods exploiting the vocabulary tree framework and the inverted file for mammographic mass retrieval have been shown to achieve high accuracy and excellent scalability. However, they treat the features in each image only as visual words and ignore the spatial configuration of the features, which greatly affects retrieval performance. To overcome this drawback, we introduce a geometric verification method for mammographic mass retrieval. First, we obtain corresponding matched features based on the vocabulary tree framework and the inverted file. We then capture the local similarity of deformations by constructing circular regions around the corresponding pairs; each circle is segmented to express the geometric relationship of local matches in the area, and a strict spatial encoding is generated. Finally, we judge whether the matched features are correct by verifying that all spatial encodings satisfy geometric consistency. Experiments show the promising results of our approach.
To handle the weak edges encountered in pancreas segmentation, this paper proposes a new solution that integrates more features of CT images by combining SLIC superpixels with interactive region merging. In the proposed method, the Mahalanobis distance is first used in the SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable during interactive region merging. Furthermore, object edge blocks are handled accurately by a re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissue to verify its feasibility and effectiveness. The experimental results show that the proposed method raises segmentation accuracy to 92% on average. This study will advance the application of pancreas segmentation in computer-aided diagnosis systems.
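A minimal sketch of the Mahalanobis distance between two feature vectors, here with a hypothetical set of 6-dimensional superpixel features (five texture features plus one gray feature) and a covariance matrix estimated over all superpixels:

```python
import numpy as np

def mahalanobis_distance(x, y, cov):
    """Mahalanobis distance between two feature vectors given the feature
    covariance matrix estimated over the whole image."""
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Example: 200 superpixels, each described by 6 features (values are synthetic)
features = np.random.default_rng(1).normal(size=(200, 6))
cov = np.cov(features, rowvar=False)
d = mahalanobis_distance(features[0], features[1], cov)
```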
We introduce an algorithm for real-time, sub-pixel-accurate hard shadow rendering. The method focuses on the shadow aliasing caused by the limited resolution of shadow maps. We store a partial, approximate geometric representation of the scene surfaces visible to the light source. Since aliasing occurs in the shadow silhouette regions, we present an edge detection algorithm based on second-order Newton's divided differences that partitions the shadow map into depth-discontinuous and depth-continuous regions. A tangent estimation method based on the geometry shadow map is then used to remove the aliasing artifacts in the silhouette regions. Experiments show that our algorithm eliminates the resolution issues and generates high-quality hard shadows.
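A one-dimensional sketch of how second-order divided differences can flag depth discontinuities along a shadow-map row; the threshold value is an assumed parameter, not taken from the paper:

```python
import numpy as np

def depth_discontinuity_mask(depth_row, threshold=0.05):
    """Flag shadow-map texels whose second-order Newton divided difference of
    depth exceeds a threshold, marking depth-discontinuous (silhouette) texels."""
    f = np.asarray(depth_row, dtype=np.float64)
    first = f[1:] - f[:-1]                 # first divided differences (unit spacing)
    second = (first[1:] - first[:-1]) / 2  # second divided differences
    mask = np.zeros_like(f, dtype=bool)
    mask[1:-1] = np.abs(second) > threshold
    return mask

row = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])   # a depth step in one shadow-map row
print(depth_discontinuity_mask(row))              # True around the step
```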
Endoscopy is widely used in clinical practice, and surgical navigation systems are an extremely important means of enhancing its safety. The key to improving the accuracy of a navigation system is to solve precisely for the positional relationship between the camera and the tracking marker. This problem can be solved by hand-eye calibration based on dual quaternions. However, because of tracking errors and the limited motion of the endoscope, the sample motions may contain incomplete motion samples, which make the algorithm unstable and inaccurate. An improved selection rule for sample motions is proposed in this paper to increase the stability and accuracy of dual-quaternion-based methods. A motion filter removes the incomplete motion samples, yielding a highly precise and robust result. The experimental results show that the accuracy and stability of camera registration are effectively improved by selecting sample motion data automatically.
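The abstract does not specify the filter's criteria; one plausible sketch, under the common assumption that degenerate motions are those with too little rotation or with inconsistent screw angles between the camera and marker motions, is:

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle (radians) of a 3x3 rotation matrix."""
    return float(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

def filter_motions(camera_motions, marker_motions,
                   min_angle=np.deg2rad(10), max_mismatch=np.deg2rad(2)):
    """Keep only motion pairs that are well-conditioned for dual-quaternion
    hand-eye calibration: the rotation must be large enough, and the camera
    and marker rotation angles must agree (they share the same screw angle).
    The thresholds are assumed values, not taken from the paper."""
    kept = []
    for Ra, Rb in zip(camera_motions, marker_motions):
        ta, tb = rotation_angle(Ra), rotation_angle(Rb)
        if ta > min_angle and abs(ta - tb) < max_mismatch:
            kept.append((Ra, Rb))
    return kept
```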
Precise annotation of vessels is desired in computer-assisted systems to help surgeons identify each vessel branch. A previously reported method annotates vessels on volume-rendered images by rendering their names on them in a two-pass rendering process. In that method, however, cylindrical surface models of the vessels must be generated to place the vessel names. In reality, vessels are not true cylinders, so their surfaces cannot be simulated accurately by such models. This paper presents a model-free method for annotating vessels on volume-rendered images by rendering their names on them in a two-pass rendering process consisting of surface rendering and volume rendering. In the surface rendering pass, docking points for vessel names are estimated from properties such as centerlines, running directions, and vessel regions obtained in a preprocessing step, and the vessel names are pasted on the vessel surfaces at the docking points. In the volume rendering pass, the volume image is rendered with a fast volume rendering algorithm using the depth buffer of the image produced by the surface rendering pass. Finally, the rendered images are blended into a single result. To evaluate the proposed method, a visualization system for the automated annotation of abdominal arteries was implemented. The experimental results show that vessel names are drawn correctly on the corresponding vessels in the volume-rendered images. The proposed method has great potential for annotating other organs that cannot be modeled with regular geometric surfaces.
Virtual endoscopy is a method that visually emulates cavity inspection by combining raw volume data obtained from CT and MR with three-dimensional imaging technologies, using navigation, fly-through, and pseudo-color techniques. Virtual endoscopy applications have been developed in recent years, and their software realization requires the support of multiple complicated algorithms, including internal surface reconstruction, automatic center-path extraction, lens setup, multi-case processing, collision detection, and the corresponding algorithm computation and implementation, which makes application software development for virtual endoscopy rather complex and difficult. This paper puts forward volume rendering for rapid three-dimensional reconstruction, introduces a highly adaptive path planning algorithm for three-dimensional space paths, an improved path smoothing algorithm, and an automatic lens entry-point detection algorithm, illustrates three-dimensional scene construction with the VTK development toolkit, and discusses the key technologies in realizing virtual endoscopy, on the basis of which the virtual endoscope system performs gracefully.
KEYWORDS: 3D image processing, Image registration, Medical imaging, Volume rendering, 3D image reconstruction, Image processing, Data conversion, Medical research, Radiography, Data modeling
The development of techniques such as CT and MRI provides the means to study human internal structure directly. In the clinic, the various imaging results of a patient are usually combined for analysis. At present, in most cases, doctors make a diagnosis by observing slice images of the human body. Because of the complexity and diverse configurations of human organs, as well as the unpredictability of lesion location and shape, it is difficult to imagine the three-dimensional configuration of organs and their relationships from these 2D slices without the corresponding specialist knowledge and practical experience. Aligning two 2D images to obtain a single 2D slice therefore cannot fully satisfy the requirements of medical diagnosis, and the registration problem must be extended to 3D images. Since the quantity of 3D volume data is much larger, accurately aligning two 3D images increases the computational cost considerably, which forces us to find methods that achieve good precision while satisfying time requirements. In this paper, a digitally reconstructed radiograph (DRR) method is proposed to solve these problems: the two 3D images are ray-traced and digitally reconstructed to create two 2D images, and by aligning the 2D data the 3D data are aligned.
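A minimal sketch of DRR generation, assuming parallel rays cast along one axis of the volume and a simple Beer-Lambert attenuation model with an assumed scaling constant `mu` (the paper ray-traces the 3D images; this axis-aligned projection is only illustrative):

```python
import numpy as np

def simple_drr(volume, axis=0, mu=0.02):
    """Produce a digitally reconstructed radiograph by casting parallel rays
    along one axis of a CT volume and integrating attenuation (Beer-Lambert)."""
    path_integral = volume.sum(axis=axis)          # line integral along each ray
    return 1.0 - np.exp(-mu * path_integral)       # simulated film exposure

# Example: a 64^3 synthetic volume with a dense cube in the middle
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[24:40, 24:40, 24:40] = 1.0
drr = simple_drr(vol, axis=0)    # a 64x64 projection image
```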
Medical image registration is a basic task in medical image processing. It aligns multiple images acquired in different modes or at different times and thereby provides a guarantee for subsequent image processing. In this article we mainly study the mutual information matching method, using image pixel gray values to compute the mutual information, and put forward an improved registration algorithm. We compare the improved mutual information registration method with the classical first-order and second-order mutual information registration methods. During alignment, by extracting image feature marks or edge information, using them as reference feature values for registration, and improving the multi-parameter optimization algorithm, we obtain satisfactory experimental results. For two-dimensional medical image registration, the improved method combining gradient information and mutual information solves the local extremum problem caused by the lack of spatial information when mutual information alone is used. The feasibility of this algorithm is proved by the corresponding experiments.
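A minimal sketch of the mutual information similarity measure computed from the joint gray-level histogram of two images (the bin count is an assumed parameter):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images of the same size, computed
    from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Example: MI of an image with itself is high; a shifted copy scores lower
rng = np.random.default_rng(0)
a = rng.normal(size=(128, 128))
print(mutual_information(a, a), mutual_information(a, np.roll(a, 5, axis=0)))
```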
KEYWORDS: Image segmentation, Image processing algorithms and systems, 3D image processing, Reconstruction algorithms, 3D image reconstruction, Volume rendering, Medical imaging, Detection and tracking algorithms, Image processing, Medical image reconstruction
Three-dimensional image reconstruction by volume rendering has two problems: it is time-consuming and of low precision. During diagnosis, doctors are interested in specific organ tissue, so the two-dimensional images are pre-processed before three-dimensional reconstruction, including noise removal and precise segmentation, to obtain a Region Of Interest (ROI) on which the three-dimensional reconstruction is performed; this decreases the time and space complexity. To this end, the Live Wire segmentation algorithm for medical images is improved to obtain exact edge coordinates, and an improved filling algorithm segments the image with its interior details. The segmented images, containing only the object details, are then used as input for volume rendering by a ray casting algorithm. Because the unneeded organs have been filtered out, the disturbance to the objects of interest is reduced. Moreover, the organs of interest generally occupy a smaller proportion of the image, so the data volume for volume rendering is reduced and the speed of three-dimensional reconstruction is improved.
This paper presents a theoretical study and simulation research on atmospheric effects in airplane-to-ground laser communication. It establishes refraction, attenuation, and turbulence models of laser transmission through the atmosphere and, using communication rate and bit error rate as the objective functions, performs a digital simulation of the whole communication process based on a theoretical model of the laser communication system. In the simulation, various plotting tools are used to observe how the laser energy changes under external influences such as atmospheric effects, free-space loss, and background radiation, and to observe the beam position and energy distribution when the laser arrives at the satellite receiver. Moreover, a local simulation model is established to analyze thoroughly the effects of various external factors on the laser communication performance.
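As a rough sketch of the kind of link-budget calculation such a simulation performs, the following combines Beer-Lambert atmospheric attenuation with geometric beam-spreading loss; all parameter values and the simplified capture model are illustrative assumptions, not the paper's models:

```python
def received_power(p_tx, atten_db_per_km, path_km, divergence_rad, rx_diameter_m):
    """First-order laser link budget: exponential (Beer-Lambert) atmospheric
    attenuation plus free-space spreading loss from beam divergence."""
    # atmospheric transmission from an attenuation coefficient in dB/km
    t_atm = 10 ** (-atten_db_per_km * path_km / 10.0)
    # fraction of the diverging beam captured by the receiver aperture
    beam_radius = divergence_rad * path_km * 1000.0 / 2.0
    capture = min(1.0, (rx_diameter_m / 2.0) ** 2 / beam_radius ** 2)
    return p_tx * t_atm * capture

# 1 W transmitter, 0.5 dB/km clear-air attenuation, 10 km slant path,
# 1 mrad divergence, 10 cm receiver aperture
print(received_power(1.0, 0.5, 10.0, 1e-3, 0.1))
```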
The theory and realization of computer simulation of Acquisition, Pointing and Tracking (APT) for laser beams are discussed in this paper. The work covers two aspects: (1) functional simulations of the corresponding APT module units, focusing on the relationships among initial pointing uncertainty, acquisition probability, scanning pattern, CCD detection signals, and spot position error; (2) an HLA (High Level Architecture) application, which uses three independent machines to represent the two communication terminals and the satellite's independent movement, and gives the design and realization of the simulation federates and objects. This simulation scheme can simulate the open-loop process of the APT system well, realizes distributed APT simulation, and improves confidence in the results. Moreover, combined with a three-dimensional engine, it displays the APT process visually and vividly.
The generalization ability of feed-forward neural networks is discussed in this paper. First, two practical methods for improving network generalization ability are presented and verified through theoretical and experimental research. Second, a model for measuring network generalization ability in terms of generalization error is given. The essence is to define a probability model over the inputs and regard the expected error of the network on testing samples as the index for measuring generalization ability. The amount and complexity of computation are much lower than in traditional methods.
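A minimal sketch of such a measure, assuming the network is an arbitrary callable and the input probability model is given as per-sample weights (the paper's exact probability model is not specified in the abstract):

```python
import numpy as np

def generalization_error(network, test_inputs, test_targets, input_probs=None):
    """Estimate generalization ability as the expected squared error of the
    network over test samples drawn from an assumed input probability model."""
    errors = np.array([np.mean((network(x) - t) ** 2)
                       for x, t in zip(test_inputs, test_targets)])
    if input_probs is None:                        # uniform probability model
        return float(errors.mean())
    p = np.asarray(input_probs, dtype=float)
    return float(np.sum(p / p.sum() * errors))     # expectation under the model

# Example with a trivial "network" and a uniform input model
net = lambda x: 2.0 * x
xs = np.linspace(0, 1, 50)
ts = 2.0 * xs + np.random.default_rng(3).normal(0, 0.1, 50)
print(generalization_error(net, xs, ts))
```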
Neural networks adapt well to the multifarious features of recognized objects, and combining multiple neural networks linearly can fuse multi-feature information and enhance the performance of a recognition system. However, a plain linear combination uses static weights on the subnet outputs, so it cannot dynamically select the best subnets or regulate their individual contributions, which limits the performance of the whole network. This paper puts forward an optimized linear combination method for multiple neural networks. The method determines the optimal combination weights by constructing an estimate function of the whole network's performance, gives a computable mathematical model for estimating these optimal weights, and discusses the robustness of a multiple-neural-network system under optimized linear combination. In simulation experiments, the method is applied to object recognition by multi-feature information fusion and achieves more satisfactory results than the general linear combination of multiple neural networks.
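The abstract does not give the paper's estimate function; as a standard stand-in, the following sketch computes minimum-error combination weights from the subnet error correlation matrix, in the spirit of the classical generalized-ensemble formulation:

```python
import numpy as np

def optimal_combination_weights(subnet_outputs, targets):
    """Minimum-error linear combination weights for several subnet outputs,
    computed from the error correlation matrix (weights sum to one)."""
    errors = subnet_outputs - targets              # (n_subnets, n_samples)
    C = errors @ errors.T / errors.shape[1]        # error correlation matrix
    Cinv = np.linalg.pinv(C)
    ones = np.ones(C.shape[0])
    return Cinv @ ones / (ones @ Cinv @ ones)

# Example: three subnets approximating the same target with different noise
rng = np.random.default_rng(2)
target = np.sin(np.linspace(0, 3, 200))
outs = np.stack([target + rng.normal(0, s, 200) for s in (0.1, 0.2, 0.3)])
w = optimal_combination_weights(outs, target)
fused = w @ outs                                   # combined network output
```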
Because of the inherent characteristics of detectors, infrared images commonly exhibit low contrast between object and background, ambiguous edges, and strong noise, so it is hard to obtain good results with general methods when detecting and recognizing infrared objects. The recognition method for infrared objects based on multiple features and integrated neural networks proposed in this paper not only improves reliability but also prevents the system from failing when some feature becomes invalid. This paper describes and implements the method in the following stages: infrared image preprocessing, image segmentation, feature extraction, and object recognition by integrated neural networks. According to the experiments, the preprocessing improves the image signal-to-noise ratio through inter-frame accumulation, and smoothing and noise reduction based on a space-variant scale in a deformable model preserves good edges and lays a solid foundation for the subsequent segmentation. Image segmentation and feature extraction are important steps in recognition: the object image is segmented by jointly considering difference operators and histogram thresholding, and features are then extracted, yielding ten aspects related to the infrared image and object. Finally, the extracted object features are processed by integrated neural networks to fuse the information, realizing infrared object recognition and avoiding failure of the whole system when some feature information is lost.
This paper presents a new idea for selecting and optimizing an initial lens structure from an optical lens database, based on an optical lens database (OLDB) and a computer-aided optical design (CAOD) system. Using image-quality target parameters as the conditional search parameters for selecting the initial lens structure from the database, and evaluating the selected lens quality with the surface relative aperture (h/r), the authors obtained good effectiveness in practice.