We have developed a method for analyzing neural electromagnetic data that allows probabilistic inferences to be drawn about regions of activation. The method involves the generation of a large number of possible solutions that fit both the data and prior expectations about the nature of probable solutions, made explicit by a Bayesian formalism. In addition, we have introduced a model for the current distributions that produce MEG (and EEG) data that allows extended regions of activity and can easily incorporate prior information such as anatomical constraints from MRI. To evaluate the feasibility and utility of the Bayesian approach with actual data, we analyzed MEG data from a visual evoked response experiment. We compared Bayesian analyses of MEG responses to visual stimuli in the left and right visual fields in order to examine the sensitivity of the method to known features of human visual cortex organization. We also examined the changing pattern of cortical activation as a function of time.
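The core idea of drawing many candidate solutions that are consistent with both the measured data and a prior can be illustrated with a generic posterior sampler. The sketch below is not the paper's model: the forward matrix, noise level, and prior are placeholders, and a simple Metropolis sampler stands in for whatever sampling scheme the authors used.

```python
# Minimal sketch, assuming a linear forward model data = forward @ j + noise and a
# user-supplied log_prior; this is a generic Metropolis sampler, not the paper's method.
import numpy as np

def sample_solutions(data, forward, log_prior, n_samples=1000, sigma=1.0, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    j = np.zeros(forward.shape[1])                       # current amplitudes (hypothetical layout)

    def log_post(j):
        resid = data - forward @ j
        return -0.5 * np.sum(resid ** 2) / sigma ** 2 + log_prior(j)

    samples, lp = [], log_post(j)
    for _ in range(n_samples):
        cand = j + step * rng.standard_normal(j.shape)   # propose a nearby solution
        lp_cand = log_post(cand)
        if np.log(rng.random()) < lp_cand - lp:          # Metropolis acceptance rule
            j, lp = cand, lp_cand
        samples.append(j.copy())
    return np.asarray(samples)                           # ensemble of probable solutions
```

Summaries over such an ensemble (e.g., the fraction of samples in which a region is active) are what support probabilistic statements about activation.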
Functional MRI (fMRI) is a means of analyzing localized brain activity. It is statistically modeled by a multivariate Gaussian probability distribution in space and a time series in time. However, the currently used analysis methods take a univariate approach; that is, the spatial relationships among voxels are ignored. This paper presents a multivariate analysis method. It formulates fMRI activation-focus detection as a sensor-array signal processing problem and converts the hypothesis tests of the univariate approach into a computer vision approach. It first creates multiple independent, identical sub-images and then uses a covariance matrix to characterize the multivariate Gaussian environment. It utilizes not only the voxel intensities but also their spatio-temporal relationships, and it is computationally faster than existing methods. Results obtained using simulated images, phantom images, and real fMRI data are included. The theoretical and experimental results obtained with this approach were in good agreement.
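To make the multivariate idea concrete, the sketch below scores small voxel neighborhoods against a pooled multivariate Gaussian model via a sample covariance matrix. The window size, array layout, and Mahalanobis-style statistic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: characterize voxel neighborhoods by a sample covariance and
# score each neighborhood with a Mahalanobis-style statistic.
import numpy as np

def neighborhood_vectors(volume, k=3):
    """Collect flattened k x k in-plane neighborhoods from a 3D (x, y, t) array."""
    X, Y, T = volume.shape
    vecs = []
    for t in range(T):
        for x in range(0, X - k + 1):
            for y in range(0, Y - k + 1):
                vecs.append(volume[x:x + k, y:y + k, t].ravel())
    return np.asarray(vecs)

def mahalanobis_scores(vecs):
    """Score each neighborhood against the pooled multivariate Gaussian model."""
    mu = vecs.mean(axis=0)
    cov = np.cov(vecs, rowvar=False) + 1e-6 * np.eye(vecs.shape[1])  # regularized covariance
    inv_cov = np.linalg.inv(cov)
    diffs = vecs - mu
    return np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)           # one score per neighborhood
```

Unlike a voxel-by-voxel t-test, the score depends on the joint behavior of neighboring voxels, which is the point of the multivariate formulation.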
Functional magnetic resonance imaging (fMRI) data of the brain include activated parenchymal voxels corresponding to the paradigm performed, non-activated parenchymal voxels, and background voxels. Statistical tests, e.g. using the general linear model approach of SPM or the Kolmogorov-Smirnov (KS) non-parametric statistic, are common 'supervised' techniques for detecting activation in functional brain MRI. Selecting the voxel type by comparing the voxel time course with a model of the expected hemodynamic response function (HRF) from the task paradigm has proven difficult due to individual and spatial variance of the measured HRF. For the functional differentiation of brain voxels, we introduce a method that separates brain voxels based on their features in the time domain using a self-organizing map (SOM) neural network technique, without modeling the HRF. Since activation measured by fMRI is related to magnetic susceptibility changes in venous blood, which represents only 2 - 5% of brain matter, preprocessing is required to remove the majority of non-activated voxels, which would otherwise dominate learning instead of real activation patterns. Using the auto-correlation function, one can select voxels that are candidates for activation. Features of the time courses of the selected voxels can then be learned by the SOM. In the first step, the SOM is trained on the voxel time courses, fitting its neurons to the input. After learning, the neurons have adapted to the intrinsic feature space of the voxel time courses. Using the trained SOM, the voxel time courses are presented again, and each voxel is labeled by the neuron having the smallest Euclidean distance to its time course. The result of the labeling and the learned feature time course vectors are compared visually with the p-value map of the KS statistic. With the SOM map, one can visually separate the voxels, based on their features in the time domain, into different functional task-related classes.
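A minimal sketch of the training and labeling steps is shown below, assuming the candidate voxel time courses are already preselected. The 1-D map size, learning schedule, and neighborhood function are illustrative choices, not the authors' configuration.

```python
# Minimal SOM sketch (not the authors' code): train a small 1-D self-organizing map on
# voxel time courses, then label each voxel by its best-matching neuron.
import numpy as np

def train_som(time_courses, n_neurons=16, n_epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_samples, _ = time_courses.shape
    weights = time_courses[rng.choice(n_samples, n_neurons, replace=False)].copy()
    grid = np.arange(n_neurons)
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)                      # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / n_epochs) + 1e-3         # shrinking neighborhood
        for x in time_courses[rng.permutation(n_samples)]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1)) # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)           # pull neurons toward input
    return weights

def label_voxels(time_courses, weights):
    """Assign each voxel the index of the nearest neuron in the trained SOM."""
    d = np.linalg.norm(time_courses[:, None, :] - weights[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

The resulting label map can then be compared against a KS p-value map, as described in the abstract.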
Our goal was to use lesions identified on MRI to guide the histological analysis of postmortem brains. After fixation, the postmortem brains were sectioned into 5 mm thick coronal slices and photographed, and the resulting slides were scanned into a computer graphics file format. The slices were co-registered to each other and assembled into a postmortem brain volume. We next co-registered the oblique coronal T1-weighted MRI to the postmortem brain volume. The transformation defined by this co-registration was used to transfer the MRI lesion markings onto the postmortem brain, and these transformed lesion markings were used to guide the histological analysis. Only rigid-body transformations were used in this analysis. The postmortem brain volume appeared most accurate near the center slices, probably because of the deformations undergone by the postmortem brain during slicing, fixation, and movement. Despite this limitation and our use of rigid-body transformations, the co-registered MRI and postmortem brain were reasonably well matched at the surface, at the ventricles, and at structures of interest such as the hippocampus. More precise localization of the MRI lesion markings on the postmortem brain slices will only be achieved when a nonlinear method such as warping of the brain is used.
Subtraction ictal SPECT coregistered to MRI (SISCOM) has been shown to aid epileptogenic localization and improve surgical outcomes in partial epilepsy patients. This paper reports a new method of identifying significant areas of epileptogenic activation in the SISCOM subtraction image that takes into account the normal variation between sequential Tc-99m ethyl cysteinate diethylester SPECT scans of single individuals. The method uses the AIR 3.0 nonlinear registration software to combine a group of subtraction images into a common anatomical framework. A map of the pixel intensity standard deviation values in the subtraction images is created, and this map is nonlinearly registered to a patient's SISCOM subtraction image. Pixels in the patient subtraction image may then be evaluated based upon the statistical characteristics of the corresponding pixels in the atlas. Validation experiments verified that local image variances are not constant across the image and that nonlinear registration preserves local image variances. SISCOM images created with the voxel variance method were rated higher in quality than those created with the conventional image variance method in images from fifteen patients. No difference in localization rate was observed between the voxel variance mapping and image variance methods. The voxel significance mapping method was shown to improve the quality of clinical SISCOM images without removing localizing information.
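The essential arithmetic of voxel-wise significance mapping is simple: normalize each subtraction value by the locally expected standard deviation instead of a single image-wide variance. The sketch below is illustrative; the threshold and variable names are assumptions, not the authors' clinical settings.

```python
# Illustrative sketch of voxel-wise significance mapping: compare each pixel of a
# patient's SISCOM subtraction image against a registered atlas of local standard
# deviations. The z-threshold is hypothetical.
import numpy as np

def voxel_significance(subtraction, sd_atlas, z_threshold=2.0, eps=1e-6):
    """Return a mask of pixels whose subtraction value exceeds local normal variation."""
    z = subtraction / (sd_atlas + eps)        # voxel-wise standardization
    return np.abs(z) >= z_threshold           # candidate epileptogenic activation
```

The conventional image-variance method corresponds to replacing sd_atlas by a single scalar computed over the whole subtraction image.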
In recent years, with the development of new MRI techniques, noninvasive evaluation of global and regional cardiac function is becoming a reality. One of the methods used for this purpose is MRI tagging. In tagging, spatially encoded magnetic saturation planes (tags) are created within tissues. These act as temporary markers and move with the tissue. In cardiac tagging, the tag deformation pattern provides useful qualitative and quantitative information about the functional properties of the underlying myocardium. The measured deformation of a single tag plane contains only unidirectional information about the past motion. In order to track the motion of a cardiac material point, these sparse, one-dimensional data must be combined with similar information gathered from the other tag sets and all time frames. Previously, several methods have been developed that rely on the specific geometry of the chambers. Here, we employ a simple, image-plane-based Cartesian coordinate system and provide a stepwise method to describe the heart motion using a four-dimensional tensor product of B-splines. The proposed displacement and forward motion fields exhibited sub-pixel accuracy. Since our motion fields are parametric and defined in an image-plane-based coordinate system, trajectories and other derived quantities (velocity, acceleration, strains, etc.) can be calculated for any desired point on the MRI images. The method is sufficiently general that the motion of any tagged structure can be tracked.
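The tensor-product structure is what makes such fields cheap to evaluate at arbitrary points: each coordinate contributes a small set of B-spline basis weights, and the displacement is a weighted sum over a local block of control points. The sketch below reduces the problem to one spatial axis plus time for brevity; the full 4-D case multiplies in the remaining spatial bases in exactly the same way. The control-point layout and spacings are illustrative assumptions.

```python
# Sketch of a tensor-product B-spline displacement field with a uniform cubic basis
# (reduced to one spatial axis plus time); not the authors' implementation.
import numpy as np

def cubic_bspline_weights(u):
    """Uniform cubic B-spline basis weights for local parameter u in [0, 1)."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def displacement(ctrl, x, t, spacing_x, spacing_t):
    """Evaluate displacement at (x, t); ctrl must provide a 4 x 4 block of support."""
    ix, ux = int(x // spacing_x), (x % spacing_x) / spacing_x
    it, ut = int(t // spacing_t), (t % spacing_t) / spacing_t
    bx, bt = cubic_bspline_weights(ux), cubic_bspline_weights(ut)
    patch = ctrl[ix:ix + 4, it:it + 4]       # local 4 x 4 control-point block
    return float(bx @ patch @ bt)            # tensor-product evaluation
```

Because the field is smooth and analytic in its parameters, velocities, accelerations, and strains follow by differentiating the basis functions rather than the image data.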
Accurate delineation of the volumetric motion of the left ventricle (LV) of the heart from magnetic resonance imaging is an important area of research. We have built a system that takes tagged short-axis (SA) and long-axis (LA) image sequences as input, fits a 3D B-spline solid model to the LV of the heart by matching three sequences of solid knot planes to three sequences of tag planes, and volumetrically tracks the LV deformations. The formulation described in this article is more efficient than our previously proposed B-spline solid formulation. The advantage of the B-spline solid is that each LV myocardial point is uniquely indexed by a solid point, simplifying displacement reconstruction. The output of the system is a 3D solid that accurately captures the volumetric LV motion and provides a 3D motion field. The concept of registering and fitting the solid's knot planes to the LV tag planes is novel. To outline the myocardium from the solid, two B-spline surface models are used to fit the endocardial and epicardial walls. The generated motion fields follow the ground-truth motion fields closely.
A major issue in cardiac imaging is the assessment of cardiac function and, particularly, the identification of ischemic or infarcted tissues. We present in this article a method to reconstruct the motion of the left ventricle (LV) using 4D planispheric transformations of time and space combined, in a first step, with B-spline tensor products. The 4D modeling offers several advantages: (1) the use of planispheric coordinates makes the numerical evaluation more stable than with prolate spheroidal coordinates, the equivalent focal point being much farther from the apical area of the heart; (2) the temporal modeling can be simply adapted to changing temporal dynamics, such as those introduced by ectopic pacing or rapid filling after systole; and (3) the displacement and strain parameters used for the spatial modeling can be computed at any point of the LV volume. Experiments are conducted on a normal and a pathological LV (posterior infarct) in order to assess the tuning of the parameters of the method. The mean RMS-distance error is less than 0.5 mm for both LVs. Finally, the motion is analyzed in terms of smooth zeroth-order (displacement) and first-order (strain) parameters.
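Once a smooth displacement field u(x) is available everywhere in the LV volume, first-order motion parameters follow from standard continuum kinematics. The sketch below computes the Green-Lagrange strain tensor from a pointwise displacement gradient; it is a generic formulation, not the paper's exact parameterization.

```python
# Generic strain sketch: E = 1/2 (F^T F - I), with F = I + grad(u).
import numpy as np

def green_lagrange_strain(grad_u):
    """grad_u: 3x3 displacement gradient at a point; returns the 3x3 strain tensor."""
    F = np.eye(3) + grad_u            # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))
```

With a parametric motion model, grad_u is obtained analytically by differentiating the basis functions, so strain can be reported at any LV point and time.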
This paper presents a scheme based on the integration of continuum mechanics and estimation theory to characterize the complex nonrigid motion of the left ventricle (LV) over a sequence of 3D images. The proposed scheme is implemented in a hierarchical fashion so that both the global and local motion and deformation can be analyzed. First, global motion and deformation are analyzed and compensated by applying 'Chen surface' modeling, and parametric representations of the left-ventricular surfaces are obtained from the segmented 3D images. We then develop a local motion estimation model, assuming that a thin slice of the endocardial surface can be considered an incompressible medium and can be characterized by the incompressibility constraint derived from continuum mechanics. This constraint is integrated with correlation functions derived from estimation theory. The correlation functions are computed from the original intensity images and can be used to measure the confidence of the estimation. An overall objective function can therefore be constructed as the weighted sum of the incompressibility term and a motion discontinuity-preserving smoothness constraint, with the corresponding correlation functions as weights. The optimal estimate of the local deformation is obtained by minimizing this objective function. Since the proposed scheme is based on a physical model, the results are more consistent with heart function. This integrated scheme allows the point correspondences to depart slightly from the manually segmented surfaces in regions of strong uncertainty. The proposed scheme is able to generate consistent 3D motion vectors for a sequence of cardiac images. Three-dimensional visualization of the displacements is also investigated.
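A schematic form of such an objective is shown below: a confidence-weighted incompressibility penalty plus a robust (discontinuity-preserving) smoothness term. All names, weights, and the particular robust function are placeholders used for illustration, not the authors' exact energy.

```python
# Illustrative objective for local motion estimation, assuming per-point values of the
# field divergence, squared displacement gradient, and correlation-based confidence.
import numpy as np

def objective(div_u, grad_u_sq, corr, alpha=1.0, beta=1.0, delta=1.0):
    incompressibility = corr * div_u ** 2                  # confidence-weighted volume preservation
    smoothness = np.minimum(grad_u_sq, delta ** 2)         # truncated penalty preserves discontinuities
    return np.sum(alpha * incompressibility + beta * smoothness)
```

Minimizing such an energy over candidate displacements trades off physical plausibility against image-derived confidence, which is the mechanism that lets correspondences drift slightly from the manual surfaces where the data are uncertain.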
Early detection and removal of colorectal polyps have been proven to reduce mortality from colorectal carcinoma (CRC), the second leading cause of cancer deaths in the United States. Unfortunately, traditional techniques for CRC examination (i.e., barium enema, sigmoidoscopy, and colonoscopy) are unsuitable for mass screening because of low accuracy or poor public acceptance, cost, and risk. Virtual colonoscopy (VC) is a minimally invasive alternative that is based on tomographic scanning of the colon. After a patient's bowel is optimally cleansed and distended with gas, a fast tomographic scan, typically helical computed tomography (CT), of the abdomen is performed during a single breath-hold acquisition. Two-dimensional (2D) slices and three-dimensional (3D) rendered views of the colon lumen generated from the tomographic data are then examined for colorectal polyps. Recent clinical studies conducted at several institutions, including ours, have shown great potential for this technology to become an effective CRC screening tool. In this paper, we describe new methods to improve bowel preparation, colon lumen visualization, colon segmentation, and polyp detection. Our initial results show that VC with the new bowel preparation and imaging protocol is capable of achieving accuracy comparable to conventional colonoscopy, and that our new image analysis algorithms contribute to increased accuracy and efficiency in VC examinations.
Spiral CT colonography (virtual colonoscopy) is a rapidly developing field, with both 2D and 3D display techniques currently used for visualization of colorectal polyps. Our purpose was to perform a multiobserver reader study on a test set of 30 colonic segments to determine the diagnostic performance of 2D MPR, 3D thick-slab MPR, and 3D PVR, using colonoscopy as the standard of reference. In this study, CT colonography was performed in a cohort of patients with known polyps immediately prior to colonoscopy, following a standard bowel preparation and air insufflation. A test set of 30 colonic segments was created, containing a total of 22 lesions, all verified by colonoscopy. Three image displays were tested: 2D multiplanar reformation (MPR), 3D thick-slab MPR following 2D MPR review, and 3D perspective volume rendered (PVR) displays following 2D MPR review. Readers independently analyzed each test case in a controlled setting and scored their confidence in each focal finding observed on a 5-point scale. Our results demonstrated high sensitivity for detection of colorectal polyps using 2D MPR in this library of 30 colonic segments. Reader analysis with 3D PVR demonstrated improved characterization of focal findings in selected cases.
A phantom representation of typical colon structures with precisely known geometrical measurements was designed and fabricated. Computed tomography (CT) data were collected using a range of protocols typical for spiral CT colonography. Analysis methods were developed to measure the acquired geometry of the phantom data and to characterize distortions and degradation. Simple models were proposed to explain the trends in degradation as a function of scanner protocol. Preliminary results indicate that degradation due to CT acquisition will not significantly impact the detection of clinically relevant lesions (dimensions greater than 1 cm). However, the CT acquisition process does place a lower limit of several millimeters on the detectable lesion size.
The early detection of colorectal polyps may reduce morbidity and mortality due to colon cancer. Reliable detection of polyps in CT colography requires presentation of the entire colonic surface to the radiologist. We have developed a simple technique for quantifying surface coverage of the colonic mucosa as shown on CT colography. The air in the colon is segmented, and a shell surrounding the air volume is generated by successively growing surfaces contiguous with previously grown voxels using thresholds of -900 HU and -100 HU. To evaluate techniques used for viewing the colon, the shell voxels used by a presentation technique are monitored, and a list of voxels missed by the technique is compiled. The missed voxels can be visually inspected and quantified using clustering methods that parameterize the volumes of missed voxels. The usefulness of this method was investigated using a technique that displays reformatted views about a pre-calculated centerline. A modification of the presentation method, in which additional images are generated by interpolation to improve surface sampling near regions of high centerline curvature, was also evaluated. The surface coverage and neighborhood parameters were calculated for 15 clinical colography studies. The results indicate that an average of 99.68% of the shell voxels were included in the reformatted images prior to interpolation; surface coverage improved to 99.9% when additional interpolated planes were included. The average number of missed neighbors improved from 8.13 to 5.29 with interpolation. The dimensions of the missed volumes provided useful information about surface coverage.
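The coverage bookkeeping itself is straightforward once the shell and the set of displayed voxels are available. The sketch below shows the basic accounting step with illustrative array names; the clustering of missed voxels described in the abstract would operate on the returned index list.

```python
# Minimal coverage sketch: percent of shell voxels reached by a presentation
# technique, plus the indices of the voxels it missed (for later cluster analysis).
import numpy as np

def surface_coverage(shell_mask, viewed_mask):
    shell = np.flatnonzero(shell_mask)                 # all mucosal shell voxels
    missed = np.flatnonzero(shell_mask & ~viewed_mask) # shell voxels never displayed
    coverage = 100.0 * (1.0 - missed.size / max(shell.size, 1))
    return coverage, missed
```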
CT colonography (CTC) is a new technology that permits endoscopic-like evaluation of the mucosal surface. Recently, an electrical-field-based approach was developed to unravel the colon in spiral CT image volumes, that is, to digitally straighten and then flatten the colon using curved cross-sections. In this paper, we report (1) an exact but computation-intensive algorithm for straightening the colon using curved cross-sections, and (2) an approximate but computationally efficient straightening algorithm. In the direct straightening algorithm, each curved cross-section of the colon is defined by the electrical force lines due to charges distributed along the colon path and is constructed by directly tracing the force lines. In the fast straightening algorithm, only representative force lines, originating equiangularly from the current colon path position, are traced, while the other force lines are interpolated from the traced ones. Experiments were performed with both phantom and patient data. They demonstrate that straightening the colon with curved cross-sections facilitates visualization and analysis and has potential for use in CTC, and that the speed of the interpolation-based straightening algorithm, which is about 40 times faster than the direct algorithm, is practically acceptable.
We focus on color mapping between the gray tones of computed tomographic images and the color texture of visible human or optical images. In particular, we propose a probabilistic segmentation based on gradient entropy and Bayesian estimation to solve the material-mixture problem. The approach can fill the gap between segmentation and rendering, eliminating artifacts (jagged edges) produced by incorrect classification of material mixtures and estimating accurate surface normals for volume shading.
Virtual colonoscopy is a minimally invasive technique that enables detection of colorectal polyps and cancer. Normally, a patient's bowel is prepared with colonic lavage and gas insufflation prior to computed tomography (CT) scanning. An important step for 3D analysis of the image volume is segmentation of the colon. The high-contrast gas/tissue interface that exists in the colon lumen makes segmentation of the majority of the colon relatively easy; however, two factors inhibit automatic segmentation of the entire colon. First, the colon is not the only gas-filled organ in the data volume: the lungs, small bowel, and stomach also meet this criterion. User-defined seed points placed in the colon lumen have previously been required to spatially isolate only the colon. Second, portions of the colon lumen may be obstructed by peristalsis, large masses, and/or residual feces. These complicating factors require increased user interaction during the segmentation process to isolate additional colon segments. To automate the segmentation of the colon, we have developed a method to locate seed points and segment the gas-filled lumen with no user supervision. We have also developed an automated approach to improve lumen segmentation by digitally removing residual contrast-enhanced fluid resulting from a new bowel preparation that liquefies and opacifies any residual feces.
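A hypothetical sketch of unsupervised seed finding is given below: threshold gas-filled voxels, label connected components, and keep components that plausibly belong to the colon. The HU threshold, size cutoff, and "reject components touching the superior slices" heuristic are illustrative assumptions, not the authors' protocol.

```python
# Hypothetical seed-finding sketch: candidate colon lumen = large gas-filled
# connected components that do not reach the lung-dominated top of the volume.
import numpy as np
from scipy import ndimage

def colon_gas_components(ct_volume, air_hu=-800, min_voxels=5000, lung_slices=30):
    gas = ct_volume < air_hu                         # gas-filled voxels
    labels, n = ndimage.label(gas)                   # connected components
    keep = np.zeros(n + 1, dtype=bool)
    for lab in range(1, n + 1):
        comp = labels == lab
        if comp.sum() < min_voxels:
            continue                                 # too small: stray bowel gas or noise
        if comp[:lung_slices].any():
            continue                                 # touches superior slices: likely lung
        keep[lab] = True
    return keep[labels]                              # boolean mask of candidate colon lumen
```

Seed points can then be taken anywhere inside the retained components, removing the manual seeding step.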
Due to the close proximity of the heart and lungs within a closed chest environment, we expect breathing to affect various cardiac performance parameters and hence cardiac output. We present an integrative approach to study heart-lung interactions, combining a mathematical formulation of the circulation system with imaging techniques using echo-planar magnetic resonance imaging (EPI) and dynamic x-ray CT (EBCT). We hypothesize that appropriate synchronization of mechanical ventilation to cardiac-cycle specific events can improve cardiac function, i.e. stroke volume (SV) and cardiac output (CO). Computational and experimental results support the notion that heart-lung interaction, leading to altered cardiac output associated with inspiration/expiration, is not directly associated with lung inflation/deflation and thus is felt to be more influenced by pleural pressure changes. The mathematical model of the circulation demonstrates the importance of cardiac-cycle specific timing of ventilation on cardiac function and matches with experimentally observed relationships found in animal models studied via EBCT and human studies using EPI. Results show that positive pressure mechanical ventilation timed to systolic events may increase SV and CO by up to 30%, mainly by increased filling of the ventricles during diastole. Similarly, negative pressure (spontaneous) respiration has its greatest effect on ventricular diastolic filling. Cardiac-gated mechanical ventilation may provide sufficient cardiac augmentation to warrant further investigation as a minimally-invasive technique for temporary cardiac assist. Through computational modeling and advanced imaging protocols, we were able to uniquely study heart-lung interactions within the intact milieu of the never-invaded thorax.
The purpose of this project was to evaluate the dynamic changes during expiration at different levels of positive end-expiratory pressure (PEEP) in the ventilated patient, and to discriminate between normal lung function and acute respiratory distress syndrome (ARDS). After approval by the local Ethics Committee, we studied two ventilated patients: (1) one with normal lung function and (2) one with ARDS. We used the 50 ms scan mode of the EBCT, with the beam positioned 1 cm above the diaphragm and the table position unchanged. An electronic trigger was developed that utilizes the respirator's synchronizing signal to start the EBCT at the onset of expiration. During controlled mechanical expiration at two levels of PEEP (0 and 15 cm H2O), pulmonary aeration was classified as well-aerated (-900 to -500 HU), poorly-aerated (-500 to -100 HU), and non-aerated (-100 to +100 HU). Pathological and normal lung function showed different dynamic changes (Figs. 4-12). The different PEEP levels resulted in a significant change of pulmonary aeration in the same patient. Although we studied only a very limited number of patients, respirator-triggered EBCT may be accurate in discriminating pathological changes due to abnormal lung function in the mechanically ventilated patient.
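The aeration classification reduces to binning lung voxels by HU range. A minimal sketch is shown below; the HU ranges follow the abstract, while the array names and lung mask are illustrative.

```python
# Simple aeration-classification sketch: fraction of lung voxels in each HU range.
import numpy as np

def aeration_fractions(hu, lung_mask):
    values = hu[lung_mask]
    bins = {"well-aerated": (-900, -500),
            "poorly-aerated": (-500, -100),
            "non-aerated": (-100, 100)}
    total = values.size
    return {name: float(np.mean((values >= lo) & (values < hi))) if total else 0.0
            for name, (lo, hi) in bins.items()}
```

Tracking these fractions frame by frame during triggered expiration gives the dynamic aeration curves compared between the normal and ARDS patients.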
This study presents a new intravenous (IV) tomographic angiography imaging technique, called intravenous volume tomographic digital angiography (VTDA), for cross-sectional pulmonary angiography. While the advantages of IV-VTDA over spiral CT in terms of volume scanning time and resolution have been validated and reported in our previous papers for head and neck vascular imaging, the superiority of IV-VTDA over spiral CT for cross-sectional pulmonary angiography has not yet been explored. The purpose of this study is to demonstrate the advantage of the isotropic resolution of IV-VTDA in the x, y, and z directions through phantom and animal studies, and to explore its clinical application for detecting clots in pulmonary angiography. A prototype image-intensifier-based VTDA imaging system was designed and constructed by modifying a GE 8800 CT scanner and was used for a series of phantom and dog studies. A pulmonary vascular phantom was designed and constructed; it was scanned using the prototype VTDA system for direct 3D reconstruction and then scanned using a GE CT/i spiral CT scanner with the routine pulmonary CT angiography protocols. IV contrast injection and volume scanning protocols were developed during the dog studies. Both the VTDA reconstructed images and the spiral CT images of the specially designed phantom were analyzed and compared. The detectability of simulated vessels and clots was assessed as a function of iodine concentration level, orientation angle, and diameter of the vessels and clots. A set of 3D VTDA reconstruction images of dog pulmonary arteries was obtained with different IV injection rates and isotropic resolution in the x, y, and z directions. The results of clot detection studies in dog pulmonary arteries are also shown. The results of the phantom and animal studies indicate that IV-VTDA is superior to spiral CT for cross-sectional pulmonary angiography.
We have previously reported a deconvolution-based technique to recover regional microvascular transport characteristics from dynamic CT images. We have refined our deconvolution algorithm and used Monte Carlo simulations to estimate the error and confidence interval of the resulting regional microvascular mean transit time (MTT) measures. Random errors, assumed to be due to white noise in the imaging process, were superimposed upon known input (PA) and regional parenchymal time-intensity (blood flow) data. The resulting simulated data were then fit to gamma variate functions and processed via our deconvolution algorithm to provide microvascular MTT measures for the simulated curves. The magnitude of the noise used in the simulations was obtained by subtracting two consecutively acquired images (approximately 1.5 sec delay between the two images) from a dynamic imaging sequence of a supine dog imaged at FRC. A total of 35 simulations were performed for each of five sample locations spanning the dependent to nondependent extent of the lungs. Microvascular MTT ranged from 3.39 sec to 9.67 sec as sample locations were moved from the dependent to the nondependent areas of the lungs. The standard error associated with these measures ranged from plus or minus 0.03 sec in the dependent portion of the lungs to plus or minus 0.27 sec in the nondependent area of the lungs.
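The Monte Carlo error estimate amounts to repeatedly perturbing the measured curves with noise of the observed magnitude and summarizing the spread of the recomputed transit times. In the sketch below, deconvolve_mtt is a placeholder for the authors' gamma-variate fit plus deconvolution pipeline, and the parameter names are illustrative.

```python
# Sketch of the Monte Carlo error estimate; deconvolve_mtt stands in for the
# authors' gamma-variate fitting and deconvolution algorithm.
import numpy as np

def monte_carlo_mtt(pa_curve, regional_curve, noise_sd, deconvolve_mtt, n_runs=35, seed=0):
    rng = np.random.default_rng(seed)
    mtts = []
    for _ in range(n_runs):
        pa_noisy = pa_curve + rng.normal(0.0, noise_sd, pa_curve.shape)
        reg_noisy = regional_curve + rng.normal(0.0, noise_sd, regional_curve.shape)
        mtts.append(deconvolve_mtt(pa_noisy, reg_noisy))      # recompute MTT per noisy realization
    mtts = np.asarray(mtts)
    return mtts.mean(), mtts.std(ddof=1) / np.sqrt(n_runs)    # estimate and standard error
```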
The human lungs are divided into five distinct anatomic compartments called lobes. The physical boundaries between the lung lobes are called the lobar fissures. Detection of the lobar fissures in an image data set can be used to help identify the major components of the pulmonary anatomy, guide image registration with a standard lung atlas, drive additional image segmentation processing to find airways and vessels, and provide an anatomic framework within which image-based measurements can be reported. Little work has been done to develop methods for detecting the lobar fissures. We have developed a semi-automatic method to identify the left and right oblique fissures in 3-D X-ray CT data sets. Our method uses fuzzy sets to describe the anatomic and image-based characteristics of likely fissure pixels, and then uses a graph search to select the most probable fissure location on 2-D slices of the data set. The user initializes the search once by defining starting pixels, an initial direction, and ending pixels on one slice. Once the fissure has been identified on a single slice, it can be used to guide automatic fissure detection on neighboring slices. Thus, the entire 3-D surface defined by a fissure can be identified with little intervention. The method has been tested by processing two CT data sets from a normal subject. We present results comparing our method against manual analysis. The average RMS error between the manual analysis and our approach is approximately 1.9 pixels (corresponding to about 1.3 mm), while the fissures themselves typically appear 3 to 6 pixels wide on a CT slice.
Three-dimensional volume data sets provided by CT or MRI allow the user to move virtually within anatomic structures and observe them. We propose a new method to guide path planning, based on local image interpretation during navigation inside anatomic structures with cavities, without any preprocessing of the 3D volume data sets. Through the scene analysis process, the virtual sensor builds a model of the unknown scene by itself. Qualitative and quantitative characterization of the anatomic structures derived from this exploration is of main interest in a new application of virtual endoscopy: vascular surgery. In this paper we describe in particular the scene analysis process, which is based on processing the depth map directly produced by the ray casting procedure. The application to vascular surgery, i.e. virtual angioscopy, is then described.
In this paper, we present a review of the research conducted by our group to design an automatic endoscope navigation and advisory system. The whole system can be viewed as a two-layer system. The first layer operates at the signal level and consists of the processing performed on a series of images to extract all identifiable features; the information is purely dependent on what can be extracted from the 'raw' images. At the signal level, the first task is to detect a single dominant feature, the lumen. A few methods of identifying the lumen are proposed. The first method uses contour extraction: contours are extracted by edge detection, thresholding, and linking. This method requires images to be divided into overlapping squares (8 by 8 or 4 by 4) from which line segments are extracted using a Hough transform. Perceptual criteria such as proximity, connectivity, similarity in orientation, contrast, and edge pixel intensity are used to group both strong and weak edges; this approach is called perceptual grouping. The second method is based on region extraction using a split-and-merge approach on spatial-domain data. An n-level (for a 2^n by 2^n image) quadtree-based pyramid structure is constructed to find the most homogeneous large dark region, which in most cases corresponds to the lumen. The algorithm constructs the quadtree recursively from the bottom (pixel) level upward and computes the mean and variance of the image regions corresponding to the quadtree nodes. On reaching the root, the largest uniform seed region whose mean corresponds to the lumen is selected and grown by merging with its neighboring regions.

In addition to the use of two-dimensional information in the form of regions and contours, three-dimensional shape can provide additional information that enhances the system's capabilities. Shape or depth information from an image can be estimated by various methods. A technique particularly suitable for endoscopy is shape from shading, which is developed to obtain the relative depth of the colon surface in the image by assuming a point light source very close to the camera. If we assume the colon has a shape similar to a tube, then a reasonable approximation of the position of the center of the colon (lumen) is given by the direction in which the majority of the shape's normal vectors are pointing.

The second layer is the control layer, at which a decision model must be built for the endoscope navigation and advisory system. The system we built uses probabilistic networks to create a basic artificial-intelligence system for navigation in the colon. We have constructed the probabilistic networks from correlated objective data using the maximum weighted spanning tree algorithm. In the construction of a probabilistic network, it is always assumed that variables sharing the same parent are conditionally independent. However, this may not hold and can give rise to incorrect inferences. In such cases, we propose creating a hidden node to modify the network topology, which in effect models the dependency of correlated variables. The conditional probability matrices linking the hidden node to its neighbors are determined using a gradient descent method that minimizes an objective cost function. The error gradients can be treated as updating messages and can be propagated in any direction throughout any singly connected network to adjust the network parameters. With the above two-level approach, we have been able to build an automated endoscope navigation and advisory system successfully.
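The split-and-merge idea at the signal level can be illustrated with a simplified search for the largest uniform dark block, which in this setting usually corresponds to the lumen. The sketch below is a reduced illustration of the quadtree concept, not the authors' implementation; the variance and darkness thresholds are hypothetical, and the full method additionally grows the seed region by merging neighbors.

```python
# Illustrative sketch of the bottom-up quadtree idea: find the largest block that is
# both homogeneous (low variance) and dark (low mean) in a 2**n x 2**n image.
import numpy as np

def largest_dark_uniform_block(img, var_thresh=50.0, dark_thresh=60.0):
    """img: 2**n x 2**n grayscale array; returns (row, col, size) of the chosen block."""
    best = None
    size = 1
    while size <= img.shape[0]:
        for r in range(0, img.shape[0], size):
            for c in range(0, img.shape[1], size):
                block = img[r:r + size, c:c + size]
                if block.var() <= var_thresh and block.mean() <= dark_thresh:
                    if best is None or size > best[2]:
                        best = (r, c, size)          # keep the largest qualifying block
        size *= 2                                    # move up one quadtree level
    return best
```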
Successful applications of virtual endoscopy often require the generation of centerlines as flight paths for fly-through examinations of anatomic structures. Criteria for the design of effective centerline algorithms should include the following: (1) tracking of the most medial path possible, (2) robustness to segmentation errors, (3) computational efficiency, and (4) a minimum of user interaction. To satisfy these design goals, we have developed a centerline generation algorithm based on the chamfer distance transform and Dijkstra's single-source shortest path algorithm. The distance transformation is applied to a segmented volume to determine the distance from each object voxel to the nearest background voxel -- a 'medialness' measure for each voxel. From a user-specified source voxel, the distance and path from each object voxel to the source voxel are determined using Dijkstra's single-source shortest path algorithm, with the 'medialness' measure used as the weighting or distance factor between voxels. After execution of the algorithm is complete, paths from all voxels in the object to the source can be easily computed, a feature that is useful for all implementations of virtual endoscopy, but particularly for virtual bronchoscopy, which involves branching. The algorithm runs in O[2n(1 + f)] time, where n is the number of voxels in the volume and f is the ratio of object voxels to total voxels in the volume. The algorithm is efficient, requiring approximately 90 seconds for a 60-megabyte dataset containing a segmented colon, and is robust to noise, segmentation errors, and start/end voxel selection. The only user interaction required is choosing the starting and ending voxels for the path. We report on objective and subjective evaluations of the algorithm when applied to several mathematical phantoms, the Visible Human Male dataset, and patient exams.
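A sketch in the spirit of this algorithm is shown below: a chamfer distance transform supplies the medialness value for each object voxel, and Dijkstra's algorithm, with edge costs inversely related to medialness, pulls the path toward the medial axis. The particular cost mapping, 6-connectivity, and data layout are assumptions for illustration rather than the paper's exact choices.

```python
# Chamfer distance transform + Dijkstra sketch for a medial path between two voxels.
import heapq
import numpy as np
from scipy import ndimage

def centerline_path(segmentation, source, target):
    dist = ndimage.distance_transform_cdt(segmentation, metric='chessboard')
    cost = (dist.max() + 1 - dist).astype(float)            # high medialness -> low cost
    cost[~segmentation.astype(bool)] = np.inf               # never leave the object
    visited = np.zeros(segmentation.shape, dtype=bool)
    prev, best, heap = {}, {source: 0.0}, [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if visited[v]:
            continue
        visited[v] = True
        if v == target:
            break
        for axis in range(3):                               # 6-connected neighbors
            for step in (-1, 1):
                n = list(v); n[axis] += step; n = tuple(n)
                if any(i < 0 or i >= s for i, s in zip(n, segmentation.shape)):
                    continue
                nd = d + cost[n]
                if np.isfinite(nd) and nd < best.get(n, np.inf):
                    best[n], prev[n] = nd, v
                    heapq.heappush(heap, (nd, n))
    if target != source and target not in prev:
        return None                                          # target unreachable
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1]
```

Because Dijkstra's algorithm yields shortest paths from the source to every voxel, paths to additional endpoints (e.g., airway branches) come essentially for free once the search has run.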
Shigeo Okuda, Joachim Kettenbach, Andreas Schreyer, Vik Moharir, Toshio Nakagori, Ferenc A. Jolesz, Abdal Majeid Alyassin, William E. Lorensen, Ron Kikinis
Proceedings Volume Medical Imaging 1999: Physiology and Function from Multidimensional Images, (1999) https://doi.org/10.1117/12.349593
Currently, many applications for virtual endoscopy (VE) are available, but fly-through remains troublesome. We use the Virtual Endoscopy Software Application (VESA) in our laboratory. VESA generates a 3D model with a surface-rendering method and automatically generates a fly-through trajectory. In this study, our goal was to evaluate the usefulness of VESA for generating virtual endoscopy (VE) images and automated fly-through trajectories. We applied VESA to clinical cases including the colon, biliary ducts, aortic dissection, and larynx. The original cross-sectional images were acquired with either spiral CT or MRI. VESA's advantages are the following: first, VESA can generate VE images with simple operation; second, a point-to-point correspondence is established between the 2D images/3D models and the VE images; third, the automated trajectory runs close to the center of the hollow organ. VESA is a user-friendly tool for generating VE images, and its automated trajectory reduces the operating time. VESA provides a unique visualization component and makes VE more practical.
Three-dimensional (3D) digital images arise in many scientific applications. In an application such as virtual endoscopy, interactive navigation through the 3D image data -- i.e., the interactive creation of an animated sequence of views -- can reveal more information than single static two-dimensional (2D) views. For this case, the 3D image acts as a 'virtual environment' representing the imaged structures, and a computer-based system can serve as a tool for navigating through the environment. But such navigation must occur at interactive speeds if it is to be practical. This demands fast volume visualization from arbitrary viewpoints. We present an inexpensive, fast, volume-rendering method that can generate a sequence of views at interactive speeds. The method is based on the approximate discrete Radon transform and the temporal coherence concept. The method forms part of a system that permits dynamic navigation through 3D digital images. We provide pictorial and numerical results illustrating the performance of our method.
Virtual endoscopy reconstructions of the body noninvasively provide morphologic information about gross structural abnormalities such as stenoses in airways or blood vessels and polyps in the colonic wall. Surface irregularity or roughness is another indication of abnormality potentially detectable on virtual endoscopy. In this paper, we show how the fractal dimension can be used to quantify surface roughness and how these methods may be applied in virtual angioscopy to distinguish the thoracic aorta of a normal volunteer from that of a patient predisposed to atherosclerosis. Finally, we discuss some problems we encountered in applying fractal analysis to small, noisy datasets.
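A standard way to estimate a fractal dimension from a binary surface mask is box counting: count occupied boxes at several scales and fit the slope on a log-log plot. The sketch below uses this generic estimator, which is not necessarily the authors' exact procedure; the scales are illustrative.

```python
# Minimal box-counting sketch for a 2-D binary mask of a surface trace.
import numpy as np

def box_counting_dimension(mask, scales=(2, 4, 8, 16)):
    counts = []
    for s in scales:
        trimmed = mask[:mask.shape[0] // s * s, :mask.shape[1] // s * s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(max(int(boxes.any(axis=(1, 3)).sum()), 1))   # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope                                                    # estimated fractal dimension
```

The difficulties mentioned in the abstract (small, noisy datasets) show up here as a short, scatter-prone log-log fit with few usable scales.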
Residual stool, residual fluid, and wall collapse are problematic for virtual colonoscopy. Electronic colon cleansing techniques combining bowel preparation and image processing were developed to segment the colon lumen from abdominal computed tomographic (CT) images. This paper describes our bowel preparation and image segmentation techniques and presents some preliminary results. A feasibility study using magnetic resonance imaging (MRI) is also reported.
Medical visualization is a rapidly developing field with many application areas, spanning from visualization of anatomy to surgery planning and understanding of disease processes. With increasing computer speed, medical visualization is becoming more real-time. In this paper, we present a novel application of real-time three-dimensional visualization of coronary arteries during catheter interventions that combines image information from two complementary sources: biplane x-ray contrast angiography and intravascular ultrasound (IVUS). After identification of the three-dimensional characteristics of the intravascular ultrasound pullback sequence, vessel geometry and vessel wall images are combined into a single visualization using semi-automated analysis of a corresponding pair of biplane angiography images. Visualization data are represented using the Virtual Reality Modeling Language (VRML), the code for which is automatically generated by our angiography/IVUS image processing and analysis software system. The VRML approach facilitates real-time 3-D visualization with the ability to process images over the network and disseminate results. The visualization specifics are easily modifiable in near real time to meet the immediate requirements of the end user, the cardiologist performing the coronary intervention.
Surface reconstruction for 3D visualization requires both a segmentation of the image and, at the least, a conversion of the data from an image format to a shape format. While the discrete pixel locations of the segmentation may be satisfactory for quantitative purposes, it is usually important for visual quality to remove such voxelation from the shape data prior to rendering. Removal of this voxelation, depending on the specific segmentation technique, may be problematic, involving either excessive interpolation prior to segmentation or the application of pure surface smoothing, in which important image information may be disregarded. An algorithm is presented here to address these problems. In this algorithm, a smoothly varying threshold level is determined such that the level falls within the interpolated intensity ranges of all the segmentation-boundary voxels. This threshold information is extrapolated to voxels adjacent to the boundary and then used to correct, or normalize, the original image in the vicinity of the boundary. Provided that the directionality of the contrast between the interior and exterior of the boundary is sufficiently consistent throughout the boundary, an isosurface of this normalized image is guaranteed to exist that falls within the voxels of the segmentation boundary.
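The normalization step can be pictured as subtracting the spatially varying threshold from the image near the boundary, after which a single isosurface at level zero falls within the segmentation-boundary voxels. The sketch below is an illustration under that reading; the dilation width and variable names are assumptions, not the paper's exact procedure.

```python
# Illustrative normalization sketch: shift the image by a per-voxel threshold near
# the segmentation boundary so the isosurface of interest sits at level zero.
import numpy as np
from scipy import ndimage

def normalize_near_boundary(image, threshold_field, boundary_mask, width=2):
    near = ndimage.binary_dilation(boundary_mask, iterations=width)  # boundary neighborhood
    normalized = image.astype(float).copy()
    normalized[near] -= threshold_field[near]                        # isosurface now at 0
    return normalized
```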
This paper proposes a method for automated labeling of the bronchial branches in a virtual bronchoscopy system and its application as a training tool. The Virtual Bronchoscopy System (VBS) is a new observation method for 3-D medical images and is useful for a variety of purposes such as diagnosis, surgical planning, informed consent, education, and training. With the proposed method, the VBS can automatically label bronchial branches extracted from 3-D chest X-ray CT images using knowledge-based processing. A knowledge base of bronchial branch names is constructed, and automated labeling is performed by comparing the tree structure of the extracted bronchus with the knowledge base. The bronchial branch name is displayed during navigation inside the bronchus. We extended the VBS into a teaching tool using this function: the system generates questions about bronchial branch names, and when the user navigates inside the bronchus with the VBS, the system presents a question on the virtual endoscopic view and the user answers it. The proposed method was implemented in our VBS. We confirmed that the method can automatically assign anatomical names to about 90% of the bronchial branches extracted from a 3-D X-ray CT image. In an extended module for educational use of the VBS, the system could generate questions about branch names and display them on the virtual endoscopic view automatically.
We demonstrate the added utility of virtual-bronchoscopic techniques for assessing major airway obstructions. Shaded-surface displays of the major airways are combined with tube-geometry analysis of the airway lumen to assist in the study of 57 subjects. In tandem with this study, we also propose a method for automatically finding the centerlines (central axes) of the major airways. The method runs quickly on a typical anisotropically sampled 16-bit high-resolution CT image and has potential for providing useful guidance information for virtual bronchoscopy.
High-resolution micro-CT scanners permit the generation of three-dimensional (3D) digital images containing extensive vascular networks. These images provide the data needed to study the overall structure and function of such complex networks. Unfortunately, human operators have extreme difficulty in extracting the hundreds of vascular segments contained in the images. Also, no suitable network representation exists that permits straightforward structural analysis and information retrieval. This work proposes an automatic procedure for extracting and analyzing the vascular network contained in very large 3D CT images, such as those generated by 3D micro-CT and by helical CT scanners. The procedure is efficient in terms of both execution time and memory usage. As the results demonstrate, the procedure faithfully follows human-defined measurements and provides far more information than can be defined interactively.
We have developed methods for determining 3D vessel centerlines from biplane image sequences. For dynamic quantities, e.g., vessel motion or flow, the correspondence between points along calculated centerlines must be established. We have developed and compared two techniques for determination of correspondence and of vessel motion during the heart cycle. Clinical biplane image sequences of coronary vascular trees were acquired. After manual indication of vessel points in each image, vessels were tracked, bifurcation points were calculated, and vascular hierarchy was established automatically. The imaging geometry and the 3D vessel centerlines were calculated for each pair of biplane images from the image data alone. The motion vectors for all centerline points were calculated using corresponding points determined by two methods, either as points of nearest approach of the two centerlines or as having the same cumulative arclength from the vessel origin. Corresponding points calculated using the two methods agreed to within 0.3 cm on average. Calculated motion of vessels appeared to agree with motion visible in the images. Relative 3D positions and motion vectors can be calculated reliably with minimal user interaction.
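A minimal sketch of the arclength-based correspondence rule described here, assuming each centerline is given as an ordered array of 3D points; the toy centerlines and the applied shift are illustrative.

```python
import numpy as np

def cumulative_arclength(points):
    """Cumulative arclength along a polyline of 3D centerline points."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return np.concatenate(([0.0], np.cumsum(seg)))

def correspond_by_arclength(centerline_a, centerline_b):
    """For each point of centerline A, find the point of centerline B with
    the same cumulative arclength from the vessel origin (linear
    interpolation along B)."""
    s_a = cumulative_arclength(centerline_a)
    s_b = cumulative_arclength(centerline_b)
    matched = np.empty_like(centerline_a)
    for k in range(3):  # interpolate x, y, z separately
        matched[:, k] = np.interp(s_a, s_b, centerline_b[:, k])
    return matched

# Toy example: the same vessel at two phases of the heart cycle (units: cm).
a = np.array([[0, 0, 0], [1, 0, 0], [2, 0.2, 0], [3, 0.5, 0]], float)
b = a + np.array([0.1, 0.2, 0.0])            # displaced by cardiac motion
motion = correspond_by_arclength(a, b) - a   # motion vector per centerline point
print(np.round(motion, 2))
```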
This paper, through plaque quantification, demonstrates the use of three-dimensional (3D) reconstructions of coronary arteries to assess compensatory enlargement. The lumen and medial-adventitial border are segmented from intravascular ultrasound (IVUS) images using a novel 3D method called active surfaces, and the segmented data are used to calculate the cross-sectional areas of the lumen and vessel, respectively. The plaque area for each slice is the difference of the two. Information about the distance between path points, located using a calibrated biplane angiography system, is used to calculate plaque volume. This quantification system can be used to track the progression or regression of atherosclerosis and is currently being used to document compensatory enlargement, a physiological phenomenon in which the overall vessel cross-sectional area increases as plaque area increases, with little or no decrease in luminal cross-sectional area. Four ex-vivo cases have been quantified, all demonstrating this remodeling mechanism, shown by strong positive correlations between plaque area and vessel area over the reconstructed length of the vessel (R = 0.98, 0.93, 0.98, and 0.68).
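The per-slice arithmetic and the volume integration along the angiographic path can be summarized in a few lines; in this sketch the areas, slice spacings, and the trapezoidal integration scheme are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def plaque_areas(vessel_area_mm2, lumen_area_mm2):
    """Per-slice plaque area: vessel (media-adventitia) area minus lumen area."""
    return np.asarray(vessel_area_mm2) - np.asarray(lumen_area_mm2)

def plaque_volume(plaque_area_mm2, slice_spacing_mm):
    """Trapezoidal integration of plaque area along the vessel; the spacing
    between consecutive slices comes from the 3D path-point distances."""
    a = np.asarray(plaque_area_mm2)
    h = np.asarray(slice_spacing_mm)
    return float(np.sum(0.5 * (a[:-1] + a[1:]) * h))

# Illustrative values for four consecutive IVUS slices.
vessel = [18.0, 19.5, 21.0, 20.2]        # mm^2
lumen  = [10.0, 10.2, 10.1, 10.3]        # mm^2
area = plaque_areas(vessel, lumen)
vol = plaque_volume(area, [0.5, 0.5, 0.5])   # mm between slices
print(area, round(vol, 2), "mm^3")
```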
Vascular structures such as the pulmonary arterial tree contain hundreds of thousands of vessel segments, making structural and functional analysis of an entire 3D image volume very difficult. Currently available methods for segmentation and morphometry of 3D vascular tree images require user interaction, making the task very tedious and sometimes impossible. Our aim is to exploit the self-similar nature of arterial trees to simplify morphometric analysis. The structure of pulmonary arterial trees exhibits self-similarity in the sense that the segment length and diameter data from different pathways are statistically indistinguishable for subtrees distal to a given segment diameter. We analyze 3D micro-CT images of mouse and rat pulmonary arterial trees by interactively measuring the lengths and diameters of the vessel segments of the several longest arterial pathways and their immediate branches. Since measurements made on the longest pathways are representative of the tree as a whole, and there are fewer than 30 branches off the main trunk, the morphometry of the complex tree can be characterized by fewer than 100 length and diameter measurements.
Shear rate has been linked to various aspects of arterial wall function and disease, such as cell function, neointimal hyperplasia, post-stenotic dilation, and progression of atherosclerotic plaque. An accurate noninvasive method of measuring blood flow near the arterial wall is needed for the study of wall shear rate in humans to progress. Current clinical vascular laboratories are limited to using the Hagen-Poiseuille formulation, based on steady laminar flow through a rigid-walled tube, which is not realized in vivo. This project used data collected with an ultrasound scanner to approximate the magnitude of wall shear rate in two dimensions using the formal definition of the velocity gradient in the radial direction. Blood flow was measured in the common carotid artery of 20 subjects in both longitudinal and transverse directions. The results showed that the wall shear rate was approximately 1.5 to 2 times greater than the value calculated using the Hagen-Poiseuille formulation. A comparison of the longitudinal and transverse estimation methods showed very similar values for all of the calculated quantities. This comparison supports the conclusion that this image-based technique provides a more accurate assessment of wall shear rate, with the significant addition of a second dimension for analysis.
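To make the comparison concrete, the sketch below contrasts the Hagen-Poiseuille wall shear rate (4 times the mean velocity divided by the radius) with a direct finite-difference estimate of the radial velocity gradient near the wall. The velocity profile, radius, and sampling are illustrative assumptions, not the study's measurements.

```python
import numpy as np

def poiseuille_wall_shear_rate(mean_velocity_cm_s, radius_cm):
    """Hagen-Poiseuille estimate of wall shear rate: 4 * v_mean / R (1/s)."""
    return 4.0 * mean_velocity_cm_s / radius_cm

def measured_wall_shear_rate(radii_cm, velocities_cm_s):
    """Direct estimate: magnitude of the radial velocity gradient dv/dr
    taken between the two samples closest to the wall."""
    dv = velocities_cm_s[-1] - velocities_cm_s[-2]
    dr = radii_cm[-1] - radii_cm[-2]
    return abs(dv / dr)

# Illustrative velocity profile across half the lumen (r = 0 at the centre,
# r = 0.3 cm at the wall); slightly blunter than parabolic.
r = np.linspace(0.0, 0.3, 13)
v = 40.0 * (1.0 - (r / 0.3) ** 2) ** 0.7
# v.mean() is only a rough stand-in for the cross-sectional mean velocity.
print(poiseuille_wall_shear_rate(v.mean(), 0.3))  # Poiseuille-based estimate
print(measured_wall_shear_rate(r, v))             # image-based near-wall estimate
```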
Blood flow rate is an important parameter for the functional evaluation of vascular disease. Instantaneous blood flow measurements from digital cerebral angiograms can be performed during endovascular interventional procedures, providing radiologists with real-time, minimally invasive flow measurements. Most published videodensitometric techniques assume a specific radial velocity profile. Such assumptions can cause errors if the velocity profile differs from the model, changes during the heart cycle, or if the contrast concentration is not radially uniform; all of these conditions are typical in clinical practice. We propose to divide the vessel inside the region of interest into a number of narrow laminae and its X-ray image into corresponding narrow bands. Flow inside each lamina is assumed to be plug flow parallel to the lamina. Blood flow velocities within each band are computed using existing angiographic techniques and are used to compute flow velocities within each lamina and within the entire vessel. We evaluated the new approach on simulated and phantom angiograms. It improved the accuracy of plug-flow algorithms on simulated angiograms; the results obtained on phantom angiograms are inconclusive. The proposed algorithm extends standard videodensitometric flow measurement techniques to allow for radially dependent flow profiles.
The distribution of blood transit times within the pulmonary arterial tree has important implications for overall lung function. Previously, we showed that the pulmonary arterial tree imparts little dispersion to an injected bolus, so that the bolus arrives at downstream arteries with a time delay but little increase in variance. Furthermore, the arterial time delay is nearly the same for all pathways to arteries of the same diameter, independent of pathway length. This small amount of dispersion was observed despite the velocity profile within the arterial tree and the substantial variation in arterial pathway lengths. Thus, we have begun to ask why velocity-profile effects and pathway-length heterogeneity within the pulmonary arterial tree have so little influence on bolus dispersion. X-ray angiography studies were used to visualize streamtube pathways within the pulmonary arterial tree. Full bolus injections were used to visualize all flow streamlines within the tree, while 'streamtube' injections labeled only about 1% of the inlet arterial cross-section. By changing the injector position within the arterial cross-section, different streamtubes were traced and found to remain intact downstream to vessels less than 200 micrometers in diameter. Thus, it appears that lower-velocity streamtubes tend to peel off from the full velocity profile at arterial bifurcations, while streamtubes with higher average velocity travel down the main arterial trunk. The net result is that dispersive velocity-profile effects are mitigated by the interaction between the distributed velocity profile and the branching pattern of the pulmonary arterial tree.
The long-term goal of this research is to determine the clinical relevance of stenosis. Whereas most QCA algorithms calculate the decrease in lumen from a single angiocardiogram, we seek to determine directly the influence of the stenosis on blood flow. The method requires only a slightly different clinical approach compared with 'traditional' non-interventional catheterizations: instead of injecting a steady flow of contrast agent, we propose to inject a string of small droplets. The resulting string of droplets enables us to estimate the relative blood flow by measuring the droplets' times of arrival in designated regions. By repeating the same procedure after administering a vasodilative drug, we obtain the relative decrease (or smaller increase) in blood flow in one of the two distal branches of the bifurcation caused by the stenosis. From the resulting X-ray image sequence, multiple frames are selected and their information is combined to find the relative blood velocity. The conclusion is that it is possible to use sequences of images, instead of just one image, to obtain quantitative results. Major problems to overcome are respiratory and cardiac motion and differences in acquisition parameters between runs. The usefulness of the new method in real clinical applications and its agreement with other measures are currently being evaluated.
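A minimal sketch of the arrival-time bookkeeping behind such a relative-flow estimate, assuming the droplet arrival frames in two designated regions of interest have already been detected; the frame numbers, frame rate, and centerline path length are illustrative.

```python
def relative_velocity(arrival_frame_roi1, arrival_frame_roi2,
                      path_length_mm, frame_rate_hz):
    """Average droplet velocity between two regions of interest (mm/s),
    from the frame of arrival in each ROI."""
    dt = (arrival_frame_roi2 - arrival_frame_roi1) / frame_rate_hz
    return path_length_mm / dt

# Illustrative run at 30 frames/s, ROIs 15 mm apart along the centerline.
v_rest = relative_velocity(12, 20, 15.0, 30.0)    # before vasodilation
v_hyper = relative_velocity(10, 14, 15.0, 30.0)   # after vasodilation (hypothetical)
print(v_rest, v_hyper, round(v_hyper / v_rest, 2))  # relative flow increase
```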
Blood oxygenation level dependent (BOLD) contrast functional MRI responses at 7 T were observed in the cerebellum of alpha-chloralose-anesthetized rats in response to innocuous electrical stimulation of a forepaw or hindpaw. The responses were imaged in both coronal and sagittal slices, which allowed a clear delineation and localization of the observed activations. We demonstrate the validity of our fMRI protocol by imaging the responses in somatosensory cortex to the same stimuli and by showing a high level of reproducibility of the cerebellar responses. Widespread bilateral activations were found, mainly with a patchy, medio-lateral band organization that was more pronounced ipsilaterally. There was no overlap between the cerebellar activations caused by forepaw and hindpaw stimulation. Most remarkable was the overall horizontal organization of these responses: for both stimulation paradigms, the patches and bands of activation were roughly positioned in either a cranial or a caudal plane running antero-posteriorly through the whole cerebellum. This is the first fMRI study of the rat cerebellum. We relate our findings to the known projection patterns found with other techniques and to human fMRI studies; the horizontal organization we observed has not previously been reported with other techniques.
Marleen Verhoye, Boudewijn P. J. van der Sanden, P. Rijken, Hans Peters, A. J. Van Der Kogel, Gilke Pee, Greet Vanhoutte, Arend Heerschap, Anne-Marie Van der Linden
Proceedings Volume Medical Imaging 1999: Physiology and Function from Multidimensional Images, (1999) https://doi.org/10.1117/12.349612
Dynamic magnetic resonance imaging (T1-mapping) using the gadolinium complex Gadomer-17 was applied to characterize the vascular permeability of human glioma xenografts implanted in nude mice. We aimed at measuring permeability differences in two glioma xenograft lines with a known difference in perfusion status. The T1 data were analyzed according to the Tofts-Kermode compartment model, which relates tracer kinetics to vascular permeability. The vascular permeability was mapped as the permeability-surface-area product per unit of leakage volume (k). The two tumor types displayed different k-maps. For the fast-growing E102 tumor, we observed a homogeneous distribution of the vascular permeability across the tumor, with a mean k of (0.207 ± 0.027) min⁻¹. For the slowly growing E106 tumor, however, we could distinguish four regions with different permeability characteristics: a well-perfused rim [k = (0.30 ± 0.09) min⁻¹], regions at the inner side of the tumor with lower permeability [k = (0.130 ± 0.019) min⁻¹], regions at the inner side of the tumor around necrotic areas showing locally increased permeability [k = (0.33 ± 0.11) min⁻¹], and necrotic regions.
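A minimal sketch of the style of per-voxel fit such a k-map involves, under the common simplification that the leakage-space concentration obeys dC_l/dt = k (C_p - C_l); the plasma input function, time axis, noise level, and k value below are illustrative, not taken from the study.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def plasma_curve(t):
    """Illustrative bi-exponential plasma concentration of the tracer."""
    return np.exp(-0.05 * t) + 0.3 * np.exp(-0.005 * t)

def leakage_concentration(t, k):
    """Leakage-space concentration from dC_l/dt = k * (C_p(t) - C_l(t)),
    with k in min^-1 (permeability-surface-area product per unit
    leakage volume)."""
    dCdt = lambda C, tt: k * (plasma_curve(tt) - C)
    return odeint(dCdt, 0.0, t).ravel()

t = np.linspace(0.0, 30.0, 61)                       # minutes
data = leakage_concentration(t, 0.2)                 # synthetic "tumour" curve
data = data + np.random.default_rng(0).normal(0.0, 0.01, t.size)

k_fit, _ = curve_fit(leakage_concentration, t, data, p0=[0.1])
print(f"fitted k = {k_fit[0]:.3f} min^-1")           # ~0.2 for this toy curve
```

Repeating such a fit voxel by voxel would yield the kind of k-map described above.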
Phosphorus-31 magnetic resonance spectroscopy (31P-MRS) has gained much interest in schizophrenia research in recent years, since it allows the non-invasive measurement of high-energy phosphates and phospholipids in vivo. We investigated hemispheric differences in the concentrations of different phosphorus compounds in the frontal lobes. For this purpose, well-defined volumes in the dorsolateral prefrontal cortex of 32 healthy controls and 51 schizophrenic patients were examined. Schizophrenic patients showed significant lateralization effects for phosphodiesters (PDE) and the intracellular pH value. Differences in the lateralization of 31P-MRS parameters between patients and healthy volunteers were detected only for the pH value: while healthy controls exhibited lower pH values in the left frontal lobe (6.96), schizophrenic patients showed lower pH values in the right (6.89). Detailed examination showed that this effect is mainly driven by the subgroup of schizophrenic patients who received atypical neuroleptic medication.
Videodensitometric measurement of blood flow requires injection of a spatially varying contrast density with minimal perturbation of the flow. Constant injection does not yield sufficient spatial gradients for flow measurement during peak flow and diastole, whereas pulsed injection superposes a time-varying perturbation on the measured flow. To avoid these drawbacks, we propose to pre-load the catheter injection tubing with spatially varying contrast density. This allows us to generate spatially localized contrast profiles in phantoms and to inject temporally varying contrast profiles in vivo without pulsatile injection. In phantom experiments we obtain better spatial localization of the bolus using pre-loading than with pulsed injection of pure contrast material. Preliminary results indicate improved accuracy of blood flow measurements using the pre-loaded bolus.
This paper presents an analysis of the Doppler shift and spread caused by moving blood flow using only phased-array beam steering. A mathematical system model provides the exact phase information without the approximations on which conventional beam steering and focusing are based. The target's movement relates to the beam steering time, known as slow time. We assume that the target is stationary while the beam at a given steering angle interacts with it during the beam propagation time, known as fast time. The velocity component in the beam direction relates to the Doppler shift; the velocity component perpendicular to the beam propagation direction corresponds to the Doppler spread. The true velocity vector of the region of interest (ROI) can therefore be estimated with a single phased array, based on measurement of the Doppler shift and spread in the spatial domain. In our analysis, we suppose that the ROI contains multiple velocity vectors, each with a different magnitude and direction. The maximum and mean velocity of the ROI can be estimated under laminar flow; if the flow in the ROI is turbulent, its mean velocity can hardly be estimated, but it is still possible to indicate whether the flow is laminar or turbulent. Numerical simulation results are presented.
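A minimal sketch of how a 2D velocity vector decomposes into an along-beam component (Doppler shift) and a cross-beam component (here mapped to a spread through the beam's angular aperture); the center frequency, sound speed, aperture angle, and the simple spread model are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

C_SOUND = 1540.0     # assumed speed of sound in tissue (m/s)
F0 = 5.0e6           # assumed transmit centre frequency (Hz)

def doppler_shift_and_spread(velocity, steer_angle_rad, aperture_angle_rad):
    """Split a 2D velocity vector (m/s) into its component along the steered
    beam (-> Doppler shift) and across it (-> Doppler spread, modelled here
    simply through the beam's angular aperture)."""
    beam = np.array([np.sin(steer_angle_rad), np.cos(steer_angle_rad)])
    v_par = velocity[0] * beam[0] + velocity[1] * beam[1]   # along the beam
    v_perp = velocity[0] * beam[1] - velocity[1] * beam[0]  # across the beam
    shift = 2.0 * F0 * v_par / C_SOUND
    spread = 2.0 * F0 * abs(v_perp) * np.sin(aperture_angle_rad / 2.0) / C_SOUND
    return shift, spread

# Blood moving at 0.5 m/s, 60 degrees off the array axis; beam steered 10 degrees.
v = 0.5 * np.array([np.sin(np.radians(60.0)), np.cos(np.radians(60.0))])
shift_hz, spread_hz = doppler_shift_and_spread(v, np.radians(10.0), np.radians(20.0))
print(round(shift_hz), round(spread_hz))
```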
Myocardial perfusion reserve can be assessed noninvasively with cardiovascular magnetic resonance. With magnetic resonance (MR) multislice dynamic imaging techniques it is possible to acquire the complete heart during the first pass of a contrast agent bolus. An important diagnostic goal is to obtain quantitative parameters of myocardial perfusion. We developed a model for analyzing the passage of the contrast agent bolus through the myocardium and established a processing chain for the complete task, supporting routine clinical use by delivering these quantitative parameters in a reproducible way. To evaluate the analysis, the signal intensity curves of the first pass of a gadolinium-DTPA bolus injected via a central vein were estimated before and after dipyridamole infusion in a group of patients with single-vessel disease and patients without significant coronary artery disease.
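The abstract does not specify the curve model, but a gamma-variate fit is one common way to parameterize first-pass signal-intensity curves; the sketch below fits one to synthetic data and extracts a simple upslope index. The model choice, parameter values, and noise level are assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, t0, A, alpha, beta):
    """Gamma-variate bolus model: zero before the arrival time t0, then
    A * (t - t0)**alpha * exp(-(t - t0) / beta)."""
    tt = np.clip(t - t0, 0.0, None)
    return A * tt**alpha * np.exp(-tt / beta)

# Illustrative first-pass signal-intensity curve (arbitrary units, 1 s sampling).
t = np.arange(0.0, 40.0, 1.0)
signal = gamma_variate(t, 5.0, 2.0, 2.5, 3.0)
signal = signal + np.random.default_rng(1).normal(0.0, 0.5, t.size)

params, _ = curve_fit(gamma_variate, t, signal,
                      p0=[4.0, 1.0, 2.0, 2.0],
                      bounds=([0.0, 0.0, 0.1, 0.1], [20.0, 50.0, 10.0, 20.0]))
fit = gamma_variate(t, *params)
max_upslope = np.max(np.gradient(fit, t))   # a simple semi-quantitative index
print(f"peak = {fit.max():.2f}, max upslope = {max_upslope:.2f} (a.u./s)")
```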
Computerized volumetric warping and registration of 3D lung images can provide objective, accurate, and reproducible measures for understanding human lung structure and function. It is also invaluable for assessing the presence of disease and its response to therapy. However, due to the complexity of breathing motion, little work has been carried out in this area. In this paper, we propose a novel scheme for volumetric lung warping and registration from 3D CT images obtained at different stages of breathing. Branch points of the airway tree and vessels are selected as feature points since they can be tracked easily over consecutive frames. The warping of these feature points into the entire volume is obtained from a continuum-mechanics model and is implemented iteratively under that model. The model consists of three constraints: an incompressibility constraint, a divergence-free constraint, and a motion-discontinuity-preserving smoothness constraint. An objective function is defined as a weighted sum of the three constraint terms, and the desired displacement field of the whole volume between different stages of breathing is obtained by minimizing this objective function. The 3D warping is therefore represented by the dense displacement field obtained from the iteration. Preliminary results are visualized by overlaying the displacement field on the original images. The effectiveness of the algorithm is also evaluated by comparing the volume difference between the real and warped volumes. We believe the proposed approach will open up several areas of research in lung image analysis that can make use of warped lung volumes.
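A minimal sketch of how a weighted sum of constraint penalties could be evaluated on a discrete displacement field; the finite-difference forms, the weights, and the use of a single divergence penalty to stand in for both the incompressibility and divergence-free terms are assumptions, and the feature-point data term described in the paper is omitted.

```python
import numpy as np

def warping_objective(u, w_div=1.0, w_smooth=0.1):
    """Weighted sum of penalty terms for a displacement field u of shape
    (3, nz, ny, nx): a squared-divergence penalty plus a smoothness penalty
    on the displacement gradients."""
    divergence = sum(np.gradient(u[i], axis=i) for i in range(3))
    div_term = np.mean(divergence ** 2)
    smooth_term = np.mean([np.mean(np.gradient(u[i], axis=j) ** 2)
                           for i in range(3) for j in range(3)])
    return w_div * div_term + w_smooth * smooth_term

# Toy displacement field on an 8x8x8 grid (voxel units).
rng = np.random.default_rng(0)
u = 0.01 * rng.standard_normal((3, 8, 8, 8))
print(warping_objective(u))
```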
Contrast-enhanced CT has an important role in assessing liver lesions, but the optimal injection protocol for obtaining the most effective result is not clear. The main goal in choosing an injection protocol is to optimize lesion detectability by scanning rapidly when lesion-to-liver contrast is at its maximum. For this purpose, we developed a physiological model of contrast medium enhancement based on compartment modeling and pharmacokinetics. Blood is supplied to the liver via two paths, the hepatic artery and the portal vein, and this dual supply distinguishes the CT enhancement of the liver from that of other organs. It is assumed, however, that only the hepatic artery supplies blood to the hepatocellular carcinoma (HCC) compartment, which results in a difference in contrast enhancement between normal liver tissue and the hepatic tumor. By solving the differential equations for all compartments simultaneously in Matlab, CT contrast-enhancement curves were simulated. The simulated enhancement curves for the aortic, hepatic, portal vein, and HCC compartments were compared with the mean enhancement curves from 24 patients scanned with the same protocols as the simulation, and showed good agreement. Furthermore, we simulated lesion-to-liver curves for various injection protocols and analyzed their effects. The variables considered in the injection protocol were injection rate, dose, and concentration of contrast material. These data may help to optimize scanning protocols for better diagnosis.
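A toy version of such a dual-supply compartment simulation is sketched below (in Python rather than the Matlab used by the authors); the aortic input function, rate constants, and compartment structure are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from scipy.integrate import odeint

def aortic_input(t):
    """Illustrative aortic contrast concentration after a short injection."""
    return 100.0 * (t / 10.0) * np.exp(-t / 10.0)

def compartments(y, t, k_pv, k_liver_ha, k_liver_pv, k_hcc, k_out):
    """Toy dual-supply model: the portal vein lags the aorta; the liver is
    fed by both the hepatic artery and the portal vein; HCC is fed by the
    hepatic artery only. All rate constants (1/s) are illustrative."""
    c_pv, c_liver, c_hcc = y
    c_ao = aortic_input(t)
    dc_pv = k_pv * (c_ao - c_pv)
    dc_liver = k_liver_ha * c_ao + k_liver_pv * c_pv - k_out * c_liver
    dc_hcc = k_hcc * c_ao - k_out * c_hcc
    return [dc_pv, dc_liver, dc_hcc]

t = np.linspace(0.0, 180.0, 361)                       # seconds
c = odeint(compartments, [0.0, 0.0, 0.0], t,
           args=(0.05, 0.02, 0.08, 0.05, 0.03))
lesion_to_liver = c[:, 2] - c[:, 1]                    # HCC minus normal liver
t_best = t[np.argmax(np.abs(lesion_to_liver))]
print(f"largest lesion-to-liver difference at t = {t_best:.0f} s")
```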
In this paper, a 3D model-based approach is proposed for tracking the 3D motion (positions and orientations) of the knee from sequences of 2D radiographs. Conventional methods using external skin markers or body models do not accurately reflect the motion of the underlying bone. In contrast, our method uses sequences of radiographs to visualize bone motion directly during activity. A 3D texture-mapped volume rendering is used to simulate a radiograph, i.e., a 2D projected image of the 3D model data. A quadtree-based normalized correlation algorithm is employed to measure the similarity between the projected 2D model image and the pre-processed radiograph. An optimization routine iterates over the six motion parameters until the optimal similarity is obtained. The method has been evaluated using test data collected from an anatomically accurate radiographic knee phantom, specifically the femur portion of the phantom. Further testing is underway using in-vivo radiograph image sequences of a canine hindlimb during treadmill walking.
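A minimal sketch of the normalized correlation similarity at the heart of such a registration loop, with the quadtree decomposition and the volume-rendered projection omitted; the images below are synthetic.

```python
import numpy as np

def normalized_correlation(projected_model, radiograph):
    """Normalized cross-correlation between the simulated projection of the
    3D model and the (pre-processed) radiograph, in [-1, 1]."""
    a = projected_model.astype(float).ravel()
    b = radiograph.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic example: the "radiograph" is a brightened, noisy copy of the
# projected model image, so the similarity should be close to 1.
rng = np.random.default_rng(0)
model_proj = rng.random((64, 64))
radiograph = 1.3 * model_proj + 0.05 * rng.standard_normal((64, 64))
print(round(normalized_correlation(model_proj, radiograph), 3))
```

In the full method, an optimizer would perturb the six pose parameters (three translations, three rotations), re-project the model, and keep the pose that maximizes this similarity for each frame of the sequence.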
In this work we present a comprehensive approach to the kinematic analysis of musculoskeletal structures based on 4D MRI data sets and unsupervised segmentation, and we apply it to the kinematic analysis of knee flexion. The unsupervised segmentation algorithm automatically detects the number of spatially independent structures present in the medical image. The motion-tracking algorithm propagates the segmentation of all structures simultaneously, allowing automatic segmentation and tracking of the soft-tissue and bone structures of the knee in a series of volumetric images. Our approach requires minimal user interaction, eliminating the need for exhaustive tracing and editing of image data. This segmentation approach allowed us to visualize and analyze 3D knee flexion and the local kinematics of the meniscus.
The purpose of this work is to characterize and classify the 3D architecture and kinematics of the joints of the foot in living patients from 3D MR images. In the first part of this work we define a set of architectural parameters to describe the 3D relationships among the peritalar components and propose a new tool to automatically classify normal and pathological feet based on their architectural features. In the second part we extend the architectural method to the kinematics of the foot. Each image data set used in this study consists of 60 longitudinal slices of the foot acquired on a 1.5 T commercial MR system in each of 8 positions from extreme pronation to extreme supination. In the first part of this work, we developed a graphical representation of these parameters that provides useful and specific information for the clinician, and also developed a simple pattern classification method to select the most characteristic parameters for each pathological group. In the second part, we show how these specific parameters vary during motion. The results characterize the normal architectural features of the foot, show how each foot can be automatically classified into one of three pathological groups based on its architecture and its variations, present the normal features of motion, and show how normal and abnormal motion can be distinguished. We conclude that 3D MR imaging and analysis is a unique and powerful technique for studying and following normal and pathological feet in living patients.