The magnetic fields generated by electrical currents in the human body, such as in the brain and the heart, can be measured using superconducting quantum interference devices (SQUIDs). We present two methods by which these magnetic field measurements can be processed to obtain images of the electrical current distributions that caused them: reconstruction using Fourier transform techniques and reconstruction using linear estimation theory. We give a brief overview of both methods, including computer-simulated reconstructions of biological activity.
The SQUID-based biomagnetometer has been widely used to measure the external magnetic field produced by neural activity. In this paper we consider the viability of using these data to reconstruct three-dimensional neuromagnetic images (NMI) of an equivalent electrical current distribution within the brain that would produce the measured magnetic field. The fundamental limitations of this mode of imaging are evaluated, and possible physical models and mathematical formulations of the problem are proposed. Several algorithms often used in medical image reconstruction are applied to the problem and their performance evaluated. We conclude that the reconstruction problem is highly ill-posed and that conventional image reconstruction algorithms are inadequate for 3-D NMI. A class of solutions we call 'minimum dipole' is shown to provide more accurate reconstructions of simple current distributions.
The primary application of magnetic resonance imaging (MRI) has been qualitative and anatomical evaluation of patient status. Recent efforts to analyze image information for quantitative evaluation have centered on two relaxation parameters, T1 and T2, as the descriptors for the image data. In our work we have found that relaxation curves for biologic materials cannot be described by a monoexponential function and that, in a spin echo system, calculated T1 values depend on repetition time. This finding is not unexpected since, in physiologic imaging, any region of interest (ROI) is composed of a number of distinct substances, and the response of that ROI will be a composite of the constituent materials. The purpose of our study was to develop a method by which the relaxation behavior of a composite of physiological materials might be characterized, and to use that characterization to determine its constituent materials. We created a phantom in which volumes of several "pure" materials (blood, plasma, saline and oil) were available, as well as volumes containing concentric enclosures of the pure materials. Images were formed at a number of repetition times ranging from 160 milliseconds to 2 seconds. The image data were then transferred to a VAX 11/750, where regions of interest were marked and the mean image intensity for each ROI at each repetition time was calculated. The resultant relaxation curves of the pure materials formed basis vectors for the composite responses, and the fractional content of each material was determined by a least-squares fit to the basis vectors. Excellent agreement was seen between known and measured mixture percentages. Ongoing work centers on optimizing repetition time selection and accounting for the interaction between species in the mixtures.
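A minimal sketch of the basis-vector fit described above; the repetition times, T1 values, and mixture fractions below are hypothetical placeholders, not the study's measured data.

```python
import numpy as np

# Hypothetical saturation-recovery curves for the "pure" materials at a
# set of repetition times (TR); the T1 values are illustrative only.
tr = np.array([0.16, 0.3, 0.5, 0.8, 1.2, 2.0])               # TR in seconds
basis = np.column_stack([1.0 - np.exp(-tr / t1)
                         for t1 in (0.9, 1.4, 2.5, 0.25)])    # blood, plasma, saline, oil
true_fractions = np.array([0.5, 0.2, 0.0, 0.3])
composite = basis @ true_fractions                            # mixed-ROI response

# Least-squares fit of the composite response to the basis vectors; the
# coefficients estimate the fractional content of each material.
fractions, *_ = np.linalg.lstsq(basis, composite, rcond=None)
print(fractions)                                              # ~[0.5, 0.2, 0.0, 0.3]
```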
The electrical resistivity of various tissues is known to cover a wide range of values, so images of the resistivity distribution within a patient should show good contrast and may prove to have some diagnostic use. Data on the internal distribution of resistivity may be obtained by applying current between electrodes attached to the patient and measuring the voltages developed across the patient's surface. After collection of a complete set of data, a tomographic image of resistivity may be constructed using a filtered back-projection algorithm. Some likely clinical uses are in the assessment of respiratory function and cardiopulmonary dynamics.
The complex permittivities of three-dimensional inhomogeneous biological bodies can be extracted from microwave scattering data by an inverse scattering approach. A water-immersed microwave system is used to contract the wavelength to the millimeter range and to enhance impedance matching with the biological body. Contraction of the wavelength increases the image resolution, while impedance matching promotes microwave penetration. Scattered fields are measured using an array of 127 dipole elements, approximately 15 cm x 18 cm in total size, operating at 3 GHz. Two inverse scattering approaches have been developed. One approach, which has been published earlier, utilizes an inverse scattering theorem that may be considered a generalization of the Lorentz reciprocity theorem to dissipative media. The other approach, presented in this article, takes scattering measurements with the array for various directions of the incident wave; the wave equation is converted to a matrix equation by dividing the dielectric body into a number of cells, and the dielectric data are then obtained by inverting the matrix equation. In both approaches, uniqueness is assured owing to the dissipativity of the propagation medium.
Acquisition times in magnetic resonance imaging (MRI) are typically on the order of minutes. For an image of 256 x 256 pixels, the standard Fourier reconstruction technique used in most commercial imaging systems requires 256 separate free induction decay (FID) signals. While the FID signal itself is of relatively short duration, successive FID signals are separated by long delays, on the order of seconds, to permit substantial relaxation of the signal before the next excitation. The resultant long acquisition times give rise to motion artefacts, preclude dynamic imaging and keep patient throughput low. In this paper, we investigate a fast imaging scheme which uses spiral trajectories in the spatial frequency domain. The entire domain can be sampled in a short time, requiring as few as one FID acquisition. The scheme requires time-varying gradients having the form of a ramped sinusoid. Several reconstruction methods are considered for forming the image from the spatial frequency domain data. The possibility of using multiple spirals to deal with the rapid decay of the FID signal is also examined in detail.
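A sketch of such a spiral trajectory and the gradient waveforms it implies; the amplitude and angular rate below are illustrative parameters, not a tuned pulse sequence.

```python
import numpy as np

# A minimal constant-angular-rate spiral: k(t) = A * t * exp(i*w*t), with
# kx = Re(k), ky = Im(k). Differentiating gives the required gradient
# waveforms, which come out as amplitude-ramped sinusoids.
gamma = 42.58e6                              # 1H gyromagnetic ratio, Hz/T
A, w = 1.0, 2 * np.pi * 50.0                 # spiral growth and angular rates
t = np.linspace(0.0, 0.05, 4096)             # one FID readout window, s

k = A * t * np.exp(1j * w * t)               # spiral k-space trajectory
g = np.gradient(k, t) / gamma                # gradient waveform (up to units)
gx, gy = g.real, g.imag                      # each a ramped sinusoid
```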
MR techniques for producing flow-related images of vessels are generally grouped into two categories: (1) wash-out methods and (2) phase-encoding methods. Wash-out images of arterial flow generally utilize rapid imaging to produce flow-related enhancement. Phase-encoding methods rely upon the effects of phase shifts resulting from motion along a field gradient to produce flow-dependent signal differences. We present the results of experiments which utilized the phase-encoding technique to produce flow images in dogs and normal volunteers.
A magnetic resonance imaging system for investigation and development of nuclear magnetic resonance (NMR) imaging techniques is being constructed by the NMR Imaging Group within the School of Electrical Engineering at the University of Sydney. The system will also be capable of spatially localized in-vivo spectroscopy. The primary requirements of a research instrument are flexibility and low cost, precluding purchase of a commercial system and necessitating in-house construction. The design strategy adopted was to start by building a simple NMR apparatus and expand it in steps, progressing towards a complete imaging and spectroscopy system.
In this paper a sampling scheme is developed for computed tomography (CT) systems that eliminates the need for interpolation. A set of projection angles, along with their corresponding sampling rates, is derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to that of a sampling scheme of regular measurement geometry while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation.
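One plausible reading of the geometric construction, sketched below under the assumption that view directions follow coprime integer lattice vectors of the display grid, so samples along each projection coincide with grid points; the paper's exact rule and bound are not reproduced here.

```python
from math import atan2, gcd, hypot, pi

# Projection angles along coprime lattice directions (q, p): rays through
# the Cartesian grid then pass exactly through grid points, and the sample
# spacing along each projection is commensurate with the pixel pitch.
def lattice_angles(max_pq=4):                # max_pq is an illustrative bound
    angles = []
    for p in range(0, max_pq + 1):
        for q in range(1, max_pq + 1):
            if gcd(p, q) == 1:
                theta = atan2(p, q)          # projection angle in [0, pi/2)
                rate = 1.0 / hypot(p, q)     # sampling rate relative to pitch
                angles.append((theta, rate))
    return sorted(angles)

for theta, rate in lattice_angles():
    print(f"{theta * 180 / pi:5.1f} deg  rate {rate:.3f}")
```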
The Maximum Likelihood Estimator (MLE) method of image reconstruction has been reported to exhibit image deterioration in regions of expected uniform activity as the number of iterations increases beyond a certain point. This apparent instability raises questions as to the usefulness of a method that yields images at different stages of the reconstruction that could have different medical interpretations. In this paper we look in some detail into the question of convergence of MLE solutions at large numbers of iterations and show that the MLE method converges towards the image that it was designed to yield, i.e., the image with the maximum likelihood of having generated the specific projection data resulting from a measurement. We also show that the maximum likelihood image can be a very deteriorated version of the true source image, and that only as the number of counts in the projection data becomes very high will the maximum likelihood image converge towards an acceptable reconstruction.
An electronic collimation system for SPECT imaging has been designed to provide improved detection efficiency over a mechanical collimator. A maximum likelihood estimator (MLE) is derived for this prototype electronically collimated system. The data are shown to be independent and Poisson distributed. For the low count rates typical in nuclear medicine, the statistical fluctuations due to the Poisson process are significant. Consequently, resolution in reconstructions using the linear formulations typical of x-ray computed tomography is limited by the resulting non-stationary noise. The MLE approach, however, incorporates the Poisson nature of the data directly in the reconstruction process and thus yields reconstructions superior to those obtained using a linear approach. Inclusion of noise effects in modeling the system is shown to guarantee the existence of a unique MLE solution. The EM algorithm is employed to find the MLE solution. The structure of the transition probability matrix obtained using a polar sampling raster is exploited to speed up the algorithm. Results of a comprehensive computer simulation are presented.
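For reference, the standard EM (MLEM) update for the Poisson model is sketched below; the system matrix and counts are synthetic placeholders, not the electronically collimated geometry or its polar raster.

```python
import numpy as np

# EM iteration for y ~ Poisson(A @ lam), where A[i, j] is the transition
# probability that an emission from pixel j is recorded in detector bin i.
rng = np.random.default_rng(0)
n_bins, n_pix = 64, 32
A = rng.random((n_bins, n_pix))
A /= A.sum(axis=0)                           # normalize detection probabilities
lam_true = 100.0 * rng.random(n_pix)
y = rng.poisson(A @ lam_true)                # measured counts

lam = np.ones(n_pix)                         # uniform initial estimate
for _ in range(50):
    expected = A @ lam                       # E-step: expected bin counts
    ratio = y / np.maximum(expected, 1e-12)
    lam *= (A.T @ ratio) / A.sum(axis=0)     # M-step: multiplicative update
```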
A new electronically collimated gamma camera utilizing a gas scintillation position-sensitive detector and a multiwire proportional chamber is proposed and its imaging characteristics are discussed in this paper. The scheme preserves all the advantages of an electronically collimated system (ECS) over the conventional NaI gamma camera, i.e., high sensitivity and simultaneous multiple views of the object. Compared with the Ge-based ECS, this scheme would have higher spatial resolution and avoid the construction difficulties of a large-area Ge detector.
Strip (or slot) beam digital radiography has been proposed as an ideal compromise between the excellent scatter rejection of pencil-beam or single-line scanned projection radiography systems and the excellent x-ray utilization of wide-area-beam systems. Moreover, the Kinestatic Charge Detector (KCD) has been proposed as a strip beam detector candidate with the potential for achieving a spatial resolution of over 5 cy/mm, a quantum detection efficiency (QDE) near unity (>90%), and a local exposure time at a given contrast resolution which is shorter than that of other detection techniques (i.e., reduced motion blurring). Several laboratory KCDs containing various numbers of channels have now been constructed and tested, allowing a better understanding of the practical performance which can be expected from a strip beam digital radiography system using a KCD.
A desirable mammography imaging system should offer high spatial resolution, high sensitivity, a wide dynamic range and a means of limiting detected x-ray scatter. Significant scatter reduction can be obtained with a slot scan acquisition format with minimal attenuation of the primary beam. A potential receptor for digital mammography comprises an x-ray screen optically coupled directly to a CCD detector. The dynamic range of our current front-illuminated CCD is about 400 using factory-supplied readout circuitry. The high spatial resolutions of the CCD (22.4 lp/mm at the Nyquist frequency) and the x-ray screen are at present degraded by poor optical contact. X-rays which interact with the CCD generate a moderate signal (25-30% of the optical signal) when a thin (22 mg/cm2) Gd2O2S:Tb screen is used. Image segments of a Kodak breast phantom were acquired with both the screen-CCD detector and the CCD detector alone. Substantial improvements in detector performance could be achieved by utilizing a back-illuminated CCD with a slow-scan, correlated double-sampled readout and a structured x-ray screen. The concept of direct optical coupling with structured screens can be implemented in high-energy digital scanning systems.
The Rotating Aperture Wheel (RAW) rapid multiple scanning beam device is demonstrating marked contrast improvement in initial clinical trials. Although the RAW device is well suited to the short exposure time requirements of chest radiography, such clinical application has not yet been attempted because the RAW's efficient scatter reduction only accentuates the inherent dynamic range problem. If a large dynamic range receptor, such as very wide latitude film or a storage phosphor, were used in combination with the RAW and the resulting images were digitally contrast enhanced, a higher signal-to-noise ratio could be realized across the full range of required exposure in lung, subdiaphragmatic, and mediastinal fields. We have recently begun such a study using a system combining the RAW, wide latitude film, and film digitization with scanning microdensitometer facilities for evaluating images of a chest phantom. Computer-enhanced RAW images compared to those obtained using a conventional 12:1 grid show: a) no loss in feature identification in the lung field, b) better visualization of low contrast nodules as well as rib detail in the subdiaphragmatic region, and c) generally better contrast in the mediastinal region. The results clearly encourage further quantitative studies of phantoms as well as preparation for clinical application using larger image matrices.
It is desirable to limit detected x-ray scatter and the x-ray beam bandwidth, since these two factors affect image contrast and the data used for material composition analysis. Substantial scatter reduction is possible if slit scan imaging techniques are utilized. A multilayer structure comprising a stack of high and low atomic number materials can be used in the reflective mode to generate a narrow-bandwidth slit-like x-ray beam. Theoretical reflectivity data are presented for multilayer mirror designs which could be used for mammography, the imaging of iodinated contrast material, CT and dual energy analysis. A breast phantom was imaged at 21 keV using a ReW-C multilayer, a radiographic W-anode source and a screen-film receptor. Reflected spectra were measured with the ReW-C multilayer and a Mo anode tube. Modifications to the mirror-based imaging system and the need for an efficient detector are considered as means of reducing tube heating.
A solid-state digital x-ray detector is described which can replace high resolution film in industrial radiography and has potential for application in some medical imaging. Because of the 10 micron pixel pitch on the sensor, contact magnification radiology is possible and is demonstrated. Methods for frame speed increase and integration of sensor to a large format are discussed.
A Monte Carlo method was developed and implemented to simulate x-ray photon transport through a 6 cm thick adipose phantom. Simulations used molybdenum target x-ray spectra ranging from 20 kVp to 55 kVp. For comparison to the polyenergetic simulations, corresponding monoenergetic runs were also performed. The dependence of the scatter fraction (SF), scatter-to-primary ratio (SP), and point spread function (PSF) on the x-ray spectrum is reported in terms of both quantum flux and energy flux.
The purpose of this short tutorial is to highlight selected papers from recent SPIE conferences, with emphasis on the areas of signal detection theory, statistical decision theory and pattern recognition, image evaluation, and image processing. The selection is biased toward the author's special areas of interest and, as is usual in reviews of this kind, a common set of threads is sought. The papers are referenced in terms of the SPIE volume number and paper number (000-00). The first common thread is that the volume numbers tend to be palindromes, namely 454, 535, 626, and the present 767, and indicate the non-linear growth of the Society between annual Medical Imaging symposia.
This paper describes a high resolution x-ray imaging device which is being developed through NIH sponsorship by the University of Arizona. It consists of an external modular x-ray sensor, a proximity-focussed image intensifier and six CCDs coupled to the output of the image intensifier via six fiber optic tapers. The tapers are joined at their large ends to form a coplanar fiber optic taper assembly. The spatial resolution is expected to be determined by the external sensor up to the Nyquist frequency of the CCD, which is (after magnification by the tapers) 3.9 lp/mm. The intended application is coronary angiography.
This study presents preliminary results of single-exposure dual-energy computed radiography using laser-stimulable luminescent phosphor imaging plate detectors. The single-exposure technique makes use of four of these plates in a single cassette, each plate acting as an x-ray filter to the next, so that the energy separation required for the dual-energy basis decomposition is achieved. An analysis to determine the best operating technique for the chest was performed using computer simulation; the best technique was found to be 85 kVp and 14 mAs. This technique yields the same entrance exposure value as is used clinically, so that the information for the decomposition is obtained without additional dose to the patient. The iso-transmission line technique was used as the decomposition algorithm. A humanoid chest phantom was used to test the quality of the resulting calibration material equivalent images. The quality of the images, although slightly inferior to that of dual-exposure techniques, seems acceptable for clinical application.
Methods for measuring the modulation transfer function and the detective quantum efficiency of a photostimulated luminescence digital radiography system are described.
A computed radiography system is described in which a storage phosphor is photostimulated by a scanning laser. Optical effects in the depth of the storage medium are calculated in a theoretical model, and their effect on excess noise and system detective quantum efficiency (DQE) is predicted. The predictions are compared with measured data.
One of the major problems in radiation therapy is ensuring that the correct region of the patient receives the prescribed x-ray treatment and that the surrounding tissues are spared. One way to identify patient positioning errors is to make an image using the radiotherapy treatment beam. We have examined two of the factors that can influence the quality of images made with high energy x-ray beams: (i) the size of the x-ray source, and (ii) the signal-to-noise characteristics of the detectors used to form the images. We have developed a novel method of measuring the source distributions of 60Co machines and linear accelerators, and from these measurements have been able to obtain the modulation transfer functions (MTFs) of their x-ray sources. We have also measured the MTFs and the noise power spectra (NPS) of the x-ray detectors. Based on these measurements, we conclude that images made with high energy x-ray beams are limited by film granularity and that improved images can be obtained with alternative detector systems.
A digital x-ray imaging system for quantitative arterial imaging and blood flow characterization has been developed in our laboratory and evaluated with a phantom study. The system consists of a low noise photo-diode array detector optically coupled to an x-ray image intensifier. The diode array can be considered as an "add-on" to an existing conventional XRII system to produce high quality images of small regions of interest selected from a larger field of view in fluoroscopy mode. X-ray collimation is used to reduce scatter and veiling glare to increase contrast, reduce a non-linear effect in logarithmically subtracted images, and minimize noise.
Without contrast material, cardiac wall motion can be clearly depicted using a subtraction method. X-ray images subtracted from an end-diastolic phase image give sufficient information to evaluate cardiac motion quantitatively. Clinical validity is discussed on actual data, and several observations that distinguish abnormal cardiac motion are obtained. In normal cases, the contraction of the cardiac wall is approximately symmetrical about the long axis. The degree of contraction in the apex was significantly less than that in the anterior wall, and that in the inferior wall was slightly greater than that in the anterior wall. The profile analysis showed that greater contraction occurred in the inferior wall, probably because of an upper forward shift of the center of gravity. Localized abnormalities of contraction were found in areas of previous myocardial infarction as dyskinesis, akinesis, and hypokinesis. Phase images of regional wall motion showed that contraction was synchronous in the normal cases and asynchronous in the ischemic cases. The corresponding amplitude images showed the degree of contraction clearly.
Computer simulation techniques are used to examine Pr, Gd and Yb (K-edge) filters for the x-ray imaging of iodinated blood vessels. The performance of these filters is compared to that of a standard 2 mm Al filter with respect to vessel contrast, patient exposure, integral absorbed dose, and x-ray tube loading. Additional simulations investigate how 0.2 mm Pr, Gd or Yb filters interact with the non-isotropic x-ray spectrum and affect the background intensity and vessel contrast across the detector surface, as well as the uniformity of exposure and integral absorbed dose across the patient. The results show that the uniformity of the primary x-ray image is neither degraded nor improved by these filters; however, patient exposure and dose can be substantially reduced and rendered more uniform.
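The core operation behind such simulations is Beer-Lambert filtration of a polyenergetic spectrum. In the sketch below, the spectrum and the attenuation table (with a jump at a ~50 keV K-edge) are coarse illustrative stand-ins, not tabulated Pr, Gd or Yb data.

```python
import numpy as np

# Transmit a polyenergetic spectrum through a 0.2 mm K-edge filter:
# fluence_out(E) = fluence_in(E) * exp(-mu/rho(E) * rho * t).
energy = np.array([30.0, 40.0, 50.0, 60.0, 70.0, 80.0])   # keV
fluence = np.array([1.0, 3.0, 4.0, 3.5, 2.5, 1.5])        # relative photons
mu_rho = np.array([9.0, 4.2, 12.0, 7.5, 5.0, 3.6])        # cm^2/g, illustrative
rho, thickness = 7.9, 0.02                                 # g/cm^3, cm (0.2 mm)

filtered = fluence * np.exp(-mu_rho * rho * thickness)     # transmitted spectrum
print(filtered / filtered.sum())                           # K-edge-shaped result
```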
A user-friendly, PC/AT-based, high resolution fluoroscopic imaging computer has been developed and applied in routine clinical use. Diagnostic quality images are produced using the PPR (Pulsed Progressive Readout) technique and a frame-integration technique for live fluoroscopic images. Hard copy images are conveniently produced using a multiformat camera or a laser imager. Clinical evaluations show that the quality of a PPR image with 1024 by 1024 pixel resolution is equivalent to that of a 105 mm film image and approaches that of a 9 inch cassette spot film image. The benefits of the fluoroscopic imaging computer include: radiation dose reduction, higher contrast sensitivity, enhanced resolution, immediate image viewing, elimination of retakes, lower operating cost and PACS readiness.
In our department, it is planned that the gastro-intestinal fluoroscopic area will be equipped entirely with digital imaging systems. The use of a 1024 x 1024 pixel frame store, backed by a hard disc for rapid image transfer, and the production of hard copy on a laser imager have reached the point where clinical efficacy and acceptance are assured. The further addition of facilities for annotation and the application of digital post-processing techniques are being explored both at the clinical site and at the research laboratories. The use of laser imaging has produced a further improvement in image quality, and some of the practical problems related to this apparatus will be described. The availability of larger capacity laser disc image storage enables the local area network or "mini-PACS" system for fluoroscopy areas to become a concept worthy of investigation. We present our experience over a number of years with these systems, together with our latest investigations into potential applications of laser technology to the practice of radiology in a busy imaging centre.
We extend a previous analysis of the influence of stochastic amplifying and scattering mechanisms on the transfer of signal modulation and photon noise to include the simplest case of nonlinear film grain response, namely that associated with a two-quantum threshold. This special case allows an exact analytical solution for radiographic screen-film noise as a function of quantum exposure level. The solution has been derived under the assumption of a uniform checkerboard array of grains.
High-speed scanning microdensitometers are now used routinely to characterize the noise in applications such as radiographic screen-film systems and the films used for hard copy output of digital printers. These measurements are often limited by the internal noise sources of such instruments. We have analyzed the principal sources of noise in a commercially available, two-dimensional scanning microdensitometer and have found them to be associated either with the glass platen (whose noise is correlated from scan to scan) or with other sources which are uncorrelated from scan to scan, depending on film density. These component noise sources, as well as the total instrument noise, have been measured as a function of density and spatial frequency. Methods for reducing these noise components, including a cross-spectrum technique, are reviewed, and their utility is demonstrated by application to the estimation of noise power spectra of a radiographic screen-film system.
A cross-power spectral method is described for separating the scanning system noise spectrum from the film noise spectrum in a film measuring system. This method provides a more accurate way to measure the true film noise spectrum. The spectral separation is done in two steps: (1) two samples are taken at each sample position on the film to produce a pair of independently sampled data arrays; (2) the cross-power spectrum is then computed from this pair of arrays using the fast Fourier transform (FFT) technique. Since film noise and scanner noise are uncorrelated, it is shown that this cross-power spectrum is an estimate of the film noise spectrum. The use of this technique is demonstrated using simulated film data.
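The two-step procedure can be simulated directly, as in the sketch below; the noise amplitudes are arbitrary. Because the film noise is common to both scans while the scanner noise is independent, the averaged cross-power spectrum converges to the film spectrum alone.

```python
import numpy as np

# Two scans of the same film track: shared film noise, independent scanner noise.
rng = np.random.default_rng(1)
n_rec, n = 256, 1024
film = rng.normal(0.0, 1.0, (n_rec, n))              # common film noise
scan1 = film + rng.normal(0.0, 0.5, (n_rec, n))      # scan 1 adds scanner noise
scan2 = film + rng.normal(0.0, 0.5, (n_rec, n))      # scan 2, independent noise

F1 = np.fft.rfft(scan1, axis=1)
F2 = np.fft.rfft(scan2, axis=1)
cross = np.mean((F1 * np.conj(F2)).real, axis=0) / n  # ~ film spectrum only
auto = np.mean(np.abs(F1) ** 2, axis=0) / n           # film + scanner spectrum
```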
Recent work aimed at quantifying the statistical efficiency of diagnostic imaging systems has resulted in the frequent use of the detective quantum efficiency (DQE) and noise equivalent quantum exposure (NEQ). Estimation of these metrics requires the separate measurement of several imaging parameters. An analysis is given which results in expressions for DQE and NEQ estimate errors in terms of component measurement error statistics.
Working Group V of the ACR-NEMA Digital Imaging and Communications Standards Committee has developed a standard for the exchange of ACR-NEMA formatted data on magnetic tape. This standard makes use of the message structure provided in the ACR-NEMA Digital Imaging and Communications Standard (NEMA 300-1985) and formats it in an ANSI standard label and file structure so as to be compatible with one-half inch magnetic tape systems and software which comply with the ANSI standards. The ACR-NEMA magnetic tape standard has evolved since preliminary information was presented at SPIE Medicine XIV/PACS IV in 1986. At present, the standard has been circulated for comment and will soon be balloted. This paper presents the magnetic tape standard in its final form and discusses the future direction of the Working Group. The Working Group is strongly considering the use of a media-independent standard for use with the ACR-NEMA message structure so as to accommodate a diverse set of exchange media.
A two-stage adaptive vector quantization scheme for radiographic image sequence coding is introduced. In vector quantization, an image sequence is first mapped into a vector set; each vector is then encoded by two distinct pieces of information, the label and the corresponding codeword. The main problem in adaptive vector quantization is how to track the changes occurring in the sequence by updating the labels and the codewords. From the point of view of image vector quantization, the changes occurring in radiographic image sequences can be categorized into two types: those due to body motion and those due to the injected contrast dye material. In the proposed scheme, encoding is performed in two stages. In the first stage, the label memory of the primary codebook is replenished to track the changes caused mainly by patient motion. In the second stage, the residual error vectors drawn from the area with contrast dye material are further encoded by a small secondary codebook. These areas are reliably detected, as their mean values increase with the arrival of the contrast dye. By preferentially allocating extra bits (codewords) to these areas, both low distortion and better reproduction of the diagnostically useful information are obtained. Numerical and pictorial results are presented and demonstrate that good reproduction, especially of those parts of the image containing contrast dye, can be obtained at a compression ratio of approximately 10 to 1 (about 0.8 bits/pixel).
The full-frame bit allocation algorithm for radiological image compression developed in our laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operations: a two-dimensional discrete cosine transform and pixel quantization in the transform space, with pixel depth kept accountable by a bit allocation table. The greatest engineering challenge in implementing a hardware version of the compression system lies in the fast cosine transform of 1K x 1K images. Our design took an expandable modular approach based on the VME bus system, which has a maximum data transfer rate of 48 Mbytes per second, with a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. Our design allows for a maximum image size of 2K x 2K.
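The two stages (full-frame DCT plus bit-allocated quantization) can be sketched in software as follows; the allocation rule shown, which simply grants more bits to low spatial frequencies, is an illustrative placeholder, not the laboratory's table.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Full-frame 2-D DCT, quantization with a per-coefficient bit budget,
# then inverse transform to recover the image.
def compress(img, max_bits=8):
    coeff = dctn(img, norm="ortho")
    fy, fx = np.indices(img.shape)
    bits = np.clip(max_bits - (fy + fx) // 64, 0, max_bits)  # allocation table
    step = np.abs(coeff).max() / 2.0 ** bits                 # quantizer step
    quantized = np.round(coeff / step) * step                # quantize/dequantize
    quantized[bits == 0] = 0.0                               # zero-bit coefficients
    return idctn(quantized, norm="ortho")

img = np.random.default_rng(2).random((256, 256))
reconstructed = compress(img)
```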
Two major design issues of a medical digital picture archiving and communications system (PACS) are its performance and its standardization. The latter is provided for by standards such as the ACR-NEMA Standard. The performance factor is directly related to the number of bits actually transported. Therefore, transportation of both compressed and uncompressed images within a PACS should be considered. The ACR-NEMA Standard, however, does not support the use of compressed images. In this paper we present a proposal for an enhancement of the ACR-NEMA Standard by means of which both original and compressed images can be used. This is exemplified by showing typical implementations of some compression techniques. As compressed images are more vulnerable to transmission errors, the topic of error detection is dealt with as well. The fault detection method used in the ACR-NEMA Standard and other methods are evaluated on their effectiveness for the transportation of compressed images within a network.
The Hotelling trace criterion (HTC) is a measure of class separability used in pattern recognition to find a set of linear features that optimally separate two classes of objects. We use the HTC here not as a figure of merit for features, but as a figure of merit for characterizing imaging systems. In an earlier study, a set of images, created by overlapping ellipses, was used to simulate images of livers, with and without tumors, with noise and blur added to each image. Using the ROC parameter da as our measure, we found that the ability of the HTC to separate these images into their correct classes, by detecting the presence or absence of a tumor, has a correlation of 0.988 with the ability of humans to separate the same two classes of objects. In our most recent observer study, we used a mathematical model of normal and diseased livers, and of the imaging system to generate a realistic set of liver images. These images simulate those a physician would use in making a diagnosis, yet we have control over the disease state of the liver, and hence the object class it belongs to, as well as the amount of degradation added to the image from the imaging system. When an observer study was performed with these images we found the performance of the HTC to have a correlation of 0.829 with the performance of the human observers, with da as our measure of performance.
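The criterion itself is compact: with between-class scatter S1 and average within-class scatter S2, J = tr(S2^{-1} S1). The sketch below evaluates it on random placeholder feature vectors standing in for image data; the liver-image simulation itself is not reproduced.

```python
import numpy as np

# Hotelling trace for two classes of feature vectors (rows = samples).
def hotelling_trace(class_a, class_b):
    ma, mb = class_a.mean(axis=0), class_b.mean(axis=0)
    m = 0.5 * (ma + mb)                                  # grand mean
    s1 = 0.5 * (np.outer(ma - m, ma - m)
                + np.outer(mb - m, mb - m))              # between-class scatter
    s2 = 0.5 * (np.cov(class_a, rowvar=False)
                + np.cov(class_b, rowvar=False))         # within-class scatter
    return np.trace(np.linalg.solve(s2, s1))

rng = np.random.default_rng(3)
normal = rng.normal(0.0, 1.0, (200, 8))
tumor = rng.normal(0.5, 1.0, (200, 8))                   # shifted mean: separable
print(hotelling_trace(normal, tumor))                    # larger J = more separable
```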
The conventional whitening matched filter is linear in the data, even for edge-detection and object-location tasks. We have considered some special cases of the next order, or quadratic, matched filter which is second order in the data. Whereas integrals of NEQ-like quantities determine the performance of the linear filter, integrals of squared NEQ-like quantities determine the performance of the nonlinear filter. In the low contrast limit the NEQ-like quantities are precisely NEQ (noise equivalent quanta), and otherwise can be found by the Karhunen-Loeve transformation. The higher power means that these tasks are more sensitive to the higher frequency response of the hardware than are the linear tasks. Whether the human observer is capable of such quadratic tasks is an interesting open question.
Disk features of positive or negative polarity were superimposed on images of stationary, Gaussian, uncorrelated noise. In one detection task, the three observers rated the likelihood that specified image locations contained a feature of known polarity rather than the uniform noise background. In a second detection task, the same observers rated the likelihood that each specified location contained a feature of unknown polarity (dark or bright), and then indicated which was the more likely polarity, assuming that a feature was present. In the third task (polarity discrimination), observers rated the likelihood that the feature, known to be present in each location, was positive rather than negative in polarity. Independent samples of images varied the feature's contrast to manipulate the observers' performance within each task. An index of each observer's detection or discrimination accuracy, obtained from the measured ROC curve for each observer, task, and condition, was compared to the calculated value of the same index for the realized cross-correlator. With noisy images of this type, the cross-correlator is an "ideal observer" that yields a physically optimal decision variable. In all three tasks, the observers' performance indices were closely proportional to the cross-correlator's (slope about 0.57). This indicates an "observer efficiency" similar to the levels we measured in tasks using CT images, for which the cross-correlator is suboptimal. Observers' detection of the positive-contrast and negative-contrast features did not differ in either the known- or unknown-polarity tasks.
An automatically computable fidelity measure which correlates well with observer preference is needed to facilitate the optimization of image processing methods. This study evaluates the use of the convolution mean squared error (CMSE) as such a measure. To compute the CMSE, both the true image and the "test image" are passed through a filter or other processing system; the mean squared error between the two identically processed images is then determined. A high-pass filter and, separately, a low-pass filter were optimized for this purpose. The inclusion of an early visual system model before these filters was also evaluated. The true image used was a high-resolution, high-count, nuclear medicine image of a liver and spleen phantom. Simulated acquisitions of this true image, restored using the constrained least squares method with one of nine coarseness functions, provided the "test images." A low-pass filter of low cut-off frequency and low order gave CMSE values which correlated well (Spearman rank correlation coefficient rs = 0.88) with average ranks from observer preference studies. A high-pass filter of high order and high cut-off frequency yielded similar results (rs = 0.86).
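The CMSE computation itself is short; in the sketch below a Gaussian low-pass stands in for the optimized filters, and the images are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# CMSE: filter both the true and test images identically, then take the MSE.
def cmse(true_img, test_img, sigma=3.0):
    return np.mean((gaussian_filter(true_img, sigma)
                    - gaussian_filter(test_img, sigma)) ** 2)

rng = np.random.default_rng(4)
truth = rng.random((128, 128))
test = truth + rng.normal(0.0, 0.1, truth.shape)   # noisy "restored" image
print(cmse(truth, test))
```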
EM (expectation-maximization) algorithms for maximum likelihood estimation have received considerable attention in recent years due to their computational feasibility in tomographic image reconstruction. It is less widely recognized, however, that EM algorithms can be equally applicable to image processing in general. In this paper, the theoretical foundation of EM algorithms is first outlined. Applications of EM algorithms in both image reconstruction and image processing are then described, and some characteristics of these EM algorithms are discussed.
A common class of nonlinear signal and image processing methods involves the use of median filters or the more general rank-ordered operators. Sifting theory models these operators in a manner that allows intuitive parallels to be drawn to analogous linear filtering procedures. Such insights should provide guidance on how to utilize these available techniques and synthesize new ones.
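A rank-ordered operator on a 1-D signal can be written directly, as sketched below: each sliding window is sorted and the element of a chosen rank is kept. The middle rank gives the median filter; the extreme ranks give erosion- and dilation-like operators.

```python
import numpy as np

# Generic rank-order filter: sort each window, keep the element at `rank`.
def rank_filter(x, size, rank):
    pad = size // 2
    xp = np.pad(x, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(xp, size)
    return np.sort(windows, axis=1)[:, rank]

x = np.array([1., 1., 9., 1., 1., 5., 5., 0., 5., 5.])
print(rank_filter(x, 3, 1))   # median filter: removes the isolated impulses
```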
Current image processing algorithm development is not based on an efficient mathematical structure that is specifically designed for image manipulation, feature extraction and analysis. Vast increases in image processing activities in such areas as robotics, medicine, and expert computer vision systems have resulted in an immense proliferation of different operations and architectures that all too often perform similar or identical tasks. Due to this ever-increasing diversity of image processing architectures and languages, several attempts have been made to develop a unified algebraic approach to image processing. However, these attempts have been only partially successful. In this paper, we define a heterogeneous algebra (in the sense of G. Birkhoff) which is capable of expressing all image-to-image transformations that can be defined in terms of finite algorithmic procedures. Conversely, for any image-to-image transformation defined as a finite sequence of terms in the image algebra, there is a structured program scheme that computes the transformation. Consequently, this algebra provides a common mathematical environment for image processing algorithm development, comparison, performance characterization, and optimization.
Many current medical image processing algorithms utilize Fourier Transform techniques that represent images as sums of translationally invariant complex exponential basis functions. Selective removal or enhancement of these translationally invariant components can be used to effect a number of image processing operations such as edge enhancement or noise attenuation. An important characteristic of many natural phenomena, including the structures of interest in medical imaging is spatial self-similarity. In this work a filtering technique that represents images as sums of scale invariant self-similar basis functions will be presented. The decomposition of a signal or image into scale invariant components can be accomplished using the Mellin Transform, which diagonalizes changes of scale in a manner analogous to the way the Fourier Transform diagonalizes translation.
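The scale-diagonalizing property can be demonstrated in one dimension, as in the sketch below: resampling a profile on a logarithmic axis turns a change of scale into a shift, so the FFT magnitude of the log-resampled profile (a discrete Mellin-type transform) is scale invariant up to edge effects. The profiles and axis limits are illustrative.

```python
import numpy as np

# Discrete Mellin-type transform: log-resample, then FFT magnitude.
def log_fft_mag(f, r_min=0.01, r_max=50.0, n=4096):
    u = np.linspace(np.log(r_min), np.log(r_max), n)   # logarithmic axis
    return np.abs(np.fft.fft(f(np.exp(u))))

profile = lambda r: np.exp(-r ** 2)                    # a test profile
scaled = lambda r: np.exp(-(2.0 * r) ** 2)             # same profile, rescaled

m1, m2 = log_fft_mag(profile), log_fft_mag(scaled)
print(np.max(np.abs(m1 - m2)) / m1.max())              # small relative difference
```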
This paper presents an image analysis system based on an extension of two mathematical theories: general topology and set theory. In this way, we obtain appropriate tools for representing and analyzing digitized images as particular subsets of discrete spaces, on which the usual topological concepts are somewhat inadequate. The first version of the system is presented and illustrated with an example.
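One small illustration of why discrete spaces need special treatment: on a pixel grid, "connectedness" depends on the chosen adjacency (4- versus 8-neighborhood), a distinction with no analogue in the usual topology of the plane. The sketch below shows the setting only, not the paper's specific formalism.

```python
import numpy as np
from scipy.ndimage import label

img = np.array([[1, 0],
                [0, 1]])

n4 = label(img, structure=np.array([[0, 1, 0],
                                    [1, 1, 1],
                                    [0, 1, 0]]))[1]   # 4-adjacency
n8 = label(img, structure=np.ones((3, 3)))[1]         # 8-adjacency
print(n4, n8)  # 2 components under 4-adjacency, 1 under 8-adjacency
```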
In computerized tomography, certain types of artifacts can be detected by checking the consistency of the reconstruction against the available a priori information. A reconstructed image is said to be consistent if it satisfies all a priori knowledge. This paper demonstrates how the consistency principle can be used to detect reconstruction artifacts.
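As one simple example of such a check, assuming parallel-beam geometry: the zeroth moment (total mass) of every projection of the same object must be equal, so angle-to-angle variation flags corrupted data or inconsistent reconstructions. This single test is illustrative, not the paper's full consistency framework.

```python
import numpy as np

def mass_consistency(sinogram, tol=1e-2):
    """sinogram: (n_angles, n_detectors). True if projection masses agree."""
    masses = sinogram.sum(axis=1)            # total mass per view
    spread = np.ptp(masses) / masses.mean()  # relative angle-to-angle spread
    return spread < tol

sino = np.ones((180, 64))
print(mass_consistency(sino))   # True: all views carry the same mass
sino[90] *= 1.5                 # simulate one corrupted projection
print(mass_consistency(sino))   # False: inconsistency detected
```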
One of the potential advantages of digital radiography is that it allows computerized methods to be applied in the analysis of image abnormalities. In this study, automated detection of microcalcifications in digital mammograms was investigated. We employed a spatial filtering technique to suppress the structured background in a mammogram while enhancing the microcalcifications. Signal-extraction techniques based on the physical characteristics of microcalcifications were then used to isolate clustered microcalcifications from the remaining noise background. To obtain test mammograms with known signal locations for evaluation of the detection accuracy of the computer method, we employed Monte Carlo techniques to generate simulated microcalcification clusters which were then superimposed on normal mammograms. The results indicate that the computer method can achieve a true-positive cluster detection rate of approximately 85% at a false-positive detection rate of one cluster per image for microcalcifications of average subtlety. Detection accuracy is expected to increase if the parameters of the image-processing and signal-extraction techniques are optimized further.
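A minimal sketch of the background-suppression step, assuming a difference-of-Gaussians style spatial filter as a stand-in for the paper's specific filters: a small kernel preserves microcalcification-sized signals, a large kernel estimates the structured background, and their difference enhances small bright spots. Kernel sizes and the threshold rule are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_microcalcifications(mammogram, sigma_signal=1.0, sigma_bg=8.0):
    signal = gaussian_filter(mammogram, sigma_signal)  # keep small details
    background = gaussian_filter(mammogram, sigma_bg)  # structured background
    return signal - background                         # suppress background

def extract_candidates(enhanced, k=3.0):
    """Threshold at k standard deviations above the mean residual."""
    return enhanced > enhanced.mean() + k * enhanced.std()
```

Clustering of the surviving candidate pixels, based on the physical characteristics of microcalcifications, would then separate true clusters from the remaining noise background.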
Detectors of computed radiography systems have a very wide linear dynamic range and enable faithful acquisition of image data, independent of the chosen x-ray exposure or the dynamic range of the data within the image. To yield consistently viewable images of satisfactory quality without direct user interaction, the clinically useful data span must be scaled semi-automatically to the available display range. Three methods are discussed: detector auto-ranging with renormalization of the filtered image, local histogram equalization, and anatomy-specific auto-ranging of the filtered image. Presently, the first method is favored and employed clinically.
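A minimal sketch of the auto-ranging idea, assuming a percentile-based window: the clinically useful span is estimated from the image histogram and linearly remapped to an 8-bit display range. The percentile cutoffs are illustrative choices, not the clinical algorithm.

```python
import numpy as np

def auto_range(raw, lo_pct=1.0, hi_pct=99.0):
    """Map the useful data span of a wide-dynamic-range image to 8 bits."""
    lo, hi = np.percentile(raw, [lo_pct, hi_pct])  # estimated useful span
    scaled = (raw - lo) / (hi - lo)                # normalize to [0, 1]
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```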
Detection of cancerous lung nodules in chest radiographs is one of the more important tasks performed by radiologists; however, the "miss rate" associated with the radiographic detection of lung nodules is approximately 30%. A computerized scheme that alerts the radiologist to possible locations of lung nodules should help reduce this number of false-negative diagnoses. We are developing a computer-aided nodule detection scheme based on a difference image approach. We attempt to eliminate the camouflaging background structure of the normal lung anatomy by creating, from a single-projection chest image, two images: one in which the signal-to-noise ratio (SNR) of the nodule is maximized and another in which that SNR is suppressed while the processed background remains essentially the same. Thus, the difference between these two processed images should consist of the nodule superimposed on a relatively uniform background, in which the detection task may be simplified. This difference image approach is fundamentally different from conventional subtraction techniques (e.g., temporal or dual-energy subtraction) in that the two images which are subtracted arise from the same single-projection chest radiograph. Once the difference image is obtained, thresholding is performed along with tests for circularity, size and growth in order to extract the nodules. It should be noted that once an original chest image is input to the computer, the nodule detection process is totally automated.
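A minimal sketch of the difference-image idea, using Gaussian smoothing as a stand-in for the two filters: one filter roughly matched to nodule size (maximizing nodule SNR) and one broad filter that suppresses the nodule while preserving the background, both applied to the same single-projection image. The filter widths and threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_image(chest, sigma_match=4.0, sigma_suppress=12.0):
    snr_max = gaussian_filter(chest, sigma_match)     # nodule-enhanced image
    snr_sup = gaussian_filter(chest, sigma_suppress)  # nodule-suppressed image
    return snr_max - snr_sup   # nodule on a roughly uniform background

def nodule_candidates(diff, threshold):
    """Grey-level thresholding; circularity/size/growth tests would follow."""
    return diff > threshold
```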
The digital imaging group at the University of Arizona Health Sciences Center Radiology Department is vigorously pursuing the development of a total digital radiology department (TDRD). One avenue of research being conducted is to define the needed resolutions and capabilities of TDRD systems. Parts of that effort are described in these proceedings and elsewhere. One of these investigations is to assess the general application of computed radiography (CR) in clinical imaging. Specifically, we are comparing images produced by the Toshiba computed radiography system (Model 201) to those produced by conventional imaging techniques. This paper describes one aspect of that work.
This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. Based on this study, a theoretical framework for an expert image analysis system is proposed. In this scheme, chromosome classification is carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system are used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected.
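A minimal sketch of the hypothesize-and-verify paradigm, assuming a pattern recognizer that proposes a karyotype class for each chromosome and a small rule base that flags inconsistent hypotheses; the single rule shown (a normal karyotype has exactly two chromosomes per autosome class) is illustrative, not the paper's rule base.

```python
from collections import Counter

def verify(hypotheses):
    """hypotheses: class labels proposed by the pattern-recognition stage."""
    counts = Counter(hypotheses)
    # Rule: each autosome class (1..22) should appear exactly twice.
    violations = [c for c in range(1, 23) if counts.get(c, 0) != 2]
    return violations  # classes the rule-based stage should re-examine

print(verify([1, 1, 2, 2, 3]))  # -> [3, 4, ..., 22]: classes to revisit
```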
The volumes of various organs in the body, including the liver and spleen, have been measured from image data obtained by the S.P.E.C.T. technique. Organ volumes were obtained by applying a semi-automated edge detection algorithm to each tomographic slice image in the stack of slices encompassing the organ. The technique was validated by applying it to images of phantoms of known volume.
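A minimal sketch of slice-stack volumetry, assuming each tomographic slice has already been segmented into an organ mask: the organ volume is the voxel count times the voxel volume. The per-slice edge detection is replaced here by a simple threshold for illustration.

```python
import numpy as np

def organ_volume(slices, pixel_area_mm2, slice_thickness_mm, threshold):
    """slices: (n_slices, h, w) array of SPECT counts; returns volume in mm^3."""
    voxel_volume = pixel_area_mm2 * slice_thickness_mm
    n_voxels = sum(int((s > threshold).sum()) for s in slices)
    return n_voxels * voxel_volume
```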
In this paper, it is shown that X-ray and CT images can be enhanced by using the Fast Polynomial Transform (FPT) to implement 2-D circular convolution. A hidden modulo arithmetic for the FPT with operating length 64 and a new design for a simple 2-D frequency-sampling filter are presented. This convolution filtering of X-ray and CT images is very helpful for the efficient diagnosis of lung cancer and liver tuberculosis.
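For reference, the sketch below computes the same 2-D circular convolution via the FFT as a stand-in for the paper's FPT implementation (the FPT evaluates the identical convolution with different arithmetic); the length-64 arrays match the operating length mentioned above, and the smoothing mask is illustrative.

```python
import numpy as np

def circular_convolve_2d(image, kernel):
    """2-D circular convolution; both arrays must share the same shape."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

img = np.random.rand(64, 64)
kern = np.zeros((64, 64))
kern[:3, :3] = 1.0 / 9.0            # simple 3x3 smoothing mask, zero-padded
smoothed = circular_convolve_2d(img, kern)
```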
An algorithmic process has been developed for the non-linear enhancement of x-ray images with noisy characteristics. The enhancement process involves the application of two specialized algorithms in series. The first algorithm enhances the image by non-linear modification of pixel values based upon neighborhood relationships. The second algorithm removes noise in the processed image which may have been amplified by the enhancement process. This two-stage approach results in enhanced images with good edge and texture definition but minimal noise amplification.
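A minimal sketch of such a two-stage scheme, assuming a thresholded unsharp-masking rule for the non-linear neighborhood enhancement and a median filter for the noise-removal stage; the paper's specific pixel-modification rules are not given, so both stages are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def enhance(image, gain=1.5, size=5, t=0.02):
    """Non-linear neighborhood enhancement: boost only significant detail."""
    local_mean = uniform_filter(image, size)        # neighborhood context
    detail = image - local_mean
    boosted = np.where(np.abs(detail) > t, gain * detail, detail)
    return local_mean + boosted

def two_stage(image):
    """Stage 1 enhances; stage 2 suppresses the noise stage 1 amplified."""
    return median_filter(enhance(image), size=3)
```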