In a standard computed tomography (CT) image, pixels with the same Hounsfield unit (HU) value can correspond to different materials, making it challenging to differentiate and quantify materials. Dual-energy CT (DECT) is desirable for differentiating multiple materials, but DECT scanners are not as widely available as single-energy CT (SECT) scanners. Here we develop a deep learning approach to perform DECT imaging using standard SECT data. The end point of the approach is a model capable of providing the high-energy CT image for a given input low-energy CT image. We retrospectively studied 22 patients who received contrast-enhanced abdominal DECT scans. The differences between the predicted and original high-energy CT images are 3.47 HU, 2.95 HU, 2.38 HU, and 2.40 HU for the spine, aorta, liver, and stomach, respectively. The differences between virtual non-contrast (VNC) images obtained from the original DECT and from the deep learning DECT are 4.10 HU, 3.75 HU, 2.33 HU, and 2.92 HU for the spine, aorta, liver, and stomach, respectively. The aortic iodine quantification difference between iodine maps obtained from the original DECT and the deep learning DECT images is 0.9%. This study demonstrates that highly accurate DECT imaging from single low-energy CT data is achievable with a deep learning approach. The proposed method can significantly simplify DECT system design while reducing scanning dose and imaging cost.
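A minimal sketch of the kind of image-to-image mapping described above: a convolutional network that predicts a high-energy CT slice from a low-energy input. The residual design, layer sizes, and L1 training objective are illustrative assumptions, not details published in the abstract.

```python
# Hedged sketch: low-energy -> high-energy CT slice mapping (assumed architecture).
import torch
import torch.nn as nn

class LowToHighEnergyNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Encoder: extract features from the low-energy slice.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: predict a correction added to the low-energy image.
        self.decoder = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, low_energy):
        # Residual learning: high-energy image = low-energy image + learned correction.
        return low_energy + self.decoder(self.encoder(low_energy))

model = LowToHighEnergyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute HU error between prediction and target

# One illustrative training step on random tensors standing in for
# co-registered low-/high-energy DECT slice pairs.
low = torch.randn(4, 1, 256, 256)
high = torch.randn(4, 1, 256, 256)
loss = loss_fn(model(low), high)
loss.backward()
optimizer.step()
```

Once trained on paired DECT slices, such a model would take only the low-energy SECT image at inference time.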
Current image-guided prostate radiotherapy often relies on implanted fiducial markers (FMs) or transducers for target localization. Fiducial or transducer insertion requires an invasive procedure that adds cost and carries risks of bleeding, infection, and discomfort for some patients. We are developing a novel markerless prostate localization strategy that uses a pre-trained deep learning model to interpret routine projection kV X-ray images without the need for daily cone-beam computed tomography (CBCT). A deep learning model was first trained using several thousand annotated projection X-ray images. The trained model is capable of identifying the location of the prostate target in a given input X-ray projection image. To assess the accuracy of the approach, three prostate cancer patients who received volumetric modulated arc therapy (VMAT) were retrospectively studied. The target positions obtained with the deep learning model were compared quantitatively with the actual prostate positions. The deviations between the target positions obtained by the deep learning model and the corresponding annotations ranged from 1.66 mm to 2.77 mm in the anterior-posterior (AP) direction and from 1.15 mm to 2.88 mm in the lateral direction. The target positions provided by the deep learning model for the kV images acquired using the OBI were found to be consistent with those derived from the implanted FMs. This study demonstrates, for the first time, that highly accurate markerless prostate localization based on deep learning is achievable. The strategy provides a clinically valuable solution for daily patient positioning and real-time target tracking in image-guided radiotherapy (IGRT) and interventions.
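A minimal sketch, under assumed design choices, of a network that regresses the prostate target position from a single kV projection image. The backbone, the two-component output (AP and lateral displacement), and the input size are illustrative; the abstract does not specify this architecture.

```python
# Hedged sketch: kV projection image -> in-plane target position (assumed design).
import torch
import torch.nn as nn

class ProstateLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional backbone over the projection image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two outputs: anterior-posterior and lateral displacement (mm).
        self.head = nn.Linear(32, 2)

    def forward(self, kv_image):
        f = self.features(kv_image).flatten(1)
        return self.head(f)

model = ProstateLocalizer()
kv_image = torch.randn(1, 1, 512, 512)   # stand-in for a projection kV image
predicted_shift = model(kv_image)        # [AP_mm, lateral_mm] once trained
```

In practice such a model would be trained against annotations derived from the implanted FMs and then applied to markerless images.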
Tomographic imaging using a penetrating wave, such as X-rays, light, or microwaves, is a fundamental approach to generating cross-sectional views of internal anatomy in a living subject or interrogating the material composition of an object, and it plays an important role in modern science. To obtain an image free of aliasing artifacts, sufficiently dense angular sampling that satisfies the Shannon-Nyquist criterion is required. In the past two decades, image reconstruction with sparse sampling has been investigated extensively using approaches such as compressed sensing. This type of approach is, however, ad hoc in nature, as it encourages certain forms of images. Recent advances in deep learning provide an enabling tool to transform the way an image is formed. Along this line, Zhu et al.1 presented a data-driven supervised learning framework to relate sensor- and image-domain data and applied the method to magnetic resonance imaging (MRI). Here we investigate a deep learning strategy for tomographic X-ray imaging in the limit of a single-view projection input. For the first time, we introduce the concept of dimension transformation in the image feature domain to facilitate volumetric imaging from a single 2D projection or a few 2D projections. The mechanism here is fundamentally different from that of traditional approaches in that image formation is driven by prior knowledge cast in the deep learning model. This work pushes the boundary of tomographic imaging to the single-view limit and opens new opportunities for numerous practical applications, such as image-guided interventions and security inspections. It may also revolutionize the hardware design of future tomographic imaging systems.
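A minimal sketch of the dimension-transformation idea: 2D features learned from a single projection are reinterpreted as a 3D feature volume, which a 3D decoder maps to a volumetric image. All layer sizes and the channel-to-depth reshaping are assumptions for illustration, not the published network.

```python
# Hedged sketch: single 2D projection -> 3D volume via a feature-domain
# dimension transformation (assumed layer sizes).
import torch
import torch.nn as nn

class SingleViewReconstructor(nn.Module):
    def __init__(self, feat=64, depth=16):
        super().__init__()
        self.depth = depth
        # 2D encoder on the single projection image.
        self.encode2d = nn.Sequential(
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * depth, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # 3D decoder operating on the transformed feature volume.
        self.decode3d = nn.Sequential(
            nn.ConvTranspose3d(feat, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, projection):
        b = projection.size(0)
        f2d = self.encode2d(projection)            # (B, feat*depth, H/4, W/4)
        h, w = f2d.shape[-2:]
        # Dimension transformation: reinterpret channels as a depth axis,
        # turning 2D feature maps into a 3D feature volume.
        f3d = f2d.view(b, -1, self.depth, h, w)    # (B, feat, depth, H/4, W/4)
        return self.decode3d(f3d)                  # volumetric output

model = SingleViewReconstructor()
projection = torch.randn(1, 1, 128, 128)   # single 2D projection
volume = model(projection)                 # shape (1, 1, 64, 128, 128)
```

The reshaping step is where the learned prior does the work: the missing third dimension is supplied by the model rather than by additional measured views.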