Recent advances in imaging technology have brought new challenges and opportunities for automatic, quantitative analysis of medical images. With broader accessibility of imaging modalities to more patients, fusion of modalities/scans from a single time point and longitudinal analysis of changes across time points have become the two most critical differentiators in supporting more informed, more reliable, and more reproducible diagnosis and therapy decisions. Unfortunately, both scan fusion and longitudinal analysis are inherently plagued by increased levels of statistical error. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice.
In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation all contribute to quantification error, and that voxel anisotropy and lesion shape interact in an intricate way in determining that error. Specifically, when two or more scans are fused at the feature level, optimal linear fusion analysis reveals that scans whose voxel anisotropy is aligned with the lesion elongation should receive a higher weight than other scans; such optimal linear fusion achieves a lower variance than naïve averaging. Simulated experiments validate the theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
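The variance-reduction claim can be illustrated with a minimal sketch of optimal linear (inverse-variance) fusion of two noisy volume measurements. The per-scan variances and the ground-truth lesion volume below are hypothetical, standing in for a scan whose voxel grid is favorably aligned with the lesion elongation (low variance) and one that is not (high variance):

```python
import numpy as np

# Hypothetical per-scan measurement variances: scan A's anisotropy is
# aligned with the lesion elongation (lower error), scan B's is not.
var_a, var_b = 0.5, 2.0

# Optimal linear weights are proportional to inverse variance: w_i ∝ 1/var_i.
w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
w_b = 1 - w_a

rng = np.random.default_rng(0)
n = 100_000
true_volume = 10.0  # hypothetical ground-truth lesion volume
a = true_volume + rng.normal(0, np.sqrt(var_a), n)
b = true_volume + rng.normal(0, np.sqrt(var_b), n)

fused = w_a * a + w_b * b  # optimal linear fusion
naive = 0.5 * (a + b)      # naïve averaging

# Theory: Var(fused) = 1/(1/var_a + 1/var_b) = 0.4,
#         Var(naive) = (var_a + var_b)/4    = 0.625
print(fused.var(), naive.var())
```

The better-aligned scan receives weight w_a = 0.8, and the fused estimator's variance (0.4) falls below both the naïve average's (0.625) and the better single scan's (0.5).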
In this paper, we propose a new method for automated delineation of tumor boundaries in whole-body PET/CT that jointly uses information from both PET and diagnostic CT images. Our method takes advantage of initial robust hot spot detection and segmentation performed in PET to provide a conservative tumor structure delineation. Using this estimate as initialization, a model of tumor appearance and shape in the corresponding CT structures is learned, and this model provides the basis for classifying each voxel as either lesion or background. The CT classification is then probabilistically integrated with the PET classification using the joint likelihood ratio test technique to derive the final delineation. This multimodal approach achieves more accurate and reproducible tumor delineation without additional user intervention. The method is particularly useful for improving the PET delineation when CT shows clear contrast edges between tumor and healthy tissue, and for enabling PET-guided CT segmentation when such contrast is absent in CT.
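The joint likelihood ratio test step can be sketched per voxel: each modality contributes a lesion-vs-background likelihood ratio, and under a conditional-independence assumption the joint ratio is their product. The Gaussian class parameters and intensity values below are illustrative placeholders, not the learned models described above:

```python
import numpy as np

# Toy per-voxel intensities; real models would be learned from the
# initial PET-based tumor estimate rather than fixed by hand.
pet = np.array([4.0, 1.2, 3.5, 0.8])     # hypothetical PET uptake values
ct = np.array([60.0, 30.0, 55.0, 25.0])  # hypothetical CT values (HU)

def gauss(x, mu, sigma):
    """Gaussian class-conditional density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Per-modality likelihood ratios, lesion class vs. background class
# (all class means/sigmas here are assumed for illustration).
lr_pet = gauss(pet, mu=3.8, sigma=1.0) / gauss(pet, mu=1.0, sigma=1.0)
lr_ct = gauss(ct, mu=55.0, sigma=10.0) / gauss(ct, mu=28.0, sigma=10.0)

# Joint likelihood ratio test: assuming PET and CT are conditionally
# independent given the class, multiply the ratios and threshold at 1.
lesion = (lr_pet * lr_ct) > 1.0
print(lesion)  # → [ True False  True False]
```

Because the ratios multiply, a strong CT edge can rescue an ambiguous PET voxel and vice versa, which mirrors the complementary behavior described above.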