Precise volumetric evaluation of the liver is crucial to mitigate the risk of postoperative liver failure following hepatectomy. However, existing liver resection volumetry methods offer limited functionality, providing only liver and tumor volumes and a simple calculation of the future liver remnant (FLR). To enhance understanding of liver resection volumetry, we introduce a flexible tool that integrates resection plans with different underlying data (liver parenchyma, liver segment classification) and allows the user to interactively select and calculate the volume of chosen regions of interest (ROIs), either individually or in combination with other ROIs. This flexibility lets the tool scale to complex cases, for example multiple resections within the same resection plan. Working alongside an experienced surgeon, we implemented two resection strategies and compared various ROI volumes between them. Through these usage scenarios, we demonstrate the tool's proficiency in facilitating complex volumetry analysis for liver resection planning.
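As an illustration of the interactive ROI volumetry described above, the sketch below computes the volume of any combination of labelled ROIs from a segmentation volume. The function name, label values, and voxel spacing are hypothetical; the abstract does not describe the tool's actual data structures.

```python
import numpy as np

def roi_volume_ml(label_map, voxel_size_mm, labels):
    """Volume (in mL) of the union of the given ROI labels.

    label_map:     integer array where each voxel stores an ROI label
    voxel_size_mm: (dx, dy, dz) voxel spacing in millimetres
    labels:        iterable of ROI labels to combine
    """
    voxel_ml = np.prod(voxel_size_mm) / 1000.0          # mm^3 -> mL
    mask = np.isin(label_map, list(labels))             # union of chosen ROIs
    return mask.sum() * voxel_ml

# Toy example: two "segments" in a 10x10x10 volume, 2 mm isotropic voxels.
seg = np.zeros((10, 10, 10), dtype=np.int32)
seg[:5] = 1   # hypothetical segment 1: 500 voxels
seg[5:] = 2   # hypothetical segment 2: 500 voxels
total = roi_volume_ml(seg, (2.0, 2.0, 2.0), [1, 2])    # whole liver
flr = roi_volume_ml(seg, (2.0, 2.0, 2.0), [2])         # hypothetical remnant
print(total, flr)  # 8.0 4.0
```

Combining labels via a union mask is what allows one ROI to be measured individually or jointly with others, as the tool does.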
Because of the non-specificity of acute bilirubin encephalopathy (ABE), accurate classification based on structural MRI is difficult. Due to the complexity of the diagnosis, multi-modality fusion has been widely studied in recent years. Most current medical image classification studies fuse only image data of different modalities; phenotypic features that may carry useful information are usually excluded from the model. In this paper, a multi-modal fusion strategy for classifying ABE is proposed, which combines different MRI modalities with clinical phenotypic data. The baseline consists of three individual paths for training different MRI modalities, i.e., T1, T2, and T2-FLAIR. The feature maps from the different paths were concatenated to form multi-modality image features. The phenotypic inputs were encoded into a two-dimensional vector to prevent the loss of information, and a Text-CNN was applied as the feature extractor for the clinical phenotype. The extracted text feature map was concatenated with the multi-modality image feature map along the channel dimension, and the resulting MRI-phenotype feature map was sent to a fully connected layer. We trained/tested (80%/20%) the approach on a database of 800 patients' data, where each sample comprises three 3D brain MRI modalities and the corresponding clinical phenotype data. Different comparative experiments were designed to explore the fusion strategy. The results demonstrate that the proposed approach achieves an accuracy of 0.78, a sensitivity of 0.46, and a specificity of 0.99, outperforming models using MRI or clinical phenotype data alone. Our work suggests that fusing clinical phenotype data with image data can improve the performance of ABE classification.
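The channel-wise concatenation fusion described above can be sketched as follows. All channel counts and spatial sizes are hypothetical, chosen only to make the tensor shapes concrete; the abstract does not report the network's actual dimensions.

```python
import numpy as np

# Hypothetical feature maps: each MRI path (T1, T2, T2-FLAIR) yields a
# 64-channel map and the Text-CNN yields a 32-channel map, all on an
# 8x8 grid for a batch of 4 (shape convention: N, C, H, W).
rng = np.random.default_rng(0)
img_t1 = rng.standard_normal((4, 64, 8, 8))
img_t2 = rng.standard_normal((4, 64, 8, 8))
img_flair = rng.standard_normal((4, 64, 8, 8))
text_feat = rng.standard_normal((4, 32, 8, 8))

# Step 1: concatenate the three modality paths along the channel axis.
img_feat = np.concatenate([img_t1, img_t2, img_flair], axis=1)  # (4, 192, 8, 8)

# Step 2: append the phenotype feature map, again along channels.
fused = np.concatenate([img_feat, text_feat], axis=1)           # (4, 224, 8, 8)

# Step 3: global-average-pool to the vector a fully connected layer
# would map to the ABE classes.
pooled = fused.mean(axis=(2, 3))                                # (4, 224)
print(pooled.shape)
```

Concatenating along channels (rather than, say, summing) keeps the image and phenotype features separate so the subsequent layer can learn its own weighting between them.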
Are there abnormal manifestations in the structural magnetic resonance imaging (sMRI) of patients with autism spectrum disorder (ASD)? Although a few brain regions have been implicated in the pathophysiological mechanism of the disorder, no gold standard for sMRI-based diagnosis has been established in the academic community. Recently, powerful deep learning algorithms have been widely studied and applied, offering an opportunity to explore the structural brain abnormalities of ASD through model-based visualization. In this paper, a 3D-ResNet with an attention subnet for ASD classification is proposed. The model combines residual modules with an attention subnet that masks regions relevant or irrelevant to the classification during feature extraction. The model was trained and tested on sMRI from the Autism Brain Imaging Data Exchange (ABIDE). Five-fold cross-validation shows an accuracy of 75%. Grad-CAM was further applied to reveal which components the model emphasized during classification, and the class activation maps of multiple slices of representative sMRI scans were visualized. The results show highly related signals in regions near the hippocampus, corpus callosum, thalamus, and amygdala, which may confirm some previous hypotheses. The work is not limited to ASD classification but also attempts to explore anatomical abnormalities with a promising visualization-based deep learning approach.
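The Grad-CAM step mentioned above can be sketched as follows, assuming access to a 3D convolutional layer's activations and the gradients of the class score with respect to them. The layer shape is hypothetical, and this is the generic Grad-CAM computation, not the paper's specific implementation.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from a conv layer's activations and the
    gradients of the class score w.r.t. those activations.

    activations, gradients: arrays of shape (C, D, H, W) for a 3D layer.
    Returns a (D, H, W) map normalised to [0, 1].
    """
    # Channel weights: globally averaged gradients (the alpha_k in Grad-CAM).
    weights = gradients.mean(axis=(1, 2, 3))              # (C,)
    # Weighted sum of the activation maps, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)      # (D, H, W)
    cam = np.maximum(cam, 0)
    # Normalise for overlay on sMRI slices.
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Slicing the returned volume along its first axis gives the per-slice activation maps that are overlaid on the sMRI to highlight regions such as the hippocampus or amygdala.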
Laparoscopic videos can be affected by different distortions which may impact the performance of surgery and introduce surgical errors. In this work, we propose a framework for automatically detecting and identifying such distortions and their severity using video quality assessment. This work presents three major contributions: (i) a novel video enhancement framework for laparoscopic surgery; (ii) a publicly available database for quality assessment of laparoscopic videos, evaluated by expert as well as non-expert observers; and (iii) an objective video quality assessment of laparoscopic videos, including correlations with expert and non-expert scores.
In image guided surgery, stereo laparoscopes have been introduced to provide a 3D view of the organs during the laparoscopic intervention. This stereo video could be used for purposes other than simple viewing, such as depth estimation, 3D rendering of the scene, and 3D organ modeling. This paper aims at reconstructing the 3D liver surface based on a stereo vision technique. The estimated surface of the liver can later be used for registration to a preoperative 3D model constructed from MRI/CT scans. For this purpose, we resort to a variational disparity estimation technique that minimizes a global energy function over the entire image. More precisely, based on the gray-level and gradient constancy assumptions, a data term and local as well as nonlocal smoothness terms are defined to build the cost function. The latter is minimized with an appropriate optimization technique to estimate the disparity map. In order to reduce the disparity search range and the influence of noise, the global variational approach is performed on the coarsest level of a multi-resolution pyramidal representation of the stereo images. The obtained low-resolution disparity map is then up-sampled to the original scale with a modified joint bilateral filtering method. In vivo liver datasets with ground truth are difficult to obtain, so the proposed method is evaluated quantitatively on two cardiac phantom datasets from the Hamlyn Centre, achieving an accuracy of about 2.2 mm for heart1 and 2.1 mm for heart2, with up to 97% of points reconstructed for heart1 and 100% for heart2. Qualitative validation on in vivo porcine liver datasets has shown that the proposed method can estimate the geometry of untextured surfaces well.
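A minimal sketch of the data term built from the gray-level and gradient constancy assumptions, paired with a winner-takes-all baseline in place of the paper's global variational minimization (the `alpha` weighting, image sizes, and search range are hypothetical, and the smoothness terms and joint bilateral upsampling are omitted):

```python
import numpy as np

def data_cost(left, right, d, alpha=0.5):
    """Per-pixel data term for a candidate disparity d, combining the
    gray-level constancy term |I_L(x) - I_R(x - d)| with the gradient
    constancy term |dI_L(x) - dI_R(x - d)|."""
    left = left.astype(float)
    shifted = np.roll(right.astype(float), d, axis=1)  # toy shift (wraps at the border)
    grad_l = np.gradient(left, axis=1)
    grad_r = np.gradient(shifted, axis=1)
    return (1 - alpha) * np.abs(left - shifted) + alpha * np.abs(grad_l - grad_r)

def wta_disparity(left, right, d_max):
    """Winner-takes-all minimisation over a reduced search range, standing
    in for the paper's global energy minimisation."""
    costs = np.stack([data_cost(left, right, d) for d in range(d_max + 1)])
    return costs.argmin(axis=0)

# Synthetic check: the right image is the left image shifted by 3 pixels,
# so the recovered disparity should be 3 almost everywhere.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, -3, axis=1)
disp = wta_disparity(left, right, d_max=8)
print((disp == 3).mean())
```

Running the search only at the coarsest pyramid level, as the paper does, would shrink `d_max` and suppress noise before the map is upsampled back to full resolution.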