The number of digital medical images has been growing steadily over the years. This opens up new possibilities for extracting information from them with computer-assisted methods, such as artificial intelligence. In this context, the application of radiomics has received increasing attention since 2012. In radiomics, medical image data are exploited by extracting numerous features that are not directly visible to the human eye. These features provide valuable information for diagnosis, prognosis, and therapy, especially in cancer research. In this paper, we introduce a web-based Radiomics module for end users under StudierFenster (www.studierfenster.at), which can extract image features for tumor characterization. StudierFenster is an online, open science medical image processing framework into which multiple clinically relevant modules and applications have been integrated since its initiation in 2018/2019, such as a medical VR viewer and automatic cranial implant design. The newly integrated Radiomics module allows the upload of medical images and segmentations of a region of interest to StudierFenster, where predefined radiomic features are calculated from them using the 'PyRadiomics' Python package. The Radiomics module is able to calculate not only the basic first-order statistics of the images, but also more advanced features that capture the 2D/3D shape and gray level characteristics. The design of the Radiomics module follows the architecture of StudierFenster, where computation-intensive procedures, such as preprocessing the data and calculating the features for each image-segmentation pair, are executed on a server. The results are stored in a CSV file, which can afterwards be downloaded via the web-based user interface.
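To illustrate the kind of server-side computation such a module performs, here is a minimal sketch of feature extraction with the PyRadiomics package; the file names and the default extractor settings are illustrative assumptions, not the module's actual configuration.

```python
# Minimal sketch of server-side radiomics feature extraction with PyRadiomics.
# File names and extractor settings are illustrative placeholders.
import csv
from radiomics import featureextractor

# Default extractor: enables first-order, shape, and gray-level feature classes.
extractor = featureextractor.RadiomicsFeatureExtractor()

# One image-segmentation pair (e.g., files uploaded by the user).
features = extractor.execute("image.nrrd", "segmentation.nrrd")

# Store feature names and values in a CSV file for later download.
with open("features.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["feature", "value"])
    for name, value in features.items():
        writer.writerow([name, value])
```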
The aorta is the largest vessel of the human body, and its pathological degenerations, such as dissections and aneurysms, can be life threatening. An automatic and fast segmentation of the aorta can therefore be a helpful tool to quickly identify an abnormal anatomy. The segmentation of the aortic vessel tree (AVT) typically requires extensive manual labor, but, in recent years, progress in deep learning techniques has made the automation of this process viable. For this purpose, we tested different deep neural networks for segmenting the aortic vessel tree from computed tomography angiography (CTA) scans, each consisting of an encoder-decoder architecture with skip connections and an optional self-attention block. The networks were trained on a dataset of 56 CTA scans from three different sources and achieved Dice similarity scores between 0.043 and 0.897. Generally, the classical U-Nets performed better than the variants containing a self-attention block, indicating that self-attention might diminish performance for AVT segmentation. The quality of the resulting segmentations was highly dependent on the CTA image quality, especially on the contrast between the aorta and the surrounding tissues. Nevertheless, the trained deep neural networks can segment CTA scans well with limited computational resources and training data.
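As an illustration of this architecture family, the following is a minimal 3D encoder-decoder with skip connections in PyTorch; the depth and channel counts are assumptions and do not reproduce the exact networks evaluated in this work.

```python
# Minimal sketch of an encoder-decoder segmentation network with skip
# connections (U-Net style); depth and channel counts are illustrative only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)           # encoder level 1
        self.enc2 = conv_block(16, 32)          # encoder level 2
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)          # 64 = 32 (upsampled) + 32 (skip)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv3d(16, 1, kernel_size=1)  # binary aorta mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))
```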
Cardiovascular diseases are among the greatest burdens in healthcare. If misdiagnosed, they can lead to life-threatening complications. This is especially true for aortic dissections, which may require immediate surgery depending on their categorization and can still lead to late adverse events. An aortic dissection occurs when the aorta splits into two blood flow channels, the true and the false lumen. The morphological characteristics of the aorta are therefore crucial for clinicians, since they can be used to extract significant information for surgery and treatment planning. In this work, we revive a successful modeling technique – convolution surfaces – to model the lumina in aortic dissections. The skeleton of the lumina and local radial information are used to represent the true and the false lumen through convolution of local segments. Additionally, we introduce an optimization strategy based on a genetic algorithm to create the separation caused by the dissection flap.
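To make the convolution-surface idea concrete, the sketch below evaluates the scalar field obtained by convolving a kernel along skeleton line segments; the lumen surface is then an isosurface of this field. The Gaussian kernel, its radius scaling, and the sample skeleton are simplifying assumptions, not the paper's exact formulation.

```python
# Sketch of a convolution-surface field: a Gaussian kernel is integrated
# (here: sampled) along skeleton line segments; the lumen surface is the
# isosurface field(p) == iso. Kernel and parameters are illustrative.
import numpy as np

def segment_field(p, a, b, radius, samples=32):
    """Approximate the convolution integral of one segment a->b at point p."""
    t = np.linspace(0.0, 1.0, samples)[:, None]
    pts = a + t * (b - a)                 # sample points along the segment
    d2 = np.sum((pts - p) ** 2, axis=1)   # squared distances to p
    seg_len = np.linalg.norm(b - a)
    # Gaussian kernel scaled by the local radius (wider kernel -> thicker lumen).
    return np.sum(np.exp(-d2 / (2.0 * radius ** 2))) * seg_len / samples

def field(p, skeleton, radii):
    """Sum the contributions of all consecutive skeleton segments."""
    return sum(
        segment_field(p, skeleton[i], skeleton[i + 1], radii[i])
        for i in range(len(skeleton) - 1)
    )

# Toy skeleton with per-segment radial information; a point p lies on the
# modeled surface wherever field(p, ...) equals a chosen iso-value.
skeleton = np.array([[0.0, 0, 0], [10.0, 0, 0], [20.0, 2, 0]])
radii = [3.0, 2.5]
print(field(np.array([10.0, 3.0, 0.0]), skeleton, radii))
```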
KEYWORDS: Augmented reality, Visualization, 3D scanning, 3D displays, Image segmentation, 3D image processing, Medicine, Medical imaging, Data conversion, Computed tomography
This contribution presents a streamlined data pipeline for bringing medical 3D scans onto Augmented Reality (AR) hardware. When a 3D scan is visualized on a 2D screen, depth information is lost and doctors have to rely on their experience to map the displayed data to the patient. Showing such a scan in AR addresses this problem, as one can view the scan in true 3D. To achieve this, the scan produced by a medical scanner has to be preprocessed by the user and transferred to the AR hardware. Usually, many manual steps are involved, which require technical knowledge about the underlying software and hardware components and impede acceptance of this new technology by the target group, medical personnel. This work presents a streamlined pipeline for this process, leading to an enhanced user experience. The core component of the pipeline is a web application, to which a user can upload the direct output of a medical scanner. The scan can be interactively segmented by the user, after which both the scan and the segment are stored on a server. Additionally, this paper introduces an AR application, which can be used to browse through patients and view their scans and previously created segments. We evaluate our streamlined data pipeline and AR application in a user study, reporting the results of a system usability questionnaire and a Thinking Aloud test.
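As a hedged sketch of the pipeline's server side, the snippet below shows how a web application could accept a scan upload, store the scan-segmentation pair per patient, and expose a patient list for the AR application to browse; the framework choice (Flask), route names, field names, and storage layout are all assumptions, not the actual implementation.

```python
# Hypothetical sketch of the upload/browse endpoints of such a pipeline.
# Routes, file fields, and storage layout are assumptions for illustration.
import os
from flask import Flask, request, jsonify

app = Flask(__name__)
DATA_ROOT = "data"  # one folder per patient, holding scan + segmentation

@app.route("/upload/<patient_id>", methods=["POST"])
def upload(patient_id):
    patient_dir = os.path.join(DATA_ROOT, patient_id)
    os.makedirs(patient_dir, exist_ok=True)
    # Expect the raw scanner output and an optional segmentation file.
    request.files["scan"].save(os.path.join(patient_dir, "scan.nrrd"))
    if "segmentation" in request.files:
        request.files["segmentation"].save(
            os.path.join(patient_dir, "segmentation.nrrd"))
    return jsonify(status="stored", patient=patient_id)

@app.route("/patients", methods=["GET"])
def patients():
    # The AR application queries this list to browse through patients.
    return jsonify(sorted(os.listdir(DATA_ROOT)))
```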
Aortic dissection is an acute condition of the aorta. It typically starts with an intimal tear and continues with the separation of the aortic wall layers. This usually leads to the creation of a second lumen, i.e., the false lumen, into which blood can flow. For the diagnosis of this pathology, computed tomography angiography (CTA) is commonly used. To better understand its causes and to measure the cross-sectional caliber at onset and at each follow-up, segmentation of the true and false lumina is important in clinical use. In this work, a pipeline for aortic dissection segmentation is evaluated to obtain a correct visualization of the true and false lumina separated by the dissection flap that characterizes this pathology. We provide both a qualitative and a quantitative evaluation of three different vessel enhancement filters used as a preprocessing step.
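The abstract does not name the three evaluated filters, but as an example of this preprocessing class, the sketch below applies the Frangi vesselness filter, one widely used vessel enhancement filter, via scikit-image; the volume and parameters are illustrative placeholders.

```python
# Illustrative vessel enhancement preprocessing step; the Frangi filter is
# one common choice (the three filters evaluated here are not reproduced).
import numpy as np
from skimage.filters import frangi

# Placeholder CTA volume; in practice this is loaded from the scan file.
cta = np.random.rand(32, 64, 64).astype(np.float32)

# Multi-scale vesselness: bright tubular structures (contrast-filled
# lumina) get high responses, while background is suppressed.
vesselness = frangi(cta, sigmas=(1, 2, 3), black_ridges=False)

# The enhanced volume is then fed into the segmentation pipeline.
print(vesselness.shape, vesselness.max())
```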
A practical method to analyze blood vessels, like the aorta, is to calculate the vessel's centerline and evaluate its shape in a CT or CTA scan. This contribution introduces a cloud-based centerline tool for the aorta, which computes an initial centerline from a CTA scan using two user-given seed points. Afterwards, this initial centerline can be smoothed in a second step. The work done for this contribution was implemented into an existing online tool for medical image analysis, called Studierfenster. To evaluate the outcome of this contribution, we tested the smoothed centerline computed within Studierfenster against 40 baseline centerlines from a publicly available CTA challenge dataset. In doing so, we computed the minimum, maximum, and mean distance in mm between the two centerlines for every data sample, resulting in a smallest distance of 0.59 mm, an overall maximum distance of 14.18 mm, and a mean distance over all samples of 3.86 mm with a standard deviation of 0.99 mm.
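A minimal sketch of the evaluation step, assuming both centerlines are given as point lists in millimeters: for each point of the computed centerline, the distance to the nearest baseline point is taken, and the minimum, maximum, and mean are reported. The toy data below stands in for the Studierfenster output and the challenge baselines.

```python
# Sketch of the centerline comparison: distances (in mm) from each point
# of the computed centerline to the nearest point of the baseline.
import numpy as np
from scipy.spatial import cKDTree

def centerline_distances(computed, baseline):
    """computed, baseline: (N, 3) arrays of points in mm."""
    tree = cKDTree(baseline)
    d, _ = tree.query(computed)      # nearest-neighbor distance per point
    return d.min(), d.max(), d.mean()

# Illustrative stand-in data; real centerlines come from the tool and
# the publicly available challenge dataset.
computed = np.cumsum(np.random.rand(200, 3), axis=0)
baseline = computed + np.random.normal(scale=0.5, size=computed.shape)
print(centerline_distances(computed, baseline))
```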
Aortic dissections (ADs) are injuries of the inner vessel wall of the (human) aorta. As this disease poses a significant threat to a patient's life, it is crucial to observe and analyze the progression of the dissection over the course of the disease. The clinical examinations are usually performed with Computed Tomography (CT) or Computed Tomography Angiography (CTA), for which automated post-processing procedures would be beneficial in the management of critical pathologies. One of the main tasks during post-processing is aorta segmentation. Different methods have been developed for the segmentation of the aorta, including tracking methods, active contour/surface methods, and deep learning methods. In this study, a method for the automatic segmentation of the aorta and its branches from original thorax CT and CTA images is introduced. The aorta is segmented with a deep learning algorithm, and the branches are afterwards tracked with a particle filter.
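To sketch the branch-tracking idea: a particle filter propagates particle positions along the vessel and weights them by an image-based measure before resampling. The motion and measurement models below (jittered straight-line motion, raw intensity lookup) are simplified assumptions for illustration, not the study's actual models.

```python
# Simplified sketch of one particle-filter step for vessel tracking.
import numpy as np

rng = np.random.default_rng(0)

def vessel_measure(volume, pts):
    """Placeholder measurement: image intensity at the particle positions."""
    idx = np.clip(pts.astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]] + 1e-6

def track_step(volume, particles, direction, step=1.0, noise=0.5):
    # Predict: move particles along the vessel direction with random jitter.
    particles = particles + step * direction + rng.normal(0, noise, particles.shape)
    # Weight: higher intensity (contrast-filled lumen) -> higher weight.
    w = vessel_measure(volume, particles)
    w = w / w.sum()
    center = np.average(particles, axis=0, weights=w)  # current vessel estimate
    # Resample: draw particles proportionally to their weights.
    keep = rng.choice(len(particles), size=len(particles), p=w)
    return particles[keep], center

volume = rng.random((64, 64, 64))
particles = np.tile([32.0, 32.0, 10.0], (100, 1))
particles, center = track_step(volume, particles, np.array([0.0, 0.0, 1.0]))
print(center)  # estimated vessel center after one tracking step
```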
KEYWORDS: Skull, System integration, Surgery, 3D modeling, Medical imaging applications, Medical imaging, Manufacturing, Imaging systems, Image processing, 3D printing
We introduce a fully automatic system for cranial implant design, a common task in cranioplasty operations. The system is currently integrated in Studierfenster (http://studierfenster.tugraz.at/), an online, cloud-based medical image processing platform for medical imaging applications. Enhanced by deep learning algorithms, the system automatically restores the missing part of a skull (i.e., skull shape completion) and generates the desired implant by subtracting the defective skull from the completed skull. The generated implant can be downloaded in the STereoLithography (.stl) format directly via the browser interface of the system. The implant model can then be sent to a 3D printer for in loco implant manufacturing. Furthermore, thanks to the standard format, the user can thereafter load the model into another application for post-processing whenever necessary. Such an automatic cranial implant design system can be integrated into the clinical practice to improve the current routine for surgeries related to skull defect repair (e.g., cranioplasty). Our system, although currently intended for educational and research use only, can be seen as an application of additive manufacturing for fast, patient-specific implant design.
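A minimal sketch of the subtraction step, assuming the completed and the defective skull are available as registered binary volumes; the mesh extraction and STL export below use scikit-image and numpy-stl as illustrative library choices, not necessarily those of the system.

```python
# Sketch of implant generation: the implant is the voxelwise difference
# between the completed skull and the defective skull, exported as STL.
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

completed = np.zeros((64, 64, 64), dtype=bool)   # placeholder volumes; real
defective = np.zeros((64, 64, 64), dtype=bool)   # ones come from the deep
completed[16:48, 16:48, 16:48] = True            # learning shape completion
defective[16:48, 16:48, 16:32] = True

implant = completed & ~defective                 # the missing skull part

# Extract a triangle surface and write it in STereoLithography format.
verts, faces, _, _ = measure.marching_cubes(implant.astype(np.uint8), level=0.5)
m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
m.vectors[:] = verts[faces]
m.save("implant.stl")                            # ready for a 3D printer
```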
KEYWORDS: Data conversion, Data communications, Raster graphics, Medicine, Medical imaging, Image processing, Digital imaging, Computer architecture, 3D image processing
Imaging data in clinical practice generally use standardized formats such as Digital Imaging and Communications in Medicine (DICOM). Aside from 3D volume data, DICOM files usually include relational and semantic description information. The majority of current applications for browsing and viewing DICOM files online handle the image volume data only, ignoring the relational component of the data. Alternatively, implementations that show the relational information are provided as complete pre-packaged solutions that are difficult to integrate into existing projects and workflows. This publication proposes a modular, client-side web application for viewing DICOM volume data and displaying DICOM description fields containing relational and semantic information. Furthermore, it supports conversion from DICOM data sets into the nearly raw raster data (NRRD) format, which is commonly utilized in research and academic environments because of its simpler, easily processable structure and the removal of all patient DICOM tags (anonymization). The application was developed in JavaScript and integrated into the online medical image processing framework StudierFenster (http://studierfenster.tugraz.at/). Since our application only requires a standard web browser, it can be used by everyone and can easily be deployed in any wider project without a complex software architecture.
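The application itself runs client-side in JavaScript, but the conversion it performs can be sketched equivalently in Python with SimpleITK; the folder and output file name are placeholders.

```python
# Equivalent Python sketch of the DICOM -> NRRD conversion: the volume is
# read from a DICOM series and rewritten as NRRD, which drops patient tags.
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
series_files = reader.GetGDCMSeriesFileNames("dicom_folder")  # placeholder path
reader.SetFileNames(series_files)
volume = reader.Execute()

# NRRD stores only the raw raster data plus basic geometry (spacing,
# origin, direction); identifying DICOM tags are not carried over.
sitk.WriteImage(volume, "volume.nrrd")
```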
The human organism is a highly complex system that is prone to various diseases. Some diseases are more dangerous than others, especially those that affect the circulatory system, and the aorta in particular. The aorta is the largest artery in the human body. Its wall comprises several layers. When the intima, i.e., the innermost layer of the aortic wall, tears, blood enters and propagates between the layers, causing them to separate. This is known as aortic dissection (AD). Without immediate treatment, an AD may kill 33% of patients within the first 24 hours, 50% of patients within 48 hours, and 75% of patients within 2 weeks. However, proper treatment is still subject to research and active discussion. By providing a deeper understanding of aortic dissections, this work aims to contribute to the continuous improvement of AD diagnosis and treatment by presenting AD in a new, immersive visual experience: Virtual Reality (VR). The visualization is based on Computed Tomography (CT) scans of human patients suffering from an AD. Given a scan, relevant visual information is segmented, refined, and put into a 3D scene. Further enhanced by blood flow simulation and VR user interaction, the visualization helps in better understanding AD. The current implementation serves as a prototype and is intended to be extended by (i) minimizing user interaction when new CT scans are loaded into VR and (ii) providing an interface to feed the visualization with simulation data provided by mathematical models.
Volumetric examinations of the aorta are nowadays of crucial importance for the management of critical pathologies such as aortic dissection, aortic aneurysm, and other conditions that affect the morphology of the artery. These examinations usually begin with the acquisition of a Computed Tomography Angiography (CTA) scan from the patient, which is later postprocessed to reconstruct the 3D geometry of the aorta. The first postprocessing step is referred to as segmentation. Different algorithms have been suggested for the segmentation of the aorta, including interactive as well as fully automatic methods. Interactive methods need to be fine-tuned on each single CTA scan and prolong the process, whereas fully automatic methods require a large amount of labeled training data. In this work, we introduce a hybrid approach by combining a deep learning method with a consolidated interaction technique. In particular, we trained a 2D and a 3D U-Net on a limited number of patches extracted from 25 labeled CTA scans. Afterwards, we use an interactive approach, which consists of defining a region of interest (ROI) by simply placing a seed point. This seed point is then used as the center of a 2D or 3D patch to be fed to the 2D or 3D U-Net, respectively. Due to the low content variation of these patches, this method allows the ROIs to be segmented correctly without the need for parameter tuning on each dataset and with a smaller training dataset, while requiring the same minimal interaction as state-of-the-art interactive methods. Later on, the newly segmented CTA scans can be further used to train a convolutional network for a fully automatic approach.
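A sketch of the interaction step, assuming a trained network and a fixed patch size: the user's seed point becomes the center of a patch, the patch is fed to the network, and the prediction is written back at the same location. Patch size, thresholding, and the assumption that the seed lies far enough from the volume borders are illustrative simplifications.

```python
# Sketch of seed-point-driven patch segmentation with a trained 3D U-Net.
import numpy as np
import torch

def segment_at_seed(volume, seed, model, patch=64):
    """volume: 3D numpy array; seed: (z, y, x) voxel index from one click.
    Assumes the seed is at least patch/2 voxels away from all borders."""
    h = patch // 2
    z, y, x = (int(c) for c in seed)
    roi = volume[z - h:z + h, y - h:y + h, x - h:x + h]   # patch around the seed
    inp = torch.from_numpy(roi[None, None].astype(np.float32))
    with torch.no_grad():
        pred = model(inp)[0, 0].numpy() > 0.5             # binary patch mask
    mask = np.zeros_like(volume, dtype=bool)
    mask[z - h:z + h, y - h:y + h, x - h:x + h] = pred    # write back in place
    return mask
```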
In this contribution, the preparation of data for training deep learning networks used to segment the lower jawbone in computed tomography (CT) images is proposed. To train a neural network, we initially had only ten CT datasets of the head-neck region with a diverse number of image slices from the clinical routine of a maxillofacial surgery department. In these cases, facial surgeons segmented the lower jawbone in each image slice to generate the ground truth for the segmentation task. Since the number of available images was deemed insufficient to train a deep neural network efficiently, the data was augmented with geometric transformations and added noise. Flipping, rotating, and scaling images, as well as the addition of various noise types (uniform, Gaussian, and salt-and-pepper), were connected within a global macro module under MeVisLab. Our macro module can prepare data for general deep learning tasks in an automatic and flexible way, and augmentation methods for segmentation tasks can easily be incorporated.
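As a sketch of the operations the macro module chains together (flipping, rotation, scaling, and the three noise types), a plain-Python equivalent with scipy could look as follows; the parameter values are illustrative assumptions.

```python
# Plain-Python sketch of the augmentations chained in the MeVisLab macro.
# The same geometric transform must be applied to image and ground truth.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment(image, mask, angle=15, scale=1.1):
    image, mask = np.flip(image, axis=1), np.flip(mask, axis=1)        # flip
    image = ndimage.rotate(image, angle, reshape=False, order=1)       # rotate
    mask = ndimage.rotate(mask.astype(np.uint8), angle, reshape=False, order=0)
    image = ndimage.zoom(image, scale, order=1)                        # scale
    mask = ndimage.zoom(mask, scale, order=0)
    # Noise goes on the image only, never on the ground truth.
    image = image + rng.uniform(-10, 10, image.shape)                  # uniform
    image = image + rng.normal(0, 5, image.shape)                      # Gaussian
    salt = rng.random(image.shape) < 0.01                              # salt &
    pepper = rng.random(image.shape) < 0.01                            # pepper
    image[salt] = image.max()
    image[pepper] = image.min()
    return image, mask
```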
Accurate segmentation of medical images is a key step in medical image processing. As the amount of medical images obtained in diagnostics, clinical studies, and treatment planning increases, automatic segmentation algorithms become increasingly important. Therefore, we plan to develop an automatic segmentation approach for the urinary bladder in computed tomography (CT) images using deep learning. Training such a neural network requires a large amount of labeled data. However, public data sets of medical images with segmented ground truth are scarce. We overcome this problem by generating binary masks of the 18F-FDG-enhanced urinary bladder from a multi-modal scanner delivering registered CT and positron emission tomography (PET) image pairs. Since PET images offer good contrast, a simple thresholding algorithm suffices for segmentation. We apply data augmentation to these datasets to increase the amount of available training data. In this contribution, we present algorithms developed with the medical image processing and visualization platform MeVisLab to achieve our goals. With the proposed methods, accurate segmentation masks of the urinary bladder could be generated, and the given datasets could be enlarged by a factor of up to 2500.
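A sketch of the ground-truth generation step, assuming the registered CT/PET pair is available as numpy arrays; the normalization and threshold value are illustrative assumptions, not the published parameters.

```python
# Sketch of ground-truth generation from registered CT/PET pairs: the high
# 18F-FDG uptake of the bladder in PET makes a simple threshold sufficient.
import numpy as np

def bladder_mask(pet_volume, threshold=0.5):
    """Binary mask from a PET volume normalized to [0, 1]."""
    span = pet_volume.max() - pet_volume.min()
    pet = (pet_volume - pet_volume.min()) / (span + 1e-8)
    return pet > threshold

# Because CT and PET come registered from the multi-modal scanner, the
# PET-derived mask can be used directly as the CT segmentation label.
pet = np.random.rand(32, 128, 128)        # placeholder PET volume
ct_label = bladder_mask(pet)
print(ct_label.sum(), "voxels labeled as bladder")
```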
Accurate segmentation and measurement of brain tumors play an important role in clinical practice and research, as they are critical for treatment planning and for monitoring tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time-consuming, and neither accurate nor reliable, there is a need for objective, robust, and fast automated segmentation methods that provide competitive performance. Therefore, deep learning based approaches are gaining interest in the field of medical image segmentation. When the training data set is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data is available in the majority of cases. For this reason, we propose a method for creating a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors, more specifically glioblastomas, together with the corresponding ground truth, which can subsequently be used to train deep neural networks.
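To make the generation idea concrete, here is a minimal sketch that places a synthetic ellipsoidal lesion into an MRI volume and produces the matching ground-truth mask; the shape, intensity, and texture models are deliberately toy assumptions, as the actual method models glioblastoma appearance far more faithfully.

```python
# Minimal sketch of synthetic tumor generation: an ellipsoidal "lesion"
# with noisy intensity is inserted into an MRI volume, yielding a paired
# image/ground-truth training sample. Shapes and intensities are toy values.
import numpy as np

rng = np.random.default_rng(0)

def add_synthetic_tumor(mri, center, radii, intensity=1.5):
    z, y, x = np.ogrid[:mri.shape[0], :mri.shape[1], :mri.shape[2]]
    cz, cy, cx = center
    rz, ry, rx = radii
    # The ellipsoid equation defines the ground-truth tumor mask.
    mask = ((z - cz) / rz) ** 2 + ((y - cy) / ry) ** 2 + ((x - cx) / rx) ** 2 <= 1.0
    out = mri.copy()
    out[mask] = intensity * mri.mean() + rng.normal(0, 0.05, mask.sum())
    return out, mask

mri = rng.random((64, 64, 64))                    # placeholder brain volume
image, gt = add_synthetic_tumor(mri, (32, 30, 28), (8, 10, 6))
print(gt.sum(), "tumor voxels")                   # paired training sample
```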