Purpose: Measurement of global spinal alignment (GSA) is an important aspect of diagnosis and treatment evaluation for spinal deformity but is subject to a high level of inter-reader variability.
Approach: Two methods for automatic GSA measurement are proposed to mitigate such variability and reduce the burden of manual measurements. Both approaches use vertebral labels in spine computed tomography (CT) as input: the first (EndSeg) segments vertebral endplates using input labels as seed points; and the second (SpNorm) computes a two-dimensional curvilinear fit to the input labels. Studies were performed to characterize the performance of EndSeg and SpNorm in comparison to manual GSA measurement by five clinicians, including measurements of proximal thoracic kyphosis, main thoracic kyphosis, and lumbar lordosis.
Results: For the automatic methods, 93.8% of endplate angle estimates were within the inter-reader 95% confidence interval (CI95). All GSA measurements for the automatic methods were within the inter-reader CI95, and there was no statistically significant difference between automatic and manual methods. The SpNorm method appears particularly robust as it operates without segmentation.
Conclusions: Such methods could improve the reproducibility and reliability of GSA measurements and are potentially suitable to applications in large datasets—e.g., for outcome assessment in surgical data science.
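The SpNorm idea above — a 2D curvilinear fit to vertebral labels, from which angles follow without any segmentation — can be sketched in miniature. This is an illustrative reimplementation, not the authors' code; the polynomial fit, coordinate conventions, and toy geometry are assumptions:

```python
import numpy as np

def spnorm_angles(centroids, degree=4):
    """Fit a smooth curve to sagittal-plane vertebral centroids and return
    the tangent angle (degrees) of the curve at each level. GSA metrics
    (e.g., lordosis/kyphosis) follow as differences of tangent angles."""
    y, z = centroids[:, 0], centroids[:, 1]   # anterior-posterior, cranio-caudal
    coeffs = np.polyfit(z, y, degree)         # curvilinear fit y(z)
    slopes = np.polyval(np.polyder(coeffs), z)
    return np.degrees(np.arctan(slopes))

# Toy example: a gently curved "spine" (quadratic) sampled at 6 levels
z = np.linspace(0.0, 100.0, 6)
centroids = np.column_stack([0.002 * (z - 50.0) ** 2, z])
angles = spnorm_angles(centroids, degree=2)
cobb_like = angles[-1] - angles[0]   # angle between the end tangents
```

A Cobb-like GSA metric between two levels is then simply the difference of the fitted tangent angles at those levels.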
Purpose. Fracture reduction is a challenging part of orthopaedic pelvic trauma procedures, and long-term prognosis is poor if reduction does not accurately restore natural morphology. Manual preoperative planning is performed to obtain target transformations of the fractured bone fragments – a process that is challenging and time-consuming even for experts within the rapid workflow of emergent care and fluoroscopically guided surgery. We report a method for fracture reduction planning using a novel image-based registration framework. Methods. An objective function is designed to simultaneously register multi-body bone fragments, preoperatively segmented via a graph-cut method, to a pelvic statistical shape model (SSM) with inter-body collision constraints. An alternating optimization strategy switches between fragment alignment and SSM adaptation to solve for the fragment transformations for fracture reduction planning. The method was examined in a leave-one-out study performed over a pelvic atlas with 40 members, with two-body and three-body fractures simulated in the left innominate bone with displacements ranging 0–20 mm and 0°–15°. Results. Experiments showed the feasibility of the registration method in both two-body and three-body fracture cases. The segmentations achieved a median Dice coefficient of 0.94 (interquartile range [IQR] = 0.01) and root mean square error (RMSE) of 2.93 mm (IQR = 0.56 mm). In two-body fracture cases, fracture reduction planning yielded 3.8 mm (IQR = 1.6 mm) translational and 2.9° (IQR = 1.8°) rotational error. Conclusions. The method demonstrated accurate fracture reduction planning within 5 mm and shows promise for future generalization to more complicated fracture cases. The algorithm provides a novel means of planning from preoperative CT images that are already acquired in standard workflow.
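The fragment-alignment half of the alternating optimization is, at its core, a rigid point-set registration. A minimal sketch using the standard Kabsch/Procrustes solution — an illustrative stand-in for one alignment update, not the paper's full objective, which additionally includes SSM adaptation and collision constraints:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q
    (Kabsch algorithm) -- the core of one fragment-alignment update."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation and translation
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
th = np.radians(10.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
err = np.abs(P @ R.T + t - Q).max()
```

In the described framework, each fragment's transform would be updated in turn against the current SSM instance, alternating with SSM shape adaptation.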
Purpose. We report the initial development of an image-based solution for robotic assistance of pelvic fracture fixation. The approach uses intraoperative radiographs, preoperative CT, and an end effector of known design to align the robot with target trajectories in CT. The method extends previous work to solve the robot-to-patient registration from a single radiographic view (without C-arm rotation) and addresses the workflow challenges associated with integrating robotic assistance in orthopaedic trauma surgery in a form that could be broadly applicable to isocentric or non-isocentric C-arms. Methods. The proposed method uses 3D-2D known-component registration to localize a robot end effector with respect to the patient by: (1) exploiting the extended size and complex features of pelvic anatomy to register the patient; and (2) capturing multiple end effector poses using precise robotic manipulation. These transformations, along with an offline hand-eye calibration of the end effector, are used to calculate target robot poses that align the end effector with planned trajectories in the patient CT. Geometric accuracy of the registrations was independently evaluated for the patient and the robot in phantom studies. Results. The resulting translational difference between the ground truth and patient registrations of a pelvis phantom using a single (AP) view was 1.3 mm, compared to 0.4 mm using dual (AP+Lat) views. Registration of the robot in air (i.e., no background anatomy) with five unique end effector poses achieved mean translational difference ~1.4 mm for K-wire placement in the pelvis, comparable to tracker-based margins of error (commonly ~2 mm). Conclusions. The proposed approach is feasible based on the accuracy of the patient and robot registrations and is a preliminary step in developing an image-guided robotic guidance system that more naturally fits the workflow of fluoroscopically guided orthopaedic trauma surgery. 
Future work will involve end-to-end development of the proposed guidance system and assessment of the system with delivery of K-wires in cadaver studies.
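The target-pose calculation described above amounts to composing rigid transforms obtained from the hand-eye calibration and the 3D-2D registrations. A toy sketch with 4×4 homogeneous matrices; the frame names and values are purely illustrative assumptions:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical frame chain (names/values illustrative only):
# robot base -> end effector (hand-eye) -> patient (3D-2D reg.) -> planned trajectory
T_base_eff = make_T(np.eye(3), [100.0, 0.0, 50.0])    # offline hand-eye calibration
T_eff_pat = make_T(np.eye(3), [0.0, 20.0, 0.0])       # 3D-2D known-component reg.
T_pat_plan = make_T(np.eye(3), [5.0, 5.0, 5.0])       # trajectory planned in CT

# Target robot pose aligning the end effector with the planned trajectory
T_base_plan = T_base_eff @ T_eff_pat @ T_pat_plan
```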
Purpose: Metal artifacts remain a challenge for CBCT systems in diagnostic imaging and image-guided surgery, obscuring visualization of metal instruments and surrounding anatomy. We present a method to predict C-arm CBCT orbits that avoid metal artifacts by acquiring projection data that are least affected by polyenergetic bias. Methods: The metal artifact avoidance (MAA) method operates with a minimum of prior information, is compatible with simple mobile C-arms that are increasingly prevalent in routine use, and is consistent with 3D filtered backprojection (FBP), more advanced (polyenergetic) model-based image reconstruction (MBIR), and/or metal artifact reduction (MAR) post-processing methods. MAA consists of the following steps: (i) coarse localization of metal objects in the field of view (FOV) via two or more low-dose scout views, coarse backprojection, and segmentation (e.g., with a U-Net); (ii) a simple model-based prediction of metal-induced x-ray spectral shift for all source-detector vertices (gantry rotation and tilt angles) accessible by the imaging system; and (iii) definition of a source-detector orbit that minimizes the view-to-view inconsistency in spectral shift. The method was evaluated in an anthropomorphic phantom study emulating pedicle screw placement in spine surgery. Results: Phantom studies confirmed that the MAA method could accurately predict tilt angles that minimize metal artifacts. The proposed U-Net segmentation method was able to localize complex distributions of metal instrumentation (over 70% Dice coefficient) with 6 low-dose scout projections acquired during a routine pre-scan collision check. CBCT images acquired at MAA-prescribed tilt angles demonstrated ~50% reduction in “blooming” artifacts (measured as FWHM of the screw shaft).
Geometric calibration for tilted orbits at prescribed angular increments with interpolation for intermediate values demonstrated accuracy comparable to non-tilted circular trajectories in terms of the modulation transfer function. Conclusion: The preliminary results demonstrate the ability to predict C-arm orbits that provide projection data with minimal spectral bias from metal instrumentation. Such orbits exhibit strongly reduced metal artifacts, and the projection data are compatible with additional post-processing (metal artifact reduction, MAR) methods to further reduce artifacts and/or reduce noise. Ongoing studies aim to improve the robustness of metal object localization from scout views and investigate additional benefits of non-circular C-arm trajectories.
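Step (iii) of MAA — choosing the orbit that minimizes view-to-view inconsistency in spectral shift — can be illustrated with a toy objective. The shift model below is synthetic; only the selection logic reflects the described method:

```python
import numpy as np

def choose_tilt(metal_shift, tilts):
    """Pick the gantry tilt whose views have the most uniform metal-induced
    spectral shift, i.e., minimal view-to-view inconsistency.
    metal_shift[i, j] = predicted shift at tilt i, rotation angle j."""
    inconsistency = metal_shift.std(axis=1)
    return tilts[int(np.argmin(inconsistency))]

# Synthetic shift model: strong angular variation at 0 deg tilt,
# flattening out as the orbit tilts away from the metal plane
tilts = np.array([0.0, 10.0, 20.0, 30.0])
angles = np.linspace(0.0, np.pi, 180)
metal_shift = np.array([(1.0 - t / 30.0) * np.cos(2.0 * angles) + 1.0 for t in tilts])
best = choose_tilt(metal_shift, tilts)
```

Any scalar measure of view-to-view spread could stand in for the standard deviation used here; the paper's actual inconsistency metric is not reproduced.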
Spinal degeneration and deformity present an enormous healthcare burden, with spine surgery among the main treatment modalities. Unfortunately, spine surgery (e.g., lumbar fusion) exhibits broad variability in the quality of outcome, with ~20–40% of patients gaining no benefit in pain or function (“failed back surgery”), drawing criticism that is difficult to reconcile with the rapid growth in frequency and cost over the last decade. Vital to advancing the quality of care in spine surgery are improved clinical decision support (CDS) tools that are accurate, explainable, and actionable: accurate in prediction of outcomes; explainable in terms of the physical / physiological factors underlying the prediction; and actionable within the shared decision process between surgeon and patient in identifying steps that could improve outcome. This technical note presents an overview of a novel outcome prediction framework for spine surgery (dubbed SpineCloud) that leverages innovative image analytics in combination with explainable prediction models to achieve accurate outcome prediction. Key to the SpineCloud framework are image analysis methods for extraction of high-level quantitative features from multi-modality peri-operative images (CT, MR, and radiography) related to spinal morphology (including bone and soft-tissue features), the surgical construct (including deviation from an ideal reference), and longitudinal change in such features. The inclusion of such image-based features is hypothesized to boost the predictive power of models that conventionally rely on demographic / clinical data alone (e.g., age, gender, and BMI).
Preliminary results using gradient-boosted decision trees demonstrate that such prediction models are explainable (i.e., why a particular prediction is made), actionable (identifying features that may be addressed by the surgeon and/or patient), and boost predictive accuracy compared to analysis based on demographics alone (e.g., AUC improved by ~25% in preliminary studies). Incorporation of such CDS tools in spine surgery could fundamentally alter and improve the shared decision-making process between surgeons and patients by highlighting actionable features to improve selection of therapeutic and rehabilitative pathways.
Purpose: Data-intensive modeling could provide insight on the broad variability in outcomes in spine surgery. Previous studies were limited to analysis of demographic and clinical characteristics. We report an analytic framework called “SpineCloud” that incorporates quantitative features extracted from perioperative images to predict spine surgery outcome.
Approach: A retrospective study was conducted in which patient demographics, imaging, and outcome data were collected. Image features were automatically computed from perioperative CT. Postoperative 3- and 12-month functional and pain outcomes were analyzed in terms of improvement relative to the preoperative state. A boosted decision tree classifier was trained to predict outcome using demographic and image features as predictor variables. Predictions were computed based on SpineCloud and conventional demographic models, and features associated with poor outcome were identified from weighting terms evident in the boosted tree.
Results: Neither approach was predictive of 3- or 12-month outcomes based on preoperative data alone in the current, preliminary study. However, SpineCloud predictions incorporating image features obtained during and immediately following surgery (i.e., intraoperative and immediate postoperative images) exhibited significant improvement in area under the receiver operating characteristic curve (AUC): AUC = 0.72 (CI95 = 0.59 to 0.83) at 3 months and AUC = 0.69 (CI95 = 0.55 to 0.82) at 12 months.
Conclusions: Predictive modeling of lumbar spine surgery outcomes was improved by incorporation of image-based features compared to analysis based on conventional demographic data. The SpineCloud framework could improve understanding of factors underlying outcome variability and warrants further investigation and validation in a larger patient cohort.
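The AUC values reported above can be computed directly from classifier scores via the Mann-Whitney formulation; a small self-contained sketch (toy data, not the study's):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive case outscores a random negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Toy example: scores that partially separate good (1) vs. poor (0) outcome
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
```

Here `auc(scores, labels)` returns 8/9 ≈ 0.89: eight of the nine positive-negative pairs are correctly ordered.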
Motivation/Purpose: This work reports the development and validation of an algorithm to automatically detect and localize vertebrae in CT images of patients undergoing spine surgery. Slice-by-slice detections using state-of-the-art 2D convolutional neural network (CNN) architectures were combined to estimate vertebra centroid location in 3D, including a method that combined detections in sagittal and coronal slices. The solution facilitates applications in image-guided surgery and automatic computation of image analytics for surgical data science. Methods: CNN-based object detection models in 3D (volume) and 2D (slice) images were implemented and evaluated for the task of vertebra detection. Slice-by-slice detections in 2D architectures were combined to estimate the 3D centroid location, including a model – called Ortho-2D – that simultaneously evaluated 2D detections in orthogonal directions (i.e., sagittal and coronal slices) to improve robustness against spurious false detections. Performance was evaluated in a dataset consisting of 85 patients undergoing spine surgery at our institution, including images presenting spinal instrumentation/implants, spinal deformity, and anatomical abnormalities that are realistic exemplars of pathology in the patient population. Accuracy was quantified in terms of precision, recall, F1 score, and the 3D geometric error in vertebral centroid annotation compared to ground-truth (expert manual) annotation. Results: Three CNN object detection models were able to successfully localize vertebrae, with the Ortho-2D model achieving the best performance: precision = 0.95, recall = 0.99, and F1 score = 0.97. Overall centroid localization accuracy was 3.4 mm (median) [interquartile range (IQR) = 2.7 mm], and ~97% of detections (154/159 lumbar cases) yielded acceptable centroid localization error <15 mm (considering average vertebra size ~25 mm).
Conclusions: State-of-the-art CNN architectures were adapted for vertebral centroid annotation, yielding accurate and robust localization even in the presence of anatomical abnormalities, image artifacts, and dense instrumentation. The methods are employed as a basis for streamlined image guidance (automatic initialization of 3D-2D and 3D-3D registration methods in image-guided surgery) and as an automatic spine labeling tool to generate image analytics.
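The Ortho-2D fusion idea — keeping only detections that agree across sagittal and coronal slices — can be sketched as follows. The coordinate conventions, tolerance, and data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuse_ortho2d(sag, cor, tol=5.0):
    """Keep a detection only if a sagittal hit (slice x, center (y, z)) and a
    coronal hit (slice y, center (x, z)) agree within tol, suppressing
    spurious single-view false positives; return fused 3D centroids."""
    fused = []
    for xs, ys, zs in sag:
        for yc, xc, zc in cor:
            if abs(zs - zc) < tol and abs(ys - yc) < tol and abs(xs - xc) < tol:
                fused.append((xc, ys, 0.5 * (zs + zc)))
    return np.array(fused)

# Two consistent vertebra detections plus one sagittal-only false positive
sag = [(10.0, 30.0, 100.0), (10.0, 30.0, 130.0), (40.0, 80.0, 400.0)]
cor = [(31.0, 11.0, 101.0), (29.0, 10.0, 131.0)]
centroids = fuse_ortho2d(sag, cor)   # the false positive is rejected
```

The cross-view agreement requirement is what gives the method its robustness: a detection with no orthogonal counterpart never produces a 3D centroid.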
Purpose: A method for automatic computation of global spinal alignment (GSA) metrics is presented to mitigate the high variability of manual definitions in radiographic images. The proposed algorithm segments vertebral endplates in CT as a basis for automatic computation of metrics of global spinal morphology. The method is developed as a potential tool for intraoperative guidance in deformity correction surgery and/or automatic definition of GSA in large datasets for analysis of surgical outcome. Methods: The proposed approach segments vertebral endplates in spine CT images using vertebral labels as input. The segmentation algorithm extracts vertebral boundaries using a continuous max-flow algorithm and segments the vertebral endplate surface by region-growing. The point cloud of the segmented endplate is forward-projected as a digitally reconstructed radiograph (DRR), and a linear fit is computed to extract the endplate angle in the radiographic plane. Two GSA metrics (lumbar lordosis and thoracic kyphosis) were calculated using these automatically measured endplate angles. Experiments were performed on seven patient CT images acquired from SpineWeb, and accuracy was quantified by comparing automatically computed endplate angles and GSA metrics to manual definitions. Results: Endplate angles were automatically computed with median accuracy = 2.7°, upper quartile (UQ) = 4.8°, and lower quartile (LQ) = 1.0° with respect to manual ground-truth definitions. This was within the measured intra-observer variability = 3.1° (RMS) of manual definitions. GSA metrics had median accuracy = 1.1° (UQ = 3.1°) for lumbar lordosis and median accuracy = 0.4° (UQ = 3.0°) for thoracic kyphosis. The performance of GSA measurements was also within the variability of the manual approach. Conclusions: The method offers a potential alternative to time-consuming, manual definition of endplate angles for GSA computation.
Such automatic methods could provide a means of intraoperative decision support in correction of spinal deformity and facilitate data-intensive analysis in identifying metrics correlating with surgical outcomes.
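The final step of the pipeline — projecting the segmented endplate point cloud and extracting its angle via a linear fit — reduces to a few lines; a toy sketch under assumed coordinate conventions (the DRR projection is simplified here to an orthographic drop of one axis):

```python
import numpy as np

def endplate_angle(points_3d, proj_axis=0):
    """Project endplate points onto the sagittal plane (drop proj_axis) and
    return the endplate angle (degrees) from a linear fit, mimicking the
    DRR-projection + line-fit step in miniature."""
    pts = np.delete(points_3d, proj_axis, axis=1)    # keep, e.g., (y, z)
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)   # line fit z = m*y + b
    return np.degrees(np.arctan(slope))

# Toy endplate: points lying on a plane tilted 10 deg in the sagittal view
y = np.linspace(0.0, 30.0, 50)
z = np.tan(np.radians(10.0)) * y + 200.0
x = np.random.default_rng(1).normal(size=50)   # lateral spread, dropped by projection
angle = endplate_angle(np.column_stack([x, y, z]))
```

GSA metrics such as lumbar lordosis then follow as differences between the fitted angles of the bounding endplates.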
Purpose. We report the initial implementation of an algorithm that automatically plans screw trajectories for spinal
pedicle screw placement procedures to improve the workflow, accuracy, and reproducibility of screw placement in freehand
navigated and robot-assisted spinal pedicle screw surgery. In this work, we evaluate the sensitivity of the algorithm
to the settings of key parameters in simulation studies.
Methods. Statistical shape models (SSMs) of the lumbar spine were constructed with segmentations of L1-L5 and
bilateral screw trajectories of N=40 patients. Active-shape model (ASM) registration was devised to map the SSMs to
the patient CT, initialized simply by alignment of (automatically annotated) single-point vertebral centroids. The atlas
was augmented by definition of “ideal / reference” trajectories for each spinal pedicle, and the trajectories are
deformably mapped to the patient CT. A parameter sensitivity analysis for the ASM method was performed on 3
parameters to determine robust operating points for ASM registration. The ASM method was evaluated by calculating
the root-mean-square-error between the registered SSM and the ground-truth segmentation for the L1 vertebra, and the
trajectory planning method was evaluated by performing a leave-one-out analysis and determining the entry point, end
point, and angular differences between the automatically planned trajectories and the neurosurgeon-defined reference trajectories.
Results. The parameter sensitivity analysis showed that the ASM registration algorithm was relatively insensitive to
initial profile length (PLinitial) less than ~4 mm, above which runtime and registration error increased. Similarly stable
performance was observed for a maximum number of principal components (PCmax) of at least 8. Registration error of ~2
mm was evident, with diminishing returns beyond ~2000 iterations (Niter). With these parameter settings,
ASM registration of L1 achieved (2.0 ± 0.5) mm RMSE. Transpedicle trajectories for L1 agreed with the reference
definitions by (2.6 ± 1.3) mm at the entry point, by (3.4 ± 1.8) mm at the end point, and within (4.9° ± 2.8°) in angle.
Conclusions. Initial results suggest that the algorithm yields accurate definition of pedicle trajectories in unsegmented
CT images of the spine. The studies identified stable operating points for key algorithm parameters and support ongoing
development and translation to clinical studies in free-hand navigated and robot-assisted spine surgery, where fast,
accurate trajectory definition is essential to workflow.
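The evaluation metrics above (entry-point, end-point, and angular differences between automatic and reference trajectories) can be computed as follows; a minimal sketch with toy trajectories, not the study's data:

```python
import numpy as np

def trajectory_errors(entry_a, end_a, entry_b, end_b):
    """Entry-point distance, end-point distance, and angular difference
    (degrees) between two straight-line screw trajectories."""
    entry_err = np.linalg.norm(np.subtract(entry_a, entry_b))
    end_err = np.linalg.norm(np.subtract(end_a, end_b))
    u = np.subtract(end_a, entry_a)
    v = np.subtract(end_b, entry_b)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    ang = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return entry_err, end_err, ang

# Toy trajectories: 2 mm entry offset, directions 5 deg apart
e_err, d_err, ang = trajectory_errors(
    (0.0, 0.0, 0.0), (0.0, 0.0, 50.0),
    (2.0, 0.0, 0.0),
    (2.0 + 50.0 * np.sin(np.radians(5.0)), 0.0, 50.0 * np.cos(np.radians(5.0))))
```

Clipping the dot product before `arccos` guards against floating-point values marginally outside [-1, 1] for near-parallel trajectories.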
Conventional optical tracking systems use near-infrared (NIR) light-detecting cameras and passively or actively NIR-illuminated markers to localize instrumentation and the patient in operating room (OR) physical space. This technology is widely used within the neurosurgical theatre and is a staple of the standard of care in craniotomy planning. To accomplish this, planning is largely conducted at the time of the procedure with the patient's head fixed in its OR presentation. In the work presented herein, we propose a framework to achieve this in the OR free of conventional tracking technology, i.e., a trackerless approach. Briefly, we are investigating a collaborative extension of 3D Slicer that combines surgical planning and craniotomy designation in a novel manner. While taking advantage of the well-developed 3D Slicer platform, we implement advanced features to aid the neurosurgeon in planning the location of the anticipated craniotomy relative to the preoperatively imaged tumor in a physical-to-virtual setup, and then subsequently aid the physical procedure by correlating that physical-to-virtual plan with a novel intraoperative MR-to-physical registered field-of-view display. These steps are performed such that the craniotomy can be designated without conventional optical tracking technology. To test this novel approach, an experienced neurosurgeon performed experiments on four mock surgical cases using our module as well as the conventional procedure for comparison. The results suggest that our planning system provides a simple, cost-efficient, and reliable solution for surgical planning and delivery without the use of conventional tracking technologies.
We hypothesize that the combination of this early-stage craniotomy planning and delivery approach, and our past developments in cortical surface registration and deformation tracking using stereo-pair data from the surgical microscope may provide a fundamental new realization of an integrated trackerless surgical guidance platform.
The fidelity of image-guided neurosurgical procedures is often compromised due to the mechanical deformations that occur during surgery. In recent work, a framework was developed to predict the extent of this brain shift in brain-tumor resection procedures. The approach uses preoperatively determined surgical variables to predict brain shift and then subsequently corrects the patient’s preoperative image volume to more closely match the intraoperative state of the patient’s brain. However, a clinical workflow difficulty with the execution of this framework is the preoperative acquisition of surgical variables. To simplify and expedite this process, an Android, Java-based application was developed for tablets to provide neurosurgeons with the ability to manipulate three-dimensional models of the patient’s neuroanatomy and determine an expected head orientation, craniotomy size and location, and trajectory to be taken into the tumor. These variables can then be exported for use as inputs to the biomechanical model associated with the correction framework. A multisurgeon, multicase mock trial was conducted to compare the accuracy of the virtual plan to that of a mock physical surgery. It was concluded that the Android application was an accurate, efficient, and timely method for planning surgical variables.
Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects, typically during a neurosurgical or neurointerventional procedure. With respect to image-guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict brain shift based on preoperatively determined surgical variables (such as head orientation) and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been acquiring the surgical variables from the neurosurgeon prior to surgery. To simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient’s head, determine the expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs to the biomechanical models of the preoperative computing phase of the brain shift correction pipeline. The accuracy of the application’s exported data was determined by comparing it to data acquired from the physical execution of the surgeon’s plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of the patient’s head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.
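Overlap between planned and executed craniotomy regions, as assessed above, is commonly scored with a Dice coefficient; a small illustrative sketch on synthetic masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy example: planned vs. "executed" circular craniotomies on a 100x100 grid
yy, xx = np.mgrid[:100, :100]
plan = (yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2
actual = (yy - 52) ** 2 + (xx - 51) ** 2 < 20 ** 2
score = dice(plan, actual)   # high overlap despite a ~2 px offset
```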