To compensate for non-uniform deformation due to patient motion within and between fractions in image-guided radiation therapy, a block matching technique was adapted and implemented on a standard graphics processing unit (GPU) to determine the displacement vector field that describes the nonlinear transformation between successive CT images.
Normalized cross correlation (NCC) was chosen as the similarity metric for the matching step, with regularization of the displacement vector field performed by Gaussian smoothing. A multi-resolution framework was adopted to further
improve the performance of the algorithm. The nonlinear registration algorithm was first applied to estimate the intra-fractional motion from 4D lung CT images. It was also used to calculate the inter-fractional organ deformation between planning CT (PCT) and daily cone-beam CT (CBCT) images of the thorax. For both experiments, manual landmark-based evaluation was performed to quantify the registration performance. In 4D CT registration, the mean target registration error (TRE) over 5 cases was 1.75 mm; in PCT-CBCT registration, the TRE of one case was 2.26 mm. Compared to the CPU-based AtamaiWarp
program, our GPU-based implementation achieves comparable registration accuracy and is ~25 times faster. The results
highlight the potential utility of our algorithm for online adaptive radiation treatment.
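To make the matching step concrete, the following is a minimal, single-resolution sketch of NCC block matching with Gaussian regularization of the displacement vector field, written with NumPy/SciPy on the CPU rather than on the GPU. The block size, search radius, and smoothing sigma are illustrative assumptions, not the parameters used in the paper, and the multi-resolution pyramid is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ncc(a, b):
    """Normalized cross correlation between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def block_match(fixed, moving, block=8, search=4, sigma=2.0):
    """Estimate a per-block displacement field mapping `fixed` to `moving`."""
    H, W = fixed.shape
    dvf = np.zeros((H // block, W // block, 2))
    for bi in range(H // block):
        for bj in range(W // block):
            y, x = bi * block, bj * block
            ref = fixed[y:y + block, x:x + block]
            best, best_d = -np.inf, (0, 0)
            # Exhaustive search over the local neighborhood for the best NCC.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= H and 0 <= xx and xx + block <= W:
                        score = ncc(ref, moving[yy:yy + block, xx:xx + block])
                        if score > best:
                            best, best_d = score, (dy, dx)
            dvf[bi, bj] = best_d
    # Regularize the displacement vector field by Gaussian smoothing.
    dvf[..., 0] = gaussian_filter(dvf[..., 0], sigma)
    dvf[..., 1] = gaussian_filter(dvf[..., 1], sigma)
    return dvf
```

A GPU implementation would evaluate the NCC search for many blocks in parallel, which is where the reported ~25x speedup over the CPU comes from.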
During surgery for epilepsy, it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. We extend our previously presented visualization method to achieve this goal by fusing a direct (photographic) view of the surgical field with the 3D patient
model. To correlate the preoperative plan with the intraoperative surgical scene, an intensity-based perspective
3D-2D registration was employed for camera pose estimation. The 2D photographic image was then texture-mapped
onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ
direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray-casting. This
is advantageous compared with point-based or other feature-based registration methods, since no intermediate feature extraction is required. To validate our registration algorithm, we first used a point-based 3D-2D registration, which was itself validated against ground truth from simulated data; the intensity-based 3D-2D registration method was then validated using the point-based registration result as the gold standard. The registration error of the intensity-based 3D-2D method was approximately 3 mm when the initial pose was close to the gold standard. Application of the proposed
method for correlating fMRI maps with intraoperative cortical stimulation is shown for surgical planning in an
epilepsy patient.
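As a sketch of the intensity-based 3D-2D step, the fragment below refines a camera pose by maximizing NCC between a rendered perspective view and the photograph. Here `render_view` is a hypothetical stand-in for the GPU ray-casting renderer, and the Powell optimizer and 6-DOF pose vector are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross correlation between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def estimate_camera_pose(volume, photo, render_view, pose0):
    """Refine a 6-DOF pose (3 rotations, 3 translations) by maximizing NCC
    between the rendered perspective view and the 2D photograph."""
    def cost(pose):
        # render_view(volume, pose) -> 2D perspective rendering of the volume.
        return -ncc(render_view(volume, pose), photo.astype(float))
    # Derivative-free search; gradient-based schemes would also work.
    res = minimize(cost, pose0, method="Powell")
    return res.x
```

Consistent with the reported results, such an intensity-driven search needs an initial pose reasonably close to the true one to converge to the correct registration.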
We describe an interactive multimodality display environment, which combines anatomical CT, MRI, and functional MRI images with photographs taken during surgical procedures, to provide comprehensive localization information regarding epileptic seizure foci and their anatomical context. Our environment incorporates several unique features, including GPU-accelerated volume rendering and image fusion, versatile GPU-based clipping of volumetric images, and the ability to enhance the information delivered to the surgeon by fusing a direct (photographic) view of the surgical field with the volumetric image. We employ direct volume rendering for the fusion of multiple volumes using GPU-accelerated ray-casting. In addition, to expose internal structures during volume fusion, we have developed user interaction tools that enable the surgeon to explore the fused volume using clipping-cube and cutaway clipping schemes. The fusion of intraoperative images onto the image volume allows enhanced visualization of the surgical procedure sites within the surgical planning environment. These techniques have been implemented as Visualization Toolkit (VTK) classes using OpenGL fragment shader programs and Python modules, and have been successfully integrated into our surgical planning environment "EpilepsyViewer". The results and performance of our GPU-based approach are compared with similar techniques in VTK, demonstrating that the use of the GPU can greatly accelerate visualization and enable increased flexibility of the system in the operating room. The photographic overlay shows good correspondence between the intraoperative photographs and the preoperative image model. This environment can also be extended for use in other neurosurgical planning tasks.
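The GPU ray-casting and clipping-cube interaction described above can be illustrated with stock VTK classes in Python. The sketch below is a minimal example using a synthetic vtkRTAnalyticSource volume and placeholder transfer functions; it is not code from EpilepsyViewer itself.

```python
import vtk

source = vtk.vtkRTAnalyticSource()  # stand-in for a loaded CT/MRI volume
source.Update()

# GPU-accelerated ray-casting mapper, as used for direct volume rendering.
mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(source.GetOutputPort())

# Clipping-cube: six axis-aligned planes exposing internal structure.
planes = vtk.vtkPlanes()
planes.SetBounds(-10, 10, -10, 10, -10, 0)
mapper.SetClippingPlanes(planes)

# Placeholder grayscale ramp transfer functions.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(300, 1.0, 1.0, 1.0)
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(300, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```

Moving the clipping bounds interactively reproduces the clipping-cube exploration described in the abstract; the cutaway scheme and multi-volume fusion require the custom fragment-shader classes the authors describe.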
A direct fuzzy model reference adaptive control scheme is proposed in this paper. The designed controller employs direct feedback linearization, coupled with a pseudo-control variable, to linearize the nonlinear system. A fuzzy adaptive compensator based on the universal approximation property is designed to cancel the pseudo-control error, disturbances, and system interconnections. Furthermore, a dynamic compensator is designed to stabilize the system. The proposed algorithm is proven asymptotically stable by Lyapunov stability theory. Simulation results are given to demonstrate that the designed system achieves perfect tracking performance.
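As a rough illustration of this class of scheme, the sketch below applies feedback linearization with a pseudo-control and a Gaussian-basis fuzzy compensator, adapted by a Lyapunov-motivated law, to a scalar second-order plant. The plant, gains, and reference trajectory are invented for illustration and are not the system studied in the paper.

```python
import numpy as np

centers = np.linspace(-2, 2, 9)           # fuzzy membership function centers
def phi(z):                                # normalized Gaussian fuzzy basis
    m = np.exp(-((z - centers) ** 2) / 0.5)
    return m / m.sum()

f_true = lambda x, xd: -np.sin(x) - 0.5 * xd   # "unknown" plant nonlinearity
lam, k, gamma, dt = 2.0, 5.0, 50.0, 1e-3       # illustrative gains
W = np.zeros_like(centers)                     # adaptive fuzzy weights
x, xd = 0.0, 0.0

for step in range(int(10 / dt)):
    t = step * dt
    r, rd, rdd = np.sin(t), np.cos(t), -np.sin(t)  # reference model output
    e, ed = x - r, xd - rd
    s = ed + lam * e                               # filtered tracking error
    f_hat = W @ phi(x)                             # fuzzy compensator output
    u = rdd - lam * ed - f_hat - k * s             # pseudo-control law
    W = W + dt * gamma * s * phi(x)                # Lyapunov-based adaptation
    xdd = f_true(x, xd) + u                        # plant dynamics
    x, xd = x + dt * xd, xd + dt * xdd

print(f"final tracking error: {abs(x - np.sin(10)):.4f}")
```

With the Lyapunov function V = s^2/2 + ||W - W*||^2/(2*gamma), the chosen adaptation law cancels the weight-error cross terms and yields dV/dt = -k*s^2 <= 0, which is the standard argument behind the asymptotic stability claim.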