While convolutional neural networks have shown promise in medical image registration, their inherent complexity limits their registration speed, particularly for surgical applications. Additionally, traditional feature-based matching methods struggle with multi-modal forearm image registration due to the simplicity of forearm skin textures. To address these issues, we propose a robust forearm feature point extraction method based on the forearm’s structural invariance, and combine it with thin plate spline interpolation to achieve multi-modal forearm registration. Our approach introduces the Forearm Feature Representation Curve (FFRC) and the Multi-Modal Image Registration Framework (FAM) for aligning forearm images with digital anatomical models. FFRC identifies feature points based on forearm structural characteristics, and FAM employs FFRC to pre-screen matching points before applying an affine transformation. For deformable registration, the framework adds Thin Plate Spline interpolation (FAM-TPS), using the matched points as control points. In our experiments, both FAM and FAM-TPS demonstrate high registration accuracy, with FAM-TPS outperforming conventional feature-based methods. Our framework excels at registering forearm images with varying rotation angles, and we observe a strong correlation between the feature curve’s peak value and the rotation angle. These results affirm the effectiveness of our approach in achieving precise and resilient registration.
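The deformable step described above (thin plate spline interpolation driven by matched control points) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the matched point pairs are hypothetical placeholders standing in for FFRC-screened correspondences, and SciPy's RBF interpolator with a thin-plate-spline kernel is used as the TPS solver.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical matched point pairs (source -> target), standing in for
# correspondences that survived FFRC-based pre-screening.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = np.array([[0.1, 0.0], [1.1, 0.1], [0.0, 1.1], [1.0, 1.0], [0.55, 0.6]])

# Fit a thin plate spline mapping; with zero smoothing (the default), each
# source control point is mapped exactly onto its matched target point.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp an arbitrary grid of coordinates through the fitted spline, as one
# would do for every pixel coordinate of the moving image.
grid = np.stack(
    np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4)), axis=-1
).reshape(-1, 2)
warped = tps(grid)
```

Because the TPS interpolates the control points exactly while minimizing bending energy elsewhere, outlier matches distort the whole warp, which is why the pre-screening step matters.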
DenseFuse is a recent approach to infrared and visible image fusion. To address DenseFuse's single-encoder design, we propose a dual-encoder DenseNet (DEDNet), which develops a heterogeneous dual encoder for the two image types and a fusion strategy based on channel selection and Gaussian filtering. The proposed method comprises encoding layers, a fusion strategy, and a decoder, in which the encoding layers consist of dual encoders. Since infrared and visible images have different imaging mechanisms, the dual encoders extract the features of infrared and visible images more effectively and improve the quality of the fused images. The fusion strategy, based on l1-norm channel selection and Gaussian filtering, improves the structural integrity and spatial correlation of the fused features. In DEDNet, the infrared image is fed to the infrared encoder to obtain infrared features, while the visible image is fed to the visible encoder to obtain visible features. The fusion strategy then fuses the infrared and visible features both structurally and spatially. Finally, the decoder reconstructs the fused features to produce the fused image. Experiments show that DEDNet achieves competitive results in both subjective and objective evaluation metrics compared with other fusion approaches.
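The fusion strategy described above can be sketched in a simplified form. This is a plausible reading, not the paper's exact method: per-pixel l1-norm activity maps across channels are Gaussian-filtered into soft spatial weights, and the two feature stacks are blended as a convex combination. The function name, the `sigma` value, and the feature shapes are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_features(f_ir, f_vis, sigma=2.0):
    """Sketch of an l1-norm + Gaussian-filtering fusion of encoder features.

    f_ir, f_vis: arrays of shape (C, H, W) from the infrared and visible
    encoders (hypothetical shapes). Returns a fused (C, H, W) feature map.
    """
    # Per-pixel activity: l1-norm across the channel dimension.
    a_ir = np.abs(f_ir).sum(axis=0)
    a_vis = np.abs(f_vis).sum(axis=0)
    # Gaussian filtering smooths the activity maps, which is one way to
    # improve the spatial correlation of the resulting weights.
    a_ir = gaussian_filter(a_ir, sigma)
    a_vis = gaussian_filter(a_vis, sigma)
    # Normalize into soft weights; eps guards against division by zero.
    eps = 1e-8
    w_ir = a_ir / (a_ir + a_vis + eps)
    # Convex combination per pixel, broadcast over channels.
    return w_ir[None] * f_ir + (1.0 - w_ir)[None] * f_vis

rng = np.random.default_rng(0)
f_ir = rng.standard_normal((4, 8, 8))
f_vis = rng.standard_normal((4, 8, 8))
fused = fuse_features(f_ir, f_vis)
```

Because the weights are a per-pixel convex combination, each fused value lies between the two source feature values at that location, so neither modality's response can be amplified beyond its original magnitude.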