Prostate cancer is the second most prevalent cancer among men globally. Accurate segmentation of the prostate and its central gland plays a pivotal role in detecting abnormalities within the prostate, paving the way for early detection of prostate cancer, quantitative analysis, and subsequent treatment planning. Micro-ultrasound (MUS) is a novel ultrasound technique that operates at frequencies above 20 MHz and offers superior resolution compared to conventional ultrasound, making it particularly effective for visualizing fine anatomical structures and pathological changes. In this paper, we leverage deep learning (DL) techniques to segment the prostate and its central gland on micro-ultrasound images, investigating their potential in prostate cancer detection. We trained our DL model on MUS images from 80 patients using five-fold cross-validation. We achieved Dice similarity coefficient (DSC) scores of 0.918 and 0.833, and average surface-to-surface distances (SSD) of 1.176 mm and 1.795 mm, for the prostate and the central gland, respectively. We further evaluated our method on a publicly available MUS dataset, achieving a DSC of 0.957 and a Hausdorff distance (HD) of 1.922 mm for prostate segmentation. These results outperform the current state-of-the-art (SOTA).
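The Dice similarity coefficient reported above measures the overlap between a predicted segmentation mask and a reference mask. As a minimal sketch (not the authors' evaluation code), the metric can be computed on binary NumPy arrays as follows:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|), in [0, 1].
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=np.uint8)
b = np.zeros((4, 4), dtype=np.uint8)
a[1:3, 1:3] = 1      # 4 foreground pixels
b[1:3, 1:4] = 1      # 6 foreground pixels, 4 shared with a
print(round(dice_coefficient(a, b), 3))  # 2*4 / (4+6) = 0.8
```

Surface-distance metrics such as SSD and the Hausdorff distance are computed on the mask boundaries instead of the full volumes, so they penalize boundary errors that volumetric overlap can hide.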
The alignment of MRI and ultrasound images of the prostate is crucial for detecting prostate cancer during biopsy and directly affects the accuracy of prostate cancer diagnosis. However, due to the low signal-to-noise ratio of ultrasound images and the differing appearance of the prostate between MRI and ultrasound, it is challenging to align prostate MRI and ultrasound images efficiently and accurately. This study presents an effective affine transformation method that automatically registers prostate MRI and ultrasound images. In real-world clinical practice, it may improve the effectiveness of prostate cancer biopsies and the accuracy of prostate cancer diagnosis.
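The abstract does not detail how the affine parameters are obtained, but the core idea of affine registration can be illustrated with a simple sketch: given corresponding point pairs between the two images (here hypothetical landmarks, not the authors' automatic method), a 2-D affine transform x → Ax + t is estimated by least squares and applied to map one coordinate frame onto the other.

```python
import numpy as np

def apply_affine(points, A, t):
    """Apply the affine transform x -> A @ x + t to an (N, 2) point array."""
    return points @ A.T + t

def estimate_affine_lstsq(src, dst):
    """Least-squares estimate of the 2-D affine transform mapping src -> dst.

    Solves dst ~ [src | 1] @ M for the 3x2 parameter matrix M,
    then splits M into the 2x2 linear part A and translation t.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous coords
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    A, t = M[:2].T, M[2]
    return A, t

# Toy example: recover a known 30-degree rotation plus translation
theta = np.deg2rad(30)
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = apply_affine(src, A_true, t_true)

A_est, t_est = estimate_affine_lstsq(src, dst)
print(np.allclose(A_est, A_true) and np.allclose(t_est, t_true))  # True
```

In practice, intensity-based registration frameworks optimize the affine parameters directly from image similarity rather than from explicit landmarks, which is what makes the low signal-to-noise ratio of ultrasound a central difficulty.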
OIPAV (Ophthalmic Images Processing, Analysis and Visualization) is a cross-platform software package designed specifically for ophthalmic images. It provides a wide range of functionalities, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization, to help researchers and clinicians work with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data acquired from different imaging devices, streamlines image-processing workflows, and improves quantitative evaluation. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. Given its scalability and extensibility, we believe the software can be widely applied in the ophthalmology field.