Prostate cancer diagnosis is performed by pathologists through microscopic analysis of tissue samples from the prostate gland. The development of automatic acquisition and digitization technologies has enabled the construction of large collections of digitized histopathology slides, which are usually accompanied by clinical information and other types of metadata. These collections of cases, along with their metadata, have the potential to be an invaluable resource for the analysis of new challenging cases, supporting diagnosis, prognosis, and theragnosis decision tasks. This paper presents a multimodal retrieval system based on a supervised multimodal kernel semantic embedding model that supports the search for relevant cases in a multimodal database, combining images, i.e. histopathology slides, and text, i.e. pathologists' reports. The system was tested on a multimodal prostate adenocarcinoma dataset composed of whole-slide images of tissue samples, pathologists' reports, and grading information based on the Gleason score. The system shows high performance for multimodal information retrieval, with a mean average precision (MAP) of 0.6263.
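The reported figure of 0.6263 uses mean average precision, the standard ranked-retrieval metric: for each query, precision is recorded at every rank where a relevant case appears, averaged over all relevant cases, and these per-query scores are then averaged. A minimal sketch of the computation (function names and the ranked-list representation are illustrative, not from the paper):

```python
def average_precision(retrieved, relevant):
    """AP for one query: mean of precision@k taken at each rank k
    where a relevant item appears, normalized by |relevant|."""
    hits = 0
    precisions = []
    for k, item in enumerate(retrieved, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """MAP over a list of (retrieved_ranking, relevant_set) pairs."""
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)
```

For example, a ranking `["a", "b", "c"]` against the relevant set `{"a", "c"}` yields AP = (1/1 + 2/3) / 2 ≈ 0.833; averaging such scores across all test queries gives the MAP figure quoted above.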