KEYWORDS: Mammography, Transformers, Feature extraction, Breast cancer, Cancer detection, Digital mammography, Breast, Artificial intelligence, Deep learning
Deep-learning-based models have been proposed as an automated second reader for mammograms that might help reduce radiologists’ workload and improve screening accuracy. However, the inherent traits of mammograms, namely significantly higher resolutions and smaller regions of interest (ROIs) than natural images, constrain the adaptability of deep neural networks designed for natural image analysis to mammogram analysis. In this work, we propose a novel neural network that effectively detects breast cancer on screening mammograms and addresses the above issues. First, we use a local self-attention-based Swin Transformer as the backbone to select the most informative patch regions from the whole mammogram. We then use a second CNN-based network to extract fine-grained features from the selected patches. Finally, we employ a fusion module that aggregates global and local information to make a prediction. The final loss function combines the predictions from both the transformer and CNN modules. With local self-attention and a hierarchical structure, our backbone can effectively model the relationships between ROIs (e.g., masses or micro-calcifications) of different sizes and their surrounding tissues, thus introducing meaningful contextual information for robust feature extraction. Experimental results show that our model achieves state-of-the-art performance, with a classification AUC of 0.856 on a public mammogram dataset.
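The patch-selection step can be illustrated with a minimal sketch: given a per-patch saliency score (assumed here to be pooled from the backbone's attention maps), keep the top-k patches for the CNN branch. The function, score pooling, and box layout below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def select_top_patches(attn_scores, patch_boxes, k=4):
    """Keep the k patches with the highest pooled attention score.

    attn_scores: (num_patches,) saliency per patch (assumed pooled
    from the backbone's attention maps); patch_boxes:
    (num_patches, 4) boxes (row, col, h, w) in the full mammogram.
    """
    order = np.argsort(attn_scores)[::-1][:k]
    return order, patch_boxes[order]

# toy example: six patches, patches 3 and 1 most salient
scores = np.array([0.10, 0.50, 0.20, 0.90, 0.30, 0.05])
boxes = np.arange(24).reshape(6, 4)
idx, top_boxes = select_top_patches(scores, boxes, k=2)
print(idx)  # [3 1]
```

The selected boxes would then be cropped from the full-resolution mammogram and fed to the second, fine-grained CNN.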
KEYWORDS: Breast density, Breast, 3D modeling, Data modeling, Transformers, Education and training, Image segmentation, Tissues, Magnetic resonance imaging, Breast cancer
Early detection of breast cancer is important for improving survival rates. Based on accurate and tissue-specific risk factors, such as breast density and background parenchymal enhancement (BPE), risk-stratified screening can help identify high-risk women and provide personalized screening plans, ultimately leading to better outcomes. Measurements of density and BPE are carried out through image segmentation, but volumetric measurements may not capture the qualitative scale of these tissue-specific risk factors. This study aimed to create deep regression models that estimate the interval scale underlying the BI-RADS density and BPE categories. These models incorporate a 3D convolutional encoder and transformer layers to model the time-sequential data in DCE-MRI. The correlation between the models and the BI-RADS categories was evaluated with Spearman coefficients. Using 1024 patients with a BI-RADS assessment score of 3 or less and no prior history of breast cancer, the models were trained on 50% of the data and tested on the remaining 50%. The density and BPE ground-truth labels were extracted from the radiology reports using BI-RADS BERT. The ordinal classes were then translated to a continuous interval scale using a linear link function. The density regression model correlates strongly with the BI-RADS category (0.77), slightly lower than segmentation %FGT. The BPE regression model with transformer layers shows a moderate correlation with radiologists (0.52), similar to segmentation %BPE. The deep regression transformer has an advantage over segmentation in that it does not need time-point image registration, making it easier to use.
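The ordinal-to-interval translation can be sketched minimally. The evenly spaced placement below is an assumed form of the linear link, not necessarily the exact mapping used in the study; it only illustrates how ordinal BI-RADS categories become continuous regression targets and how predictions map back.

```python
import numpy as np

# Hypothetical mapping: BI-RADS density categories (a-d) placed on an
# evenly spaced interval scale via a linear link, so a regression
# model can be trained on continuous targets.
CATEGORIES = ["a", "b", "c", "d"]

def ordinal_to_interval(labels):
    """Linear link: class index k -> (k + 0.5) / n_classes in [0, 1]."""
    n = len(CATEGORIES)
    idx = np.array([CATEGORIES.index(l) for l in labels])
    return (idx + 0.5) / n

def interval_to_ordinal(values):
    """Inverse link: bin a continuous prediction back to a category."""
    n = len(CATEGORIES)
    idx = np.clip((np.asarray(values) * n).astype(int), 0, n - 1)
    return [CATEGORIES[i] for i in idx]

y = ordinal_to_interval(["a", "c", "d"])
print(y)                       # [0.125 0.625 0.875]
print(interval_to_ordinal(y))  # ['a', 'c', 'd']
```

Training against such continuous targets is what lets the model express within-category differences that the discrete BI-RADS scale cannot.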
Immune phenotype data, specifically the densities and spatial distribution of immune cells, are now frequently included in the clinical pathology report, as these features of the cells in the tumor microenvironment (TME) have been shown to be associated with prognosis. In addition, immune therapeutics, which aim at manipulating the patients’ immune system to kill cancer cells, have recently been approved for the treatment of triple-negative breast cancers (TNBCs). Thus, quantifying the immune phenotype of the cancer could be important both for prognostication and for prediction of therapy response. We studied the immune phenotype of 42 breast cancers using immunofluorescence protein multiplexing and quantitative image analysis. After sectioning, formalin-fixed paraffin-embedded tissues were sequentially stained with a panel of fluorescently labelled antibodies and imaged with the multiplexer (Cell DIVE, Leica Biosystems). Composite images of antibody-stained sections were then analysed using specialized digital pathology software (HALO, Indica Labs). Binary thresholding was conducted to identify and quantify the densities of various immune lineage subsets (T lymphocytes and macrophages). Their cellular localisation was mapped, and the spatial features of cellular arrangement were evaluated using a k-nearest neighbor graph (KNNG) method and Louvain community-proximity clustering. The spatial relationship of various immune and cancer cell types was quantified to assess whether cellular arrangements and structures differed among breast cancer subtypes. Our work demonstrates the use of molecular and cellular imaging in quantifying features of the tumor microenvironment for breast cancer classification, and the application of KNNG in studying spatial biology.
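The structure underlying the KNNG analysis, a k-nearest-neighbour graph over cell centroids, can be built with a brute-force sketch like the following. This is illustrative only; production spatial-biology analyses would use a spatial index (e.g. a k-d tree) rather than a full distance matrix.

```python
import numpy as np

def knn_graph(points, k=2):
    """Directed k-nearest-neighbour adjacency matrix over 2D cell
    centroids (brute force; fine for a single field of view)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)        # no self-edges
    nbrs = np.argsort(dist, axis=1)[:, :k]
    adj = np.zeros((n, n), dtype=bool)
    adj[np.repeat(np.arange(n), k), nbrs.ravel()] = True
    return adj

# two tight pairs of cells: each cell's nearest neighbour is its partner
cells = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
A = knn_graph(cells, k=1)
print(A.astype(int))
```

Community-detection methods such as Louvain clustering then operate on this adjacency structure to find spatial neighbourhoods of cells.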
ER and PR (estrogen and progesterone receptor) and HER2 (human epidermal growth factor receptor 2) status are assessed using immunohistochemistry and reported in standard clinical workflows, as they provide valuable information for treatment planning. The protein Ki67 has also been suggested as a prognostic biomarker but is not routinely evaluated clinically due to insufficient quality assurance. Routine pathological practice usually relies on small biopsies, so reducing tissue consumption is necessary to save material for special assays. For this purpose, we developed and validated an automatic system for segmenting and identifying ER-, PR-, HER2-, and Ki67-positive cells from hematoxylin and eosin (H&E) stained tissue sections, using multiplexed immunofluorescence (MxIF) images at the cellular level as a reference standard. In this study, we used 100 tissue-microarray cores sampled from 56 cases of invasive breast cancer. For ER, we extracted cell nucleus images (HoverNet) from the H&E images and assigned each cell nucleus as ER positive vs. negative based on the corresponding MxIF signals (whole-cell segmentation with DeepCSeg) after H&E to MxIF image registration. We trained a ResNet-18 and validated the model on a separate test set for classifying the cells as positive vs. negative for ER, and performed the same experiment for the other three markers. We obtained areas under the receiver operating characteristic curve (AUCs) of 0.82 (ER), 0.85 (PR), 0.75 (HER2), and 0.82 (Ki67), respectively. Our study demonstrates the feasibility of using machine learning to identify molecular status at the cellular level directly from H&E slides.
Pathologists regularly use ink markings on histopathology slides to highlight specific areas of interest or orientation, making them an integral part of the workflow. Unfortunately, digitization of these ink-annotated slides hinders computer-aided analyses, particularly deep learning algorithms, which require clean data free from artifacts. We propose a methodology that can identify and remove ink markings for the purpose of computational analysis. We propose a two-stage network with a binary classifier for ink filtering and Pix2Pix for ink removal. We trained our network by artificially generating pseudo ink markings using only clean slides, requiring no manual annotation or curation of data. Furthermore, we demonstrate our algorithm’s efficacy on an independent dataset of H&E-stained breast carcinoma slides scanned before and after the removal of pen markings. Our quantitative analysis shows promising results, achieving 98.7% accuracy for the binary classifier. For Pix2Pix, we observed a 65.6% increase in structural similarity index, a 21.3% increase in peak signal-to-noise ratio, and a 30% increase in visual information fidelity. As only clean slides are required for training, the pipeline can be adapted to multiple ink colors or new domains, making it easy to deploy on different sets of histopathology slides. Code and trained models are available at: https://github.com/Vishwesh4/Ink-WSI.
The tumor microenvironment (TME) plays an important role in driving cancer progression and affecting treatment efficacy. Cellular components of the TME include various immune subsets (tumor-infiltrating lymphocytes (TILs) and macrophages), cancer-associated fibroblasts (CAFs), and vascular cells. While the immune lineage has been a main focus of intensive research on the TME, CAFs have also been shown to be highly heterogeneous in their molecular phenotype and function. Using a protein marker immunofluorescence multiplexing system (Cell DIVE, Leica Microsystems) and quantitative imaging tools, we investigated the identity of various CAF clusters based on the expression of α-Smooth Muscle Actin (αSMA) and Fibroblast Activation Protein (FAP), and compared their distributions across breast cancer subtypes. We determined the cell counts of various CAF subsets using binary counting and identified heterogeneous presentations of clusters using K-means clustering and Uniform Manifold Approximation and Projection (UMAP). We found that the abundance of CAF clusters varied among breast cancer subtypes. An integrated analysis of CAF cluster composition in each cancer and the transcriptomic data of CAF-associated genes such as CD29, IL6, and PDGFRβ was performed. We observed increased densities of proliferative, αSMA-positive CAFs in basal-like breast cancers that exhibited a co-expression signature of CAF-associated genes. Finally, an association analysis of CAF cluster composition and gene expression with a previously identified radiomic phenotype was performed, but no significant correlation was detected.
Cytometry plays essential roles in immunology and oncology. Recent advancements in cellular imaging allow more detailed characterization of cells by labeling each cell with multiple protein markers. The increase in dimensionality makes manual analysis challenging. Clustering algorithms provide a means of phenotyping high-dimensional cell populations in an unsupervised manner for downstream analysis. The choice and usability of the methods are critical in practice. The literature provides comprehensive studies on these topics using publicly available flow cytometry data, validating the cell phenotypes produced by these methods against manually gated cell populations. To extend this knowledge to the identification of cell phenotypes, including unknown cell populations in our dataset, we conducted an exploratory study using clinically relevant tissue types as the reference standard. Using our in-house database of multiplexed immunofluorescence images of breast cancer tissue microarrays (TMAs), we experimented with two commonly used algorithms (PhenoGraph and FlowSOM). Our pipeline includes: 1) cell phenotyping using PhenoGraph/FlowSOM; 2) clustering TMA cores into four groups using the percentage of each cell phenotype (PhenoGraph/Spectral/K-means); 3) comparing the tissue groups to clinically relevant subtypes that were manually assigned based on the immunohistochemistry scores of serial sections. We experimented with different hyperparameter settings and input markers. Cell phenotyping using PhenoGraph with 10 markers combined with spectral tissue clustering yielded the highest mean F-measure (averaged over four tissue subtypes) of 0.71. In general, our results showed that cell phenotyping with PhenoGraph yielded better performance with larger variations than FlowSOM, which gives very consistent results.
Digital pathology involves the digitization of high-quality tissue biopsies on microscope slides for use by physicians in patient diagnosis and prognosis. These slides have become exciting avenues for deep learning applications to improve care. Despite this, labels are difficult to produce and thus remain rare. In this work, we create a sparse capsule network with a spatial broadcast decoder to perform representation learning on segmented nuclei patches extracted from the BreastPathQ dataset. The network produced a disentangled latent space for factors such as rotation, and logistic regression classifiers trained on the latent space performed well.
Purpose: The Breast Pathology Quantitative Biomarkers (BreastPathQ) Challenge was a Grand Challenge organized jointly by the International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA). The task of the BreastPathQ Challenge was computerized estimation of tumor cellularity (TC) in breast cancer histology images following neoadjuvant treatment.
Approach: A total of 39 teams developed, validated, and tested their TC estimation algorithms during the challenge. The training, validation, and testing sets consisted of 2394, 185, and 1119 image patches originating from 63, 6, and 27 scanned pathology slides from 33, 4, and 18 patients, respectively. The summary performance metric used for comparing and ranking algorithms was the average prediction probability concordance (PK) using scores from two pathologists as the TC reference standard.
Results: Test PK performance ranged from 0.497 to 0.941 across the 100 submitted algorithms. The submitted algorithms generally performed well in estimating TC, with high-performing algorithms obtaining comparable results to the average interrater PK of 0.927 from the two pathologists providing the reference TC scores.
Conclusions: The SPIE-AAPM-NCI BreastPathQ Challenge was a success, indicating that artificial intelligence/machine learning algorithms may be able to approach human performance for cellularity assessment and may have some utility in clinical practice for improving efficiency and reducing reader variability. The BreastPathQ Challenge can be accessed on the Grand Challenge website.
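The ranking metric above, prediction probability concordance (PK), rewards algorithms whose estimates order cases the same way the pathologists' reference scores do. A simplified pairwise concordance (an illustrative sketch, not the exact PK formula, which also treats reference ties differently) can be written as:

```python
import numpy as np

def concordance(reference, prediction):
    """Simplified pairwise concordance: over all pairs with distinct
    reference scores, the fraction where the prediction orders them
    the same way (ties in the prediction count as half)."""
    ref = np.asarray(reference, float)
    pred = np.asarray(prediction, float)
    agree = half = total = 0
    for i in range(len(ref)):
        for j in range(i + 1, len(ref)):
            if ref[i] == ref[j]:
                continue                 # skip reference ties
            total += 1
            d = (ref[i] - ref[j]) * (pred[i] - pred[j])
            if d > 0:
                agree += 1
            elif d == 0:
                half += 1
    return (agree + 0.5 * half) / total

tc_ref = [0.0, 0.2, 0.5, 0.9]       # reference cellularity scores
tc_alg = [0.1, 0.15, 0.6, 0.8]      # algorithm estimates (same ordering)
print(concordance(tc_ref, tc_alg))  # 1.0
```

Because only the ordering matters, an algorithm need not match the pathologists' absolute scores to rank highly, which suits a subjective reference standard.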
For women at high risk (>25% lifetime risk) of developing breast cancer, combination screening with mammography and magnetic resonance imaging (MRI) is recommended. Risk stratification is based on current modeling tools for risk assessment; however, adding radiological features may improve the AUC. Validating tissue features in MRI requires large-scale epidemiological studies across health centres, so a robust, fully automated segmentation method is essential. This presents a challenge of imaging-domain adaptation in deep learning. Here, we present a breast segmentation pipeline that uses multiple U-Net segmentation models trained on different image types. We use Monte Carlo dropout to measure each model's uncertainty, allowing the most appropriate model to be selected when the image domain is unknown. We show that our pipeline achieves an average Dice similarity of 0.78 for fibroglandular tissue segmentation and has good adherence to radiologist assessment.
Sensitivity of screening mammography is reduced by increased mammographic density (MD). MD can obscure or “mask” developing lesions, making them harder to detect. Predicting masking risk may be an effective tool for a stratified screening program in which selected women receive alternative screening modalities that are less susceptible to masking. Here, we investigate whether artificial intelligence can accurately predict masking risk and compare its performance to that of conventional BI-RADS density classification. The analysis was based on mammograms of 214 subjects, comprising 147 women with a screen-detected (SD) or “non-masked” cancer and 67 who developed a non-screen-detected (NSD), or presumably masked, cancer within 2 years following a negative screen. Prior to analysis, mammograms were pre-processed into quantitative MD maps using an in-house algorithm. A transfer learning approach was used to train a convolutional neural network (CNN) based on VGG-16, using seven-fold cross-validation, to classify masking status. A two-step transfer learning method was also used, in which the pre-trained CNN was initially trained on 5,865 mammograms to classify BI-RADS density category and then trained for masking status. Using BI-RADS density as a masking risk predictor yields an AUC of 0.64 (95% CI: 0.57-0.71). The CNN-mask yielded an AUC of 0.76 (0.68-0.81). Combining the CNN-mask with our previous hand-crafted masking risk predictor improved the AUC to 0.78 (0.70-0.83). The combined AUC improved further to 0.81 (0.72-0.90) when the analysis was restricted to NSD cancers surfacing clinically within one year after a negative screen. The two-step transfer learning yielded similar performance. This work suggests that a CNN masking risk predictor can be used to guide a stratified screening program to overcome the limitations of screening mammography in dense breasts.
Neoadjuvant therapy (NAT) is an option for locally advanced breast cancer patients to downsize the tumour, allowing for less extensive surgery, better cosmetic outcomes, and fewer post-operative complications. The quality of NAT is assessed by pathologists, who examine tissue sections to reveal the efficacy of treatment and associate the outcome with the patient's prognosis. Many factors are involved in assessing treatment efficacy, including the amount of residual cancer within the tumour bed. Currently, the process of assessing residual tumour burden is qualitative, which may be time-consuming and impaired by inter-observer variability. In this study, an automated method was developed to localize, and subsequently classify, cell nuclei into three categories, lymphocyte (L), benign epithelial (BE), and malignant epithelial (ME), from post-NAT tissue slides of breast cancer. A fully convolutional network (FCN) was developed to perform both tasks efficiently. To find the cell nuclei in image patches (localization), the FCN was applied over the entire patch, generating four heatmaps corresponding to the probability of a pixel being the centre of an L, BE, ME, or non-cell nucleus. A non-maximum suppression algorithm was subsequently applied to the generated heatmaps to estimate the nuclei locations. Finally, the highest probability corresponding to each predicted cell nucleus in the heatmaps was used to classify the nucleus into one of the three classes (L, BE, or ME). The final classification accuracy on detected nuclei was 94.6%, surpassing previous machine learning methods based on handcrafted features on this dataset.
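The non-maximum suppression step can be sketched as a local-peak search over a probability heatmap: a pixel is kept as a nucleus centre if it exceeds a threshold and is the maximum of its neighbourhood. The window size and threshold below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def heatmap_nms(heatmap, window=3, thresh=0.5):
    """Report (row, col, prob) for pixels that exceed the threshold
    and are the maximum of their window x window neighbourhood."""
    r = window // 2
    pad = np.pad(heatmap, r, constant_values=-np.inf)
    peaks = []
    for i in range(heatmap.shape[0]):
        for j in range(heatmap.shape[1]):
            local = pad[i:i + window, j:j + window]
            if heatmap[i, j] >= thresh and heatmap[i, j] == local.max():
                peaks.append((i, j, float(heatmap[i, j])))
    return peaks

hm = np.zeros((5, 5))
hm[1, 1] = 0.9   # strong detection
hm[1, 2] = 0.6   # suppressed: adjacent to a stronger response
hm[4, 4] = 0.7   # second detection
print(heatmap_nms(hm))  # [(1, 1, 0.9), (4, 4, 0.7)]
```

Running this over each of the four class heatmaps, then taking the class with the highest probability at each surviving peak, mirrors the localize-then-classify flow described above.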
The residual cancer burden index is a powerful prognostic factor used to measure neoadjuvant therapy response in invasive breast cancers. Tumor cellularity is one component of the residual cancer burden index and is currently measured manually by visual estimation. As such, it is subject to inter- and intra-observer variability and is restricted to discrete values. We propose a method for automatically determining tumor cellularity in digital slides using deep learning techniques. We train a series of ResNet architectures to output both discrete and continuous values and compare our outcomes with scores acquired manually by an expert pathologist. Our configurations were validated on a dataset of image patches extracted from digital slides, each containing various degrees of tumor cellularity. Results showed that, in the case of discrete values, our models were able to distinguish between regions of interest containing tumor and healthy cells with over 97% test accuracy. Overall, we achieved 76% accuracy over four predefined tumor cellularity classes (no tumor, and low, medium, and high tumor cellularity). When computing tumor cellularity scores on a continuous scale, ResNet showed good correlations with manually identified scores, showing potential for computing reproducible scores consistent with expert opinion using deep learning techniques.
The registration of two-dimensional histology images to reference images from other modalities is an important preprocessing step in the reconstruction of three-dimensional histology volumes. This is a challenging problem because of the differences in appearance between histology images and other modalities, and because of the large nonrigid deformations that occur during slide preparation. This paper shows the feasibility of using densely sampled scale-invariant feature transform (SIFT) features and the SIFTFlow deformable registration algorithm for coregistering whole-mount histology images with blockface optical images. We present a method for jointly optimizing the regularization parameters used by the SIFTFlow objective function and use it to determine the most appropriate values for the registration of breast lumpectomy specimens. We demonstrate that tuning the regularization parameters results in significant improvements in accuracy, and we also show that SIFTFlow outperforms a previously described edge-based registration method. The accuracy of the optimized SIFTFlow registration of histology images to blockface images was assessed using an independent test set of images from five different lumpectomy specimens; the mean registration error was 0.32 ± 0.22 mm.
The accurate localization of brain metastases in magnetic resonance (MR) images is crucial for patients undergoing stereotactic radiosurgery (SRS) to ensure that all neoplastic foci are targeted. Computer-automated tumor localization and analysis can improve this process by eliminating inter- and intra-observer variation during MR image reading. Lesion localization is accomplished using adaptive thresholding to extract enhancing objects. Each enhancing object is represented as a vector of features that includes information on object size, symmetry, position, shape, and context. These vectors are then used to train a random forest classifier. We trained and tested the image analysis pipeline on 3D axial contrast-enhanced MR images with the intention of localizing the brain metastases. In our cross-validation study, at the most effective algorithm operating point, we identified 90% of the lesions at a precision rate of 60%.
Purpose: Automatic cell segmentation plays an important role in the reliable diagnosis and prognosis of patients. Most state-of-the-art cell detection and segmentation techniques focus on complicated methods to subtract foreground cells from the background. In this study, we introduce a preprocessing method that leads to better detection and segmentation results compared to a well-known state-of-the-art method. Method: We transform the original red-green-blue (RGB) space into a new space defined by the top eigenvectors of the RGB space. Stretching is done by manipulating the contrast of each pixel value to equalize the color variances. The new pixel values are then inverse-transformed to the original RGB space. This altered RGB image is then used to segment cells. Result: Comparing our method with a well-known state-of-the-art technique revealed a statistically significant improvement on an identical validation set. We achieved a mean F1-score of 0.901. Conclusion: Preprocessing steps that decorrelate colorspaces may improve cell segmentation performance.
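The decorrelation-stretch idea in the Method section can be sketched as follows: rotate the pixels onto the eigenvectors of the RGB covariance, equalise the variance along each axis, and rotate back. The contrast-restoration factor (the mean eigenvalue) is an illustrative choice, not necessarily the paper's exact scaling.

```python
import numpy as np

def decorrelation_stretch(rgb):
    """Project pixels onto the eigenvectors of the RGB covariance,
    equalise the variance along each axis, and transform back."""
    pix = rgb.reshape(-1, 3).astype(float)
    mean = pix.mean(axis=0)
    centred = pix - mean
    evals, evecs = np.linalg.eigh(np.cov(centred, rowvar=False))
    whitened = (centred @ evecs) / np.sqrt(evals)
    # restore an average contrast level and return to RGB space
    out = whitened * np.sqrt(evals.mean()) @ evecs.T + mean
    return out.reshape(rgb.shape)

# synthetic image with strongly correlated, unequal-variance channels
rng = np.random.default_rng(42)
base = rng.normal(size=(16, 16))
img = np.stack([40 * base,
                30 * base + 10 * rng.normal(size=(16, 16)),
                5 * rng.normal(size=(16, 16))], axis=-1) + 120
flat = decorrelation_stretch(img).reshape(-1, 3)
print(np.cov(flat, rowvar=False).round(2))  # ~equal diagonal, ~0 off-diagonal
```

After the transform, the channel covariance is (up to floating point) a scaled identity, so no single colour axis dominates the contrast seen by the downstream segmenter.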
Segmentation of breast tissue in MRI images is an important pre-processing step for many applications. We present a new method that uses a random forest classifier to identify candidate edges in the image and then applies a Poisson reconstruction step to define a 3D surface based on the detected edge points. Using leave-one-patient-out cross-validation, we achieve a Dice overlap score of 0.96 ± 0.02 for T1-weighted non-fat-suppressed images in 8 patients. In a second dataset of 332 images acquired using a Dixon sequence, which was not used in training the random forest classifier, the mean Dice score was 0.90 ± 0.03. Using this approach we have achieved accurate, robust segmentation results using a very small training set.
We aim to develop a CAD system for robust and reliable differential diagnosis of breast lesions, in particular non-mass lesions. A necessary prerequisite for the development of a successful CAD system is the selection of the best subset of lesion descriptors. An important methodological concern is whether the selected features are influenced by the model employed rather than by the underlying characteristic distribution of descriptors for positive and negative cases. Another interesting question is how a particular classifier exploits the relationships between descriptors to increase classification accuracy. In this work we set out to: (1) characterize kinetic, morphological, and textural features among mass and non-mass lesions; (2) examine feature spaces and compare selections of feature subsets based on the similarity of feature importance across feature rankings; (3) compare the performance of two classifiers, binary Support Vector Machines (SVM) and Random Forest (RF), for the task of differentiating between positive and negative cases when using binary classification for mass and non-mass lesions separately or when employing multi-class classification. The breast MRI dataset consists of 243 lesions (173 mass and 70 non-mass). Results show that RF variable importance used with RF binary classification, optimized for mass and non-mass lesions separately, offers the best classification accuracy.
KEYWORDS: Databases, Magnetic resonance imaging, Computer aided diagnosis and therapy, Pathology, Breast, Biopsy, Mammography, Radiology, Computer aided design, Breast cancer
This work presents the creation of a semantic lesion database that will support research into computer-aided lesion detection (CAD) in breast screening MRI. As an adjunct to conventional X-ray mammography, MR-mammography has become a popular screening tool for women with a high risk of breast cancer because of its high sensitivity in detecting malignancy. To address the needs of research and development into CAD for breast MRI, an integrated tool has been designed to collect all lesion-related information, conduct quantitative analysis, and then present crucial data to clinicians and researchers. A lesion database is an essential component of this system, as it provides a link between the DICOM database of MR images and the meta-information contained in the Electronic Patient Record. The patient history, radiology reports from MRI screening visits, and pathology reports are all collected, dissected, and stored in a hierarchical structure in the database. Moreover, internal links between pathology specimens and the location of the corresponding lesion in the image are established, allowing diagnostic information to be displayed alongside the relevant images. If "ground truth" for an imaging visit can be established either by biopsy or by 2-year follow-up, the case is labeled as suitable for use in training and testing CAD algorithms. At present, a total of 1882 lesions (benign/malignant), 200 pathology specimens, and 1794 screening studies (455 CAD studies) over 405 subjects are included in the database. As well as providing an excellent resource for CAD development, the database also has potential applications in resident radiologists' training and education.
The aim of this paper is to validate an image registration pipeline used for histology image alignment. In this work, a set of histology images is registered to the corresponding optical blockface images to build a histology volume. Multi-modality fiducial markers are then used to validate the alignment of the histology images. The fiducial markers are catheters perfused with a mixture of cuttlefish ink and flour. Based on our previous investigations, this fiducial marker is visible in medical images and optical blockface images, and it can also be localized in histology images. These properties make it suitable for validating the registration techniques used for histology image alignment. This paper reports the accuracy of a histology image registration approach by calculating the target registration error using these fiducial markers.
A statistical shape model (SSM) is constructed and applied to automatically segment the breast in 3D MRI. We present an approach to automatically construct an SSM: first, a population of 415 semi-automatically segmented breast MRI volumes is groupwise registered to derive an average shape. Second, a surface mesh is extracted and decimated to reduce the density of the shape representation. Third, landmarks are obtained from the averaged decimated mesh and non-rigidly deformed to each individual shape in the training set using a set of pairwise deformations. Finally, the resulting landmarks are consistently obtained in all cases of the population for statistical shape model generation. A leave-one-out validation demonstrated that a near sub-voxel reconstruction error (2.5 mm) is attainable when using a minimum of 15 modes of variation. The model is further applied to automatically segment the anatomy of the breast in 3D. We illustrate the results of our segmentation approach, in which the model is adjusted to the image boundaries using an iterative segmentation scheme.
Assessing the hemodynamic status of the brain and its variations in response to stimulation is required to understand local cerebral circulatory mechanisms. Dynamic contrast-enhanced imaging of the cerebral microvasculature provides information that can be used to understand the physiology of cerebral diseases. Bolus tracking is used to extract characteristic parameters that quantify local cerebral blood flow. However, post-processing of the data is needed to segment the field of view (FOV) and to perform deconvolution, which removes the effects of the input bolus profile and the path it travels to reach the imaging window. Finding the arterial input function (AIF) and dealing with the ill-posedness of the deconvolution system are the main challenges of this process. We propose using independent component analysis (ICA) to segment the FOV and to extract a local AIF, as well as the venous output function required for deconvolution. This also helps to stabilize the system, as ICA suppresses noise efficiently. Tikhonov regularization (with L-curve analysis to find the best regularization parameter) is used to make the system stable. In-vivo dynamic two-photon laser scanning microscopy (2PLSM) images of a rat brain in two conditions (at rest and under stimulation) are used in this study. The experimental and simulation studies provided promising results that demonstrate the feasibility and importance of performing deconvolution.
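The Tikhonov-regularised deconvolution can be sketched as a linear least-squares problem: build a convolution matrix from the input function and solve for the residue curve with an L2 penalty. The synthetic curves and the fixed regularisation parameter below are illustrative (the study selects the parameter via L-curve analysis).

```python
import numpy as np

def tikhonov_deconvolve(aif, tissue, lam):
    """Estimate the residue curve r from tissue = conv(aif, r) by
    minimising ||A r - c||^2 + lam^2 ||r||^2."""
    n = len(tissue)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1]   # lower-triangular Toeplitz: A[i, j] = aif[i - j]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ tissue)

t = np.arange(20, dtype=float)
aif = (t + 1) * np.exp(-t / 2)      # decaying input bolus (illustrative)
true_r = np.exp(-t / 4)             # tissue residue function
tissue = np.convolve(aif, true_r)[:20]
est = tikhonov_deconvolve(aif, tissue, lam=0.01)
print(np.abs(est - true_r).max() < 0.05)  # recovers the residue closely
```

With noisy data the penalty term trades fidelity for stability, which is exactly the trade-off the L-curve analysis navigates when choosing lam.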
A multi-modality fiducial marker is presented in this work, which can be used for validating the correlation of histology images with medical images. This marker can also be used for landmark-based image registration. Seven different fiducial markers, including a catheter, spaghetti, black spaghetti, cuttlefish ink, and liquid iron, were implanted in a mouse specimen and then evaluated based on visibility, localization, size, and stability. The black spaghetti and the mixture of cuttlefish ink and flour are shown to be the most suitable markers. Based on marker size, black spaghetti is more suitable for large specimens, while the mixture of cuttlefish ink, flour, and water injected into a catheter is more suitable for small specimens such as mouse tumours. These markers are visible on medical images and also detectable on histology and optical images of the tissue blocks. The main component enhancing the contrast in these agents is iron.
A fully automatic ICA based data driven technique which incorporates additional a priori information from
physiological modeling of the cerebral microcirculation (gamma variate model) is developed for the separation of
arteries and veins in contrast-enhanced studies of the cerebral microvasculature. A dynamic data set of 50 images taken
by a two-photon laser scanning microscopy technique that monitors the passage of a bolus of dye through artery and vein
is used here. A temporally constrained ICA (TCICA) technique is developed to extract the vessel specific dynamics of
artery and vein by adding two constraints to the classical ICA algorithm. One of the constraints guarantees that the extracted
curves follow the gamma variate model of blood passage through vessels. The second constraint, positivity, ensures that none of
the extracted component images, which correspond to the artery, vein, or other capillaries in the imaging field of view,
contributes negatively to the acquired images.
Experimental results show improved performance of the proposed TCICA over the most commonly used classical ICA
technique (fast-ICA) in generating physiologically meaningful curves; its curves are also closer to those of
pixel-by-pixel model-fitting algorithms, and it handles noise better. The technique is also fully automatic and does
not require specifying regions of interest, which is a critical requirement of model-based techniques.
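As a sketch of the gamma variate constraint, the model g(t) = K (t − t0)^α exp(−(t − t0)/β) can be fitted to an extracted time course by log-linear least squares once the bolus arrival time t0 is known. This linearization is a common fitting strategy and is shown only for illustration; it is not necessarily the constraint used inside the TCICA update:

```python
import numpy as np

def gamma_variate(t, t0, K, alpha, beta):
    """Gamma variate model of a dye bolus passing through a vessel."""
    g = np.zeros_like(t, dtype=float)
    m = t > t0
    g[m] = K * (t[m] - t0) ** alpha * np.exp(-(t[m] - t0) / beta)
    return g

def fit_gamma_variate(t, y, t0):
    """Fit K, alpha, beta for a given t0 via the linearization
    ln y = ln K + alpha * ln(t - t0) - (t - t0) / beta."""
    m = (t > t0) & (y > 0)
    x = t[m] - t0
    # Design matrix for the unknowns (ln K, alpha, 1/beta).
    X = np.column_stack([np.ones_like(x), np.log(x), -x])
    lnK, alpha, inv_beta = np.linalg.lstsq(X, np.log(y[m]), rcond=None)[0]
    return np.exp(lnK), alpha, 1.0 / inv_beta
```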
KEYWORDS: Magnetic resonance imaging, Factor analysis, Brain, Principal component analysis, Magnetism, Signal to noise ratio, Neuroimaging, Remote sensing, Arteries, Clouds
In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI) it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal to noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single-slice images through the brain when the blood brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper will describe a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.
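The principal-component step underlying such factor analysis can be sketched as follows. This is a generic SVD on mean-centred pixel time courses; the subsequent rotation and end-member identification steps are omitted:

```python
import numpy as np

def temporal_factors(data, n_factors):
    """data: (n_pixels, n_timepoints) array of dynamic curves.
    Returns the leading temporal factors and all singular values.
    Orthogonal factors like these are the raw material that a
    subsequent rotation turns into physiologically meaningful curves."""
    X = data - data.mean(axis=0, keepdims=True)  # remove the common baseline
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_factors], s
```

The spectrum of singular values also indicates how many physiological components the data actually support.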
Contrast-enhanced magnetic resonance (MR) imaging offers a minimally invasive method of investigating brain blood flow. This paper describes two different methods of extracting quantitative and qualitative information from these data. The first approach is to generate parametric images showing blood flow, blood volume and time-to-peak activity on a pixel-by-pixel basis. The second approach uses factor analysis. Principal components are extracted from the data and these orthogonal factors are then rotated to give a set of oblique factors, which satisfy certain simple constraints. In most cases three factors can be identified: a background or non-enhancing factor, an early vascular factor which is strongly correlated to arterial flow, and a late vascular factor which is strongly correlated to venous flow. The parametric and factor images are complementary in nature: the former provides quantitative information that is readily understood by the clinician, while the latter makes no a priori assumptions about the underlying physiology and also allows more subtle changes in cerebral blood flow to be assessed. The factor images may also be of great value in defining regions of interest over which to carry out a more detailed quantitative analysis. This dual approach can be readily adapted to assess perfusion in other organs such as the heart or kidneys.
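The pixel-by-pixel parametric maps can be sketched as below. This is a deliberately simplified illustration: relative blood volume is taken as the area under each pixel's curve and time-to-peak from the frame of the maximum, while true blood-flow quantification would additionally require deconvolution with an arterial input function:

```python
import numpy as np

def parametric_maps(dce, dt):
    """dce: (ny, nx, nt) concentration-time data; dt: frame interval.
    Returns relative-blood-volume and time-to-peak maps."""
    rbv = dce.sum(axis=-1) * dt          # area under the curve (rectangle rule)
    ttp = dce.argmax(axis=-1) * dt       # time of the per-pixel maximum
    return rbv, ttp
```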