Virtual Reality (VR) has made significant strides, offering users a multitude of ways to interact with virtual environments. Each sensory modality in VR provides distinct inputs and interactions, enhancing the user's immersion and presence. However, the potential of additional sensory modalities, such as haptic feedback and 360° locomotion, to improve decision-making performance has not been thoroughly investigated. This study addresses this gap by evaluating the impact of a haptic-feedback, 360°-locomotion-integrated VR framework with longitudinal, heterogeneous training on decision-making performance in a complex search-and-shoot simulation. The study involved 32 participants from a defence simulation base in India, who were randomly divided into two groups: experimental (haptic-feedback, 360°-locomotion-integrated VR framework with longitudinal, heterogeneous training) and placebo control (longitudinal, heterogeneous VR training without the extra sensory modalities). The experiment lasted 10 days. On Day 1, all subjects executed a search-and-shoot simulation closely replicating real-world elements and situations. From Day 2 to Day 9, the subjects underwent heterogeneous training, delivered through simulation scenarios of varying complexity created by altering the behavioral attributes/artificial intelligence of the enemies. On Day 10, they repeated the search-and-shoot simulation executed on Day 1. The results showed that the experimental group experienced a gradual increase in presence, immersion, and engagement compared to the placebo control group. However, there was no significant difference in decision-making performance between the two groups on Day 10. We intend to use these findings to design multisensory VR training frameworks that enhance engagement levels and decision-making performance.
In this work, we address a contemporary research problem in the domain of perceptual brain decoding: image synthesis from EEG signals in an adversarial deep learning framework. The specific task involves reconstructing images of different object classes using the EEG recordings acquired while subjects are shown images of those objects. We use an EEG encoder to generate EEG encodings, which act as the input to the generator of the GAN. In addition to the adversarial loss, we also use a perceptual loss to generate images of decent quality. Through experiments, we demonstrate that the proposed network generates better-quality images than the available state-of-the-art methods.
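The generator objective described above combines an adversarial term with a perceptual term. A minimal NumPy sketch of that combined loss (hypothetical function names and weighting, not the authors' code; in the paper the perceptual features would come from a pretrained CNN):

```python
import numpy as np

def adversarial_loss(d_fake):
    # Non-saturating generator loss: -log D(G(z)); d_fake holds the
    # discriminator's probabilities for the generated images.
    return -np.mean(np.log(d_fake + 1e-8))

def perceptual_loss(feat_real, feat_fake):
    # Mean squared error between feature maps of the real and generated
    # images, e.g. taken from an intermediate layer of a pretrained CNN.
    return np.mean((feat_real - feat_fake) ** 2)

def generator_loss(d_fake, feat_real, feat_fake, lam=1.0):
    # Total generator objective: adversarial term plus a weighted
    # perceptual term (lam is a hypothetical weighting hyperparameter).
    return adversarial_loss(d_fake) + lam * perceptual_loss(feat_real, feat_fake)

# Toy check: a generator that fools the discriminator (D(G(z)) near 1)
# while matching the perceptual features incurs near-zero total loss.
d_fake = np.array([0.99, 0.98])
feat = np.ones((2, 8))
loss = generator_loss(d_fake, feat, feat, lam=0.5)
```

The perceptual term pushes generated images toward the feature statistics of real images, which the abstract credits for the improvement in image quality over a purely adversarial objective.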
Depth images captured by modern depth cameras generally suffer from low spatial resolution, noise, and missing regions. Such images cannot be used directly in applications that rely on depth, e.g., robot navigation, 3DTV, and augmented reality, which require high-resolution input images with no noise or missing regions to function properly. To address the problems of low spatial resolution, noise degradation, and missing regions in depth images, we propose methods based on a guidance color image for depth reconstruction (DR) from sparse depth inputs and for depth image super-resolution (SR). We also consider a scenario in which these problems are integrated and addressed simultaneously, and we demonstrate applications of the proposed approach to depth image denoising and depth image inpainting. In our approach, segment cues are obtained from the guidance color image by applying mean-shift (MS) or simple linear iterative clustering (SLIC) segmentation to it. These strong segment cues aid the DR and SR problems: the corresponding segments in the input depth image are considered, and the unknown pixels are estimated by either plane-fitting or median-filling approaches. Furthermore, we explore both direct and pyramidal (hierarchical) approaches for SR and DR-SR at higher upsampling factors. Our approaches are thus relatively simpler than some contemporary methods, yet the experimental results show superior performance compared with other state-of-the-art DR and SR methods.
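The median-filling step described above estimates unknown depth pixels from the known pixels of the same guidance-image segment. A minimal sketch under simplified assumptions (unknown pixels encoded as 0, one integer segment label per pixel; names are illustrative, not the authors' implementation):

```python
import numpy as np

def fill_segment_median(depth, segments, seg_id):
    # Fill unknown pixels (encoded as 0) inside one guidance-image
    # segment with the median of the known depth values in that segment.
    mask = segments == seg_id
    known = depth[mask & (depth > 0)]
    if known.size == 0:
        return depth.copy()  # nothing to estimate from in this segment
    filled = depth.copy()
    filled[mask & (depth == 0)] = np.median(known)
    return filled

# Toy example: a 3x3 depth map; segment 0 covers the left column, where
# one pixel is missing.
depth = np.array([[5., 0., 7.],
                  [5., 0., 7.],
                  [0., 0., 7.]])
segments = np.array([[0, 1, 1],
                     [0, 1, 1],
                     [0, 1, 1]])
out = fill_segment_median(depth, segments, 0)
```

The plane-fitting variant would instead fit a plane to the known (x, y, depth) triples of the segment and evaluate it at the unknown coordinates; median filling is the simpler fallback for roughly fronto-parallel segments.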
Cervical cancer is the second most common cause of death among women worldwide, but it can be treated if detected early. However, due to inter- and intra-observer variability in manual screening, automating the process is the need of the hour. For classifying cervical cells as normal vs. abnormal, segmentation of the nuclei as well as the cytoplasm is a prerequisite. Segmentation of nuclei, however, is relatively more reliable, and equally efficient for classification, compared to that of cytoplasm. Hence, this paper proposes a new approach for segmentation of nuclei based on selective pre-processing, followed by passing the image patches to the respective deep CNN (trained with or without pre-processed images) for pixel-wise 3-class labelling as nucleus, edge, or background. We argue and demonstrate that a single pre-processing approach may not suit all images, as there are significant variations in nucleus sizes and chromatin patterns; the selective pre-processing effectively addresses this issue. It also enables the deep CNNs to be better trained despite relatively little data, and thus to better exploit the CNN's capability for good-quality segmentation. The results show that the approach is effective for segmentation of nuclei in Pap smears, with an F-score of 0.90 on the Herlev dataset, as opposed to F-scores of 0.78 (no pre-processing) and 0.82 (uniform pre-processing) obtained without the selective strategy. The results also show the importance of considering 3 classes in the CNN instead of 2 (nucleus and background), where the latter achieves an F-score as low as 0.63.
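The 3-class target (nucleus, edge, background) can be derived from a binary nucleus mask by marking boundary pixels as a separate class. A minimal sketch, assuming a simple 4-neighbour boundary test and a hypothetical label encoding (0 = background, 1 = nucleus, 2 = edge); the paper does not specify this construction, so the details here are illustrative:

```python
import numpy as np

def three_class_labels(mask):
    # Convert a binary nucleus mask into 3-class pixel targets:
    # 0 = background, 1 = nucleus interior, 2 = edge.
    # An edge pixel is a nucleus pixel with at least one background
    # 4-neighbour (pixels outside the image count as background).
    h, w = mask.shape
    labels = mask.astype(int)              # 0 background, 1 nucleus
    padded = np.pad(mask, 1, constant_values=0)
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                neigh = [padded[i, j + 1], padded[i + 2, j + 1],
                         padded[i + 1, j], padded[i + 1, j + 2]]
                if not all(neigh):
                    labels[i, j] = 2       # boundary nucleus pixel
    return labels

# Toy example: a 3x3 nucleus block inside a 5x5 image; the ring becomes
# the edge class and only the centre pixel stays nucleus interior.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
labels = three_class_labels(mask)
```

Separating the edge class gives the CNN an explicit boundary signal, which is what the abstract's 2-class vs. 3-class comparison (F-score 0.63 vs. 0.90) is measuring.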
We propose a novel automated strategy for classification of HEp-2 specimens as Mitotic Spindle (MS) or non-Mitotic Spindle (non-MS), which is important for CAD-based Anti-Nuclear Antibody (ANA) detection in the diagnosis of autoimmune disorders. Our strategy is based on the observation that, in an MS-labeled HEp-2 specimen, only a few MS-type cells are present in the image alongside cells of other patterns. Hence, the majority rule commonly followed in the classification of non-MS cells cannot be applied in this case. We propose instead that the decision to classify a specimen as MS or non-MS be based on a pre-defined threshold on the number of detected MS cells in the specimen. In the literature, such an evaluation criterion has not been clearly analyzed. We note that MS cells have a distinct visual characteristic, which enables us to use a simple feature representation: the fusion of Gabor and LM filter banks, followed by the Bag-of-Words framework and Support Vector Machine (SVM) classification. Experimental results are shown on the I3A contest HEp-2 specimen dataset. We achieve 100% true positives, 5.55% false positives, and an F-score of 0.97 at the best MS threshold value. The novel and clearly defined decision strategy makes our approach a good alternative for detection of MS specimens.
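The specimen-level decision rule above replaces majority voting with a count threshold. A minimal sketch (hypothetical names; the per-cell labels would come from the SVM described in the abstract):

```python
def classify_specimen(cell_labels, threshold):
    # Label a HEp-2 specimen as Mitotic Spindle (MS) if the number of
    # cells the per-cell classifier marked "MS" meets the threshold.
    # A majority rule would fail here, since MS specimens contain only
    # a few MS-type cells among cells of other patterns.
    n_ms = sum(1 for lab in cell_labels if lab == "MS")
    return "MS" if n_ms >= threshold else "non-MS"

# Toy example: two MS cells among mostly other patterns still flag the
# specimen as MS when the threshold is 2.
verdict = classify_specimen(["MS", "speckled", "homogeneous", "MS"], threshold=2)
```

The threshold is the tunable quantity the abstract evaluates; sweeping it trades false positives against missed MS specimens, with the reported best value yielding an F-score of 0.97.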
Conference Committee Involvement (3)
Geospatial Informatics XI
12 April 2021 | Online Only, Florida, United States
Geospatial Informatics X
27 April 2020 | Online Only, California, United States