Road traffic crashes have become the leading cause of death for young people. Approximately 1.3 million people die in road traffic crashes every year, and more than 30 million suffer non-fatal injuries. Various studies have shown that emotions influence driving performance. In this work, we focus on frame-level, video-based categorical emotion recognition in drivers. We propose a Convolutional Bidirectional Long Short-Term Memory Neural Network (CBiLSTM) architecture to effectively capture the spatio-temporal features of the video data. For this, facial videos of drivers are obtained from two publicly available datasets, namely the Keimyung University Facial Expression of Drivers (KMU-FED) dataset and a subset of the Driver Monitoring Dataset (DMD), as well as from our own experimental dataset. Firstly, we extract the face region from the video frames using the Facial Alignment Network (FAN). Secondly, these face regions are encoded using a lightweight SqueezeNet CNN model. The output of the CNN is fed into a two-layer BiLSTM network for spatio-temporal feature learning. Finally, a fully connected layer outputs the softmax probabilities of the emotion classes. Furthermore, we enable interpretable visualizations of the results using Axiom-based Grad-CAM (XGrad-CAM). For this study, we manually annotated the DMD subset and our experimental dataset using an interactive annotation tool. Our framework achieves an F1-score of 0.958 on the KMU-FED dataset. For the DMD and the experimental dataset, we evaluate our model using Leave-One-Out Cross-Validation (LOOCV) and achieve average F1-scores of 0.745 and 0.414, respectively.
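The SqueezeNet-plus-BiLSTM pipeline described above can be sketched in a few lines of PyTorch. The sketch below is illustrative only, not the authors' code: the hidden size, the six-class output, and the per-frame readout are assumptions, and FAN-based face detection and cropping are assumed to happen upstream of this module.

```python
import torch
import torch.nn as nn
from torchvision import models

class CBiLSTM(nn.Module):
    """Sketch: SqueezeNet frame encoder + 2-layer BiLSTM over frame features."""
    def __init__(self, num_classes=6, hidden_size=256):  # assumed sizes
        super().__init__()
        # SqueezeNet 1.1 backbone; keep only the convolutional features.
        squeeze = models.squeezenet1_1(weights=None)
        self.encoder = squeeze.features                  # -> (N, 512, H', W')
        self.pool = nn.AdaptiveAvgPool2d(1)              # -> (N, 512, 1, 1)
        self.bilstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                              num_layers=2, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, clips):                            # clips: (B, T, 3, 224, 224)
        b, t = clips.shape[:2]
        x = clips.flatten(0, 1)                          # (B*T, 3, 224, 224)
        feats = self.pool(self.encoder(x)).flatten(1)    # (B*T, 512)
        feats = feats.view(b, t, -1)                     # (B, T, 512)
        out, _ = self.bilstm(feats)                      # (B, T, 2*hidden)
        return self.fc(out)                              # per-frame class logits

# Quick shape check on a random batch of two 16-frame clips.
model = CBiLSTM()
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 16, 6])
```

Applying a softmax along the last dimension of the logits yields the per-frame emotion class probabilities mentioned in the abstract.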
KEYWORDS: Reconstruction algorithms, 3D modeling, Atomic force microscopy, Cameras, 3D image reconstruction, 3D image processing, Image quality, 3D metrology, Optical spheres
Quantitative assessment is essential for correct diagnosis and effective treatment of chronic wounds. So far, devices with depth cameras and infrared sensors have been used for computer-aided diagnosis of cutaneous wounds; however, such devices have limited accessibility and usage. Smartphones, on the other hand, are commonly available, and three-dimensional (3D) reconstruction using smartphones can be an important tool for wound assessment. In this paper, we analyze various open-source libraries for smartphone-based 3D reconstruction of wounds. For this, point clouds of cutaneous wound regions are obtained using the Google ARCore and Structure from Motion (SfM) libraries. These point clouds are passed through de-noising filters to remove outliers and to improve point density. Subsequently, surface reconstruction is performed on the point cloud to generate a 3D model. Six mesh-reconstruction algorithms are considered, namely Delaunay triangulation, convex hull, point crust, Poisson surface reconstruction, alpha complex, and marching cubes. Their performance is evaluated using quality metrics such as the complexity and density of the point clouds, the accuracy of the depth information, and the efficacy of the reconstruction algorithm. The results show that 3D reconstruction of wounds from point clouds is feasible using open-source libraries. Point clouds obtained from SfM have higher density and accuracy than those from ARCore, and Poisson surface reconstruction is found to be the most effective algorithm for 3D reconstruction from the point clouds. However, further research is required on techniques to enhance the quality of point clouds obtained through smartphones and to reduce the computational cost associated with point-cloud-based 3D reconstruction.
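The abstract does not name a specific mesh-processing toolkit; as one concrete illustration of the de-noise-then-Poisson pipeline it describes, the sketch below uses the open-source Open3D library. The input filename, filter parameters, and octree depth are placeholders, not values from the paper.

```python
import open3d as o3d

# Load a wound-region point cloud exported from ARCore or an SfM pipeline.
# "wound.ply" is a placeholder filename.
pcd = o3d.io.read_point_cloud("wound.ply")

# De-noise: statistical outlier removal drops points whose mean distance
# to their neighbours deviates strongly from the global average.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson surface reconstruction requires consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(15)

# Screened Poisson reconstruction; depth controls the octree resolution
# and hence the level of surface detail.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("wound_mesh.ply", mesh)
```

The other algorithms compared in the paper (Delaunay triangulation, convex hull, alpha complex, marching cubes) can be slotted in at the reconstruction step; the filtering stage stays the same.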