Traditional facial recognition techniques often struggle to balance accuracy with model complexity. High accuracy typically demands intricate models, slowing recognition speeds on devices such as smartphones. Conversely, faster methods often sacrifice accuracy. We introduce a lightweight deep convolutional generative adversarial network (LW-DCGAN), designed specifically to address the challenges of occluded face recognition. By simplifying the network architecture and employing efficient feature extraction techniques such as transpose convolution, batch normalization, feature pyramid networks, and attention modules, we enhance both hierarchical sampling and contextual relevance. Furthermore, L1 regularization and channel sparsity techniques compress the model for resource-constrained environments. We thoroughly evaluate LW-DCGAN’s generalization and robustness, comparing its performance against other generative adversarial network variants and common face recognition models. The results demonstrate that LW-DCGAN achieves higher accuracy while significantly reducing model size and computational overhead, offering a promising advancement in lightweight face recognition technology.
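The L1-regularization and channel-sparsity compression the abstract mentions is commonly realized by ranking convolutional output channels by the L1 norm of their weights and discarding the weakest ones. A minimal NumPy sketch of that pruning step (illustrative only; function and parameter names are assumptions, not the authors' code):

```python
import numpy as np

def prune_channels(weights, keep_ratio=0.5):
    """Rank output channels by their L1 norm and keep the top fraction.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight tensor and the sorted indices of kept channels.
    """
    # Per-channel L1 norm: sum of absolute weights over all other axes.
    l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the n_keep channels with the largest L1 norm.
    keep = np.sort(np.argsort(l1)[::-1][:n_keep])
    return weights[keep], keep

# Toy example: 4 output channels, channel 2 driven near zero by the
# sparsity penalty during training.
w = np.ones((4, 3, 3, 3))
w[2] *= 1e-3
pruned, kept = prune_channels(w, keep_ratio=0.75)
print(kept)  # [0 1 3] -- the near-zero channel is dropped
```

In a full pipeline, the L1 penalty is added to the training loss so that unimportant channels shrink toward zero before a pruning pass like this removes them.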
Rumination plays a pivotal role in assessing the health status of ruminants. However, conventional contact devices such as ear tags and pressure sensors raise animal welfare concerns during rumination behavior detection. Deep learning offers a promising solution for non-contact rumination recognition by training neural networks on datasets. We introduce UD-YOLOv5s, an approach for bovine rumination recognition that incorporates jaw skeleton feature extraction techniques. Initially, a skeleton feature extraction method is proposed for the upper and lower jaws, employing skeleton heatmap descriptors and the Kalman filter algorithm. Subsequently, the UD-YOLOv5s method is developed for rumination recognition. To optimize the UD-YOLOv5s model, the traditional intersection over union (IoU) loss function is replaced with the generalized IoU (GIoU) loss. A self-built bovine rumination dataset is used to compare UD-YOLOv5s against three baseline techniques: the mean shift algorithm, mask region-based convolutional neural network, and you only look once version 3 (YOLOv3). The results of the ablation experiment demonstrate that UD-YOLOv5s achieves impressive precision (98.25%), recall (97.75%), and mean average precision (93.43%). A generalization evaluation conducted in a controlled experimental environment to ensure fairness indicates that UD-YOLOv5s converges faster than other models while maintaining comparable recognition accuracy. Moreover, our work reveals that when convergence speed is equal, UD-YOLOv5s outperforms other models regarding recognition accuracy. These findings provide robust support for accurately identifying cattle rumination behavior, showcasing the potential of the UD-YOLOv5s method in advancing ruminant health assessment.
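The generalized IoU the abstract substitutes for plain IoU adds a penalty based on the smallest box enclosing both the prediction and the ground truth, so disjoint boxes still receive a useful gradient. A minimal pure-Python sketch for axis-aligned boxes in (x1, y1, x2, y2) form (an assumption for illustration, not the authors' implementation):

```python
def giou(box_a, box_b):
    """Generalized IoU for two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area (zero if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest box enclosing both inputs.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    # GIoU subtracts the fraction of the enclosing box not covered by the union.
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 for identical boxes
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # negative for disjoint boxes
```

The training loss is then typically 1 − GIoU, which stays informative even when IoU alone would be flat at zero.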