With the expansion of head-mounted displays into various professional domains, there is growing demand for image enhancement algorithms that adapt to complex illumination while remaining computationally efficient and resource-conservative. This study introduces a method that categorizes input images as high-light, normal, or low-light based on brightness thresholds, and proposes enhanced Retinex-based algorithms to process each category. For low-light images, histogram equalization and sharpening are applied to enhance the details of the illumination component. For high-light images, an illumination-component estimation method is used to reduce noise and enhance contour information, followed by normalization with a sigmoid function. The low-light enhancement algorithm is validated on the LOL dataset, and the high-light enhancement algorithm on a self-constructed dataset.
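A minimal sketch of the brightness-based classification and the two enhancement branches described above. The thresholds, the Gaussian-blur illumination estimate, and the sigmoid gain are illustrative assumptions, not the paper's exact parameters.

```python
import cv2
import numpy as np

T_LOW, T_HIGH = 85, 170  # assumed brightness thresholds on the mean gray level


def classify_brightness(img_bgr):
    """Label an image as 'low', 'normal', or 'high' by its mean gray level."""
    mean = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).mean()
    if mean < T_LOW:
        return "low"
    if mean > T_HIGH:
        return "high"
    return "normal"


def estimate_illumination(gray):
    """Retinex-style illumination estimate via heavy Gaussian smoothing."""
    return cv2.GaussianBlur(gray, (0, 0), sigmaX=30)


def enhance_low_light(img_bgr):
    """Equalize and sharpen the illumination component, then recombine (sketch)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    illum = estimate_illumination(v)
    illum_eq = cv2.equalizeHist(illum)                            # histogram equalization
    blur = cv2.GaussianBlur(illum_eq, (0, 0), sigmaX=3)
    illum_sharp = cv2.addWeighted(illum_eq, 1.5, blur, -0.5, 0)   # unsharp masking
    reflectance = cv2.divide(v.astype(np.float32), illum.astype(np.float32) + 1.0)
    hsv[:, :, 2] = np.clip(reflectance * illum_sharp, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)


def enhance_high_light(img_bgr):
    """Compress over-exposed regions with a sigmoid on the V channel (sketch)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2] / 255.0
    v_sig = 1.0 / (1.0 + np.exp(-8.0 * (v - 0.5)))                # sigmoid normalization
    hsv[:, :, 2] = np.clip(v_sig * 255.0, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

The single-scale Gaussian estimate stands in for whatever illumination-estimation step the paper uses; the branch structure (classify, then enhance per category) is what the sketch is meant to convey.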
Using a fixed roadside camera to identify vehicles and compute their three-dimensional coordinates is of great significance for ensuring vehicle safety and enabling intelligent vehicle networking and autonomous driving. This paper proposes a monocular vehicle detection algorithm based on YOLOv7 to identify vehicles in the scene and localize them with bounding boxes. For three-dimensional coordinate calculation, instead of the common deep-learning-based depth estimation, the method computes coordinates from camera calibration, which greatly accelerates target depth calculation and enables real-time positioning of moving vehicles.
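A minimal sketch of one common calibration-based formulation of this step: the bottom-center pixel of a detected bounding box is back-projected as a ray and intersected with the road plane Z = 0 in the world frame. The intrinsics K, the pose (R, C), and the bounding box below are placeholder values for illustration, not the paper's calibration data.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Intersect the viewing ray through pixel (u, v) with the world plane Z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera coordinates
    ray_world = R.T @ ray_cam                            # ray in world coordinates
    cam_center = -R.T @ t                                # camera center in world frame
    s = -cam_center[2] / ray_world[2]                    # scale that reaches Z = 0
    return cam_center + s * ray_world


# Placeholder calibration: a camera 6 m above the road, looking straight down.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
C = np.array([0.0, 0.0, 6.0])          # camera center in world coordinates (m)
t = -R @ C                             # translation of the world-to-camera transform

x1, y1, x2, y2 = 800, 400, 1000, 560   # hypothetical YOLOv7 box for one vehicle
u, v = (x1 + x2) / 2.0, float(y2)      # bottom-center: the point touching the road
print(pixel_to_ground(u, v, K, R, t))  # 3D position of the vehicle on the road plane
```

Because this involves only a matrix inverse and a ray-plane intersection per detection, it is far cheaper than running a depth-estimation network, which is the speed argument the abstract makes.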
With the emergence of deep learning methodologies, vision-based gesture recognition has advanced steadily. This paper examines the four main stages of vision-based gesture recognition: gesture segmentation, gesture tracking, feature extraction, and gesture classification, introducing relevant techniques from representative literature published between 2018 and 2023. Based on this analysis, the current state of vision-based gesture recognition technology is assessed and its future trends and developments are projected.
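A schematic of the four-stage pipeline surveyed above, with each stage as a placeholder stub. The function names are hypothetical; the concrete techniques at each stage (e.g., skin-color or deep segmentation, correlation-filter or Kalman tracking, hand-crafted or CNN features, various classifier heads) differ across the reviewed literature.

```python
from typing import Any, Sequence


def segment_hand(frame: Any) -> Any:
    """Stage 1: isolate the hand region from the background."""
    raise NotImplementedError


def track_hand(prev_state: Any, hand_region: Any) -> Any:
    """Stage 2: follow the hand region across consecutive frames."""
    raise NotImplementedError


def extract_features(hand_track: Any) -> Sequence[float]:
    """Stage 3: describe the tracked hand (shape, motion, or learned features)."""
    raise NotImplementedError


def classify_gesture(features: Sequence[float]) -> str:
    """Stage 4: map the feature vector to a gesture label."""
    raise NotImplementedError
```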