With the wide application of three-dimensional (3D) mesh models in digital entertainment, animation, virtual reality, and other fields, more and more processing techniques for 3D mesh models have emerged, including watermarking, compression, and simplification. These processing techniques inevitably introduce various distortions into 3D meshes. Thus, it is necessary to design effective tools for 3D mesh quality assessment. In this work, considering that curvature measures the concavity and convexity of a surface well and that the human eye is highly sensitive to curvature changes, we propose a new objective 3D mesh quality assessment method in which curvature features are used to evaluate the visual difference between the reference and distorted meshes. First, the Gaussian curvature and the mean curvature at the vertices of the reference and distorted meshes are calculated; then the correlation coefficient between the per-vertex curvatures of the two meshes is computed, which represents the degree of degradation of the distorted mesh well. Finally, a Support Vector Regression (SVR) model is used to fuse the two features and obtain the objective quality score. The proposed method is compared with seven existing 3D mesh quality assessment methods. Experimental results on the LIRIS_EPFL_GenPurpose database show that the PLCC and SROCC of the proposed method increase by 13.60% and 6.23%, respectively, compared with the best results of the seven representative methods, implying that the proposed model is more consistent with the subjective visual perception of the human eye.
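The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-vertex curvature values and the SVR training data are synthetic placeholders, and the curvature estimation itself (which in practice would come from the mesh geometry) is assumed to have been done already.

```python
import numpy as np
from sklearn.svm import SVR

def curvature_correlation(ref_curv, dist_curv):
    # Pearson correlation coefficient between the per-vertex curvature
    # values of the reference and distorted meshes.
    return np.corrcoef(ref_curv, dist_curv)[0, 1]

# Hypothetical per-vertex curvature data (stand-ins for values computed
# from actual reference and distorted mesh geometry).
rng = np.random.default_rng(0)
ref_gauss = rng.normal(size=1000)              # Gaussian curvature, reference
ref_mean = rng.normal(size=1000)               # mean curvature, reference
dist_gauss = ref_gauss + 0.3 * rng.normal(size=1000)  # mildly distorted
dist_mean = ref_mean + 0.3 * rng.normal(size=1000)

# Two correlation-based features: one per curvature type.
f1 = curvature_correlation(ref_gauss, dist_gauss)
f2 = curvature_correlation(ref_mean, dist_mean)

# Fuse the two features with SVR trained on (feature, subjective score)
# pairs. The training set here is synthetic and purely illustrative.
X_train = rng.uniform(0.0, 1.0, size=(50, 2))
y_train = X_train.mean(axis=1)  # stand-in for subjective MOS labels
model = SVR(kernel="rbf").fit(X_train, y_train)

objective_score = model.predict([[f1, f2]])[0]
```

In a real evaluation, `y_train` would hold the subjective scores from the LIRIS_EPFL_GenPurpose database and the curvatures would be estimated from the mesh vertices and faces.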
Existing saliency detection methods are not well suited to high dynamic range (HDR) images. In this work, based on the human visual system, we propose a new method for detecting the saliency of HDR images via luminance regionalization. First, considering that HDR images span a wider luminance range, the luminance information of the HDR image is extracted, and the image is divided into high, medium, and low luminance regions by luminance thresholding. Then, a saliency map is computed for each luminance region: color and texture features are extracted for the high luminance region, luminance and texture features are extracted for the low luminance region, and an existing LDR image saliency detection method is applied to the medium luminance region. Finally, the three saliency maps are linearly fused to obtain the final HDR image saliency map. Experimental results on two public databases (the EPFL HDR eye tracking database and the TMID database) demonstrate that the proposed method performs well against five state-of-the-art methods in detecting the salient regions of HDR images.
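The regionalization-and-fusion structure can be sketched as below. This is an assumption-laden outline, not the paper's method: the percentile thresholds, the synthetic luminance map, and the per-region saliency maps (which in the paper come from distinct feature extractors and an LDR saliency detector) are all illustrative placeholders.

```python
import numpy as np

# Synthetic HDR-like luminance map (heavy-tailed, wide dynamic range);
# a real pipeline would extract luminance from an HDR image.
rng = np.random.default_rng(1)
hdr_lum = rng.exponential(scale=100.0, size=(64, 64))

# Split into low / medium / high luminance regions by thresholding.
# The percentile thresholds here are illustrative, not the paper's values.
low_t, high_t = np.percentile(hdr_lum, [20, 80])
low = hdr_lum <= low_t
high = hdr_lum >= high_t
mid = ~(low | high)  # the three masks partition the image

# Stand-ins for the per-region saliency maps, each zero outside its region.
# In the paper these come from color/texture features (high region),
# luminance/texture features (low region), and an LDR method (medium region).
s_low = rng.uniform(size=hdr_lum.shape) * low
s_mid = rng.uniform(size=hdr_lum.shape) * mid
s_high = rng.uniform(size=hdr_lum.shape) * high

# Linear fusion of the three maps with equal (illustrative) weights.
saliency = (s_low + s_mid + s_high) / 3.0
```

Because each per-region map is zero outside its own mask, the linear fusion simply stitches the three regional results into one full-image saliency map.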