With the rapid advancement of artificial intelligence, emotion recognition has become an active area of research. Among the many approaches to emotion recognition, methods based on facial expression recognition are particularly important. We propose a two-stream model that simultaneously extracts static and dynamic expression features. To verify how well the two-stream model represents expression information, we conducted emotion classification experiments with a deep convolutional neural network on two public dynamic expression databases (CK+ and Oulu-CASIA). The experimental results show that our two-stream model achieves significantly better classification performance than methods using only static or only dynamic expression features, as well as classification methods based on 3D convolutional neural networks.
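The abstract does not specify the architecture in detail, so the following is only a minimal, hypothetical sketch of a two-stream design of this kind in PyTorch. It assumes a small 2D CNN over a single (peak-expression) frame for the static stream, a per-frame CNN followed by a GRU for the dynamic stream, and concatenation fusion before the classifier; the class name `TwoStreamExpressionNet`, the layer sizes, and the number of classes are all illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn


class TwoStreamExpressionNet(nn.Module):
    """Hypothetical two-stream sketch: a static branch for one face image and a
    dynamic branch for a short frame sequence, fused for expression classification."""

    def __init__(self, num_classes=7, feat_dim=128):
        super().__init__()
        # Static stream: small 2D CNN over the peak-expression frame.
        self.static_stream = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Dynamic stream: the same kind of CNN applied per frame,
        # then a GRU aggregates the frame features over time.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(input_size=64, hidden_size=feat_dim, batch_first=True)
        # Fusion: concatenate static and dynamic features, then classify.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, static_img, frame_seq):
        # static_img: (B, 3, H, W); frame_seq: (B, T, 3, H, W)
        s = self.static_stream(static_img)                 # (B, feat_dim)
        b, t = frame_seq.shape[:2]
        f = self.frame_encoder(frame_seq.flatten(0, 1))    # (B*T, 64)
        _, h = self.temporal(f.view(b, t, -1))             # h: (1, B, feat_dim)
        d = h.squeeze(0)                                   # (B, feat_dim)
        return self.classifier(torch.cat([s, d], dim=1))   # (B, num_classes)


if __name__ == "__main__":
    model = TwoStreamExpressionNet(num_classes=7)
    img = torch.randn(2, 3, 64, 64)        # batch of peak-expression frames
    seq = torch.randn(2, 16, 3, 64, 64)    # batch of 16-frame expression sequences
    print(model(img, seq).shape)           # torch.Size([2, 7])
```

The key design point this sketch tries to illustrate is the fusion step: the static branch captures appearance at the expression apex, the dynamic branch captures how the expression evolves, and concatenating the two feature vectors lets the classifier use both cues jointly.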