In medical institutions, pain is one of the most important cues by which patients convey their condition, which makes the estimation of pain status an exceedingly important task. Recently, many methods have been proposed to address this task. However, most of them estimate pain from entire face images or videos rather than focusing on the regions most relevant to pain. We propose a pain-awareness multistream convolutional neural network (CNN) for pain estimation. Specifically, we separate the regions most relevant to the pain expression, and the multistream CNN is used to learn the corresponding pain-awareness features. These features are combined into pain features with adaptive weights to estimate the intensity of pain. Extensive experiments on a publicly available pain database indicate that our multistream CNN-based method achieves promising results compared with state-of-the-art methods.
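The adaptive-weight fusion step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each stream produces a fixed-length feature vector and that the adaptive weights are obtained by a softmax over learnable per-stream scores (the function name `fuse_streams` and the softmax parameterization are our assumptions for illustration).

```python
import math

def fuse_streams(features, scores):
    """Fuse per-region feature vectors with adaptive weights.

    features: list of equal-length feature vectors, one per facial-region stream
    scores:   one learnable scalar score per stream (higher -> larger weight)

    The weights are a softmax over the scores, so they are positive and
    sum to 1; the fused feature is the weighted sum of the stream features.
    """
    m = max(scores)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]
    return fused, weights
```

In a trained network the scores would themselves be learned jointly with the streams, so that more pain-relevant regions receive larger fusion weights.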
In this paper, a real-time method for detecting multiple dim targets against a deep-space background is presented, with special attention paid to occlusion handling. We first match the stars in two consecutive images to obtain their velocities, and identify moving target pairs by their velocities in both images; a Kalman filter, whose measurement equation is updated with the target centroid, is adopted to track the target. The star's area is used to detect occlusion: a two-component Gaussian mixture model is built from the gray values of the pixels in the fused region, and the prediction given by the Kalman filter is used to detect the target. The model's parameters are estimated with the expectation-maximization (EM) algorithm and are used to separate the target from the star as well as to compute a precise centroid. Extensive experiments on real image sequences show that the proposed approach meets the requirements of real-time detection with a low false-alarm rate and a high detection probability; simulation results show that it also produces an accurate centroid when occlusion occurs.
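The core of the occlusion-handling step, fitting a two-component Gaussian mixture to the gray values of the fused region by EM, can be sketched as below. This is a minimal one-dimensional sketch, not the paper's code: the initialization strategy and iteration count are our assumptions, and in the paper the fitted posteriors would then label each pixel as target or star before computing the centroid.

```python
import math

def gauss(x, mu, var):
    """1-D Gaussian density."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def em_two_gaussians(pixels, iters=100):
    """Fit a two-component 1-D Gaussian mixture to pixel gray values via EM.

    Returns (mixing weights, means, variances). A pixel of the fused region
    can then be assigned to the target or the star component according to
    its posterior responsibility under the fitted model.
    """
    xs = sorted(pixels)
    n = len(xs)
    mu = [xs[n // 4], xs[(3 * n) // 4]]           # crude split initialization
    mean = sum(xs) / n
    var0 = max(1e-6, sum((x - mean) ** 2 for x in xs) / n)
    var = [var0, var0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel
        resp = []
        for x in xs:
            p = [pi[k] * gauss(x, mu[k], var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixing weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6,
                         sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk)
    return pi, mu, var
```

With the pixels separated this way, the target centroid follows as the responsibility-weighted mean of the target pixels' coordinates, which is what allows an accurate centroid even while the star partially occludes the target.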