No-contact heart rate monitoring (Fig. 1) based on remote photoplethysmography (rPPG) using a common camera has drawn increasing attention because of its promising applications in patient nursing, telemedicine, fitness, and clinical trials. Many traditional signal-processing methods (FFT, ICA, PCA) have been proposed for this problem, but they remain susceptible to interference from subject motion and lighting conditions. In facial RGB images, the green channel has a higher signal-to-noise ratio than the other two channels, so the heart rate can be measured more accurately by assigning different weights to the three channels. In this paper we propose a novel deep convolutional neural network based on a channel-attention mechanism to extract heart rate information from each frame of the video. To obtain accurate heart-rate estimates under facial motion, illumination changes, and other interference, the model was trained on the newly introduced public ECG-Fitness challenge dataset, and its robustness was evaluated on the same dataset. Test results show that the model outperforms previous methods.
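The channel-weighting idea behind the proposed attention mechanism can be illustrated with a minimal NumPy sketch of a squeeze-and-excitation-style block. This is an assumption for illustration only (the function names, weight shapes, and reduction ratio are hypothetical); the paper's actual architecture may differ:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feature_map, w1, b1, w2, b2):
    """SE-style channel attention: squeeze (global average pool),
    excite (two small dense layers), then rescale each channel."""
    # feature_map: (H, W, C)
    squeeze = feature_map.mean(axis=(0, 1))       # (C,) per-channel descriptor
    hidden = np.maximum(0.0, squeeze @ w1 + b1)   # ReLU bottleneck
    weights = sigmoid(hidden @ w2 + b2)           # (C,) learned channel weights in (0, 1)
    return feature_map * weights, weights

rng = np.random.default_rng(0)
frame = rng.random((4, 4, 3))                     # toy RGB "frame"
C, r = 3, 2                                       # channels, reduction ratio (assumed)
w1 = rng.standard_normal((C, r)); b1 = np.zeros(r)
w2 = rng.standard_normal((r, C)); b2 = np.zeros(C)

reweighted, weights = channel_attention(frame, w1, b1, w2, b2)
print(weights.shape)      # one attention weight per R, G, B channel
print(reweighted.shape)   # same spatial shape as the input frame
```

During training, such a block can learn to up-weight the green channel, whose pulse signal has the highest signal-to-noise ratio, without hard-coding the channel weights.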