We develop a gender classification method based on convolutional neural networks. We train the AlexNet architecture on the luminance (Y) component of facial images in the YCbCr color space, using the SoF, Groups, and Face Recognition Technology (FERET) datasets. The Y component is reduced to 32 × 32 pixels via the discrete wavelet transform (DWT). Using only the Y plane and a low-resolution DWT subband significantly reduces the amount of data to be processed. We achieve better results than other machine-learning and rule-based approaches, as well as traditional convolutional neural network structures trained on three-channel RGB images, and we maintain comparably high recognition accuracy even after reducing the number of network layers. We also compare our structure with state-of-the-art methods and report the corresponding recognition rates.
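The preprocessing described above (extract the Y plane, then shrink it with the DWT) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a one-level Haar wavelet applied repeatedly until the approximation (LL) subband reaches 32 × 32, and uses the standard ITU-R BT.601 luma weights for the RGB-to-Y conversion.

```python
import numpy as np

def rgb_to_y(rgb):
    # ITU-R BT.601 luma from an RGB image of shape (H, W, 3).
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def haar_ll(img):
    # One level of the 2-D Haar DWT, keeping only the LL (approximation)
    # subband; each spatial dimension is halved.
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

def preprocess(face_rgb, target=32):
    # Reduce the Y plane to roughly target x target by repeated
    # LL decomposition (input side lengths assumed powers of two here).
    y = rgb_to_y(face_rgb.astype(np.float64))
    while min(y.shape) > target:
        y = haar_ll(y)
    return y

face = np.random.rand(128, 128, 3)  # stand-in for a cropped face image
x = preprocess(face)
print(x.shape)  # (32, 32)
```

A 128 × 128 face thus needs two LL decompositions to reach the 32 × 32 network input, and only one of the three color channels is retained, which is the source of the data reduction claimed above.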
Gender classification, a two-class problem (male or female), has recently been the subject of extensive research and has gained considerable attention owing to its wide range of applications. The proposed work relies on individual facial features to train a convolutional neural network (CNN) for gender classification. In contrast to previously reported results that assume the facial features are independent, we treat the facial features as correlated by training a single CNN that jointly learns from all of them. In terms of accuracy, our results either outperform, or are on par with, other gender classification techniques applied to three datasets: Specs on Faces (SoF), Groups, and Face Recognition Technology (FERET). In terms of performance, the proposed CNN has significantly fewer learnable parameters than techniques reported in recent work, which makes the network less sensitive to over-fitting and easier to train than approaches that use a separate CNN for each facial feature.
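The parameter-count argument can be made concrete with a back-of-the-envelope calculation. The network shape below (two convolutional layers plus a fully connected head on a 32 × 32 single-channel input, four facial features) is a hypothetical example chosen for illustration, not the architecture from the text; the point is only that training one CNN per facial feature multiplies the learnable parameters, while a single jointly trained network shares them.

```python
def conv_params(k, c_in, c_out):
    # Learnable parameters in one k x k convolutional layer: weights + biases.
    return (k * k * c_in + 1) * c_out

def fc_params(n_in, n_out):
    # Learnable parameters in one fully connected layer: weights + biases.
    return (n_in + 1) * n_out

# Hypothetical compact network on a 32 x 32 Y-plane input, 2-way output.
single = (conv_params(5, 1, 32)          # conv1
          + conv_params(5, 32, 64)       # conv2
          + fc_params(8 * 8 * 64, 128)   # FC after two 2x2 poolings
          + fc_params(128, 2))           # softmax head

# One such CNN per facial feature (e.g. eyes, nose, mouth, hair) would
# multiply the count; a single jointly trained CNN keeps it at `single`.
separate = 4 * single
print(single, separate)
```

Fewer parameters mean fewer values to fit from the same training data, which is why the shared network is described as less prone to over-fitting.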