Recent years have witnessed the tremendous success of convolutional neural networks in image classification. When training image classification models, softmax cross-entropy (SCE) loss is widely used because of its clear probabilistic interpretation and concise gradient computation. We reformulate SCE loss into a more basic form, in light of which we can understand SCE loss better and rethink the loss functions used in image classification. We propose a novel loss function called another metric (AM) loss, which differs from SCE loss in form but is the same in essence. AM loss not only retains the advantages of SCE loss but also provides more gradient information in the later phase of training and yields larger inter-class distances. Extensive experiments on CIFAR-10/100, Tiny ImageNet, and three fine-grained datasets show that AM loss can outperform SCE loss, which also demonstrates the great potential of improving deep neural network training by exploring new loss functions.
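For reference, the standard SCE loss that the paper takes as its baseline can be sketched as follows; this is a minimal NumPy illustration of the textbook definition, not the paper's reformulation or the proposed AM loss, whose details are not given in the abstract.

```python
# Minimal sketch of softmax cross-entropy (SCE) loss for one sample:
# SCE(logits, label) = -log(softmax(logits)[label]).
import numpy as np

def softmax_cross_entropy(logits, label):
    """Return the SCE loss for a single sample."""
    shifted = logits - np.max(logits)                    # numerical stability
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))  # log-softmax
    return -log_probs[label]

# Example: three-class logits where class 0 is the true label.
loss = softmax_cross_entropy(np.array([2.0, 1.0, 0.1]), 0)
```

As the logit for the true class grows relative to the others, the loss approaches zero, which is one reason SCE gradients vanish late in training, the regime the abstract says AM loss addresses.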
Keywords: Education and training, Image classification, Neural networks, Data modeling, Feature extraction, Convolutional neural networks, Facial recognition systems