Deep learning models are currently the models of choice for image classification, but large-scale models require large quantities of data, and for many tasks acquiring sufficient training data is not feasible. Few-shot learning (FSL) is therefore an active area of machine learning research, with architectures designed to build effective models in the low-sample regime. In this paper, we focus on the established few-shot learning algorithm of Snell et al.1 We propose a hybrid FSL model in which an encoder is trained conventionally and its backend output layer is then replaced with the prototypical clustering classifier of Snell et al. We hypothesize that the encoding structure produced by episodic FSL training may be equivalent to that produced by traditional cross-entropy deep learning optimization. We compare few-shot classification performance on unseen classes between models trained with the FSL training paradigm and our hybrid models, trained traditionally with softmax but modified for FSL use. Our empirical results indicate that traditionally trained models can be effectively reused for few-sample classification.
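The prototypical classifier referenced above can be sketched as follows: each class prototype is the mean embedding of that class's support examples, and a query is assigned to the nearest prototype by Euclidean distance (following Snell et al.; the encoder itself and any distance variants are abstracted away here).

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    """Class prototypes: the mean embedding of each class's support set."""
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(n_classes)
    ])

def classify(query_embeddings, protos):
    """Assign each query embedding to its nearest prototype (Euclidean)."""
    # dists[i, c] = ||query_i - proto_c||
    dists = np.linalg.norm(
        query_embeddings[:, None, :] - protos[None, :, :], axis=-1
    )
    return dists.argmin(axis=1)
```

In the hybrid setup described in the abstract, the embeddings would come from a conventionally trained encoder with its softmax layer removed, and this nearest-prototype rule replaces the original output layer at few-shot evaluation time.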
Deep learning models are pervasive across a multitude of tasks, but their complexity can limit interpretation and inhibit trust in their confidence estimates. For the classification task, we investigate the geometric relationships induced between the class-conditional data distributions and the deep learning model's output weight vectors. We propose a simple statistic, which we call Angular Margin, to characterize the model's "confidence" given a new input. We compare and contrast our statistic with Angular Visual Hardness and softmax outputs, and demonstrate that Angular Margin is a superior statistic to standard softmax predictions for detecting minimum-perturbation adversarial attacks and misclassified images.
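The abstract does not define the statistic precisely; a minimal sketch of one plausible reading is given below, assuming Angular Margin is the gap between the two smallest angles from a penultimate-layer feature vector to the classifier's output weight vectors (the function name and this definition are illustrative assumptions, not the authors' stated formulation).

```python
import numpy as np

def angular_margin(feature, weight_matrix):
    """Hypothetical Angular Margin: gap (in radians) between the two
    smallest angles from `feature` to the rows of `weight_matrix`.

    A large margin means the feature aligns strongly with one class's
    weight vector; a margin near zero means it sits near a decision
    boundary, suggesting low confidence.
    """
    f = feature / np.linalg.norm(feature)
    W = weight_matrix / np.linalg.norm(weight_matrix, axis=1, keepdims=True)
    # Angle to each class weight vector via the clipped cosine.
    angles = np.arccos(np.clip(W @ f, -1.0, 1.0))
    a = np.sort(angles)
    return a[1] - a[0]
```

Under this reading, a minimum-perturbation adversarial example that barely crosses a decision boundary would yield a near-zero margin even when the softmax output remains confident, which is consistent with the detection claim in the abstract.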