The visual question answering (VQA) task requires using the content of a question to locate the corresponding regions of an image. Traditional attention-based VQA methods, however, cannot accurately match the image regions that are relevant to the question, which leads to unsatisfactory performance. In this paper, an Efficient Multi-step Reasoning Attention Network (EMRA), composed mainly of a multi-step reasoning attention module and G-LReLU non-linear layers, is proposed to address this problem. Specifically, the multi-step reasoning attention module combines the initial visual features, the question features, and the joint features to generate more effective attended features, which precisely represent the image regions related to the question. The attended visual features produced by the multi-step reasoning attention module and the question features are then fed into the G-LReLU non-linear layers, whose non-linear transformations yield a better fusion for answer prediction. In addition, examining the relationship between model scale and the number of reasoning steps, we find that accuracy improves further when the model width is increased along with the number of reasoning steps. Experimental results on the VQA v2.0 dataset demonstrate that our model significantly outperforms Bottom-Up and Top-Down attention based methods and is competitive with state-of-the-art models.
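The abstract does not give the EMRA equations, so the following PyTorch sketch only illustrates the general idea of multi-step reasoning attention under stated assumptions: GLReLU is one plausible reading of "G-LReLU" (a linear layer modulated by a LeakyReLU-activated gate), and the class names, n_steps, and the additive update rule are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of multi-step reasoning attention; all names and the exact
# update equations are assumptions, since the abstract gives only the idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GLReLU(nn.Module):
    """Gated non-linear layer: a linear projection modulated by a
    LeakyReLU-activated gate (one plausible reading of 'G-LReLU')."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.gate = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.fc(x) * torch.sigmoid(F.leaky_relu(self.gate(x)))


class MultiStepReasoningAttention(nn.Module):
    """Refines the attended visual feature over several reasoning steps,
    each step conditioning the attention on the current joint feature."""

    def __init__(self, v_dim, q_dim, hid_dim, n_steps=3):
        super().__init__()
        self.n_steps = n_steps
        self.v_proj = nn.Linear(v_dim, hid_dim)
        self.q_proj = nn.Linear(q_dim, hid_dim)
        self.att = nn.Linear(hid_dim, 1)

    def forward(self, v, q):
        # v: (batch, n_regions, v_dim) region features; q: (batch, q_dim)
        joint = self.q_proj(q)                        # initial joint feature
        for _ in range(self.n_steps):
            h = torch.tanh(self.v_proj(v) + joint.unsqueeze(1))
            alpha = torch.softmax(self.att(h), dim=1)  # weights over regions
            attended = (alpha * self.v_proj(v)).sum(dim=1)
            joint = joint + attended                  # feed into next step
        return joint


class EMRAHead(nn.Module):
    """Fuses the attended visual and question features through G-LReLU
    layers and predicts an answer distribution."""

    def __init__(self, v_dim, q_dim, hid_dim, n_answers, n_steps=3):
        super().__init__()
        self.attn = MultiStepReasoningAttention(v_dim, q_dim, hid_dim, n_steps)
        self.v_glu = GLReLU(hid_dim, hid_dim)
        self.q_glu = GLReLU(q_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, n_answers)

    def forward(self, v, q):
        fused = self.v_glu(self.attn(v, q)) * self.q_glu(q)  # element-wise fusion
        return self.classifier(fused)
```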
The purpose of image activity recognition is to understand the human behavior depicted in an image and to label the identified activity with a pre-defined category. Still-image activity recognition plays an important role in the field of image recognition, but its performance has not yet reached satisfactory levels. In this paper, we construct a hybrid recognition model that combines Region-CNN (RCNN), AlexNet, and an SVM (or a Random Forest), and that effectively improves performance over each individual method. First, the main object regions in an image are extracted with RCNN; then AlexNet extracts features from each object region; finally, all the object features are concatenated and used to train an SVM classifier (or a Random Forest classifier) for still-image activity recognition. Experimental results on a still-image dataset covering 40 activity categories show that the hybrid model outperforms the single CNN model and other traditional methods: AlexNet alone achieves an accuracy of 69.48%, the hybrid RCNN+AlexNet+SVM model achieves 75.48%, and the hybrid RCNN+AlexNet+RF model reaches 78.15%, which verifies the effectiveness of our method.
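Since the abstract describes the pipeline only at a high level, the Python sketch below shows one way the three stages could be chained. Here propose_regions stands in for the unspecified R-CNN proposal stage, the torchvision AlexNet replaces the authors' trained network, and max_regions and all function names are hypothetical.

```python
# Sketch of the region -> AlexNet-feature -> classifier pipeline; the
# region-proposal function and parameters are stand-ins, not the authors' code.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier

alexnet = models.alexnet(weights="DEFAULT")
alexnet.classifier = alexnet.classifier[:-1]  # drop final fc: 4096-d features
alexnet.eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def image_feature(image, propose_regions, max_regions=4):
    """Crop the proposed object regions (image assumed to be an HxWxC uint8
    array), run each through AlexNet, and concatenate the per-region
    features into one fixed-length vector."""
    feats = []
    for (x1, y1, x2, y2) in propose_regions(image)[:max_regions]:
        crop = preprocess(image[y1:y2, x1:x2])
        with torch.no_grad():
            feats.append(alexnet(crop.unsqueeze(0)).squeeze(0).numpy())
    while len(feats) < max_regions:        # pad so every image yields the
        feats.append(np.zeros(4096))       # same feature dimensionality
    return np.concatenate(feats)


# The concatenated features feed a conventional classifier; the abstract
# reports the Random Forest variant performing best on the 40-category set.
def train_classifier(features, labels, use_forest=True):
    clf = RandomForestClassifier(n_estimators=200) if use_forest else LinearSVC()
    return clf.fit(features, labels)
```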
Achieving better performance has always been an important research goal in automatic image annotation. This paper draws on currently popular deep learning models and proposes a combination of multiple convolutional neural networks (CNNs) for image annotation that achieves satisfactory performance. First, we train three classical convolutional neural networks and measure the annotation accuracy of each CNN individually. Then, to take full advantage of the powerful feature-representation capability of deep CNNs, we extract the last two layers of each model and merge them into a new combined feature. Finally, we build our combination models by concatenating these features across the CNN models and feed the concatenated features into a linear SVM classifier for image annotation. Experimental results on the ImageCLEF2012 image annotation dataset show that our combination method outperforms both traditional classifiers and the individual CNN models.
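The abstract does not name the three classical CNNs or specify the exact layer-extraction procedure, so the sketch below is an assumption-laden illustration: AlexNet and VGG16 stand in for the unnamed networks, and penultimate_features approximates "the last two layers" by concatenating the outputs of the final two fully connected layers.

```python
# Sketch of multi-CNN feature combination feeding a linear SVM; the chosen
# backbones and the layer-extraction rule are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import LinearSVC


def penultimate_features(backbone_classifier, x):
    """Return the activations of the last two fully connected layers,
    concatenated, for one forward pass through the classifier head."""
    acts = []
    for layer in backbone_classifier:
        x = layer(x)
        if isinstance(layer, nn.Linear):
            acts.append(x)
    return torch.cat(acts[-2:], dim=1)  # last two fc-layer outputs


def combined_feature(image_batch, nets):
    """Concatenate the deep features from every CNN into one vector."""
    feats = []
    with torch.no_grad():
        for net in nets:
            pooled = net.avgpool(net.features(image_batch)).flatten(1)
            feats.append(penultimate_features(net.classifier, pooled))
    return torch.cat(feats, dim=1).numpy()


nets = [models.alexnet(weights="DEFAULT").eval(),
        models.vgg16(weights="DEFAULT").eval()]

# X: (n_images, 3, 224, 224) tensor of preprocessed images; y: label array.
# svm = LinearSVC().fit(combined_feature(X, nets), y)
```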