In our paper we combine neural networks with Hidden Markov Models (HMMs) for multiview object recognition. While convolutional neural networks are highly effective at object recognition, there is still room for improvement in many practical cases. For example, if training is unsatisfactory or object localization cannot be solved by the neural network, fusing information from several images and from inertial sensors can still substantially improve the recognition rate. In our use case we recognize objects from several directions with the VGG16 network. We assume that no localization of objects on the images is possible due to the lack of bounding-box annotations, so we must recognize objects even when they occupy only about 25% of the field of view. To overcome this problem we propose a Hidden Markov Model approach in which consecutive queries (shots taken from different viewing directions) are first evaluated with VGG16 inference and then with the Viterbi algorithm. The role of the latter is to estimate the most probable sequence of candidate poses (from the 8 predefined horizontal views in our experiments), from which we can select the most probable object. The approach, evaluated with different numbers of queries over a set of 40 objects from the COIL-100 dataset, yields a significant increase in hit rate compared to single-shot recognition or to combining individual shots without the HMM.
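The Viterbi step described above can be sketched as follows. This is a minimal illustration of standard Viterbi decoding over the 8 horizontal views, assuming per-shot class scores stand in for VGG16 outputs; the transition model, emission probabilities, and array shapes are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_prior):
    """Most probable hidden-state (pose) sequence.

    log_emis:  (T, S) log P(observation_t | state s), e.g. log of
               per-shot classifier scores (illustrative stand-in
               for VGG16 inference).
    log_trans: (S, S) log P(state j | state i), e.g. favouring a
               step to the neighbouring horizontal view.
    log_prior: (S,)   log P(initial state).
    """
    T, S = log_emis.shape
    delta = log_prior + log_emis[0]          # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (S, S): from state i to state j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emis[t]
    # Backtrack to recover the most probable pose sequence.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path, float(np.max(delta))

# Toy example: S = 8 horizontal views, T = 3 shots (hypothetical numbers).
S = 8
# Transition model: the camera most likely steps to the next view.
trans = np.full((S, S), 0.02)
for i in range(S):
    trans[i, (i + 1) % S] = 1.0
trans /= trans.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
emis = rng.random((3, S))
emis /= emis.sum(axis=1, keepdims=True)    # fake per-shot class scores

path, logp = viterbi(np.log(emis), np.log(trans), np.log(np.full(S, 1 / S)))
```

Running the decoder per candidate object and comparing the resulting path log-probabilities would then select the most probable object, as the abstract describes.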