KEYWORDS: Medical imaging, Scanning electron microscopy, Visualization, Convolution, Education and training, Feature fusion, Image fusion, Image quality, Super resolution
Recent convolutional neural network (CNN)-based super-resolution (SR) studies have incorporated non-local attention (NLA) to exploit long-range feature correlations, achieving considerable performance improvements. Here, we propose a novel NLA scheme, called multi-scale NLA (MS-NLA), that computes NLA at multiple scales and fuses the results. To fuse them effectively, we also propose two learning-based fusion methods and analyze their performance on a recurrent SR network, along with the effect of weight sharing in the fusion methods. In 2× and 4× SR experiments on benchmark datasets, our method achieved PSNR values 0.295 and 0.148 dB higher on average than single-scale NLA and cross-scale NLA, respectively, and produced visually more pleasing SR results. Weight sharing had a limited but positive effect, depending on the dataset. The source code is available at https://github.com/Dae12-Han/MSNLN.
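The core idea can be sketched as follows, assuming PyTorch. The module names, the choice of scales, and the 1×1-convolution fusion are illustrative stand-ins, not the authors' released implementation (see the linked repository for that).

```python
# Minimal sketch of multi-scale non-local attention with learned fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Standard embedded-Gaussian non-local block with a residual output."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, 1)
        self.phi = nn.Conv2d(channels, channels // 2, 1)
        self.g = nn.Conv2d(channels, channels // 2, 1)
        self.out = nn.Conv2d(channels // 2, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.phi(x).flatten(2)                     # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C/2)
        attn = F.softmax(q @ k / (c // 2) ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)

class MultiScaleNLA(nn.Module):
    """Compute NLA at several scales and fuse the upsampled outputs."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # Weight-sharing variant: replace this per-scale ModuleList with a
        # single NonLocalAttention reused at every scale.
        self.blocks = nn.ModuleList(NonLocalAttention(channels) for _ in scales)
        # One plausible learning-based fusion: a 1x1 conv over concatenation.
        self.fuse = nn.Conv2d(channels * len(scales), channels, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        outs = []
        for s, block in zip(self.scales, self.blocks):
            xs = F.avg_pool2d(x, s) if s > 1 else x
            ys = block(xs)
            outs.append(F.interpolate(ys, size=(h, w), mode="bilinear",
                                      align_corners=False) if s > 1 else ys)
        return self.fuse(torch.cat(outs, dim=1))
```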
Computer-generated holography (CGH), the process of generating digital holograms, is computationally expensive. Recently, several methods and systems that parallelize the process using graphics processing units (GPUs) have been proposed; indeed, using multiple GPUs or a personal computer (PC) cluster (each PC equipped with GPUs) has greatly improved processing speed. However, the literature has rarely explored the rapid generation of multiple digital holograms, or systems specialized for rapidly generating a digital video hologram. This study proposes a PC-cluster-based system that generates a video hologram more efficiently. Rather than generating each frame separately and parallelizing the CGH computations within a frame, the proposed system generates multiple frames simultaneously, parallelizing the CGH computations across frames. It also executes the subprocesses for generating each frame in parallel through multithreading. With these two schemes, the proposed system significantly reduces the data communication time for generating a digital hologram compared with the state-of-the-art system.
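The frame-level parallelization can be illustrated with a minimal single-machine analogue, assuming Python's multiprocessing in place of a PC cluster: whole frames are handed out to workers, rather than all workers cooperating on one frame at a time. The CGH kernel below is a toy point-source model; the resolution, optical parameters, and point clouds are illustrative.

```python
import numpy as np
from multiprocessing import Pool

H, W = 512, 512        # hologram resolution (kept small for the sketch)
WAVELENGTH = 532e-9    # metres
PITCH = 8e-6           # pixel pitch, metres

def compute_hologram(points):
    """Superpose one spherical wavefront per 3-D object point."""
    y, x = np.mgrid[0:H, 0:W]
    xs, ys = x * PITCH, y * PITCH
    field = np.zeros((H, W))
    for px, py, pz in points:
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += np.cos(2 * np.pi * r / WAVELENGTH)
    return field

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # One small random point cloud per video frame.
    frames = [rng.uniform([-1e-3, -1e-3, 0.1], [1e-3, 1e-3, 0.2], (8, 3))
              for _ in range(16)]
    with Pool() as pool:                                # one process per core
        holograms = pool.map(compute_hologram, frames)  # parallel over frames
```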
KEYWORDS: Steganography, Roads, Data hiding, Visualization, Defense and security, Electronics engineering, Eye, Image processing
This paper proposes a layered approach that improves the embedding capacity of existing pixel-value differencing (PVD) methods for image steganography. Specifically, one PVD method is applied to embed secret information into a cover image, and the resulting image, called a stego-image, is used to embed additional secret information using the same or another PVD method. This yields a double-layered stego-image. Another PVD method can then be applied to the double-layered stego-image, resulting in a triple-layered stego-image; likewise, multi-layered stego-images can be obtained. The embedding process is carefully designed so that the secret information hidden in each layer can be successfully recovered. Experiments show that the proposed layered PVD method is effective.
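A hedged sketch of one PVD layer follows, using a simplified variant of Wu-Tsai pixel-value differencing: for each pixel pair the first pixel is kept fixed and only the second is moved, which keeps the embedder's and extractor's skip decisions exactly consistent (the original method splits the change between both pixels). Each embed() pass is one layer; the paper's careful design for recovering earlier layers after later ones are written is not reproduced here.

```python
import numpy as np

# Wu-Tsai style quantization ranges; capacity is log2(width) bits per pair.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def _range_of(d):
    return next((lo, hi) for lo, hi in RANGES if lo <= abs(d) <= hi)

def embed(cover, bits):
    """One PVD layer: hide a bit string in consecutive pixel pairs."""
    flat = cover.astype(int).reshape(-1).copy()
    pos = 0
    for i in range(0, flat.size - 1, 2):
        p1, p2 = flat[i], flat[i + 1]
        lo, hi = _range_of(p2 - p1)
        n = (hi - lo + 1).bit_length() - 1   # bits this pair can hold
        # Skip pairs too close to 0/255 (the test depends only on p1 and
        # the range, both unchanged by embedding, so the extractor makes
        # the same decision), and pairs past the end of the message.
        if not hi <= p1 <= 255 - hi or pos + n > len(bits):
            continue
        value = int(bits[pos:pos + n], 2)
        pos += n
        sign = 1 if p2 >= p1 else -1
        flat[i + 1] = p1 + sign * (lo + value)   # only p2 moves
    return flat.reshape(cover.shape).astype(np.uint8), pos

def extract(stego, nbits):
    flat = stego.astype(int).reshape(-1)
    out = ""
    for i in range(0, flat.size - 1, 2):
        p1, p2 = flat[i], flat[i + 1]
        lo, hi = _range_of(p2 - p1)
        n = (hi - lo + 1).bit_length() - 1
        if not hi <= p1 <= 255 - hi or len(out) + n > nbits:
            continue
        out += format(int(abs(p2 - p1) - lo), f"0{n}b")
    return out

# Two layers: the second embed() simply runs on the first stego-image.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
stego1, used1 = embed(cover, "110100111010")
msg2 = "01101100"
stego2, used2 = embed(stego1, msg2)
assert extract(stego2, used2) == msg2[:used2]   # layer-2 data recovered
```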
It is difficult to visually track a user's hand because of the many degrees of freedom (DOF) a hand has. For this reason, most model-based hand pose tracking methods have relied on multiview images or RGB-D images. This paper proposes a model-based method that accurately tracks three-dimensional hand poses from monocular RGB images in real time. The main idea is to reduce hand tracking ambiguity with a step-by-step estimation scheme consisting of three steps performed in consecutive order: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation, with an effective algorithm for each step. Assuming that a human hand can be modeled as an assemblage of articulated planes, the proposed method uses a piece-wise planar hand model that enables hand model regeneration, which modifies the hand model to fit the current user's hand and improves the accuracy of the pose estimates. Notably, the method runs in real time using only CPU-based processing, so it can be applied to various platforms, including egocentric vision devices such as wearable glasses. Several experiments verify the efficiency and accuracy of the proposed method.
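The value of the three-step decomposition is that each stage fixes part of the hand state, so the next stage searches a much smaller space. A structural sketch follows, with illustrative names and stubbed estimators; the paper's image-based optimization at each step is not reproduced.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class HandState:
    palm_pose: np.ndarray = field(default_factory=lambda: np.zeros(6))        # 6-DOF rotation + translation
    finger_yaw: np.ndarray = field(default_factory=lambda: np.zeros(5))       # one spread angle per finger
    finger_pitch: np.ndarray = field(default_factory=lambda: np.zeros((5, 3)))  # three bend joints per finger

def estimate_palm_pose(image, prev):
    return prev.palm_pose      # stub: fit the palm plane of the model first

def estimate_finger_yaw(image, state):
    return state.finger_yaw    # stub: sideways spread, palm already fixed

def estimate_finger_pitch(image, state):
    return state.finger_pitch  # stub: bending, palm and yaw already fixed

def track(image, prev):
    """Consecutive estimation: each stage conditions on the ones before."""
    state = HandState(prev.palm_pose.copy(), prev.finger_yaw.copy(),
                      prev.finger_pitch.copy())
    state.palm_pose = estimate_palm_pose(image, prev)
    state.finger_yaw = estimate_finger_yaw(image, state)
    state.finger_pitch = estimate_finger_pitch(image, state)
    return state
```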
The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction, yet relatively little attention has been paid to the interaction between human hands and the mobile phone. We therefore propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand while viewing the other hand's palm through the built-in camera. Virtual content is faithfully rendered on the user's palm using palm pose estimation, and it reacts to hand and finger movements recognized through hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and requires no additional sensors or markers, the user can freely interact with virtual content anytime and anywhere without any training. We demonstrate that the proposed interface runs at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.
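For the hand-shape recognition step, a common lightweight baseline is sketched below, assuming OpenCV: skin-color segmentation in YCrCb, the largest contour, and convexity defects to count extended fingers. This is a generic stand-in, not the authors' pipeline, and the thresholds are typical values rather than theirs.

```python
import cv2
import numpy as np

def count_fingers(bgr):
    """Rough finger count from a single BGR frame."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # typical skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)       # assume hand is largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Each sufficiently deep defect is a valley between two fingers.
    valleys = sum(1 for i in range(defects.shape[0])
                  if defects[i, 0, 3] / 256.0 > 20)  # defect depth in pixels
    return valleys + 1 if valleys else 0
```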
We describe and evaluate a practical approach for implementing computer-generated holography (CGH) using multiple graphics processing units (GPUs). The proposed method can generate high-definition (HD) resolution (1920×1080) digital holograms in real time. To demonstrate the plausibility of our method, we present several experimental results. First, we discuss the advantage of GPUs over central processing units (CPUs) for CGH by comparing the performance of both; our results show that GPUs can reduce CGH computation time by a factor of 2791. We then discuss the potential of multiple GPUs for generating HD resolution digital holograms in real time by measuring and analyzing the CGH computation time as a function of the number of GPUs. Our results show that the computation time decreases nonlinearly, following a logarithmic-like curve, as the number of GPUs increases, allowing the number of GPUs that maximizes efficiency to be determined. Consequently, our implementation can generate HD resolution digital holograms at a rate of more than 66 holograms per second (hps) using two NVIDIA GTX 590 cards.
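One simple way to spread a single point-source CGH computation over several GPUs is to let each GPU compute a horizontal band of the hologram and concatenate the bands on the host. The sketch below assumes CuPy as a stand-in for the paper's CUDA implementation; parameter values are illustrative, the hologram height is assumed divisible by the GPU count, and a real implementation would launch the bands asynchronously rather than in this sequential loop.

```python
import numpy as np
import cupy as cp

WAVELENGTH = 532e-9   # metres
PITCH = 8e-6          # pixel pitch, metres
H, W = 1080, 1920     # HD hologram resolution

def cgh_band(points, row0, rows, device_id):
    """Compute rows [row0, row0 + rows) of the hologram on one GPU."""
    with cp.cuda.Device(device_id):
        y, x = cp.mgrid[row0:row0 + rows, 0:W]
        xs, ys = x * PITCH, y * PITCH
        band = cp.zeros((rows, W))
        for px, py, pz in points:   # one spherical wave per object point
            r = cp.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
            band += cp.cos(2 * cp.pi * r / WAVELENGTH)
        return cp.asnumpy(band)

n_gpus = cp.cuda.runtime.getDeviceCount()
points = [(0.0, 0.0, 0.1), (1e-3, -1e-3, 0.12)]   # toy 3-D point cloud
rows = H // n_gpus                                 # assumes even division
hologram = np.vstack([cgh_band(points, i * rows, rows, i)
                      for i in range(n_gpus)])
```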
Natural feature-based approaches remain challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured, planar scenes or objects, and they require powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are ineffective, three-dimensional (3-D) knowledge of the target scene is beneficial. We present a framework for real-time visual tracking of less-textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge, such as object models and a background's distinctive geometric or photometric features. Moreover, we elaborate on the implementation to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown through real-time demonstrations.
Augmented reality (AR) has recently gained significant attention. Previous AR techniques usually require a fiducial marker of known geometry, or objects whose structure can be easily estimated, such as a cube; however, placing a marker in the user's workspace can be intrusive. To overcome this limitation, we present an AR system that uses invisible markers created or drawn with an infrared (IR) fluorescent pen. Two cameras are used, an IR camera and a visible-light camera, positioned on either side of a cold mirror so that their optical centers coincide. We track the invisible markers with the IR camera and visualize AR in the view of the visible-light camera. Additional algorithms make the system perform reliably against cluttered backgrounds. Experimental results demonstrate the viability of the proposed system. As an application, the invisible marker can act as a Vision-Based Identity and Geometry (VBIG) tag, which can significantly extend the functionality of RFID: like an RFID tag, it is imperceptible, but it is more powerful in that the tag information can be presented to the user by direct projection with a mobile projector or by visualizing AR on the screen of a mobile PDA.
A linear method for calibrating a camera from a single view of two concentric semicircles of known radii is presented. Using the estimated centers of the projected semicircles and the four corner points on the projected semicircles, the focal length and pose of the camera are accurately estimated in real time. The method is applied to augmented reality applications, and its validity is verified.
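For context, once point correspondences on the calibration plane yield a homography, a standard closed-form step recovers the focal length and pose, assuming square pixels and a principal point at the image center. The sketch below shows that generic route; it is not the paper's specific linear formulation based on the projected semicircle centers and corner points.

```python
import numpy as np

def focal_and_pose(H, cx, cy):
    """H maps plane points (X, Y, 1) to pixels; (cx, cy) is the principal point."""
    # Shift the principal point to the origin: H' = T^{-1} H.
    T = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1]], float)
    Hn = np.linalg.inv(T) @ H
    h1, h2, h3 = Hn[:, 0], Hn[:, 1], Hn[:, 2]
    # Orthogonality of the rotation columns, h1^T w h2 = 0 with
    # w = diag(1/f^2, 1/f^2, 1), gives f in closed form (valid H assumed,
    # so the expression under the square root is positive).
    f = np.sqrt(-(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2]))
    Kinv = np.diag([1 / f, 1 / f, 1.0])
    r1, r2, t = Kinv @ h1, Kinv @ h2, Kinv @ h3
    s = 1.0 / np.linalg.norm(r1)      # scale fixed by unit rotation columns
    r1, r2, t = s * r1, s * r2, s * t
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return f, R, t
```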