To identify camouflage materials on military targets, this paper extracts multiple features to study the differences in optical characteristics between natural targets and man-made camouflage materials. Since Fresnel reflection can be regarded as a statistical description of scattering, a multi-angle polarization measurement device is used to measure polarization and scattering characteristics. According to the physical meaning of the Mueller-Jones matrix, expressions for the amplitude ratio and phase retardation are derived. Based on Pauli decomposition, a new scattering similarity parameter formula is defined. We discuss the curves of the three characteristic parameters and analyze the differences between natural objects and camouflage materials. The experimental results show that the characteristic curves change significantly at Brewster's angle, which clearly distinguishes the target from the camouflage material.
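The Brewster-angle behavior above follows from the Fresnel equations. As a minimal illustration (a textbook dielectric-interface calculation, not the paper's measurement model; the refractive indices are arbitrary example values), the p-polarized amplitude reflection coefficient vanishes at Brewster's angle θ_B = arctan(n2/n1):

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Fresnel amplitude reflection coefficients (s and p polarization)
    for a lossless dielectric interface; theta_i in radians."""
    theta_t = math.asin(n1 / n2 * math.sin(theta_i))  # Snell's law
    rs = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
         (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    rp = (n2 * math.cos(theta_i) - n1 * math.cos(theta_t)) / \
         (n2 * math.cos(theta_i) + n1 * math.cos(theta_t))
    return rs, rp

# At Brewster's angle for an air-glass interface, rp goes to zero while
# rs does not, which is why polarization signatures change sharply there.
brewster = math.atan2(1.5, 1.0)  # arctan(n2 / n1), n1 = 1.0, n2 = 1.5
```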
Infrared and visible images have different imaging principles and contain different information. Fusing them can combine the information of both, while the complete edge structure of infrared images guarantees image acquisition in harsh and complex environments. This paper therefore proposes an infrared and visible image fusion method based on deep learning. Visible and infrared image pairs are first decomposed into high-frequency and low-frequency parts. The low-frequency parts are fused directly with a weighted-average strategy. A ResNet network extracts image features from the high-frequency parts of the visible and infrared images. The Fisher discriminant method screens the extracted features, and ZCA whitening is applied to the selected features to further remove redundant information. An initial weight map is obtained from the L1 norm of the whitened features, and the final weight map is obtained with the softmax method. The high-frequency parts of the infrared and visible images are combined according to these weights to obtain the high-frequency part of the fused image, and the high- and low-frequency parts are then added to obtain the final fused image. The results were compared with those of other methods in terms of both subjective appearance and objective metrics, and they show that the proposed method produces a more natural fusion effect and has advantages on the objective metrics.
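The L1-norm and softmax weighting steps described above can be sketched as follows. This is a generic illustration assuming per-source feature maps of shape (channels, height, width); the ResNet feature extraction, Fisher screening, and ZCA whitening stages are omitted, and the function names are placeholders, not the paper's code.

```python
import numpy as np

def softmax_weight_maps(feat_ir, feat_vis):
    """Per-pixel activity from the channel-wise L1 norm, turned into
    fusion weights with a two-way softmax."""
    a_ir = np.abs(feat_ir).sum(axis=0)    # L1 norm across channels
    a_vis = np.abs(feat_vis).sum(axis=0)
    e_ir, e_vis = np.exp(a_ir), np.exp(a_vis)
    w_ir = e_ir / (e_ir + e_vis)          # softmax over the two sources
    return w_ir, 1.0 - w_ir

def fuse_high(hf_ir, hf_vis, w_ir, w_vis):
    """Weighted combination of the two high-frequency parts."""
    return w_ir * hf_ir + w_vis * hf_vis
```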
Optical frequency-domain reflectometry and frequency-modulated continuous wave (FMCW)-based sensing technologies, such as LiDAR and distributed fiber sensors, fundamentally rely on the performance of frequency-swept laser sources. Specifically, frequency-sweep linearity, which determines the level of measurement distortion, is of paramount importance. Sweep-velocity-locked semiconductor lasers (SVLLs) controlled via phase-locked loops (PLLs) have been studied for many FMCW applications owing to their simplicity, low cost, and low power consumption. We demonstrate an alternative, self-adaptive laser control system that generates an optimized predistortion curve through PLL iterations. The described self-adaptive algorithm was successfully implemented in a digital circuit. The results show that the phase error of the SVLL improves by roughly one order of magnitude relative to operation without this method, demonstrating that the self-adaptive algorithm is a viable way to linearize the output of frequency-swept laser sources.
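The iterative predistortion idea can be sketched with a toy model: each iteration measures the residual phase error of the sweep and subtracts a scaled copy of it from the drive curve. This is a hedged illustration only; the linear "plant", the gain value, and all names below are assumptions, not the paper's control loop.

```python
def refine_predistortion(drive, phase_error, gain=0.5):
    """One PLL-style iteration: subtract a scaled copy of the measured
    phase-error profile from the current predistortion drive curve."""
    return [d - gain * e for d, e in zip(drive, phase_error)]

# Toy plant: the sweep response is the drive plus a fixed nonlinearity
# (a stand-in for the laser's tuning response).
target = [0.1 * i for i in range(64)]                 # ideal linear phase ramp
distortion = [0.02 * i * i / 64 for i in range(64)]   # fixed sweep nonlinearity
drive = list(target)                                  # start from the ideal ramp

for _ in range(20):                                   # predistortion iterations
    response = [d + dist for d, dist in zip(drive, distortion)]
    error = [r - t for r, t in zip(response, target)]
    drive = refine_predistortion(drive, error)

final_error = max(abs(e) for e in error)
```

In this toy model the residual error shrinks by the loop gain each pass, converging to the drive curve that exactly cancels the fixed nonlinearity.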
The accuracy of air target identification is of great significance for air defense operations and civilian management. A fine-grained aerial target recognition model based on a bilayer Faster R-CNN (faster regions with convolutional neural network) with feedback is proposed in this paper. Faster R-CNN is a typical deep-learning target detection model; however, its ability to distinguish categories with subtle differences is limited. In the proposed model, Faster R-CNN is trained first to obtain a classification model, and cluster analysis of the classification results identifies the confusable categories. The first-stage model is then fine-tuned to retrain these confusable categories. The model is tested on the FGVC-Aircraft-2013b data set; the average training accuracy rises from 88.7% to 89.3%, and the classification accuracy rises from 88.98% to 91.21%, which shows that this model is effective in improving the fine-grained identification of air targets.
Infrared small target detection is one of the key techniques in infrared search and track systems, and its essence is background suppression and target enhancement. Inspired by the fact that the phase spectrum of the Fourier transform has proved more effective than the amplitude spectrum for extracting salient areas, a new infrared small target detection method based on the phase spectrum of the quaternion Fourier transform (PQFT) is proposed in this paper. First, four features, including intensity, motion, and the horizontal and vertical gradients, are used to construct the quaternion for the PQFT. Then, a target enhancement map that highlights the salient regions in the time domain is computed using the inverse PQFT. Finally, the real target is segmented directly by an adaptive threshold. Both qualitative and quantitative experiments on real infrared sequences evaluate the proposed method, and the results demonstrate that it is more robust and effective in terms of background suppression and target enhancement than other conventional methods.
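The core of phase-spectrum saliency can be sketched in a simplified single-channel form: keep only the phase of the Fourier transform, invert it, and square the result. This is a hedged sketch of the general principle, not the quaternion (four-feature) version the paper uses, which additionally encodes intensity, motion, and gradient channels.

```python
import numpy as np

def phase_saliency(img):
    """Single-channel phase-spectrum saliency: discard the amplitude
    spectrum, keep the phase, and reconstruct."""
    f = np.fft.fft2(img)
    phase_only = np.exp(1j * np.angle(f))   # unit-magnitude spectrum
    return np.abs(np.fft.ifft2(phase_only)) ** 2
```

A small bright target on a flat background concentrates the reconstruction energy at the target location, which is why the phase spectrum suppresses smooth backgrounds so well.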
To obtain accurate and stable image stitching results, we propose a stitching method for two images captured from different viewpoints based on correlation transformation. To overcome the limitations of the projective transformation commonly used in image stitching, a transformation called the dual-correlation transformation is proposed in this paper. First, the fundamental matrix is estimated by the direct linear transformation from the corresponding points in the two images. Second, according to the presented dual-correlation transformation, a pair of correlation transformation matrices needed for the dual-correlation warp is obtained to establish the correspondence of each pixel between the images. At this stage, stitching based on the transformation matrices is complete. Finally, an optimization method based on factorization is proposed to solve the discontinuity problem that may occur in the dual-correlation warp. The experimental results and analyses show that the proposed method achieves more accurate and natural stitching and requires less computing time for images of separate scenes than other similar methods.
Distributed optical fiber sensors are an increasingly utilized means of gathering distributed strain and temperature data. However, the large amount of data they generate presents a challenge that limits their use in real-time, in-situ applications. This letter describes a parallel and pipelined computing architecture that accelerates the signal-processing speed of sub-terahertz fiber sensor (sub-THz-fs) arrays, maintaining high spatial resolution while allowing expanded use in real-time sensing and control applications. The architecture was successfully implemented in a field-programmable gate array (FPGA) chip: signal processing for the entire array takes only 12 system clock cycles. In addition, the design removes the need to store any raw or intermediate data.
The recently proposed robust principal component analysis (RPCA) theory and its derived methods have attracted much attention in many computer vision and machine intelligence applications. Broadly, these methods model independently moving objects as pixel-wise sparse or structurally sparse outliers from a highly correlated background signal and are implemented as an ℓ1-penalized optimization. Real-data experiments reveal that even though the ℓ1 penalty is convex, the optimization sometimes cannot be solved satisfactorily, especially when the signal-to-noise ratio is relatively high. In addition, unexpected background motion (e.g., periodic or stochastic motion) may also be included in the foreground. We propose a moving object detection method based on a proximal RPCA combined with saliency detection. Convex penalties, including the low-rank and sparse regularizations, are replaced with proximal norms to achieve robust regression. After the foreground candidates have been extracted, a motion saliency map is constructed using spatiotemporal filtering. The foreground objects are then filtered out by dynamically adjusting the penalty parameter according to the corresponding saliency values. Evaluations on challenging video clips, with qualitative and quantitative comparisons against several state-of-the-art methods, demonstrate that the proposed approach works efficiently and robustly.
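The proximal operators underlying RPCA-style regression are standard: soft thresholding for the ℓ1 (sparse) penalty and singular value thresholding for the nuclear-norm (low-rank) penalty. The sketch below shows these generic building blocks only, not the paper's specific proximal norms or solver.

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: shrink each entry toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding, the proximal operator of the nuclear
    norm: soft-threshold the singular values, zeroing the small ones."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt
```

An RPCA iteration alternates these two operators to split a data matrix into a low-rank background term and a sparse foreground term.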
This paper proposes a finite adaptive neighborhood suppression algorithm based on singular value decomposition for small target detection in infrared imaging systems. The algorithm first performs singular value decomposition on the whole gray-level image and reconstructs the image from the larger singular values, suppressing noise and yielding an image matrix that retains mainly the weak target point and its possible neighborhood. Then, the pixels within a fixed neighborhood are divided into foreground and background, followed by contrast enhancement. Experimental results show that this method effectively preserves image details while achieving better background suppression.
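The truncated-SVD reconstruction step can be sketched directly: keep the k largest singular values and rebuild the image, discarding the small singular values that carry mostly noise. This is the generic technique only; the paper's choice of k and its neighborhood segmentation are not reproduced here.

```python
import numpy as np

def svd_denoise(img, k):
    """Reconstruct the image from its k largest singular values."""
    U, s, Vt = np.linalg.svd(np.asarray(img, dtype=float),
                             full_matrices=False)
    s[k:] = 0.0                      # drop the small, noise-dominated values
    return U @ np.diag(s) @ Vt
```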
Moving small target detection in infrared images is a crucial technique in infrared search and track systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and sparse image representation. First, we exploit the Fourier spectrum image and the magnitude spectrum of the Fourier transform to roughly extract salient regions, and a threshold segmentation step separates the salient regions from the background, yielding a binary image. Second, a new patch-image model and an over-complete dictionary are introduced into the detection system, converting infrared small target detection into an optimization problem of reconstructing patch-image information through sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch image, which contains the salient-region information, and then use the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets; the coefficients of a target image patch satisfy the sparsity condition. Finally, for image sequences, the Euclidean distance between target positions in adjacent frames is used to reduce the false-alarm rate and increase the detection accuracy for moving small targets in infrared images.
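Sparse reconstruction against an over-complete dictionary is commonly solved with a greedy pursuit. The sketch below is a minimal orthogonal matching pursuit, shown only to illustrate the kind of sparse coding involved; the paper's dictionary construction and stopping rule are not specified here, so `D` and the sparsity `k` are assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    most correlated with the residual, then refit on the support."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```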
Constant false alarm rate (CFAR) detection is a key technology in infrared dim-small target detection systems. The traditional CFAR detection algorithm estimates a probability density distribution from the pixel information of each area of the whole image and calculates each area's target segmentation threshold from the CFAR formula; as a result, the probability distribution statistics are difficult to gather, the computational load is large, and the delay time is long. To solve these problems effectively, a CFAR formula based on the distribution of target coordinates is presented. First, this paper improves the traditional CFAR formula, which is based only on the single grayscale distribution, by introducing the statistical distribution features of the targets. False alarms can then be controlled more accurately according to the target distribution information, and the high false-alarm rate caused by complex local backgrounds, such as cloud reflections and ground clutter interference, is suppressed. At the same time, to reduce the computational load and improve the real-time performance of the algorithm, the CFAR statistical area is divided adaptively through the two-dimensional probability density distribution of the target number, which differs from the usual methods of identifying the CFAR statistical area. Finally, the target segmentation threshold of the next frame is calculated by iteration from the target distribution probability density function over the image sequence, which controls the false-alarm rate until it falls to the specified upper limit. The experimental results show that the proposed method significantly reduces the operation time and meets real-time requirements while maintaining target detection performance.
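For contrast with the coordinate-distribution formula above, the traditional CFAR baseline it improves on can be sketched as a textbook one-dimensional cell-averaging CFAR: each cell is compared against a threshold scaled from the mean of its training cells, with guard cells excluded. This is the generic baseline only, not the paper's method; the window sizes and scale factor are arbitrary example values.

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR on a 1D signal: detect x[i] if it exceeds
    scale times the mean of the surrounding training cells."""
    n = len(x)
    det = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training cells: the window minus the guard region around i.
        cells = np.r_[x[lo:max(0, i - guard)], x[min(n, i + guard + 1):hi]]
        if cells.size and x[i] > scale * cells.mean():
            det[i] = True
    return det
```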
In the analysis of neural cell images acquired with an optical microscope, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the Support Vector Machine (SVM) is proposed to reduce the adverse impact of the low contrast between objects and background, interference from adherent and clustered cells, and similar effects. First, morphological filtering and the Otsu method are applied to preprocess the images and extract the neural cells roughly. Second, the stellate vector, circularity, and Histogram of Oriented Gradients (HOG) features are computed to train the SVM model. Finally, an incremental-learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas identified by the classifier are added to the library as positive samples for retraining the SVM model. Experimental results show that the proposed algorithm achieves much better segmentation results than classic segmentation algorithms.
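Of the features listed, circularity has a simple closed form worth noting: 4πA/P², which equals 1 for a perfect circle and decreases for irregular shapes. A minimal sketch (the exact feature definitions used in the paper are not reproduced here):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: 1.0 for a perfect circle, < 1 for irregular shapes,
    making round cell bodies separable from ragged clutter."""
    return 4.0 * math.pi * area / (perimeter ** 2)
```

For example, a square (area s², perimeter 4s) scores π/4 ≈ 0.785, noticeably below a circle's 1.0.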
Considering the complex features of public places, such as massive passenger flow, congestion, and disorder, it is hard to count the number of passengers precisely. In this paper, a passenger counting method based on range images is proposed. The system uses a Kinect sensor to acquire 3D depth information. First, the range image is smoothed with the mean-shift algorithm, which moves every local pixel toward the point of maximal probability density, so that the smoothed range image is better suited to subsequent processing. Second, a classical dynamic threshold segmentation method is applied to segment the head regions, and the 3D characteristics of the heads are analyzed: they are differentiated by pixel width, area, and circle-like shape, which efficiently surpasses the limits of 2D images. In addition, a self-adaptive multi-window tracing method is applied to predict the possible trajectories, speeds, and positions of the windows, in which tracing chains of multiple targets are established and the traced targets are locked precisely. This method proves efficient for background noise removal and environmental disturbance suppression and can be applied to identifying and counting heads in public places.
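The head-segmentation-and-counting stage can be sketched as a depth threshold followed by connected-component counting with an area filter. This is a simplified stand-in for the dynamic threshold and 3D shape analysis described above; the depth band, the 4-connectivity, and the minimum-area value are illustrative assumptions.

```python
def count_heads(depth, head_min, head_max, min_area=4):
    """Count blobs whose range values fall in [head_min, head_max]
    (heads are the nearest surfaces to an overhead sensor), keeping
    only 4-connected components of at least min_area pixels."""
    h, w = len(depth), len(depth[0])
    mask = [[head_min <= depth[y][x] <= head_max for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill the 4-connected component and measure it.
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

The area filter plays the role of the width/area checks above, rejecting isolated noise pixels that fall into the head depth band.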