Open Access
Zernike moment invariants for hand vein pattern description from raw biometric data
Abstract

We propose an invariant description method based on Zernike moments to classify hand vein patterns from raw infrared (IR) images. Orthogonal moments provide linearly independent descriptors and are invariant to affine transformations, such as translation, rotation, and scaling. A mathematical expression is given to derive a set of moment invariants. The obtained features have all the properties of moment invariants with the additional feature of image contrast invariance. For dorsal hand vein pattern acquisition, an IR imaging system is implemented. Also, a public database is used for a palm vein recognition task. A correct rate classification (CRC) above 99.9% is achieved using a set of rotation, scale, and intensity Zernike moment invariants. Additionally, multilayer perceptron and K-nearest neighbors classifiers are used, with the normalized Zernike moments as input data. A discriminative feature evaluation of the image moments allows the reduction of the number of descriptors while maintaining a high classification rate of 99%. The efficiency of the moment descriptors is evaluated in terms of accuracy and reduced computational cost by (a) avoiding the necessity of a preprocessing stage and (b) reducing the feature vector dimension. Experimental results show that Zernike moment invariants are able to achieve hand vein recognition without image preprocessing or image normalization with respect to changes of size, rotation, and intensity.

1.

Introduction

Biometric technology has been used in the accurate determination of an individual’s identity based on physical, chemical, or behavioral attributes.1 Human identification through hand vein patterns is a technique that appeared in 1990 and has been studied since then by several researchers. Typically, the vein pattern images used for people recognition include zones of interest like fingers,2–4 palmar region,5–7 dorsum of hand,8–11 forearm, and wrist.12,13 Most of the hand vein recognition systems4,6,8–14 require four steps: (1) image acquisition, (2) preprocessing of digital images that define the region of interest (RoI), (3) feature extraction of hand pattern, and (4) classification/matching, as shown in Fig. 1.

Fig. 1

Flowchart of a hand vein recognition system: traditional approach (top stream) and the proposed approach (bottom stream). As the sensed image D(f(x,y)) is a degraded version of the original scene f(x,y), the invariant vector (\psi_1, \psi_2, \ldots, \psi_\chi) satisfies |\psi(f) - \psi(D(f(x,y)))| \approx 0.


To extract vein features, reference points15 such as bifurcations and end-points of veins have been computed from a segmented and improved image using morphological operators and contrast enhancement techniques, respectively. Under ideal imaging conditions and preprocessing, reference points can easily be extracted from the image skeleton. However, image skeletons extracted from vein images are often unstable because the raw vein images suffer from low contrast. Feature extraction methods like the histogram of oriented gradients16 and the scale-invariant feature transform17 are often used as descriptors of orientation, scale, and intensity for vein patterns. However, they are not robust to noise and are only partially invariant to translation, rotation, scale, and intensity (TRSI). Also, the descriptor vectors are large and of variable size, which complicates classification. Moreover, both techniques require high computational time.18 On the other hand, the local binary pattern (LBP)14,19 algorithm has been used for vein recognition, but whenever there are spatial and contrast changes during image acquisition, the performance of this description technique decreases.19

Moment invariants also have been implemented for vein pattern description. Xueyan et al.2 extract finger-vein pattern features with modified Hu moment invariants, which are computed from reconstructed images by dyadic wavelet transform. Li et al.3 use Zernike moments to describe shape features of preprocessed finger-vein images. In these last works, a preprocessing stage is carried out to deal with spatial distortions and contrast changes in the input images. These procedures can be time consuming and require computing resources during the image geometric corrections related to scale, translation, rotation, and the radiometric normalization.

In this paper, we describe the hand vein pattern images by a set of feature invariants to TRSI transformations. Zernike orthogonal moments defined in polar coordinates20 are used for invariant feature extraction from raw biometric data, following the bottom stream shown in Fig. 1. The Zernike moment technique achieves high accuracy at low cost because it does not require a preprocessing stage. The main advantage of this approach is that vein features based on Zernike orthogonal moments carry a minimum amount of redundant information, are invariant to spatial and radiometric transformations, and are robust to noise.21

In this work, each input raw biometric image is described by a pattern vector (\psi_1, \psi_2, \ldots, \psi_\chi) obtained from the selected TRSI moment invariants. The classification step is done in the feature space. This results in a stable CRC curve. Furthermore, four different types of classifiers are used: K-nearest neighbors (KNN), multilayer perceptron (MP),22 Bayesian (BN),22 and naive Bayesian (NB) networks.22 These classifiers have been shown to perform well, obtaining a CRC over 99%.

In this paper, sections are organized as follows: Sec. 2 shows a scheme of the implemented infrared (IR) imaging system for vein pattern image acquisition. The public database used is also described. Section 3 defines a set of TRSI invariant descriptors based on Zernike orthogonal moments. Experimental results of four different classifiers that use a discriminative metric to select invariant descriptors are given in Sec. 4. Finally, the conclusions are presented in Sec. 5.

2.

Infrared Imaging System

2.1.

Home Database

The hand vein pattern is an interconnected network of blood vessels located underneath human skin. The vein pattern structure is located at approximately 2.5 to 3.0 mm in the subcutaneous layer. From 700 to 900 nm, IR light can penetrate the skin deeply, reaching the blood vessels located in subcutaneous tissue.23,24 Vein detection through near-IR (NIR) light is based on the absorption of IR radiation by principal blood components like oxyhemoglobin, deoxyhemoglobin, and water.24–26 Under IR illumination, we obtain an image in which veins appear darker than the surrounding tissue.

For vein pattern image acquisition, a JAI progressive scanning multispectral 2CCD camera was used, which can capture information in visible and IR channels simultaneously by means of a dichroic prism along the same optical axis.27 The visible and NIR sensors each measure 4.76 mm × 3.57 mm. The spatial resolution of acquired images is 1024 × 768 pixels. Wavelengths for the visible channels are approximately 450, 550, and 630 nm, whereas the IR channel is around 880 nm. The illumination source has a maximum emission peak of about 880 nm; this IR source contains 60 light-emitting diodes distributed in a concentric circle. Figure 2 shows the implemented image acquisition system.28 The field of view of the camera is \beta = 2\arctan[h_s/(2 f_l)] = 16.38 deg, where h_o = 20 cm and h_s = 3.57 mm denote the target and sensor sizes, respectively, s_0 = 53 cm indicates the distance between target and camera lens, and f_l = 25 mm represents the focal length.

Fig. 2

Infrared hand vein acquisition system.


For the home database, volunteers were informed how to place their hands on the base in front of a uniform-colored background so that their knuckles coincided with the edge of the base. During image capture, we allowed a certain degree of variation in hand pose. This was done in order to increase intraclass diversity and simulate a real application environment.

The UPT database consists of 576 vascular pattern images obtained from 36 volunteers (19 females and 17 males, aged 20 to 25), from which 8 NIR images of each hand were acquired. Because the vein pattern of the right hand differs from that of the left hand, each hand was treated as a different subject;29 therefore, the number of subjects is 72.

2.2.

PolyU Multispectral Palmprint Database

In order to evaluate the feature extraction algorithms, we use the PolyU Multispectral Palmprint Database (PUMPD) from the Biometric Research Center of the Hong Kong Polytechnic University. The database consists of 6000 vascular pattern images obtained from 250 volunteers (55 females and 195 males), from which 24 images of both hands of each subject were acquired in four channels (red, green, blue, and NIR).30 Again, because the vein patterns of the right and left hands are different, the number of subjects is 500 from 250 volunteers. Some images from the PolyU database are shown in Fig. 3.

Fig. 3

Images of subject 9 from PUMPD in different channels: (a) red, (b) green, (c) blue, and (d) NIR.


3.

TRSI Zernike Moment Invariants

Image representation through characteristic descriptors is the main objective of this section. Moment invariants are widely used in pattern recognition because they can effectively characterize an image in a general way through a small set of moments31,32 and are invariant to the most common affine TRS transformations that an image undergoes. Additionally, orthogonal moments are robust to noise.21 The invariant moments proposed in this work are based on Zernike polynomials.

3.1.

Affine Transformations

Imaging conditions can cause the vein pattern image to change. According to Flusser et al., imaging conditions are commonly imperfect, so the observed image represents a degraded version of the original scene.21 Degradations in the digital image can be radiometric and/or geometric. A common geometric spatial transform is the affine transformation, which can be represented by means of the following transformation matrix:33

Eq. (1)

g(x',y') = k \, f(x,y),

Eq. (2)

\begin{pmatrix} x' \\ y' \end{pmatrix} = \left[ \begin{pmatrix} c_x & 0 \\ 0 & c_y \end{pmatrix} \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} \right] \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix},
where k is the intensity factor, c_x and c_y are the scaling factors along the horizontal and vertical axes, t_x and t_y are the horizontal and vertical translations, and \alpha is the rotation angle. Pixel coordinates in the input image and its corresponding transformed image are, respectively, (x,y) and (x',y').
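To make the degradation model concrete, the following minimal Python sketch applies Eqs. (1) and (2) to an image with scipy.ndimage. The helper name and the bilinear interpolation order are our choices, and axis conventions (row/column versus x/y) are glossed over for brevity; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def degrade(f, alpha_deg, cx, cy, tx, ty, k):
    """Apply the affine + intensity degradation of Eqs. (1)-(2):
    scale by (cx, cy), rotate by alpha, translate by (tx, ty),
    then multiply all intensities by k."""
    alpha = np.deg2rad(alpha_deg)
    # Scale-rotation matrix of Eq. (2).
    A = np.array([[cx, 0.0], [0.0, cy]]) @ np.array(
        [[np.cos(alpha), -np.sin(alpha)],
         [np.sin(alpha),  np.cos(alpha)]])
    # ndimage.affine_transform maps output coordinates back to input
    # coordinates, so the inverse matrix and offset are supplied.
    A_inv = np.linalg.inv(A)
    g = ndimage.affine_transform(f, A_inv,
                                 offset=-A_inv @ np.array([tx, ty]),
                                 order=1)
    return k * g  # intensity change of Eq. (1)
```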

3.2.

Zernike Orthogonal Moments on a Unitary Disk

Let f(r_{i,j}, \theta_{i,j}) be an M \times N gray-level image defined in discrete polar coordinates: r_{i,j} = \sqrt{x_j^2 + y_i^2} and \theta_{i,j} = \arctan(y_i/x_j), for x_j = a + j(b-a)/(N-1), y_i = b - i(b-a)/(M-1), i = 0, \ldots, M-1, and j = 0, \ldots, N-1. Parameters a and b are real numbers and take values according to a suitable domain inside (or outside) the unit circle |r| \le 1.34

The 2-D discrete Zernike moments of radial order n and angular repetition l are as follows:20

Eq. (3)

Z_{n,l} = \frac{n+1}{\pi} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} f(r_{i,j}, \theta_{i,j}) \, R_{n,l}(r_{i,j}) \, e^{-\sqrt{-1}\, l \, \theta_{i,j}},
with |l| \le n and n - |l| being even. Here, R_{n,l}(r_{i,j}) is the discrete real-valued radial polynomial given by

Eq. (4)

R_{n,l}(r) = \sum_{s=0}^{(n-|l|)/2} (-1)^s \frac{(n-s)!}{s! \left( \frac{n+|l|}{2} - s \right)! \left( \frac{n-|l|}{2} - s \right)!} \, r^{n-2s}.
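As a concrete reference, the following Python sketch evaluates Eqs. (3) and (4) on a raw image with NumPy. The function names are ours, the polar grid follows the discrete definitions above with the assumed values a = -1 and b = 1, arctan2 replaces arctan(y/x) to resolve the quadrant, and pixels falling outside the unit disk are discarded; these are implementation choices for illustration only.

```python
import numpy as np
from math import factorial

def radial_poly(n, l, r):
    """Discrete real-valued radial polynomial R_{n,l}(r) of Eq. (4)."""
    l = abs(l)
    R = np.zeros_like(r)
    for s in range((n - l) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + l) // 2 - s)
                * factorial((n - l) // 2 - s)))
        R += c * r ** (n - 2 * s)
    return R

def zernike_moment(f, n, l, a=-1.0, b=1.0):
    """Discrete Zernike moment Z_{n,l} of an MxN image f, Eq. (3).
    The pixel grid is mapped into [a, b] so that the image support
    lies on the unit disk; pixels with r > 1 are masked out."""
    M, N = f.shape
    x = a + np.arange(N) * (b - a) / (N - 1)
    y = b - np.arange(M) * (b - a) / (M - 1)
    X, Y = np.meshgrid(x, y)
    r = np.hypot(X, Y)
    theta = np.arctan2(Y, X)
    mask = r <= 1.0
    kernel = radial_poly(n, l, r) * np.exp(-1j * l * theta)
    return (n + 1) / np.pi * np.sum(f[mask] * kernel[mask])
```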

The number of Zernike moments can be computed using the following expression given by35

Eq. (5)

\chi = \sum_{n=0}^{Max} \left\lfloor \frac{n}{2} + 1 \right\rfloor,
where Max represents the highest order of the Zernike moments.
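A one-line check of Eq. (5), with the floor implemented by integer division; the counts match the \chi values reported later in Sec. 4 (\chi = 16 for Max = 6 and \chi = 100 for Max = 18):

```python
def num_invariants(max_order):
    """chi of Eq. (5): sum of floor(n/2 + 1) for n = 0..Max."""
    return sum(n // 2 + 1 for n in range(max_order + 1))

# Consistent with Table 2: chi(6) = 16 and chi(18) = 100.
assert num_invariants(6) == 16 and num_invariants(18) == 100
```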

3.2.1.

TRSI invariant descriptors based on Zernike moments

A set of Zernike moment invariants is given as follows:

  • For the translation-invariant description, let the origin of the coordinate system be located at the image centroid (x_c = m_{1,0}/m_{0,0}, y_c = m_{0,1}/m_{0,0}), which can be calculated from the zero-order geometric moment m_{0,0} of a binary image and the first-order geometric moments m_{1,0} and m_{0,1}.

  • If an image object f(r,\theta) is rotated as f(r,\theta-\alpha), where \alpha is the rotation angle, its corresponding moments are Z^{R}_{n,l}(f) = Z_{n,l}(f) \exp(-\sqrt{-1}\, l \alpha). The magnitude-based method36 is used for rotation invariance, for which |Z^{R}_{n,l}(f)| = |Z_{n,l}(f)|.

  • If an image object f(r,\theta) is scaled as f(r/c,\theta), the scaling factor c can be computed using c = \sqrt{m_{0,0}(f')/m_{0,0}(f)}. Letting n = l + \ell in Eq. (3), the invariants to image rotation and scaling are37

    Eq. (6)

    \psi_{l+\ell,l}(f) = \sum_{t=0}^{\ell} \frac{l+\ell+1}{l+t+1} \left( \sum_{\kappa=t}^{\ell} (\Gamma_f)^{-(l+\kappa+2)/2} \, C^{l}_{\ell,\kappa} \cdot D^{l}_{\kappa,t} \right) \cdot |Z_{l+t,l}(f)|,
    for \Gamma_f = |Z_{0,0}(f)|, C^{l}_{\ell,\kappa} = (-1)^{\kappa} \frac{(2l+\ell+1+\kappa)!}{\kappa! \, (\ell-\kappa)! \, (2l+1+\kappa)!}, and D^{l}_{\kappa,t} = \frac{(2l+2t+2) \, \kappa! \, (2l+\kappa+1)!}{(\kappa-t)! \, (2l+\kappa+t+2)!}, with 0 \le t \le \kappa \le \ell.

  • If the intensity distribution of an image f(r,\theta) is changed as k f(r,\theta), the intensity factor k can be obtained using k = \frac{1}{c^2} \frac{Z_{0,0}(f')}{Z_{0,0}(f)}, with m_{0,0} = \pi Z_{0,0}.38

    If \ell = 0 and l = 1, 2, 3, \ldots, then the proposed n = l TRSI Zernike moment invariants (implemented in the sketch after this list) are given by

    Eq. (7)

    \tilde{\psi}_{l,l}(f) = k^{l/2} \cdot \frac{|Z_{l,l}(f)|}{|Z_{0,0}(f)|^{1+l/2}}.
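A minimal sketch of Eq. (7), reusing the zernike_moment helper defined earlier. The intensity factor k defaults to 1, which assumes no contrast change relative to a reference image; estimating k from the Z_{0,0} ratio above is left to the caller.

```python
def trsi_invariant(f, l, k=1.0):
    """TRSI Zernike moment invariant psi~_{l,l} of Eq. (7) with n = l.
    k is the intensity factor of Eq. (1); it defaults to 1 (no contrast
    change assumed) and can be estimated from Z_{0,0} ratios otherwise."""
    Z_ll = zernike_moment(f, l, l)   # helper from the earlier sketch
    Z_00 = zernike_moment(f, 0, 0)
    return k ** (l / 2) * abs(Z_ll) / abs(Z_00) ** (1 + l / 2)
```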

3.3.

Numerical Experiments

In this subsection, the TRSI Zernike moment invariants are validated using a set of artificially distorted images, shown in Fig. 4. The values of the moment invariants were computed for each of these images using Eq. (7), and the logarithm of the values was taken to reduce the dynamic range. The TRSI moment invariants of the i = 1, \ldots, 10 distorted images of Fig. 4 are given in Table 1 and graphed in Fig. 5.
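The experiment can be reproduced in outline with the sketches above; here scipy.ndimage supplies the rotation and contrast distortions, and the invariants of the reference and distorted images are compared. The synthetic test image, the 40 deg angle, and the factor k = 1.2 are illustrative choices, and small discrepancies from interpolation and corner cropping are expected, as in Table 1.

```python
import numpy as np
from scipy import ndimage

# Smooth synthetic test image (an off-center Gaussian blob) standing
# in for one of the Fig. 4 test images.
yy, xx = np.mgrid[0:128, 0:128]
f = np.exp(-((xx - 80) ** 2 + (yy - 56) ** 2) / (2 * 12.0 ** 2))

# Rotated and contrast-scaled version (alpha = 40 deg, c = 1, k = 1.2).
g = 1.2 * ndimage.rotate(f, 40.0, reshape=False, order=1)

for l in range(1, 5):
    v_ref = np.log(trsi_invariant(f, l))          # reference invariant
    v_dst = np.log(trsi_invariant(g, l, k=1.2))   # distorted-image invariant
    print(f"l={l}: {v_ref:.3f} vs {v_dst:.3f}")   # should nearly coincide
```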

Fig. 4

Images used to demonstrate the invariant properties of descriptors.


Table 1

TRSI moment invariants for the i=1,…,10 images of Fig. 4.

Sample image | α (deg) | c   | k   | ψ̃_{1,1} | ψ̃_{2,2} | ψ̃_{3,3} | ψ̃_{4,4} | ψ̃_{5,5} | ψ̃_{6,6} | ψ̃_{7,7} | ψ̃_{8,8} | ψ̃_{9,9} | ψ̃_{10,10}
1            | 0       | 1.0 | 1.0 | 6.85 | 11.99 | 19.38 | 23.95 | 29.87 | 35.39 | 41.15 | 46.75 | 52.83 | 58.49
2            | 40      | 1.0 | 1.0 | 6.85 | 11.98 | 19.37 | 23.95 | 29.86 | 35.39 | 41.14 | 46.75 | 52.83 | 58.50
3            | 240     | 0.9 | 1.2 | 6.97 | 11.98 | 19.33 | 23.93 | 29.88 | 35.42 | 41.18 | 46.79 | 52.87 | 58.51
4            | 0       | 0.7 | 1.0 | 6.85 | 12.00 | 19.43 | 23.97 | 29.90 | 35.43 | 41.20 | 46.80 | 52.89 | 58.56
5            | 280     | 0.8 | 1.0 | 6.85 | 12.00 | 19.41 | 23.96 | 29.89 | 35.41 | 41.18 | 46.78 | 52.88 | 58.54
6            | 200     | 1.0 | 1.0 | 6.85 | 11.99 | 19.38 | 23.95 | 29.87 | 35.39 | 41.15 | 46.75 | 52.83 | 58.49
7            | 120     | 1.1 | 1.0 | 6.85 | 11.98 | 19.38 | 23.94 | 29.86 | 35.40 | 41.15 | 46.75 | 52.83 | 58.49
8            | 160     | 1.2 | 0.8 | 6.91 | 11.94 | 19.17 | 23.89 | 29.81 | 35.32 | 41.09 | 46.64 | 52.75 | 58.32
9            | 0       | 1.3 | 1.0 | 6.85 | 11.98 | 19.36 | 23.94 | 29.87 | 35.39 | 41.15 | 46.75 | 52.82 | 58.47
10           | 0       | 1.0 | 1.1 | 6.83 | 11.99 | 19.47 | 23.95 | 29.88 | 35.41 | 41.14 | 46.76 | 52.82 | 58.51
σ            |         |     |     | 0.0420 | 0.0170 | 0.0797 | 0.0216 | 0.0242 | 0.0299 | 0.0298 | 0.0437 | 0.0395 | 0.0646

Fig. 5

TRS and TRSI Zernike moment invariants computed for each of the i = 1, \ldots, 10 distorted images. The proposed features have all the properties of moment invariants along with the additional feature of image contrast invariance.


An image that undergoes uniform contrast variation k, like those shown in Figs. 6(a)–6(c), can be represented equivalently by scaling of the intensity function.38 Figures 6(d)–6(f) exemplify the process of intensity normalization using the factor k.

Fig. 6

(a)–(c) Input images from CASIA database. (d)–(f) Intensity normalized images of (a)–(c), respectively.


However, in this work, the k factor is used to normalize the descriptors in intensity but not to normalize the raw biometric data. The normalization factor k is used in Eq. (7).

4.

Experimental Results

The classification stage is carried out in the obtained space of descriptors using the feature extraction techniques previously described in Sec. 3. During this stage, the input images are transformed from raw biometric data to Zernike moments. Afterward, by means of Eqs. (6) and (7), a set of descriptors is obtained. This converts the image of M \times N pixel values into a pattern vector composed of the first \chi TRSI Zernike moment invariants. This method was applied to the PUMPD and to our home database. A 3-D space of descriptors based on TRSI Zernike moment invariants is shown in Fig. 7(a).
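A pattern vector for one raw image can then be assembled from the invariants; the sketch below covers only the n = l subset of Eq. (7), and the enumeration by increasing l is our assumption (the full \chi-dimensional vector also includes the Eq. (6) invariants).

```python
import numpy as np

def pattern_vector(f, max_l=10):
    """Stack the invariants psi~_{l,l} of Eq. (7) into one feature
    vector for classification (uses trsi_invariant defined earlier)."""
    return np.array([trsi_invariant(f, l) for l in range(1, max_l + 1)])
```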

Fig. 7

Three-dimensional feature space with six vein pattern classes and eight sample images from our home database. (a) Normalized Zernike moment invariants to affine transformations and illumination changes. (b) Reference image and its distorted versions due to (c) intensity, (d) rotation, and (e) scale changes. Moreover, it is important to note that some images like (d) are distorted geometrically by vertical shearing. The images from (b) to (e) correspond to subject F.


In spite of some images including extra information about the hand, such as parts of the thumb, wrist, or scars, it can be seen that each class forms a cluster because the Zernike moments are invariant to affine transformations and illumination changes. Some points in the graph are slightly scattered from their respective clusters. This dispersion occurs because the input images suffer perspective deformations due to a nonperpendicular view (for example, shearing) during image acquisition.

As we can see, similar samples are grouped in close proximity to each other, and nearly identical or identical samples are placed in the same cluster.

4.1.

Discriminative Feature Selection Algorithm

Since feature selection is essential to building a functional neural network, a discriminative metric is implemented.39 It evaluates the effectiveness of a given moment invariant by means of the following formula:40

Eq. (8)

Q(|\tilde{\psi}_{n,l}|, S_i, S_j) = \frac{\eta \left[ \sigma(S_i, |\tilde{\psi}_{n,l}|) + \sigma(S_j, |\tilde{\psi}_{n,l}|) \right]}{\left| m(S_i, |\tilde{\psi}_{n,l}|) - m(S_j, |\tilde{\psi}_{n,l}|) \right|},
where \sigma(S_i, |\tilde{\psi}_{n,l}|) and m(S_i, |\tilde{\psi}_{n,l}|) are the standard deviation and the mean of each invariant feature, respectively, and \eta = 3.0. S_i and S_j are the i’th and j’th classes, and \tilde{\psi}_{n,l} are the orthogonal moment invariants.
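A direct transcription of Eq. (8), written for the samples of one invariant over two classes. Treating lower Q as more discriminative (within-class spread small relative to the between-class mean separation) is our reading of the reconstructed formula.

```python
import numpy as np

def discriminative_q(samples_i, samples_j, eta=3.0):
    """Q of Eq. (8) for one invariant |psi~_{n,l}|: within-class spread
    weighted by eta, divided by the between-class mean separation."""
    num = eta * (np.std(samples_i) + np.std(samples_j))
    den = abs(np.mean(samples_i) - np.mean(samples_j))
    return num / den
```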

4.2.

Correct Rate Classification Using TRSI Zernike Moment Invariants

In this work, we use the WEKA software, which is commonly used as a test platform to measure the classification capacity of several well-known pattern recognition models, such as MP, BN, NB, and KNN.22 All of the percentages reported in this work were calculated through cross-validation. From this standpoint, an experimental comparison assesses the ability of the TRSI Zernike moment invariants of Eqs. (6) and (7) for vein pattern recognition using raw biometric data.
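The classification protocol can be mirrored outside WEKA; the sketch below runs cross-validation for KNN and MP with scikit-learn. The file names, the fold count, and the classifier hyperparameters are assumptions, since the paper reports WEKA's setup rather than these values.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# X: (n_samples, chi) TRSI invariant vectors; y: subject labels.
X = np.load("invariants.npy")   # hypothetical precomputed features
y = np.load("labels.npy")

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=1)),
                  ("MP", MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000))]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV (assumed)
    print(f"{name}: CRC = {100 * scores.mean():.2f}%")
```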

4.2.1.

Set 1: PUMPD database

Let w = (w_1, w_2, \ldots, w_W) with W = 500 pattern classes of the PUMPD public database. For each class w_k, there are 12 acquired versions in the testing dataset. The \chi-dimensional pattern vector \tilde{\psi} = (\tilde{\psi}_1, \tilde{\psi}_2, \ldots, \tilde{\psi}_\chi) is based on the TRSI Zernike moment invariants of Eqs. (6) and (7) for maximum order n_{max} = l_{max}. The classification results for order n_{max} = 18 with \chi = 100 TRSI Zernike moment invariants are shown in Fig. 8(a). Furthermore, the CRC percentage is plotted against the number of descriptors used in the classification stage, using MP as a classifier, in Fig. 8(b).

Fig. 8

Classification results using TRSI Zernike moment invariants. (a) Performance of the classification method for order n_{max} = 18 and (b) comparison of different orders using an MP.


From Table 2, it is clear that the MP classifier achieves a CRC above 99% using as few as \chi = 16 invariant descriptors.

Table 2

CRC classification results above 99% for the PUMPD database.

n_max | χ(n_max) Zernike moment invariants | KNN   | BN    | NB    | MP
6     | 16                                 | 98.97 | 97.57 | 98.37 | 99.35
8     | 25                                 | 99.42 | 98.92 | 99.02 | 99.52
10    | 36                                 | 99.50 | 99.13 | 99.03 | 99.57
12    | 49                                 | 99.48 | 99.28 | 99.00 | 99.52
14    | 64                                 | 99.58 | 99.30 | 98.85 | 99.55
16    | 81                                 | 99.97 | 99.30 | 98.93 | 99.50
18    | 100                                | 99.57 | 99.33 | 98.82 | 99.42
Note: Bold values in the original table correspond to the highest CRC obtained for each n_max.

The receiver operating characteristic (ROC) curves from the four tested models reach high performance; MP displays the best results, as shown in Fig. 9. The area under the ROC curve confirms that MP performs best, with an area of 0.9577, followed by KNN with 0.9523, NB with 0.9457, and finally BN with 0.9338.

Fig. 9

Comparison of ROC curves using four different classification algorithms.


Using the discriminative feature metric of Eq. (8), a set of \chi_{selected} TRSI Zernike moment invariants was selected for the classification task. The results are shown in Fig. 10; by selecting Zernike descriptors (\chi = 100 and \chi_{selected} = 46), the input data to the classifiers are reduced by 54%. In this case, the CRC drops by less than 1%.
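One way to realize this selection is sketched below, under the assumption that per-feature Q scores of Eq. (8) are accumulated over all class pairs and the smallest-Q features are kept; the paper states only that Eq. (8) drives the selection, so the aggregation rule here is ours.

```python
import numpy as np
from itertools import combinations

def select_invariants(X, y, chi_selected=46, eta=3.0):
    """Keep the chi_selected most discriminative columns of X by
    accumulating the metric Q of Eq. (8) over all pairs of classes."""
    q = np.zeros(X.shape[1])
    for ci, cj in combinations(np.unique(y), 2):
        Xi, Xj = X[y == ci], X[y == cj]
        num = eta * (Xi.std(axis=0) + Xj.std(axis=0))
        den = np.abs(Xi.mean(axis=0) - Xj.mean(axis=0)) + 1e-12
        q += num / den
    keep = np.argsort(q)[:chi_selected]   # smallest Q = most discriminative
    return X[:, keep], keep
```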

Fig. 10

CRC percentage using a complete set (C) with \chi = 100 and a reduced set (R) with \chi_{selected} = 46 of TRSI Zernike moment invariants.


Since the first stage of a recognition system traditionally includes image processing methods that improve information about the potential objects of interest in the scene, most of the papers in Table 3 use this procedure to enhance and normalize the original input images. Conversely, the proposed method analyzes the parametric space of geometric and radiometric image degradations. This method excludes contrast enhancement, the RoI extraction stage, and image normalization. Moreover, our method is robust to noise and uses a minimal number of descriptors (\chi = 16) to obtain a CRC above 99%.

Table 3

CRC classification results for the PUMPD database.

Reference | Preprocessing | Feature extraction | CRC
Cao et al.41 | RoI extraction; contrast enhancement; multiscale Gaussian matched filter; binarization; noise reduction | Thinning algorithm | Matching score = 99.50%
Al-Juboori et al.42 | Enhancement filter | Wavelet transform; locality preserving projections (LPP); LBP variance (LBPV) | Euclidean matching = 99.86%
Gumaei et al.43 | RoI extraction; whitening filter and contrast normalization | Normalized Gist-based feature extraction and feature reduction using autoencoder | Regularized extreme learning machine = 99.83%
Zhang et al.44 | RoI extraction | Palmprint feature extraction by texture coding; palm vein feature extraction by matched filters; postprocessing operations to remove some small regions | Score level fusion = 99.69%
Zhang et al.45 | Visual Geometry Group model F (VGG-F) | Convolutional neural networks (CNN) and vector of locally aggregated descriptors (VLAD) | Equal error rate weighted fusion = 100%
Proposed approach | Any | TRSI Zernike moment invariants | KNN = 99.97%; BN = 99.33%; NB = 99.03%; MP = 99.57%

4.2.2.

Set 2: UPT home database

Let w = (w_1, w_2, \ldots, w_W) with W = 72 pattern classes of the UPT home database. For each class w_k, there are eight acquired versions in the testing dataset. The \chi-dimensional pattern vector \tilde{\psi} = (\tilde{\psi}_1, \tilde{\psi}_2, \ldots, \tilde{\psi}_\chi) is based on the TRSI Zernike moment invariants of Eqs. (6) and (7) for maximum order n = l = 4. Figure 11 shows images from the UPT database.

Fig. 11

Input images from subject 28 from our home database.


We can observe that, in addition to the geometric and radiometric distortions, the images suffer from perspective deformations due to a nonperpendicular view. Again, some images include extra information about the hand, such as parts of the thumb, wrist, or scars. In spite of that, Fig. 12(a) shows a CRC above 80% using only \chi = 9 TRSI moment invariants with order n = l = 4. An ROC curve using MP for the UPT database is shown in Fig. 12(b). The true-positive rate clearly exceeds the false-positive rate along the entire curve.

Fig. 12

(a) Classification results using TRSI Zernike moment invariants with order n = l = 4 and \chi = 9. (b) ROC curve using MP as a classifier.


Due to the image acquisition conditions (perspective variations and other alterations), this experiment did not reach higher performance rates; nevertheless, the area under the ROC curve is close to 0.7 (0.6783).

5.

Conclusions and Discussion

In practice, factors such as environmental conditions, nonuniform illumination, and hand pose affect the image acquisition stage and increase the presence of spatial distortions and contrast changes in the sensed image. It is well known that a traditional hand vein recognition system requires RoI extraction followed by data preprocessing like contrast enhancement, spatial filtering, binarization, mathematical morphology, and so on. Additionally, image normalization with respect to changes of size, translation, rotation, and intensity can be required.

In this paper, we describe all images by a set of normalized features that are invariant with respect to TRSI transformations. Numerical experiments were done using a set of artificially distorted images. Figure 5 and Table 1 show that the proposed TRSI Zernike moment invariants remain within a narrow range across the distorted versions. This means that the descriptors defined in Eq. (7) have all the properties of TRS Zernike moment invariants with the additional feature of image contrast invariance.

In this way, two experiments were carried out in order to evaluate the performance of the proposed TRSI Zernike moment invariants on hand vein images without any kind of preprocessing. For the PUMPD database (500 subjects with 12 versions of each subject), an optimized approach that selects \chi_{selected} = 46 TRSI moment invariants achieves a CRC of 99.52% using MP as a classifier, as can be seen in Fig. 10. The results obtained from real data show that the selected invariant features require a lower computational cost compared to the existing methods listed in Table 3.

On the other hand, for the UPT home database (72 subjects with eight versions of each subject), \chi_{selected} = 9 TRSI selected moment invariants achieve an 80% classification rate. In this case, in addition to the geometric and radiometric distortions, the images suffer from perspective deformations due to a nonperpendicular view during image acquisition and also include extra information about the hand, such as parts of the thumb, wrist, or scars. In Fig. 7, we can see that similar samples are grouped in close proximity to each other, and nearly identical or identical samples are placed in the same cluster. During the pattern classification process, the recognition system is able to handle changes in the dataset attributable to spatial distortions and extra information. However, because k-fold cross-validation is a demanding test for classification models and the UPT home database contains several distortions, the tested MP does not reach higher recognition rates. In future work, we propose to add shearing invariants to the TRSI Zernike moment invariants. In addition, more distorted samples for each vein pattern class can be added to the home database.

Acknowledgments

R.C.-O. thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT), award no. 436298. We extend our gratitude to the reviewers and Jennifer Speier for their useful suggestions.

References

1. A. K. Jain, P. Flynn, and A. A. Ross, Handbook of Biometrics, Springer, New York (2007).

2. L. Xueyan et al., “Vein pattern recognitions by moment invariants,” in 1st Int. Conf. Bioinf. and Biomed. Eng., 612–615 (2007). https://doi.org/10.1109/ICBBE.2007.160

3. J. Li et al., “Finger-vein recognition based on improved Zernike moment,” in Chin. Autom. Cong. (CAC), 2152–2157 (2017). https://doi.org/10.1109/CAC.2017.8243129

4. B. J. Kang et al., “Multimodal biometric method that combines veins, prints, and shape of a finger,” Opt. Eng., 50(1), 017201 (2011). https://doi.org/10.1117/1.3530023

5. G. S. Badrinath and P. Gupta, “Palmprint based recognition system using phase-difference information,” Future Gener. Comput. Syst., 28(1), 287–305 (2012). https://doi.org/10.1016/j.future.2010.11.029

6. J.-G. Wang et al., “Person recognition by fusing palmprint and palm vein images based on ‘Laplacianpalm’ representation,” Pattern Recognit., 41(5), 1514–1527 (2008). https://doi.org/10.1016/j.patcog.2007.10.021

7. K. Zhang, D. Huang, and D. Zhang, “An optimized palmprint recognition approach based on image sharpness,” Pattern Recognit. Lett., 85, 65–71 (2017). https://doi.org/10.1016/j.patrec.2016.11.014

8. Z. Honarpisheh and K. Faez, “An efficient dorsal hand vein recognition based on firefly algorithm,” Int. J. Electr. Comput. Eng., 3(1), 30–41 (2013). https://doi.org/10.11591/ijece.v3i1.1760

9. W. Kang, “Vein pattern extraction based on vectorgrams of maximal intra-neighbor difference,” Pattern Recognit. Lett., 33(14), 1916–1923 (2012). https://doi.org/10.1016/j.patrec.2012.02.020

10. L. Wang, G. Leedham, and D. S.-Y. Cho, “Minutiae feature analysis for infrared hand vein pattern biometrics,” Pattern Recognit., 41(3), 920–929 (2008). https://doi.org/10.1016/j.patcog.2007.07.012

11. A. M. Badawi, “Hand vein biometric verification prototype: a testing performance and patterns similarity,” in Int. Conf. Image Process., Comput. Vision and Pattern Recognit., 3–9 (2006).

12. Q. Zhao et al., “Design and implementation of a contactless multiple hand feature acquisition system,” Proc. SPIE, 8371, 83711Q (2012). https://doi.org/10.1117/12.919100

13. M. Heenaye and M. Khan, “A multimodal hand vein biometric based on score level fusion,” Procedia Eng., 41, 897–903 (2012). https://doi.org/10.1016/j.proeng.2012.07.260

14. M. Stanuch and A. Skalski, “Artificial database expansion based on hand position variability for palm vein biometric system,” in IEEE Int. Conf. Imaging Syst. and Techn. (IST), 1–6 (2018). https://doi.org/10.1109/IST.2018.8577123

15. Y.-P. Hu et al., “Hand vein recognition based on the connection lines of reference point and feature point,” Infrared Phys. Technol., 62, 110–114 (2014). https://doi.org/10.1016/j.infrared.2013.10.004

16. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in IEEE Comput. Soc. Conf. Comput. Vision and Pattern Recognit. (CVPR), 886–893 (2005). https://doi.org/10.1109/CVPR.2005.177

17. H.-G. Kim et al., “Illumination normalization for SIFT based finger vein authentication,” Lect. Notes Comput. Sci., 7432, 21–30 (2012). https://doi.org/10.1007/978-3-642-33191-6

18. Z. Wang et al., “Pedestrian detection using boosted HOG features,” in 11th Int. IEEE Conf. Intell. Transp. Syst., 1155–1160 (2008). https://doi.org/10.1109/ITSC.2008.4732553

19. B. C. Liu, S. J. Xie, and D. S. Park, “Finger vein recognition using optimal partitioning uniform rotation invariant LBP descriptor,” J. Electr. Comput. Eng., 2016, 1–10 (2016). https://doi.org/10.1155/2016/7965936

20. M. R. Teague, “Image analysis via the general theory of moments,” J. Opt. Soc. Am., 70(8), 920–930 (1980). https://doi.org/10.1364/JOSA.70.000920

21. J. Flusser, T. Suk, and B. Zitova, 2D and 3D Image Analysis by Moments, Wiley, Chichester (2016).

22. I. H. Witten et al., Data Mining: Practical Machine Learning Tools and Techniques, 4th ed., Morgan Kaufmann, San Francisco, California (2017).

23. P. R. Deepak et al., “Enhancement of vein patterns in hand image for biometric and biomedical application using various image enhancement techniques,” Procedia Eng., 38, 1174–1185 (2012). https://doi.org/10.1016/j.proeng.2012.06.149

24. A. Marcotti, M. B. Hidalgo, and L. Mathé, “Non-invasive vein detection method using infrared light,” IEEE Lat. Am. Trans., 11(1), 263–267 (2013). https://doi.org/10.1109/TLA.2013.6502814

25. M. Toro and H. L. Correa, “Biometric identification using infrared dorsum hand vein images,” Ing. Invest., 29(1), 90–100 (2009). https://doi.org/10.15446/ing.investig

26. M. Watanabe et al., “Palm vein authentication technology and its applications,” in Proc. Biom. Consortium Conf. (2015).

27. JAI, “User’s manual: AD-080GE digital 2CCD progressive scan multi-spectral camera,” http://www.jai.com/products/ad-080-ge (May 2017).

28. R. Castro-Ortega et al., “Biometric analysis of the palm vein distribution by means two different techniques of feature extraction,” Proc. SPIE, 9217, 92171W (2014). https://doi.org/10.1117/12.2061085

29. A. K. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Springer, New York (2005).

30. “PolyU multispectral palmprint database,” http://www4.comp.polyu.edu.hk/biometrics/MultispectralPalmprint/MSP.htm (August 2012).

31. M.-K. Hu, “Visual pattern recognition by moment invariants,” IRE Trans. Inf. Theory, 8(2), 179–187 (1962). https://doi.org/10.1109/TIT.1962.1057692

32. C. Camacho-Bello et al., “High-precision and fast computation of Jacobi-Fourier moments for image description,” J. Opt. Soc. Am. A, 31(1), 124–134 (2014). https://doi.org/10.1364/JOSAA.31.000124

33. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson/Prentice Hall, Upper Saddle River (2008).

34. A. Padilla-Vivanco, A. Martínez-Ramírez, and F. Granados-Agustín, “Digital image reconstruction using Zernike moments,” Proc. SPIE, 5237, 281–289 (2004). https://doi.org/10.1117/12.514248

35. A. Padilla-Vivanco et al., “Comparative analysis of pattern reconstruction using orthogonal moments,” Opt. Eng., 46(1), 017002 (2007). https://doi.org/10.1117/1.2432878

36. B. J. Chen et al., “Quaternion Zernike moments and their invariants for color image analysis and object recognition,” Signal Process., 92(2), 308–318 (2012). https://doi.org/10.1016/j.sigpro.2011.07.018

37. C.-W. Chong, P. Raveendran, and R. Mukundan, “The scale invariants of pseudo-Zernike moments,” Pattern Anal. Appl., 6(3), 176–184 (2003). https://doi.org/10.1007/s10044-002-0183-5

38. R. Mukundan and K. R. Ramakrishnan, Moment Functions in Image Analysis: Theory and Applications, World Scientific, Singapore (1998).

39. K. L. Priddy and P. E. Keller, Artificial Neural Networks: An Introduction, SPIE Press, Bellingham, Washington (2005).

40. D. Shen and H. H. S. Ip, “Discriminative wavelet shape descriptors for recognition of 2-D patterns,” Pattern Recognit., 32(2), 151–165 (1999). https://doi.org/10.1016/S0031-3203(98)00137-X

41. J. Cao et al., “MyPalmVein: a palm vein-based low-cost mobile identification system for wide age range,” in IEEE First Int. Conf. Connected Health: Appl., Syst. and Eng. Technol. (CHASE), 324–325 (2016). https://doi.org/10.1109/CHASE.2016.64

42. A. M. Al-Juboori et al., “Palm vein verification using multiple features and locality preserving projections,” Sci. World J., 2014, 1–11 (2014). https://doi.org/10.1155/2014/246083

43. A. Gumaei et al., “An improved multispectral palmprint recognition system using autoencoder with regularized extreme learning machine,” Comput. Intell. Neurosci., 2018, 1–13 (2018). https://doi.org/10.1155/2018/8041609

44. D. Zhang et al., “Online joint palmprint and palmvein verification,” Expert Syst. Appl., 38(3), 2621–2631 (2011). https://doi.org/10.1016/j.eswa.2010.08.052

45. J. Zhang et al., “Bidirectional aggregated features fusion from CNN for palmprint recognition,” Int. J. Biom., 10(4), 334–351 (2018). https://doi.org/10.1504/IJBM.2018.095292

Biography

Raúl Castro-Ortega received his bachelor’s degree in computational systems from the Higher Technological Institute of Huauchinango (ITSH) and his master’s degree from the Polytechnic University of Tulancingo (UPT) in 2014 and 2015, respectively. He is a PhD degree student in optomechatronics from UPT. His current research areas include digital image processing, biometrics, and pattern recognition. He is a member of SPIE.

Carina Toxqui-Quitl is an assistant professor at the Polytechnic University of Tulancingo. She received her BS degree from the Puebla Autonomous University, Mexico, in 2004. She received her MS and PhD degrees in optics from the National Institute of Astrophysics, Optics, and Electronics in 2006 and 2010, respectively. Her current research areas include image moments, multifocus image fusion, wavelet analysis, and computer vision.

Alfonso Padilla-Vivanco received his bachelor’s degree in physics from Puebla Autonomous University, Mexico, and his MS and PhD degrees both in optics from the National Institute of Astrophysics, Optics, and Electronics in 1995 and 1999, respectively. In 2000, he held a postdoctoral position in the Physics Department at the University of Santiago de Compostela, Spain. He is a professor at the Polytechnic University of Tulancingo. His research interests include optical information processing, image analysis, and computer vision.

Jose Francisco Solís-Villarreal graduated from the Research Center in Information Technologies and Systems (CITIS) of the Autonomous University of Hidalgo State (UAEH) in 2004 with a master’s degree in computer science. He received his doctorate in computer science from the Computer Research Center (CIC) of the National Polytechnic Institute (IPN) in 2007. Since 2012, he has been a full-time research professor at the University Center UAEM Teotihuacan Valley of the Autonomous University of Mexico State (UAEMex).

Eber Enrique Orozco-Guillén received his bachelor’s degree in physics from the University of Andes, Venezuela, and his MS and PhD degrees, both in optics, from the National Institute of Astrophysics, Optics, and Electronics in 2007 and 2009, respectively. He has 18 years of teaching experience at the undergraduate and postgraduate levels and, since 2011, has been a full-time research professor at the Polytechnic University of Sinaloa (UPSIN), México. His research interests include infrared thermography, image analysis, and diffuse reflectance spectroscopy.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Raúl Castro-Ortega, Carina Toxqui-Quitl, Alfonso Padilla-Vivanco, Jose Francisco Solís-Villarreal, and Eber Enrique Orozco-Guillén "Zernike moment invariants for hand vein pattern description from raw biometric data," Journal of Electronic Imaging 28(5), 053019 (15 October 2019). https://doi.org/10.1117/1.JEI.28.5.053019
Received: 28 February 2019; Accepted: 12 September 2019; Published: 15 October 2019
KEYWORDS: Veins; Databases; Biometrics; Infrared imaging; Error control coding; Feature extraction; Image classification
