Open Access Paper
28 December 2022
Cross-receiver specific emitter identification based on a deep adversarial neural network with separated batch normalization
Proceedings Volume 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022); 125066N (2022) https://doi.org/10.1117/12.2662708
Event: International Conference on Computer Science and Communication Technology (ICCSCT 2022), 2022, Beijing, China
Abstract
Specific emitter identification (SEI) is the task of identifying individual emitters from received wireless signals. Although deep learning has been successfully applied to SEI, performance is still unsatisfactory when the receiver changes. In this paper, we introduce a domain adaptation method, namely the deep adversarial neural network (DANN), for cross-receiver SEI. Furthermore, separated batch normalization (SepBN) is proposed to improve the performance. Experiments on real data show that the proposed SepBN-DANN method performs well for cross-receiver SEI.

1. INTRODUCTION

Specific emitter identification (SEI) is the approach of identifying wireless devices from their radio frequency (RF) emissions1. SEI is achievable because the electronic circuits of different emitters possess unique characteristics, which are determined during the production and manufacturing processes2. As these physical-layer characteristics are distinguishable independent of the content of the signals, SEI has been extensively applied for wireless security in both military3 and civilian4 fields.

Recently, deep learning methods have shown superior performance for SEI. Neural networks including the recurrent neural network (RNN) and the convolutional neural network (CNN) have been utilized. Long short-term memory (LSTM), a typical RNN architecture, is adopted for SEI in References 5 and 6, and CNNs are employed for SEI in References 7-11. Although deep learning methods achieve superior performance when the training and testing data are received under the same condition, performance degrades when the testing data are received under a different condition. In particular, when the test data are received by a receiver different from the one used for the training data, the shifts of data distributions caused by changing the receiver, also known as domain shifts, can degrade performance dramatically if not handled properly.

In this paper, an approach based on the deep adversarial neural network (DANN) is applied to cross-receiver SEI. DANN is an unsupervised domain adaptation method that can mitigate the domain shifts caused by different receivers. Furthermore, separated batch normalization (SepBN) is proposed to enhance the performance.

2. PROBLEM FORMULATION

Assume there are K emitters {E1,E2,…,EK} and M receivers {R1,R2,…,RM}. The ideal equivalent baseband signal transmitted by an emitter is defined as s(t); the signal emitted by Ek, k ∈ {1,2,…,K}, and received by Rm, m ∈ {1,2,…,M}, is then formulated as:

xk,m(t) = rm(fk(s(t))) + n(t)

where fk(·) is a function denoting the characteristics of Ek, rm(·) denotes the properties of Rm, and n(t) is additive white Gaussian noise. Samples are obtained by sampling xk,m(t):

xk,m[n] = xk,m(nT), n = 0, 1, 2, …

where T is the sampling period.

The dataset collected by Rm is denoted as Dm = {(x1,y1),(x2,y2),…,(xN,yN)}, where N represents the number of samples, xi is the ith signal sample and yi ∈ {E1,E2,…,EK} is the corresponding emitter label. When samples of DS, S ∈ {1,2,…,M}, are used for training and samples of DT, T ∈ {1,2,…,M}, T ≠ S, are used for testing, the samples of DS and DT are not independently and identically distributed, owing to the different characteristics of RS and RT. Therefore, a neural network trained on samples from one receiver can perform poorly on samples from another receiver.
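To make the data setup concrete, the following minimal sketch shows one way the labeled source-receiver dataset DS and the unlabeled target-receiver dataset DT could be wrapped for training. It is our illustration, not code from the paper; the PyTorch framework, the (N, 2, L) I/Q array layout and all names are assumptions.

import numpy as np
import torch
from torch.utils.data import Dataset

class ReceiverDataset(Dataset):
    """Signal samples x_i with emitter labels y_i collected by one receiver."""

    def __init__(self, signals: np.ndarray, labels: np.ndarray):
        # signals: (N, 2, L) array of I/Q channels; labels: (N,) emitter indices
        self.signals = torch.as_tensor(signals, dtype=torch.float32)
        self.labels = torch.as_tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return self.signals[i], self.labels[i]

# D_S (source receiver): labels feed the classification loss during training.
# D_T (target receiver): labels are withheld from training and used only to
# measure cross-receiver accuracy, matching the transductive setting above.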

3. SEPBN-DANN

In this paper, a method named SepBN-DANN is proposed for cross-receiver SEI. The network architecture of SepBN-DANN is shown in Figure 1. SepBN-DANN is a transductive learning method, which utilizes labeled training data together with unlabeled testing data. By aligning the feature spaces of the training and testing data, SepBN-DANN learns receiver-invariant features so that the performance of cross-receiver SEI is improved.

Figure 1. Network architecture of SepBN-DANN.

3.1 DANN

DANN12 is a typical unsupervised domain adaptation method, which has been successfully applied to image classification12, speaker recognition13 and SEI under varying frequency14. DANN is composed of three sub-networks, namely the feature extractor, the label classifier and the domain discriminator, whose parameters are denoted as ΘF, ΘC and ΘD, respectively. The feature extractor seeks to learn features that are invariant across receivers and discriminative for emitter identities. Based on the outputs of the feature extractor, the label classifier identifies the emitters of the inputs, while the domain discriminator distinguishes the receivers of the inputs. The feature extractor and the domain discriminator are trained in an adversarial manner to improve the features' invariance across receivers, whereas the feature extractor and the label classifier are trained cooperatively so that the learnt features are discriminative for emitter identities. Therefore, the loss function of ΘC is defined as the cross entropy between the outputs of the label classifier and the emitter identities:

LC = −𝔼(x,y)∼DS[ Σk=1…K 𝕀(y = Ek) ln ŷk(x) ]

where 𝔼 denotes expectation, 𝕀(·) is the indicator function, ln(·) is the natural logarithm, ŷ(x) is the output of the label classifier, and ŷk(x) represents the probability of the input belonging to Ek. Similarly, the loss function of ΘD is defined as the cross entropy between the outputs of the domain discriminator and the corresponding receivers:

LD = −𝔼x∼DS[ ln d̂(x) ] − 𝔼x∼DT[ ln(1 − d̂(x)) ]

where d̂(x) is the output of the domain discriminator, denoting the probability that the input belongs to RS. Since the feature extractor is trained cooperatively with the label classifier, LC is also an objective of the feature extractor. Because the feature extractor is trained adversarially against the domain discriminator, the adversarial loss function is defined as the cross entropy between the outputs of the domain discriminator and the uniform distribution:

LA = −𝔼x∼DS∪DT[ ½ ln d̂(x) + ½ ln(1 − d̂(x)) ]

The total objective of ΘF is:

LF = LC + λLA

where λ ∈ [0,1] is a hyperparameter weighting the relative importance of LA. During training, the three sub-networks are updated iteratively with their corresponding objectives.
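As a concrete illustration of the training procedure above, the sketch below implements the three sub-networks and one iteration of the alternating updates in PyTorch. The paper does not specify layer sizes, optimizers or the exact update schedule, so the architecture, the hyperparameter values and all names here are assumptions; only the losses LC, LD, LA and the combination LF = LC + λLA follow the description in this subsection.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """1-D CNN mapping an I/Q sample (B, 2, L) to a feature vector (B, feat_dim)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class LabelClassifier(nn.Module):
    """Predicts which of the K emitters produced the input features."""
    def __init__(self, feat_dim=128, num_emitters=20):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_emitters)
    def forward(self, f):
        return self.fc(f)                      # emitter logits

class DomainDiscriminator(nn.Module):
    """Predicts the logit of the probability that features come from R_S."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, f):
        return self.fc(f).squeeze(1)

def train_step(feat, clf, disc, opt_fc, opt_d, xs, ys, xt, lam=0.1):
    """One alternating update: L_D for the discriminator, then
    L_F = L_C + lam * L_A for the feature extractor and label classifier."""
    # Update Theta_D: cross entropy between discriminator outputs and receiver labels.
    with torch.no_grad():
        fs, ft = feat(xs), feat(xt)
    d_logits = torch.cat([disc(fs), disc(ft)])
    d_labels = torch.cat([torch.ones(len(xs)), torch.zeros(len(xt))])
    loss_d = F.binary_cross_entropy_with_logits(d_logits, d_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Update Theta_F and Theta_C.
    fs, ft = feat(xs), feat(xt)
    loss_c = F.cross_entropy(clf(fs), ys)      # L_C on labeled source samples
    d_all = torch.sigmoid(disc(torch.cat([fs, ft])))
    # L_A: cross entropy between the discriminator outputs and the uniform (1/2, 1/2)
    # distribution, pushing the features toward receiver invariance.
    loss_a = -(0.5 * torch.log(d_all + 1e-8) + 0.5 * torch.log(1.0 - d_all + 1e-8)).mean()
    loss_f = loss_c + lam * loss_a             # L_F = L_C + lambda * L_A
    opt_fc.zero_grad(); loss_f.backward(); opt_fc.step()
    return loss_c.item(), loss_d.item(), loss_a.item()

In the sketch, opt_fc would optimize the feature extractor and label classifier jointly while opt_d optimizes the discriminator, mirroring the iterative training of the three objectives; the SepBN variant of Section 3.2 would replace the plain BatchNorm1d layers inside the feature extractor.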

3.2 SepBN

To further enhance the stability and improve the performance of DANN, separated batch normalization is proposed to replace conventional batch normalization. Due to the different characteristics of the receivers, samples of DS and DT are not identically distributed. Therefore, using the same batch normalization layer for samples of both DS and DT may cause the normalization parameters to fluctuate, which makes training unstable. To address this issue, we use two separated batch normalization layers, one for samples of DS (the training data) and the other for samples of DT (the testing data), as shown in Figure 1.
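A minimal sketch of the separated batch normalization layer is given below, again assuming a PyTorch implementation; the routing flag and where the layer sits inside the feature extractor are our assumptions, since the paper only states that one BN layer normalizes batches from DS and another normalizes batches from DT.

import torch.nn as nn

class SepBN1d(nn.Module):
    """Separated batch normalization: one BN layer per domain."""
    def __init__(self, num_features):
        super().__init__()
        self.bn_source = nn.BatchNorm1d(num_features)   # statistics of D_S batches
        self.bn_target = nn.BatchNorm1d(num_features)   # statistics of D_T batches

    def forward(self, x, domain="source"):
        # Each domain keeps its own running statistics and affine parameters, so the
        # distribution gap between receivers no longer makes shared normalization
        # parameters fluctuate from batch to batch.
        return self.bn_source(x) if domain == "source" else self.bn_target(x)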

4. EXPERIMENTS

To evaluate the performance of the proposed approach for cross-receiver SEI, a dataset of 20 emitters and 2 receivers (R1 and R2) is collected on three days (Day 1, Day 2 and Day 3). The parameters of all emitters are kept the same, and the parameters of the two receivers are identical. On each day, 1800 samples are collected for each emitter with each receiver. The proposed SepBN-DANN method is compared with DANN, which does not use SepBN, and a vanilla CNN, which is simply trained on the training data and then applied to the testing data.

The accuracies of the different methods for cross-receiver SEI on each day are shown in Table 1. SepBN-DANN achieves the highest accuracy under most conditions, which indicates the superiority of the proposed method. For CNN and DANN, the accuracies of cross-receiver SEI on Day 1 are evidently lower than those on Day 2 and Day 3. This may be because the signals collected by R1 and R2 on Day 1 are more divergent. However, SepBN-DANN performs comparably well on Day 1, demonstrating that it is more stable.

Table 1. Accuracy of cross-receiver SEI.

Method          Day 1                Day 2                Day 3                Avg.
                R1 → R2   R2 → R1    R1 → R2   R2 → R1    R1 → R2   R2 → R1
CNN             0.2437    0.5329     0.8163    0.9099     0.7319    0.8582     0.6822
DANN            0.7821    0.7956     0.9520    0.9866     0.9333    0.9612     0.9018
SepBN-DANN      0.9728    0.9169     0.9756    0.9937     0.9346    0.9083     0.9503
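For reference, a short sketch of how the accuracies in Table 1 could be computed is given below: the network trained on the source receiver is scored on labeled samples from the target receiver, with the labels used only for evaluation. The function and loader names are our own, and for SepBN-DANN the target-domain batch normalization branch would be the one active during this evaluation.

import torch

def cross_receiver_accuracy(feature_extractor, label_classifier, target_loader):
    """Fraction of target-receiver samples assigned to the correct emitter."""
    feature_extractor.eval()
    label_classifier.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in target_loader:
            pred = label_classifier(feature_extractor(x)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total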

Features learned by the different methods on Day 1 for R1 → R2 are visualized using t-SNE15, as shown in Figure 2. Solid dots represent features of samples from R1, hollow dots represent features of samples from R2, and different colors correspond to different emitters. Since CNN does not take domain shifts into account, the learned features of R2 diverge largely from those of R1, leading to poor performance. DANN partially mitigates the divergence of features between R1 and R2, and hence improves the performance compared with CNN. By contrast, SepBN-DANN aligns the features of R1 and R2 to learn receiver-invariant features, so that a desirable accuracy is achieved.

Figure 2. (a) Feature visualization of CNN; (b) feature visualization of DANN; (c) feature visualization of SepBN-DANN.
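A visualization along the lines of Figure 2 can be produced with t-SNE from scikit-learn. The sketch below is our own illustration, assuming `features` is an (N, feat_dim) array of feature-extractor outputs and `emitter` / `receiver` are the corresponding integer label arrays; the paper distinguishes receivers with solid versus hollow dots, and two marker styles are used here instead for simplicity.

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, emitter, receiver):
    """2-D t-SNE embedding, colored by emitter and marked by receiver."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    for r, marker in [(0, "o"), (1, "x")]:        # receiver R1 vs. receiver R2
        m = receiver == r
        plt.scatter(emb[m, 0], emb[m, 1], c=emitter[m], cmap="tab20",
                    marker=marker, s=10)
    plt.axis("off")
    plt.show()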

5. CONCLUSION

In this paper, a method named SepBN-DANN is proposed for cross-receiver SEI. SepBN-DANN learns receiver-invariant features by aligning the feature distributions of different receivers, and is more stable than the conventional unsupervised domain adaptation method. Experimental results show that SepBN-DANN can substantially improve the accuracy of cross-receiver SEI. Since SepBN-DANN relies only on the assumption that the distributions of training and testing data are similar, the method may also be beneficial in other settings, such as cross-channel and cross-modulation SEI. Like other unsupervised domain adaptation methods, SepBN-DANN suffers from negative transfer, i.e., signals of one emitter from the source domain can be misaligned with signals of another emitter from the target domain; this will be addressed in future work.

REFERENCES

[1] Talbot, K., Duley, P. and Hyatt, M., "Specific emitter identification and verification," Technology Review, (2003).

[2] Baldini, G., Steri, G. and Giuliani, R., "Identification of wireless devices from their physical layer radio-frequency fingerprints," Encyclopedia of Information Science and Technology, IGI Global, (2018).

[3] Spezio, A. E., "Electronic warfare systems," IEEE Transactions on Microwave Theory and Techniques, 50(3), 633-644 (2002). https://doi.org/10.1109/22.989948

[4] Ureten, O. and Serinken, N., "Wireless security through RF fingerprinting," Canadian Journal of Electrical and Computer Engineering, 32(1), 27-33 (2007). https://doi.org/10.1109/CJECE.2007.364330

[5] He, B. and Wang, F., "Cooperative specific emitter identification via multiple distorted receivers," IEEE Transactions on Information Forensics and Security, 15, 3791-3806 (2020). https://doi.org/10.1109/TIFS.10206

[6] Wu, Q., Feres, C., Kuzmenko, D., Zhi, D., Yu, Z. and Liu, X., "Deep learning based RF fingerprinting for device identification and wireless security," Electronics Letters, 54(24), 1405-1407 (2018). https://doi.org/10.1049/ell2.v54.24

[7] Riyaz, S., Sankhe, K., Ioannidis, S. and Chowdhury, K., "Deep learning convolutional neural networks for radio identification," IEEE Communications Magazine, 56(9), 146-152 (2018). https://doi.org/10.1109/MCOM.2018.1800153

[8] Ding, L., Wang, S., Wang, F. and Zhang, W., "Specific emitter identification via convolutional neural networks," IEEE Communications Letters, 22(12), 2591-2594 (2018). https://doi.org/10.1109/LCOMM.2018.2871465

[9] Pan, Y., Yang, S., Peng, H., Li, T. and Wang, W., "Specific emitter identification based on deep residual networks," IEEE Access, 7, 54425-54434 (2019). https://doi.org/10.1109/Access.6287639

[10] Lin, Y., Tu, Y., Dou, Z., Chen, L. and Mao, S., "Contour stella image and deep learning for signal recognition in the physical layer," IEEE Transactions on Cognitive Communications and Networking, 7(1), 34-46 (2020).

[11] Wang, Y., Gui, G., Gacanin, H., Ohtsuki, T., Dobre, O. A. and Poor, H. V., "An efficient specific emitter identification method based on complex-valued neural networks and network compression," IEEE Journal on Selected Areas in Communications, (2021).

[12] Ganin, Y., Ustinova, E. and Ajakan, H., "Domain-adversarial training of neural networks," Journal of Machine Learning Research, 17(1), 2096-2130 (2016).

[13] Wang, Q., Rao, W. and Sun, S., "Unsupervised domain adaptation via domain adversarial training for speaker recognition," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4889-4893 (2018).

[14] Huang, K., Yang, J. and Liu, H., "Deep adversarial neural network for specific emitter identification under varying frequency," Bulletin of the Polish Academy of Sciences: Technical Sciences, e136737 (2021).

[15] van der Maaten, L. and Hinton, G., "Visualizing data using t-SNE," Journal of Machine Learning Research, 9(11), (2008).
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Keju Huang, Bin Jiang, Junan Yang, Jian Wang, Pengjiang Hu, Yiming Li, Kun Shao, and Dongxing Zhao "Cross-receiver specific emitter identification based on a deep adversarial neural network with separated batch normalization", Proc. SPIE 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022), 125066N (28 December 2022); https://doi.org/10.1117/12.2662708
KEYWORDS: Receivers, Neural networks, Visualization, Convolutional neural networks, Defense technologies, Information security, Network architectures
