Paper
2 May 2019 DLA-PUF: deep learning attacks on hardware security primitives
Anugayathiri Pugazhenthi, Nima Karimian, Fatemeh Tehranipoor
Abstract
Physical Unclonable Functions (PUFs) act as functions encoded in hardware that produce a unique output, referred to as a response, for a specific input, called a challenge. PUFs provide varying levels of security and can therefore be used in different applications, depending on the number of their available input-output pairs, referred to as Challenge-Response Pairs (CRPs). For example, a PUF with only a single challenge-response pair can be used for identification, while a PUF with multiple CRPs can be used to provide multiple different session keys for authentication [1, 2]. In the first case, the response needs to be kept secret, while in the second, responses can also be used without any secrecy, as long as the related CRPs are not reused. Generally, PUFs are vulnerable to modeling and machine learning attacks. In this paper, we investigate and demonstrate the resiliency of DRAM-based PUFs against machine learning attacks (Naive Bayes (NB), Logistic Regression (LR), and Support Vector Machine (SVM)) as well as deep learning attacks (in particular, convolutional neural network (CNN) attacks). We are the first to provide a detailed analysis of on-board DRAM startup values for the purpose of generating unique IDs, along with their vulnerabilities to attacks. We performed our experiments on the Digilent Atlys board (Xilinx Spartan-6 FPGA), using the on-board DRAM memory (MIRA P3R1GE3EGF G8E DDR2). Our results indicate that the three startup value-based DRAM PUFs (DRAM1, DRAM2, and DRAM3) are robust against machine learning attacks.
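The classical modeling attack the abstract evaluates can be sketched in a few lines. Below is a minimal, illustrative example, assuming NumPy and scikit-learn; the CRP data is synthetic (a toy linear target standing in for a real PUF), not the paper's DRAM startup measurements, and logistic regression stands in for the full NB/LR/SVM suite. A PUF is considered robust against such an attack when the attacker's accuracy on held-out CRPs stays near the 50% coin-flip baseline.

```python
# Illustrative PUF modeling attack (hypothetical data, not the paper's setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic CRP set: 64-bit challenges; responses come from a toy linear
# model with noise. The weights play the role of the device's secret.
n_crps, n_bits = 5000, 64
challenges = rng.integers(0, 2, size=(n_crps, n_bits))
secret_weights = rng.normal(size=n_bits)  # unknown to the attacker
noise = rng.normal(scale=0.5, size=n_crps)
responses = (challenges @ secret_weights + noise > 0).astype(int)

# The attacker observes a training set of CRPs and tries to predict the
# responses to unseen challenges.
X_train, X_test, y_train, y_test = train_test_split(
    challenges, responses, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out CRP prediction accuracy: {model.score(X_test, y_test):.3f}")
```

The deep learning attack the paper considers replaces the linear classifier with a CNN over the raw startup-value bitmap; the success criterion is the same, with accuracy near chance indicating resistance to modeling.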
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Anugayathiri Pugazhenthi, Nima Karimian, and Fatemeh Tehranipoor "DLA-PUF: deep learning attacks on hardware security primitives", Proc. SPIE 11009, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2019, 110090B (2 May 2019); https://doi.org/10.1117/12.2519257
RIGHTS & PERMISSIONS
Get copyright permission  Get copyright permission on Copyright Marketplace
KEYWORDS
Machine learning
Data modeling
Field programmable gate arrays
Information security
Capacitors
Computer security