Wireless communication is susceptible to security breaches by adversarial actors who mimic the Media Access Control (MAC) addresses of currently connected devices. Classifying devices by their “physical fingerprint” can help prevent this problem, since the fingerprint is unique to each device and independent of its MAC address. Previous techniques have mapped the WiFi signal to real values and used classification methods that support only real-valued inputs. In this paper, we put forth four new deep neural networks (NNs) for classifying WiFi physical fingerprints: a real-valued deep NN, a corresponding complex-valued deep NN, a real-valued deep convolutional NN (CNN), and the corresponding complex-valued deep CNN. Results show state-of-the-art performance on a dataset of nine WiFi devices.
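To make the complex-valued idea concrete, the following is a minimal sketch (not the paper's implementation) of a complex-valued linear layer built from real-valued PyTorch primitives; the layer name and sample shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Complex-valued fully connected layer built from two real layers:
    (Wr + iWi)(xr + ixi) = (Wr xr - Wi xi) + i(Wr xi + Wi xr)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.re = nn.Linear(in_features, out_features)
        self.im = nn.Linear(in_features, out_features)

    def forward(self, xr, xi):
        # Complex multiply-accumulate expressed with real-valued ops only.
        return self.re(xr) - self.im(xi), self.re(xi) + self.im(xr)

# Example: a batch of 8 signals, each with 128 complex I/Q samples.
xr, xi = torch.randn(8, 128), torch.randn(8, 128)
yr, yi = ComplexLinear(128, 16)(xr, xi)
```

Stacking such layers, with an activation applied separately to the real and imaginary parts, is one common way to obtain a complex-valued deep NN of this kind.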
In wireless networks, MAC-address spoofing is a common attack that allows an adversary to gain access to the system. To counter this threat, previous work has focused on classifying wireless signals using a “physical fingerprint”, i.e., changes to the signal caused by physical differences in the individual wireless chips. Instead of relying on MAC addresses for admission control, fingerprinting allows devices to be classified and then granted access. In many network settings, the set of legitimate devices (those that should be granted access) may change over time. Consequently, when a device comes online, a robust fingerprinting scheme must quickly identify it as legitimate using the pre-existing classification, while also identifying and grouping unauthorized devices based on their signals. This paper presents a two-stage Zero-Shot Learning (ZSL) approach to classify a received signal as originating from either a legitimate or an unauthorized device. During the training stage, a classifier is trained to recognize legitimate devices; an outlier detector then uses the discriminative features learned by the classifier to decide whether a new signature is an outlier. During the testing stage, an online clustering method groups the identified unauthorized devices. Our approach allows 42% of unauthorized devices to be identified as unauthorized and correctly clustered.
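As a minimal sketch of the two-stage idea (assuming features come from the trained classifier's penultimate layer, which is not shown), scikit-learn's IsolationForest and DBSCAN stand in here for the paper's outlier detector and online clustering method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
feats_train = rng.normal(size=(500, 8))   # features of legitimate devices
dev_a = rng.normal(size=(25, 8)) + 4.0    # two hypothetical unauthorized
dev_b = rng.normal(size=(25, 8)) - 4.0    # devices, unseen in training
feats_new = np.vstack([dev_a, dev_b])

# Stage 1: flag signatures that do not match any legitimate device.
detector = IsolationForest(random_state=0).fit(feats_train)
is_outlier = detector.predict(feats_new) == -1  # -1 marks outliers

# Stage 2: group the flagged signatures by similarity in feature space.
labels = DBSCAN(eps=4.0, min_samples=3).fit_predict(feats_new[is_outlier])
print(f"{is_outlier.sum()} outliers in {len(set(labels) - {-1})} clusters")
```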
Semantic segmentation using convolutional neural networks is a trending technique in scene understanding. Because these techniques are data-intensive, many devices struggle to store and process even a small batch of images at a time. Moreover, since training algorithms require very large datasets, it may be wise to store these datasets in compressed form; likewise, to accommodate the limited bandwidth of the transmission network, images can be compressed before being sent to their destination. Joint Photographic Experts Group (JPEG) compression is a popular image compression technique; however, JPEG introduces unwanted artifacts into the compressed images. In this paper, we explore the effect of JPEG compression at various compression levels on the performance of several deep-learning-based semantic segmentation techniques, for both synthetic and real-world datasets. For several established architectures trained on compressed synthetic and real-world datasets, we observed performance equivalent to (and sometimes better than) training on the uncompressed datasets, with a substantial reduction in storage space. We also analyze the effect of combining the original dataset with compressed versions at different JPEG quality levels and observe a performance improvement over the baseline. Our evaluation and analysis indicate that a segmentation network trained on compressed data can be the better option in terms of performance. We also illustrate that JPEG compression acts as a data augmentation technique, improving the performance of semantic segmentation algorithms.
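The augmentation effect can be reproduced with a simple round trip through the JPEG codec; the Pillow sketch below is illustrative, and the file name and quality levels are hypothetical:

```python
import io
from PIL import Image

def jpeg_roundtrip(img: Image.Image, quality: int) -> Image.Image:
    """Compress an image to JPEG at the given quality (1-95) and decode it,
    keeping the artifacts that compression introduces."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Mix the original with recompressed copies to augment a training set.
original = Image.open("scene.png").convert("RGB")
augmented = [original] + [jpeg_roundtrip(original, q) for q in (90, 50, 10)]
```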
Signal attributes such as angle of arrival (AoA), time of arrival (ToA), signal amplitude, and phase can be used by a set of receivers (detectors) to perform location fingerprinting (LF), whereby the location of a wireless source is determined. In validating new approaches for location fingerprinting, it is useful to simulate these attributes for the subset of signals that intersect detectors. However, given indoor settings with a complex architecture, it is computationally expensive to simulate multipath propagation while preserving detailed signal information. Moreover, this cost can be unnecessary since determining whether an LF approach is promising may not require tracing all rays that impact the detector. Here, we report on our preliminary efforts to design and test a MATLAB-based simulation tool for wireless propagation that addresses this issue. Our approach builds upon well-known ray-tracing techniques, but innovates via an algorithm designed to obtain a sizable subset of rays that intersect a detector, along with the AoA, ToA, signal amplitude, and phase for each such ray. Finally, we employ our tool in conjunction with a neural network-based method for location fingerprinting, demonstrating the intended use case for our simulation tool.
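The paper's tool is MATLAB-based; as a language-agnostic illustration of the core geometric step, the Python sketch below intersects a 2-D ray with a circular detector (an assumed detector model) and recovers the path length, ToA, and AoA for the hit:

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def first_hit(origin, direction, center, radius):
    """Path length at which a 2-D ray first intersects a circular detector,
    or None on a miss. `direction` must be a unit vector."""
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0 else None

origin, direction = np.array([0.0, 0.0]), np.array([1.0, 0.0])
center, radius = np.array([5.0, 0.1]), 0.5

t = first_hit(origin, direction, center, radius)
if t is not None:
    toa = t / C                                               # time of arrival (s)
    aoa = np.degrees(np.arctan2(direction[1], direction[0]))  # arrival angle (deg)
    print(f"hit at {t:.3f} m: ToA {toa:.2e} s, AoA {aoa:.1f} deg")
```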
For autonomous vehicles, 3D rotating LiDAR sensors are often critically important to the vehicle’s ability to sense its environment. Generally, these sensors scan their environment using multiple laser beams to gather information about the range and intensity of reflections from objects. LiDAR capabilities have evolved such that some autonomous systems employ multiple rotating LiDARs to gather greater amounts of data about the vehicle’s surroundings. For these multi-LiDAR systems, the placement of the sensors determines the density of the combined point cloud. We perform preliminary research on the optimal LiDAR placement strategy for an off-road autonomous vehicle known as the Halo project. We use the Mississippi State University Autonomous Vehicle Simulator (MAVS) to generate large amounts of labeled LiDAR data that can be used to train and evaluate a neural network for processing LiDAR data in the vehicle. The trained networks are evaluated, and their performance metrics are then used to generalize the performance of each sensor pose. Data generation, training, and evaluation were performed iteratively to conduct a parametric analysis of the effectiveness of various LiDAR poses in the multi-LiDAR system. We also describe and evaluate intrinsic and extrinsic calibration methods applied in the multi-LiDAR system. In conclusion, we found that our simulations are an effective way to evaluate the efficacy of various LiDAR placements, based on the performance of the neural network used to process the data and the density of the point cloud in areas of interest.
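One simple way to score a candidate sensor pose, sketched below under the assumption that point cloud density in a region of interest is the metric of record, is to count points per unit volume inside an axis-aligned box; the clouds and ROI here are hypothetical stand-ins for MAVS output:

```python
import numpy as np

def roi_density(points, roi_min, roi_max):
    """Point density (points per cubic meter) inside an axis-aligned box,
    a simple proxy for how well a sensor pose covers a region of interest."""
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return inside.sum() / np.prod(roi_max - roi_min)

rng = np.random.default_rng(0)
cloud_a = rng.uniform(-10, 10, size=(100_000, 3))  # stand-in cloud, pose A
cloud_b = rng.uniform(-8, 8, size=(100_000, 3))    # stand-in cloud, pose B
roi_min, roi_max = np.array([0, -2, -1]), np.array([10, 2, 1])

for name, cloud in (("pose A", cloud_a), ("pose B", cloud_b)):
    print(f"{name}: {roi_density(cloud, roi_min, roi_max):.1f} pts/m^3")
```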
Temperature monitoring and regulation is a critical aspect of data center administration. Currently, conventional discrete transistor-based thermal sensing systems are widely used for this purpose, requiring a discrete device for each temperature measurement point in the spatial domain. This leads to an increase in both complexity and cost as the data center grows in scale. This manuscript describes a real-time multiplexed optical fiber thermal sensing system for data center applications that simultaneously measures thousands of discrete points along the length of the fiber under test. This system allows for real-time thermal monitoring of several hundred servers with a spatial resolution of 1 cm, a temperature resolution of <1 °C, and a system update rate of 1 Hz. The temperature inside individual servers and the ambient room temperature outside the racks can be monitored simultaneously in real time using a single optical fiber probe. To investigate this concept, a pilot experiment is presented in which the dynamic server temperature distribution was monitored using the proposed fiber sensing system. Temperature data recorded by the built-in thermal sensors within the CPU of the server under test were simultaneously recorded and compared to the fiber-based measurements. To induce a temperature change within the server, a computationally intensive task was run during temperature testing. Both methods of temperature measurement showed similar trends, indicating that the proposed multiplexed optical fiber-based system has substantial potential as a scalable method of distributed data center temperature monitoring.
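The agreement between the two measurement methods can be summarized by correlating their aligned 1 Hz time series; the Python sketch below uses hypothetical readings, not data from the pilot experiment:

```python
import numpy as np

# Hypothetical aligned 1 Hz samples from the two methods: the CPU's
# built-in thermal sensor and the co-located fiber sensing channel.
cpu_temp = np.array([45.0, 47.2, 52.8, 58.1, 60.3, 59.7, 55.4])
fiber_temp = np.array([44.1, 46.0, 51.5, 57.0, 59.8, 59.2, 54.8])

# Pearson correlation quantifies whether both methods track the same trend.
r = np.corrcoef(cpu_temp, fiber_temp)[0, 1]
print(f"trend agreement (Pearson r): {r:.3f}")
```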
In this paper, a modified particle swarm optimization (PSO) approach, particle swarm optimization with ε-greedy exploration (εPSO), is used to tackle object tracking. In the modified εPSO algorithm, a cooperative learning mechanism among individuals is introduced: particles not only adjust their own velocity according to their own experience and the best individual of the swarm, but also learn from other top individuals with a certain probability. This kind of biologically inspired mutual-learning behavior helps find the global optimum with better convergence speed and accuracy. The εPSO algorithm has been tested on benchmark functions and demonstrated its effectiveness in high-dimensional multi-modal optimization. In addition to the standard benchmark study, we also combined our new εPSO approach with the traditional particle filter (PF) algorithm on object tracking tasks, such as car tracking in complex environments. Comparative studies between our εPSO-combined PF algorithm and existing techniques, such as the standard PF and the classic PSO-combined PF, are used to verify and validate the performance of our approach.
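As a minimal sketch of one plausible reading of the ε-greedy mechanism (an assumption, not the paper's exact update rule), with probability ε each particle's social attractor is a randomly chosen personal best instead of the global best:

```python
import numpy as np

def eps_pso(f, dim=10, n_particles=30, iters=200, eps=0.1, seed=0):
    """Minimal PSO where, with probability eps, a particle learns from a
    random personal best instead of the global best (assumed mechanism)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()]
    w, c1, c2 = 0.72, 1.49, 1.49  # common inertia/acceleration settings
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # epsilon-greedy choice of social attractor for every particle
        explore = rng.random(n_particles) < eps
        social = np.where(explore[:, None],
                          pbest[rng.integers(n_particles, size=n_particles)],
                          gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (social - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()]
    return gbest, pval.min()

sphere = lambda z: float(np.sum(z ** 2))  # standard benchmark function
best_x, best_val = eps_pso(sphere)
print(f"sphere minimum found: {best_val:.3e}")
```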