Apache Storm is a popular open-source distributed computing platform for real-time big-data processing. However, the existing task scheduling algorithms for Apache Storm do not adequately account for the heterogeneity and dynamics of node computing resources and task demands, leading to high processing latency and suboptimal performance. In this thesis, we propose an innovative machine learning-based task scheduling scheme tailored for Apache Storm. The scheme leverages machine learning models to predict task performance and assigns each task to the computation node with the lowest predicted processing latency. In our design, each node operates a machine learning-based monitoring mechanism. When the master node schedules a new task, it queries the computation nodes for their available resources and latency predictions, and then makes the optimal assignment decision. We explored three machine learning models: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Deep Belief Networks (DBN). Our experiments showed that LSTM achieved the most accurate latency predictions. The evaluation results demonstrate that Apache Storm with the proposed LSTM-based scheduling scheme significantly reduces task processing delay and improves resource utilization compared to the existing algorithms.
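As an illustration of the predict-then-assign idea described above, the sketch below pairs a per-node LSTM latency predictor with a master-side selection loop. This is a minimal sketch, not the thesis implementation; the feature layout (CPU load, memory, queue length, past latency), model sizes, and the schedule() helper are assumptions for illustration.

```python
# Minimal sketch: an LSTM latency predictor per node and a master that assigns
# a task to the node with the lowest predicted latency (illustrative only).
import torch
import torch.nn as nn

class LatencyLSTM(nn.Module):
    """Predicts the next task-processing latency from a window of recent
    per-node observations (e.g., CPU load, memory, queue length, latency)."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # latency estimate per sequence

def schedule(node_histories, models):
    """Assign the new task to the node whose model predicts the lowest latency."""
    best_node, best_latency = None, float("inf")
    for node_id, history in node_histories.items():   # history: (window, n_features) tensor
        with torch.no_grad():
            pred = models[node_id](history.unsqueeze(0)).item()
        if pred < best_latency:
            best_node, best_latency = node_id, pred
    return best_node, best_latency
```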
KEYWORDS: Augmented reality, Distributed computing, 3D modeling, Sensors, Data modeling, Data processing, Computer architecture, Autoregressive models, Object detection, Computing systems
Cooperative Augmented Reality (AR) can provide real-time, immersive, and context-aware situational awareness while enhancing mobile sensing capabilities and benefiting various applications. Distributed edge computing has emerged as an essential paradigm to facilitate cooperative AR. We designed and implemented a distributed system to enable fast, reliable, and scalable cooperative AR. In this paper, we present a novel approach and architecture that integrates advanced sensing, communications, and processing techniques to create such a cooperative AR system and demonstrate its capability with HoloLens and edge servers connected over a wireless network. Our research addresses the challenges of implementing a distributed cooperative AR system capable of capturing data from a multitude of sensors on HoloLens, performing fusion and accurate object recognition, and seamlessly projecting the reconstructed 3D model into the wearer’s field of view. The paper delves into the intricate architecture of the proposed cooperative AR system, detailing its distributed sensing and edge computing components, and the Apache Storm-integrated platform. The implementation encompasses data collection, aggregation, analysis, object recognition, and rendering of 3D models on the HoloLens, all in real-time. The proposed system enhances the AR experience while showcasing the vast potential of distributed edge computing. Our findings illustrate the feasibility and advantages of merging distributed cooperative sensing and edge computing to offer dynamic, immersive AR experiences, paving the way for new applications.
A single sensor, such as a 3D LiDAR camera, provides accurate perception results but has relatively limited ability to capture comprehensive environmental information. Therefore, multiple sensors are preferred for surveillance tasks in both tactical and civilian scenarios. Cooperative perception is one of the solutions that enables sensors to share sensory information with other sensors and infrastructure, extending coverage and enhancing the detection accuracy of surrounding objects for better safety and path planning. However, efficient management of the large volume of sensory data across multiple sensors in the wireless network is needed to maintain real-time sensing. In this work, we design a complete cooperative perception framework that integrates various networking, image processing, and data fusion technologies to enhance situational awareness with multiple sensors. The framework uses information-centric networking and deep reinforcement learning.
The rapid growth of the demand for mobile sensing makes it difficult to process all sensing tasks on a single mobile device. Therefore, the concept of distributed computing was proposed, in which the computation tasks are distributed to all available devices in the same edge network to achieve faster data processing. However, in some critical scenarios the network condition at the edge is poor, the bandwidth of the edge network is limited, and the connection is unstable, which can significantly affect the performance of distributed computing. To overcome such issues, we propose a resilient mobile distributed computing framework adopting an integrated solution that combines Coded Computing (CC) and Named Data Networking (NDN). With NDN, the network traffic and information sharing within the edge network are optimized dynamically to adapt to the time-varying network condition. The CC technique can recover some of the missing computation results when an edge node fails or is disconnected.
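To illustrate how coded computing can mask a failed or disconnected edge node, the sketch below uses a simple sum-parity code over a matrix-vector task. This is an assumed toy code for illustration, not necessarily the code construction used in this work.

```python
# Minimal sketch of the coded-computing idea with a sum-parity code: split A
# into A1 and A2, add a parity block A1 + A2, send each block to a different
# edge node, and recover A @ x even if any one of the three nodes fails.
import numpy as np

def encode(A):
    A1, A2 = np.array_split(A, 2, axis=0)
    return {"A1": A1, "A2": A2, "P": A1 + A2}   # parity block

def decode(results):
    """results maps block name -> (block @ x) for the nodes that responded."""
    if "A1" in results and "A2" in results:
        return np.concatenate([results["A1"], results["A2"]])
    if "A1" in results and "P" in results:       # node holding A2 failed
        return np.concatenate([results["A1"], results["P"] - results["A1"]])
    if "A2" in results and "P" in results:       # node holding A1 failed
        return np.concatenate([results["P"] - results["A2"], results["A2"]])
    raise RuntimeError("too many node failures to recover")

A = np.random.rand(4, 3); x = np.random.rand(3)
blocks = encode(A)
partial = {name: blk @ x for name, blk in blocks.items() if name != "A2"}  # A2's node lost
assert np.allclose(decode(partial), A @ x)
```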
KEYWORDS: Relays, Mobile devices, Energy efficiency, Unmanned aerial vehicles, Standards development, Optimization (mathematics), Mobile communications
Next-generation (5G & beyond) cellular networks promise much higher throughput and lower latency. However, mobile users experiencing poor channel quality not only suffer low data-rate connections with the base station but also reduce cell’s aggregate throughput and increase overall delay. In this paper, we consider a hybrid cellular and mobile ad hoc Device-to-Device (D2D) network that leverages the advantages of both wide-area cellular coverage and high-speed ad hoc D2D relaying to enhance network performance and scalability. Dedicated relay devices, such as Unmanned Aerial Vehicles (UAVs)/drones, can also be deployed to further improve network connectivity and thus throughput. The base station may send the packets destined for a mobile user with poor cellular channel quality to a proxy mobile device with better cellular channel quality. The proxy mobile device will relay the packets to the destination, thereby significanltly improving network throughput and delay. We formulate the data transmission problem and design an online reinforcement learning-based algorithm to achieve the best transmission performance.
Mobile edge computing (MEC) is an emerging and fast-growing distributed computing paradigm. It brings computation and storage resources closer to mobile users while also processing data at the network edge to improve response time and save bandwidth. In tactical virtual training environments, latency is a key factor that affects training performance. Additionally, MEC provides both an information service environment and cloud computing capabilities to enable real-time virtual training. Therefore, we designed a machine learning-based data caching and processing scheme for virtual training networks. The design consists of three tiers: mobile devices, edge servers, and cloud servers. By pre-caching the critical content objects close to the mobile devices, our MEC network enables data transmission and processing at low latency. Utilizing machine learning techniques, our caching scheme can predict and select the content objects to be cached with optimal storage efficiency at the network edge servers. Specifically, we decoupled the content caching problem into two subproblems, namely probability learning and content selection. For probability learning, the edge servers estimate the probability and frequency with which each content object will be requested in the near future. The estimate is based on the content request pattern learned over time. For content selection, the edge servers determine the content objects to cache so as to minimize the expected content delay under limited storage. To evaluate the performance of our proposed scheme, we developed a testbed with real mobile devices and servers. The experimental results validated the feasibility and significant performance gains of the proposed scheme.
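The two subproblems above can be illustrated with a small sketch: request probabilities estimated with an exponentially weighted moving average, followed by a greedy selection of objects ranked by expected delay saving per unit size under the edge storage budget. The EWMA estimator and the greedy knapsack heuristic are assumptions for illustration, not necessarily the learning and selection methods used in the paper.

```python
# Illustrative probability learning (EWMA) and content selection (greedy knapsack).
def update_probabilities(prob, requested_id, catalog, alpha=0.1):
    """EWMA estimate of the per-object request probability."""
    for cid in catalog:
        hit = 1.0 if cid == requested_id else 0.0
        prob[cid] = (1 - alpha) * prob.get(cid, 0.0) + alpha * hit
    return prob

def select_cache(prob, size, delay_saving, capacity):
    """Greedily cache objects with the best expected delay saving per byte."""
    ranked = sorted(prob, key=lambda c: prob[c] * delay_saving[c] / size[c],
                    reverse=True)
    cached, used = [], 0
    for cid in ranked:
        if used + size[cid] <= capacity:
            cached.append(cid)
            used += size[cid]
    return cached
```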
Mobile Edge Computing (MEC) is a key technology to support the emerging low-latency Internet of Things (IoT) applications. With computing servers deployed at the network edge, the computational tasks generated by mobile users can be offloaded to these MEC servers and executed there with low latency. Meanwhile, with the ever-increasing number of mobile users, the communication resource for offloading and the computational resource allocated to each user would become quite limited. As a result, it would be difficult for the MEC servers alone to process all the tasks in a timely manner. An effective approach to deal with this challenge is offloading a proportion of the tasks at MEC servers to the cloud servers, such that both types of servers are efficiently utilized to reduce latency. Given multiple MEC and cloud servers and the dynamics of communication latency, intelligent task assignment between different servers is required. In this paper, we propose a deep reinforcement learning (DRL) based task assignment scheme for MEC networks, aiming to minimize the average task processing latency. Two design parameters of task assignment are optimized, including cloud server selection and task partitioning. Such a problem is formulated as a Markov Decision Process (MDP) and solved with a DRL-based approach, which enables the edge servers to capture the system dynamics and make optimized task assignment strategies accordingly. Simulation results show that the proposed scheme can significantly lower the average task completion latency.
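A toy sketch of the MDP formulation described above, in which an action picks a cloud server and a task-partition fraction and the reward is the negative completion latency; the state variables, the latency model inside step(), and the random dynamics are simplified assumptions, not the paper's system model.

```python
# Illustrative environment for DRL-based task assignment:
# action = (cloud server index, partition index), reward = -latency.
import random

CLOUD_SERVERS = 3
PARTITIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of the task offloaded to the cloud

class TaskAssignmentEnv:
    def reset(self):
        # State: edge queue length plus per-cloud-server RTT estimates (toy values).
        self.state = [random.uniform(0, 1)] + [random.uniform(0.01, 0.2)
                                               for _ in range(CLOUD_SERVERS)]
        return self.state

    def step(self, action):
        server, frac_idx = action
        frac = PARTITIONS[frac_idx]
        edge_queue, rtts = self.state[0], self.state[1:]
        edge_time = (1 - frac) * (0.05 + 0.1 * edge_queue)        # assumed edge model
        cloud_time = frac * 0.04 + (rtts[server] if frac > 0 else 0.0)
        latency = max(edge_time, cloud_time)   # edge and cloud parts run in parallel
        reward = -latency
        next_state = self.reset()              # dynamics randomized purely for illustration
        return next_state, reward, False, {}
```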
Mobile edge computing is a new distributed computing paradigm that brings computation and data storage closer to the location where they are needed, to improve response times and save bandwidth in the dynamic mobile networking environment. Despite the improvements in network technology, data centers cannot always guarantee acceptable transfer rates and response times, which can be a critical requirement for many applications. The aim of mobile edge computing is to move the computation away from data centers towards the edge of the network, exploiting smart objects, mobile phones, or network gateways to perform tasks and provide services on behalf of the cloud. In this paper, we design a task offloading scheme in the mobile edge network to handle task distribution, offloading, and management by applying deep reinforcement learning. Specifically, we formulate the task offloading problem as a multi-agent reinforcement learning problem. The decision-making process of each agent is modeled as a Markov decision process, and a deep Q-learning approach is applied to deal with the large scale of states and actions. To evaluate the performance of our proposed scheme, we develop a simulation environment for the mobile edge computing scenario. Our preliminary evaluation results with a simplified multi-armed bandit model indicate that our proposed solution can provide lower latency for computation-intensive tasks in the mobile edge network and outperforms the naïve task offloading method.
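The simplified multi-armed bandit baseline mentioned above could look like the following epsilon-greedy sketch, where each arm is an offloading target and the reward is the negative observed latency; the arm set and reward definition are assumptions for illustration.

```python
# Illustrative epsilon-greedy bandit agent for per-device offloading decisions.
import random

class EpsilonGreedyOffloader:
    def __init__(self, targets, epsilon=0.1):
        self.targets = targets                 # e.g., ["local", "edge-1", "edge-2", "cloud"]
        self.epsilon = epsilon
        self.counts = {t: 0 for t in targets}
        self.q = {t: 0.0 for t in targets}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.targets)       # explore
        return max(self.targets, key=self.q.get)     # exploit

    def update(self, target, latency):
        self.counts[target] += 1
        reward = -latency                            # lower latency -> higher reward
        self.q[target] += (reward - self.q[target]) / self.counts[target]
```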
Satellite Communication (SATCOM) systems play an increasingly important role in both civilian and tactical scenarios, with large deployments and user groups. However, the long propagation delay and high packet loss rate of SATCOM links over higher-earth-orbit satellites severely degrade communication performance. Existing works, such as Performance Enhancing Proxy (PEP), have addressed this performance issue by splitting the end-to-end TCP connection into several sub-connections, so that the low-performance SATCOM link no longer affects the TCP connection performance along the entire path from the sender to the receiver. Nevertheless, PEP's functionalities can be disabled once data encryption such as High Assurance Internet Protocol Encryption (HAIPE) is introduced to meet the security requirements of communication over SATCOM. Therefore, in this paper we target a solution based on transport-layer tunneling, i.e., TCP and UDP tunnels, to explore the opportunity of re-enabling the PEP functionalities in the presence of data encryption and further enhancing the performance of end-to-end TCP communication. We also designed and implemented a Mininet-based emulation testbed to conduct all experiments and evaluations for a better understanding of the effectiveness of the tunnel solution under different configurations (e.g., TCP congestion control mechanisms). Based on the evaluation results presented in the paper, we further improved the end-to-end TCP communication performance with TCP/UDP tunnels while maintaining the functionalities of performance enhancement solutions like PEP. Moreover, we provide a detailed analysis of the advantages and disadvantages of using different tunnels with different configurations, as well as our recommendations for performance enhancement in such SATCOM communication environments.
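For readers unfamiliar with Mininet, the sketch below shows the kind of emulated SATCOM topology such a testbed could use, with one high-delay, lossy link standing in for the satellite hop; the bandwidth, delay, and loss values are placeholder assumptions, not the paper's configuration.

```python
# Minimal Mininet sketch: two hosts connected through a switch, with the
# second link shaped to emulate a SATCOM hop (high delay, nonzero loss).
from mininet.net import Mininet
from mininet.link import TCLink
from mininet.node import OVSController

net = Mininet(controller=OVSController, link=TCLink)
net.addController('c0')
sender = net.addHost('h1')
receiver = net.addHost('h2')
s1 = net.addSwitch('s1')
net.addLink(sender, s1, bw=100)                       # terrestrial segment
net.addLink(s1, receiver, bw=10, delay='300ms', loss=1)  # emulated SATCOM link
net.start()
net.pingAll()        # sanity check before running TCP/UDP tunnel experiments
net.stop()
```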
When applying the Disruption Tolerant Networking (DTN) technique to satellite communications (SATCOM) with significantly long delays, two problems arise. First, to enhance communication efficiency, Performance Enhancing Proxies (PEPs) used in satellite communications need to be integrated with DTN around SATCOM links, and interoperability between DTN and PEP should be developed. Second, all data moving from a red core (secure intranet) to a black core (unsecured public network) must be encrypted using High Assurance Internet Protocol Encryption (HAIPE) devices. To solve the encryption problem, a TCP-over-TCP solution was proposed, which encodes the original TCP flow information from HAIPE, then reconstructs new TCP streams and encapsulates the HAIPE-encrypted original TCP packets in them. These new TCP streams can be handled natively by PEPs, and thus the full TCP performance can be achieved. However, the TCP-over-TCP solution requires special mechanisms to deal with the interaction between the congestion control of the inner and outer TCP connections. To achieve the congestion goals, this paper develops a throughput system model and provides an analysis of the impacts of TCP retransmission. Our analysis shows a throughput reduction when both the inner and outer TCP react to packet loss. Possible solutions are also proposed using delay shaping to remove the congestion control of the TCP tunnel. An analysis is provided to explain the mechanisms behind our solutions, and experimental results are also provided to support our design.
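As background for the throughput model, the standard steady-state TCP throughput approximation (Mathis et al.) shown below is the kind of relation such a model typically builds on; the exact model developed in the paper may differ, and the final bound is an illustrative simplification of the effect of both loops backing off on the same loss event.

```latex
% B: throughput, MSS: segment size, RTT: round-trip time, p: loss probability.
B \approx \frac{MSS}{RTT}\cdot\frac{C}{\sqrt{p}}, \qquad C \approx \sqrt{3/2}.
% When both the inner and the outer TCP react to the same loss, each connection
% backs off, so the end-to-end rate is bounded by the slower of the two loops:
B_{\mathrm{e2e}} \lesssim \min\!\left(
  \frac{MSS_{\mathrm{in}}}{RTT_{\mathrm{in}}}\cdot\frac{C}{\sqrt{p_{\mathrm{in}}}},\;
  \frac{MSS_{\mathrm{out}}}{RTT_{\mathrm{out}}}\cdot\frac{C}{\sqrt{p_{\mathrm{out}}}}
\right).
```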