Proc. SPIE. 11422, Sensors and Systems for Space Applications XIII
KEYWORDS: Information fusion, Data modeling, Sensors, Control systems, Avionic systems, Space operations, Computer architecture, Systems modeling, Computer security, Network security
Advancements in artificial intelligence (AI) and machine learning (ML), dynamic data driven application systems (DDDAS), and the hierarchical cloud-fog-edge computing paradigm provide opportunities for enhancing the performance of multi-domain systems. As one example of a multi-domain scenario, a “fly-by-feel” system utilizes the DDDAS framework to support autonomous operations and improve maneuverability, safety, and fuel efficiency. The DDDAS “fly-by-feel” avionics system can enhance multi-domain coordination to support domain-specific operations. However, conventional enabling technologies rely on centralized data aggregation, sharing, and security policy enforcement, which incurs critical issues such as performance bottlenecks and weak data provenance and consistency. Inspired by containerized microservices and blockchain technology, this paper introduces BLEM, a hybrid BLockchain-Enabled secure Microservices fabric that supports decentralized, secure, and efficient data fusion.
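The abstract does not detail the BLEM fabric itself; as a minimal sketch of the decentralized-audit idea it builds on, the hypothetical Python ledger below hash-chains data-fusion events reported by microservices, so that tampering with any recorded event invalidates every later entry. All service names and payloads are invented for illustration.

```python
import hashlib
import json
import time

class AuditLedger:
    """Minimal hash-chained event log (illustrative stand-in for a blockchain ledger)."""

    def __init__(self):
        self.chain = []  # event records, each linked to its predecessor by hash

    def append(self, service: str, payload: dict) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"service": service, "payload": payload,
                  "timestamp": time.time(), "prev_hash": prev_hash}
        # Hash the record contents together with the previous hash to form the chain.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier record breaks the chain."""
        prev_hash = "0" * 64
        for record in self.chain:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

# Hypothetical services logging fusion events.
ledger = AuditLedger()
ledger.append("edge-sensor-1", {"fused_track": [12.4, 7.9]})
ledger.append("fog-fuser", {"policy": "share-with-domain-B"})
assert ledger.verify()
```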
Connected societies require reliable measures to assure the safety, privacy, and security of their members. Public safety technology has improved fundamentally since the first generation of surveillance cameras was introduced with the aim of reducing the role of human observers so that no abnormality goes unnoticed. While the edge computing paradigm promises solutions to the shortcomings of cloud computing, e.g., extra communication delay and network security issues, it also introduces new challenges. One of the main concerns is the limited computing power at the edge, which must meet on-site dynamic data processing demands. In this paper, a Lightweight IoT (Internet of Things) based Smart Public Safety (LISPS) framework is proposed on top of a microservices architecture. As a computing hierarchy at the edge, the LISPS system possesses high flexibility in the design process, loose coupling that allows new services to be added or existing functions updated without interrupting normal operations, and efficient power balancing. A real-world public safety monitoring scenario, which detects and tracks human objects and identifies suspicious activities, is selected to verify the effectiveness of LISPS. The experimental results demonstrate the feasibility of the approach.
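As a rough illustration of the loose coupling described above (not the LISPS implementation), the following Python sketch wires hypothetical detection and activity-analysis stages together only through queues, so a stage can be replaced or updated without touching its neighbors.

```python
import queue
import threading

# Each stage reads from an input queue and writes to an output queue, so stages
# can be swapped or updated independently (the loose coupling described above).
def stage(process, q_in, q_out):
    while True:
        item = q_in.get()
        if item is None:          # sentinel: shut the stage down
            if q_out is not None:
                q_out.put(None)
            break
        result = process(item)
        if q_out is not None and result is not None:
            q_out.put(result)

frames, detections, alerts = queue.Queue(), queue.Queue(), queue.Queue()

# Hypothetical stand-ins for the real detector and analyzer microservices.
detect  = lambda frame: {"frame": frame, "boxes": [(10, 10, 50, 80)]}
analyze = lambda det: f"suspicious activity near {det['boxes'][0]}" if det["boxes"] else None

threading.Thread(target=stage, args=(detect, frames, detections)).start()
threading.Thread(target=stage, args=(analyze, detections, alerts)).start()

for frame_id in range(3):
    frames.put(frame_id)
frames.put(None)                  # propagate shutdown through the pipeline

while (msg := alerts.get()) is not None:
    print(msg)
```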
Proc. SPIE. 10988, Automatic Target Recognition XXIX
KEYWORDS: Information fusion, Image fusion, Infrared imaging, Mid-IR, Data modeling, Sensors, Image processing, Molybdenum, Systems modeling, Data fusion
The resurgence of interest in artificial intelligence (AI) stems from impressive deep learning (DL) performance, such as hierarchical supervised training using a Convolutional Neural Network (CNN). Current DL methods should provide contextual reasoning, explainable results, and repeatable understanding, all of which require evaluation methods. This paper discusses DL techniques using multimodal (or multisource) information that extend measures of performance (MOP). Examples of joint multimodal learning include imagery and text, video and radar, and other common sensor types. Issues with joint multimodal learning challenge many current methods, and care is needed when applying machine learning methods. Results from Deep Multimodal Image Fusion (DMIF) using electro-optical and infrared data demonstrate performance modeling based on distance, to better understand DL robustness and quality and to provide situation awareness.
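The distance-based performance modeling can be illustrated with a toy measure-of-performance table; the data below are synthetic stand-ins (no DMIF results are reproduced), showing only the mechanics of binning detection accuracy by sensor-to-target range.

```python
import numpy as np

# Hypothetical detection results: (target range in meters, correct?) pairs
# standing in for model outputs; the real EO/IR data are not reproduced here.
rng = np.random.default_rng(0)
ranges = rng.uniform(100, 2000, size=500)
# Assume, for illustration only, that accuracy degrades with range.
correct = rng.random(500) < np.clip(1.0 - ranges / 2500, 0.2, 1.0)

# Bin detections by range and report per-bin accuracy (an MOP-vs-distance curve).
bins = np.linspace(100, 2000, 6)
idx = np.digitize(ranges, bins)
for b in range(1, len(bins)):
    mask = idx == b
    if mask.any():
        print(f"{bins[b-1]:6.0f}-{bins[b]:6.0f} m: "
              f"accuracy {correct[mask].mean():.2f} over {mask.sum()} samples")
```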
This paper studies data fusion applications in traditional, spatial, and aerial video stream applications, addressing the processing of data from multiple sources using co-occurrence information and a common semantic metric. Using co-occurrence information to infer semantic relations between measurements avoids the need for external information such as labels. Many current Vector Space Models (VSM) do not preserve co-occurrence information, leading to a similarity metric of limited use. We propose a proximity matrix embedding as part of the metric learning, whose entries reflect the co-occurrence frequencies observed in the input sets. First, we show an implicit spatial sensor proximity matrix calculation using Jaccard similarity for an array of sensor measurements and compare it with state-of-the-art kernel PCA learning from the feature-space proximity representation; it relates to a k-radius ball of nearest neighbors. Finally, we extend the class co-occurrence boosting of our unsupervised model using pre-trained multi-modal model reuse.
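A minimal sketch of the first step, under the assumption that each sensor's measurements are binary event indicators: the Jaccard proximity matrix is computed pairwise over sensors, and the same matrix is fed to kernel PCA as a precomputed kernel for the feature-space comparison. Data and dimensions are invented for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import KernelPCA

# Hypothetical binary sensor measurements: rows are sensors, columns indicate
# whether a given event was observed by that sensor (co-occurrence sets).
rng = np.random.default_rng(1)
measurements = rng.random((8, 40)) < 0.3   # 8 sensors, 40 possible events

# Proximity matrix from Jaccard similarity: |A ∩ B| / |A ∪ B| per sensor pair.
proximity = 1.0 - squareform(pdist(measurements, metric="jaccard"))

# Baseline: kernel PCA embedding of the same proximity representation,
# treated as a precomputed kernel (the feature-space comparison in the text).
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(proximity)
print(np.round(proximity, 2))
print(np.round(embedding, 2))
```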
Traditional event detection from video frames is based on batch or offline algorithms: it is assumed that a single event is present within each video, and videos are processed via a pre-processing algorithm that requires enormous amounts of computation and CPU time. While this can be suitable for tasks with specified training and testing phases where time is not critical, it is entirely unacceptable for real-world applications that require prompt, real-time event interpretation. Motivated by the recent success of multiple-model feature learning, such as generative adversarial networks (GANs), we propose a two-model approach for real-time detection. GANs learn a generative model of the dataset and optimize it further using a discriminator that learns per-sample differences between generated images. Analogously, the proposed architecture uses a model pre-trained on a large dataset to boost weakly labeled instances, in parallel with deep layers for small aerial targets, achieving high accuracy at a fraction of the computation time for training and detection. We emphasize previous work on unsupervised learning due to the overhead of labeling training data in the sensor domain.
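The two-model idea can be sketched as pseudo-label boosting; the sketch below is an assumption-laden stand-in (sklearn models on synthetic data rather than deep networks on aerial video): a model trained on a large labeled set assigns labels to weakly labeled instances, and only its confident outputs train a lighter, faster second model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Hypothetical data: a large labeled source set and a weakly labeled target set.
rng = np.random.default_rng(2)
X_src = rng.normal(size=(500, 16))
y_src = (X_src[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)
X_tgt = rng.normal(size=(200, 16))

# Model 1: "pre-trained" on the large dataset (stand-in for the pre-trained model).
pretrained = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
pretrained.fit(X_src, y_src)

# Model 2: a lightweight model trained only on confidently pseudo-labeled targets,
# so it trains and runs at a fraction of the cost.
proba = pretrained.predict_proba(X_tgt)
confident = proba.max(axis=1) > 0.8
fast_model = LogisticRegression().fit(X_tgt[confident], proba[confident].argmax(axis=1))
print(f"{confident.sum()} of {len(X_tgt)} weak instances boosted into training")
```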
In this work, we investigate and compare centrality metrics on several datasets. Many real-world complex systems can be addressed using a graph-based analytical approach, where nodes represent the components of the system and edges are the interactions or relationships between them. Different systems, such as communication networks and critical infrastructure, are known to exhibit common characteristics in their behavior and structure. Infrastructure networks such as power grids, communication networks, and natural gas networks are interdependent. These systems are usually coupled such that failures in one network can propagate and affect the entire system. The purpose of this analysis is to perform a metric analysis on synthetic infrastructure data. Our view of critical infrastructure systems holds that the function of each system, and especially the continuity of that function, is of primary importance. In this work, we view an infrastructure as a collection of interconnected components that work together as a system to achieve a domain-specific function. The importance of a single component within an infrastructure system is based on how it contributes, which we assess with centrality metrics.
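As a concrete sketch of that last point, the following snippet (using the networkx library on an invented toy topology mixing power, communication, and gas components) scores each component under three standard centrality metrics.

```python
import networkx as nx

# Synthetic interdependent-infrastructure graph: nodes are components, edges are
# dependencies (a hypothetical stand-in for the paper's synthetic datasets).
G = nx.Graph()
G.add_edges_from([
    ("substation_A", "substation_B"), ("substation_B", "router_1"),
    ("router_1", "router_2"), ("router_2", "compressor_1"),
    ("substation_B", "compressor_1"), ("compressor_1", "compressor_2"),
])

# Three common views of component importance: local connectivity (degree),
# brokerage along shortest paths (betweenness), and reachability (closeness).
metrics = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
}
for name, scores in metrics.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} most critical: {top} ({scores[top]:.2f})")
```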
In machine learning, a good predictive model is one that generalizes well over future unseen data. In general, this problem is ill-posed. To mitigate it, a predictive model can be constructed by simultaneously minimizing an empirical error over training samples and controlling the complexity of the model. Thus, regularized least squares (RLS) was developed. RLS requires matrix inversion, which is expensive, and as such its “big data” applications can be adversely affected. To address this issue, we have developed an efficient machine learning algorithm for pattern recognition that approximates RLS. The algorithm does not require matrix inversion and achieves competitive performance against the RLS algorithm. It has been shown mathematically that RLS is a sound learning algorithm. Therefore, a definitive statement about the relationship between the new algorithm and RLS will lay a solid theoretical foundation for the new algorithm. A recent study shows that the spectral norm of the kernel matrix in RLS is tightly bounded above by the size of the matrix. This spectral norm becomes a constant when the training samples have independent centered sub-Gaussian coordinates. For example, typical sub-Gaussian random vectors such as the standard normal and Bernoulli satisfy this assumption. Basically, each sample is drawn from a product distribution formed from some centered univariate sub-Gaussian distributions. These new results allow us to establish a bound between the new algorithm and RLS in finite samples and to show that the new algorithm converges to RLS in the limit. Experimental results are provided that validate the theoretical analysis and demonstrate the new algorithm to be very promising for solving “big data” classification problems.
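For reference, kernel RLS solves (K + λI)α = y in closed form, which is where the matrix inversion cost arises. The paper's inversion-free algorithm is not given in the abstract; the sketch below instead uses a simple Richardson iteration as an illustrative inversion-free approximation, with the step size taken from the spectral norm of K that the cited analysis bounds.

```python
import numpy as np

# Hypothetical data standing in for a "big data" classification task.
rng = np.random.default_rng(3)
n, lam = 300, 1.0
X = rng.normal(size=(n, 10))                    # centered sub-Gaussian coordinates
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))

K = X @ X.T                                     # linear kernel matrix

# Closed-form kernel RLS: solve (K + lam*I) alpha = y, an O(n^3) operation.
alpha_rls = np.linalg.solve(K + lam * np.eye(n), y)

# Inversion-free approximation (illustrative; not the paper's algorithm):
# Richardson iteration on the same system uses only matrix-vector products.
# The step size comes from the spectral norm of K, which the analysis cited
# in the text bounds by the size of the matrix.
eta = 1.0 / (np.linalg.norm(K, 2) + lam)
alpha = np.zeros(n)
for _ in range(5000):
    alpha += eta * (y - K @ alpha - lam * alpha)

print("gap to RLS:", np.linalg.norm(alpha - alpha_rls))
```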
Proc. SPIE. 9091, Signal Processing, Sensor/Information Fusion, and Target Recognition XXIII
KEYWORDS: Information fusion, Roads, Visual analytics, Data modeling, Visualization, Sensors, Video, Optical tracking, Information visualization, Data fusion
Graphical fusion methods are popular for describing distributed sensor applications such as target tracking and pattern recognition. Additional graphical methods include network analysis for social, communications, and sensor management. With the growing availability of various data modalities, graphical fusion methods are widely used to combine data from multiple sensors and modalities. To better understand the usefulness of graph fusion approaches, we address visualization to increase user comprehension of multi-modal data. The paper demonstrates a use case that combines graphs from text reports and target tracks to associate events and activities of interest in a visualization for testing Measures of Performance (MOP) and Measures of Effectiveness (MOE). The analysis includes the presentation of the separate graphs and then a graph-fusion visualization linking network graphs for tracking and classification.
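A toy version of that graph fusion, assuming networkx and invented report/track graphs: the soft graph from text reports and the hard graph from target tracks are composed, and their shared nodes become the associations between events and tracks.

```python
import networkx as nx

# Hypothetical soft graph built from text reports: entities linked by mentions.
reports = nx.Graph()
reports.add_edges_from([("vehicle_42", "checkpoint_N"), ("vehicle_42", "event_meeting")])

# Hypothetical hard graph built from target tracks: track points linked in sequence.
tracks = nx.Graph()
tracks.add_edges_from([("track_7_t0", "track_7_t1"), ("track_7_t1", "checkpoint_N")])

# Graph fusion: union the two graphs; shared nodes (here "checkpoint_N") become
# the links that associate reported activities with sensed tracks.
fused = nx.compose(reports, tracks)
shared = set(reports) & set(tracks)
print("association nodes:", shared)
print("fused edges:", list(fused.edges))
```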
KEYWORDS: Information fusion, Analytics, Visual process modeling, Visual analytics, Data modeling, Aerospace engineering, Visualization, Sensors, Information visualization, Data fusion
Visualization is important for multi-intelligence fusion, and we demonstrate issues in presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived solutions (e.g., text) typically involve language processing. Both results can be geographically displayed for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users, be they operators or analysts. Operators require near-real-time solutions, while analysts have the opportunity of non-real-time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept, which has been applied to piloting, space situation awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for a multi-intelligence fusion application for context-enhanced information fusion.