This PDF file contains the front matter associated with SPIE Proceedings Volume 9095, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Resident space objects (RSOs) pose a significant threat to orbital assets. Due to high relative velocities, even a small
RSO can cause significant damage to any object it strikes. Worse, in many cases a collision may create numerous additional RSOs if the impacted object shatters apart. These new RSOs will have heterogeneous mass, size, and orbital characteristics. Collision avoidance systems (CASs) are used to maneuver spacecraft out of the path of RSOs to prevent
these impacts. An RSO CAS must be validated to ensure that it can perform effectively given a virtually unlimited
number of strike scenarios.
This paper presents work on the creation of a testing environment and AI testing routine that can be utilized to perform
verification and validation activities for cyber-physical systems. It reviews prior work on automated and autonomous
testing. Comparative performance (relative to the performance of a human tester) is discussed.
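A validation environment of this kind needs to cover the "virtually unlimited" scenario space described above. The sketch below shows one minimal way to sample reproducible randomized strike scenarios for such a test harness; the field names, value ranges, and the `generate_scenarios` helper are illustrative assumptions, not the paper's actual testing routine.

```python
import random
from dataclasses import dataclass

@dataclass
class StrikeScenario:
    """One randomized RSO strike scenario for CAS validation (illustrative fields)."""
    rso_mass_kg: float           # heterogeneous RSO mass
    rso_diameter_m: float        # characteristic size
    rel_velocity_mps: float      # closing speed; LEO collisions can exceed 10 km/s
    approach_azimuth_deg: float  # direction of approach in the local frame

def generate_scenarios(n, seed=0):
    """Sample n scenarios; a fixed seed keeps V&V runs reproducible."""
    rng = random.Random(seed)
    return [
        StrikeScenario(
            rso_mass_kg=rng.uniform(0.001, 500.0),
            rso_diameter_m=rng.uniform(0.001, 2.0),
            rel_velocity_mps=rng.uniform(100.0, 15000.0),
            approach_azimuth_deg=rng.uniform(0.0, 360.0),
        )
        for _ in range(n)
    ]

scenarios = generate_scenarios(1000)
```

Seeding the generator lets a failing scenario be replayed exactly, which matters when comparing an AI testing routine against a human tester on the same inputs.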
Portability scenarios are critical in ensuring that a piece of AI control software will run effectively across the collection
of craft that it is required to control. This paper presents scenarios for control software that is designed to control
multiple craft with heterogeneous movement and functional characteristics. For each prospective target-craft type, its
capabilities, mission function, location, communications capabilities and power profile are presented and performance
characteristics are reviewed. This work will inform future decision making regarding software capabilities, hardware control capabilities, and processing requirements.
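The per-craft attributes enumerated above (capabilities, mission function, location, communications, power profile) suggest a simple profile record against which portability can be checked. The sketch below is a hypothetical data structure, assuming nothing about the paper's actual scenario format; `can_host` illustrates only the simplest kind of portability test.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CraftProfile:
    """Characteristics of one prospective target craft (illustrative fields)."""
    name: str
    mission_function: str
    location: str          # e.g. orbital regime or operating area
    comms: tuple           # supported communication links
    power_budget_w: float  # available electrical power
    max_speed_mps: float   # movement characteristic

def can_host(profile, required_power_w, required_link):
    """A minimal portability check: does this craft meet the software's needs?"""
    return (profile.power_budget_w >= required_power_w
            and required_link in profile.comms)
```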
To better study hyperspectral imaging sensors and remote sensing system performance, the different parts of a remote sensing system can be modeled. Here, a scientific overview of recent modeling techniques is presented in order to identify an appropriate approach for modeling a target detection system. In particular, this study focuses on developments in the modeling of scenes, sensors, and processing algorithms. Moreover, the parallelization of detection methods, which accelerates the target detection process, is emphasized. In conclusion, an appropriate model for evaluating a target detection system can be a hybrid model in which hyperspectral sensors, radars, and local sensors are modeled.
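The parallelization point above can be made concrete with a standard hyperspectral detector. The sketch below scores each pixel spectrum against a target signature with a simplified matched filter (real detectors typically whiten by the background covariance first) and splits the pixel grid into chunks scored concurrently; the chunking scheme and function names are illustrative, not the study's actual algorithms.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def matched_filter_scores(cube, signature):
    """Per-pixel score: normalized correlation of each spectrum with the
    target signature (a simplified, non-whitened matched filter)."""
    flat = cube.reshape(-1, cube.shape[-1])        # (pixels, bands)
    s = signature / np.linalg.norm(signature)
    norms = np.linalg.norm(flat, axis=1) + 1e-12   # avoid division by zero
    return (flat @ s) / norms

def parallel_detect(cube, signature, workers=4):
    """Split the image into row chunks and score them concurrently,
    mirroring the kind of parallelization emphasized above."""
    chunks = np.array_split(cube, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(lambda c: matched_filter_scores(c, signature), chunks))
    return np.concatenate(parts)
```

Because each chunk is independent, the same decomposition maps directly onto processes or GPU work-groups when more aggressive acceleration is needed.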
The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool
used widely throughout Army simulation environments to provide fully attributed, synthesized full-motion video using
physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration
and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for
providing virtual machines with direct access to hardware resources.
The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the
challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and,
ultimately, Cloud-Based IAS architectures. In addition, it presents the path that led to success for the NVIG.
A brief overview of Cloud-Based infrastructure management tool sets is provided, and several virtual desktop solutions
are outlined. A discrimination is made between general purpose virtual desktop technologies compared to technologies
that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level Cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.
In this paper, a high-fidelity RF modeling and simulation framework is demonstrated to model an airborne multi-channel
receiver system that is used to estimate the angle of arrival (AoA) of received signals from a stationary emitter. The
framework is based on System Tool Kit (STK®), Matlab and SystemVue®. The SystemVue-based multi-channel receiver
estimates the AoA of incoming signals using adjacent channel amplitude and phase comparisons, and it estimates the
Doppler frequency shift of the aircraft by processing the transmitted and received signals. The estimated AoA and
Doppler frequency are compared with the ground-truth data provided by STK to validate the efficacy of the modeling
process. Unlike other current RF electronic warfare simulation frameworks, the received signal described herein is
formed using the received power, the propagation delay and the transmitted waveform, and does not require information
such as Doppler frequency shift or radial velocity of the moving platform from the scenario; hence, the simulation is
more computationally efficient. In addition, to further reduce the overall modeling and simulation time, since the high-fidelity
model computation is costly, the high-fidelity electronic system model is invoked only when the received power is
higher than a predetermined threshold.
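The adjacent-channel phase comparison above can be illustrated with the standard two-element interferometer relation, delta_phi = 2*pi*d*sin(theta)/lambda, together with the power gate that triggers the high-fidelity model. The function names, the baseline, and the threshold value below are illustrative assumptions, not the paper's SystemVue implementation.

```python
import numpy as np

def aoa_from_phase(delta_phi_rad, baseline_m, freq_hz):
    """Estimate angle of arrival (degrees) from the phase difference between
    two adjacent channels, using the two-element interferometer relation
    delta_phi = 2*pi*d*sin(theta)/lambda."""
    c = 299_792_458.0                 # speed of light, m/s
    lam = c / freq_hz
    sin_theta = delta_phi_rad * lam / (2.0 * np.pi * baseline_m)
    return float(np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0))))

def gate_high_fidelity(received_power_dbm, threshold_dbm=-90.0):
    """Invoke the costly high-fidelity model only above a power threshold,
    as described above (threshold value is a placeholder)."""
    return received_power_dbm > threshold_dbm
```

Note that a single phase difference only resolves angles within one ambiguity interval of the baseline; multi-baseline or amplitude comparison is needed to resolve the rest, which is one reason multiple adjacent channels are compared.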
Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual
reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in
the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These
simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high
degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic.
Some tracking systems even combine these technologies to complement each other. However, there are no systems that
provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that
simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be
combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of
individual tracking systems by combining data from multiple sources and presenting it as a single tracking system.
Individual tracked objects are identified by name, and their data is provided to simulation applications through a server
program. This allows tracked objects to transition seamlessly from the area of one tracking system to another.
Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API
that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems
are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows
simulation operators to leverage limited resources in more effective ways, improving the quality of training.
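The server architecture described above, where named objects are reported by heterogeneous trackers and exposed through one API, can be sketched as a small aggregator. The class below uses a latest-timestamp-wins hand-off policy; that policy, and all names here, are simplifying assumptions rather than the paper's actual server design.

```python
class TrackingAggregator:
    """Presents multiple tracking systems as one: each tracked object is
    identified by name, and the most recent report from any source wins
    (a simplified hand-off policy for transitions between tracker areas)."""

    def __init__(self):
        self._latest = {}   # name -> (timestamp, position, source)

    def report(self, source, name, timestamp, position):
        """Called by per-system driver adapters after converting their native
        formats to a common (t, position) representation."""
        current = self._latest.get(name)
        if current is None or timestamp >= current[0]:
            self._latest[name] = (timestamp, position, source)

    def query(self, name):
        """Clients see one unified API, regardless of which underlying
        tracker currently covers the object."""
        _, position, _ = self._latest[name]
        return position
```

Rejecting stale reports by timestamp is what makes an object's transition between tracking volumes appear seamless to the simulation client.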
The Integrated Sensor Architecture (ISA) is an interoperability solution that allows for the sharing of information
between sensors and systems in a dynamic tactical environment. The ISA created a Service Oriented Architecture (SOA)
that identifies common standards and protocols which support a net-centric system of systems integration. Utilizing a
common language, these systems are able to connect, publish their needs and capabilities, and interact with other
systems even on disadvantaged networks. Within the ISA project, three levels of interoperability were defined, implemented, and tested at numerous events. Extensible data models and capabilities that are scalable
across multi-echelons are supported, as well as dynamic discovery of capabilities and sensor management. The ISA has
been tested and integrated with multiple sensors, platforms, and over a variety of hardware architectures in operational
environments.
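The publish-and-discover pattern described above can be sketched as a toy capability registry. This is a generic service-oriented-architecture illustration; the message shapes and method names are hypothetical and do not reproduce the ISA's actual standards or protocols.

```python
class ServiceRegistry:
    """A toy SOA registry: systems publish capabilities, and discovery
    matches consumers to providers (hypothetical interface)."""

    def __init__(self):
        self._capabilities = {}   # capability name -> set of provider ids

    def publish(self, system_id, capabilities):
        """A connecting system announces what it can provide."""
        for cap in capabilities:
            self._capabilities.setdefault(cap, set()).add(system_id)

    def discover(self, needed):
        """Return providers able to satisfy every requested capability."""
        providers = None
        for cap in needed:
            found = self._capabilities.get(cap, set())
            providers = found if providers is None else providers & found
        return providers or set()
```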
We examine the performance of a commercially available speckle imaging system in reconstructing static scenes from imagery corrupted by the anisoplanatic distortions commonly observed when imaging over long horizontal paths near the ground. Performance is evaluated using the mean squared error (MSE) between system outputs and a diffraction-limited reference image. Input image frames are taken from a large library of simulated imagery of a static object observed over a 1 km horizontal path through volume turbulence under three turbulence conditions. One thousand image frames are available for each condition, allowing for a statistically significant characterization of system performance over a range of turbulence conditions.
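The figure of merit above is straightforward to state explicitly. The sketch below computes the per-frame MSE against the diffraction-limited reference; the `ensemble_mse` helper, which averages over a frame library, is an assumed harness detail rather than the study's exact procedure.

```python
import numpy as np

def mse(reconstruction, reference):
    """Mean squared error between a reconstructed frame and the
    diffraction-limited reference image."""
    r = np.asarray(reconstruction, dtype=float)
    g = np.asarray(reference, dtype=float)
    return float(np.mean((r - g) ** 2))

def ensemble_mse(reconstructions, reference):
    """Average MSE over an ensemble of frames for one turbulence condition
    (e.g. the 1000-frame library described above)."""
    return float(np.mean([mse(r, reference) for r in reconstructions]))
```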
The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason, kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run-time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and utilizes this as well as user input to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatic tuning achieves a peak of 2.7 TFLOPS.
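Generating a search space from device queries and constraints, as described above, can be sketched in a few lines. The parameter names, candidate values, and constraint set below are a hypothetical subset (real tuners also check vector widths, register pressure, and more); the device limits would come from OpenCL queries such as CL_DEVICE_MAX_WORK_GROUP_SIZE and CL_DEVICE_LOCAL_MEM_SIZE.

```python
from itertools import product

def gemm_search_space(max_work_group, local_mem_bytes, dtype_bytes=4):
    """Enumerate candidate (tile, work-group) parameters for a tiled GEMM
    kernel, pruned by device limits supplied by the OpenCL runtime."""
    candidates = []
    for tile, wg_x, wg_y in product((8, 16, 32, 64),
                                    (4, 8, 16, 32),
                                    (4, 8, 16, 32)):
        if wg_x * wg_y > max_work_group:
            continue   # exceeds the device's maximum work-group size
        if tile % wg_x or tile % wg_y:
            continue   # work-items must tile the block evenly
        if 2 * tile * tile * dtype_bytes > local_mem_bytes:
            continue   # the A and B tiles must both fit in local memory
        candidates.append({"TILE": tile, "WG_X": wg_x, "WG_Y": wg_y})
    return candidates
```

Each surviving candidate would then be compiled and timed on the target device, with the fastest configuration retained, which is the automatic-tuning loop the abstract describes.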
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations
have substantial implementation differences. The abstractions provided by the OpenCL API are
often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations
often do not take advantage of potential performance gains from certain features due to hardware limitations
and other factors. These factors make it challenging to produce code that is portable in practice, resulting in
much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort
offsets the principal advantage of OpenCL: portability.
The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted
to perform well across a wide range of hardware platforms. To this end, we explore some general practices
for producing performant code that are effective across platforms. Additionally, we explore some ways of
modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics.
The minimum requirement for portability implies avoiding the use of OpenCL features that are optional,
not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of
parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down
to explicit vector operations. Static optimizations and branch elimination in device code help the platform
compiler to effectively optimize programs. Modularization of some code is important to allow operations to
be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow
for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT
compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in
hardware-specific optimizations as necessary.
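The preprocessor/JIT technique above amounts to specializing one portable kernel source per device before compilation. The sketch below shows the host-side string step only; the helper name and define set are illustrative, and in a real host program the same effect can be had by passing options like "-D TILE=16" to clBuildProgram.

```python
def specialize_kernel(template, defines):
    """Prepend preprocessor #defines to portable OpenCL C source before JIT
    compilation, so a single code base is tuned per device. Fixing these
    constants at build time lets the platform compiler eliminate branches
    and unroll loops statically, as discussed above."""
    header = "".join(f"#define {k} {v}\n" for k, v in defines.items())
    return header + template
```

Usage: `specialize_kernel(gemm_source, {"TILE": 16, "USE_LOCAL_MEM": 1})` yields source in which `#if USE_LOCAL_MEM` blocks are resolved at compile time, which is one way to keep optional, hardware-specific optimizations out of the common code path.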