This paper describes a method to generate a large-area, high-fidelity synthetic thermal map used to create synthetic IR sensor images. The computer program used to construct the thermal map is called SYGTHERM. Construction of the detailed thermal map is achieved using general material polygons, detailed digital terrain maps, high-resolution polygon objects, thermal estimation algorithms, and generic thermal textures extracted from radiometric data. Infrared images representing various times of day in Texas and California are presented. Three-dimensional (3D) polygonal information used in image construction is based on delineation of stereo-pair images taken at the specified sites. High-resolution wireframes are placed in the scenes to represent possible structures of interest. Spatial image comparisons to actual radiometric data are shown.
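A minimal sketch of the kind of composition described, assuming a material-classified raster, per-material mean temperatures, and zero-mean thermal textures; the names and structure below are illustrative assumptions, not SYGTHERM's actual interfaces:

```python
import numpy as np

# Hypothetical illustration of thermal-map composition; not SYGTHERM itself.
rng = np.random.default_rng(0)

# Material-classified raster (0 = grass, 1 = asphalt, 2 = water), as would
# come from general material polygons rasterized over the terrain.
material_ids = rng.integers(0, 3, size=(256, 256))

# Assumed per-material mean temperatures (kelvin) for one time of day;
# a thermal estimation algorithm would supply these.
mean_temp_k = {0: 300.0, 1: 315.0, 2: 295.0}

# Generic thermal textures: zero-mean fluctuation fields, which the paper
# extracts from radiometric data; synthetic noise stands in here.
texture_k = {m: 1.5 * rng.standard_normal((256, 256)) for m in mean_temp_k}

# Compose the thermal map: per-material mean plus thermal texture.
thermal_map = np.zeros((256, 256))
for m, t_mean in mean_temp_k.items():
    mask = material_ids == m
    thermal_map[mask] = t_mean + texture_k[m][mask]
```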
Imaging electro-optical system simulations require high-resolution digital terrain databases to supply the image inputs to the simulation. These systems frequently employ spectral processing of the input data as part of their operation, so generation of spectral signature data is critical to accurate simulation. This paper describes a prototype effort to generate background signatures that account for the spectral variations in the illumination sources and the background optical properties. These spectral signature data are subsequently employed in the rendering of high-fidelity scenes to support the simulation.
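In outline, a background spectral signature couples an illumination spectrum to the background's spectral reflectance and integrates over the sensor band. The sketch below uses invented spectra and band limits; a real system would draw these from solar/atmosphere models and measured material properties:

```python
import numpy as np

# Wavelength grid (micrometers) spanning a visible/near-IR band.
wl = np.linspace(0.4, 1.0, 301)

# Assumed spectra: a crude daylight-like illumination peak and a
# vegetation-like reflectance with a red edge.
illumination = np.exp(-((wl - 0.55) ** 2) / 0.05)
reflectance = 0.1 + 0.4 * (wl > 0.7)

# Background spectral signature, ignoring atmospheric path effects here.
signature = illumination * reflectance

# In-band value for a sensor with flat response over 0.6-0.9 um
# (trapezoidal integration, written out for portability).
band = (wl >= 0.6) & (wl <= 0.9)
s, w = signature[band], wl[band]
in_band = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(w))
print(f"in-band background signature: {in_band:.4f} (arbitrary units)")
```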
This paper highlights three significant aspects of the proposed visualization system, which features numerous advanced techniques intended to synthesize a highly realistic visual environment for a diverse set of applications. The geometric model is based on a non-polygonal representation; however, it also supports traditional objects, i.e., polygonal models can be reconstructed and visualized. All geometric objects are uniformly and efficiently processed on dedicated hardware, which implements the most specialized and computationally expensive operations. A volume-oriented rasterization algorithm and the uniformity of object processing yield efficient hidden-surface removal and detection of spatial collisions. The chosen representation of terrain data is based on a regular elevation map complemented with levels of detail. This approach has several advantages (rapid generation and modification, efficient data storage and retrieval) over polygonized terrain models. The set of geometric objects is extended with freeform surfaces, which simplify the composition of smooth complex objects while reducing the data needed to store them. Thematic textures are used to decorate the terrain surface. It is assumed that the observer is rarely interested in exact photographic information about wide areas covered by, for instance, forest or water, so the benefits of photographic texture can be obtained by composing a number of patterns called "themes" to produce the final texture of a specific area. The goal of using thematic textures is to eliminate the need for a global area texture; in addition, animation features become available by simple means.
Keywords: freeform surfaces, recursive multilevel ray casting, shape texture, volume visualization.
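The basic idea of casting rays against a regular elevation map can be sketched as below; the paper's approach adds recursive multilevel traversal and levels of detail on top of this simple march, and the grid, step size, and scene here are assumptions:

```python
import numpy as np

def ray_cast_heightfield(elev, origin, direction, step=0.5, max_t=500.0):
    """March a ray through a regular elevation grid and return the first
    hit point, or None. elev[i, j] is terrain height at (x=j, y=i)."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    t = 0.0
    while t < max_t:
        p = np.asarray(origin, float) + t * d
        j, i = int(p[0]), int(p[1])
        if not (0 <= i < elev.shape[0] and 0 <= j < elev.shape[1]):
            return None                      # left the terrain extent
        if p[2] <= elev[i, j]:
            return p                         # ray dipped below the surface
        t += step
    return None

# Example: gentle synthetic terrain viewed from above at an oblique angle.
y, x = np.mgrid[0:128, 0:128]
terrain = 5.0 * np.sin(x / 13.0) * np.cos(y / 17.0) + 10.0
hit = ray_cast_heightfield(terrain, origin=(5, 5, 60), direction=(1, 1, -0.6))
print("hit:", hit)
```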
This paper reports on the continuing development of a DIS-compliant model for an airborne platform carrying a multisensor payload. This payload consists of a moving target indicator (MTI) radar, a cooperative battlefield combat identification system (BCIS), and imaging sensors. The imaging sensors are a synthetic aperture radar (SAR) and a forward-looking infrared (FLIR) imager. The entire platform model is an extension to the ModSAF environment. The sensor model code is fully portable and integrated as ModSAF libraries. Relevant emission protocol data units (PDUs) are generated and transmitted. The overall simulation architecture and the MTI and BCIS models have been described in detail elsewhere. The current work concentrates on the development of real-time model-based imaging functions. The software tools which provide this capability are available both in the government-owned inventory and as commercial products. The purpose of the current activity is to investigate the feasibility of integrating software of this kind with the ModSAF environment in order to produce realistic target/scene renderings similar to those obtained by high-resolution imaging sensors. To this end, we investigated real-time scene generation using two approaches: the first through integration of the IRMA software package developed and distributed by the USAF Wright Laboratories, Eglin AFB, and the second through use of the commercial software package SensorVision™, which is marketed and distributed by Paradigm Solutions, Inc. Both produce scene renderings in user-specified wavebands by combining entity state PDU information with terrain data. The scene model information is passed to rendering software to produce an IR or SAR rendering of the scene.
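For illustration only, a drastically simplified stand-in for an entity-state record shows how entity information might be packed for transport and consumed by a renderer. This is not the IEEE 1278 DIS wire format, and the field set is an assumption:

```python
import struct
from dataclasses import dataclass

@dataclass
class SimpleEntityState:
    """Toy stand-in for an entity-state PDU; real DIS PDUs carry many
    more fields (entity type, orientation, dead reckoning, etc.)."""
    entity_id: int
    x: float      # world coordinates, meters
    y: float
    z: float

    def pack(self) -> bytes:
        # Network byte order: unsigned int id followed by three doubles.
        return struct.pack("!I3d", self.entity_id, self.x, self.y, self.z)

    @classmethod
    def unpack(cls, data: bytes) -> "SimpleEntityState":
        return cls(*struct.unpack("!I3d", data))

# A renderer-side consumer would merge records like this with terrain data
# to place the entity in the IR or SAR scene.
wire = SimpleEntityState(42, 1200.0, -340.5, 85.0).pack()
print(SimpleEntityState.unpack(wire))
```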
The usable displayed area (UDA) in an airborne scanning sensor (ASS) depends on various factors, some of which are interdependent. We analyze the effects of these factors on the overall geometry of the UDA and the usefulness of the image. First we assume a static target-sensor scenario and conclude that the shape of the UDA is roughly trapezoidal. Then we consider the dynamics of the system and find that the image can be divided into four regions, of which the central region offers the best image quality.
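A small worked example, with geometry and parameter values assumed for illustration, shows why the static footprint comes out roughly trapezoidal: the far edge of the scan subtends more ground than the near edge.

```python
import math

# Assumed sensor geometry: altitude, depression angle to the scan center,
# vertical field of view, and horizontal scan half-angle.
h = 3000.0                      # altitude, meters
depression = math.radians(30.0)
vfov = math.radians(10.0)
scan_half = math.radians(20.0)

# Ground ranges to the near and far edges of the vertical field of view.
r_near = h / math.tan(depression + vfov / 2)
r_far = h / math.tan(depression - vfov / 2)

# Swath widths at the near and far edges for the same scan angle.
w_near = 2 * r_near * math.tan(scan_half)
w_far = 2 * r_far * math.tan(scan_half)

print(f"near edge: range {r_near:7.0f} m, width {w_near:7.0f} m")
print(f"far edge:  range {r_far:7.0f} m, width {w_far:7.0f} m")
# The far edge is both farther and wider: a trapezoidal usable area.
```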
The ground area protected from attack by an air defense system and the volume in space where intercepts are feasible are critical measures of system effectiveness. Representing these regions in three dimensions enables policy makers and mission planners to readily compare and contrast air defense systems and asset deployment options. This paper presents an innovative and efficient algorithm to create a data structure which permits the defended area and the intercept region to be represented as closed, color-coded surfaces in three dimensions. The algorithm has been developed for use within the MATLAB programming environment and does not require data to be input on a uniform spatial grid. The resulting closed surfaces are complex and are not required to be symmetric or convex.
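A hedged sketch of the underlying idea, not the authors' algorithm: scattered boundary points, expressed as range samples over direction, can be interpolated onto a regular angular grid and closed into a 3-D surface. The spherical parameterization below is an assumption that suits star-shaped regions; the paper's data structure handles more general shapes.

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered boundary samples: directions (azimuth, elevation) and the
# range at which the defended region's boundary lies (synthetic data,
# deliberately not on a uniform grid).
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 400)
el = rng.uniform(0, np.pi / 2, 400)
r = 30.0 + 8.0 * np.sin(2 * az) * np.cos(el)   # asymmetric boundary

# Interpolate range onto a regular angular grid ("nearest" avoids gaps
# at the grid edges in this toy example).
AZ, EL = np.meshgrid(np.linspace(0, 2 * np.pi, 90),
                     np.linspace(0, np.pi / 2, 45))
R = griddata((az, el), r, (AZ, EL), method="nearest")

# Cartesian coordinates of the closed surface, ready for rendering
# (e.g., matplotlib plot_surface colored by R).
X = R * np.cos(EL) * np.cos(AZ)
Y = R * np.cos(EL) * np.sin(AZ)
Z = R * np.sin(EL)
```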
The user-friendly platform for ground-based radar analysis of debris environments (UPGRADE) workstation is built on a simulation architecture developed to provide a flexible framework for modeling post-intercept debris and the resultant return signal produced by a radar situated in the vicinity of the debris impact point and illuminating the cloud of debris fragments. Characterizing the debris and the radar signal is a complex process requiring models that can be brought together in an integrated visualization package. The UPGRADE architecture consists of a graphical user interface (GUI) that controls a group of MATLAB components, used to generate inputs and graphical output products, and C language modules that perform the analytic and algorithmic procedures required to generate and process the debris data.
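The core of a return-signal calculation can be outlined as below; this is a simplified sketch with fabricated radar parameters and fragment statistics, not UPGRADE's models. Each fragment contributes radar-equation power according to its RCS and range, summed over the cloud:

```python
import numpy as np

# Assumed radar parameters and a synthetic fragment cloud.
p_t = 1.0e6                # transmit power, W
gain = 10 ** (40 / 10)     # antenna gain, 40 dB
lam = 0.03                 # wavelength, m (X-band)

rng = np.random.default_rng(1)
ranges = rng.normal(80e3, 500.0, 1000)                 # fragment ranges, m
rcs = rng.lognormal(mean=-3.0, sigma=1.0, size=1000)   # fragment RCS, m^2

# Radar equation, summed incoherently over fragments (a coherent model
# would sum complex amplitudes with phase 4*pi*R/lambda).
p_r = np.sum(p_t * gain**2 * lam**2 * rcs / ((4 * np.pi) ** 3 * ranges**4))
print(f"aggregate received power: {p_r:.3e} W")
```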
The traditional use of graphics and animation in engineering software development has been to demonstrate the function and utility of individual engineering tools. This paper illustrates the use of graphical rendering and animation for debugging large integrated simulations. The tools presented are part of the THAAD integrated system effectiveness simulation (TISES). TISES integrates different segment software models to enable analysis of a full THAAD (theater high altitude area defense) battalion. Each model uses implicit coordinate systems, transformations, and reference values (e.g., earth radius) that may or may not match those of adjacent models, so each interface or integration between the models introduces a source of error. TISES also utilizes many input parameters from a variety of external sources that can themselves be a source of error. The TISES development team has found graphics and animation to be extremely helpful in testing and debugging these interface problems. This paper includes examples of input data verification, model-to-model interfaces, and model-versus-model perceptions that have been utilized in TISES development.
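A toy example of the class of interface error described, with constants and positions assumed: two models that convert latitude/longitude to Cartesian coordinates with different earth-radius reference values disagree by kilometers, a discrepancy that is easy to spot once both are rendered together.

```python
import math

def to_ecef_spherical(lat_deg, lon_deg, alt_m, earth_radius_m):
    """Spherical-earth geodetic-to-Cartesian conversion."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = earth_radius_m + alt_m
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

# Two models with slightly different reference values for earth radius.
p1 = to_ecef_spherical(35.0, -106.0, 10000.0, 6371000.0)   # mean radius
p2 = to_ecef_spherical(35.0, -106.0, 10000.0, 6378137.0)   # equatorial radius

offset = math.dist(p1, p2)
print(f"same input, two models: positions differ by {offset:.0f} m")
# Rendered together, the same aircraft appears in two different places.
```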
Visualization technologies are improving our ability to assess the effectiveness of the warfighter on today's battlefield. Increasingly, our ability to predict the behavior and performance of competing systems is being facilitated by simulations. These predictions typically involve visual and sensor simulations, but they may also be used for mission performance generalizations. A key link in this analysis involves the assessment of real optics, ATR algorithms, and observers under the changing influence of the natural atmosphere. The effects of the atmosphere can be as diverse as target contrast degradation, dynamic-range influences of light and shadows in the viewed field, cloud-free lines of sight, and optical turbulence. In general, however, the modeling and simulation community has treated only limited versions of the full influences of the atmosphere. In some cases these influences have been modeled using cartoon-like emulations of reality, bereft of physical content; physics-based solutions are usually bypassed in the drive for near-real-time results. In this paper, we discuss the need for three-dimensional (3-D) solutions to near-earth atmospheric representation, describe a set of physics-based programs designed to generate cloudy/hazy atmospheric scenarios, run a robust 3-D radiative transfer (RT) model, and present this representation for visualization by a perspective-view generator of the resulting radiance fields. The cloud fields are generated with a stochastic model that uses cloud-layer height information, cloud type, and vertical sounding profile data. The output from this model is coupled to standard vertical haze profiles to produce a 3-D field of atmospheric properties. The official Army Research Laboratory discrete ordinates method (DOM) RT code, contained in the WAVES modeling package, has difficulties with dense cloud conditions. Here, we discuss a recommended upgrade to WAVES in the form of a specially designed RT code that accesses these cloud/haze data and is insensitive to cloud density variations. This feature allows it to effectively simulate effects in and around natural clouds. Further processing compresses and interprets the outputs of the RT code for a given sensor spectral response. Point-to-point calculations can then be performed on the resulting database for path characterization, including path radiance and contrast transmission calculations. These can be used in assessing system performance for each color channel of a sensor, or in visualizing the cloud fields themselves.
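As a sketch of the point-to-point path characterization step, not the WAVES/DOM code itself: transmittance along a line of sight through a 3-D extinction field follows Beer-Lambert, with the grid, extinction values, and sampling below all assumed. A full RT code would also accumulate emission and scattering source terms along the same path.

```python
import numpy as np

# Synthetic 3-D extinction field (km^-1): clear air plus a dense cloud blob.
z, y, x = np.mgrid[0:20, 0:50, 0:50].astype(float)
extinction = 0.05 + 2.0 * np.exp(-(((x - 25)**2 + (y - 25)**2) / 40
                                   + (z - 12)**2 / 8))

def path_transmittance(field, p0, p1, n_samples=200, cell_km=0.1):
    """Beer-Lambert transmittance between two grid points: straight path,
    nearest-neighbor sampling of the extinction field."""
    pts = np.linspace(p0, p1, n_samples)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(field.shape) - 1)
    k = field[idx[:, 0], idx[:, 1], idx[:, 2]]           # extinction samples
    seg = np.linalg.norm(np.subtract(p1, p0)) * cell_km / (n_samples - 1)
    return np.exp(-np.sum(k) * seg)                      # exp(-integral k ds)

tau = path_transmittance(extinction, (2.0, 5.0, 5.0), (15.0, 40.0, 40.0))
print(f"path transmittance through the cloud field: {tau:.4f}")
```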
The gray level of an image is greatly influenced by weather conditions when a TV camera is used outdoors. In this paper, relationships between gray level and weather conditions are addressed. First, using a daylight model, the daylight distribution of scenes at different times and under different weather conditions is estimated with an empirical formula. Next, an image model of daylight is established after analyzing a great deal of experimental data. With this model, the average gray level of an image and its variance in a homogeneous region can be predicted, provided certain parameters are determined experimentally in advance. Finally, starting from a simplified imaging model, a normalized gray-level invariant is derived that supports scene simulation under different weather conditions when an image of the same scene under one weather condition is known. The experimental results demonstrate the feasibility of our theoretical analysis and are promising.
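The flavor of such an invariant can be shown with a minimal sketch; the normalization below is a generic illumination normalization under a toy linear imaging model, not the paper's exact derivation. If gray level scales roughly linearly with daylight illumination, dividing a homogeneous region by its mean removes the weather dependence:

```python
import numpy as np

rng = np.random.default_rng(2)
reflectance = rng.uniform(0.2, 0.8, size=(64, 64))   # intrinsic scene property

def image_under_illumination(illum, noise=2.0):
    """Toy imaging model: gray level proportional to illumination times
    reflectance, plus sensor noise."""
    return illum * reflectance + rng.normal(0, noise, reflectance.shape)

sunny = image_under_illumination(200.0)
overcast = image_under_illumination(60.0)

# Normalized gray level: divide by the region mean; the weather-dependent
# illumination factor cancels, leaving an (approximately) invariant image.
inv_sunny = sunny / sunny.mean()
inv_overcast = overcast / overcast.mean()
print("max invariant difference:", np.abs(inv_sunny - inv_overcast).max())
```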
Geographic information systems (GIS) are gaining importance in military operations because of their capability to spatially and visually integrate various kinds of information. In an era of limited resources, geospatial data must be shared efficiently whenever possible. The military-initiated Global Geospatial Information and Services (GGI&S) Project aims at developing the infrastructure for GIS interoperability for the military. Current activities in standardization and new technology have strong implications for the design and development of GGI&S. To facilitate data interoperability at both the national and international levels, standards and specifications for geospatial data sharing are being studied, developed, and promoted. Of particular interest to the military community are the activities related to the NATO DIGEST, ISO TC/211 Geomatics standardization, and the industry-led Open Geodata Interoperability Specifications (OGIS). Together with new information technology, standardization provides the infrastructure for interoperable GIS for both civilian and military environments. The first part of this paper describes the major activities in standardization. The second part presents the technologies developed at DREV in support of the GGI&S, including the Open Geospatial Datastore Interface (OGDI) and the geospatial data warehouse. DREV has been working closely with Defence Geomatics and private industry in the research and development of new technology for the GGI&S project.
Georgia Tech has developed a prototype system demonstrating the concepts of a virtual 3D geographic information system (GIS) in an urban environment. The virtual GIS integrates the technologies of GIS, remote sensing, and visualization to provide an interactive tool for the exploration of spatial data. A high-density urban environment with terrain elevation, imagery, GIS layers, and three-dimensional natural and manmade features is a stressing test for the integration potential of such a virtual 3D GIS. In preparation for the 1996 Olympic Games, Georgia Tech developed two highly detailed 3D databases over parts of Atlanta. A 2.5-meter database was used to depict the downtown Atlanta area, with much higher resolution imagery used for photo-texturing individual Atlanta buildings. Sub-meter imagery was used to produce a very accurate map of Georgia Tech, the 1996 Olympic Village. Visualization software developed by Georgia Tech was integrated via message passing with a traditional GIS package so that all commonly used GIS query and analysis functions could be applied within the 3D environment. This project demonstrates the versatility and productivity that can be achieved by operating GIS functions within a virtual GIS and multimedia framework.
A study of the changes that have occurred over the last 22 years in coastal Georgia and South Carolina is being performed by the University of South Carolina and the Georgia Institute of Technology under NASA funding. This paper reports on the part of the study focused on the Savannah area of coastal Georgia. Eleven dates of Landsat multispectral scanner (MSS) and Landsat thematic mapper (TM) multispectral image data were georeferenced to a base 1990 thematic mapper data set in a Georgia State Plane projection. Changes between pairs of image dates were determined by automatic classification as well as manual interpretation. A time-series visualization was created that dynamically shows the changes overlaid on the Landsat imagery as they occurred, with reference dates displayed.
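In outline, once each image date is classified over the georeferenced base, per-pixel comparison of two dates yields the change layer that the time-series visualization animates. The class codes and rasters below are assumptions for illustration:

```python
import numpy as np

# Assumed classification rasters for two dates on the same georeferenced
# grid: 0 = water, 1 = marsh, 2 = forest, 3 = developed.
rng = np.random.default_rng(3)
class_1974 = rng.integers(0, 4, size=(200, 200))
class_1996 = class_1974.copy()
grow = rng.random(class_1974.shape) < 0.05       # 5% of pixels develop
class_1996[grow] = 3

# Change layer: pixels whose class differs between the two dates, plus a
# from/to code useful for legend display (from*10 + to).
changed = class_1974 != class_1996
from_to = np.where(changed, class_1974 * 10 + class_1996, -1)
print(f"{changed.mean():.1%} of pixels changed between dates")
```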
This paper describes the development of an accurate airbase sortie generation capability for inclusion within military training distributed virtual environments (DVEs). Current military training DVEs lack appropriate modeling of airbase logistics; therefore, the corresponding sortie generation model incorrectly portrays an airbase's wartime operational capabilities. As a result, DVE training participants, at both the command and staff level, develop expectations that are not realized in a real-world environment and actually receive negative training. The airbase logistics system (ALS) provides accurate sortie generation capabilities by incorporating an existing constructive airbase logistics model, CWTSAR, into distributed interactive simulation (DIS) based military training DVEs. The airbase logistics system integrates CWTSAR with an object-oriented run-time data repository [the common object database (CODB)], the modular semi-automated forces (ModSAF) network utilities, and the command and control simulation interface language (CCSIL) for communication between the ALS and other DIS-compatible systems in the DVE. The paper provides details on the sortie generation capability of ALS and its necessary interface utilities. Planned future development efforts to overcome shortfalls within ALS are also addressed.
A major shortfall in the fidelity of current military distributed virtual environments (DVEs) is the lack of virtual global positioning system (GPS) timing and position signals for entities within the environment. The DVE's usefulness is reduced because the positional errors and positional accuracy that would be present in the real world are absent from the DVE. This, in turn, affects the validity of the results of training, analysis, and evaluations involving systems that rely on GPS; the magnitude of the effect depends on the degree to which the systems involved in the DVE rely on GPS in the real world. The project reported in this paper addresses this deficit. The capability we developed to provide virtual GPS-based navigation within a DVE is based upon three components: a complete virtual GPS satellite constellation, a means for broadcasting GPS signals using the distributed interactive simulation (DIS) protocols, and a software system, the virtual GPS receiver (VGPSR), that calculates simulation entity position using the virtual GPS time and position signals. The virtual GPS satellites are propagated in their orbits using the solar system modeler (SM). The SM also performs the simulated GPS signal broadcast by transmitting a DIS protocol data unit (PDU) carrying the data that would appear within a real-world GPS satellite broadcast. The VGPSR is a plug-in module available for simulation applications that require virtual GPS navigation. To demonstrate the capability of this system, we used the VGPSR in conjunction with the virtual cockpit to simulate virtual weapons deployment. We present the design of the VGPSR and of the modules added to the SM for GPS broadcast, describe the calculations the system performs to determine position in the virtual environment, and report the accuracy and performance the system achieves. We conclude with suggestions for further research in this area.
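At its core, the receiver-side position calculation is pseudorange trilateration: solve for three position coordinates and a clock bias from at least four satellite ranges. The sketch below is self-contained, with satellite geometry and bias fabricated for illustration; the paper's VGPSR additionally handles DIS timing and signal data.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, iters=10):
    """Gauss-Newton solve for receiver position and clock bias from
    pseudoranges rho_i = |sat_i - p| + b."""
    x = np.zeros(4)                              # [px, py, pz, clock_bias]
    for _ in range(iters):
        diff = sat_pos - x[:3]
        dists = np.linalg.norm(diff, axis=1)
        residual = pseudoranges - (dists + x[3])
        # Jacobian: d(rho)/dp = -unit vector toward satellite, d(rho)/db = 1.
        J = np.hstack([-diff / dists[:, None], np.ones((len(dists), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x[:3], x[3]

# Four satellites at GPS-like ranges (meters), a true receiver position,
# and a true clock bias; pseudoranges are distances plus the bias.
sats = np.array([[2.0e7, 0, 1.0e7], [-1.5e7, 1.5e7, 1.2e7],
                 [0, -2.0e7, 1.1e7], [1.0e7, 1.0e7, 2.0e7]])
truth = np.array([1.2e6, -2.3e6, 5.0e5])
bias = 4500.0
rho = np.linalg.norm(sats - truth, axis=1) + bias

pos, b = solve_position(sats, rho)
print("position error (m):", np.linalg.norm(pos - truth), " bias:", b)
```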
Rapidly reconfigurable systems in distributed virtual environments (DVEs) promise to reduce development, operational, and training facility costs for human-operated entities in the DVE. Rapid reconfigurability can reduce costs through reuse of many components of the modeled systems and through the economies of scale achieved by procuring large numbers of identical systems. Our research is intended to develop system requirements and an adaptable software architecture for rapid reconfigurability and, as a spin-off, to gauge the limits of rapid reconfigurability supported by current technology. To develop a system in support of rapid reconfigurability among modeled systems, a number of issues must be addressed, including presentation of the system's controls, support for interaction with those controls, physical motion fidelity for all modeled systems, the system software architecture, and achieving correct performance. Because the modeled systems exist in the real world, these issues must be resolved in a way that preserves the training value and fidelity of the modeled systems. We addressed these issues in the context of developing a prototype rapidly reconfigurable photorealistic virtual cockpit (VC). The paper describes the system controls, interaction support, aerodynamics model, and software architecture we used to achieve acceptable system fidelity for the rapidly reconfigurable photorealistic virtual cockpit. In addition, we document the performance of the current system and suggest avenues for further research.
In many professions where individuals must work as a team in a high-stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers, and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual-environment-based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides the tools they require and the team setting they need to realistically perform these three tasks. Several issues must be addressed before this vision is realized. The key issues are the distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' responses to injury, medication, and treatment; and accurate modeling of the symptoms and appearance of the patient while maintaining real-time interaction. Our ongoing work addresses all of these issues. In this paper we report on our prototype VER system and its distributed system architecture for emergency medical staff training. The virtual environment enables ED physicians and staff to develop their diagnostic and treatment skills using the virtual tools they need to perform those tasks. Virtual human imagery and real-time virtual human response are used to create the virtual patient and present a scenario, and patient vital signs are available to the ED team as they manage the virtual case. The work reported here consists of the system architectures we developed for the distributed components of the virtual emergency room: the network-level architecture as well as the software architecture for each actor within the VER. We describe the role of distributed interactive simulation and other enabling technologies within the virtual emergency room project.
Within current distributed virtual environments (DVEs), each participant operates in isolation and lacks facilities for moving information to other participants. Because the information flow is thus incorrect, participants are encumbered with unnecessary work and are not prepared to accomplish the tasks they are trained for in the real world. Our research is directed toward ending this isolation by investigating means for inserting collaborative tools into DVE applications. The research reported in this paper begins the process of identifying the tools and supporting technologies required to operate and manage forces effectively within a distributed virtual battlespace and other types of DVEs. Our hypothesis was that collaborative tools can be used within a DVE to emulate real-world staff collaboration. To be effective, these tools should allow individuals to communicate using techniques analogous to their current modes of work and supportive of anticipated future modes of work. The paper describes each of the collaborative tools we developed; the rationale for the toolset we selected; the tools' purpose, capabilities, design, software architecture, and implementation; the techniques we use to display information sent by other users; and the user's interface to the tools. We conclude with suggestions for further work.
For a computer-generated force (CGF) system to be useful in training environments, it must be able to operate at multiple skill levels, exhibit competency at assigned missions, and comply with current doctrine. Because of the rapid rate of change in distributed interactive simulation (DIS) and the expanding set of performance objectives for any computer-generated force, the system must also be modifiable at reasonable cost and incorporate mechanisms for learning. Therefore, CGF applications must have adaptable decision mechanisms and behaviors and perform automated incorporation of past reasoning and experience into their decision processes. The CGF must also possess multiple skill levels for classes of entities, gracefully degrade its reasoning capability in response to system stress, possess an expandable modular knowledge structure, and perform adaptive mission planning. Furthermore, correctly performing individual entity behaviors is not sufficient: issues related to complex inter-entity behavioral interactions, such as the need to maintain formation and share information, must also be considered. The CGF must also respond acceptably to unforeseen circumstances and make decisions in spite of uncertain information. Because of the need for increased complexity in the virtual battlespace, the CGF should exhibit complex, realistic behavior patterns within the battlespace. To achieve these capabilities, an extensible software architecture, an expandable knowledge base, and an adaptable decision-making mechanism are required. Our lab has addressed these issues in detail. The resulting DIS-compliant system is called the automated wingman (AW). The AW is based on fuzzy logic, the common object database (CODB) software architecture, and a hierarchical knowledge structure. We describe the techniques that enabled us to make progress toward a CGF entity satisfying the requirements presented above. We present our design and implementation of an adaptable decision-making mechanism that uses multi-layered, fuzzy-logic-controlled situational analysis. Because our research indicates that fuzzy logic can perform poorly under certain circumstances, we combine fuzzy logic inferencing with adversarial game tree techniques for decision making in strategic and tactical engagements, and we describe the approach we employed to achieve this fusion. We also describe the automated wingman's system architecture and knowledge base architecture.
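A minimal sketch of the kind of fuzzy inference used for situational analysis; the membership functions, rules, and variables below are invented for illustration and are not the AW's knowledge base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def threat_level(range_km, closing_mps):
    """Two-rule fuzzy evaluation: a threat is HIGH to the degree the
    contact is NEAR and CLOSING; LOW to the degree it is FAR."""
    near = tri(range_km, 0, 5, 20)
    far = tri(range_km, 10, 40, 80)
    closing = tri(closing_mps, 0, 300, 600)
    high = min(near, closing)           # rule 1: near AND closing -> high
    low = far                           # rule 2: far -> low
    # Defuzzify by weighted average of rule outputs (high=1.0, low=0.1).
    if high + low == 0:
        return 0.0
    return (1.0 * high + 0.1 * low) / (high + low)

print(threat_level(range_km=8.0, closing_mps=250.0))    # close and closing
print(threat_level(range_km=55.0, closing_mps=50.0))    # distant contact
```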
By mapping the time consumed by the key links of a logistics system network onto [0, 1], regarding the time consumed by manufacture, inspection, storage, assembly, packing, and marketing as the existence degree of a node, and the time consumed by materials handling, transportation, and logistics information as the connection strength between nodes, a generalized multi-directional fuzzy map model of logistics system networks is built. The mutual flow among network nodes and the special form of the generalized fuzzy matrix are analyzed. Finally, an example of model building is given.
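A compact sketch of the matrix form, with all values invented: node existence degrees in [0, 1] occupy the diagonal, edge connection strengths the off-diagonal, and max-min composition gives the strength of two-step flows through the network.

```python
import numpy as np

# Generalized fuzzy matrix for a 4-node logistics network. Diagonal entries
# are node existence degrees derived from time consumed by manufacture,
# inspection, storage, etc.; off-diagonal entries are connection strengths
# from handling, transport, and information flow (all mapped into [0, 1]).
F = np.array([
    [0.9, 0.7, 0.0, 0.0],   # manufacture -> inspection
    [0.0, 0.8, 0.6, 0.3],   # inspection  -> storage, market
    [0.0, 0.0, 0.7, 0.5],   # storage     -> market
    [0.0, 0.0, 0.0, 0.6],   # market
])

def maxmin_compose(a, b):
    """Max-min composition: strength of the best two-step path i -> k -> j."""
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

F2 = maxmin_compose(F, F)
print("strength of manufacture -> market over two steps:", F2[0, 3])
```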
The research and development of new defense systems is an expensive and time-consuming process. The timescales and associated costs can be greatly reduced by the appropriate use of physically realistic synthetic visible and infrared imagery. Synthetic imagery does not directly replace the need for real trials data; rather, it provides an effective means of enlarging test data sets, which can be used in conjunction with real imagery. This paper discusses the issues involved in generating physically realistic imagery, as well as the balance between the use of synthetic and real imagery during the research and development phases of defense programs. The discussion is based on a number of examples of defense programs that have successfully balanced the use of real and synthetic imagery, enabling a reduction in timescales and costs.
This paper deals with a method to synthesize realistic outdoor scenes in the infrared spectral band. The method makes use of physical models to simulate the energy balance at the surface of natural and artificial objects for given meteorological conditions and landscape, a mathematical model to predict surface temperature, and physical models to estimate radiance in the spectral band of the radiometer. The whole approach is described and infrared images are synthesized. The limits of the physical models are analyzed and their impact upon the resulting images is assessed.
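The chain described can be sketched end to end under simplifying assumptions: the coefficients and conditions below are invented, and the balance is reduced to steady state, unlike the time-dependent model in the paper. First solve the surface energy balance for temperature, then evaluate in-band radiance from Planck's law.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed conditions for a sunlit surface.
solar = 600.0        # absorbed shortwave flux, W/m^2
sky = 350.0          # absorbed longwave flux from the sky, W/m^2
t_air = 293.0        # air temperature, K
h_conv = 15.0        # convective exchange coefficient, W/m^2/K
eps = 0.95           # surface emissivity
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4

# Energy balance: absorbed = emitted + convective loss; solve for T.
balance = lambda t: solar + sky - eps * sigma * t**4 - h_conv * (t - t_air)
t_surf = brentq(balance, 250.0, 400.0)

# In-band radiance over 8-12 um by integrating Planck's law (W/m^2/sr),
# using an explicit trapezoidal sum for portability.
h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
wl = np.linspace(8e-6, 12e-6, 400)
planck = 2 * h * c**2 / wl**5 / np.expm1(h * c / (wl * kb * t_surf))
radiance = eps * np.sum(0.5 * (planck[1:] + planck[:-1]) * np.diff(wl))
print(f"surface temperature {t_surf:.1f} K, "
      f"8-12 um radiance {radiance:.1f} W/m^2/sr")
```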