Real-time Unmanned Aerial Vehicle (UAV) image registration is achieved by stimulating one eye
with a live video image from a flying UAV while stimulating the other eye with calculated images. The
calculated image is initialized by telemetry signals from the UAV and corrected using the Perspective View
Nascent Technology (PVNT) software package model-image feedback algorithm. Live and registered
calculated images are superimposed, allowing command functions including target geo-location, UAV
sensor slewing, tracking, and waypoint flight control. When the same equipment is used with the naked
eye, the forward observer function can be implemented to produce accurate target coordinates.
The paper will then discuss UAV mission control and forward observer target tracking
experiments conducted at Camp Roberts, California.
Analysis of the brain as a physical system that has the capacity to generate a display of everyday
observed experiences, and that contains some knowledge of the physical reality which stimulates those
experiences, suggests the brain executes a self-measurement process described by quantum theory.
Assuming physical reality is a universe of interacting self-measurement loops, we present a model of space
as a field of cells executing such self-measurement activities. Empty space is the observable associated
with the measurement of this field when the mass and charge density defining the material aspect of the
cells satisfy the least action principle. Content is the observable associated with the measurement of the
quantum wave function ψ interpreted as mass-charge displacements. The illusion of space and its content
incorporated into cognitive biological systems is evidence of self-measurement activity that can be
associated with quantum operations.
Whether or not neuronal signal properties can engage 'non-trivial', i.e. functionally significant,
quantum properties is the subject of an ongoing debate. Here we provide evidence that quantum
coherence dynamics can play a functional role in the ion conduction mechanism, with consequences for the
shape and associative character of classical membrane signals. In particular, these new perspectives predict
that a specific neuronal topology (e.g. the connectivity pattern of cortical columns in the primate brain) is
less important and not strictly required to explain abilities in perception and sensory-motor integration.
Instead, this evidence suggests a decisive role for the number and functional segregation of ion
channel proteins that can be engaged in a particular neuronal constellation. We provide evidence from
comparative brain studies and from estimates of the computational capacity behind visual flight functions
that is suggestive of a possible role of quantum computation in biological systems.
KEYWORDS: Information operations, Computer programming, Standards development, Commercial off the shelf technology, Systems modeling, Interfaces, Telecommunications, Data communications, Receivers, Data modeling
When to implement a standard, and how much benefit would result from its implementation, is
often a seat-of-the-pants value judgment.
We will address the lack of cost/benefit metrics for interoperability standards by presenting a
generalized model of the interoperability problem which defines the tasks required to implement an NxN
matrix of interoperating system types. The model is then used to assess the work load required to achieve
interoperability and quantify the extent to which the introduction of standards reduces the work load as a
function of delineated standards characteristics. Characteristics such as format, execution, speed,
bandwidth, and, most notably, knowledge definition mechanisms are delineated. Standards effectiveness in
terms of task costs is then estimated as a function of standards characteristics, latent ambiguities, and the
number of interoperating nodes.
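The quadratic-versus-linear workload argument behind such an NxN model can be illustrated with a minimal counting sketch. The function name and the simple translator-counting cost measure below are our own illustrative assumptions, not the paper's actual metric:

```python
def translators_needed(n_types: int, with_standard: bool) -> int:
    """Count translator implementations needed for full NxN interoperability.

    Without a shared standard, every ordered pair of distinct system types
    needs its own translator: N * (N - 1).  With a common standard, each
    type only translates to and from the shared format: 2 * N.
    """
    if with_standard:
        return 2 * n_types
    return n_types * (n_types - 1)

# The saving grows quadratically with the number of interoperating nodes:
for n in (5, 10, 50):
    print(n, translators_needed(n, False), translators_needed(n, True))
```

Under this toy measure a standard pays off as soon as more than three system types interoperate, and the benefit scales with the square of the node count.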
Use case studies of several standards and guidelines for standards effectiveness evaluation will be
discussed.
The problem of real-time image geo-referencing is encountered in all vision based cognitive systems. In this
paper we present a model-image feedback approach to this problem and show how it can be applied to image
exploitation from Unmanned Aerial Vehicle (UAV) vision systems. By calculating reference images from a known terrain
database, using a novel ray trace algorithm, we are able to eliminate foreshortening, elevation, and lighting distortions,
introduce registration aids and reduce the geo-referencing problem to a linear transformation search over the two
dimensional image space. A method for shadow calculation that maintains real-time performance is also presented.
The paper then discusses the implementation of our model-image feedback approach in the Perspective View
Nascent Technology (PVNT) software package and provides sample results from UAV mission control and target
mensuration experiments conducted at China Lake and Camp Roberts, California.
An extension to von Neumann's analysis of quantum theory suggests self-measurement is a
fundamental process of Nature. By mapping the quantum computer to the brain architecture we will argue
that the cognitive experience results from a measurement of a quantum memory maintained by biological
entities. The insight provided by this mapping suggests quantum effects are not restricted to small atomic
and nuclear phenomena but are an integral part of our own cognitive experience and further that the
architecture of a quantum computer system parallels that of a conscious brain.
We will then review the suggestions for biological quantum elements in basic neural structures
and address the de-coherence objection by arguing for a self-measurement event model of Nature. We will
argue that, to a first-order approximation, the universe is composed of isolated self-measurement events,
which guarantees coherence. Controlled de-coherence is treated as the input/output interactions between quantum
elements of a quantum computer and the quantum memory maintained by biological entities cognizant of
the quantum calculation results.
Lastly we will present stem-cell based neuron experiments conducted by one of us with the aim of
demonstrating the occurrence of quantum effects in living neural networks and discuss future research
projects intended to reach this objective.
KEYWORDS: Databases, Visualization, Sensors, Weapons, Detection and tracking algorithms, Vegetation, Global Positioning System, 3D modeling, Fourier transforms, Data modeling
This paper describes the requirements, data structures, and algorithms utilized in the run-time
Player Unit of the OneTESS program. OneTESS is a combined instrumentation suite, designed to satisfy the
requirements for both training and operational testing, being developed by a team led by AT&T.
Specifically we will describe the terrain services and Player Unit services required for geometric pairing
and engagement processing along with the accurate database design and procurement strategy required to
build it. The paper will also describe a voxel-based visualization engine adapted to perform dynamic terrain
updates and highly accurate test site preparation. We will also describe the process for procuring and testing
the fidelity of the terrain environment and describe the analysis to answer the "what is good enough"
question within the context of instrumentation accuracies and development strategies.
Lastly we will discuss the implications and opportunities afforded by onboard environment
models both for future test and training applications as well as in future deployable units.
The ability to rapidly detect and identify potential targets, both fixed and mobile, from multiple sensor feeds is a critical function in network centric warfare. In this paper we describe the use of Image Differencing and 3D terrain database editing to fuse oblique aerial photos, IR sensor imagery, and other non-traditional data sources into battlefield metrics that support network centric operations. Such metrics include target detection, recognition, and location, and improved knowledge of the target environment. Key to our approach is the rapid generation of target and background signatures from high-resolution 1-meter object descriptor terrain databases. This technique utilizes the difference between measured and calculated sensor images to 1) update and correct knowledge of the terrain background, 2) register multi-sensor imagery, 3) identify candidate targets based on residual image differencing, and 4) measure and report target locations based on scene matching. The technique is especially suited for utilizing imagery from reconnaissance and remotely piloted vehicle sensors. It also holds promise for automation and real-time data reduction of battlefield sensor feeds and for improving now-time situational awareness. We will present the algorithms and approach utilized in the Image Differencing technique. We will also describe the software developed to implement the approach. Lastly we will present the results of experiments and benchmarks conducted to identify and measure target locations in test locations at Ft. Hood, TX and Ft. Hunter Liggett, CA.
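The residual-differencing step reduces, at its core, to thresholding the measured-minus-calculated image once the pair is registered. The function and the simple noise-scaled threshold rule below are our own illustrative sketch under a Gaussian-background assumption, not the fielded algorithm:

```python
import numpy as np

def residual_targets(measured: np.ndarray, calculated: np.ndarray,
                     threshold: float = 3.0) -> np.ndarray:
    """Flag candidate target pixels by residual image differencing.

    The calculated image (rendered from the terrain database) encodes what
    is already known about the background; after registration, pixels whose
    measured-minus-calculated residual stands well above the noise floor
    are reported as candidate targets.
    """
    residual = measured.astype(float) - calculated.astype(float)
    noise = residual.std()
    mask = np.abs(residual) > threshold * noise
    return np.argwhere(mask)   # (row, col) coordinates of candidates

# Synthetic check: a flat background plus one bright 2x2 "target".
background = np.full((32, 32), 100.0)
sensor = background + np.random.default_rng(1).normal(0.0, 1.0, (32, 32))
sensor[10:12, 20:22] += 50.0
print(residual_targets(sensor, background))
```

In practice the residual would also feed the background-update and scene-matching stages; here it only demonstrates how known terrain is cancelled out of the detection signal.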
The ability to rapidly and inexpensively generate terrain databases that replicate actual terrain is critical to ensuring correlation between the results from live, virtual, and constructive simulations used in testing and evaluating weapons, sensors, and battlefield command and control systems. In this paper we describe a technique for producing battlefield terrain data sets from oblique aerial photos and other non-traditional data sources using image differencing and 3D terrain editing tools. This technique uses a feedback loop to calculate terrain data parameters from differences between actual sensor imagery and synthetic imagery of the replicated terrain created by an image generator. The technique is especially well suited for updating knowledge of battlefield situations from reconnaissance and remotely piloted vehicle sensors. It also holds promise for automation and real-time data reduction of battlefield sensor feeds and improved now-time situational awareness.
We will present the algorithms and approach utilized in the Image Differencing technique. We will also describe the software developed to implement the approach. Lastly we will present the results of experiments and benchmarks conducted to measure the effectiveness and progress made toward real-time terrain database generation.
This paper describes a system which compares aerial photographs of the same terrain taken at different times and tries to recognize straight-edged cultural features that have changed. This work is intended to be highly robust, handling very different lighting conditions, weather, times of year, cameras, and film between the images to be compared. Our system, AERICOMP, is designed to facilitate battlefield terrain modeling by permitting automatic updates from new images. AERICOMP performs coarse registration, image correction, feature detection, automatic refined registration, feature difference detection and reduction, feature difference presentation and operator acceptance, difference identification, and database update. It emphasizes line segments for comparisons because differences in them are more robust to photometric changes between terrain images. In addition, line segment comparisons require less computation than pixel comparisons and are more compatible with identification tasks. For our intended application of battlefield terrain modeling, detecting changes in man-made structures is of much greater importance than changes in vegetation, and line segments are the key to identifying such structures. We show results involving change analysis between color IR and black/white USGS photographs of the same area six years apart. Even a mostly automatic system benefits from user interaction at key points. AERICOMP exploits user judgments at the beginning and end of its processing to assist in coarse registration and to approve the significance of any differences found. AERICOMP is currently under development at the Naval Postgraduate School, and is supported by the TENCAPS project under the US Navy.
By utilizing images calculated on-the-fly as a filter, improvements in the real-time performance of object measurement and feature extraction can be achieved for automated aerial photograph analysis. The process requires the rapid calculation of images from an existing terrain database. The calculated images are then compared to incoming sensor data. The difference between the calculated and sensor images is then utilized as a parallel error signal for updating the state of knowledge of the objects and features measured. The advantage of this image feedback technique is that the calculation of sensor-realistic perspective views from parameterized object models is easier than the direct interpretation of complex images. The feedback technique effectively eliminates what is already known from the measurement signal and thereby reduces the amount of data which must be processed by pattern recognition techniques by orders of magnitude. The paper presents the mathematical description of the image feedback technique and estimates the update frame rates which can be expected for real-time applications. We then discuss the incremental software development approach and the system design we are using for implementing the technique. The state of the current system is presented along with a discussion of experiments and experience gained in building large-scale, high-resolution terrain databases. The paper concludes by defining future research areas that need to be addressed for improving performance and accuracy.
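The error-signal idea can be sketched as a linearized feedback loop: render a predicted image from the current model parameters, subtract it from the sensor image, and let the residual drive a parameter update. The function names, the toy 1-D "scene", and the gradient-style gain below are all our own illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def feedback_update(params, render, sensor_image, jacobian, gain, iters):
    """Refine model parameters by feeding image residuals back through a
    linearized sensitivity (Jacobian) matrix.

    What the model already predicts is cancelled out each pass, so only
    the unexplained part of the image drives the correction.
    """
    p = np.asarray(params, dtype=float)
    for _ in range(iters):
        residual = sensor_image - render(p)      # the parallel error signal
        p = p + gain * jacobian(p).T @ residual  # gradient-style update
    return p

# Toy 1-D "scene": recover a brightness slope and offset from its image.
xs = np.linspace(0.0, 1.0, 50)
render = lambda p: p[0] * xs + p[1]                       # linear image model
jacobian = lambda p: np.column_stack([xs, np.ones_like(xs)])
truth = np.array([2.0, 0.3])
estimate = feedback_update([0.0, 0.0], render, render(truth), jacobian,
                           gain=0.02, iters=500)
print(np.round(estimate, 3))  # converges to approximately [2.0, 0.3]
```

The same loop structure carries over when the renderer is a full perspective-view generator and the parameters describe terrain objects, though the sensitivity term is then far more complex than a fixed Jacobian.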
KEYWORDS: Control systems, Laser stabilization, Carbon dioxide lasers, Laser welding, Signal detection, Laser processing, Laser systems engineering, Aluminum, High power lasers, Diagnostics
The working quality in laser material processing is influenced by a number of parameters relating to the laser source, the beam guiding system, and the material. Although the reliability of lasers and the stability of system components have been improved, some weak points remain. In the following, a closed-loop control system is presented which will help to meet these requirements.
The problem of rapidly generating accurate object material descriptor databases of the earth's surface has been approached using a quantized rendering transform to code photogrammetric image measurements into physical surface parameters. The approach eliminates the lighting effect inherent in aerial photos which view earth surface elements from different perspectives. It reduces the multi-aspect photographic spectral measurements to objective surface properties which are then used for automated object and surface material classification. This paper presents the algorithms and design for a terrain database creation workstation used to generate 1 meter resolution data. The system was used to digitize approximately 200 aerial photos covering a 400 sq km area of Ft. Hunter Liggett, Calif. and translated into a 1.2 gigabyte surface descriptor database. Included in the workstation is a parallel processing, transputer-based, perspective view generator which uses the rendering transform to calculate side views at real-time rates. The use of this subsystem as a real-time feedback and quality control mechanism during database creation is described and the technique extended to real-time terrain database update systems.
The problem of realistic, high-resolution, earth surface representation for real-time, rendered, video-quality perspective view generation has been approached by using a quantized rendering transform to code image measurements into physical surface modeling descriptors. This paper describes a physical earth surface model and approximates natural light energy scattering equations to derive a transform between photogrammetric measurements and model parameters. This transform was used to translate a 12 Gbyte, photo image, data base covering 400 sq km of Ft. Hunter Liggett, CA into a 1.2 Gbyte surface descriptor file. A prototype transputer-based parallel processing system is also presented. The system uses the rendering transform to calculate real-time perspective views at operator-selectable times, seasons, and environmental conditions. The system produces video-realistic perspective views at a rendering rate of 0.5 Mpixels/second and is scalable by a factor of 80.