In the last few years the ubiquitous availability of high-bandwidth networks has changed the way both robotic and non-robotic telescopes operate, with single isolated telescopes being integrated into expanding "smart" telescope networks that can span continents and respond to transient events in seconds. The Heterogeneous Telescope Networks (HTN) Consortium represents a number of major research groups in the field of robotic telescopes, and together we are proposing a standards-based approach to providing interoperability between the existing proprietary telescope networks. We further propose standards for interoperability with, and integration into, the emerging Virtual Observatory.
We present the results of the first interoperability meeting held last year and discuss the protocol and transport standards agreed at the meeting, which deal with the complex issue of how to optimally schedule observations on geographically distributed resources. We discuss a free-market approach to this scheduling problem, which must initially be based on ad hoc agreements between the participants in the network, but which may eventually expand into an electronic market for the exchange of telescope time.
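The abstract above leaves the market mechanism open, so the following is only a toy Java sketch of one way an electronic exchange might match requests for telescope time against offered slots by price. The class names (TelescopeTimeMarket, Request, Offer) and the greedy matching rule are invented for illustration and are not part of any HTN standard.

    import java.util.*;

    // Toy telescope-time exchange: requests bid a price per hour, telescopes ask
    // a price per hour, and the market greedily matches the highest bid to the
    // cheapest compatible offer. Purely illustrative.
    public class TelescopeTimeMarket {

        record Request(String observer, double hoursNeeded, double bidPerHour) {}
        record Offer(String telescope, double hoursFree, double askPerHour) {}
        record Match(Request request, Offer offer) {}

        static List<Match> clear(List<Request> requests, List<Offer> offers) {
            // Highest bids considered first, cheapest offers tried first.
            requests.sort(Comparator.comparingDouble(Request::bidPerHour).reversed());
            offers.sort(Comparator.comparingDouble(Offer::askPerHour));
            List<Match> matches = new ArrayList<>();
            for (Request req : requests) {
                Iterator<Offer> it = offers.iterator();
                while (it.hasNext()) {
                    Offer off = it.next();
                    if (off.hoursFree() >= req.hoursNeeded() && off.askPerHour() <= req.bidPerHour()) {
                        matches.add(new Match(req, off));
                        it.remove();   // slot consumed in this toy example
                        break;
                    }
                }
            }
            return matches;
        }

        public static void main(String[] args) {
            List<Request> requests = new ArrayList<>(List.of(
                new Request("GRB follow-up", 2.0, 120.0),
                new Request("Variable-star monitor", 4.0, 40.0)));
            List<Offer> offers = new ArrayList<>(List.of(
                new Offer("2.0m telescope", 3.0, 80.0),
                new Offer("1.3m telescope", 6.0, 30.0)));
            clear(requests, offers).forEach(m ->
                System.out.println(m.request().observer() + " -> " + m.offer().telescope()));
        }
    }

A real exchange would also have to handle partial allocations, time-window constraints and the ad hoc bilateral agreements mentioned above; the sketch only shows the basic bid/ask matching idea.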
Next-generation science and exploration systems will employ new observation strategies that use multiple sensors in a dynamic environment to provide high-quality monitoring, self-consistent analyses and informed decision making. The Science Goal Monitor (SGM) is a prototype software tool being developed to explore the nature of automation necessary to enable dynamic observing of Earth phenomena. The tools being developed in SGM improve our ability to autonomously monitor multiple independent sensors and coordinate reactions to better observe dynamic phenomena. The SGM system enables users to specify events of interest and how to react when an event is detected. The system monitors streams of data to identify occurrences of key events previously specified by the scientist/user. When an event occurs, the system autonomously coordinates the execution of the users' desired reactions between different sensors. The information can be used to respond rapidly to a variety of fast temporal events. Investigators will no longer have to rely on after-the-fact data analysis to determine what happened.
This paper describes a series of prototype demonstrations that we have developed using SGM, NASA's Earth Observing-1 (EO-1) satellite, and the MODIS instruments on the Earth Observing System's Aqua and Terra spacecraft. Our demonstrations show the promise of coordinating data from different sources, analyzing the data for a relevant event, autonomously updating observation plans, and rapidly obtaining a relevant follow-on image. SGM is being used to investigate forest fires, floods and volcanic eruptions. We are now identifying new Earth science scenarios that will require more complex SGM reasoning. By developing and testing a prototype in an operational environment, we are also establishing and gathering metrics to gauge the success of automating science campaigns.
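Neither paragraph above shows SGM's implementation; the Java sketch below only illustrates the pattern they describe: a user-specified event of interest is evaluated against an incoming data stream and, when it fires, a follow-on reaction (here, a hypothetical request to retask a second sensor) is coordinated. All names, thresholds and data values are invented.

    import java.util.List;
    import java.util.function.Predicate;

    // Illustrative event-monitor loop in the spirit of SGM: scan incoming
    // observations, test each against a user-defined trigger, and coordinate a
    // follow-on reaction when the trigger fires. Names are hypothetical.
    public class EventMonitorSketch {

        record Observation(String source, double lat, double lon, double surfaceTempK) {}

        record EventOfInterest(String name, Predicate<Observation> trigger) {}

        interface Reaction { void execute(Observation trigger); }

        static void monitor(List<Observation> stream, EventOfInterest event, Reaction reaction) {
            for (Observation obs : stream) {
                if (event.trigger().test(obs)) {
                    System.out.printf("Event '%s' detected by %s at (%.2f, %.2f)%n",
                            event.name(), obs.source(), obs.lat(), obs.lon());
                    reaction.execute(obs);   // e.g. task a second sensor at the same site
                }
            }
        }

        public static void main(String[] args) {
            // Scientist's goal, stated descriptively: "alert me to possible fires".
            EventOfInterest hotspot =
                new EventOfInterest("possible fire", o -> o.surfaceTempK() > 360.0);

            // Reaction: request a higher-resolution follow-up image of the same site.
            Reaction followUp = obs -> System.out.printf(
                "Requesting follow-up scene centred on (%.2f, %.2f)%n", obs.lat(), obs.lon());

            monitor(List.of(
                    new Observation("sensor A", -15.60, -47.50, 298.0),
                    new Observation("sensor B", -15.62, -47.48, 371.5)),
                hotspot, followUp);
        }
    }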
Infusion of automation technologies into NASA's future missions will be essential because of the need to: (1) effectively handle an exponentially increasing volume of scientific data, (2) successfully meet dynamic, opportunistic scientific goals and objectives, and (3) substantially reduce mission operations staff and costs. While much effort has gone into automating routine spacecraft operations to reduce human workload and hence costs, applying intelligent automation to the science side, i.e., science data acquisition, data analysis and reactions to that data analysis in a timely and still scientifically valid manner, has been relatively under-emphasized.
In order to introduce science-driven automation in missions, we must be able to capture and interpret the science goals of observing programs, represent those goals in a machine-interpretable language, and allow spacecraft onboard systems to react autonomously to the scientist's goals. In short, we must teach our platforms to dynamically understand, recognize, and react to the scientists' goals.
The Science Goal Monitor (SGM) project at NASA Goddard Space Flight Center is a prototype software tool being developed to determine the best strategies for implementing science goal driven automation in missions. The tools being developed in SGM improve the ability to monitor and react to the changing status of scientific events. The SGM system enables scientists to specify what to look for and how to react in descriptive rather than technical terms. The system monitors streams of science data to identify occurrences of key events previously specified by the scientist. When an event occurs, the system autonomously coordinates the execution of the scientist's desired reactions. Through SGM, we will improve our understanding about the capabilities needed onboard for success, develop metrics to understand the potential increase in science returns, and develop an "operational" prototype so that the perceived risks associated with increased use of automation can be reduced.
SGM is currently focused on two collaborations:
1. Yale University's SMARTS (Small and Moderate Aperture Research Telescope System) observing program - Modeling and testing ways in which SGM can be used to improve scientific returns on observing programs involving intrinsically variable astronomical targets.
2. The EO-1 (Earth Observing-1) mission - Modeling and testing ways in which SGM can be used to autonomously coordinate multiple platforms based on a set of scientific criteria.
In this paper, we will discuss the status of the SGM project focusing primarily on our progress with the SMARTS collaboration.
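The abstracts above call for science goals to be captured in a machine-interpretable form but do not define one. The snippet below is only a hypothetical Java representation of such a goal, with a descriptive trigger condition and the reactions to take when it is, or is not, met; every name and value is invented for illustration.

    import java.util.List;

    // Hypothetical machine-interpretable form of a science goal of the kind SGM
    // might consume: a target, a condition defining "interesting", and the
    // reactions to take when the condition is met. Purely illustrative.
    public class ScienceGoalSketch {

        enum Reaction { CONTINUE_SEQUENCE, REQUEST_FOLLOW_UP, ALERT_SCIENTIST, DEPRIORITIZE }

        record Condition(String quantity, String comparison, double threshold) {}

        record ScienceGoal(String target, Condition trigger, List<Reaction> onTrigger, Reaction otherwise) {}

        public static void main(String[] args) {
            // "If the star brightens past magnitude 12.5, follow up and alert me;
            //  otherwise move on to other targets."
            ScienceGoal goal = new ScienceGoal(
                "example variable star",
                new Condition("V magnitude", "<", 12.5),
                List.of(Reaction.REQUEST_FOLLOW_UP, Reaction.ALERT_SCIENTIST),
                Reaction.DEPRIORITIZE);
            System.out.println(goal);
        }
    }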
Kronos is a multiwavelength observatory designed to map the accretion disks and environments of supermassive black holes using the natural intrinsic variability of these accretion-driven sources. Kronos is envisaged as a Medium Explorer mission proposed to NASA's Office of Space Science under the Structure and Evolution of the Universe theme.
We will achieve the Kronos science objectives by developing cost-effective techniques for obtaining data from the spacecraft and for its subsequent processing on the ground. The science operations assumptions for the mission are:
(1) Need for flexible scheduling due to the variable nature of targets,
(2) Large data volumes but minimal ground station contact,
(3) Very small staff for operations.
Our first assumption implies that we will have to consider an effective strategy to dynamically reprioritize the observing schedule to maximize science data acquisition. The flexibility we seek greatly increases the science return of the mission, because variability events can be properly captured. Our second assumption implies that we will have to develop some basic on-board analysis strategies to determine which data get downloaded. The small size of the operations staff implies that we need to "automate" as many routine processes of science operations as possible.
In this paper we will discuss the various solutions that we are considering to optimize our operations and maximize science returns on the observatory.
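The abstract states only that some on-board analysis must decide which data are downloaded. Assuming a fixed downlink budget per ground contact, one simple strategy might look like the Java sketch below, which ranks stored data segments by an interest score per megabyte and fills the budget greedily; the names and the scoring rule are illustrative, not a Kronos design.

    import java.util.*;

    // Toy on-board triage: rank stored data segments by science interest per
    // megabyte and fill a limited downlink budget greedily. Segments that do not
    // fit could be kept on board, downlinked later, or compressed further.
    public class DownlinkTriageSketch {

        record Segment(String id, double sizeMb, double interestScore) {
            double valuePerMb() { return interestScore / sizeMb; }
        }

        static List<Segment> selectForDownlink(List<Segment> stored, double budgetMb) {
            List<Segment> ranked = new ArrayList<>(stored);
            ranked.sort(Comparator.comparingDouble(Segment::valuePerMb).reversed());
            List<Segment> selected = new ArrayList<>();
            double used = 0.0;
            for (Segment s : ranked) {
                if (used + s.sizeMb() <= budgetMb) {
                    selected.add(s);
                    used += s.sizeMb();
                }
            }
            return selected;
        }

        public static void main(String[] args) {
            List<Segment> stored = List.of(
                new Segment("quiescent monitoring", 800, 1.0),
                new Segment("flare candidate",      200, 9.0),
                new Segment("calibration frames",   150, 3.0));
            // Pretend only 400 MB can be downlinked during the next ground contact.
            selectForDownlink(stored, 400).forEach(s -> System.out.println("downlink: " + s.id()));
        }
    }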
In the coming decade, the drive to increase the scientific returns on capital investment and to reduce costs will force automation to be implemented in many of the scientific tasks that have traditionally been manually overseen. Thus, spacecraft autonomy will become an even greater part of mission operations. While recent missions have made great strides in the ability to autonomously monitor and react to changing health and physical status of spacecraft, little progress has been made in responding quickly to science driven events. The new generation of space-based telescopes/observatories will see deeper, with greater clarity, and they will generate data at an unprecedented rate. Yet, while onboard data processing and storage capability will increase rapidly, bandwidth for downloading data will not increase as fast and can become a significant bottleneck and cost of a science program.
For observations of inherently variable targets and targets of opportunity, the ability to recognize early that an observation will not meet the science goals of variability or minimum brightness, and to react accordingly, can have a major positive impact on the overall scientific returns of an observatory and on its operational costs. If the observatory can reprioritize the schedule to focus on alternate targets, discard uninteresting observations prior to downloading, or download them at a reduced resolution, its overall efficiency will be dramatically increased.
We are investigating and developing tools for a science goal monitoring (SGM) system. The SGM will have an interface to help capture higher-level science goals from scientists and translate them into a flexible observing strategy that SGM can execute and monitor. SGM will then monitor the incoming data stream and interface with data processing systems to recognize significant events. When an event occurs, the system will use the science goals given to it to reprioritize observations, react appropriately, and/or communicate with ground systems - both human and machine - for confirmation and/or further high-priority analyses.
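The paragraphs above describe the decision SGM would make without specifying how. A minimal Java sketch of such a decision rule, assuming a goal expressed as a minimum brightness and a minimum variability amplitude, is shown below; the thresholds, names and dispositions are invented for illustration.

    // Minimal sketch of the kind of decision rule described above: compare an
    // observation's measured properties against the stated science goal and
    // choose how to treat the data. Thresholds and names are illustrative only.
    public class GoalCheckSketch {

        enum Disposition { DOWNLOAD_FULL, DOWNLOAD_REDUCED, DISCARD_AND_RETARGET }

        record Goal(double minBrightnessMag, double minVariabilityAmpMag) {}

        record Measurement(double meanMag, double amplitudeMag) {}

        static Disposition evaluate(Goal goal, Measurement m) {
            boolean brightEnough  = m.meanMag() <= goal.minBrightnessMag();      // smaller magnitude = brighter
            boolean variableEnough = m.amplitudeMag() >= goal.minVariabilityAmpMag();
            if (brightEnough && variableEnough) return Disposition.DOWNLOAD_FULL;
            if (brightEnough)                   return Disposition.DOWNLOAD_REDUCED;
            return Disposition.DISCARD_AND_RETARGET;  // free the schedule for alternate targets
        }

        public static void main(String[] args) {
            Goal goal = new Goal(14.0, 0.3);
            System.out.println(evaluate(goal, new Measurement(13.2, 0.45)));  // DOWNLOAD_FULL
            System.out.println(evaluate(goal, new Measurement(16.5, 0.50)));  // DISCARD_AND_RETARGET
        }
    }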
Fulfilling the promise of the era of great observatories, NASA now has more than three space-based astronomical telescopes operating in different wavebands. This situation provides astronomers with the unique opportunity of simultaneously observing a target in multiple wavebands with these observatories. Currently, scheduling multiple observatories for simultaneous, coordinated observations is highly inefficient. Coordinated observations require painstaking manual collaboration among the staff at each observatory. Because they are time-consuming and expensive to schedule, observatories often limit the number of coordinated observations that can be conducted. In order to exploit new paradigms for observatory operation, the Advanced Architectures and Automation Branch of NASA's Goddard Space Flight Center has developed a tool called the Visual Observation Layout Tool (VOLT). The main objective of VOLT is to provide a visual tool to automate the planning of coordinated observations by multiple astronomical observatories. Four of NASA's space-based astronomical observatories - the Hubble Space Telescope (HST), Far Ultraviolet Spectroscopic Explorer (FUSE), Rossi X-ray Timing Explorer (RXTE) and Chandra - are enthusiastically pursuing the use of VOLT. This paper will focus on the purpose for developing VOLT, as well as the lessons learned during the infusion of VOLT into the planning and scheduling operations of these observatories.
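VOLT's internal algorithms are not described in this abstract; the sketch below shows only one generic ingredient that any coordinated-observation planner needs, namely intersecting the visibility windows of two observatories to find times when a simultaneous observation could be scheduled. The class names and time values are hypothetical.

    import java.util.*;

    // Toy ingredient of coordinated-observation planning: intersect the
    // visibility windows of two observatories (times in arbitrary units, e.g.
    // hours from some epoch) to find when both can see the target. Illustrative.
    public class CoordinatedWindowsSketch {

        record Window(double start, double end) {}

        static List<Window> intersect(List<Window> a, List<Window> b) {
            List<Window> common = new ArrayList<>();
            for (Window wa : a) {
                for (Window wb : b) {
                    double start = Math.max(wa.start(), wb.start());
                    double end = Math.min(wa.end(), wb.end());
                    if (start < end) common.add(new Window(start, end));
                }
            }
            return common;
        }

        public static void main(String[] args) {
            List<Window> obsOne = List.of(new Window(0.0, 0.9), new Window(1.6, 2.5));  // per-orbit visibility
            List<Window> obsTwo = List.of(new Window(0.5, 1.8), new Window(2.2, 3.0));
            intersect(obsOne, obsTwo).forEach(w ->
                System.out.printf("joint window: %.1f - %.1f h%n", w.start(), w.end()));
        }
    }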
In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory and system independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing among groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, parts of the SEA code have since been adopted by the Space Telescope Science Institute. SEA has also supplied code for the SIRTF planning tools, and the JSky Open Source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough users groups adopt a set of common code and tools, de facto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we will review our experiences with SEA - both successes and failures, and offer some lessons learned that might promote further successes in collaboration and re-use.
A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations more efficiently. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our talk examines the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what have been its successes and challenges.
This paper describes the approach and evaluation results of the Next Generation Space Telescope (NGST) Scientist's Expert Assistant (SEA) project. The plan describes the goals and methodology for the evaluation. The objective of this evaluation is to provide a means for the targeted user community to give feedback to the developers, and to determine whether the advanced technologies investigated as part of SEA have achieved the goals that were to be its success criteria. We can say with confidence that the visual, interactive tools in SEA were found to be highly useful by the users. On a scale of 1-5, where 1 was excellent and 5 was poor, the SEA as a whole ranked 1.7, i.e., between excellent and above average.
As the Hubble Space Telescope (HST) moves into its second decade of observations, we are bringing our Phase I submission system into the 21st century as well. Proposing for HST observing time and archival research proceeds in two phases. In Phase I, the scientific merits of the proposal are considered. Only accepted proposals enter Phase II, where the observations are specified in complete detail. Building on state-of-the-art technology and the excellent prototyping work that has brought the Astronomer's Proposal Tool (APT), formerly the Scientist's Expert Assistant (SEA), from a concept three years ago to an approved project for implementation at the Space Telescope Science Institute (STScI), we plan to make HST's Phase I submission system an integral part of the APT. We have always tried to maintain our Phase I strategy of keeping the interface simple and the learning curve minimal, and this strategy will be maintained in the APT framework as well. In this paper we present our concept for the science definition and the Phase I proposal development and submission tools. We also discuss how we are transforming our current Call for Proposals (CP) document into a smaller and more concise electronic document that addresses our policies and submission process. This document will be built and maintained using innovative tools and XML. We will provide links to existing documentation as well as make all of the relevant information available via the tool as on-line 'context-sensitive' help.
In this new era of modern astronomy, observations across multiple wavelengths are often required. This implies understanding many different costly and complex observatories. Yet the process for translating ideas into proposals is very similar for all of these observatories. If we had a new generation of uniform, common tools, writing proposals for the various observatories would be simpler for the observer because the learning curve would not be as steep. As observatory staffs struggle to meet the demands for higher scientific productivity with fewer resources, it is important to remember that another benefit of such universal tools is that they enable much greater flexibility within an organization. The shifting manpower needs of multiple-instrument support or multiple-mission operations may be more readily met since the expertise is built into the tools. The flexibility of an organization is critical to its ability to change, to plan ahead, to respond to new opportunities and operating conditions on shorter time scales, and to achieve the goal of maximizing scientific returns. In this paper we will discuss the role of a new generation of tools in relation to multiple missions and observatories. We will also discuss how uniform, consistently familiar software tools can enhance the individual's expertise and the organization's flexibility. Finally, we will discuss the relevance of advanced tools to higher education.
Over the past two years, the Scientist's Expert Assistant team from NASA's Goddard Space Flight Center and the Space Telescope Science Institute has been prototyping tools to support General Observer proposal development for the Hubble Space Telescope and the Next Generation Space Telescope. One aspect of this effort has been the exploration of the use of expert systems to guide the user in preparing an observing program. The initial goal was to provide the user with a question-and-answer style of interaction where the software would 'interview' the user about their science needs and recommend instrument settings. This design ultimately failed. The reasons for this failure, and the resulting evolution of our approach, are an interesting case study in the use of expert system technology for observing tools. Although the interview approach failed, we feel that expert systems can still be used in the tools environment. This paper describes our current approach to the use of expert systems and how it has evolved over the project's lifetime. We also present suggestions on why expert systems are useful and when they are appropriate.
During the past two years, the Scientist's Expert Assistant (SEA) team has been prototyping proposal development tools for the Hubble Space Telescope in an effort to demonstrate the role of software in reducing support costs for the Next Generation Space Telescope (NGST). This effort has been a success. The Hubble Space Telescope has adopted two SEA prototype tools, the Exposure Time Calculator and the Visual Target Tuner, for operational use. The Space Telescope Science Institute is building a new set of observing tools based on SEA technology. These tools will, we hope, form a foundation that is easily adaptable to other observatories, including NGST. The SEA project has aggressively pursued the latest software technologies including Java, distributed computing, XML, Web distribution, and expert systems. Some technology experiments proved to be dead ends, while other technologies were unexpectedly beneficial. We have also worked with other projects to foster collaboration between the various observing tool programs. In two years, we have learned a great deal that will be useful to future software tool efforts. In this presentation, we will discuss the lessons that we've learned during the development and evaluation of the SEA. We will also discuss future directions for the project.
In this paper we present a strategy for developing the next generation of proposal preparation tools so that we can continue to optimize scientific returns from the Hubble Space Telescope in an era of constrained budgets. The new proposal preparation tools must be built with two goals: (1) to facilitate scientific investigation for observers, and (2) to decrease the effort spent on routine matters by observatory staff. We have based our conclusions on lessons learned from the Next Generation Space Telescope's Scientist's Expert Assistant experiment. We conclude that: (1) Compared to the Hubble Space Telescope's existing Phase II RPS2 software, a modern set of proposal tools and an environment that integrates them will be appreciated by the user community. From the user's perspective, the proposed software must be more intuitive, visual, and responsive. From the observatory's perspective, the tools must be interoperable and extensible to other observatories. (2) To ensure state-of-the-art tools for proposal preparation for the user community, there needs to be a management structure that supports innovation. Further, the development activities need to be divided into innovating and fielding efforts to prevent operational pressures from inhibiting innovation. This will allow use of up-to-date technology so that the system can remain fluid and responsive to change.
One of the most manually intensive efforts of HST observing is the specification and validation of the detailed proposals for scientists observing with the telescope. In order to meet the operational cost objectives for the next generation telescope, this process needs to be dramatically less time-consuming and less costly. We are prototyping a new proposal development system, the Scientist's Expert Assistant (SEA), using a combination of artificial intelligence and user interface techniques to reduce the time and effort involved for both scientists and the telescope operations staff. The Advanced Architectures and Automation Branch of Goddard's Information Systems Center is working with the Space Telescope Science Institute to explore SEA alternatives, using an iterative prototype-review-revise cycle. We are testing the usefulness of rule-based expert systems to painlessly guide a scientist to his or her desired observation specification. We are also examining several potential user interface paradigms and exploring data visualization schemes to see which techniques are most intuitive. Our prototypes will be validated using HST's Advanced Camera for Surveys as a live test instrument. Having an operational test-bed will ensure the most realistic feedback possible for the prototyping cycle. In addition, when the instruments for NGST are better defined, the SEA will already be a proven platform that simply needs adapting to NGST-specific instruments.
Many scientific observational programs require the field of view (FOV) or aperture to have a specific orientation on the sky. Since orientation requirements have a very strong impact on other aspects of the execution of an observation, an observer must have the ability to visualize the orientation of the science aperture and determine the effect of the orientation on the possible scheduling of the observation. We are prototyping the Visual Target Tuner (VTT), an interactive, visual tool for fine-tuning the target location and orientation. To make efficient use of any instrument the user needs to understand the various modes of the instrument and then calculate exposure times or signal-to-noise ratios for many different kinds of observations. Thus, the exposure time calculator (ETC) is an essential tool that is used by many users for many different purposes. We are prototyping a more dynamic, graphical ETC in which the user can, to some extent, simulate an observation and determine the effect of various input parameters. This interactive exposure time calculator will not only be intuitive but will also provide different users with the level of detail they desire. The VTT and ETC are Web-based tools that can be used by themselves or as part of the Scientist's Expert Assistant, the proposal management system for the next generation space telescope. Currently, the tools are being developed with the requirements of HST in mind, but they will also be easily adaptable to other observatories. The underlying software for the tools is an object-oriented, Java-based applet. The object-oriented nature of the design is intended to allow the tools to easily expand their features or to be customized. By making the system Java-based, we gain the ability to easily distribute the applet across a wide range of operating systems and users. In addition to being executed as Java applets, the tools can be loaded onto a user's workstation and run as applications independent of a Web browser.
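The abstract does not spell out the calculation an ETC performs, but at the heart of most exposure time calculators is the standard CCD signal-to-noise equation. The Java sketch below implements that textbook formula with illustrative parameter values; it is not the SEA or HST code.

    // Minimal sketch of the calculation at the heart of an exposure time
    // calculator: the standard CCD signal-to-noise equation
    //   S/N = S*t / sqrt(S*t + B*t*npix + D*t*npix + R^2*npix)
    // where S is the source count rate, B the sky rate per pixel, D the dark
    // current per pixel, R the read noise, and npix the pixels in the aperture.
    public class SnrSketch {

        static double snr(double sourceRate, double skyRatePerPix, double darkRatePerPix,
                          double readNoise, int nPix, double exposureSec) {
            double signal = sourceRate * exposureSec;
            double noiseSq = signal
                    + skyRatePerPix * exposureSec * nPix
                    + darkRatePerPix * exposureSec * nPix
                    + readNoise * readNoise * nPix;
            return signal / Math.sqrt(noiseSq);
        }

        public static void main(String[] args) {
            // How does S/N grow with exposure time for a faint source?
            for (double t : new double[] {60, 300, 1200}) {
                System.out.printf("t = %5.0f s  ->  S/N = %.1f%n",
                        t, snr(2.5, 0.8, 0.02, 4.5, 25, t));
            }
        }
    }

An interactive ETC of the kind described above essentially recomputes this quantity, or inverts it to find the exposure time for a requested S/N, every time the user adjusts an input parameter.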
The goal of STScI's user support is to provide HST observers with the tools, documentation and assistance they need to maximize the scientific return of their observations. This includes pre-observing support to design feasible observing programs which meet their scientific goals and post-observing support in the calibration, reduction, and analysis of the data. The current model for user support evolved over the first five years of HST operations and culminated in our contact scientist (CS) and program coordinator (PC) team. The CS is a professional astronomer as well as an instrument scientist for one of the HST instruments. The PC provides technical support as an expert in the language and tools of HST observation specification, implementation and scheduling. The underlying philosophy is that (1) the CS/PC team supports the observer from 'cradle to grave' of the observation and (2) the team is a 'single point of contact' for the observer. This means the observer can contact the CS/PC team during any phase in the life cycle of an HST program to receive assistance. It also ensures that the user obtains help from the two people at STScI who are most familiar with the program, without being shuffled among many different experts. The STScI help desk provides parallel support for requests which do not deal with a given HST program. Requests are received, tracked, and assigned to the appropriate expert for reply. Our holistic approach combines CS/PC support with documentation, software and tools, and the help desk to create an efficient and powerful support structure for observers.
Service mode observing simultaneously provides convenience, observing efficiency, cost savings, and scheduling flexibility. To optimize these advantages effectively, the observer must specify an observation exactly, with no real-time interaction with the observatory staff. In this respect, ground-based service-mode observing and HST observing are similar. There are numerous details which, if unspecified, are either ambiguous or left to chance, sometimes with undesirable results. Minimization of ambiguous or unspecified details is critical to the success of both HST and ground-based service observing. Smart observing proposal development tools which have built-in flexibility are therefore essential for both the proposer and the observatory staff. Calibration of the science observations is also an important facet of service observing. A centralized calibration process, while resource-intensive to install and maintain, is advantageous in several ways: it allows a more efficient overall use of the telescope, guarantees a standard quality of the observations, and makes archival observations more easily usable, greatly increasing the potential scientific return from the observations. In order to maximize the scientific results from an observatory in a service-mode operations model, the observatory needs to be committed to performing a standard data quality evaluation on all science observations, both to assist users in their data evaluation and to provide data quality information to the observatory archive. The data quality control process at STScI adds value to the HST data and associated data products through examination and improvement of the data processing, calibration, and archiving functions. This functionality is provided by a scientist who is familiar with the science goals of the proposal and assists its development throughout, from observation specification to the analysis of the processed data. Finally, archiving is essential to good service observing, because a good archive helps improve observing efficiency by avoiding unnecessary duplication of observations.