Proc. SPIE. 8448, Observatory Operations: Strategies, Processes, and Systems IV
KEYWORDS: Data processing, James Webb Space Telescope, Space telescopes, Calibration, Astronomy, Hubble Space Telescope, Optical instrument design, Computer architecture, Observatories, Charge-coupled devices
The Space Telescope Science Institute (STScI) has been operating the Hubble Space Telescope (HST) since its launch in 1990. The experience gained by running the HST data management system, as well as by providing data and science software to the community, proved extremely valuable in designing the James Webb Space Telescope (JWST) science data processing (SDP) architecture. The HST experience has been distilled into two main "products": on one hand, a rich set of requirements for the full JWST SDP system; on the other, a large dataset (from both current and historical instruments) that is of vital importance in exercising and validating the architecture for the new mission. Over the past years the JWST project has made significant progress in architecture design, selection of relevant technologies, and development of a functional prototype pipeline orchestration and workflow management system (the Condor-based OWL). Recently, the HST mission office started a three-year project to replace the aging HST SDP system (OPUS) with the one being developed for JWST (OWL). This is proving to be a tremendous opportunity not only to give HST operations a technology refresh, but also to validate the architecture being developed for JWST. The present paper describes the lessons learned from HST operations, how we are applying them to JWST design and development, and our ongoing progress on the joint HST-JWST development and operations.
A survey program with multiple science goals will be driven by multiple technical requirements. On a ground-based
telescope, the variability of conditions introduces yet greater complexity. For a program that must be largely autonomous
with minimal dwell time for efficiency, it may be quite difficult to foresee the achievable performance. Furthermore,
scheduling will likely involve self-referential constraints and appropriate optimization tools may not be available. The
LSST project faces these issues, and has designed and implemented an approach to performance analysis in its
Operations Simulator and associated post-processing packages. The Simulator has allowed the project to present detailed
performance predictions with a strong basis from the engineering design and measured site conditions. At present, the
Simulator is in regular use for engineering studies and science evaluation, and planning is underway for evolution to an
operations scheduling tool. We will describe the LSST experience, emphasizing the objectives, the accomplishments and
the lessons learned.
The LSST Data Management System is built on an open source software framework that has middleware and
application layers. The middleware layer provides capabilities to construct, configure, and manage pipelines on
clusters of processing nodes, and to manage the data the pipelines consume and produce. It is not in any way specific
to astronomical applications. The complementary application layer provides the building blocks for constructing
pipelines that process astronomical data, both in image and catalog forms. The application layer does not directly
depend upon the LSST middleware, and can readily be used with other middleware implementations. Both layers
have object-oriented designs that make the creation of more specialized capabilities relatively easy through class inheritance.
This paper outlines the structure of the LSST application framework and explores its usefulness for constructing
pipelines outside of the LSST context, two examples of which are discussed. The classes that the framework provides
are related within a domain model that is applicable to any astronomical pipeline that processes imaging data.
Specifically modeled are mosaic imaging sensors; the images from these sensors and the transformations that result
as they are processed from raw sensor readouts to final calibrated science products; and the wide variety of catalogs
that are produced by detecting and measuring astronomical objects in a stream of such images. The classes are
implemented in C++ with Python bindings provided so that pipelines can be constructed in any desired mixture of
C++ and Python.
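As an illustration of the layered, object-oriented approach described above, the following sketch shows how a pipeline might be composed from Python-bound building blocks. All class names here (Stage, Pipeline, etc.) are hypothetical stand-ins invented for this example, not the actual LSST application-framework API, and the "image" is a toy list of pixel values.

```python
# Hypothetical sketch of composing a pipeline from Python-bound framework
# classes; names and behavior are illustrative, not the LSST API.

class Stage:
    """One processing step: transforms an input product into an output."""
    def process(self, data):
        raise NotImplementedError

class BiasSubtract(Stage):
    """Toy transformation from raw readout toward a calibrated product."""
    def __init__(self, bias_level):
        self.bias_level = bias_level
    def process(self, data):
        return [pix - self.bias_level for pix in data]

class Normalize(Stage):
    """Scale pixel values to the brightest pixel."""
    def process(self, data):
        peak = max(data)
        return [pix / peak for pix in data]

class Pipeline:
    """Chains stages; each stage could equally be a C++ class with
    Python bindings, mixed freely with pure-Python stages."""
    def __init__(self, stages):
        self.stages = stages
    def run(self, data):
        for stage in self.stages:
            data = stage.process(data)
        return data

raw = [110, 210, 310]                       # toy "raw sensor readout"
pipe = Pipeline([BiasSubtract(10), Normalize()])
calibrated = pipe.run(raw)
```

Because only the `process` interface matters, a more specialized stage is created by subclassing, which is the kind of extension-through-inheritance the framework's design aims to make easy.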
We have developed an operations simulator for the Large Synoptic Survey Telescope (LSST), implemented in Python using the SimPy extension, with a modular, object-oriented design. The main components include a telescope model, a sky model, a weather database for three sites, a scheduler, and multiple observing proposals. All proposals derive from a parent class that is fully configurable, through about 75 parameters, to implement a specific science survey. These parameters control the target-selection region, the composition of the sequence of observations for each field, the timing restrictions and filter-selection criteria of each observation, the lunation handling, seeing limits, etc. The currently implemented proposals include Weak Lensing, Near Earth Asteroids, Supernovae, and Kuiper Belt Objects.
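A minimal sketch of this configuration-driven proposal design, assuming invented parameter names (the real simulator uses about 75 parameters; the handful below are illustrative only):

```python
# Illustrative sketch of a proposal parent class specialized purely by
# configuration parameters.  Parameter names and values are invented.

class Proposal:
    """Parent class: a science survey is defined by overriding defaults."""
    defaults = {
        "ra_min": 0.0, "ra_max": 360.0,    # target-selection region (deg)
        "dec_min": -90.0, "dec_max": 30.0,
        "max_seeing": 1.5,                 # seeing limit (arcsec)
        "filters": ("g", "r", "i"),        # allowed filters
    }

    def __init__(self, **overrides):
        self.cfg = {**self.defaults, **overrides}

    def accepts(self, field, conditions):
        """Would this proposal observe the given field right now?"""
        c = self.cfg
        return (c["ra_min"] <= field["ra"] < c["ra_max"]
                and c["dec_min"] <= field["dec"] <= c["dec_max"]
                and conditions["seeing"] <= c["max_seeing"]
                and conditions["filter"] in c["filters"])

# A specific survey is the parent class plus configuration overrides:
nea = Proposal(dec_min=-30.0, max_seeing=1.2, filters=("r",))
field = {"ra": 120.0, "dec": -10.0}
ok = nea.accepts(field, {"seeing": 0.9, "filter": "r"})    # accepted
bad = nea.accepts(field, {"seeing": 1.4, "filter": "r"})   # seeing too poor
```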
The telescope model computes the slew-time delay from the current position to any given target position, using a complete kinematic model for the mount, dome, and rotator, as well as optics-alignment corrections. The model is fully configurable through about 50 parameters. The scheduler module combines the information received from the proposals and the telescope model to select the best target at each moment, promoting targets that fulfill multiple surveys and storing all simulator activities in a MySQL database for later analysis of the run. The scheduler is also configurable, for example in the weight given to the slew-time delay when selecting the next field to observe.
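The core of such a kinematic model can be sketched as follows: each axis follows a bang-bang acceleration profile capped at a maximum speed, and since the axes move in parallel, the slowest axis sets the slew delay. The speed and acceleration limits below are invented placeholders, not actual LSST values.

```python
import math

# Hedged sketch of a per-axis kinematic slew computation.  Each axis uses
# a trapezoidal (or, for short moves, triangular) velocity profile.

def axis_time(distance_deg, vmax, accel):
    """Time to traverse an angle, accelerating at `accel` up to `vmax`."""
    d = abs(distance_deg)
    d_crit = vmax ** 2 / accel          # distance needed to reach vmax
    if d <= d_crit:                     # triangular profile: never hit vmax
        return 2.0 * math.sqrt(d / accel)
    return d / vmax + vmax / accel      # trapezoidal profile

def slew_time(deltas):
    """Axes move simultaneously; the slowest one dominates the delay."""
    limits = {                          # (vmax deg/s, accel deg/s^2), invented
        "mount_az": (7.0, 3.5),
        "mount_alt": (3.5, 3.5),
        "dome": (1.5, 0.75),
        "rotator": (3.5, 1.0),
    }
    return max(axis_time(deltas[a], *limits[a]) for a in deltas)

t = slew_time({"mount_az": 30.0, "mount_alt": 10.0,
               "dome": 30.0, "rotator": 15.0})
```

With these placeholder limits the dome dominates long slews, which is the usual reason a scheduler penalizes large azimuth moves.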
This simulator has been very useful in clarifying some of the technical and scientific capabilities of the LSST design, and gives a good baseline for a future observation scheduler.
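The scheduler behavior described above, promoting targets that serve several surveys while penalizing slew delay by a configurable weight, can be reduced to a toy ranking function. The merit values, weight, and candidate fields here are invented for illustration.

```python
# Toy illustration of configurable target ranking: fields serving multiple
# proposals add their merits; slew delay subtracts with a tunable weight.

def rank(field_id, proposal_merits, slew_seconds, slew_weight=0.1):
    """Higher is better.  `slew_weight` is the configurable trade-off."""
    return sum(proposal_merits) - slew_weight * slew_seconds

candidates = [
    ("A", [1.0], 5.0),        # serves one proposal, short slew
    ("B", [0.7, 0.7], 6.0),   # serves two proposals, slightly longer slew
]
best = max(candidates, key=lambda c: rank(*c))   # B wins: multi-survey bonus
```

Raising `slew_weight` makes the scheduler prefer nearby fields; lowering it lets scientific merit dominate, which is the kind of balance the real scheduler exposes as configuration.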
In recent years, the operation of large telescopes with wide-field detectors - such as the European Southern Observatory (ESO) Wide Field Imager (WFI) on the 2.2 m telescope at La Silla, Chile - has dramatically increased the amount of astronomical data produced each year. The next survey telescopes, such as the ESO VST, will continue this trend, producing extremely large datasets. Astronomy, therefore, has become an incredibly data-rich field, requiring new tools and new strategies to efficiently handle huge archives and fully exploit their scientific content. At the Space Telescope European Coordinating Facility we are working on a new project, code-named Querator (http://archive.eso.org/querator/). Querator is an advanced multi-archive search engine built to address the needs of astronomers looking for multicolor imaging data across different astronomical data-centers. Querator returns sets of images of a given astronomical object or search region. A set contains exposures in a number of different wave bands. The user constrains the number of desired wave bands by selecting from a set of instruments or filters, or by specifying actual physical units. As far as present-day data-centers are concerned, Querator points out the need for: a uniform and standard description of archival data, and a uniform and standard description of how the data were acquired (i.e., instrument and observation characteristics). Clearly, these pieces of information will constitute an intermediate layer between the data itself and the data mining tools operating on it. This layered structure is a prerequisite to real data-center inter-operability and, hence, to Virtual Observatories. A detailed description of Querator's design, of the required data structures, of the problems encountered so far, and of the proposed solutions will be given in the following pages.
Throughout this paper we will favor the term data-center over archive, to stress the need to look at raw-pixel archives and catalogues in a homogeneous way.
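The set-building behavior described above, returning only combinations of exposures that cover every requested wave band, can be sketched as a small grouping function. The exposure identifiers, band names, and data layout below are invented for the example; a real implementation would query multiple data-centers.

```python
# Sketch of multicolor set-building: group an object's exposures by wave
# band and succeed only if every requested band is covered.

def multicolor_sets(exposures, required_bands):
    """Return {band: [exposure ids]} for the requested bands, or None
    if no complete multicolor set exists."""
    by_band = {}
    for exp in exposures:
        by_band.setdefault(exp["band"], []).append(exp["id"])
    if not set(required_bands) <= set(by_band):
        return None                     # some requested band is missing
    return {band: by_band[band] for band in required_bands}

exposures = [                           # invented example records
    {"id": "hst_001", "band": "B"},
    {"id": "wfi_042", "band": "V"},
    {"id": "wfi_043", "band": "V"},
]
full = multicolor_sets(exposures, ["B", "V"])
none = multicolor_sets(exposures, ["B", "V", "I"])   # no I-band data
```

The interesting part is what the function presupposes: a uniform description of each exposure's band across archives, which is exactly the intermediate metadata layer the paper argues for.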
The joint archive facility of the European Southern Observatory (ESO) and the Space Telescope - European Coordinating Facility (ST-ECF) has for a number of years been undertaking particular efforts in associating (grouping) Hubble Space Telescope (HST) observations. Users are now given the means to browse associations of HST images; soon the same capability will be provided for spectra as well. Associations of observations can be defined and driven either by requirements imposed by higher-level algorithms, such as co-adding and drizzling techniques, or by user-defined constraints. In any case, we consider these services an important precursor and testbed for a future virtual observatory. Two components complement an on-line interface (archive.eso.org) to such data products: on the one hand, the selection process, which can be greatly improved by adding preview capabilities for individual or multiple exposures; on the other, a request-handling system that supports the concept of associations and can expand a given association, then compute and deliver calibrated and combined data products.