KEYWORDS: Systems modeling, Systems engineering, Large Synoptic Survey Telescope, Observatories, Connectors, Data processing, Data archive systems, Astronomy, Camera shutters, Information technology
We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST’s Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders’ needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they yield self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organizational structure required to successfully execute the operational survey. Our approach allows for continual refinement, utilizing the systems engineering spiral method to expose finer levels of detail as necessary. For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.
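As a sketch of the kind of information such a stereotyped Use Case element might carry, the following Python fragment models a Use Case with tagged values, associated Actors (with their roles and skills), and an allocation to an organizational entity. All names and fields are illustrative assumptions, not the project's actual SysML profile.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a custom-stereotyped SysML Use Case; the field
# names are illustrative, not the LSST operations model.
@dataclass
class Actor:
    name: str
    roles: list = field(default_factory=list)    # roles the Actor takes on
    skills: list = field(default_factory=list)   # skills the Actor must possess

@dataclass
class OperationsUseCase:
    name: str
    tagged_values: dict = field(default_factory=dict)  # custom tagged values
    actors: list = field(default_factory=list)         # associated Actors
    allocated_to: str = ""                             # organizational entity

# Example: a nightly-observing use case allocated to an operations group.
observer = Actor("Observing Specialist",
                 roles=["telescope operator"],
                 skills=["scheduler operation"])
uc = OperationsUseCase("Execute Nightly Observing",
                       tagged_values={"site": "summit", "phase": "operations"},
                       actors=[observer],
                       allocated_to="Observatory Operations Department")
print(uc.name, "->", uc.allocated_to)
```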
The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint effort of NOAO and the Department of Computer Science at the University of Arizona to build prototype software to process alerts from time-domain surveys, especially LSST, to identify those alerts that must be followed up immediately. Value is added by annotating incoming alerts with existing information from previous surveys and compilations across the electromagnetic spectrum and from the history of past alerts. Comparison against a knowledge repository of properties and features of known or predicted kinds of variable phenomena is used for categorization. The architecture and algorithms being employed are described.
KEYWORDS: Large Synoptic Survey Telescope, Astronomy, Prototyping, Data modeling, Process modeling, Observatories, Electromagnetism, Telescopes, Galactic astronomy, Algorithm development
The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. The goal is to build the software infrastructure necessary to process and filter alerts produced by time-domain surveys, with the ultimate source of such alerts being the Large Synoptic Survey Telescope (LSST). The ANTARES broker will add value to alerts by annotating them with information from external sources such as previous surveys from across the electromagnetic spectrum. In addition, the temporal history of annotated alerts will provide further annotation for analysis. These alerts will go through a cascade of filters to select interesting candidates. For the prototype, ‘interesting’ is defined as the rarest or most unusual alert, but future systems will accommodate multiple filtering goals. The system is designed to be flexible, allowing users to access the stream at multiple points throughout the process, and to insert custom filters where necessary. We describe the basic architecture of ANTARES and the principles that will guide development and implementation.
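As a rough illustration of the cascade-of-filters idea (not ANTARES code; the stage names and alert fields are invented for this sketch), the following Python fragment shows alerts flowing through an ordered list of filters into which custom stages can be inserted:

```python
# Minimal sketch of a filter cascade over an alert stream; the stages and
# the alert fields are illustrative assumptions, not the ANTARES design.
def quality_filter(alert):
    return alert["snr"] >= 5            # drop low signal-to-noise alerts

def novelty_filter(alert):
    return alert["n_past_alerts"] == 0  # keep sources with no alert history

def run_cascade(alerts, filters):
    """Pass each alert through the filters in order; custom filters can be
    inserted anywhere in the list, mirroring the user-accessible stream."""
    for alert in alerts:
        if all(f(alert) for f in filters):
            yield alert

alerts = [{"id": 1, "snr": 12.0, "n_past_alerts": 0},
          {"id": 2, "snr": 3.2, "n_past_alerts": 4}]
for a in run_cascade(alerts, [quality_filter, novelty_filter]):
    print("candidate:", a["id"])
```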
The Large Synoptic Survey Telescope (LSST) relies on a set of calibration systems to achieve the survey's photometric performance over a wide range of observing conditions. Their purpose is to consistently and accurately measure the observatory's instrumental response and the atmospheric transparency during LSST observing. The instrumental response calibration will be performed regularly to monitor any variation in transmission over the duration of the survey. The atmospheric data will be acquired nightly and processed into atmospheric models. In this paper, we describe the calibration screen system that will be used to perform the instrumental response calibration, and the atmospheric calibration system, including the auxiliary telescope dedicated to the acquisition of spectral data to determine the atmospheric transmission.
We present an innovative method for photometric calibration of massive survey data that will be applied to the
Large Synoptic Survey Telescope (LSST). LSST will be a wide-field ground-based system designed to obtain
imaging data in six broad photometric bands (ugrizy, 320-1050 nm). Each sky position will be observed multiple
times, with about a hundred or more observations per band collected over the main survey area (20,000 sq.deg.)
during the anticipated 10 years of operations. Photometric zeropoints are required to be stable in time to 0.5%
(rms), and uniform across the survey area to better than 1% (rms). The large number of measurements of
each object taken during the survey allows identification of isolated non-variable sources, and forms the basis
for LSST's global self-calibration method. Inspired by SDSS's uber-calibration procedure, the self-calibration
determines zeropoints by requiring that repeated measurements of non-variable stars must be self-consistent when
corrected for variations in atmospheric and instrumental bandpass shapes. This requirement constrains both the
instrument throughput and atmospheric extinction. The atmospheric and instrumental bandpass shapes will
be explicitly measured using auxiliary instrumentation. We describe the algorithm used, with special emphasis
on both the challenges of controlling systematic errors and the way such an approach interacts with the design
of the survey, and we discuss ongoing simulations of its performance.
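A minimal numerical sketch of the self-calibration step, under the simplifying assumption that each exposure contributes only a gray zeropoint (the bandpass-shape corrections described above are omitted), might look like:

```python
import numpy as np

# Toy sketch of the self-calibration idea: observed magnitudes of
# non-variable stars are modeled as m_obs = m_star + zp_exposure, and the
# zeropoints are recovered by least squares. Real ubercalibration also
# fits atmospheric/instrumental bandpass terms; those are omitted here.
rng = np.random.default_rng(0)
n_exp, n_star = 5, 20
true_zp = rng.normal(25.0, 0.1, n_exp)       # per-exposure zeropoints
true_mag = rng.uniform(16.0, 20.0, n_star)   # per-star true magnitudes

# Every star observed in every exposure, with 5 mmag measurement noise.
obs = true_mag[None, :] + true_zp[:, None] + rng.normal(0, 0.005, (n_exp, n_star))

# Build the design matrix for the linear model and solve.
rows, rhs = [], []
for i in range(n_exp):
    for j in range(n_star):
        row = np.zeros(n_exp + n_star)
        row[i] = 1.0          # zeropoint of exposure i
        row[n_exp + j] = 1.0  # magnitude of star j
        rows.append(row)
        rhs.append(obs[i, j])
A, b = np.array(rows), np.array(rhs)
# Fix the overall degeneracy (adding a constant to all zeropoints while
# subtracting it from all magnitudes) by pinning the first zeropoint.
A = np.vstack([A, np.eye(1, n_exp + n_star)])
b = np.append(b, true_zp[0])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print("zeropoint residuals (mag):", np.round(sol[:n_exp] - true_zp, 4))
```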
A survey program with multiple science goals will be driven by multiple technical requirements. On a ground-based
telescope, the variability of conditions introduces yet greater complexity. For a program that must be largely autonomous,
with minimal dwell time for efficiency, it may be quite difficult to foresee the achievable performance. Furthermore,
scheduling will likely involve self-referential constraints and appropriate optimization tools may not be available. The
LSST project faces these issues, and has designed and implemented an approach to performance analysis in its
Operations Simulator and associated post-processing packages. The Simulator has allowed the project to present detailed
performance predictions with a strong basis from the engineering design and measured site conditions. At present, the
Simulator is in regular use for engineering studies and science evaluation, and planning is underway for evolution to an
operations scheduling tool. We will describe the LSST experience, emphasizing the objectives, the accomplishments and
the lessons learned.
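As a toy illustration of the self-referential character of such scheduling (the merit function and field state below are invented for this sketch, not the Simulator's actual model), consider a greedy loop in which each decision changes the state that drives the next one:

```python
import math

# Toy greedy scheduler: at each step the highest-merit field is observed.
# The merit terms (airmass, time since last visit) are illustrative only.
fields = [{"id": k, "airmass": 1.0 + 0.1 * k, "hours_since_visit": 12 * k}
          for k in range(1, 5)]

def merit(field):
    # Favor fields that have gone longest unobserved, at low airmass.
    return field["hours_since_visit"] / math.pow(field["airmass"], 3)

for _ in range(3):
    best = max(fields, key=merit)
    print("observe field", best["id"], "merit=%.1f" % merit(best))
    # Self-referential constraint: observing the field resets the state
    # that drives the next scheduling decision.
    best["hours_since_visit"] = 0.0
```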
The Large Synoptic Survey Telescope (LSST) will continuously image the entire sky visible from Cerro Pachon
in northern Chile every 3-4 nights throughout the year. The LSST will provide data for a broad range of science
investigations that require better than 1% photometric precision across the sky (repeatability and uniformity)
and a similar accuracy of measured broadband color. The fast and persistent cadence of the LSST survey
will significantly improve the temporal sampling rate with which celestial events and motions are tracked. To
achieve these goals, and to optimally utilize the observing calendar, it will be necessary to obtain excellent
photometric calibration of data taken over a wide range of observing conditions - even those not normally
considered "photometric". To achieve this it will be necessary to routinely and accurately measure the full
optical passband that includes the atmosphere as well as the instrumental telescope and camera system. The
LSST mountain facility will include a new monochromatic dome illumination projector system to measure the
detailed wavelength dependence of the instrumental passband for each channel in the system. The facility will
also include an auxiliary spectroscopic telescope dedicated to measurement of atmospheric transparency at all
locations in the sky during LSST observing. In this paper, we describe these systems and present laboratory
and observational data that illustrate their performance.
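The composition of the full optical passband from the two measurements can be sketched as a simple product of transmission curves; the curves below are made-up placeholders, not measured LSST data:

```python
import numpy as np

# Sketch of composing the full optical passband as the product of the
# instrumental throughput (measured with the dome projector) and the
# atmospheric transmission (measured with the auxiliary telescope).
wavelength = np.linspace(320, 1050, 200)                     # nm
instrument = np.exp(-0.5 * ((wavelength - 620) / 150) ** 2)  # toy band shape
atmosphere = 1.0 - 0.3 * (320 / wavelength) ** 4             # crude Rayleigh-like loss

total_passband = instrument * atmosphere
print("peak total throughput: %.3f at %.0f nm"
      % (total_passband.max(), wavelength[total_passband.argmax()]))
```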
KEYWORDS: Large Synoptic Survey Telescope, Observatories, Astronomy, Solar system, Image processing, Cameras, Data centers, Telescopes, Databases, Space telescopes
The astronomical time domain is entering an era of unprecedented growth. LSST will join current and future surveys at
diverse wavelengths in exploring variable and transient celestial phenomena characterizing astrophysical domains from
the solar system to the edge of the observable universe. Adding to the large but relatively well-defined load of a project
of the scale of the Large Synoptic Survey Telescope will be many challenging issues of handling the dynamic empirical
interplay between LSST and contingent follow-up facilities worldwide. We discuss concerns unique to this telescope,
while exploring consequences common to emerging observational time domain paradigms.
KEYWORDS: Large Synoptic Survey Telescope, Image processing, C++, Astronomy, Sensors, Imaging systems, Image sensors, Data processing, Data modeling, Data archive systems
The LSST Data Management System is built on an open source software framework that has middleware and
application layers. The middleware layer provides capabilities to construct, configure, and manage pipelines on
clusters of processing nodes, and to manage the data the pipelines consume and produce. It is not in any way specific
to astronomical applications. The complementary application layer provides the building blocks for constructing
pipelines that process astronomical data, both in image and catalog forms. The application layer does not directly
depend upon the LSST middleware, and can readily be used with other middleware implementations. Both layers
have object oriented designs that make the creation of more specialized capabilities relatively easy through class
inheritance.
This paper outlines the structure of the LSST application framework and explores its usefulness for constructing
pipelines outside of the LSST context, two examples of which are discussed. The classes that the framework provides
are related within a domain model that is applicable to any astronomical pipeline that processes imaging data.
Specifically modeled are mosaic imaging sensors; the images from these sensors and the transformations that result
as they are processed from raw sensor readouts to final calibrated science products; and the wide variety of catalogs
that are produced by detecting and measuring astronomical objects in a stream of such images. The classes are
implemented in C++ with Python bindings provided so that pipelines can be constructed in any desired mixture of
C++ and Python.
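As a schematic of the inheritance-based extension pattern (the base class and stage here are invented for illustration and are not the LSST application framework's actual API), a specialized pipeline stage might be created as follows:

```python
# Hedged sketch of extending a pipeline framework by class inheritance;
# the classes are hypothetical, not the LSST framework's API.
class ImageStage:
    """Base class for a processing stage that transforms an image dict."""
    def run(self, image):
        raise NotImplementedError

class BiasSubtractStage(ImageStage):
    """Specialized capability created by subclassing, as the text describes."""
    def __init__(self, bias_level):
        self.bias_level = bias_level
    def run(self, image):
        image["pixels"] = [p - self.bias_level for p in image["pixels"]]
        return image

# Pipelines are then ordered lists of stages, mixable from any source.
pipeline = [BiasSubtractStage(bias_level=100)]
image = {"pixels": [1100, 1050, 1200]}
for stage in pipeline:
    image = stage.run(image)
print(image["pixels"])
```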
KEYWORDS: Data archive systems, Large Synoptic Survey Telescope, Data centers, Data storage, Calibration, Data communications, Data backup, Data acquisition, Imaging systems, Cameras
The LSST Data Management System (DMS) processes the incoming stream of images that the camera system generates
to produce transient alerts and to archive the raw images, periodically creates new calibration data products that other
processing functions will use, creates and archives an annual Data Release (a static self-consistent collection of data
products generated from all survey data taken from the date of survey initiation to the cutoff date for the Data Release),
and makes all LSST data available through an interface that uses community-based standards and facilitates user data
analysis and production of user-defined data products with supercomputing-scale resources.
This paper discusses DMS distributed processing and data, and DMS architecture and design, with an emphasis on the
particular technical challenges that must be met. The DMS publishes transient alerts in community-standard formats (e.g.
VOEvent) within 60 seconds of detection. The DMS processes and archives over 50 petabytes of exposures (over the 10-
year survey). Data Releases include catalogs of tens of trillions of detected sources and tens of billions of astronomical
objects, 2000-deep co-added exposures, and calibration products accurate to standards not achieved in wide-field survey
instruments to date. These Data Releases grow in size to tens of petabytes over the survey period. The expected data
access patterns drive the design of the database and data access services. Finally, the DMS permits interactive analysis
and provides nightly summary statistics describing DMS output quality and performance.
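A back-of-envelope check of the quoted archive scale, assuming roughly 300 usable observing nights per year (an assumption for this sketch, not an LSST figure):

```python
# Derive an approximate nightly raw-data rate from the survey total.
total_raw_pb = 50.0              # petabytes of exposures over the survey
survey_years = 10
nights_per_year = 300            # assumed usable observing nights per year

pb_per_night = total_raw_pb / (survey_years * nights_per_year)
print("~%.1f TB of raw exposures per observing night" % (pb_per_night * 1000))
```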
KEYWORDS: Large Synoptic Survey Telescope, Data modeling, Astronomy, Calibration, Observatories, Cameras, Data processing, Telescopes, Image quality, Point spread functions
LSST will have a Science Data Quality Assessment (SDQA) subsystem for the assessment of the data products that will
be produced during the course of a 10 yr survey. The LSST will produce unprecedented volumes of astronomical data as
it surveys the accessible sky every few nights. The SDQA subsystem will enable comparisons of the science data with
expectations from prior experience and models, and with established requirements for the survey. While analogous
systems have been built for previous large astronomical surveys, SDQA for LSST must meet a unique combination of
challenges. Chief among them will be the extraordinary data rate and volume, which restricts the bulk of the quality
computations to the automated processing stages, as revisiting the pixels for a post-facto evaluation is prohibitively
expensive. The identification of appropriate scientific metrics is driven by the breadth of the expected science, the scope
of the time-domain survey, the need to tap the widest possible pool of scientific expertise, and the historical tendency of
new quality metrics to be crafted and refined as experience grows. Prior experience suggests that contemplative, off-line
quality analyses are essential to distilling new automated quality metrics, so the SDQA architecture must support
integrability with a variety of custom and community-based tools, and be flexible enough to embrace evolving QA demands.
Finally, the time-domain nature of LSST means every exposure may be useful for some scientific purpose, so the model
of quality thresholds must be sufficiently rich to reflect the quality demands of diverse science aims.
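One way to picture a threshold model rich enough to differ by science aim (the metric names and limits below are hypothetical, not SDQA requirements):

```python
# Sketch of per-science-aim quality thresholds; an exposure failing one
# aim's limits may still be usable for another, as the text argues.
thresholds = {
    "transient_alerts": {"psf_fwhm_arcsec": 2.0, "zp_rms_mag": 0.05},
    "weak_lensing":     {"psf_fwhm_arcsec": 0.9, "zp_rms_mag": 0.01},
}

def usable_for(aim, measured):
    limits = thresholds[aim]
    return all(measured[k] <= v for k, v in limits.items())

exposure = {"psf_fwhm_arcsec": 1.4, "zp_rms_mag": 0.02}
for aim in thresholds:
    print(aim, "->", usable_for(aim, exposure))
```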
Science studies made with the Large Synoptic Survey Telescope will reach systematic limits in nearly all cases. Requirements for accurate photometric measurements are particularly challenging. The rapid cadence and pace of the LSST survey will be exploited by using celestial sources to monitor the stability and uniformity of photometric data. A new technique using a tunable laser is being developed to calibrate the wavelength dependence of the total telescope and camera system throughput. Spectroscopic measurements of atmospheric extinction and emission will be made continuously, allowing the broad-band optical flux observed in the instrument to be corrected to flux at the top of the atmosphere. Calibrations with celestial sources will be compared to instrumental and atmospheric calibrations.
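The correction of observed broad-band flux to the top of the atmosphere reduces, in its simplest magnitude form, to the standard extinction relation m_top = m_obs - kX; a minimal sketch with illustrative values of the extinction coefficient k and airmass X:

```python
# Minimal sketch of an atmospheric extinction correction; the coefficient
# and airmass values are illustrative assumptions.
k_extinction = 0.12   # mag per airmass in the band, from spectroscopy
airmass = 1.3         # airmass of the observation

def top_of_atmosphere_mag(m_observed):
    # Standard extinction correction: m_top = m_obs - k * X
    return m_observed - k_extinction * airmass

print("m_top = %.3f" % top_of_atmosphere_mag(17.500))
```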
The 8.4 m Large Synoptic Survey Telescope (LSST) is a wide-field telescope facility that will add a qualitatively new capability in astronomy. For the first time, the LSST will provide time-lapse digital imaging of faint astronomical objects across the entire sky. The LSST has been identified as a national scientific priority by diverse national panels, including multiple National Academy of Sciences committees. This judgment is based upon the LSST's ability to address some of the most pressing open questions in astronomy and fundamental physics, while driving advances in data-intensive science and computing. The LSST will provide unprecedented 3-dimensional maps of the mass distribution in the Universe, in addition to the traditional images of luminous stars and galaxies. These mass maps can be used to better understand the nature of the newly discovered and utterly mysterious Dark Energy that is driving the accelerating expansion of the Universe. The LSST will also provide a comprehensive census of our solar system, including potentially hazardous asteroids as small as 100 meters in size. The LSST facility consists of three major subsystems: 1) the telescope, 2) the camera, and 3) the data processing system. The baseline design for the LSST telescope is an 8.4 m 3-mirror design with a 3.5 degree field of view, resulting in an A-Omega product (etendue) of 302 deg² m². The camera consists of a 3-element transmissive corrector producing a 64 cm diameter flat focal plane. This focal plane will be populated with roughly 3 billion 10 μm pixels. The data processing system will include pipelines to monitor and assess the data quality, detect and classify transient events, and establish a large searchable object database. We report on the status of the designs for these three major LSST subsystems, along with the overall project structure and management.
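As an arithmetic check of the quoted etendue: a 3.5 degree circular field subtends about 9.6 deg², so an effective collecting area near 31 m² (a value assumed here to reproduce the quoted figure, after obscuration) gives roughly 302 deg² m²:

```python
import math

# Rough A-Omega (etendue) check for the numbers quoted above; the
# effective collecting area is an assumption chosen to reproduce them.
fov_diameter_deg = 3.5
omega_deg2 = math.pi * (fov_diameter_deg / 2) ** 2   # ~9.6 deg^2
effective_area_m2 = 31.4                             # assumed, after obscuration

print("etendue ~ %.0f deg^2 m^2" % (omega_deg2 * effective_area_m2))
```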
The MACHO experiment is searching for dark matter in the halo of the Galaxy by monitoring more than 50 million stars in the LMC, SMC, and Galactic bulge for gravitational microlensing events. The hardware consists of a 50 inch telescope, a two-color 32 megapixel CCD camera, and a network of computers. On clear nights the system generates up to 8 GB of raw data and 1 GB of reduced data. The computer system is responsible for all real-time control tasks, for data reduction, and for storing all data associated with each observation in a database. The subject of this paper is the software system that handles these functions. It is an integrated system, controlled by Petri nets, that consists of multiple processes communicating via mailboxes and a bulletin board. The system is highly automated, readily extensible, and incorporates flexible error recovery capabilities. It is implemented in C++ in a Unix environment.
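A minimal sketch of the mailbox/bulletin-board communication pattern described above, using Python threads in place of Unix processes (the names and structure are illustrative, not the MACHO system's code):

```python
import queue
import threading

# Shared state visible to all tasks (the "bulletin board") and a
# point-to-point message channel (a "mailbox").
bulletin_board = {}
mailbox = queue.Queue()

def reducer():
    obs = mailbox.get()              # wait for work from the controller
    bulletin_board[obs] = "reduced"  # post status for other tasks to read

worker = threading.Thread(target=reducer)
worker.start()
mailbox.put("observation-001")       # controller sends work to the reducer
worker.join()
print(bulletin_board)
```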
We introduce a biologically based method for motion detection and track initiation. The model consists of five hexagonally packed layers of single-compartment neurons. The model mimics some of the processing in the retina and has localized interactions between cells in each layer. To evaluate the gain from using a biological model versus a more conventional approach, we have compared the performance of three methods: simple intensity thresholding, the biologically inspired model, and an algorithm involving a truncated sequential probability ratio test. These methods were tested on a large set of real and simulated astronomical data and on a standard set of 50 images using targets at various SNRs. We discuss the importance of various aspects of the biological model for the overall performance on a target detection task.
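As a sketch of the simplest of the three compared methods, intensity thresholding against frame statistics (the 5-sigma cut and the injected signal are illustrative choices for this sketch, not the study's parameters):

```python
import numpy as np

# Simple intensity thresholding: flag pixels far above the frame's
# background statistics. Values here are illustrative only.
rng = np.random.default_rng(1)
frame = rng.normal(100.0, 5.0, (64, 64))   # background + noise
frame[32, 32] += 50.0                      # injected point-source signal

threshold = frame.mean() + 5.0 * frame.std()
detections = np.argwhere(frame > threshold)
print("pixels above threshold:", detections.tolist())
```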
We have developed an astronomical imaging system that incorporates a total of eight 2048 × 2048 pixel CCDs into two focal planes, to allow simultaneous imaging in two colors. Each focal plane comprises four 'edge-buttable' detector arrays, on custom Kovar mounts. The clocking and bias voltage levels for each CCD are independently adjustable, but all the CCDs are operated synchronously. The sixteen analog outputs (two per chip) are measured at 16 bits with commercially available correlated double sampling A/D converters. The resulting 74 MBytes of data per frame are transferred over fiber optic links into dual-ported VME memory. The total readout time is just over one minute. We obtain read noise ranging from 6.5 e- to 10 e- for the various channels when digitizing at 34 kpixels/sec, with full-well depths (MPP mode) of approximately 100,000 e- per 15 μm × 15 μm pixel. This instrument is currently being used in a search for gravitational microlensing from compact objects in our Galactic halo, using the newly refurbished 1.3 m telescope at the Mt. Stromlo Observatory, Australia.
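The quoted readout time of just over one minute follows directly from the stated digitization rate and channel count; a quick check:

```python
# Arithmetic check of the readout time, using the figures stated above
# (eight 2048 x 2048 CCDs, two outputs per chip, 34 kpix/s per channel).
pixels_per_ccd = 2048 * 2048
n_ccds = 8
n_channels = 16                  # two analog outputs per chip
rate_pixels_per_sec = 34_000     # 34 kpixels/sec per channel

pixels_per_channel = pixels_per_ccd * n_ccds / n_channels
print("readout time ~ %.0f s" % (pixels_per_channel / rate_pixels_per_sec))
```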
Lawrence Livermore National Laboratory (LLNL) has recently developed a wide-field-of-view (28° × 44°) camera for use as a star tracker navigational sensor. As for all such sensors, stray light rejection performance is critical. Due to the baffle dimensions dictated by the large field angles, the 2-part sunshade/baffle configuration commonly seen on space-borne telescopes is impractical. Meeting the required stray light rejection performance (a Point Source Transmittance, PST, of 10⁻⁷) with a 1-part baffle required iterative APART modeling (APART is an industry-standard stray light evaluation program), hardware testing, and mechanical design correction. This paper presents a chronology of lens and baffle improvements that resulted in meeting the stray light rejection goal outside the solar exclusion angle of the baffle stage. Comparisons with APART analyses are given, and future improvements in mechanical design are discussed. Stray light testing methods and associated experimental difficulties are presented.
Many applications require the ability to detect and track moving objects against moving backgrounds. If an object's signal is less than or comparable to the variations in the background, sophisticated techniques must be employed to detect the object. An analog retina model that adapts to the motion of the background, in order to enhance objects moving with a velocity different from that of the background, is presented. A computer simulation that preserves the analog nature of this model, and its application to real and simulated data, are described. The concept of an analog 'Z' focal plane implementation is also presented.
A prototype wide-field-of-view (WFOV) star tracker camera has been fabricated and tested for use in spacecraft navigation. The most unique feature of this device is its 28° × 44° FOV, which views a large enough sector of the sky to ensure the existence of at least 5 stars of mv = 4.5 or brighter in all viewing directions. The WFOV requirement, and the need to maximize both collection aperture (F/1.28) and spectral input band (0.4 to 1.1 μm) to meet the light-gathering needs for the dimmest star, have dictated the use of a novel concentric optical design, which employs a fiber optic faceplate field flattener. The main advantage of the WFOV configuration is the smaller star map required for position processing, which results in less processing power and faster matching. Additionally, a size and mass benefit is seen with a large FOV/smaller effective focal length (efl) sensor. Prototype hardware versions have included both image-intensified and un-intensified CCD cameras. Integration times of