Cloud computing offers unparalleled flexibility, a constantly expanding set of “Infrastructure as a Service” capabilities, resource elasticity, and security isolation. One of the most significant barriers to wholesale adoption of cloud infrastructure in astronomy is the cost of hot storage for large datasets, particularly for Rubin, a Big Data project sized at 0.5 Exabytes (500 Petabytes) over the duration of its ten-year mission. We plan to address this with a “hybrid” model in which user-facing services are deployed on Google Cloud while the majority of data holdings reside in our on-premises Data Facility at SLAC. We discuss the opportunities, status, risks, and technical challenges of this approach.
The Vera C. Rubin Observatory’s Data Butler provides a way for science users to retrieve data without knowing where or how it is stored. To support 10,000 science users in a hybrid cloud environment, we are modifying the Data Butler to use a client/server architecture, allowing us to share authentication and authorization controls with the Rubin Science Platform and to more easily support standard tooling for scaling up backend services. In this paper we describe the changes being made to support this and some of the difficulties encountered.
The Rubin Observatory’s Data Butler is designed to allow data file location and file formats to be abstracted away from the people writing the science pipeline algorithms. The Butler works in conjunction with the workflow graph builder to allow pipelines to be constructed from the algorithmic tasks. These pipelines can be executed at scale using object stores and multi-node clusters, or on a laptop using a local file system. The Butler and pipeline system are now in daily use during Rubin construction and early operations.
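The abstraction the Butler provides can be illustrated with a minimal sketch. This is a toy analogue, not the actual `lsst.daf.butler` API: all class and method names below are hypothetical. The key idea is that callers name *what* they want (a dataset type plus a data ID), while the butler's registry decides *where* the data lives and *how* it is serialized, so pipeline code never touches file paths or formats.

```python
import json
import os
import tempfile
from dataclasses import dataclass


@dataclass(frozen=True)
class DatasetRef:
    """Immutable key: dataset type plus a sorted data-ID tuple."""
    dataset_type: str
    data_id: tuple


class MiniButler:
    """Toy data-abstraction layer (hypothetical, for illustration only)."""

    def __init__(self, root):
        self.root = root
        self._registry = {}  # DatasetRef -> file path (hidden from callers)

    def put(self, obj, dataset_type, **data_id):
        ref = DatasetRef(dataset_type, tuple(sorted(data_id.items())))
        # The butler, not the caller, chooses the location and format.
        path = os.path.join(self.root, f"{dataset_type}_{len(self._registry)}.json")
        with open(path, "w") as f:
            json.dump(obj, f)
        self._registry[ref] = path
        return ref

    def get(self, dataset_type, **data_id):
        ref = DatasetRef(dataset_type, tuple(sorted(data_id.items())))
        with open(self._registry[ref]) as f:
            return json.load(f)


butler = MiniButler(tempfile.mkdtemp())
butler.put({"exposure_time": 30.0}, "raw_metadata", visit=903334, detector=10)
meta = butler.get("raw_metadata", visit=903334, detector=10)
print(meta["exposure_time"])  # -> 30.0
```

Because the storage backend is hidden behind `put`/`get`, the same calling code could run against a local file system on a laptop or an object store on a cluster, which is the portability property the abstract describes.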
The Rubin Observatory Commissioning Camera (ComCam) is a scaled-down (144 Megapixel) version of the 3.2 Gigapixel LSSTCam, which will conduct the Legacy Survey of Space and Time (LSST), currently scheduled to begin in 2024. The purpose of ComCam is to verify the LSSTCam interfaces with the major subsystems of the observatory, as well as to evaluate the overall performance of the system, prior to the start of commissioning of the LSSTCam hardware on the telescope. With the delivery of all the telescope components to the summit site by 2020, the team has already started high-level interface verification, exercising the system in a steady-state mode similar to that expected during the operations phase of the project. Notable activities include a simulated “slew and expose” sequence that involves moving the optical components, a settling time to account for the dynamical environment on the telescope, and then taking an actual sequence of images with ComCam. Another critical effort is verifying the performance of the camera refrigeration system and testing the operational aspects of running such a system on a moving telescope in 2022. Here we present the status of the interface verification and the planned sequence of activities culminating in on-sky performance testing during the early-commissioning phase.
The construction of the Vera C. Rubin Observatory is well underway, and when completed the telescope will carry out a precision photometric survey, scanning the entire sky visible from Chile every three days. The photometric performance of the survey is expected to be dominated by systematics; therefore, multiple calibration systems have been designed to measure, characterize and compensate for these effects, including a dedicated telescope and instrument to measure variations in the atmospheric transmission over the LSST bandpasses. Now undergoing commissioning, the Auxiliary Telescope system is serving as a pathfinder for the development of the Rubin Control systems. This paper presents the current commissioning status of the telescope and control software, and discusses the lessons learned which are applicable to other observatories.
KEYWORDS: Systems modeling, Large Synoptic Survey Telescope, Data modeling, Systems engineering, Integrated modeling, Model-based design, Safety, Telescopes
This paper describes the evolution of the processes, methodologies and tools developed and utilized on the Large Synoptic Survey Telescope (LSST) project that provide a complete end-to-end environment for verification planning, execution, and reporting. LSST utilizes No Magic’s MagicDraw Cameo Systems Modeler tool as the core tool for systems modeling, a Jira-based test case/test procedure/test plan tool called Test Management for Jira for verification execution, and Intercax’s Syndeia tool for bi-directional synchronization of data between Cameo Systems Modeler and Jira. Several additional supporting tools and services are also described to round out a complete solution. The paper describes the project’s needs, overall software platform architecture, and customizations developed to provide the end-to-end solution.
The Large Synoptic Survey Telescope (LSST) is an 8.4m optical survey telescope being constructed on Cerro Pachón in Chile. The data management system being developed must be able to process the nightly alert data, 20,000 expected transient alerts per minute, in near real time, and construct annual data releases at the petabyte scale. The development team consists of more than 90 people working in six different sites across the US developing an integrated set of software to realize the LSST science goals. In this paper we discuss our agile software development methodology and our API and developer decision making process. We also discuss the software tools that we use for continuous integration and deployment.
KEYWORDS: Systems modeling, Systems engineering, Large Synoptic Survey Telescope, Observatories, Connectors, Data processing, Data archive systems, Astronomy, Camera shutters, Information technology
We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST’s Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders’ needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they produce self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language including the creation of a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to organizational entities tasked with carrying them out; and 4) the organization structure required to successfully execute the operational survey. Our approach allows for continual refinement utilizing the systems engineering spiral method to expose finer levels of detail as necessary.
For example, the bottom-up, Use Case-driven approach will be deployed in the future to develop the detailed work procedures required to successfully execute each operational activity.
Construction of the Large Synoptic Survey Telescope system involves several different organizations, a situation that poses many challenges when it comes time to integrate the software components. To ensure commonality for the purposes of usability, maintainability, and robustness, the LSST software teams have agreed on the following for system software components: a summary state machine, a manner of managing settings, a flexible solution for reliably specifying controller/controllee relationships as needed, and a paradigm for responding to and communicating alarms. This paper describes these agreed solutions and the factors that motivated them.
Tim Jenness, James Bosch, Russell Owen, John Parejko, Jonathan Sick, John Swinbank, Miguel de Val-Borro, Gregory Dubois-Felsmann, K.-T. Lim, Robert Lupton, Pim Schellart, K. Krughoff, Erik Tollerud
The Large Synoptic Survey Telescope (LSST) will be an 8.4m optical survey telescope sited in Chile and capable of imaging the entire sky twice a week. The data rate of approximately 15TB per night and the requirements to both issue alerts on transient sources within 60 seconds of observing and create annual data releases means that automated data management systems and data processing pipelines are a key deliverable of the LSST construction project. The LSST data management software has been in development since 2004 and is based on a C++ core with a Python control layer. The software consists of nearly a quarter of a million lines of code covering the system from fundamental WCS and table libraries to pipeline environments and distributed process execution. The Astropy project began in 2011 as an attempt to bring together disparate open source Python projects and build a core standard infrastructure that can be used and built upon by the astronomy community. This project has been phenomenally successful since it began and has grown to be the de facto standard for Python software in astronomy. Astropy brings with it considerable expectations from the community on how astronomy Python software should be developed, and it is clear that by the time LSST is fully operational in the 2020s many of the prospective users of the LSST software stack will expect it to be fully interoperable with Astropy. In this paper we describe the overlap between the LSST science pipeline software and Astropy software and investigate areas where the LSST software provides new functionality. We also discuss the possibilities of re-engineering the LSST science pipeline software to build upon Astropy, including the option of contributing affiliated packages.
The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint effort of NOAO and the Department of Computer Science at the University of Arizona to build prototype software to process alerts from time-domain surveys, especially LSST, to identify those alerts that must be followed up immediately. Value is added by annotating incoming alerts with existing information from previous surveys and compilations across the electromagnetic spectrum and from the history of past alerts. Comparison against a knowledge repository of properties and features of known or predicted kinds of variable phenomena is used for categorization. The architecture and algorithms being employed are described.
We describe the Short Wavelength Camera (SWCam) for the CCAT observatory including the primary science drivers, the coupling of the science drivers to the instrument requirements, the resulting implementation of the design, and its performance expectations at first light. CCAT is a 25 m submillimeter telescope planned to operate at 5600 meters, near the summit of Cerro Chajnantor in the Atacama Desert in northern Chile. CCAT is designed to give a total wave front error of 12.5 μm rms, so that combined with its high and exceptionally dry site, the facility will provide unsurpassed point source sensitivity deep into the short submillimeter bands to wavelengths as short as the 200 μm telluric window. The SWCam system consists of 7 sub-cameras that address 4 different telluric windows: 4 sub-cameras at 350 μm, 1 at 450 μm, 1 at 850 μm, and 1 at 2 mm wavelength. Each sub-camera has a 6’ diameter field of view, so that the total instantaneous field of view for SWCam is equivalent to a 16’ diameter circle. Each focal plane is populated with near unit filling factor arrays of Lumped Element Kinetic Inductance Detectors (LEKIDs) with pixels scaled to subtend a solid angle of (λ/D)² on the sky. The total pixel count is 57,160. We expect background limited performance at each wavelength, and to be able to map <35 deg² of sky to 5σ on the confusion noise at each wavelength per year with this first light instrument. Our primary science goal is to resolve the Cosmic Far-IR Background (CIRB) in our four colors so that we may explore the star and galaxy formation history of the Universe extending to within 500 million years of the Big Bang. CCAT's large and high-accuracy aperture, its fast slewing speed, use of instruments with large format arrays, and being located at a superb site enables mapping speeds of up to three orders of magnitude larger than contemporary or near future facilities and makes it uniquely sensitive, especially in the short submm bands.
The instrument’s twin focal planes, each with over 5000 superconducting Transition Edge Sensors (TES) that work simultaneously at 450 and 850 microns, are producing excellent science results and in particular a unique series of JCMT legacy surveys. In this paper we give an update on the performance of the instrument over the past two years of science operations and present the results of a study into the noise properties of the TES arrays. We highlight changes that have been implemented to increase the efficiency and performance of SCUBA-2 and discuss the potential for future enhancements.
The James Clerk Maxwell Telescope (JCMT) is the largest single-dish submillimetre telescope in the world, and throughout its lifetime the volume and impact of its science output have steadily increased. A key factor for this continuing productivity is an ever-evolving approach to optimising operations, data acquisition, and science product pipelines and archives. The JCMT was one of the first common-user telescopes to adopt flexible scheduling in 2003, and its impact over a decade of observing will be presented. The introduction of an advanced data-reduction pipeline played an integral role, both for fast real-time reduction during observing, and for science-grade reduction in support of individual projects, legacy surveys, and the JCMT Science Archive. More recently, these foundations have facilitated the commencement of remote observing in addition to traditional on-site operations to further increase on-sky science time. The contribution of highly-trained and engaged operators, support and technical staff to efficient operations will be described. The long-term returns of this evolution are presented here, noting they were achieved in the face of external pressures for leaner operating budgets and reduced staffing levels. In an era when visiting observers are being phased out of many observatories, we argue that maintaining a critical level of observer participation is vital to improving and maintaining scientific productivity and facility longevity.
KEYWORDS: Large Synoptic Survey Telescope, Astronomy, Prototyping, Data modeling, Process modeling, Observatories, Electromagnetism, Telescopes, Galactic astronomy, Algorithm development
The Arizona-NOAO Temporal Analysis and Response to Events System (ANTARES) is a joint project of the National Optical Astronomy Observatory and the Department of Computer Science at the University of Arizona. The goal is to build the software infrastructure necessary to process and filter alerts produced by time-domain surveys, with the ultimate source of such alerts being the Large Synoptic Survey Telescope (LSST). The ANTARES broker will add value to alerts by annotating them with information from external sources such as previous surveys from across the electromagnetic spectrum. In addition, the temporal history of annotated alerts will provide further annotation for analysis. These alerts will go through a cascade of filters to select interesting candidates. For the prototype, ‘interesting’ is defined as the rarest or most unusual alert, but future systems will accommodate multiple filtering goals. The system is designed to be flexible, allowing users to access the stream at multiple points throughout the process, and to insert custom filters where necessary. We describe the basic architecture of ANTARES and the principles that will guide development and implementation.
CCAT will be a 25m diameter sub-millimeter telescope capable of operating in the 0.2 to 2.1mm wavelength range. It will be located at an altitude of 5600m on Cerro Chajnantor in northern Chile near the ALMA site. The anticipated first generation instruments include large format (60,000) kinetic inductance detector (KID) cameras, a large format heterodyne array and a direct detection multi-object spectrometer. The paper describes the architecture of the CCAT software and the development strategy.
KEYWORDS: Data archive systems, Telescopes, Astronomy, Heterodyning, Calibration, Data modeling, Astronomical telescopes, Signal to noise ratio, Space telescopes, Observatories
The JCMT Science Archive is a collaboration between the James Clerk Maxwell Telescope and the Canadian Astronomy Data Centre to provide access to raw and reduced data from SCUBA-2 and the telescope’s heterodyne instruments. It was designed to include a range of advanced data products, created either by external groups, such as the JCMT Legacy Survey teams, or by the JCMT staff at the Joint Astronomy Centre. We are currently developing the archive to include a set of advanced data products which combine all of the publicly available data. We have developed a sky tiling scheme based on HEALPix tiles to allow us to construct co-added maps and data cubes on a well-defined grid. There will also be source catalogs both of regions of extended emission and the compact sources detected within these regions.
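The benefit of a well-defined tiling scheme is that every observation of the same patch of sky maps to the same tile index, so co-adds can accumulate on a fixed grid. The sketch below is a deliberately crude equirectangular analogue, not real HEALPix indexing (HEALPix tiles are equal-area; this grid is not), and the function name is invented for illustration:

```python
from collections import defaultdict


def toy_tile_index(ra_deg, dec_deg, tiles_per_side=64):
    """Toy equirectangular tiling -- a hypothetical stand-in for a
    HEALPix ang2pix call. Maps (RA, Dec) to an integer tile index."""
    x = int((ra_deg % 360.0) / 360.0 * tiles_per_side)
    y = int((dec_deg + 90.0) / 180.0 * tiles_per_side)
    # Clamp the poles/meridian edge cases into the last tile.
    x = min(x, tiles_per_side - 1)
    y = min(y, tiles_per_side - 1)
    return y * tiles_per_side + x


# Observations landing in the same tile are co-added on the same grid.
coadds = defaultdict(list)
observations = [(10.2, -5.0, 1.1), (10.3, -5.1, 0.9), (200.0, 40.0, 2.0)]
for ra, dec, flux in observations:
    coadds[toy_tile_index(ra, dec)].append(flux)

averaged = {tile: sum(v) / len(v) for tile, v in coadds.items()}
print(len(averaged))  # -> 2 tiles: the first two observations share one
```

The same accumulate-by-tile-index pattern scales to cubes and catalogs: anything keyed by the tile index can be combined incrementally as new public data arrive.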
SCUBA-2 is the largest submillimetre wide-field bolometric camera ever built. This 43 square arcminute field-of-view instrument operates at two wavelengths (850 and 450 microns) and has been installed on the James Clerk Maxwell Telescope on Mauna Kea, Hawaii. SCUBA-2 has been successfully commissioned and operational for general science since October 2011. This paper presents an overview of the on-sky performance of the instrument during and since commissioning in mid-2011. The on-sky noise characteristics and NEPs of the 450 μm and 850 μm arrays, with average yields of approximately 3400 bolometers at each wavelength, will be shown. The observing modes of the instrument and the on-sky calibration techniques are described. The culmination of these efforts has resulted in a scientifically powerful mapping camera with sensitivities that allow a square degree of sky to be mapped to 10 mJy/beam rms at 850 μm in 2 hours and 60 mJy/beam rms at 450 μm in 5 hours in the best weather.
KEYWORDS: Computing systems, Control systems, Data acquisition, Observatories, Telescopes, Data archive systems, Astronomy, Data communications, Software development, Data processing
The high data rates and unique operating modes of the SCUBA-2 instrument made it especially challenging to integrate with the existing JCMT Observatory Control System (OCS). Thanks to some forethought by the original designers of the OCS, who had envisioned a SCUBA-2-like instrument years before it became reality, the JCMT was already being coordinated by a versatile Real Time Sequencer (RTS). The timing pulses from the RTS are fanned out to all of the SCUBA-2 Multi Channel Electronics (MCE) boxes, allowing precision timing of each data sample. SCUBA-2 data handling and OCS communications are split into two tasks: one performs the actual data acquisition and file writing, while the other communicates with the OCS through DRAMA. These two tasks talk to each other via shared memory and semaphores. It is possible to swap back and forth between heterodyne and SCUBA-2 observing simply by selecting an observation for a particular instrument. This paper also covers the changes made to the existing OCS in order to integrate it with the new SCUBA-2-specific software.
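The two-task split described above, with one process acquiring data and another forwarding it, coordinated through shared memory and semaphores, can be sketched generically. This is an illustrative Python analogue of the pattern, not the actual SCUBA-2 C code, and the queue stands in for the DRAMA message link:

```python
import multiprocessing as mp


def acquirer(buf, ready):
    """Acquisition task: write a 'frame' into shared memory, then signal."""
    for i in range(3):
        buf[i] = i * 100          # pretend these are data samples
    ready.release()               # semaphore: tell the comms task data is ready


def comms_task(buf, ready, out):
    """Comms task: wait on the semaphore, then forward the frame
    (the queue stands in for a DRAMA message to the OCS)."""
    ready.acquire()               # block until the acquirer signals
    out.put(list(buf))


if __name__ == "__main__":
    buf = mp.Array("i", 3)        # shared memory segment seen by both tasks
    ready = mp.Semaphore(0)       # starts at 0: comms task must wait
    out = mp.Queue()
    c = mp.Process(target=comms_task, args=(buf, ready, out))
    a = mp.Process(target=acquirer, args=(buf, ready))
    c.start()
    a.start()
    print(out.get())              # -> [0, 100, 200]
    a.join()
    c.join()
```

Splitting acquisition from communication this way keeps the time-critical writer free of protocol stalls: the acquirer never blocks on the OCS link, only on its own buffer.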
Commissioning of SCUBA-2 included a program of skydips and observations of calibration sources intended to be folded into regular observing as standard methods of source flux calibration and to monitor atmospheric opacity and stability. During commissioning, it was found that these methods could also be used to characterise the fundamental instrument response to sky noise and astronomical signals. Novel techniques for analysing on-sky performance and atmospheric conditions are presented, along with results from the calibration observations and skydips.
This paper describes the key design features and performance of HARP, an innovative heterodyne focal-plane array receiver designed and built to operate in the submillimetre on the James Clerk Maxwell Telescope (JCMT) in Hawaii. The 4x4-element array uses SIS detectors and is the first submillimetre spectral imaging system on the JCMT. HARP provides three-dimensional imaging capability with high sensitivity at 325-375 GHz and affords significantly improved productivity in terms of mapping speed. HARP was designed and built as a collaborative project between the Cavendish Astrophysics Group in Cambridge, UK; the UK Astronomy Technology Centre in Edinburgh, UK; the Herzberg Institute of Astrophysics in Canada; and the Joint Astronomy Centre in Hawaii. SIS devices for the mixers were fabricated to a Cavendish Astrophysics Group design at the Delft University of Technology in the Netherlands. Working in conjunction with the new Auto Correlation Spectral Imaging System (ACSIS), first light with HARP was achieved in December 2005. HARP combines a number of interesting features across all elements of the design; we present key performance characteristics and images of astronomical observations obtained during commissioning.
SCUBA-2 is an innovative 10,000 pixel submillimeter camera due to be delivered to the James Clerk Maxwell Telescope in late 2006. The camera is expected to revolutionize submillimeter astronomy in terms of the ability to carry out wide-field surveys to unprecedented depths addressing key questions relating to the origins of galaxies, stars and planets. This paper presents an update on the project with particular emphasis on the laboratory commissioning of the instrument. The assembly and integration will be described as well as the measured thermal performance of the instrument. A summary of the performance results will be presented from the TES bolometer arrays, which come complete with in-focal plane SQUID amplifiers and multiplexed readouts, and are cooled to 100mK by a liquid cryogen-free dilution refrigerator. Considerable emphasis has also been placed on the operating modes of the instrument and the "common-user" aspect of the user interface and data reduction pipeline. These areas will also be described in the paper.
In the last few years the ubiquitous availability of high bandwidth networks has changed the way both robotic and non-robotic telescopes operate, with single isolated telescopes being integrated into expanding "smart" telescope networks that can span continents and respond to transient events in seconds. The Heterogeneous Telescope Networks (HTN)* Consortium represents a number of major research groups in the field of robotic telescopes, and together we are proposing a standards based approach to providing interoperability between the existing proprietary telescope networks. We further propose standards for interoperability, and integration with, the emerging Virtual Observatory.
We present the results of the first interoperability meeting, held last year, and discuss the protocol and transport standards agreed at the meeting, which address the complex issue of how to optimally schedule observations on geographically distributed resources. We discuss a free-market approach to this scheduling problem, which must initially be based on ad-hoc agreements between the participants in the network, but which may eventually expand into an electronic market for the exchange of telescope time.
Linking ground based telescopes with astronomical satellites, and using the emerging field of intelligent agent architectures to provide crucial autonomous decision making in software, we have combined data archives and research class robotic telescopes along with distributed computing nodes to build an ad-hoc peer-to-peer heterogeneous network of resources. The eSTAR Project* uses intelligent agent technologies to carry out resource discovery, submit observation requests and analyze the reduced data returned from a meta-network of robotic telescopes. We present the current operations paradigm of the eSTAR network and describe the direction in which the project intends to develop over the next several years. We also discuss the challenges facing the project, including the very real sociological one of user acceptance.
The Joint Astronomy Centre operates two telescopes at the Mauna Kea Observatory: the James Clerk Maxwell Telescope, operating in the submillimetre, and the United Kingdom Infrared Telescope, operating in the near and thermal infrared. Both wavelength regimes benefit from the ability to schedule observations flexibly according to observing conditions, albeit via somewhat different "site quality" criteria. Both UKIRT and JCMT now operate completely flexible schedules. These operations are based on telescope hardware which can quickly switch between observing modes, and on a comprehensive suite of software (ORAC/OMP) which handles observing preparation by remote PIs, observation submission into the summit database, conditions-based programme selection at the summit, pipeline data reduction for all observing modes, and instant data quality feedback to the PI who may or may not be remote from the telescope. This paper describes the flexible scheduling model and presents science statistics for the first complete year of UKIRT and JCMT observing under the combined system.
The James Clerk Maxwell Telescope (JCMT), the world's largest sub-mm telescope, will soon be switching operations from a VAX/VMS based control system to a new, Linux-based, Observatory Control System (OCS). A critical part of the OCS is the set of tasks that are associated with the observation queue and the observing recipe sequencer: 1) the JCMT observation queue task, 2) the JCMT instrument task, 3) the JCMT Observation Sequencer (JOS), and 4) the OCS console task. The JCMT observation queue task serves as a staging area for observations that have been translated from the observer's science program into a form suitable for the various OCS subsystems. The queue task operates by sending the observation at the head of the queue to the JCMT instrument task and then waits for the astronomer to accept the data before removing the observation from the queue. The JCMT instrument task is responsible for running up the set of tasks required to observe with a particular instrument at the JCMT and passing the observation on to the JOS. The JOS is responsible for executing the observing recipe, pausing/continuing the recipe when commanded, and prematurely ending or aborting the observation when commanded. The OCS console task provides the user with a GUI window with which they can control and monitor the observation queue and the observation itself. This paper shows where the observation queue and recipe sequencer fit into the JCMT OCS, presents the design decisions that resulted in the tasks being structured as they are, describes the external interfaces of the four tasks, and details the interaction between the tasks.
The eSTAR Project uses intelligent agent technologies to carry out resource discovery, submit observation requests and analyze the reduced data returned from a network of robotic telescopes in an observational grid. The agents are capable of data mining and cross-correlation tasks using on-line catalogues and databases and, if necessary, requesting additional data and follow-up observations from the telescopes on the network. We discuss how the maturing agent technologies can be used both to provide rapid followup to time critical events, and for long term monitoring of known sources, utilising the available resources in an intelligent manner.
The JCMT, the world's largest sub-mm telescope, has had essentially the same VAX/VMS based control system since it was commissioned. For the next generation of instrumentation we are implementing a new Unix/VxWorks based system, based on the successful ORAC system that was recently released on UKIRT.
The system is now entering the integration and testing phase. This paper gives a broad overview of the system architecture and includes some discussion on the choices made. (Other papers in this conference cover some areas in more detail). The basic philosophy is to control the sub-systems with a small and simple set of commands, but passing detailed XML configuration descriptions along with the commands to give the flexibility required. The XML files can be passed between various layers in the system without interpretation, and so simplify the design enormously. This has all been made possible by the adoption of an Observation Preparation Tool, which essentially serves as an intelligent XML editor.
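The "small command set plus opaque XML payload" pattern described here can be sketched as follows. The element and attribute names are hypothetical, invented for illustration; real JCMT OCS configurations are far richer. The point is that intermediate layers forward the XML verbatim, and only the target subsystem ever parses it:

```python
import xml.etree.ElementTree as ET

# A configuration produced by something like an Observation Preparation Tool
# (hypothetical schema, for illustration only).
CONFIG_XML = """<OCS_CONFIG>
  <FRONTEND receiver="HARP" sideband="LSB"/>
  <SPECTROMETER bandwidth_mhz="250" channels="8192"/>
</OCS_CONFIG>"""


def middle_layer(command, payload):
    # Intermediate layers pass the XML through without interpretation --
    # they only understand the small command vocabulary.
    return subsystem(command, payload)


def subsystem(command, payload):
    # Only the final subsystem parses the configuration it was handed.
    if command == "CONFIGURE":
        cfg = ET.fromstring(payload)
        return {child.tag: dict(child.attrib) for child in cfg}
    raise ValueError(f"unknown command: {command}")


result = middle_layer("CONFIGURE", CONFIG_XML)
print(result["SPECTROMETER"]["channels"])  # -> 8192
```

Because the middle layers never parse the payload, adding a new instrument parameter changes only the preparation tool and the one subsystem that consumes it, which is the simplification the abstract credits for the design.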
UKIRT and JCMT, two highly heterogeneous telescopes, have embarked on several joint software projects covering all areas of observatory operations, such as observation preparation and scheduling, telescope control and data reduction. In this paper we briefly explain the processes by which we have arrived at such a large body of shared code and discuss our experience with developing telescope-portable software and code re-use.
The Submillimeter Common-User Bolometer Array (SCUBA) is a new continuum camera operating on the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii. It consists of two arrays of bolometric detectors; a 91 pixel 350/450 micron array and a 37 pixel 750/850 micron array. Both arrays can be used simultaneously and have a field-of-view of approximately 2.4 arcminutes in diameter on the sky. Ideally, performance should be limited solely by the photon noise from the sky background at all wavelengths of operation. However, observations at submillimeter wavelengths are hampered by 'sky-noise' which is caused by spatial and temporal fluctuations in the emissivity of the atmosphere above the telescope. These variations occur in atmospheric cells that are larger than the array diameter, and so it is expected that the resultant noise will be correlated across the array and, possibly, at different wavelengths. In this paper, we describe our initial investigations into the presence of sky-noise for all the SCUBA observing modes, and explain our current technique for removing it from the data.
The Submillimeter Common-User Bolometer Array (SCUBA) is one of a new generation of cameras designed to operate in the submillimeter waveband. The instrument has a wide wavelength range covering all the atmospheric transmission windows between 300 and 2000 micrometer. In the heart of the instrument are two arrays of bolometers optimized for the short (350/450 micrometer) and long (750/850 micrometer) wavelength ends of the submillimeter spectrum. The two arrays can be used simultaneously, giving a unique dual-wavelength capability, and have a 2.3 arc-minute field of view on the sky. Background-limited performance is achieved by cooling the arrays to below 100 mK. SCUBA has now been in active service for over a year, and has already made substantial breakthroughs in many areas of astronomy. In this paper we present an overview of the performance of SCUBA during the commissioning phase on the James Clerk Maxwell Telescope (JCMT).