The Rapid Transient Surveyor (RTS) is a proposed rapid-response, high-cadence adaptive optics (AO) facility for the UH 2.2-m telescope on Maunakea. RTS will uniquely address the need for high-acuity, sensitive near-infrared spectral follow-up observations of tens of thousands of objects in mere months by combining an excellent observing site, unmatched robotic observational efficiency, and an AO system that significantly increases both sensitivity and spatial resolving power. We will initially use RTS to obtain the infrared spectra of ∼4,000 Type Ia supernovae identified by the Asteroid Terrestrial-impact Last Alert System over a two-year period; these spectra will be crucial for precisely measuring distances and mapping the distribution of dark matter in the z < 0.1 universe. RTS will comprise an upgraded version of the Robo-AO laser AO system and will respond quickly to target-of-opportunity events, minimizing the time between discovery and characterization. RTS will acquire simultaneous multicolor images with an acuity of 0.07–0.10" across the entire visible spectrum (20% i′-band Strehl in median conditions) and <0.16" in the near infrared, and will detect companions at 0.5" at a contrast ratio of ∼500. The system will include a high-efficiency prism integral field unit spectrograph: R = 70–140 over a total bandpass of 840–1830 nm with an 8.7" by 6.0" field of view (0.15" spaxels). The AO correction boosts the infrared point-source sensitivity of the spectrograph against the sky background by a factor of seven for faint targets, giving the UH 2.2-m the H-band sensitivity of a 5.7-m telescope without AO.
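The equivalent-aperture claim above can be checked with a quick back-of-the-envelope calculation. A minimal sketch follows, assuming (as an illustration, not a statement of the authors' analysis) that point-source sensitivity scales with collecting area, i.e. with aperture diameter squared:

```python
# Under the D**2 (collecting-area) scaling assumption, a x7 boost in
# point-source sensitivity is equivalent to enlarging the aperture by
# a factor of sqrt(7).
import math

d_uh = 2.2                        # UH telescope aperture diameter (m)
gain = 7.0                        # quoted AO sensitivity boost
d_equiv = d_uh * math.sqrt(gain)  # equivalent non-AO aperture (m)
# d_equiv comes out near 5.8 m, consistent with the quoted 5.7-m figure
```

The small difference between 5.8 m and the quoted 5.7 m presumably reflects details of the real sensitivity scaling that this toy estimate ignores.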
The IfA and collaborators are embarking on a project to develop a 4-telescope synoptic survey instrument. While somewhat smaller than the 6.5-m-class telescope envisaged by the decadal review in its proposal for the LSST, this facility will nonetheless be able to accomplish many of the LSST science goals. In this paper we will describe the motivation for a 'distributed aperture' approach to the LSST; the current concept for Pan-STARRS -- a pilot project for the LSST proper -- along with its performance goals and science reach; and how the facility may be expanded.
The future of astronomy will be dominated by large and complex databases. Megapixel CMB maps, joint analyses of surveys across several wavelengths as envisioned in the planned National Virtual Observatory (NVO), and the TByte/day data rates of future surveys such as Pan-STARRS put stringent constraints on future data-analysis methods: they must achieve at least N log N scaling to remain viable in the long term. This warrants special attention to computational requirements, which were ignored during the initial development of current analysis tools in favor of statistical optimality. Even an optimal measurement, however, has residual errors due to statistical sample variance. Hence a suboptimal technique whose additional measurement error is significantly smaller than the unavoidable sample variance produces results nearly identical to those of a statistically optimal technique. For instance, for analyzing CMB maps I present a suboptimal alternative, indistinguishable from the standard optimal method with N^3 scaling, that can be rendered N log N with a hierarchical representation of the data -- a speed-up of roughly a trillion times over other methods. In this spirit I will present a set of novel algorithms and methods for spatial statistical analyses of future large astronomical databases, such as galaxy catalogs, megapixel CMB maps, or any point-source catalog.
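To make the hierarchical idea concrete: the N log N scaling typically comes from indexing the points in a spatial tree so that pair statistics can be accumulated over whole nodes rather than over all N^2 pairs. A minimal sketch using scipy's `cKDTree` (an illustrative stand-in, not the specific algorithm of this work) on a mock 2-D catalog:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(10_000, 2))  # mock 2-D point catalog

tree = cKDTree(points)                 # hierarchical spatial index
radii = np.linspace(0.01, 0.1, 10)     # separation scales of interest

# Cumulative pair counts within each radius via a dual-tree traversal;
# this is the tree-based analogue of brute-force O(N^2) pair counting.
# (Counts are of ordered pairs and include each point paired with itself.)
counts = tree.count_neighbors(tree, radii)

# Differential counts per annulus carry the raw clustering signal.
dd = np.diff(counts)
```

The same tree can be reused across all separation scales, which is where the bulk of the saving over repeated brute-force scans comes from.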
Datasets with tens of millions of galaxies present new challenges for the analysis of spatial clustering. We have built a framework that integrates a database of object catalogs, tools for creating masks of bad regions, and a fast (N log N) correlation code. This system has enabled unprecedented efficiency in carrying out the analysis of galaxy clustering in the SDSS catalog. A similar approach is used to compute the three-dimensional spatial clustering of galaxies on very large scales. We describe our strategy for estimating the effect of photometric errors using a database, and we discuss our efforts as an early example of data-intensive science. While it would have been possible to obtain these results without the framework we describe, performing such computations on the far larger datasets of the future will be infeasible without it.
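As a rough illustration of what such a fast correlation code computes (a toy sketch, not the authors' actual pipeline), the standard Landy-Szalay estimator of the two-point correlation function can be built from tree-based pair counts between the data and a random catalog covering the same footprint:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(5_000, 2))  # mock "galaxy" positions
rand = rng.uniform(0.0, 1.0, size=(5_000, 2))  # random catalog, same mask

dt, rt = cKDTree(data), cKDTree(rand)
edges = np.linspace(0.02, 0.2, 8)  # separation bin edges

def pair_counts(a, b, edges):
    # cumulative tree-based counts at each edge -> counts per bin
    return np.diff(a.count_neighbors(b, edges))

dd = pair_counts(dt, dt, edges)  # data-data pairs
rr = pair_counts(rt, rt, edges)  # random-random pairs
dr = pair_counts(dt, rt, edges)  # data-random cross pairs

# Landy-Szalay estimator (equal-size catalogs, so no normalization
# factors needed); for this unclustered mock "data" it is ~0 in all bins.
w = (dd - 2.0 * dr + rr) / rr
```

The random catalog plays the role of the bad-region masks mentioned above: survey geometry and holes enter the estimator only through where the randoms are allowed to fall.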