As diversity continues to grow in astronomy, creating working environments that are equally beneficial to all employees is imperative. Diversity in astronomical observatories is evident in a number of employee characteristics, including gender, race/ethnicity, and age.
In June 2017, ESO created its Diversity and Inclusion Committee, which gathers employees from its different sites and with a variety of backgrounds.
We focus here on the status of diversity and on strategies for developing a skilled and diverse operational workforce at the ESO observatories.
ESO introduced a User Portal for its scientific services in November 2007. Registered users have a central entry point for the Observatory's offerings, the extent of which depends on each user's roles. The project faced and overcame a number of hurdles between inception and deployment, and ESO learned several useful lessons along the way. The most significant challenges were not only technical in nature; organizational and coordination issues took a significant toll as well. We also outline the project's roadmap for the future.
The European Southern Observatory (ESO) is in the process of creating a central access point for all services offered to its user community via the Web. That gateway, called the User Portal, will provide registered users with a personalized set of service access points, the actual set depending on each user's privileges.
Correspondence between users and ESO will take place by way of "profiles", that is, contact information. Each user may have several active profiles, so that an investigator may choose, for instance, whether their data should be delivered to their own address or to a collaborator's.
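The profile mechanism described above can be sketched as follows. This is an illustrative model only; the class and member names (`Profile`, `PortalUser`, `deliverTo`) are assumptions, not ESO's actual data model.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only; names and fields are assumptions.
class Profile {
    final String label;   // e.g. "home institute", "collaborator"
    final String email;   // delivery address for this profile
    boolean active = true;

    Profile(String label, String email) {
        this.label = label;
        this.email = email;
    }
}

class PortalUser {
    private final List<Profile> profiles = new ArrayList<>();
    private Profile deliveryProfile;   // where data products are sent

    void addProfile(Profile p) {
        profiles.add(p);
        if (deliveryProfile == null) deliveryProfile = p;
    }

    /** Select which active profile receives data deliveries. */
    void deliverTo(String label) {
        for (Profile p : profiles) {
            if (p.active && p.label.equals(label)) {
                deliveryProfile = p;
                return;
            }
        }
        throw new IllegalArgumentException("no active profile: " + label);
    }

    String deliveryAddress() { return deliveryProfile.email; }
}
```

The point of the sketch is that the delivery target is a property of the user's chosen profile, not of the user record itself, so redirecting data to a collaborator is a one-line switch.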
To application developers, the portal will offer authentication and authorization services, either via database queries or an LDAP server.
The User Portal is being developed as a Web application using Java-based technology, including servlets and JSPs.
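The authentication and authorization service offered to application developers might look roughly like the following sketch. An in-memory map stands in here for the database or LDAP back end; the interface and class names are hypothetical, not the portal's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch; a real deployment would back this interface
// with a database query or an LDAP bind rather than an in-memory map.
interface Authenticator {
    boolean authenticate(String user, String password);
    Set<String> rolesOf(String user);
}

class InMemoryAuthenticator implements Authenticator {
    private final Map<String, String> passwords = new HashMap<>();
    private final Map<String, Set<String>> roles = new HashMap<>();

    void addUser(String user, String password, String... userRoles) {
        passwords.put(user, password);
        roles.put(user, Set.of(userRoles));
    }

    @Override
    public boolean authenticate(String user, String password) {
        return password.equals(passwords.get(user));
    }

    @Override
    public Set<String> rolesOf(String user) {
        // Roles drive which service access points a user sees.
        return roles.getOrDefault(user, Set.of());
    }
}
```

Keeping the interface separate from its implementation is what allows the database-backed and LDAP-backed variants mentioned above to coexist behind one developer-facing API.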
With the completion of the first-generation instrumentation set on the Very Large Telescope, a total of eleven instruments are now provided at the VLT/VLTI for science operations. For each of them, ESO provides automatic data reduction facilities in the form of instrument pipelines developed in collaboration with the instrument consortia. The pipelines are deployed in different environments, at the observatory and at the ESO headquarters, for on-line assessment of observations, instrument and detector monitoring, as well as data quality control and product generation. A number of VLT pipelines are also distributed to the user community together with front-end applications for batch and interactive usage. The main application of the pipelines is to support the Quality Control process. However, ESO also aims to deliver pipelines that can generate science-ready products for a major fraction of the scientific needs of the users. This paper provides an overview of the current developments for the next generation of VLT/VLTI instruments and of the prototyping studies of new tools for science users.
The ESO Very Large Telescope Interferometer (VLTI) is the first general-user interferometer that offers near- and mid-infrared long-baseline interferometric observations in service mode as well as visitor mode to the whole astronomical community. Regular VLTI observations with the first scientific instrument, the mid-infrared instrument MIDI, started in ESO observing period P73, for observations between April and September 2004. The efficient use of the VLTI as a general-user facility implies the need for a well-defined operations scheme. The VLTI follows the established general operations scheme of the other VLT instruments. Here we present, from a user's point of view, the VLTI-specific aspects of this scheme, from the preparation of the proposal to the delivery of the data.
All ESO Science Operations teams operate on Observing Runs, loosely defined as blocks of observing time on a specific instrument. Observing Runs are submitted as part of an Observing Proposal and executed in Service or Visitor Mode. As an Observing Run progresses through its life-cycle, more and more information gets associated with it: referee reports, feasibility and technical evaluations, constraints, pre-observation data, science and calibration frames, etc. The Manager of Observing Runs project (Moor) will develop a system to collect operational information in a database, offer integrated access to information stored in several independent databases, and allow HTML-based navigation over the whole information set. Some Moor services are also offered as extensions to, or are complemented by, existing desktop applications.
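The accumulation of information over an Observing Run's life-cycle could be modelled along these lines. The types below (`ObservingRun`, `ObservingMode`) are illustrative assumptions, not Moor's actual schema.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model only, not Moor's actual schema.
enum ObservingMode { SERVICE, VISITOR }

class ObservingRun {
    final String instrument;
    final ObservingMode mode;
    // Records attached as the run progresses: referee reports,
    // feasibility evaluations, constraints, frames, etc.
    private final List<String> records = new ArrayList<>();

    ObservingRun(String instrument, ObservingMode mode) {
        this.instrument = instrument;
        this.mode = mode;
    }

    void attach(String record) { records.add(record); }

    /** Read-only view of everything associated with this run so far. */
    List<String> records() { return List.copyOf(records); }
}
```

The key idea is append-only growth: each life-cycle stage attaches records to the run rather than replacing earlier ones, which is what makes integrated navigation over the whole information set possible.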
The Data Flow System (DFS) for the ESO VLT provides a global approach to handle the flow of science related data in the VLT environment. It is a distributed system composed of a collection of components for preparation and scheduling of observations, archiving of data, pipeline data reduction and quality control. Although the first version of the system became operational in 1999 together with the first UT, additional developments were necessary to address new operational requirements originating from new and complex instruments which generate large amounts of data. This paper presents the hardware and software changes made to meet those challenges within the back-end infrastructure, including on-line and off-line archive facilities, parallel/distributed pipeline processing and improved association technologies.
Science interferometry instruments are now available at the Very Large Telescope for observations in service mode; the mid-infrared interferometry instrument, MIDI, started commissioning and was opened for observations in 2003, and the AMBER 3-beam instrument will follow in 2004. The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through to the acquisition, archiving, processing, and control of the astronomical data. In this paper we present the interferometry-specific components of the Data Flow System and the software tools which are used for the VLTI.
The end-to-end operation of the ESO VLT has now seen three full years of service to the ESO community. During that time, its capabilities have grown to four 8.2m Unit Telescopes with a complement of four optical and IR multimode instruments operated in a mixed Service Mode and Visitor Mode environment. The input and output of programs and data to the system over this period is summarized, together with the growth in operations manpower. We review the difficulties of working in a mixed operations and development environment and the ways in which the success of the end-to-end approach may be measured. Finally, we summarize the operational lessons learned and the challenges posed by future developments of VLT instruments and facilities such as interferometry and survey telescopes.
In this article we present the Data Flow System (DFS) for the Very Large Telescope Interferometer (VLTI). The Data Flow System is the VLT end-to-end software system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The Data Flow System is now in the process of installation and adaptation for the VLT Interferometer. The DFS was first installed for VLTI first fringes, utilising the siderostats together with the VINCI instrument, and is constantly being upgraded in phase with the VLTI commissioning. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes, as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. Observations of objects of scientific interest are already being carried out in the framework of the VLTI commissioning using the siderostats and the VLT Unit Telescopes, making it possible to test tools under realistic conditions. These tools comprise observation preparation, pipeline processing and further analysis systems. Work is in progress for the commissioning of other VLTI science instruments such as MIDI and AMBER, planned for the second half of 2002 and the first half of 2003, respectively. The DFS will be especially useful for service observing. This is expected to be an important mode of observation for the VLTI, which must cope with numerous observation constraints and the need for observations spread over extended periods of time.
The VLT Data Flow System (DFS) has been developed to maximize the scientific output of the ESO observatory facilities. From its original conception in the mid-1990s to the system now in production at Paranal, at La Silla, at the ESO HQ and externally at the home institutes of astronomers, extensive effort, iteration and retrofitting have been invested in the DFS to maintain a good level of performance and to keep it up to date. The result is a robust, efficient and reliable 'science support engine', without which it would be difficult, if not impossible, to operate the VLT as efficiently and with as much success as is the case today. Ultimately, the symbiosis between the VLT Control System (VCS) and the DFS, plus the hard work of dedicated development and operational staff, is what made the success of the VLT possible. Although the basic framework of the DFS can be considered 'completed' and the DFS has by now been in operation for approximately three years, the implementation of improvements and enhancements is an ongoing process, driven mostly by the appearance of new requirements. This article describes the origin of such new requirements and discusses the challenges faced in adapting the DFS to an ever-changing operational environment. Examples are given of recent new concepts designed and implemented to make the base part of the DFS more generic and flexible. The general adaptation of the DFS at the system level to reduce maintenance costs, increase robustness and reliability, and, to some extent, keep it in line with industry standards is also described. Finally, the general infrastructure needed to cope with a changing system is discussed in depth.
The Data Flow System is the VLT end-to-end system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing and control of the astronomical data. The VLT Data Flow System has been in place since the opening of the first VLT Unit Telescope in 1998. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2m telescopes, as well as from a set of initially three 1.8m Auxiliary Telescopes, using a Delay Line tunnel and four interferometry instruments. The Data Flow System is now in the process of installation and adaptation for the VLT Interferometer. Observation preparation for a multi-telescope system and the handling of large data volumes of several tens of gigabytes per night are among the new challenges posed by this system. This introductory paper presents the VLTI Data Flow System installed during the initial phase of VLTI commissioning. Observation preparation, data archival, and data pipeline processing are addressed.
On 1 April 1999, the first unit telescope (ANTU) of the ESO VLT began science operations. Two new instruments (FORS-1 for optical imaging and spectroscopy, and ISAAC for IR imaging and spectroscopy) were offered in a mix of 50% visitor mode and 50% service mode. A Phase-I and Phase-II proposal and observation preparation process was conducted from 1 October 1998 until the middle of March 1999, involving approximately 280 proposals. A total of 1768 Observation Blocks for 83 approved service mode programs were scheduled and executed between 1 April and 1 October 1999. The resultant raw science and calibration data were subjected to quality control in Garching and released to the ESO user community starting from 15 June 1999, along with pipeline-processed data products for a subset of instrument modes. The data flow loop for the first VLT telescope is closed. The current operational VLT data flow system and the developments for the remainder of the VLT will be presented in the light of the first year of operational experience.
In order to realize the optimal scientific return from the VLT, ESO has undertaken to develop an end-to-end data flow system from proposal entry to science archive. The VLT Data Flow System (DFS) is being designed and implemented by the ESO Data Management and Operations Division in collaboration with the VLT and Instrumentation Divisions. Tests of the DFS started in October 1996 on ESO's New Technology Telescope. Since then, prototypes of the Phase 2 Proposal Entry System, VLT Control System Interface, Data Pipelines, On-line Data Archive, Data Quality Control and Science Archive System have been tested. Several major DFS components have been run under operational conditions since February 1997. This paper describes the current status of the VLT DFS, the technological and operational challenges of such a system and the planning for VLT operations beginning in early 1999.
The data flow system (DFS) for the ESO VLT provides a global system approach to the flow of science related data in the VLT environment. It includes components for preparation and scheduling of observations, archiving of data, pipeline data reduction and quality control. Standardized data structures serve as carriers for the exchange of information units between the DFS subsystems and VLT users and operators. Prototypes of the system were installed and tested at the New Technology Telescope. They helped us to clarify the astronomical requirements and check the new concepts introduced to meet the ambitious goals of the VLT. The experience gained from these tests is discussed.
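The standardized data structures that carry information units between DFS subsystems can be pictured as a FITS-header-like keyword/value map. The following is a minimal sketch under that assumption; the class name and API are invented here for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a standardized information carrier exchanged
// between subsystems; the class name and API are invented.
class InfoCarrier {
    // Insertion order preserved, keys normalized to upper case,
    // in the spirit of FITS header keywords.
    private final Map<String, String> keywords = new LinkedHashMap<>();

    void set(String key, String value) { keywords.put(key.toUpperCase(), value); }
    String get(String key) { return keywords.get(key.toUpperCase()); }
    boolean has(String key) { return keywords.containsKey(key.toUpperCase()); }
}
```

Because every subsystem reads and writes the same carrier format, producers and consumers can evolve independently as long as the agreed keywords are preserved.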
The basic objective of modern observatories is to globally maximize their efficiency and to ensure high, constant and predictable data quality. These challenges can only be met if the scientific operation of such facilities, from the submission of observing programs to the archiving of all information, is carried out in a consistent and well-controlled manner. The size, complexity and long operational lifetime of such systems make it difficult to predict and control their behavior with the necessary accuracy. Moreover, they are subject to change and are cumbersome to maintain. We present in this paper an object-oriented end-to-end operations model which describes the flow of science data associated with the operation of the VLT. The analysis model helped us to get a clear understanding of the problem domain. In the design phase we were able to partition the system into subsystems, each of them allocated to a team for detailed design and implementation. Each of these subsystems is addressed in this paper. Prototypes will be implemented in the near future and tested on the New Technology Telescope (NTT). They will allow us to clarify the astronomical requirements and check the new operational concepts introduced to meet the ambitious goals of the VLT.