We report here on the software Hack Day organised at the 2014 SPIE conference on Astronomical Telescopes and Instrumentation in Montréal. The first Hack Day ever held at an SPIE event, it aimed to bring together developers to collaborate on innovative solutions to problems of their choice. Such events have proliferated in the technology community, providing opportunities to showcase, share and learn skills. In academic environments, these events are often also instrumental in building community beyond the limits of national borders, institutions and projects. We show examples of projects the participants worked on, and provide some lessons learned for future events.
Total Cost of Ownership (TCO) is a metric from management accounting that helps expose both the direct and indirect
costs of a business decision. However, TCO can sometimes be too simplistic for "make vs. buy" decisions (or even
choosing between competing design alternatives) when value and extensibility are more critical than total cost. A three-dimensional
value-based TCO, which was developed to clarify product decisions for an observatory prior to Final
Design Review (FDR), will be presented in this session. This value-based approach incorporates priority of
requirements, satisfiability of requirements, and cost, and can be easily applied in any environment.
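One plausible formulation of such a three-dimensional score can be sketched in a few lines. The function name, the weighting scheme, and all figures below are illustrative assumptions, not the paper's actual model:

```python
def value_score(requirements, total_cost):
    """Illustrative value-based TCO score (hypothetical formulation).

    requirements: list of (priority_weight, satisfiability) pairs, where
    priority_weight reflects how important a requirement is and
    satisfiability in [0, 1] is how well the alternative meets it.
    total_cost: estimated total cost of ownership of the alternative.
    Returns delivered value per unit cost, so higher is better.
    """
    delivered = sum(p * s for p, s in requirements)
    return delivered / total_cost

# Comparing two invented "make vs. buy" alternatives against the same
# three requirements (priorities 5, 3, 1):
reqs_buy = [(5, 0.9), (3, 0.4), (1, 1.0)]    # off-the-shelf product
reqs_make = [(5, 0.8), (3, 0.9), (1, 0.6)]   # in-house development
score_buy = value_score(reqs_buy, 200)       # cheaper, less extensible
score_make = value_score(reqs_make, 350)
```

Under this sketch the cheaper alternative can still win even though it satisfies fewer requirements, which is exactly the trade-off a pure cost comparison hides.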
In 1987, the U.S. Congress created the Malcolm Baldrige National Quality Award (MBNQA), a program that rewards
businesses and nonprofits that demonstrate effective, efficient operations. Underlying the MBNQA are criteria to help
organizations integrate seven key areas of operations: leadership, strategic planning, customer focus,
information management, workforce planning, process management, and results. Independent of the award process, the
Baldrige Criteria can be used to guide strategic and operations planning. This presentation includes an example of how
the Baldrige Criteria were used to quickly develop a Workforce Management Plan for the National Radio Astronomy
Observatory (NRAO), responding to funding agency requests.
The NRAO faced performance and usability issues after releasing a single-search-box ("Google-like") web application to
query data across all NRAO telescope archives. Queries joining several relations across multiple databases proved
very expensive in compute resources. An investigation into better platforms led to Solr and Blacklight, a solution
stack that allows in-house development to focus on in-house problems. Solr is an Apache project built on Lucene that
provides a modern search server with a rich feature set and impressive performance. Blacklight is a web user
interface (UI) for Solr, developed primarily by the libraries at the University of Virginia and Stanford University. Though
Blacklight targets libraries, it is highly adaptable to many types of search applications that benefit from the faceted
searching and browsing, minimal configuration, and flexible query parsing of Solr and Lucene. The result: one highly
reused codebase provides millisecond response times and a flexible UI. Beyond observational data, NRAO is
rolling out Solr and Blacklight across domains of library databases, telescope proposals, and more -- in addition to
telescope data products, where integration with the Virtual Observatory is ongoing.
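As a sketch of the kind of faceted query such a stack serves, the snippet below builds a Solr `/select` request URL. The parameter names (`q`, `rows`, `wt`, `facet`, `facet.field`) are standard Solr request parameters; the endpoint, core name, and field names are hypothetical:

```python
from urllib.parse import urlencode

def solr_select_url(base_url, query, facet_fields=(), rows=10):
    """Build a Solr /select URL with faceting enabled.

    base_url: the collection endpoint (host and core name here are
    hypothetical). Parameter names follow the standard Solr request API.
    """
    params = [("q", query), ("rows", rows), ("wt", "json")]
    if facet_fields:
        params.append(("facet", "true"))
        params.extend(("facet.field", f) for f in facet_fields)
    return base_url.rstrip("/") + "/select?" + urlencode(params)

# Hypothetical archive core and field names:
url = solr_select_url("http://localhost:8983/solr/archive",
                      "source_name:3C286",
                      facet_fields=("telescope", "band"))
```

The faceted fields come back in the response as counts per value, which is what drives the browse-and-narrow UI that Blacklight renders.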
In 2006 NRAO launched a formal organization, the Office of End to End Operations (OEO), to broaden access to its
instruments (VLA/EVLA, VLBA, GBT and ALMA) in the most cost-effective ways possible. The VLA, VLBA and
GBT are mature instruments, and the EVLA and ALMA are currently under construction, which presents unique
challenges for integrating software across the Observatory. This article 1) provides a survey of the new developments
over the past year, and those planned for the next year, 2) describes the business model used to deliver many of these
services, and 3) discusses the management models being applied to ensure continuous innovation in operations, while
preserving the flexibility and autonomy of telescope software development groups.
The Very Large Array (VLA) radio telescope, operated by the National Radio Astronomy Observatory (NRAO),
has been collecting interferometric data (visibilities) since the late 1970s. Converting visibility data into images
requires careful calibration of the data, fast Fourier transform processing, and deconvolution methods. To make
VLA data accessible to the astronomical community, the NRAO has undertaken the NRAO VLA Archive Survey
(NVAS). The goal of NVAS is to produce images, calibrated data, and diagnostics from the visibility data archive
and make these data products available to all astronomers. Survey results are obtained from a software pipeline,
the details of which are described here.
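The core inversion step, going from sampled visibilities to a (dirty) image before deconvolution, can be sketched as a direct Fourier sum. This toy version deliberately omits calibration, weighting, gridding, and the FFT the real pipeline uses; all baselines and values below are invented for illustration:

```python
import cmath

def dirty_image(vis, uv, l_vals, m_vals):
    """Toy direct Fourier inversion of visibilities.

    vis: complex visibilities; uv: matching (u, v) baseline coordinates
    in wavelengths; l_vals/m_vals: direction cosines to sample. A real
    pipeline grids the data, applies an FFT, and then deconvolves.
    """
    n = len(vis)
    return [[sum(V * cmath.exp(2j * cmath.pi * (u * l + v * m))
                 for V, (u, v) in zip(vis, uv)).real / n
             for m in m_vals]
            for l in l_vals]

# A point source at the phase centre gives unit visibilities on every
# baseline, so the image should peak at (l, m) = (0, 0):
uv = [(100.0, 0.0), (0.0, 100.0), (50.0, 50.0), (-50.0, 50.0)]
vis = [1 + 0j] * len(uv)
grid = [-0.005, 0.0, 0.005]
img = dirty_image(vis, uv, grid, grid)
```

The peak at the centre pixel is the dirty-beam response to the source; deconvolution methods such as CLEAN then remove the sidelobe structure that the incomplete (u, v) sampling introduces.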
The emergence of the Virtual Observatory as a new model for doing science means that the value of a facility instrument is no longer limited to its own lifetime. Instead, value becomes the net effect of optimizing observational throughput, the quality and quantity of data in the archive, and the applicability of the data that can survive and be used for future research even after a telescope ceases operations. Valuation aims to answer two questions which are especially important to funding agencies: what am I getting for my investment, and why should I care? Policy establishes guidelines for achieving the goals that lead to increased value. The relative roles of valuation and policy in inventory control and archiving strategies, adoption of standards, and developing maintainable software systems to meet these future goals are examined.
Since 2003, the monitor and control software systems for the Robert C. Byrd Green Bank Telescope (GBT) have been
substantially redesigned to make the telescope easier to use. The result is the release of the Astronomer's Integrated
Desktop (Astrid), an observation management platform used to create and submit scheduling blocks, monitor their
progress on the telescope, and view a real-time, quick-look data display. Using Astrid, the astronomer launches one
application and has access to all of the software, documentation, and feedback facilities that are required to conduct an
interactive observing session. These systems together provide a common look and feel for GBT software applications,
enable offline observation preparation, and facilitate dynamic scheduling and remote observing.
To improve operational availability (the proportion of time that a telescope is able to accomplish what a visiting observer
wants at the time the observation is scheduled), response time to faults must be minimized. One way this can be
accomplished is by characterizing the relationships and interdependencies between components in a control system,
developing algorithms to identify the root cause of a problem, and capturing expert knowledge of a system to simplify
the process of troubleshooting. Results from a prototype development are explained, along with deployment issues.
Implications for the future, such as effective knowledge representation and management, and learning processes which
integrate autonomous and interactive components, are discussed.
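One simple way to capture such component interdependencies is a dependency graph, in which a faulted component whose own dependencies are all healthy is flagged as a likely root cause. The propagation model and the component names below are hypothetical, not the prototype's actual algorithm:

```python
def root_causes(depends_on, faulted):
    """Return faulted components none of whose dependencies are also
    faulted: the likely root causes under a simple fault-propagation
    model, where a failure in one component cascades to everything
    that depends on it.

    depends_on: maps each component to the set of components it needs.
    faulted: set of components currently reporting errors.
    """
    return {c for c in faulted
            if not (depends_on.get(c, set()) & faulted)}

# Hypothetical control-system fragment: the archiver needs the backend,
# which in turn needs the local oscillator (LO).
depends_on = {
    "archiver": {"backend"},
    "backend": {"lo"},
    "lo": set(),
}
faults = {"archiver", "backend", "lo"}
```

When all three components report errors, only the LO is flagged, directing the troubleshooter past the cascaded symptoms to the underlying fault.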
This study describes the goals, foundational work, and early returns associated with establishing a pilot quality cost program at the Robert C. Byrd Green Bank Telescope (GBT). Quality costs provide a means to communicate the results of process improvement efforts in the universal language of project management: money. This scheme stratifies prevention, appraisal, internal failure and external failure costs, and seeks to quantify and compare the up-front investment in planning and risk management versus the cost of rework. An activity-based Cost of Quality (CoQ) model was blended with the Cost of Software Quality (CoSQ) model that has been successfully deployed at Raytheon Electronic Systems (RES) for this pilot program, analyzing the efforts of the GBT Software Development Division. Using this model, questions that can now be answered include: What is an appropriate length for our development cycle? Are some observing modes more reliable than others? Are we testing too much, or not enough? How good is our software quality, not in terms of defects reported and fixed, but in terms of its impact on the user? The ultimate goal is to provide a higher quality of service to customers of the telescope.
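The cost stratification described above can be tallied in a few lines. The dollar figures are invented for illustration; only the four-category breakdown comes from the CoQ model itself:

```python
def cost_of_quality(prevention, appraisal, internal_failure, external_failure):
    """Classic Cost of Quality breakdown (all amounts in one currency).

    Conformance costs are the up-front investment (prevention plus
    appraisal); nonconformance costs are the cost of rework (internal
    plus external failure).
    """
    conformance = prevention + appraisal
    nonconformance = internal_failure + external_failure
    return {
        "conformance": conformance,
        "nonconformance": nonconformance,
        "total": conformance + nonconformance,
    }

# Invented figures for one development cycle:
coq = cost_of_quality(prevention=12_000, appraisal=8_000,
                      internal_failure=5_000, external_failure=3_000)
```

Tracking the conformance/nonconformance ratio across cycles is what lets questions like "are we testing too much, or not enough?" be answered in monetary terms.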
The enterprise architecture presents a view of how software utilities and applications are related to one another under unifying rules and principles of development. By constructing an enterprise architecture, an organization will be able to manage the components of its systems within a solid conceptual framework. This largely prevents duplication of effort, focuses the organization on its core technical competencies, and ultimately makes software more maintainable. In the beginning of 2003, several prominent challenges faced software development at the GBT. The telescope was not easily configurable, and observing often presented a challenge, particularly to new users. High priority projects required new experimental developments on short time scales. Migration paths were required for applications which had proven difficult to maintain. In order to solve these challenges, an enterprise architecture was created, consisting of five layers: 1) the telescope control system, and the raw data produced during an observation, 2) Low-level Application Programming Interfaces (APIs) in C++, for managing interactions with the telescope control system and its data, 3) High-Level APIs in Python, which can be used by astronomers or software developers to create custom applications, 4) Application Components in Python, which can be either standalone applications or plug-in modules to applications, and 5) Application Management Systems in Python, which package application components for use by a particular user group (astronomers, engineers or operators) in terms of resource configurations. This presentation describes how these layers combine to make the GBT easier to use, while concurrently making the software easier to develop and maintain.
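The layering can be illustrated with a minimal sketch. The class and function names below are hypothetical stand-ins, not the actual GBT APIs:

```python
class ControlSystem:
    """Stand-in for a layer-2 low-level API wrapping the telescope
    control system (the real layer is C++; this mock just records the
    parameter settings it is asked to make)."""
    def __init__(self):
        self.log = []

    def set_parameter(self, name, value):
        self.log.append((name, value))

def configure_continuum(ctrl, receiver, ra, dec):
    """Layer-3 style high-level helper in Python: one call issues the
    sequence of low-level settings an observer would otherwise have to
    script by hand."""
    ctrl.set_parameter("receiver", receiver)
    ctrl.set_parameter("ra", ra)
    ctrl.set_parameter("dec", dec)
    return ctrl

# A layer-4 application component would compose helpers like this one
# into a standalone tool or a plug-in module:
ctrl = configure_continuum(ControlSystem(), "Rcvr1_2", 12.5, 30.0)
```

The point of the layering is visible even in this toy: astronomers script against the one-call helper, while maintainers can rework the low-level layer without touching user-facing code.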
The software development process in Green Bank is managed in six-week development cycles, where two cycles fall within one quarter. Each cycle, a Plan of Record is devised which outlines the team's commitments, deliverables, technical leads and scientific sponsors. To be productive and efficient, the team must not only be able to track its progress towards meeting commitments, but also to communicate and circulate the information that will help it meet its goals effectively. In the early summer of 2003, the Software Development Division installed a wiki web site using the TWiki product to improve the effectiveness of the team. Wiki sites contain web pages that are maintainable using a web interface by anyone who becomes a registered user of the site. Because the site naturally supports group involvement, the Plan of Record on the wiki now serves as the central dashboard for project tracking each development cycle. As an example of how the wiki improves productivity, software documentation is now tracked as evidence of the software deliverable. Written status reports are thus not required when the Plan of Record and associated wiki pages are kept up to date. The wiki approach has been quite successful in Green Bank for document management as well as software development management, and has rapidly extended beyond the bounds of the software development group for information management.