Review on methods for wavefront reconstruction from pyramid wavefront sensor data
Abstract

Pyramid wavefront sensors are planned to be a part of many instruments that are currently under development for the extremely large telescopes (ELT). The unprecedented scales of the upcoming ELT-era instruments are inevitably connected with serious challenges for wavefront reconstruction and control algorithms. Apart from the huge number of correcting elements to be controlled in real-time, real-life features such as the segmentation of the telescope pupil, the low wind effect, the nonlinearity of the pyramid sensor, and the noncommon path aberrations will have a significantly larger impact on the imaging quality in the ELT framework than they ever had before. We summarize various kinds of wavefront reconstruction algorithms for the pyramid wavefront sensor. Based on several forward models, different algorithms were developed in the last decades for linear and nonlinear wavefront correction. The core ideas of the algorithms are presented, and a detailed comparison of the presented methods with respect to underlying pyramid sensor models, computational complexities, and reconstruction qualities is given. In addition, we review the existing and possible solutions for the above-named real-life phenomena. At the same time, directions for further investigations are sketched.

1.

Introduction

Recently, the popularity of pyramid wavefront sensors (PWFS) has grown in the astronomical adaptive optics (AO) community due to their advantages compared to other types of wavefront sensors. The device is already utilized on existing telescope systems, such as the Large Binocular Telescope (LBT),13 and is going to be used for wavefront sensing in many of the instruments on the future extremely large telescopes (ELTs).

Consequently, the development of wavefront reconstruction methods from pyramid sensor data is also a topic of high interest. The goal of this review paper is to provide a comprehensive overview of the research in the field of wavefront reconstruction from PWFS data.

In general, we distinguish between calibration- and model-based wavefront reconstruction algorithms. Standard calibration-based approaches rely on the registration of the interaction matrix (IM) of the system, which is the wavefront sensor (WFS) response to poking the actuators of the deformable mirror (DM). This approach couples the wavefront reconstruction and the DM control steps. The actuator commands are obtained by applying the (generalized) inverse of the IM to the vector of sensor data. The procedure is often called matrix-vector multiplication (MVM) by the community. In the MVM approach using a synthetic IM, a physics-based mathematical model of the wavefront sensor is employed.

Other model-based approaches are matrix-free and therefore can have reduced computational complexities compared to MVM approaches. Among them are, for instance, Fourier-domain methods, approaches based on the inversion of the Hilbert transform, or applications of mathematical algorithms from the field of inverse problems. Note that any linear model-based reconstructor can be formulated as an MVM as well, which is, however, not preferable from the computational point of view.

The standard reconstruction approach is to derive an IM and compute its (generalized) inverse, which is then applied to the sensor data. However, more recently, new ideas based on a thorough mathematical analysis of different pyramid sensor models have been developed. They frequently use simplifications of the full pyramid sensor model, e.g., a transmission mask model rather than the phase mask model. Subsequently, the pyramid model may be further simplified using the roof sensor approximation as well as suitable linearizations. Although these simplifications might be considered severe, numerical validation shows that these reconstruction methods deliver accurate wavefront reconstructions.

One of the distinguishing criteria between the various algorithms is their applicability to a pyramid sensor with or without modulation. While most of the algorithms are applicable with the same computational load to the PWFS both with and without modulations, several of them are not. For instance, an application of the algorithms that are based on the inversion of the Hilbert transform is justified for the nonmodulated sensor data only. Some other algorithms have an increased computational demand for the modulated pyramid sensor or display difficulties in closing the loop, in particular in the nonmodulated scenario.

The PWFS is known to be a nonlinear device. If the wavefront aberrations are small, it behaves (almost) linearly, but its range of linearity and its sensitivity are inversely related. The most common way to overcome the nonlinearity of the sensor is to increase the linearity regime of the pyramid sensor by applying modulation, at the cost of a reduced sensitivity.47 However, as was recently shown for telescope pupils fragmented by thick structures of the mirror support, the usage of very low to no modulation is highly desirable. These sensor regimes provide the most sensitive measurements of piston jumps between the pupil segments, which are crucially important for accurate wavefront reconstruction of segment piston modes. Due to its enhanced sensitivity, the interest in applications of the nonmodulated PWFS is growing, which requires the development of nonlinear wavefront reconstruction algorithms expected to yield high-quality improvements for this type of sensor.

Moreover, a large part of the sensor's nonlinearity is generated by the sensing environment itself. For instance, a pyramid sensor working in the visible and in bad atmospheric conditions is highly nonlinear with any amount of modulation. In this setting, the high spatial frequency residuals that cannot be compensated are so large that they reduce the sensitivity of the pyramid sensor, a phenomenon known for years as the optical gain of the sensor.8,9 For astronomical observations with the PWFS in its nonlinear regime, the development of nonlinear methods able to compensate automatically for the reduced optical gain of the sensor is therefore of high importance.

Taking into account linear and nonlinear wavefront reconstruction methods for pyramid sensors, we provide in this paper a comparison of all relevant approaches with respect to underlying models, quality performance, and computational complexity. The performance of the methods is demonstrated in the context of an extreme adaptive optics (XAO) system and a single conjugate adaptive optics (SCAO) system, both planned for the ELT being built by the European Southern Observatory (ESO).

The paper is structured as follows. Section 2 provides an overview on the PWFS, describing its physical principles as well as its main characteristics.

Section 3 contains a review of various modeling approaches for the pyramid sensor. Geometrical and Fourier-optics models including amplitude (or transmission) mask and phase mask approaches are described. Section 4 presents explicit analytical forward models and approximations, both in the spatial and Fourier domains (FD).

A review of existing methods for wavefront reconstruction from PWFS data is given in Sec. 5. A comparison of the algorithms with respect to the underlying models, quality performance, and computational complexities is provided in Sec. 6.

Finally, Sec. 7 focuses on the studies of the algorithms’ performance under real-life features such as the pupil fragmentation, low wind effect (LWE), the sensor’s optical gain, and the presence of noncommon path aberrations (NCPA). For each of the mentioned special features, the existing solutions and techniques are reviewed as well as possible directions for further research are sketched.

2.

Pyramid Wavefront Sensor

Section 2.1 describes the physics of the image formation and the mapping of wavefront aberrations to intensity measurements of the pyramid sensor. Section 2.2 sketches the areas of application and the extent to which the pyramid sensor has spread.

2.1.

Physical Description of Pyramid Sensor

The principle of the pyramid sensor operation is based on a generalization of the Foucault knife-edge test. Figure 1 provides a scheme of the pyramid WFS. First, the incoming light is focused by a lens onto the prism apex. Let $\Phi:\mathbb{R}^2\to\mathbb{R}$ denote a phase screen (in radians) coming into the telescope. The complex amplitude $\psi_{\mathrm{aper}}:\mathbb{R}^2\to\mathbb{C}$ corresponding to this phase screen $\Phi$ reads as

Eq. (1)

$\psi_{\mathrm{aper}}(x,y)=M(x,y)\cdot\exp[i\,\Phi(x,y)],\qquad (x,y)\in\mathbb{R}^2.$

Fig. 1

Scheme of the optical setup of a pyramid WFS. The circular modulation path is shown in the dashed line.


Here, $M:\mathbb{R}^2\to\mathbb{R}$ denotes the aperture mask defined as

Eq. (2)

$M(x,y)=\begin{cases}1, & (x,y)\in\Omega,\\ 0, & \text{otherwise},\end{cases}$
where $\Omega$ denotes the telescope aperture with a circular central obstruction.

Next, the main component of the device, a four-sided glass pyramidal prism, is placed in the Fourier plane of the lens. The prism is described by its phase mask Π shown in Fig. 2. The action of this phase mask on the focused light is described by the so-called optical transfer function (OTF)

Eq. (3)

$\mathrm{OTF}_{\mathrm{pyr}}(\xi,\eta)=\exp[i\,\Pi(\xi,\eta)],$
which introduces certain phase changes according to the prism design.

Fig. 2

Phase mask Π (in arbitrary units) corresponding to a four-sided pyramid sensor.


Finally, another lens forms an intensity image on the detector. The complex amplitude $\psi_{\mathrm{det}}$ coming to the detector plane is a convolution of the incoming complex amplitude $\psi_{\mathrm{aper}}$ with the point spread function (PSF) of the glass pyramid

Eq. (4)

$\psi_{\mathrm{det}}(x,y)=\frac{1}{2\pi}\left(\psi_{\mathrm{aper}}*\mathrm{PSF}_{\mathrm{pyr}}\right)(x,y).$

The pyramid PSF is, in its turn, defined as the inverse Fourier transform (IFT) of its OTF

Eq. (5)

$\mathrm{PSF}_{\mathrm{pyr}}(x,y)=\mathcal{F}^{-1}\{\mathrm{OTF}_{\mathrm{pyr}}(\cdot,\cdot)\}(x,y).$

The intensity I(x,y) in the detector plane is then defined as

Eq. (6)

$I(x,y)=\psi_{\mathrm{det}}(x,y)\cdot\overline{\psi_{\mathrm{det}}(x,y)}.$
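To make the chain of Eqs. (1)-(6) concrete, the following minimal numerical sketch propagates a toy phase screen through a four-facet phase mask using FFTs. The grid size, pupil radii, toy phase, and the tilt strength `facet_slope` are illustrative assumptions, not values from the text, and no modulation, noise, or detector sampling is modeled.

```python
# Minimal numerical sketch of Eqs. (1)-(6): Fraunhofer propagation of a toy
# phase screen through a four-facet phase mask (all numbers are illustrative).
import numpy as np

N = 256
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

# Eq. (2): annular aperture mask M with a circular central obstruction
M = ((R2 <= 0.4**2) & (R2 >= 0.12**2)).astype(float)

Phi = 0.3 * np.sin(6 * np.pi * X)            # toy incoming phase screen (rad)
psi_aper = M * np.exp(1j * Phi)              # Eq. (1)

# First lens: focal (Fourier) plane, centered for convenience
psi_focal = np.fft.fftshift(np.fft.fft2(psi_aper))

# Pyramid phase mask Pi (cf. Fig. 2): each frequency quadrant gets a linear
# tilt; `facet_slope` sets how far apart the four pupil images land.
xi = np.fft.fftshift(np.fft.fftfreq(N))      # cycles per sample
XI, ETA = np.meshgrid(xi, xi)
facet_slope = 2.0 * np.pi * N / 4.0          # shift of ~N/4 samples per beam
Pi = facet_slope * (np.abs(XI) + np.abs(ETA))
OTF_pyr = np.exp(1j * Pi)                    # Eq. (3)

# Second lens back to the detector plane; Eqs. (4)-(5) are applied in the FD
psi_det = np.fft.ifft2(np.fft.ifftshift(psi_focal * OTF_pyr))
I = np.abs(psi_det) ** 2                     # detector intensity, Eq. (6)
```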

In fact, the four facets of the pyramid split the incoming light into four beams, which propagate in slightly different directions. Most of the light falling onto the detector is concentrated in the four pupil images denoted as $I_{ij}$, $i,j\in\{0,1\}$. Note that by varying the parameters of the second lens, one can adjust the spatial sampling of the pupil subimages. Inside each of the four subimages $I_{ij}$, the intensity is distributed slightly differently due to the different optical paths of the four beams. This inequality in the intensity distribution serves as the starting point for restoring the wavefront perturbations. According to the standard data definition, reminiscent of the quad cell, the two measurement sets $s_x$ and $s_y$ are obtained from the four intensity patterns as

Eq. (7)

$s_x(x,y)=\frac{[I_{01}(x,y)+I_{00}(x,y)]-[I_{11}(x,y)+I_{10}(x,y)]}{I_0},\qquad s_y(x,y)=\frac{[I_{01}(x,y)+I_{11}(x,y)]-[I_{00}(x,y)+I_{10}(x,y)]}{I_0},$
where $I_0$ is the average intensity per subaperture

Eq. (8)

$I_0(x,y)=\frac{I_{00}(x,y)+I_{01}(x,y)+I_{10}(x,y)+I_{11}(x,y)}{4}.$
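A direct translation of Eqs. (7) and (8) could look as follows; it assumes the four pupil subimages have already been cropped from the detector frame into equally sized arrays (the names I00, I01, I10, I11 and the small guard eps are hypothetical).

```python
# Sketch of the quad-cell-like data definition, Eqs. (7) and (8).
import numpy as np

def pyramid_signals(I00, I01, I10, I11, eps=1e-12):
    """Return (sx, sy) normalized by the mean subaperture intensity I0."""
    I0 = (I00 + I01 + I10 + I11) / 4.0              # Eq. (8)
    sx = ((I01 + I00) - (I11 + I10)) / (I0 + eps)   # Eq. (7)
    sy = ((I01 + I11) - (I00 + I10)) / (I0 + eps)
    return sx, sy
```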

Explicit forward models relating the incoming wavefront $\Phi$ to the sensor data $s_x$, $s_y$ are considered in Sec. 3. Therein, various existing models, exact and approximate, are given together with a historical context that illustrates how the understanding of the relation between the pyramid sensor data and the incoming wavefront has developed.

A dynamic circular modulation of the incoming beam allows one to increase the linear and dynamic range of the pyramid sensor7 at the cost of a reduced sensitivity. The modulation can be accomplished in several ways: by oscillating the pyramid itself,10 with a steering mirror,11,12 or using a static diffusive optical element.12,13 The circular modulation path of the focused beam on the pyramid apex is shown as a dashed circle in Fig. 1. Moreover, it was noticed that the uncorrected high-frequency aberrations can act as a "natural" modulation for the lower modes in the corrected field.14

2.2.

Pyramid Sensor Features, Applications, and Modern Challenges

Since the PWFS was introduced by Ragazzoni in the 1990s,10 it has gradually gained more and more attention from the scientific community. Multiple theoretical studies, numerical simulations,13,15–19 and laboratory investigations in optical test benches20–25 have demonstrated numerous advantages of the PWFS over the standard Shack–Hartmann wavefront sensor (SH-WFS). Among those are the ability to achieve an enhanced and adjustable sensitivity, improved signal-to-noise ratio (reduced noise propagation), an improved robustness to spatial aliasing, and an adjustable pupil sampling. The mentioned advantages of the pyramid sensor lead to significant improvements in the closed-loop performance of AO systems compared to systems equipped with the SH-WFS. For instance, in astronomical AO systems, the PWFS has been reported to provide higher Strehl ratios as well as higher guide star limiting magnitudes,26 which results in an increased sky coverage19 and lower residual speckle levels for high contrast imaging. Recently, it has been shown that all the advantages of the pyramid sensor are kept even when significant levels of NCPA have to be sensed and corrected by an AO system.19,27

In parallel to theoretical and laboratory studies, the advantages of the PWFS over the SH-WFS were successfully demonstrated on sky.28–31 The PWFS was integrated into AO systems such as SCExAO at the Subaru Telescope, MagAO at the Magellan Telescope, Adaptive Optics module of Telescopio Nazionale Galileo (AdOpt@TNG), INO Demonstrator at the Mont Megantic Telescope, and PYRAMIR at the Calar Alto Telescope. Nowadays, the pyramid sensor is at the heart of high order, high contrast, and high precision wavefront sensing on the LBT,13,32 where it has provided outstanding operational results and set the new standard for the quality of AO correction to be achieved with ground-based telescopes.

Currently, the next-generation instrumentation on the so-called Extremely Large Telescopes (ELTs) with primary mirrors of 25 to 40 m in diameter is under development. Examples of the new era ground-based telescopes under construction are the ESO’s ELT, the Giant Magellan Telescope (GMT), and the Thirty Meter Telescope (TMT). The exceptional results achieved with the PWFS on the current-generation telescopes have led to the decision to include the pyramid sensor in the baseline for many instruments on the ELTs.

On the ESO’s ELT, the pyramid sensor is planned in the Natural Guide Star (NGS) SCAO modes on the three first light instruments MICADO,33 HARMONI,34,35 and METIS,36 in the NGS XAO mode on the planet imager EPICS,37 and in the postfocal laser tomography adaptive optics (LTAO) module ATLAS.38

On the GMT, the PWFS is assumed to be used in the NGS SCAO mode39 and as truth sensor in the LTAO system.40 Here, the pyramid sensor will measure the wavefront errors coming from both the atmospheric turbulence and telescope aberrations, including the differential segment piston errors and NCPA.39,40

On the TMT, the PWFS is planned in the NGS SCAO mode19 and as truth sensor in multiconjugate AO (MCAO) mode on the first light AO system NFIRAOS,41 as well as in the XAO mode for the Planet Formation Instrument.42

While currently PWFS are mainly applied or planned to be applied with NGS, theoretical analysis, numerical simulations, and on-sky demonstrations of its behavior with Laser Guide Stars or extended sources provided very promising results,19,43–47 indicating advantageous performance compared to the SH-WFS also in these settings.46

On the ELTs under design, the primary mirrors are so large that they need to be segmented, which poses the key challenge of cophasing the segments in order to produce a single optical surface. Here is another very valuable feature of the pyramid sensor: its ability to sense differential pistons of a segmented mirror, which has been successfully demonstrated in the laboratory,48 supported by numerical simulations,49 and validated on sky under seeing-limited conditions.39 Moreover, it was found that among the available wavefront sensor types, the PWFS takes the most sensitive measurements of differential pistons on the segments.50

Apart from astronomical applications, the PWFS is also applied in adaptive loops in ophthalmology51–55 and microscopy.56,57

The pyramid sensor is known to be a nonlinear device (see Proposition 1 in Sec. 4). The nonlinearity manifests itself, in particular, in a sensor response that is reduced under certain conditions compared to the theoretically predicted one, which is known in the pyramid community as the optical gain of the sensor. It has been acknowledged that, while the optical gain is not an issue for sensing in the near-infrared (NIR) under nominal seeing conditions, it becomes very pronounced when sensing at shorter wavelengths and under worse seeing conditions. Irrespective of the sensing wavelength, the optical gain will influence the reconstruction quality achievable in the presence of spiders, the LWE, or NCPA since these push the pyramid sensor toward its nonlinear regime. Clearly, the ultimate performance of ELT instruments will crucially depend on the possibilities to cope with the sensor nonlinearity in an appropriate and reliable way.

One of the possibilities to overcome the challenge of sensor nonlinearity consists of the application of nonlinear wavefront reconstruction algorithms described in Secs. 5.6 and 5.7. First investigation steps in this direction have been taken and need to be intensified.

Other options that involve a recovery of the sensor’s optical gain followed by a corresponding tuning of the available linear methods or solutions based on the usage of additional optical components are described in Sec. 7.3.

3.

Modeling Approaches

In this section, the existing approaches to modeling pyramid-type wavefront sensor data are reviewed. The description of the various modeling approaches is given in a historical context, which shows how the understanding of the sensor has developed. This perspective also provides an outlook on possible further directions for improvement in the forward modeling of pyramid-type sensors. We restrict our considerations to the sensor data $s_x$ only. For the $s_y$ data, the expressions are symmetrical.

3.1.

Geometrical Approach

The first mathematical description of the pyramid sensor signal (providing an explicit relation between the wavefront and the sensor data) was derived in the geometrical optics framework for the modulated sensor case. Let α denote a modulation parameter (also called angle, amplitude, or radius in the literature)

Eq. (9)

$\alpha=b\lambda/D,$
where $b$ is a positive integer, $\lambda$ is the sensing wavelength, and $D$ is the telescope diameter. In Refs. 7 and 58, it was shown that the signal $s_x$ of the pyramid WFS modulated with angle $\alpha$ is proportional to the slope of the incoming wavefront $\Phi$ as

Eq. (10)

$s_x=\frac{\lambda}{\alpha\pi^2}\,\frac{d\Phi}{dx}.$

Later, this model was acknowledged as approximate and valid for low-order sensors with large modulations.7

3.2.

Diffractive Transmission Mask Approach

A bit later, within the more accurate diffraction theory framework, analytical models for the pyramid sensor with linear7 and circular11 modulation paths were derived using the roof sensor approximation, which assumes independence of the sensor response in the x and y directions. Moreover, these analytical models used the so-called transmission mask approximation, which assumes that the pyramidal prism is sufficiently well described as a transmission-only mask object.59 In this case, the OTF of the pyramid is approximated as a sum of four independent amplitude-only Fourier-domain filters:59–62

Eq. (11)

$\mathrm{OTF}_{\mathrm{pyr}}(\xi,\eta)=\sum_{m=0}^{1}\sum_{n=0}^{1}T_{mn}(\xi,\eta),$
where the filters $T_{mn}$ are the two-dimensional (2-D) Heaviside functions

Eq. (12)

$T_{mn}(\xi,\eta)=H_{2d}\!\left[(-1)^m\,\xi,(-1)^n\,\eta\right]=\begin{cases}1, & (-1)^m\,\xi>0,\ (-1)^n\,\eta>0,\\ 0, & \text{otherwise}.\end{cases}$

As a result, four independent beams of light fall onto the detector, forming four intensity patterns $I_{mn}$, $m,n\in\{0,1\}$

Eq. (13)

$I(x,y)\approx\sum_{m=0}^{1}\sum_{n=0}^{1}I_{mn}(x,y),$
with

Eq. (14)

$I_{mn}(x,y)=\psi_{\mathrm{det}}^{mn}(x,y)\cdot\overline{\psi_{\mathrm{det}}^{mn}(x,y)},$

Eq. (15)

$\psi_{\mathrm{det}}^{mn}(x,y)=\frac{1}{2\pi}\left[\psi_{\mathrm{aper}}(\cdot,\cdot)*\mathcal{F}^{-1}\{T_{mn}\}(\cdot,\cdot)\right](x,y).$

As such, the transmission mask approach does not take interference effects between the four intensity patterns on the detector into account. This assumption is valid if the four subbeams leaving the pyramidal prism reach the detector quadrants far enough from each other.
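The following sketch illustrates the transmission mask approximation of Eqs. (11)-(15): each sub-beam is obtained by masking one quadrant of the frequency plane. Note that, since the amplitude masks carry no tilt, the four intensity patterns computed here all land on the same grid and are not displaced to separate detector quadrants as in the real device; the grid handling and names are illustrative.

```python
# Sketch of the transmission mask model, Eqs. (11)-(15): the pyramid OTF is
# replaced by four Heaviside-type amplitude filters T_mn, each producing an
# independent sub-beam.
import numpy as np

def transmission_mask_subbeams(psi_aper):
    """Return the four intensity patterns I_mn of Eq. (14)."""
    N = psi_aper.shape[0]
    xi = np.fft.fftfreq(N)
    XI, ETA = np.meshgrid(xi, xi)
    psi_focal = np.fft.fft2(psi_aper)

    intensities = {}
    for m in (0, 1):
        for n in (0, 1):
            # Eq. (12): 2-D Heaviside filter selecting one frequency quadrant
            T_mn = (((-1) ** m * XI > 0) & ((-1) ** n * ETA > 0)).astype(float)
            # Eq. (15): sub-beam on the detector (convolution done in the FD)
            psi_det_mn = np.fft.ifft2(psi_focal * T_mn)
            intensities[(m, n)] = np.abs(psi_det_mn) ** 2   # Eq. (14)
    return intensities
```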

Together with the first diffraction models, the duality of the pyramid sensor response with respect to the incoming wavefront was discovered in the FD.7 As shown in Fig. 3, for the modulated pyramid sensor, its response to low frequencies in the wavefront is mathematically described by a linear filter in the FD, which corresponds to slope measurements in the spatial domain. For higher frequencies, the sensor response is represented by a constant, which in the spatial domain corresponds to measuring the Hilbert transform of the wavefront.

Fig. 3

FD filter functions representing the pyramid and Shack–Hartmann responses to the incoming wavefronts of a given spatial frequency. Reproduced from Ref. 7. Here, Fc denotes the WFS cut-off frequency determined by the subaperture size d.


In parallel, the transmission mask model for the nonmodulated sensor with the pyramidal prism was derived.60 Later, the transmission mask models for pyramid sensors with linear and circular modulation were obtained.63 Recently, all the diffraction models in the transmission mask approximation have been rederived and reproven in a more rigorous mathematical framework using distribution theory.64 Also, a generalized framework for mathematical modeling of various kinds of Fourier-domain filtering sensors has been developed.5,6 Such a framework provides a common environment and allows one to modify, compare, and explore types of sensors such as the flattened PWFS65,66 or generalized pyramid sensors with an arbitrary number of facets.67–69

3.3.

Diffractive Phase Mask Approach

A more exact modeling approach is to consider the pyramidal prism as a phase mask object.59 This is easily done numerically, and phase mask forward simulations of pyramid-type wavefront sensors have been implemented in AO simulation tools such as OCTOPUS,70 YAO,71 COMPASS,72 PASSATA,73 and OOMAO.74 However, the phase mask model leads to cumbersome analytical derivations when one tries to compute an explicit analytical relation between the sensor data and the incoming wavefront. A final analytical description of the PWFS data in the phase mask model is yet to be developed.

Recently, other definitions of pyramid sensor signals have been explored. For instance, it was suggested to use the four intensity patterns directly or other combinations of them.5,6,75–77 An analytical comparison of the standard difference-like data definition [Eq. (7)] with the direct usage of the four intensities showed that the former provides improved linearity with respect to the incoming wavefront. Therefore, when used with linear reconstruction algorithms, the standard data definition is to be preferred. However, in the case of pyramid imperfections, extending the sensor data definition by introducing other combinations of the four intensities was demonstrated to benefit the reconstruction quality.

4.

Approximate Transmission Mask Models

The modeling approaches reviewed (in the most general way) in Sec. 3 provide a number of explicit mathematical relationships connecting the incoming wavefront $\Phi$ with the sensor data $s_x$, $s_y$:

Eq. (16)

$[s_x,s_y]=P_i\,\Phi,\qquad i=1,2,\ldots,$
with $P_i$ being accurate or approximate operators representing the action of the pyramid sensor on the wavefront. The final aim of such forward modeling is to obtain a relationship

Eq. (17)

$[s_x,s_y]=\tilde{P}\,\Phi,$
such that the operator $\tilde{P}$ satisfies the following two contradictory conditions in an optimal way. First, the forward operator $\tilde{P}$ still has to be involved enough to describe the sensor with adequate accuracy. Second, the explicit mathematical expression of $\tilde{P}$ has to be simple enough that it can be inverted with a reasonable amount of computation. At the current stage of research, the diffractive transmission mask modeling approach (with additional simplifications) results in forward models around this balance point, allowing for fast and accurate wavefront reconstruction. Such forward models are the topic of this section. The models are formulated as Propositions, and the corresponding proofs can be found in the provided references.

According to the diffractive model in the transmission mask approximation, the nonmodulated PWFS measures a combination of one-dimensional (1-D) and 2-D finite Hilbert transforms of nonlinear functions of the phase.63,78 For the modulated pyramid sensor, the full theoretical model becomes even more complicated. A comprehensive description of the full as well as approximate forward models of pyramid and roof sensors for all modulation scenarios can be found in Refs. 63 and 78.

Clearly, such forward models are mathematically difficult to invert. The model-based reconstruction methods often work with simplifications of the full pyramid sensor starting from a transmission mask model instead of the phase mask model.

Often, the pyramid model is further simplified using the roof sensor approximation, i.e., excluding the cross terms in the full pyramid model. The roof sensor operator [Eq. (18)] can be linearized, and the linear model [Eq. (19)] can be further simplified, resulting in a one-term operator [Eq. (20)]. The latter, in its turn, can be simplified using the infinite telescope size assumption, resulting in a simple convolution operator [Eq. (21)]. Although the simplifications leading to the one-term approximate models are rather significant, numerical validation shows that many of the reconstruction methods built on this idea still deliver very accurate wavefront estimates; see Table 4 in Sec. 6. In the remainder of this section, we focus on the simplifying assumptions and the respective approximate models. The corresponding proofs can be found in the dedicated Refs. 7, 11, 63, and 64.

4.1.

Roof WFS Approximation

The theoretical model of the PWFS becomes simpler when, instead of the four-sided pyramidal prism, one assumes two orthogonally placed two-sided roof prisms.7,11,79 Due to the physical decoupling of the prisms and their orthogonal placement with respect to each other, the two signal sets $s_x$ and $s_y$ are independent and contain information about the phase $\Phi$ in the x- and y-direction only, respectively.

Proposition 1.

Under the roof sensor assumption, the PWFS data $s_x^{\{n,l,c\}}$ are approximated by

Eq. (18)

$s_x^{\{n,l,c\}}(x,y)\approx\frac{1}{\pi}\int_{-B(y)}^{+B(y)}\sin\!\left[\Phi(x',y)-\Phi(x,y)\right]\frac{k^{\{n,l,c\}}(x'-x)}{x'-x}\,dx',$
where the functions $k^{\{n,l,c\}}$ are defined as $k^{n}(x)=1$, $k^{l}(x)=\operatorname{sinc}(\alpha_{\lambda}x)$, and $k^{c}(x)=J_{0}(\alpha_{\lambda}x)$. Here, the superscripts $\{n,l,c\}$ denote the cases of no modulation, linear modulation, and circular modulation of amplitude $\alpha=b\lambda/D$ with a positive integer $b$; $\{-B(y),+B(y)\}$ denote the boundaries of the pupil images for a fixed $y$; $\alpha_{\lambda}=2\pi\alpha/\lambda$; and $J_{0}$ denotes the zero-order Bessel function of the first kind.

4.2.

Closed-Loop Approximation

An additional assumption of small wavefront distortions $\Phi\ll 1$, as expected in closed-loop AO, allows one to linearize the models of the pyramid sensor measurements.

Proposition 2.

Under the closed-loop assumptions, the linearized roof sensor data are approximated by7,11,63

Eq. (19)

$s_x^{\{n,l,c\}}(x,y)\approx\frac{1}{\pi}\int_{-B(y)}^{B(y)}\left[\Phi(x',y)-\Phi(x,y)\right]\frac{k^{\{n,l,c\}}(x'-x)}{x'-x}\,dx'.$

4.3.

One-Term Model

The energy distribution between the two terms in Eq. (19) is unequal, which, in the case of small wavefront perturbations, allows one to focus on the component carrying most of the energy and ignore the second term.

Proposition 3.

Under the roof sensor, finite telescope size, and small wavefront perturbations (closed loop) assumptions, the PWFS data are approximated as7,63

Eq. (20)

$s_x^{\{n,l,c\}}(x,y)\approx\frac{1}{\pi}\int_{-B(y)}^{B(y)}\Phi(x',y)\,\frac{k^{\{n,l,c\}}(x'-x)}{x'-x}\,dx'.$

4.4.

Infinite Telescope Approximation

Assuming an infinite telescope size $B(y)\to\infty$, one can simplify the forward model further. This assumption is equivalent to extending the wavefront $\Phi$ with zeros outside the pupil $\Omega_y=[-B(y),B(y)]$. Since the telescope pupil is finite anyway, this step does not change the model itself but allows one to use new inversion equations.

Proposition 4.

Under the roof sensor, closed loop, and infinite telescope size assumptions, the PWFS data are approximated as7,11,63

Eq. (21)

$s_x^{\{n,l,c\}}(x,y)\approx\frac{1}{\pi}\int_{-\infty}^{+\infty}\chi_{\Omega_y}(x')\,\Phi(x',y)\,\frac{k^{\{n,l,c\}}(x'-x)}{x'-x}\,dx'=\left[\Phi(\cdot,y)*\frac{k^{\{n,l,c\}}(\cdot)}{\pi\,(\cdot)}\right](x,y),$
where $*$ denotes the convolution operator and $(\cdot)$ denotes the silent variable over which the convolution is performed.
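As an illustration of Propositions 1-4, the crude quadrature below evaluates the one-term model with the kernels $k^{\{n,l,c\}}$; it assumes the unnormalized sinc, sin(x)/x, and approximates the singular denominator by simply zeroing the diagonal sample, so it is a rough sketch rather than a faithful discretization.

```python
# Crude 1-D sketch of the linearized one-term roof model, Eqs. (20)-(21).
import numpy as np
from scipy.special import j0

def roof_kernel(lag, mode="n", alpha_lambda=None):
    """k^{n,l,c}(lag) / (pi * lag), with the singular (lag = 0) entries set to 0."""
    out = np.zeros_like(lag, dtype=float)
    nz = lag != 0
    l = lag[nz]
    if mode == "n":                       # no modulation
        k = np.ones_like(l)
    elif mode == "l":                     # linear modulation, unnormalized sinc
        k = np.sin(alpha_lambda * l) / (alpha_lambda * l)
    else:                                 # circular modulation
        k = j0(alpha_lambda * l)
    out[nz] = k / (np.pi * l)
    return out

def roof_signal(phi, dx, mode="n", alpha_lambda=None):
    """Approximate s_x on the same grid as phi (crude quadrature of Eq. (20))."""
    x = np.arange(phi.size) * dx
    lag = x[None, :] - x[:, None]         # x' - x for every pair of samples
    return roof_kernel(lag, mode, alpha_lambda) @ phi * dx
```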

4.5.

Subaperture Discretization

So far, the continuous model of the sensor data was considered, neglecting the finite sampling of the sensor. What is measured in practice are the averaged data values over the subapertures of size $d$. Following the approach in Ref. 7, the sensor data $s_x^{\{n,l,c\}}$ are considered as discrete function values estimated at the (discrete) middle points $\{\bar{x},\bar{y}\}$ of the WFS subapertures.

The discrete sensor data are obtained from the continuous data $s_x^{\{n,l,c\}}$ in the following two steps. First, the continuous data $s_x^{\{n,l,c\}}$ are averaged over the subapertures, which is mathematically represented by a convolution of $s_x^{\{n,l,c\}}$ with the characteristic function $\chi_{[-1/2,1/2]}(x)$, defined as

Eq. (22)

$\chi_{[-1/2,1/2]}(x)=\begin{cases}1, & x\in[-1/2,1/2],\\ 0, & \text{otherwise}.\end{cases}$

That is,

Eq. (23)

$\bar{s}_x^{\{n,l,c\}}(x)=\left[s_x^{\{n,l,c\}}(\cdot)*\tfrac{1}{d}\,\chi_{[-1/2,1/2]}\!\left(\tfrac{\cdot}{d}\right)\right](x).$

Here, $(\cdot)$ denotes the silent variable over which the convolution is performed. Note that from the definition in Eq. (22), it follows that

Eq. (24)

$\bar{s}_x^{\{n,l,c\}}(x)=\frac{1}{d}\int_{-\infty}^{+\infty}s_x^{\{n,l,c\}}(x')\,\chi_{[-1/2,1/2]}\!\left(\frac{x-x'}{d}\right)dx'=\frac{1}{d}\int_{x-d/2}^{x+d/2}s_x^{\{n,l,c\}}(x')\,dx'.$

In the second step, from the averaged data values $\bar{s}_x^{\{n,l,c\}}(x)$, given at the continuous space variable $x$, a set of discrete values $\{\hat{s}_x^{\{n,l,c\}}\}=\{\bar{s}_x^{\{n,l,c\}}(\bar{x})\}$ is picked at the middle points $\bar{x}$ of the subapertures. Mathematically, this step is represented by an application of the so-called sampling function $T_d$ to the averaged data $\bar{s}_x^{\{n,l,c\}}(x)$

Eq. (25)

$\hat{s}_x^{\{n,l,c\}}(x)=T_d(x)\cdot\bar{s}_x^{\{n,l,c\}}(x).$

The sampling function $T_d$, also known as the Dirac comb, is from the mathematical point of view a distribution, or generalized function, and is defined as an infinite sum of shifted delta distributions

Eq. (26)

$T_d(x)=\sum_{k=-\infty}^{+\infty}\delta(x-kd)=\frac{1}{d}\,T\!\left(\frac{x}{d}\right).$

Therefore, by an application of $T_d(x)$ to the averaged data $\bar{s}_x^{\{n,l,c\}}(x)$, we pick a discrete set of values $\bar{s}_x^{\{n,l,c\}}(\bar{x})$ at a discrete set of points $\{\bar{x}\}=\{x\,|\,x/d\in\mathbb{Z}\}$ representing the middle points of the sensor subapertures.
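A compact way to mimic the two discretization steps of this subsection is to box-average a finely sampled signal and then pick the subaperture midpoints; the sampling parameters below are illustrative.

```python
# Sketch of Sec. 4.5: box-average over subapertures of size d, Eqs. (22)-(24),
# then sample at the subaperture midpoints, Eqs. (25)-(26).
import numpy as np

def discretize_signal(s_cont, dx, d):
    """s_cont: finely sampled sensor signal with spacing dx; d: subaperture size."""
    n_per_sub = int(round(d / dx))              # fine samples per subaperture
    box = np.ones(n_per_sub) / n_per_sub        # (1/d) * characteristic function
    s_avg = np.convolve(s_cont, box, mode="same")       # running average, Eq. (23)
    # pick the averaged values at the subaperture midpoints, Eq. (25)
    midpoints = np.arange(n_per_sub // 2, s_cont.size, n_per_sub)
    return s_avg[midpoints]
```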

4.6.

Fourier-Domain Representation

Apart from the analytical models relating the incoming wavefronts to the registered sensor data in the spatial domain, the Fourier-domain models provide a relationship between the spectra of the quantities of interest. In this section, the Fourier-domain representations of the pyramid sensor data are summarized for the case of the linearized one-term roof approximation [Eq. (20)].

Note that due to the finite sampling (i.e., subaperture discretization) of the WFS, the spectrum of the measured sensor data contains only certain (discrete) frequencies $\bar{\xi}$ sampled in the interval $[-\xi_{\mathrm{cut}},\xi_{\mathrm{cut}}]$ with a sampling step $\xi_{\mathrm{step}}\sim 1/D$ determined by the telescope diameter $D$. The cut-off frequency $\xi_{\mathrm{cut}}$ is determined by the sensor subaperture size $d$ as $\xi_{\mathrm{cut}}=1/(2d)$. In the modulated case, let the parameter $\xi_{\mathrm{mod}}>0$ be defined as $\xi_{\mathrm{mod}}=\alpha/\lambda=b/D$, where $b$ is a positive integer. The parameter $\xi_{\mathrm{mod}}$ defines the frequency at which the transition between the two regimes (slope versus phase mode) of the pyramid-type sensor happens; see Sec. 3.2 and Fig. 3 therein for more details.

4.6.1.

Spectrum of continuous data

Proposition 5.

For each of the modulation scenarios, the spectrum of the continuous sensor data is given as a product of the wavefront spectrum with a corresponding filter function $g_{\mathrm{pyr}}^{\{n,l,c\}}$:7,63

Eq. (27)

$(\mathcal{F}s_x^{\{n,l,c\}})(\xi)=(\mathcal{F}\Phi)(\xi)\cdot g_{\mathrm{pyr}}^{\{n,l,c\}}(\xi),$
where the Fourier-domain filters $g_{\mathrm{pyr}}^{\{n,l,c\}}$, corresponding to the sensor without modulation, with linear modulation, and with circular modulation of radius $\alpha$, respectively, are given as

Eq. (28)

$g_{\mathrm{pyr}}^{n}(\xi)=i\,\mathrm{sgn}(\xi),\qquad \xi\in[-\xi_{\mathrm{cut}},\xi_{\mathrm{cut}}],$

Eq. (29)

$g_{\mathrm{pyr}}^{l}(\xi)=\begin{cases}i\,\mathrm{sgn}(\xi), & |\xi|>\xi_{\mathrm{mod}},\\ i\,\xi/\xi_{\mathrm{mod}}, & |\xi|\le\xi_{\mathrm{mod}},\end{cases}$

Eq. (30)

$g_{\mathrm{pyr}}^{c}(\xi)=\begin{cases}i\,\mathrm{sgn}(\xi), & |\xi|>\xi_{\mathrm{mod}},\\ \frac{2i}{\pi}\arcsin(\xi/\xi_{\mathrm{mod}}), & |\xi|\le\xi_{\mathrm{mod}}.\end{cases}$
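The filters of Eqs. (28)-(30) translate directly into code; the sketch below keeps the sign convention as written in the text (the overall sign depends on the chosen Fourier-transform convention), and the function name and interface are illustrative.

```python
# Sketch of the FD filter functions of Proposition 5, Eqs. (28)-(30),
# with xi_mod = alpha/lambda = b/D as in Sec. 4.6.
import numpy as np

def g_pyr(xi, modulation="none", xi_mod=None):
    """FD filter of the linearized one-term roof/pyramid model."""
    if modulation == "none":
        return 1j * np.sign(xi)                                   # Eq. (28)
    low = np.abs(xi) <= xi_mod
    g = 1j * np.sign(xi)                                          # |xi| > xi_mod branch
    if modulation == "linear":
        g[low] = 1j * xi[low] / xi_mod                            # Eq. (29)
    elif modulation == "circular":
        g[low] = 2j / np.pi * np.arcsin(xi[low] / xi_mod)         # Eq. (30)
    return g
```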

4.6.2.

Spectrum of averaged continuous data

Proposition 6.

For each of the modulation scenarios, the spectrum of the averaged continuous sensor data $\bar{s}_x^{\{n,l,c\}}$ is given as the pointwise product7,63

Eq. (31)

$(\mathcal{F}\bar{s}_x^{\{n,l,c\}})(\xi)=(\mathcal{F}\Phi)(\xi)\cdot h_{\mathrm{pyr}}^{\{n,l,c\}}(\xi),$
of the wavefront spectrum, evaluated at a discrete set of frequencies $\xi$, with the corresponding discrete filter function $h_{\mathrm{pyr}}^{\{n,l,c\}}$ given as

Eq. (32)

$h_{\mathrm{pyr}}^{\{n,l,c\}}(\xi)=g_{\mathrm{pyr}}^{\{n,l,c\}}(\xi)\cdot\operatorname{sinc}(d\xi).$

4.6.3.

Spectrum of discrete data

Proposition 7.

For any modulation scenario, the spectrum of the discretized sensor data $\hat{s}_x^{\{n,l,c\}}$ is a convolution of the spectrum of the averaged continuous sensor data $\bar{s}_x^{\{n,l,c\}}$ with the sampling function7,63

Eq. (33)

$(\mathcal{F}\hat{s}_x^{\{n,l,c\}})(\xi)=\left[\mathcal{F}\bar{s}_x^{\{n,l,c\}}(\cdot)*T(d\,\cdot)\right](\xi).$

From now on, we do not distinguish between modulation scenarios anymore.

5.

Wavefront Reconstruction Methods Using Pyramid Sensor Data

The problem of wavefront reconstruction from pyramid sensor data is mathematically described by the WFS equation

Eq. (34)

$s=P\Phi+\eta,$
where $P$ stands for the pyramid sensor operator, $\Phi$ denotes the incoming (residual) wavefront, $s=[s_x,s_y]$ denotes the sensor data, and $\eta$ represents the noise on the measurements.

Since the invention of the pyramid sensor, a considerable number of approaches has been developed and implemented for wavefront reconstruction from its data.80 Among the earliest are the interaction-matrix-based matrix-vector multiplication (MVM) approaches.70,81–87 Later, the development of fast model-based linear reconstructors60,63,78,79,88–96 began, and experiments with nonlinear algorithms, including applications of learning approaches, were reported.4,60,77,97,98 The algorithms reviewed in this section are split into several groups according to the common underlying model they invert or the common idea they employ for the reconstruction. Before describing the reconstruction methods, we start with one more feature that distinguishes the various reconstruction approaches: the way the wavefront control is handled.

5.1.

Coupled and Decoupled Control

Generally speaking, two approaches to AO loop control are to be distinguished. The traditional (coupled) approach combines wavefront reconstruction and DM fitting into one step. A typical example of such coupling is the calibration of an AO system. Calibration consists of registering a DM-to-WFS IM M, which contains the WFS responses to the input shapes created with a controllable DM. The DM shapes used as input for the WFS can be modal (spread all over the pupil) or zonal (localized). Examples of modal shapes are Zernike polynomials or Karhunen–Loève modes. As zonal shapes, the DM influence functions (IFs) are typically used, which represent the DM shape when a single actuator is poked.

A lot of experience (both theoretical and practical) has been accumulated with the coupled AO control approach, especially with the modal control basis. In spite of the evidence of successful operation of the PWFS (both with and without modulation) with coupled modal control under a variety of atmospheric conditions and AO system configurations, difficulties in closing the loop with a nonmodulated sensor have also been reported in certain (rather typical, nonspecial) cases. Moreover, the modal control is associated with the necessity to identify and correct on the fly the so-called modal optical gains.8,27,99–103 Furthermore, the modal basis is not as well suited for pupils with spiders as the zonal approach, which allows significantly more degrees of freedom in the representation of the wavefront.49,69,104–107

An alternative (decoupled) approach to AO loop control treats the two steps, wavefront reconstruction and DM fitting, separately and independently. In this situation, wavefront reconstruction can be based on a synthetic calibration (using a numerical implementation of the sensor's forward model) done independently of the shapes a DM can produce. This approach is more general since, when choosing a basis for wavefront representation in the reconstruction step, one is not restricted to the DM IFs or the shapes that a DM can represent. Instead, one is free to choose any other basis and to explore various bases with respect to their ability to represent the expected wavefronts in an optimal way. For instance, in the case of a segmented pupil mask (as for the ELT), it is clear that a zonal basis, due to its localization, should be better than a modal basis in representing the wavefront jumps between the pupil segments expected during telescope operation due to the island effect and the LWE (see Secs. 7.1 and 7.2).

Moreover, in the decoupled paradigm, wavefront reconstruction can be based on the analytical inversion of the forward mathematical model of the WFS. In this case, the reconstruction algorithm produces the wavefront itself and does not require any basis for wavefront representation. In both cases (synthetic IM registration and analytical inversion), no knowledge of either the DM IFs or the actuator positions is needed or used for wavefront reconstruction. Instead, one attempts to reconstruct the incoming wavefront shape from the available WFS data as accurately as possible. Due to the decoupling of WFS and DM, the grid of reconstruction points can be arbitrary; in particular, it can be suited to the WFS geometry (which is not the case in the coupled approach with nonregular DM actuator grids). Therefore, the reconstruction can be optimized toward the wavefront sensor and atmospheric characteristics.

5.2.

Interaction-Matrix-Based Reconstructors

The interaction-matrix-based methods are the widely used standard reconstructors employed at existing telescope facilities. An extensive overview of numerous variants of these approaches can be found in Ref. 49. The algorithms are generally applicable to pyramid sensors with and without modulation. Inverting the IM scales as $O(n_a^3)$,83 and the MVM step as $O(n_a^2)$, with the number $n_a$ of active actuators. The computational complexity is demanding and makes the application of MVM methods challenging for large-scale AO systems having, e.g., 40,000 actuators to control in real time.

It is important to understand that there does not exist a single interaction-matrix-based reconstructor. Instead, many instances of this general method are available, with big differences between them. For instance, the interaction-matrix-based control can be implemented in a coupled (the DM is involved in the WFS calibration) or decoupled paradigm, as already explained in Sec. 5.1. Based on the chosen paradigm, the registration of the system IM can be done in several ways: by physical calibration using the real devices, pseudosynthetically (relying on both the devices and computations), or completely synthetically (relying on computations only). Another related point is the choice of basis for wavefront representation and control. In the coupled paradigm, the two bases are typically not distinguished, and one is restricted to a basis that can be fitted by the deformable mirror (DM). On the contrary, in the decoupled approach, one can choose two different bases for wavefront representation and DM control. This allows much more freedom and gives the ability to explore different bases with respect to their ability to represent typical wavefronts in an optimal way. Therefore, when talking about any interaction-matrix-based reconstructor, it is very important to understand the specific details of the particular instance of the method.

In the coupled paradigm, the idea of interaction-matrix-based algorithms is built on the simple DM-to-WFS matrix relation between discrete sensor data s and the sought-after mirror actuator commands a (which are related to the unknown incoming wavefront Φ) given as

Eq. (35)

s=Ma.

The wavefront reconstruction is coupled with the DM in the sense that, for the generation of an IM, one creates a certain (zonal or modal) shape with the DM, which is then sensed by the wavefront sensor. In this approach, one is restricted to wavefront shapes that can be represented by the DM, i.e., that are linear combinations of the DM IFs

Eq. (36)

$\Phi(x,y)=\sum_{i=1}^{n_a}a_i\,\mathrm{IF}_i(x,y),$
or the DM modes

Eq. (37)

$\Phi(x,y)=\sum_{j=1}^{n_c}c_j\,h_j^m(x,y),$
with

Eq. (38)

$h_j^m(x,y)=\sum_{l=1}^{n_a}m_{lj}\,\mathrm{IF}_l(x,y),$
with actuator commands $(m_{lj})$. This results in

Eq. (39)

$\Phi(x,y)=\sum_{j=1}^{n_c}c_j\,h_j^m(x,y)=\sum_{j=1}^{n_c}c_j\sum_{l=1}^{n_a}m_{lj}\,\mathrm{IF}_l(x,y).$

In the coupled approach, a DM-to-WFS IM relates the sensor measurements s directly with the command vectors

Eq. (40)

$s=P\Phi+\eta=P\!\left(\sum_{i=1}^{n_a}a_i\,\mathrm{IF}_i\right)+\eta=\sum_{i=1}^{n_a}a_i\,P(\mathrm{IF}_i)+\eta=:M_{\mathrm{IF}}\,a+\eta,$
or

Eq. (41)

$s=P\Phi+\eta=P\!\left(\sum_{j=1}^{n_c}c_j\,h_j^m\right)+\eta=\sum_{j=1}^{n_c}c_j\,P(h_j^m)+\eta=:M_m\,c+\eta.$

After registration of the DM-to-WFS IM M, the next step in the coupled approach consists of the computation of the control matrix (CM) C as a stable (possibly regularized) inverse of the IM M. Thus, any procedure for finding a generalized inverse $M^\dagger$ of the IM M [e.g., least-squares pseudoinverse, regularized least-squares pseudoinverse, or inversion using a truncated singular value decomposition (SVD)] can be seen as an interaction-matrix-based wavefront reconstruction approach. An overview of various existing approaches to the inversion of M is provided in Ref. 49. One of the typical ways to invert the IM is to use the truncated singular value decomposition of M. This method allows one to stabilize the inversion by rejecting the (high-order) modes most susceptible to noise, i.e., those having the smallest singular values.

Application of the CM to the sensor data then directly provides the DM commands in the prechosen basis (modes or actuators).
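A minimal sketch of this coupled pipeline, assuming an already registered interaction matrix M (rows: measurements, columns: poked modes or actuators) and choosing the truncation level n_keep by hand, could look as follows.

```python
# Sketch of the coupled interaction-matrix approach: truncated-SVD inversion
# of the IM and application of the resulting control matrix to the data.
import numpy as np

def control_matrix(M, n_keep):
    """Truncated-SVD pseudoinverse of the interaction matrix M."""
    U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
    inv_sigma = np.zeros_like(sigma)
    inv_sigma[:n_keep] = 1.0 / sigma[:n_keep]   # reject the smallest singular values
    return Vt.T @ np.diag(inv_sigma) @ U.T      # C, acting as a generalized inverse

# a = control_matrix(M, n_keep) @ s   # actuator commands from one measurement vector
```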

In the decoupled paradigm, a synthetic, noise-free computation of a WF-to-WFS IM $M_s$ is performed. In this step, one is free to choose any suitable basis $(h_j)$, $j=1,\ldots,n_a$, for the wavefront representation

Eq. (42)

$\Phi(x,y)=\sum_{j=1}^{n_a}d_j\,h_j(x,y).$

Here, two points can be taken into account. First, it is reasonable to make sure that the chosen basis allows one to (easily) incorporate the characteristics of the atmosphere for regularization when inverting the WF-to-WFS IM. Second, one has to guarantee that the reconstruction will be stable, i.e., that the condition number of the resulting IM has a reasonable value. An example of a good basis is a set of bilinear functions on a regular grid of discretization points (which can be viewed as "artificial" actuators).49 The coefficients $d_j$ are in this case simply pointwise evaluations of the wavefront $\Phi$ on the chosen discretization grid.

With the basis $(h_j)$, $j=1,\ldots,n_a$, chosen, the synthetic calibration can be performed as

Eq. (43)

$s=P\Phi=P\!\left(\sum_{j=1}^{n_a}d_j\,h_j\right)=\sum_{j=1}^{n_a}d_j\,P(h_j)=:M_s\,d.$

Then one computes a WFS-to-WF CM $C_s$ as a (regularized) inverse of $M_s$. As a next step, one reconstructs the vector of wavefront coefficients $d$ as

Eq. (44)

$d=C_s\,s.$

Next, having an accurate wavefront reconstruction on the chosen grid, one solves the DM fitting problem

Eq. (45)

$\sum_{i=1}^{n_a}a_i\,\mathrm{IF}_i(x,y)=\sum_{j=1}^{n_a}d_j\,h_j(x,y).$

In this step, the actual DM actuator grid can be taken into account, and a different basis (any of the available modal or zonal options) can be chosen for the DM control. In a practical implementation, this step can easily be combined with the wavefront reconstruction step in order to reduce the total amount of computation.
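The decoupled procedure of Eqs. (42)-(45) can be sketched as below; the forward model routine, the basis and influence-function matrices, and the Tikhonov weight are placeholders standing in for a concrete system, and the regularization choice is only one of the options mentioned above.

```python
# Sketch of the decoupled paradigm: synthetic calibration, regularized
# reconstruction, and DM fitting (all inputs are hypothetical placeholders).
import numpy as np

def synthetic_interaction_matrix(forward_model, basis):
    """Columns are the simulated, noise-free responses P(h_j), Eq. (43)."""
    return np.column_stack([forward_model(h_j) for h_j in basis])

def reconstruct_and_fit(Ms, s, H, IF_matrix, reg=1e-3):
    """H: columns are the basis functions h_j sampled on a wavefront grid;
    IF_matrix: columns are the DM influence functions on the same grid."""
    # Eq. (44): regularized (Tikhonov) inverse of Ms applied to the measurements
    d = np.linalg.solve(Ms.T @ Ms + reg * np.eye(Ms.shape[1]), Ms.T @ s)
    w = H @ d                                       # reconstructed wavefront, Eq. (42)
    # Eq. (45): least-squares DM fitting of the reconstructed wavefront
    a, *_ = np.linalg.lstsq(IF_matrix, w, rcond=None)
    return w, a
```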

5.3.

Fourier-Analysis-Based Methods

Now we consider the algorithms based on Fourier-domain analysis of the pyramid sensor data presented in Sec. 4.6. These are the preprocessed cumulative reconstructor with domain decomposition (P-CuReD), the convolution with the linearized inverse filter (CLIF), and several versions of the Fourier transform reconstructor (FTR).

5.3.1.

Preprocessed cumulative reconstructor with domain decomposition

The P-CuReD63,94,108 is a two-step approach consisting of a data-preprocessing part and the application of the CuReD,109–112 originally developed for Shack–Hartmann sensors. The method is applicable to pyramid sensors with and without modulation.

The first step, the data preprocessing, is based on an analytical FD relation between linearized pyramid sensor data $s_{\mathrm{pyr}}$ (see Propositions 5–7) and Shack–Hartmann sensor data. Approximating the pyramid sensor with the simpler one-term, infinite telescope size roof sensor model (see Propositions 2–4), this FD relation to the SH measurements $s_{\mathrm{sh}}$ is given as

Eq. (46)

$\mathcal{F}\{s_{\mathrm{sh}}\}(\xi)=\mathcal{F}\{s_{\mathrm{pyr}}\}(\xi)\cdot g_{\mathrm{sh/pyr}}(\xi).$

For the spatial frequency $\xi$, we consider the interval $[-\xi_{\mathrm{cut}},\xi_{\mathrm{cut}}]$ with cut-off frequency $\xi_{\mathrm{cut}}=1/(2d)$ for the subaperture size $d$. Since for the roof sensor the measurements in x- and y-direction are decoupled, all these considerations can be made in 1-D. The pyramid-to-SH transmission filter $g_{\mathrm{sh/pyr}}$ is formulated as

Eq. (47)

$g_{\mathrm{sh/pyr}}(\xi)=\frac{\mathcal{F}\{s_{\mathrm{sh}}\}(\xi)}{\mathcal{F}\{s_{\mathrm{pyr}}\}(\xi)}.$

As derived in Refs. 7 and 94, for the nonmodulated sensor, the transmission filter is represented as

Eq. (48)

$g_{\mathrm{sh/pyr}}^{n}(\xi)=2\pi d\,\xi\,\mathrm{sgn}(\xi),\qquad \xi\in[-\xi_{\mathrm{cut}},\xi_{\mathrm{cut}}],$
for the circularly modulated sensor as

Eq. (49)

$g_{\mathrm{sh/pyr}}^{c}(\xi)=\begin{cases}2\pi d\,\xi\,\mathrm{sgn}(\xi), & |\xi|>\xi_{\mathrm{mod}},\\ \dfrac{\pi^2 d\,\xi}{\arcsin(\xi/\xi_{\mathrm{mod}})}, & |\xi|\le\xi_{\mathrm{mod}},\end{cases}$
and for the linearly modulated sensor as

Eq. (50)

$g_{\mathrm{sh/pyr}}^{l}(\xi)=\begin{cases}2\pi d\,\xi\,\mathrm{sgn}(\xi), & |\xi|>\xi_{\mathrm{mod}},\\ 2\pi d\,\xi_{\mathrm{mod}}, & |\xi|\le\xi_{\mathrm{mod}}.\end{cases}$

Converting the transmission filters into space-domain kernels by the application of the IFT, i.e.,

Eq. (51)

$p_{\mathrm{sh/pyr}}(x)=\mathcal{F}^{-1}\{g_{\mathrm{sh/pyr}}\}(x),$
and choosing a suitable discretization approach, one ends up with a representation of the kernels having only a few nonzero values. Thus, the data preprocessing, which is approximated as a row- and columnwise convolution of the measurements with the corresponding kernel, is computationally cheap.
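A rough sketch of this preprocessing for one measurement direction is given below (columns are treated analogously); the truncation to a few kernel taps, the handling of the zero frequency, and the grid parameters are illustrative simplifications.

```python
# Sketch of the P-CuReD data preprocessing: build the pyramid-to-SH
# transmission filter of Eqs. (48)-(49), transform it to a short spatial
# kernel [Eq. (51)], and convolve the pyramid measurements row by row.
import numpy as np

def sh_like_rows(s_pyr_rows, d, xi_mod=None, n_taps=5):
    """Apply the preprocessing kernel to every row of the measurement array."""
    n = s_pyr_rows.shape[1]
    xi = np.fft.fftfreq(n, d)                       # frequencies up to 1/(2d)
    g = 2 * np.pi * d * np.abs(xi)                  # no modulation, Eq. (48)
    if xi_mod is not None:                          # circular modulation, Eq. (49)
        low = (np.abs(xi) <= xi_mod) & (xi != 0)
        g[low] = np.pi**2 * d * xi[low] / np.arcsin(xi[low] / xi_mod)
    # the zero frequency is left untouched in this crude sketch
    p = np.real(np.fft.ifft(g))                     # spatial-domain kernel, Eq. (51)
    kernel = np.fft.fftshift(p)
    center = n // 2
    kernel = kernel[center - n_taps // 2: center + n_taps // 2 + 1]  # few taps only
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, "same"),
                               1, s_pyr_rows)
```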

After the pyramid sensor measurements are transformed into SH-like data, the CuReD algorithm is applied to the modified pyramid signal. Previously, the high-quality and high-speed performance of the CuReD for SH sensors was demonstrated in numerous closed-loop end-to-end simulations as well as in on-sky tests.113,114

The two steps together provide an accurate wavefront reconstruction method with a linear complexity of $O(n_a)$. The P-CuReD is (to our knowledge) the fastest reconstruction method available for pyramid sensors and gives quality results that are comparable to or even better than those obtained by interaction-matrix-based approaches.67,115

In the case of segmented pupils, the P-CuReD algorithm combined with a direct segment piston reconstructor (DSPR)49,116 demonstrates excellent performance with almost no loss in quality compared to the nonsegmented case.

5.3.2.

Fourier transform reconstructor with Shack–Hartmann filter

The first approach to wavefront reconstruction from pyramid sensor data by means of Fourier filtering88 consisted of the application of the algorithm originally developed for SH sensors.117 The method assumed the pyramid sensor forward model [Eq. (10)] derived within the geometrical optics framework, which is valid for large modulation amplitudes. Therefore, in order to provide a reasonable reconstruction quality, the algorithm requires a large amount of modulation to be applied to the PWFS, which makes its response function linear and the sensor itself very similar to the SH sensor. In order to guarantee spatial periodicity, the pyramid sensor signal has to be appropriately extended outside the pupil mask. The extended data are Fourier transformed, and an inverse filter relevant for SH sensors is applied. The final DM commands are obtained by taking the IFT of the filtered data spectrum.

The FTR has a computational complexity of $O(n_a\log n_a)$ if the fast Fourier transform (FFT) is used. A close correlation between SH and pyramid data is only valid for low-order WFSs with a large amount of modulation applied. Therefore, for high-order systems, the FTR with the SH filter is outperformed by other methods.

5.3.3.

Convolution with the linearized inverse filter

The CLIF63,91,92 is a spatial-domain algorithm based on the FD analysis of the PWFS data given in Propositions 5–7. The CLIF method is applicable to the pyramid sensor with and without modulation. Similarly to the P-CuReD, the algorithm assumes as forward model the linearized one-term roof approximation [Eq. (20)] of the pyramid sensor. The idea of the algorithm is the application of the inverse FD filter functions.

Let us recall that $\hat{s}$ indicates the discrete pyramid sensor data and $\bar{\xi}$ denotes a discrete set of frequencies. According to the descriptions in Refs. 7 and 92, the discrete spectrum $\mathcal{F}\{\hat{s}_x\}$ of the pyramid data $\hat{s}_x$ evaluated at the frequencies $\bar{\xi}$ is a pointwise product of the wavefront spectrum $\mathcal{F}\{\Phi\}$ with a filter $h$, i.e.,

Eq. (52)

$\mathcal{F}\{\hat{s}_x\}(\bar{\xi})=\mathcal{F}\{\Phi\}(\bar{\xi})\cdot h(\bar{\xi}).$

The discrete filter h is given as

Eq. (53)

$h(\bar{\xi})=g_{\mathrm{pyr}}(\bar{\xi})\cdot\operatorname{sinc}(d\bar{\xi}),$
for the pyramid filter functions $g_{\mathrm{pyr}}$ introduced above.

In the CLIF method, the wavefront is reconstructed in the spatial domain by the convolution with the kernel

Eq. (54)

$\Phi(\bar{x})=\left(\hat{s}_x*\mathcal{F}^{-1}\{h^{-1}\}\right)(\bar{x}).$

Since in the roof sensor approximation the data in x-direction are independent of those in y-direction and vice versa, the considered convolutions and Fourier transforms are in 1-D. Data in both directions are handled separately, and the two obtained reconstructions are averaged afterward. The CLIF method has a complexity of $O(n_a^{3/2})$.

5.3.4.

Fourier transform reconstructor with pyramid filter

In a next step, two versions of the FTR with dedicated pyramid filters were developed in parallel.91,92,118,119 Both of them use the same idea and work with the correlation between the spectra of the discrete pyramid sensor data and the incoming wavefront but have different implementations. In Ref. 118, a first application of the FTR with an approximate pyramid FD filter was presented for a low-order PWFS. The roof sensor approximation and the linearized operator were assumed there.

In parallel, in Refs. 91 and 92, a direct FD analog of the CLIF method was introduced, named the pyramid Fourier transform reconstructor (PFTR). The reconstruction is performed in the FD by the multiplication of the discrete pyramid sensor spectrum with the linearized roof inverse filter $h^{-1}$ (see Proposition 6)

Eq. (55)

$\mathcal{F}\{\Phi\}(\bar{\xi})=\mathcal{F}\{\hat{s}_x\}(\bar{\xi})\cdot h^{-1}(\bar{\xi}),$
and a subsequent IFT. In contrast to the CLIF, the computational complexity scales as $O(n_a\log n_a)$ if the FFT is used.
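The PFTR idea reduces to a spectral division, as in the following sketch for a single measurement row; the filter callable, the guard eps, and numpy's sinc convention, sin(pi x)/(pi x), are assumptions that may need adjustment for a concrete setup.

```python
# Sketch of the PFTR, Eq. (55): divide the 1-D spectrum of each measurement
# row by the filter h of Eq. (53) and transform back.
import numpy as np

def pftr_row(s_hat_row, d, g_filter, eps=1e-6):
    """g_filter: callable xi -> g_pyr(xi), e.g. a helper like the sketch in Sec. 4.6."""
    n = s_hat_row.size
    xi = np.fft.fftfreq(n, d)
    h = g_filter(xi) * np.sinc(d * xi)          # Eq. (53); numpy sinc = sin(pi x)/(pi x)
    spectrum = np.fft.fft(s_hat_row)
    phi_spec = np.zeros_like(spectrum)
    good = np.abs(h) > eps                      # guard against division by ~0
    phi_spec[good] = spectrum[good] / h[good]   # Eq. (55)
    return np.real(np.fft.ifft(phi_spec))
```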

Finally, Ref. 119 introduces an iterative sensor data extension providing smoother reconstruction on the pupil boundaries and presents first laboratory demonstrations of the FTR.

5.4.

Hilbert Transform Methods

Now, we focus on algorithms that reconstruct the unknown wavefront $\Phi$ from sensor data $s_x$ approximated by its Hilbert transform $H\Phi$, $H:L_2(\mathbb{R})\to L_2(\mathbb{R})$, given as

Eq. (56)

$(H\Phi)(x,y)=\frac{1}{\pi}\,\mathrm{p.v.}\int_{\mathbb{R}}\frac{\Phi(x',y)}{x'-x}\,dx'.$

Reconstructors based on the inversion of the Hilbert transform are generally only applicable to a pyramid sensor without modulation. If we assume an infinite telescope size (see Proposition 4), the nonmodulated pyramid sensor measurements can be approximated by the Hilbert transform operator applied to the incoming phase written as

Eq. (57)

sx=HΦ.

Thus, any attempts of inverting the Hilbert transform H can be utilized for reconstructing the wavefront Φ from nonmodulated pyramid sensor data sx.

5.4.1.

Hilbert transform reconstructor

The inverse of the Hilbert transform is given by its negative, i.e., $H^{-1}=-H$. The inversion itself is based on the simple FD representation of the Hilbert transform given as

Eq. (58)

$\mathcal{F}\{H\Phi\}(\xi)=i\,\mathrm{sgn}(\xi)\,\mathcal{F}\{\Phi\}(\xi).$

In the Hilbert transform reconstructor (HTR) algorithm, the inversion of the Hilbert transform is performed in the FD as a multiplication of the data spectrum $\mathcal{F}\{s_x\}(\xi)$ with the corresponding inverse filter function. The reconstructed phase spectrum is afterward converted to the spatial domain by the application of a 1-D IFT. Using the FFT algorithm, the mentioned reconstruction method has a computational complexity that scales as $O(n_a\log n_a)$.
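A few lines suffice to sketch the HTR for one row of nonmodulated data, assuming the convention of Eq. (58); with a different Fourier-transform convention the sign flips. Note how the zero frequency has to be dropped, which is precisely the lost mean value addressed by the HTMR variant discussed below.

```python
# Sketch of the HTR: invert s = H(Phi) in the FD, using H^{-1} = -H, i.e.
# multiplication of the data spectrum by -i*sgn(xi).
import numpy as np

def htr_row(s_row, dx=1.0):
    xi = np.fft.fftfreq(s_row.size, dx)
    phi_spec = -1j * np.sign(xi) * np.fft.fft(s_row)   # sgn(0) = 0: the mean is lost
    return np.real(np.fft.ifft(phi_spec))
```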

The idea was first proposed in Ref. 79. Later, an adaptation of the algorithm named the Hilbert transform with mean restoration (HTMR) was introduced.93,95 It was observed that, when using the HTR algorithm, the mean values of each row for reconstructions in x-direction and of each column for reconstructions in y-direction are zero, and therefore the continuity of the wavefronts gets lost. The idea of the HTMR algorithm is to restore these mean values.

Compared to interaction-matrix-based results in closed-loop simulations for an XAO setting on the ESO's ELT, these approaches give a reduced quality. One reason may be that the aperture mask has a strong influence on the sensor data. Hence, the assumption of an infinite telescope size possibly degrades the reconstruction performance for annular telescope pupils.

5.4.2.

Two component reconstructor

Approximating the pyramid sensor by a linearized roof sensor [or, more precisely, its one-term version as in Eq. (20)] and considering the modulated filter functions [Eqs. (29) and (30)], one sees that the spectrum of the sensor data consists of two different components: the high-frequency part, which is constant and given by $i\,\mathrm{sgn}(\xi)$, and the low-frequency part, which is (almost) linear in $\xi$. While the high spatial frequencies of the wavefront are represented in the pyramid sensor data through the Hilbert transform, the low-frequency component is represented in the same way as for the SH sensor, i.e., the signals are essentially the gradients of the incoming phases.

The idea of the two component reconstructor (TCR)96 is to treat these two parts separately. For that reason, the sensor data $s_x$ are split into a high-frequency component $s_x^{\mathrm{high}}$ and a low-frequency component $s_x^{\mathrm{low}}$ with respect to the threshold frequency $\xi_{\mathrm{mod}}$. The high-frequency part is reconstructed using the HTMR algorithm, and the low frequencies are estimated by the application of the CuRe,109,110 a predecessor of the CuReD111 for SH sensors. Both reconstructions are then combined into one final solution using two different gains, which are individually adapted to the two regimes.

The TCR has a computational complexity of $O(n_a\log n_a)$. After tests for an 8-m telescope with $40\times 40$ subapertures carried out in OCTOPUS, the development of the algorithm was not continued in favor of more promising approaches.

5.4.3.

Finite Hilbert transform reconstructor

Another wavefront reconstruction method for nonmodulated pyramid sensors is the finite Hilbert transform reconstructor (FHTR).63 According to Proposition 3 in Sec. 4, the pyramid operator is approximated by the finite Hilbert transform $T:L_2[-B(y),B(y)]\to L_2[-B(y),B(y)]$, which is given as

Eq. (59)

$(T\Phi)(x)=\frac{1}{\pi}\,\mathrm{p.v.}\int_{-B(y)}^{B(y)}\frac{\Phi(x')}{x'-x}\,dx',$
for a real-valued interval $[-B(y),B(y)]$. In contrast to the HTR and HTMR, the algorithm now takes finite telescope apertures into account.

In the FHTR approach, the wavefronts are reconstructed by applying the inverse $T^{-1}$ of the finite Hilbert transform operator $T$ to the data. One can either utilize the linearized pyramid sensor model

Eq. (60)

$s_x(x,y)=(T\Phi)(x,y)-\Phi(x,y)\cdot(T\mathbf{1})(x,y),\qquad \mathbf{1}(x)=1\ \ \forall x,$
where the function $\mathbf{1}$ represents the constant function equal to 1, and reconstruct iteratively as

Eq. (61)

$\Phi_{k+1}(x,y)=T^{-1}\!\left[s_x+\Phi_k\cdot(T\mathbf{1})\right](x,y),$
or simplify the pyramid sensor measurements further as

Eq. (62)

sx(x,y)=(TΦ)(x,y),
and reconstruct just as

Eq. (63)

$\Phi(x,y)=(T^{-1}s_x)(x,y).$

In contrast to the classical Hilbert transform H with inverse $H^{-1}=-H$, the inversion of the finite Hilbert transform is not straightforward. However, the inversion of the finite Hilbert transform is nowadays a well-studied problem with many different implementations of the inversion equations, depending on the boundedness of the involved functions at the boundaries of the considered area of interest, e.g., those found in Refs. 120–122.

For the FHTR, the telescope aperture is mapped onto the interval $[-1,1]^2$, and the algorithm uses the inverse introduced in Ref. 121 as

Eq. (64)

$(T^{-1}s_x)(x,y)=\frac{1}{\pi}\int_{-1}^{1}\frac{\sqrt{1-x'^2}}{\sqrt{1-x^2}}\,\frac{s_x(x',y)}{x'-x}\,dx',$
for the operator in x-direction and a fixed $y\in[-1,1]$, which results in a 1-D problem. The integrals are understood in the p.v. (principal value) sense in order to have well-defined operators on $L_2$.

With a computational complexity of $O(n_a^{3/2})$, the algorithm takes an intermediate position among the reviewed methods with respect to speed. Numerical closed-loop AO simulations in OCTOPUS showed that the reconstruction performance of the FHTR is rather limited compared to MVM or P-CuReD results.
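For illustration only, the following crude quadrature of the inversion formula [Eq. (64)] skips the singular sample as a rough principal value approximation and works on interior grid points to keep the weights finite; it is not a validated FHTR implementation and ignores the iterative correction of Eq. (61).

```python
# Crude quadrature sketch of the finite Hilbert transform inversion, Eq. (64),
# for one row of data given on a uniform grid of midpoints inside (-1, 1).
import numpy as np

def finite_hilbert_inverse(s_row, x=None):
    n = s_row.size
    if x is None:
        x = (np.arange(n) + 0.5) / n * 2.0 - 1.0        # midpoints in (-1, 1)
    dx = x[1] - x[0]
    w_num = np.sqrt(1.0 - x**2)                         # sqrt(1 - x'^2)
    phi = np.zeros(n)
    for i in range(n):
        lag = x - x[i]
        lag[i] = np.inf                                 # drop the singular sample (p.v.)
        phi[i] = np.sum(w_num * s_row / lag) * dx / (np.pi * np.sqrt(1.0 - x[i]**2))
    return phi
```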

5.4.4.

Singular value type reconstructor

Another reconstruction idea for nonmodulated pyramid sensors, with a complexity of $O(n_a^{3/2})$, is comparable to the FHTR. Instead of the direct inversion equation of the finite Hilbert transform, a different inversion procedure is employed. The approach uses the SVD of the finite Hilbert transform operator T. Due to the noncompactness of the involved operator, the classical theory for an SVD does not hold, but the inversion procedure based on the decomposition is still applicable. In order to point to this fact, the method is called the singular value type reconstructor (SVTR).89

As before, the pyramid data without modulation are represented by the finite Hilbert transform of the incoming phase as in Eq. (62). Recall that in the FHTR approach the reconstruction is obtained as

Eq. (65)

$\Phi(x,y) = (T^{-1}s_x)(x,y),$
where $T^{-1}$ denotes the direct inversion formula of the finite Hilbert transform. For this algorithm, the direct inversion formula is replaced: the Moore–Penrose inverse is expressed as a singular-value-type expansion in a weighted Lebesgue space $L^2_\omega([-1,1])$. Utilizing its SVD $(\sigma_k, f_k, g_k)_{k \ge 0}$ with singular values $(\sigma_k)_{k \ge 0}$ and singular functions $(f_k)_{k \ge 0}$ and $(g_k)_{k \ge 0}$, the wavefront sensor operator is decomposed into

Eq. (66)

$T f_k = \sigma_k g_k, \qquad k \ge 0.$

Based on the theory of analytical inversion using the SVD of an operator (e.g., Ref. 123), the incoming wavefront is reconstructed using data in x-direction as

Eq. (67)

$\Phi(x,y) = \sum_{k=0}^{\infty} \frac{1}{\sigma_k}\,\big\langle s_x(\cdot,y),\, g_k \big\rangle_\omega\, f_k(x),$
where $\langle \cdot,\cdot \rangle_\omega$ indicates the inner product in the space $L^2_\omega([-1,1])$. Data in y-direction are handled analogously. After detailed studies of the operator, it was found that the singular values in this specific $L^2_\omega([-1,1])$-setting are all equal to 1 and that the singular functions $(f_k)_{k \ge 0}$ and $(g_k)_{k \ge 0}$ are weighted Chebyshev polynomials on the interval $[-1,1]$. In contrast to matrix-based SVD inversion, this approach is completely matrix-free since the SVD of the simplified pyramid sensor operator is calculated analytically.

A numerical analysis of the SVTR showed that the method slightly outperforms the FHTR, but its quality does not reach that of, e.g., the P-CuReD or CLIF. In addition, it was observed that the method is better suited to smaller subaperture sizes such as, for instance, those in XAO systems.
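
A minimal one-dimensional sketch of a singular-value-type expansion is given below; as a surrogate for the weighted Chebyshev singular functions of the SVTR we simply use first-kind Chebyshev polynomials with the weight $1/\sqrt{1-x^2}$ and set all singular values to 1, so the numbers produced are illustrative only.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def svtr_1d(s, x, n_modes=20):
    """Expand one line of data s(x), x in (-1, 1), in weighted Chebyshev
    polynomials and sum the expansion with unit singular values, cf. Eq. (67)."""
    w = 1.0 / np.sqrt(1.0 - x**2)                 # Chebyshev weight function
    dx = np.gradient(x)                           # quadrature weights
    phi = np.zeros_like(s, dtype=float)
    for k in range(n_modes):
        g_k = C.chebval(x, [0.0] * k + [1.0])     # T_k(x)
        norm = np.sum(g_k * g_k * w * dx)         # ||g_k||_w^2
        phi += (np.sum(s * g_k * w * dx) / norm) * g_k   # <s, g_k>_w, sigma_k = 1
    return phi

# example usage: x = np.linspace(-0.99, 0.99, 256); phi = svtr_1d(s, x)
```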

5.5.

Linear iterative methods

Several iterative reconstructors, both in the deterministic and Bayesian framework, have been applied to the problem of wavefront reconstruction from pyramid sensor data.

5.5.1.

Deterministic methods

The common idea of these reconstruction methods is the application of iterative algorithms, which are well known in the mathematical community related to inverse problems. In order to reconstruct the incoming wavefront from pyramid sensor data, the following algorithms have been used:

  • steepest descent (SD),78,90

  • steepest descent-Kaczmarz algorithm (SD-K),78,90

  • conjugate gradient for the normal equation (CGNE),63,78,90

  • linear Landweber iteration for pyramid sensors (LIPS),78,90

  • linear Kaczmarz–Landweber iteration for pyramid sensors (KLIPS).78,90

Although the methods are iterative, the real-time computational complexity is reduced due to the possibility of precomputing the most time-consuming parts, given knowledge of the system parameters. All proposed algorithms scale as $O(n_a^{3/2})$ and are applicable to pyramid sensors with and without modulation.

Several of the algorithms mentioned in Sec. 4 reconstruct two versions of the wavefront, one in x-direction and one in y-direction. These are averaged in order to obtain one final reconstructed wavefront. Kaczmarz versions of the algorithms allow one to couple the reconstructions in x- and y-directions: the two data sets are used cyclically. As a consequence, computation time is saved on the one hand, and higher reconstruction performance is expected on the other.

The basic equation [Eq. (62)] for the iterative approaches is identical to that used in the reconstructors based on the finite Hilbert transform inversion, i.e.,

Eq. (68)

$s_x(x,y) = (T\Phi)(x,y).$

CGNE is based on the normal equation

Eq. (69)

$T^*T\Phi = T^*s_x,$
which constitutes a self-adjoint and positive definite problem for the operator $T^*T$, with $T^*$ denoting the adjoint operator. It is known that the CG iterates converge to a solution of the considered inverse problem using the fewest number of iterations. In general, for wavefront reconstruction from pyramid sensor data, rather fast convergence was experienced when using iterative methods. Together with a warm restart technique, fewer than six iterations were necessary for all proposed algorithms in the setting tested in Ref. 90. Utilizing more iterations still improves the reconstruction quality, but since further iterations provide only slight quality improvements, the cost-and-quality balance indicates the optimal number of iterations. The warm restart in these methods means that the reconstruction of the last time step is chosen as the initial value for the current iteration, which yields accelerated convergence.
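
For a discretized operator, the CGNE step can be sketched as follows; the matrix representation of $T$ and the fixed iteration count are assumptions made for this illustration, whereas the warm restart via the initial guess follows the strategy described above.

```python
import numpy as np

def cgne(T, s_x, phi0=None, n_iter=5):
    """Conjugate gradient applied to the normal equations T^T T phi = T^T s_x,
    started from the previous time step's reconstruction (warm restart)."""
    phi = np.zeros(T.shape[1]) if phi0 is None else phi0.copy()
    r = T.T @ (s_x - T @ phi)                     # residual of the normal equations
    p = r.copy()
    rs_old = r @ r
    for _ in range(n_iter):
        Tp = T @ p
        alpha = rs_old / (Tp @ Tp)
        phi += alpha * p
        r -= alpha * (T.T @ Tp)
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return phi
```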

For the SD method applied to pyramid sensors, the least-squares functional

Eq. (70)

$J(\Phi) = \|T\Phi - s_x\|_{L^2}^2 \to \min,$
is minimized by starting at an initial guess $\Phi_0$ and searching for the minimum in the direction of the negative gradient of the functional $J(\Phi)$ [Eq. (70)]. Hence, the iterative process is written as

Eq. (71)

$\Phi_{i+1} = \Phi_i + \tau_i d_i,$

Eq. (72)

$d_i = -\nabla J(\Phi_i),$
for $i \in \mathbb{N}_0$ and a properly chosen step size $\tau_i$, e.g., the classical SD step size, the minimal gradient step size, or several variants of the step sizes introduced by Barzilai and Borwein.124–127
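
A schematic steepest descent iteration with one of the Barzilai–Borwein step-size variants is sketched below for a matrix discretization of $T$; the specific step-size rule (the so-called BB1 formula) and the exact line search used in the first step are choices made for this illustration only.

```python
import numpy as np

def steepest_descent_bb(T, s_x, n_iter=5, phi0=None):
    """Steepest descent for J(phi) = ||T phi - s_x||^2 with a BB1 step size;
    the first step uses the exact (classical SD) step size."""
    phi = np.zeros(T.shape[1]) if phi0 is None else phi0.copy()
    grad = 2.0 * (T.T @ (T @ phi - s_x))
    for i in range(n_iter):
        d = -grad                                   # descent direction
        if i == 0:
            Td = T @ d
            tau = (d @ d) / (2.0 * (Td @ Td))       # exact line search for J
        else:
            y = grad - grad_old
            tau = (s_step @ s_step) / (s_step @ y)  # BB1 step size
        phi_new = phi + tau * d
        grad_old, s_step = grad, phi_new - phi
        phi = phi_new
        grad = 2.0 * (T.T @ (T @ phi - s_x))
    return phi
```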

Choosing a fixed step size $\tau_i = \beta$ reduces the SD approach to the standard LIPS. The normal equation [Eq. (69)] is transformed into the equivalent fixed point equation

Eq. (73)

$\Phi = \Phi + T^*(s_x - T\Phi),$
which results in the iteration123,128

Eq. (74)

$\Phi_{i+1} = \Phi_i + \beta\,T^*(s_x - T\Phi_i), \qquad i \in \mathbb{N}_0.$

For a relaxation parameter $\beta$ chosen according to $0 < \beta < 2/\|T\|^2$, the iterates converge to a solution of Eq. (62).

If data in x- and in y-directions are not considered independently but cyclically in the iteration process, the Landweber algorithm [Eq. (74)] translates to

Eq. (75)

$\Phi_{i,1} = \Phi_{i,0} + \beta_1\, T_x^*(s_x - T_x\Phi_{i,0}),$

Eq. (76)

$\Phi_{i,2} = \Phi_{i,1} + \beta_2\, T_y^*(s_y - T_y\Phi_{i,1}),$

Eq. (77)

$\Phi_{i+1,0} = \Phi_{i,2},$
and coincides with the KLIPS. Here, $T_x$ and $T_y$ indicate the corresponding operators in x- and y-directions. The SD-K for pyramid sensors follows the same idea as the KLIPS: SD is applied in x-direction for even time steps and in y-direction for odd time steps, resulting in a reduced computational load.
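
The cyclic structure of Eqs. (75)–(77) can be written compactly as below; the matrix operators $T_x$, $T_y$ and the fixed number of sweeps are assumptions of this sketch.

```python
import numpy as np

def klips(Tx, Ty, s_x, s_y, beta1, beta2, n_sweeps=3, phi0=None):
    """Kaczmarz-Landweber sweeps: data in x- and y-direction are used
    cyclically within each iteration, cf. Eqs. (75)-(77)."""
    phi = np.zeros(Tx.shape[1]) if phi0 is None else phi0.copy()
    for _ in range(n_sweeps):
        phi = phi + beta1 * (Tx.T @ (s_x - Tx @ phi))   # x-direction step
        phi = phi + beta2 * (Ty.T @ (s_y - Ty @ phi))   # y-direction step
    return phi
```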

Note that for noisy data, the problem is regularized by the number of iterations applied in the methods presented above.

Among the investigated iterative methods, the CGNE stands out for pyramid sensors due to the low number of iterations it requires, and the KLIPS because of its slightly higher quality performance. All algorithms provide similar reconstruction quality that is slightly below the quality achieved with, e.g., the P-CuReD. Especially for the nonmodulated sensor, the iterative methods yield highly accurate wavefront estimates.

5.5.2.

Finite element-wavelet hybrid algorithm for pyramid sensor

Recently, an application of the finite element-wavelet hybrid algorithm (FEWHA)129–131 to pyramid sensor data has been reported.116 FEWHA is a wavelet-based iterative method for wavefront reconstruction and atmospheric tomography. The method relies on pseudo-open-loop control, which allows one to use atmospheric statistics as regularization. The algorithm calculates the Bayesian maximum a posteriori (MAP) estimate

Eq. (78)

$\Phi_{\mathrm{MAP}} = \underset{\Phi}{\arg\min}\;\left\|C_\Phi^{-1/2}\Phi\right\|^2 + \left\|C_\eta^{-1/2}(s - P\Phi)\right\|^2,$
using a preconditioned conjugate gradient method that is coupled with a multiscale strategy. Here, $C_\Phi$ and $C_\eta$ denote the prior wavefront and noise covariance matrices, and $P$ denotes the forward wavefront sensor operator in the case of SCAO. Note that FEWHA is also applicable to MCAO systems, in which case $P$ denotes the tomography operator. For the discretization of the turbulent layers, the method utilizes a finite element and a wavelet basis simultaneously.
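
Only the algebraic core of Eq. (78) is sketched below: the MAP estimate is the solution of the regularized normal equations, here solved with plain (unpreconditioned) conjugate gradients on dense matrices. The wavelet/finite-element discretization, the preconditioner, and the multiscale strategy of FEWHA are not reproduced.

```python
import numpy as np

def map_estimate(P, s, C_phi_inv, C_eta_inv, n_iter=30):
    """Solve (P^T C_eta^{-1} P + C_phi^{-1}) phi = P^T C_eta^{-1} s with CG."""
    A = P.T @ C_eta_inv @ P + C_phi_inv
    b = P.T @ C_eta_inv @ s
    phi = np.zeros(A.shape[0])
    r = b - A @ phi
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        phi += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return phi
```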

Originally, FEWHA was designed for gradient data provided by the Shack–Hartmann sensor. Therefore, an adaptation is needed when the method is applied to pyramid sensor data. To distinguish it from the SH-WFS case, we named the method P-FEWHA when it is applied to PWFS data. The first approach was to apply the data preprocessing from the P-CuReD algorithm, i.e., the pyramid sensor data are transformed into Shack–Hartmann-like data.63,94,108 In this version, the overall complexity of the P-FEWHA scales linearly, i.e., as $O(n)$ in the number of unknowns.

Another version of the P-FEWHA has also recently been formulated for the linearized one-term roof forward model [Eq. (20)]. The complexity of this approach scales as $O(n^{3/2})$.

5.6.

Nonlinear Iterative Methods

In principle, the relation between the incoming wavefront and the pyramid sensor signal is nonlinear. In this section, we consider nonlinear reconstruction approaches.

5.6.1.

Landweber method

Due to the nonlinear relation between the wavefront and the sensor data (see Proposition 1 in Sec. 4), the application of the nonlinear Landweber process or the nonlinear Landweber–Kaczmarz process is suggested, which results in the nonlinear versions of the LIPS and KLIPS.98

As a simplification of the pyramid sensor model, the algorithms concentrate on the nonlinear roof sensor operator equation

Eq. (79)

$s_x = R(\Phi),$
where R represents the nonlinear roof sensor. Similarly to the linear LIPS [Eq. (74)], but now in a nonlinear setting, the iteration procedure is given as

Eq. (80)

$\Phi_{i+1} = \Phi_i + R'(\Phi_i)^*\left[s_x - R(\Phi_i)\right], \qquad i \in \mathbb{N}_0.$

The term $R'(\Phi_i)^*$ represents the adjoint of the Fréchet derivative of $R$ at $\Phi_i$. The concept of the nonlinear KLIPS is analogous to that of the linear KLIPS, i.e., the nonlinear Landweber process is applied cyclically to data in x- and y-directions.
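
A generic version of the iteration in Eq. (80) is sketched below for an arbitrary forward operator given as a callable; the finite-difference Jacobian is a stand-in for the analytically known Fréchet derivative of the roof sensor and is only practical for small toy problems.

```python
import numpy as np

def nonlinear_landweber(R, s_x, phi0, n_iter=5, eps=1e-6):
    """Nonlinear Landweber iteration phi <- phi + R'(phi)^* (s_x - R(phi))
    with a finite-difference approximation of the Jacobian."""
    phi = phi0.copy()
    for _ in range(n_iter):
        res = s_x - R(phi)
        J = np.empty((res.size, phi.size))
        for j in range(phi.size):                  # column-wise FD Jacobian
            e = np.zeros(phi.size)
            e[j] = eps
            J[:, j] = (R(phi + e) - R(phi)) / eps
        phi = phi + J.T @ res                      # adjoint applied to residual
    return phi
```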

As for their linear versions, rather fast convergence is observed in closed-loop AO with PWFSs using the nonlinear iterative methods. The idea of the nonlinear algorithms is applicable to wavefront sensors both with and without modulation. The computational load scales as $O(n_a^{3/2})$. Concerning the quality performance of the algorithms, differences between the nonmodulated and the modulated sensor were observed. For the nonmodulated sensor, which is known to suffer from stronger nonlinearity effects, the algorithms provide accurate wavefront estimates, outperforming their linear versions and almost all other wavefront reconstruction methods for nonmodulated pyramid sensors, or at least reaching comparable quality. The situation is different for the modulated sensor. Closed-loop simulations with a pyramid sensor of modulation $4\lambda/D$ show that the linear LIPS and KLIPS outmatch their nonlinear alternatives. Furthermore, there exist several reconstruction approaches, e.g., the P-CuReD or interaction-matrix-based methods, that give even more precise wavefront estimates. Therefore, the usage of the nonlinear algorithms is suggested for the nonmodulated sensor, and of linear methods for modulated pyramid sensor data in closed-loop AO, at least as long as no perturbing effects such as, for instance, NCPA are present.

Note that the above conclusions were drawn based on the simulations with sensing performed in the K-band, where residual wavefronts are small enough so that the pyramid sensor is close to the linear regime. The results comparing linear and nonlinear versions of the algorithms may be different for sensing at shorter wavelengths. It is well known that in this case the uncompensated residuals are much larger and make the sensor work far away from its linear regime.

5.6.2.

Phase retrieval algorithm

Phase retrieval algorithms in their general form are iterative FD methods for finding, from a given complex signal, the unknown phase that satisfies a set of constraints for a measured amplitude. In Ref. 4, phase retrieval is performed in the context of AO and aims at reconstructing the incoming wavefront $\Phi$ from intensity measurements provided by a flat pyramid-like sensor type. The authors adapted two well-known algorithms, namely the Gerchberg–Saxton132 and the error-reduction method,133 to be used in conjunction with a lenslet array placed at the focal plane, which constitutes such a sensor. In the paper, the Gerchberg–Saxton algorithm outperforms the error-reduction approach. In contrast to SH sensors, where the lenslet array sits in the pupil plane, the twin-image ambiguity problem can be avoided here: the phase retrieval is performed in three Fourier planes, and any confusion between an object and its complex conjugate can be removed because of the subdivision at the focal plane. The authors proposed two different choices for the starting value of the algorithm, either a zero phase or the reconstruction obtained from a linear interaction-matrix-based approach. The second choice obviously brings higher reconstruction performance. This means, however, that an additional MVM step is executed, which adds to the computational load of the 200 expensive phase retrieval iterations.
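
For orientation, a textbook Gerchberg–Saxton iteration between a pupil plane and a single focal plane with known amplitudes is sketched below; the actual sensor of Ref. 4 uses a lenslet array and three Fourier planes, which this generic two-plane sketch does not reproduce.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200):
    """Alternate between pupil and focal plane, enforcing the known amplitude
    in each plane while keeping the current phase estimate."""
    phase = np.zeros_like(pupil_amp)
    for _ in range(n_iter):
        field_pupil = pupil_amp * np.exp(1j * phase)
        field_focal = np.fft.fftshift(np.fft.fft2(field_pupil))
        field_focal = focal_amp * np.exp(1j * np.angle(field_focal))  # focal constraint
        field_pupil = np.fft.ifft2(np.fft.ifftshift(field_focal))
        phase = np.angle(field_pupil)              # keep phase, reset pupil amplitude
    return phase
```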

As reported in Ref. 4, in simulations on a circular pupil the phase retrieval approaches yield better reconstruction quality than an interaction-matrix-based MAP reconstructor, at the cost of a computational complexity far exceeding even that of the MVM. The latter constitutes the major drawback of these algorithms, making them infeasible for large AO systems on ELTs.

5.6.3.

Jacobian reconstruction

A nonlinear wavefront reconstruction algorithm named Jacobian reconstruction (JR) method based on the full transmission mask model of the nonmodulated pyramid sensor has been presented in Ref. 60. The idea is related to an iterative approach utilizing the analytical model of the sensor and Newton’s method for reconstruction.

If only one Newton iteration is applied, the procedure is linear, with a computational complexity comparable to that of conventional MVM algorithms, i.e., $O(n_a^2)$. In the nonlinear approach, one has to apply more Newton iterations, which dramatically increases the amount of computations since the Jacobian matrices need to be recomputed at each step. The computational requirements of the Jacobian matrix calculations grow with the fourth power of the Jacobian resolution, where the Jacobian resolution is at least as large as the size of the wavefront sensor measurement grid in one direction. Depending on the incorporated solver methods, this results in a 50 to 1000 times slower reconstruction compared to, e.g., the linear approach.

The pyramid sensor model used for deriving this wavefront reconstruction method, as well as the numerical simulations, does not take interference effects among the four images on the detector into account. Simulation results are obtained for an 8-m telescope having a nonmodulated pyramid sensor with 40×40 subapertures. It is reported in Ref. 60 that, in a closed-loop simulation, the conventional MVM using Karhunen–Loève modes gives comparable results or is slightly outperformed by the JR method with one iteration, i.e., its linear version, and that the gain in performance when using additional iterations was negligible. While correctly calibrated linear interaction-matrix-based algorithms are powerful strategies if the sensor is fully or almost linear, the JR method was found to be most useful in the nonlinear regime of the pyramid sensor. In high turbulence, the AO performance of a conventional calibrated MVM method is improved using a synthetic Jacobian-based reconstruction matrix. According to Ref. 60, the JR method mainly reduces the residual energy at low spatial frequencies, which is of particular importance for exoplanet detection. In addition, it was found that the roof sensor is more linear than the pyramid, i.e., most of the nonlinearity is contained in the cross terms of the pyramid sensor model.

Note that this approach describes a nonlinear strategy for wavefront reconstruction. Nevertheless, as mentioned in Ref. 60, most of the Strehl ratio improvement was achieved by applying only one Newton iteration, which again results in a linear reconstructor. Enhancements when using more iterations are negligible.
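
The Newton-type update underlying the JR idea can be sketched as a Gauss–Newton step with a pseudoinverse of the Jacobian; the callables P and jacobian below stand for a hypothetical discretized pyramid forward model and its Jacobian and are not the analytical model of Ref. 60.

```python
import numpy as np

def jacobian_reconstruction(P, jacobian, s, phi0, n_newton=1):
    """Iterate phi <- phi + J(phi)^+ (s - P(phi)); with n_newton = 1 the
    scheme reduces to a linear reconstructor."""
    phi = phi0.copy()
    for _ in range(n_newton):
        J = jacobian(phi)                          # Jacobian recomputed each step
        phi = phi + np.linalg.pinv(J) @ (s - P(phi))
    return phi
```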

5.6.4.

Quasi-Newton method

A nonlinear iterative reconstructor for pyramid sensors that utilizes the pyramidal phase mask model including interference effects is presented in Refs. 77 and 97. The wavefront is estimated by solving an unconstrained nonlinear minimization problem using Newton’s method as in the previously summarized JR method.

Contrary to the common definition using the intensity difference scheme, the pyramid operator is defined in Ref. 77 as the electromagnetic field in the detector plane. Although wavefront reconstruction is performed from PWFS data, which are related in a nonlinear way to the incoming phase, the idea exploits the fact that the pyramid operator is nonlinear with respect to the incoming phase $\Phi$ but linear with respect to the electric field

Eq. (81)

$\Psi = \Omega \cdot e^{i\Phi},$
where Ω describes the real-valued amplitude.

Newton's method in its general form requires the Jacobian and the Hessian of the cost function. The Hessian is inverted iteratively by solving a system of equations using CG. These computationally expensive steps can be avoided by a variety of quasi-Newton methods that only need the gradient of the cost function. The quasi-Newton algorithm used in this approach is the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method. As an initial guess, the solution of the linear least-squares approach is used. This means that the quality improvement relies on two successive wavefront reconstruction processes at the price of computational complexity, as in the iterative phase retrieval method. However, the algorithm applied to pyramid sensors is efficient in the sense that the most computationally demanding calculations can be performed offline. The approach in Ref. 77 examines the pyramid sensor without modulation, but it is mentioned that for a modulated sensor the computational expense of calculating the intensity and its derivatives will increase.
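
In practice, such a quasi-Newton reconstruction can be prototyped with an off-the-shelf BFGS solver, as sketched below; residual_fn and grad_fn are placeholders for a pyramid forward model and the gradient of the cost, and the solver settings are illustrative rather than those of Ref. 77.

```python
import numpy as np
from scipy.optimize import minimize

def quasi_newton_reconstruction(residual_fn, grad_fn, phi_init):
    """Minimize 0.5 * ||residual(phi)||^2 with BFGS, starting from a linear
    least-squares estimate phi_init."""
    cost = lambda phi: 0.5 * np.sum(residual_fn(phi) ** 2)
    result = minimize(cost, phi_init, jac=grad_fn, method="BFGS",
                      options={"maxiter": 50})
    return result.x
```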

Simulations were carried out for a nonmodulated pyramid sensor using a setting with parameters similar to SCExAO/Subaru on a circular aperture. It was assumed that a first-stage AO system has already removed many of the low-order aberrations, i.e., the simulated wavefronts already correspond to a given Strehl ratio such as 0.3.

The author compared linear least-squares with the initial guess chosen as a flat wavefront and nonlinear least-squares with the solution of the linear problem as the starting point for the iteration. No clear conclusion can be drawn as to whether the linear or the nonlinear approach provides better reconstruction quality. Both methods have shown advantages in different simulations, depending on the photon flux, the signal-to-noise ratio, and, if the nonlinear method is used, the Strehl ratio already obtained with the first-stage AO system.

Throughout the paper, we will name the reconstruction algorithm presented in Ref. 77 the quasi-Newton method for pyramid sensors.

5.7.

Learning Approach for Nonlinear Wavefront Reconstruction

An alternative way toward phase reconstruction from wavefront sensor data is provided by the machine learning framework. In contrast to model-based reconstruction approaches, in this environment one omits the need for explicit knowledge of the exact optical model of the sensor. Instead, one relies on learning algorithms that build a connection between the sensor signal and the phase to be reconstructed using a set of "training data" containing wavefront shapes and the corresponding pyramid sensor data. The trained algorithms are then capable of making predictions when exposed to new data. Such algorithms are able to capture and invert the underlying nonlinear forward models. Recently, a first attempt at applying a neural network for nonlinear wavefront reconstruction from pyramid sensor data has been reported.134
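
As a toy illustration of the supervised-learning idea (not the network architecture of Ref. 134), a generic multilayer perceptron can be trained to map sensor vectors to wavefront vectors; the random placeholder data below merely demonstrate the interface.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
S_train = rng.standard_normal((500, 96))      # placeholder pyramid measurements
Phi_train = rng.standard_normal((500, 64))    # matching placeholder wavefronts

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=50)
model.fit(S_train, Phi_train)                 # learn the sensor -> wavefront map
Phi_hat = model.predict(S_train[:1])          # reconstruct from new sensor data
```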

6.

Complexity and Performance Comparisons

In this section, we provide a comparison of the above-mentioned algorithms in terms of their numerical complexity and quality performance of closed-loop correction.

6.1.

Complexity Comparison

In order to give a clear overview of the mentioned algorithms for wavefront reconstruction in astronomical AO using PWFSs, we present Tables 1 and 2, where selected properties of all methods are listed. More precisely, we consider the distinguishing criteria already mentioned in Sec. 1. The characteristics we recall are the pyramidal mask models on which the reconstruction methods are based, i.e., phase or transmission mask, whether the algorithms are linear or nonlinear, and whether the approaches are based on the full pyramid sensor model, the roof sensor, or the one-term assumption. In addition, we recall the applicability of the reconstruction processes to nonmodulated and modulated sensor data and the computational complexity of all approaches.

Table 1

Overview on the computational complexities of existing wavefront reconstruction methods for the PWFS. The methods are arranged from least to most demanding in terms of computational load.

Algorithm | Complexity
P-CuReD94 | $O(n_a)$
FTR88 | $O(n_a \log n_a)$
PFTR92 | $O(n_a \log n_a)$
HTR79,95 | $O(n_a \log n_a)$
TCR96 | $O(n_a \log n_a)$
P-FEWHA116 | $O(n_a)$ to $O(n_a^{3/2})$
CLIF92 | $O(n_a^{3/2})$
FHTR63 | $O(n_a^{3/2})$
SVTR89 | $O(n_a^{3/2})$
CGNE90 | $O(n_a^{3/2})$
SD90 | $O(n_a^{3/2})$
SD-K90 | $O(n_a^{3/2})$
Linear Landweber iteration for PWFS (LIPS)90 | $O(n_a^{3/2})$
Linear Kaczmarz–Landweber iteration for PWFS (KLIPS)90 | $O(n_a^{3/2})$
Nonlinear Landweber iteration (LIPS)98 | $O(n_a^{3/2})$
Nonlinear Kaczmarz–Landweber iteration for PWFS (KLIPS)98 | $O(n_a^{3/2})$
IM inversion (MVM)70,81–87 | $O(n_a^2)$
Phase retrieval iterative algorithm4 | $O(n_a^2)$
JR method60 | $O(n_a^2)$
Quasi-Newton method for PWFS77 | $O(n_a^2)$
Learning approach134 | $O(n^2)$

Table 2

Overview on existing wavefront reconstruction methods for the PWFS with respect to underlying pyramid sensor models. The check marks in brackets mean that an according extension is possible and has already been considered in theory.

AlgorithmPyramidal maskLinearitySensorModulation
PhaseTransmissionNonlinearLinearPyramidRoofOne-termYesNo
MVM
P-CuReD
FTR
CLIF
PFTR
HTR
TCR
FHTR
SVTR
CGNE(✓)(✓)
SD(✓)(✓)
SD-K(✓)(✓)
Linear LIPS(✓)(✓)
Linear KLIPS(✓)(✓)
P-FEWHA(✓)(✓)
Phase retrieval
JR method
Quasi-Newton method(✓)
Nonlinear LIPS(✓)
Nonlinear KLIPS(✓)
Learning approach

6.2.

Quality Comparison

To analyze the performance quality of the algorithms for the pyramid sensor, we simulate ESO's ELT, currently under construction in Chile. Simulations are carried out for the METIS135 and the EPICS136 instrument in a closed-loop setting. The reconstruction quality is quantified in terms of the long-exposure (LE) Strehl ratio. The observing wavelength for the results presented in the following corresponds to $\lambda_{\mathrm{science}} = 2.2\,\mu$m (K-band). Table 3 provides an overview of the simulation parameters.

Table 3

Overview of simulation parameters for the currently scheduled METIS and an EPICS-like instrument.

Simulation parameters | METIS-like simulation | EPICS-like simulation
Telescope diameter | 37 m | 42 m
Central obstruction | 30% | 28%
Science target | On-axis (SCAO) | On-axis (XAO)
WFS | PWFS | PWFS
Sensing band λ | K (2.2 μm) | R (0.7 μm)
Evaluation band λscience | K (2.2 μm) | K (2.2 μm)
Modulation | [0, 4] | [0, 4]
Controller | Integrator | Integrator
Atmospheric model | von Karman | von Karman
Number of simulated layers | 35 | 9
Outer scale L0 | 25 m | 25 m
Atmosphere | Median | Median
Fried radius r0 at λ = 500 nm | 0.157 m | 0.129 m
Number of subapertures | 74×74 | 200×200
Number of active subapertures | 3912 out of 5476 | 28,796 out of 40,000
Linear size of simulation grid | 740 pixels | 2000 pixels
DM geometry | ELT M4 model | Fried
Telescope spiders | Yes/no | No
DM delay | 1 | 1
Frame rate | [1000, 500] Hz | 3300 Hz
Photon flux | [600, 10,000] ph/subap/frame | 50 ph/subap/frame
Detector read-out noise | 1 electron/pixel | 2.8 electron/pixel
Background flux | 0.000321 photons/pixel/frame | 0 photons/pixel/frame
Simulation time | 0.5 to 2 s ([500, 1000] iterations) | 0.15 s (500 iterations)

For the SCAO simulation, we consider a METIS-like case of ESO's ELT with a primary mirror diameter of 39 m, of which only the inner 37 m are used for the instrument. The edges of the real 39-m primary mirror are cropped such that a circular pupil remains, with roughly 30% of the primary mirror being obstructed by the secondary mirror. Six telescope spiders, each 50 cm thick, are taken into account in two of the simulations. The end-to-end simulation software generates a von Karman realization of median atmospheric conditions having 35 frozen layers at heights between 30 m and 26.5 km. The Fried parameter is $r_0 = 15.7$ cm at $\lambda = 500$ nm, and the outer scale is $L_0 = 25$ m. The simulated screens are resolved with 0.05 m per pixel, which results in 740×740 pixels on the aperture for a 37-m telescope. Sensing is performed in the K-band at a wavelength of $\lambda = 2.2\,\mu$m. The data in OCTOPUS are simulated using the built-in model of a PWFS without modulation and with modulation $4\lambda/D$ on 74×74 subapertures, i.e., the subaperture size is 0.5 m. The pyramid sensor measurements are read out 500 or 1000 times per second. The DM geometry corresponds to the hexagonal M4 geometry planned for ESO's ELT. In OCTOPUS, a total number of $n_a = 5190$ mirror actuators is controlled.

For the XAO case, we simulate a variant of the EPICS instrument on the originally planned 42-m ESO's ELT. The simulation parameters of the closed-loop setting are summarized in Table 3. We have a central obstruction of 28% and do not take telescope spiders into account. The phase screens are generated according to von Karman statistics for nine atmospheric layers at heights between 47 m and 18 km. The seeing conditions are median, the Fried parameter is $r_0 = 12.9$ cm at $\lambda = 500$ nm, and the outer scale corresponds to $L_0 = 25$ m. The resolution of the incoming screens is 2000×2000 pixels on the pupil. Sensing is performed in the visible at $\lambda = 0.7\,\mu$m. The data are provided by a nonmodulated and a modulated PWFS on 200×200 subapertures, each having a size of 0.21 m. The pyramid sensor measurements are read out 3330 times per second. For XAO, we have to control a total number of $n_a = 29,618$ mirror actuators of the DM, positioned according to the Fried geometry.

The numerical results in Table 4 indicate that the optimal choice of the wavefront reconstructor heavily depends on physical parameters related to the telescope facility and the sensor device, such as the subaperture size or the modulation amplitude of the pyramid sensor, as well as on atmospheric parameters. The most mature reconstruction approaches for telescope systems with nonsegmented pupils are certainly interaction-matrix-based methods: they have been, and still are, running in the AO systems of ground-based observing facilities with mirror sizes up to about 10 m, and they are therefore the algorithms with which users have the most practical experience. A benefit of MVM approaches, and also of learning approaches, is that the calibration/training can be performed in realistic environments. Unfortunately, the MVM methods have a major drawback: their high computational complexity. While the computational load is expected to be manageable at the time of the future ELT launches for comparatively small AO systems such as SCAO, achieving the speed required for large-scale AO systems is doubtful. As an alternative, we suggest fast, model-based wavefront reconstruction algorithms. Among these, the P-CuReD stands out for its quality results, its speed, and its ease of use in all performed test cases. For the nonmodulated sensor, the nonlinear LIPS and KLIPS give promising quality results. In particular in the XAO simulation and for the modulated sensor, the performance of the linear CGNE approach must be emphasized.

Table 4

Reconstruction quality of selected algorithms for the METIS (SCAO) and EPICS (XAO) instruments of ESO's ELT using pyramid sensors with or without modulation. For the METIS instrument, we used the M4 geometry currently implemented in OCTOPUS; the EPICS system is considered on the Fried geometry. In addition, we took telescope spiders into account for two simulation settings. The P-CuReD results for segmented pupils were obtained with the split approach.49 Fields are left empty if no simulations were performed, and "NA" means that the method is not applicable to this setting.

Algorithm | Quality in end-to-end simulations (OCTOPUS) (LE Strehl ratios in the K-band)
Mode | SCAO | SCAO | SCAO | SCAO | XAO | XAO
Modulation (λ/D) | Mod 0 | Mod 4 | Mod 0 | Mod 4 | Mod 0 | Mod 4
Photon flux (ph/pix/it) | 10,000 | 10,000 | 10,000 | 600 | 50 | 50
Frame rate (kHz) | 1 | 0.5 | 1 | 0.5 | 3 | 3
Mirror geometry | M4 | M4 | M4 | M4 | Fried | Fried
Telescope spiders | No | No | No | No
IM inversion: modal | 0.62 (Ref. 137) | 0.888 | 0.859 (Ref. 49) | 0.96
IM inversion: zonal | 0.89 | 0.890 | 0.894 (Ref. 49)
P-CuReD | 0.871 | 0.887 | 0.865 (Ref. 64) | 0.878 (Ref. 49) | 0.916 | 0.961
CLIF | 0.88 | 0.94
PFTR | 0.88 | 0.94
FHTR | 0.779 | NA | NA | 0.853 | NA
SVTR | 0.740 (Ref. 64) | NA | NA | 0.884 (Ref. 64) | NA
CGNE | 0.842 (Ref. 90) | 0.860 (Ref. 90) | 0.901 (Ref. 64)
SD | 0.841 (Ref. 90) | 0.858 (Ref. 90)
SD-K | 0.841 (Ref. 90) | 0.858 (Ref. 90)
LIPS (linear) | 0.840 (Ref. 90) | 0.860 (Ref. 90)
KLIPS (linear) | 0.842 (Ref. 90) | 0.858 (Ref. 90) | 0.897 (Ref. 64)
LIPS (nonlinear) | 0.853 (Ref. 98) | 0.834 (Ref. 98)
KLIPS (nonlinear) | 0.853 (Ref. 98) | 0.826 (Ref. 98) | 0.903 (Ref. 64)

The mentioned approaches were, in particular, tested on nonsegmented pupils for a PWFS acting in its linear regime, e.g., in closed-loop AO. One reason for starting to investigate nonlinear approaches for wavefront reconstruction from pyramid sensor data was the presence of large NCPA on ELTs, which accentuate the nonlinearity of the pyramid sensor. For large NCPAs, the linearity of the pyramid sensor may be violated, and the use of a nonlinear reconstructor may become beneficial. However, we would like to mention that the results of the nonlinear LIPS and KLIPS are rather preliminary. Detailed studies in the future shall bring a better understanding of the nonlinearity effects of the sensor and, based on that, improvements of the methods themselves.

Moreover, METIS simulations demonstrated that variants of zonal interaction-matrix-based MVM approaches and the P-CuReD coupled with a DSPR provide (almost) differential piston-free wavefront estimates for fragmented telescope pupils, a phenomenon that has an especially big impact on ELT-sized telescopes.

7.

Methods for Real-Life Features

The instruments for the ELTs are currently under design, and a multitude of analytical and simulation studies is being undertaken for the analysis and comparison of various reconstructors with respect to the expected performance under real-life conditions. Some of the features are specific to ELTs, e.g., the large support structures of the secondary mirrors (also known as spiders) causing the island effect (or pupil fragmentation into disconnected domains). Others are common to all instruments equipped with AO systems, e.g., NCPA. Yet others, while already known at currently operating telescope instrumentation systems, are expected to be especially pronounced on ELTs due to their large sizes, such as the LWE. Another very important aspect for reaching the ultimate goal of diffraction-limited imaging when employing a pyramid sensor is to take into account its optical gain, which is related to the sensor nonlinearity.

The brief enumeration above should in no way be perceived as an exhaustive list of issues that need to be analyzed, but rather as the current status of the authors' understanding of them. There are, evidently, many more engineering and technical aspects to be considered when it comes to running an AO loop on a real telescope. We deliberately limit our considerations to the mentioned real-life issues that, in our opinion, are closely related to the reconstruction algorithms. Although in the following we consider the listed features in dedicated subsections, we keep in mind that the interplay between them should not be ignored, since they will all be simultaneously present on the running telescopes and are interrelated with each other.

7.1.

Island Effect

Besides the central obstruction, on some ELTs the pupil will be additionally obstructed by rather large support structures (also known as spiders) of the secondary mirrors. This fragmentation of the pupil into disjoint domains induces a discontinuity in the sensor data. Because of such data fragmentation, most of the available wavefront reconstruction algorithms per se are not able to control fragmented piston modes of the wavefront.49,69,104,105 This manifests itself via uncontrolled pistons on disjoint pupil "islands" seen in the residual screens and a dramatically reduced Strehl ratio.

Hardware-based solutions have been suggested to overcome the island effect. These assume the usage of additional components in the mechanical/optical setup of the system. For instance, one approach is to introduce an additional dedicated focal plane sensor that measures the fragmented low-order modes during the AO loop, as initially suggested for compensating the LWE in Refs. 138–141.

In parallel, software-based solutions have been developed, aiming at coping with pupil fragmentation by means of an appropriate adaptation of wavefront reconstruction algorithms only. They do not require any changes in the optical setup. It was found that the usage of sensor data from subapertures shaded by the spiders is crucial for the reconstruction of pistons on pupil fragments since the information about phase corruption by edges is in fact contained in those shaded subapertures. A successful control of the island effect has been demonstrated using a zonal matrix vector multiplication,49,116,142 with an adapted model-based reconstructor in the split approach,49,116 and with edge actuator coupling for a modal matrix vector multiplication.104,105,143

It has been acknowledged that the island effect becomes more severe at shorter wavelengths and larger seeing values. Accurate reconstruction of fragmented pistons is much more difficult in the visible compared to the NIR, despite the fact that in the calibration stage the pyramid WFS is sensitive to those modes. This is explained by the pyramid sensor nonlinearity. In contrast to the calibration phase, during the actual loop the sensor receives as input not the pure piston modes but a combination of them with the high-frequency uncompensated residual component. Since at shorter sensing wavelengths, the impact of the residuals is stronger than in the NIR, the piston footprint gets weaker in the sensor response. Thus, it is the sensor nonlinearity that imposes limitations on the ability of the sensor to provide quantitative information about the fragmented piston modes and makes the reconstruction in the presence of spiders much more difficult in the visible compared to the NIR.

While all the solutions mentioned above are working quite well (without or with a negligible loss in quality compared to the spider-free case) for a wide range of atmospheric conditions when sensing in the NIR, none of them is able so far to provide the same high-quality level correction for bad seeing conditions when sensing in the visible. Therefore, this topic remains of high interest and research is continued in this direction.

7.2.

Low Wind Effect

Another phenomenon related to the pupil spiders and causing similar effects is the so-called LWE. In contrast to the island effect, which is a reconstructor-related error induced by corrupted data, the LWE is a real, dynamically evolving low-order distortion of the wavefront caused by the heating of the air in the vicinity of the spiders in very low wind conditions. The effect was observed for the first time at the Very Large Telescope144 and is expected to be significantly more pronounced on the ELTs due to the largely increased mirror position heights and the related temperature gradient across the pupil.105

A typical approach toward the elimination of the LWE consists of its measurement and a subsequent correction with a corresponding offset applied to the DM. It has been demonstrated that focal plane techniques such as focal plane phase diversity138–141,145,146 or focal plane sharpening146 can continuously provide information about the LWE in the loop.

If such focal plane techniques, which involve additional optomechanical components, are not available or not desired in the system design, it is important to analyze and compare the stability and performance of the available reconstruction algorithms in the presence of an uncompensated LWE. Moreover, an important question is whether it is possible, similarly to pupil fragmentation, to compensate for the LWE by means of appropriately extended wavefront reconstruction methods. Is it possible to develop a wavefront reconstruction algorithm that provides an accurate correction for the LWE despite the fact that only corrupted sensor data are available? Moreover, the amplitudes of LWE-related phase distortions may be rather large, which means that the wavefront sensor may not be in its linear regime and may not provide adequate measurements for the low-order modes of interest. Can one take this into account in the reconstruction step, making it an attractive solution free from additional hardware? Finding reliable solutions that cope in a stable way with high-amplitude low-order distortions, which can suddenly enter the telescope pupil during observations, is currently one of the most important challenges in AO control.

7.3.

Sensor’s Nonlinearity

As already mentioned in Sec. 3, one approach to overcoming the sensor’s nonlinearity is the development and application of nonlinear wavefront reconstruction methods as those described in Secs. 5.6 and 5.7.

Currently, the nonlinearity of the pyramid sensor is tightly associated in the community with the so-called modal optical gain of the sensor. This is an effect that is observed with modal calibration-based reconstructors in the coupled paradigm and consists of a reduction of the sensor response to certain wavefront modes in the presence of large uncompensated residuals. In our opinion, this effect seems (at least to a large extent) to be caused by the usage of global modes for the wavefront representation, since other reconstructors (analytical or zonal decoupled MVM) seem not to suffer from it and outperform the modal MVM in the same conditions.147 A detailed analysis and explanation of this phenomenon are the topic of a manuscript currently in preparation.

For the moment, we review in this section the existing methods for recovering the sensor's modal optical gain, followed by a corresponding tuning of the modal calibration-based MVM reconstructors in the coupled paradigm. An early approach goes back to 2008. The procedure in Ref. 8 suggests an iterative estimation and update of sensitivity compensation coefficients in combination with the standard interaction-matrix-based modal reconstruction algorithm. The approach relies on an online (during the closed loop) estimation of the power spectral density of the residuals using the DM commands and sensor data. The estimated statistical information is then used as input for off-line forward simulations providing a new calibration of the pyramid sensor in the presence of uncompensated residuals. The sensitivity compensation coefficients are then computed by comparing (mode by mode) the simulated IM with the standard calibration performed around the zero residual.

Later, it was noticed that low frequencies are affected the most by the sensor's optical gain. Therefore, a correction is especially needed for low-order wavefront modes. One approach consists of continuous in-the-loop tracking of the optical gain, which is performed by introducing an additional low-frequency periodic input with a known shape, called a dithering signal, to the WFS.27,100 By comparing the known amplitude of the dithering signal with the measured one, one draws a conclusion about the current optical gain of the sensor, obtaining a scalar coefficient as its measure. Subsequently, the reconstructed low-order modes are scaled with the obtained parameter to provide an improved correction.
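
The scalar-gain estimation from a dither signal can be illustrated with a simple lock-in style demodulation of one reconstructed mode; the sinusoidal dither, the demodulation scheme, and the function signature are assumptions of this sketch and differ in detail from the schemes of Refs. 27 and 100.

```python
import numpy as np

def optical_gain_from_dither(mode_timeseries, dither_amp, freq, dt):
    """Estimate a scalar optical gain as the ratio of the measured to the
    injected amplitude of a sinusoidal dither at frequency freq."""
    t = np.arange(mode_timeseries.size) * dt
    i_comp = 2.0 * np.mean(mode_timeseries * np.sin(2 * np.pi * freq * t))
    q_comp = 2.0 * np.mean(mode_timeseries * np.cos(2 * np.pi * freq * t))
    return np.hypot(i_comp, q_comp) / dither_amp
```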

A similar but slightly different approach for recovering the mode-dependent nonlinearity factors was described in Ref. 99. The authors suggest injecting a dither signal in an additional local loop with a dedicated low-order DM. This technique requires a corresponding filtering of sensor data to eliminate the introduced low-frequency aberration.

Similarly to Ref. 8, the approach suggested in Ref. 102 is reconstructor-dependent and is based on a comparison of the amplitudes of the reconstructor output with the amplitudes of the known wavefront input. This method relies on running numerous off-line simulations for various atmospheric conditions, sensor characteristics (modulation), and reconstructors (modal, zonal, Fourier-domain-based), resulting in a collection of ready-to-use tables. Here, the optical gain is again treated not as a scalar value but as a frequency- or mode-dependent function. Depending on the information from atmospheric monitors, the user chooses the optimal frequency-dependent gain function from the corresponding table. Similarly to the previous approach, it was noticed that correction in the low-frequency domain brings most of the quality improvement. Also, for on-sky operation, an efficient reduction of the method has been suggested, allowing quick automated modal gain updates in the loop.101,103 In this case, similar to the dithering signal, the method relies on poking a few modes only, which provides a few scalar values for the optical gain. The rest of the optical gain function is then restored using the look-up tables.

Recently, another interesting noninvasive approach has been suggested, which is able to compute the vector of modal gains relying on temporal correlations of the modal decomposition of WFS measurements.75

Moreover, Ref. 148 suggests a completely different technique to overcome the pyramid sensor nonlinearity. The concept is known as the very linear wavefront sensor and consists of introducing an additional internal low-order loop in the sensing path. This dedicated low-order loop would precompensate large aberrations and guarantee that the pyramid sensor works in the linear regime.

The compensation of the sensor's optical gain by any of the mentioned methods has proven to be effective and is very important on its own for pushing the limit of the achievable correction quality higher. But it is even more important as a crucial constituent of an accurate compensation of the island effect, LWE, and NCPA.

7.4.

Noncommon Path Aberrations

The quality of telescope imaging, especially high-contrast imaging, suffers from various static and quasistatic optical imperfections and misalignments appearing within the instrument. The most serious impact is that of the so-called NCPA, quasistatic aberrations that arise in the instrument after the beam splitter that divides the light into the science and wavefront sensor paths. Typically, these are low-order aberrations, which, however, may have large amplitudes. Moreover, the NCPAs have a dynamic component: they change on a timescale from minutes to hours. Clearly, there is a need for techniques able to quantify the shape of the NCPAs, which can then be corrected with a DM, resulting in an improved imaging quality.

Some ELT instruments plan to use a dedicated wavefront sensor placed in the noncommon path of the instrument for measuring the noncommon path phase maps. The NCPA correction can in this case be performed with the main DM or with a dedicated low-order DM located in the noncommon path as well. Other instruments will rely on dedicated focal-plane wavefront sensing employed for an interferometric restoration of the noncommon path phase maps. Speckle-analyzing techniques, such as phase-sorting interferometry149 or phase diversity,146,150,151 can continuously provide in-the-loop information about the NCPAs, which is used to iteratively correct for them with the main DM. Another approach is to use focal plane sharpening methods for a blind control of the DM.146,151,152

When the NCPA measurements are available, the usual way to correct for them in a closed loop is to inject the measured phase shape as an offset onto the DM27,153 or to introduce an offset in the WFS measurements, making the DM converge in closed loop to the shape that cancels out the NCPAs in the science path.142,154 Both compensation schemes make the WFS see the NCPA and involve a reconstruction algorithm that assumes linearity of both the sensor and the reconstructor. However, it is well known that the pyramid sensor is nonlinear. Depending on the severity of the NCPAs, the sensor may work far away from its linear regime (have a reduced optical gain) and may provide saturated measurements of the residual wavefront, which leads to a wrong NCPA compensation or may even destabilize the closed loop.9,27

To overcome this difficulty, several solutions are available. First, tracking the optical gain of the sensor and correspondingly scaling the NCPA compensation to be applied allows one to overcome the nonlinearity of the sensor and leads to an improved compensation of NCPAs.27 Second, an approach was recently suggested for an additional countercorrection of NCPAs in the sensor path. The idea is to use a low-order DM or an adaptive lens155 placed in the WFS path just before the sensor, applying a deformation opposite to the NCPA shape sent to the DM. This approach allows one to bring the PWFS back to its linear regime, i.e., to restore its sensitivity.

Another approach could be to let the WFS be exposed to the NCPAs, work in the nonlinear regime, and use nonlinear wavefront reconstruction algorithms (cf. Sec. 5.6) able to provide accurate reconstructions from saturated sensor data.

8.

Conclusions

In this review paper, we have provided an extensive overview of the state of the art of wavefront reconstruction algorithms for the pyramid sensor. We have described a wide range of algorithms, divided into interaction-matrix-based approaches, FD methods, iterative algorithms, methods based on the inversion of the Hilbert transform, as well as several nonlinear approaches including machine learning. The algorithms were compared to each other in terms of underlying forward models, numerical complexity, and the achieved performance.

From the modeling point of view, we distinguished between phase mask and transmission mask pyramid sensor models, between full pyramid sensor and roof sensor models, as well as between nonlinear and linear approximations. In closed-loop operation, when the pyramid is essentially in its linear regime, the largely simplified forward models are sufficient as a foundation for reconstruction algorithms to provide high-quality correction. Several nonlinear algorithms have been developed based on more accurate forward modeling. However, the question still remains whether these methods outperform those using the approximations, at least in the closed-loop setting and especially for the modulated sensor.

The detailed descriptions of the algorithms and underlying forward models are rounded off with comparisons of the computational complexities of all the reviewed methods and numerical results for selected algorithms. We have seen that model-based algorithms using the most simplified forward models are not only able to provide very accurate correction but also require significantly fewer computations compared to interaction-matrix-based methods. While algorithm speed does not seem to be a crucial point for the real-time computing systems of the majority of instruments currently under development, it may still be an important issue for future extreme or tomographic AO systems involving several tens of thousands of unknowns.

Finally, the developed algorithms are currently being analyzed and tested with respect to their performance and stability under real-life circumstances. We provided an overview of important ELT-specific features that the AO control will have to deal with. Among those are, e.g., the support structures of the secondary mirrors shading sensor subapertures completely. These destroy the connectedness of the sensor data and cause the island effect, a failure of reconstructors to control piston modes on the fragmented pupil parts. Similar in its manifestation, but different in its origin, is the recently discovered LWE, which is expected to be very pronounced on ELTs due to their large sizes. Finally, any instrument equipped with an AO system suffers from NCPA. It is therefore important to know how reconstruction algorithms behave with respect to their amplitude and whether they are able to cope with the introduced errors. Another important point is the pyramid sensor sensitivity, or the optical gain. We have briefly sketched the currently foreseen solutions as well as the shortcomings and challenges for all of the mentioned real-life features expected on the instruments of the next generation.

Based on the outlined state of the art and the expected challenges in the field of wavefront reconstruction and control, an interesting direction for future research is the further elaboration of forward models aimed at embedding the mentioned real-life features into the forward models. This shall provide the possibility to develop reconstruction algorithms initially tailored for, and able to cope with, effects such as telescope spiders, the sensor's optical gain, or the system's NCPA.

Acknowledgments

The authors are grateful to Miska Le Louarn from the European Southern Observatory (ESO) for fruitful discussions and continuous support with using the OCTOPUS. Special acknowledgments are to be expressed to the Austrian Adaptive Optics team, especially Andreas Obereder and Stefan Raffetseder for their active collaboration on the pyramid WFS-related topics. This work was partially funded by the Austrian Federal Ministry of Science and Research (HRSM) and the Austrian Science Fund (FWF) F68-N36, project 5.

References

1. S. Esposito et al., "Laboratory characterization and performance of the high-order adaptive optics system for the Large Binocular Telescope," Appl. Opt. 49, G174–G189 (2010). https://doi.org/10.1364/AO.49.00G174
2. S. Esposito et al., "Large binocular telescope adaptive optics system: new achievements and perspectives in adaptive optics," Proc. SPIE 8149, 814902 (2011). https://doi.org/10.1117/12.898641
3. F. Pedichini et al., "High contrast imaging in the visible: first experimental results at the large binocular telescope," Astron. J. 154(2), 74 (2017). https://doi.org/10.3847/1538-3881/aa7ff3
4. R. M. Clare and R. G. Lane, "Phase retrieval from subdivision of the focal plane with a lenslet array," Appl. Opt. 43, 4080–4087 (2004). https://doi.org/10.1364/AO.43.004080
5. O. Fauvarque et al., "General formalism for Fourier-based wave front sensing," Optica 3, 1440–1452 (2016). https://doi.org/10.1364/OPTICA.3.001440
6. O. Fauvarque et al., "General formalism for Fourier-based wave front sensing: application to the pyramid wave front sensors," J. Astron. Telesc. Instrum. Syst. 3(1), 019001 (2017). https://doi.org/10.1117/1.JATIS.3.1.019001
7. C. Vérinaud, "On the nature of the measurements provided by a pyramid wave-front sensor," Opt. Commun. 233, 27–38 (2004). https://doi.org/10.1016/j.optcom.2004.01.038
8. V. Korkiakoski, C. Vérinaud, and M. Le Louarn, "Improving the performance of a pyramid wavefront sensor with modal sensitivity compensation," Appl. Opt. 47, 79–87 (2008). https://doi.org/10.1364/AO.47.000079
9. C. B. Luymes, "Compensation of non-common path aberrations for the pyramid wavefront sensor," Delft University of Technology (2015).
10. R. Ragazzoni, "Pupil plane wavefront sensing with an oscillating prism," J. Mod. Opt. 43(2), 289–293 (1996). https://doi.org/10.1080/09500349608232742
11. A. Burvall et al., "Linearity of the pyramid wavefront sensor," Opt. Express 14(25), 11925–11934 (2006). https://doi.org/10.1364/OE.14.011925
12. J. LeDue et al., "Calibration and testing with real turbulence of a pyramid sensor employing static modulation," Opt. Express 17(9), 7186–7195 (2009). https://doi.org/10.1364/OE.17.007186
13. R. Ragazzoni, E. Diolaiti, and E. Vernet, "A pyramid wavefront sensor with no dynamic modulation," Opt. Commun. 208, 51–60 (2002). https://doi.org/10.1016/S0030-4018(02)01580-8
14. J. B. Costa et al., "Is there need of any modulation in the pyramid wavefront sensor?," Proc. SPIE 4839, 288–298 (2003). https://doi.org/10.1117/12.459032
15. R. Ragazzoni and J. Farinato, "Sensitivity of a pyramidic wave front sensor in closed loop adaptive optics," Astron. Astrophys. 350, L23–L26 (1999).
16. S. Esposito and A. Riccardi, "Pyramid wavefront sensor behavior in partial correction adaptive optic systems," Astron. Astrophys. 369, L9–L12 (2001). https://doi.org/10.1051/0004-6361:20010219
17. S. Esposito, A. Riccardi, and O. Feeney, "Closed-loop performance of pyramid wavefront sensor," Proc. SPIE 4034, 184–189 (2000). https://doi.org/10.1117/12.391870
18. C. Vérinaud et al., "Adaptive optics for high-contrast imaging: pyramid sensor versus spatially filtered Shack–Hartmann sensor," Mon. Not. R. Astron. Soc.: Lett. 357, L26–L30 (2005). https://doi.org/10.1111/j.1745-3933.2005.08638.x
19. J.-P. Véran et al., "Pyramid versus Shack–Hartmann: trade study results for the NFIRAOS NGS WFS," in Proc. Fourth AO4ELT Conf. (2015).
20. E. Pinna et al., "The pyramid wavefront sensor for the high order testbench (HOT)," Proc. SPIE 7015, 701559 (2008). https://doi.org/10.1117/12.789483
21. K. E. Hadi, M. Vignaux, and T. Fusco, "Development of a pyramid wave-front sensor," in Proc. Third AO4ELT Conf. (2013).
22. V. Viotto et al., "A study of pyramid WFS behaviour under imperfect illumination," in Proc. Third AO4ELT Conf. (2013).
23. S. Turbide et al., "Development of a pyramidal wavefront sensor test-bench at INO," in Proc. Third AO4ELT Conf. (2013).
24. O. Martin et al., "INO pyramidal wavefront sensor demonstrator: first closed-loop on-sky operation at Mont–Mégantic telescope," in Proc. Fourth AO4ELT Conf. (2015).
25. C. Z. Bond et al., "Experimental study of an optimised pyramid wave-front sensor for extremely large telescopes," Proc. SPIE 9909, 990964 (2016). https://doi.org/10.1117/12.2232968
26. V. Viotto et al., "Expected gain in the pyramid wavefront sensor with limited Strehl ratio," Astron. Astrophys. 593, A100-1–A100-6 (2016). https://doi.org/10.1051/0004-6361/201528023
27. S. Esposito et al., "Noncommon path aberration correction with nonlinear WFSs," in Proc. Fourth AO4ELT Conf. (2015).
28. A. Ghedina et al., "On-sky test of the pyramid wavefront sensor," Proc. SPIE 4839, 869–877 (2003). https://doi.org/10.1117/12.458962
29. M. Feldt et al., "PYRAMIR: first on-sky results from an infrared pyramid wavefront sensor," Proc. SPIE 6272, 627218 (2006). https://doi.org/10.1117/12.671305
30. D. Peter et al., "PYRAMIR: exploring the on-sky performance of the world's first near-infrared pyramid wavefront sensor," Pub. Astron. Soc. Pac. 122, 63–70 (2010). https://doi.org/10.1086/648997
31. S. Esposito et al., "LBT AO on-sky results," in Proc. Second AO4ELT Conf. (2011).
32. E. Pinna et al., "XAO at LBT: current performances in the visible and upcoming upgrade," in Proc. Fourth AO4ELT Conf. (2015).
33. Y. Clénet et al., "Joint MICADO-MAORY SCAO mode: specifications, prototyping, simulations and preliminary design," Proc. SPIE 9909, 99090A (2016). https://doi.org/10.1117/12.2231192
34. T. Fusco et al., "Adaptive optics systems for HARMONI: a visible and near-infrared integral field spectrograph for the E-ELT," Proc. SPIE 7736, 773633 (2010). https://doi.org/10.1117/12.857507
35. B. Neichel et al., "The adaptive optics modes for HARMONI: from classical to laser assisted tomographic AO," Proc. SPIE 9909, 990909 (2016). https://doi.org/10.1117/12.2231681
36. B. R. Brandl et al., "METIS: the mid-infrared E-ELT imager and spectrograph," (2014). https://arxiv.org/pdf/1409.3087.pdf
37. V. Korkiakoski and C. Vérinaud, "Extreme adaptive optics simulations for EPICS," in Proc. First AO4ELT Conf., 03007 (2010).
38. T. Fusco et al., "ATLAS: the E-ELT laser tomographic adaptive optics system," Proc. SPIE 7736, 77360D (2010). https://doi.org/10.1117/12.857468
39. S. Esposito et al., "Wavefront sensor design for the GMT Natural Guide Star AO system," Proc. SPIE 8447, 84471L (2012). https://doi.org/10.1117/12.927158
40. M. A. van Dam et al., "Design of a truth sensor for the GMT laser tomography adaptive optics system," Proc. SPIE 8447, 844717 (2012). https://doi.org/10.1117/12.923198
41. E. Mieda et al., "Testing the pyramid truth wavefront sensor for NFIRAOS in the lab," Proc. SPIE 9909, 99091J (2016). https://doi.org/10.1117/12.2231838
42. B. Macintosh et al., "Extreme adaptive optics for the thirty meter telescope," Proc. SPIE 6272, 62720N (2006). https://doi.org/10.1117/12.672032
43. E. Pinna et al., "The pyramid wavefront sensor with extended reference source," in Proc. Second AO4ELT Conf. (2011).
44. F. Quiros-Pacheco et al., "Pyramid wavefront sensor performance with laser guide stars," in Proc. Third AO4ELT Conf. (2013).
45. S. Esposito et al., "Pyramid wavefront sensing using laser guide star for 8m and ELT class telescopes," Proc. SPIE 9909, 99096B (2016). https://doi.org/10.1117/12.2234423
46. C. Blain et al., "Use of laser guide star with pyramid wavefront sensor," in Proc. Fourth AO4ELT Conf. (2015).
47. M. Feldt et al., "Sensing wavefronts on resolved sources with pyramids on ELTs," Proc. SPIE 9909, 990961 (2016). https://doi.org/10.1117/12.2232601
48. S. Esposito et al., "Pyramid sensor for segmented mirror alignment," Opt. Lett. 30, 2572–2574 (2005). https://doi.org/10.1364/OL.30.002572
49. V. Hutterer et al., "Advanced wavefront reconstruction methods for segmented extremely large telescope pupils using pyramid sensors," J. Astron. Telesc. Instrum. Syst. 4(4), 049005 (2018). https://doi.org/10.1117/1.JATIS.4.4.049005
50. I. Surdej, "Co-phasing segmented mirrors: theory, laboratory experiments and measurements on sky," Ludwig-Maximilians-Universität München (2011).
51. I. Iglesias et al., "Extended source pyramid wave-front sensor for the human eye," Opt. Express 10(9), 419–428 (2002). https://doi.org/10.1364/oe.10.000419
52. S. R. Chamot, C. Dainty, and S. Esposito, "Adaptive optics for ophthalmic applications using a pyramid wavefront sensor," Opt. Express 14(2), 518–526 (2006). https://doi.org/10.1364/opex.14.000518
53. C. A. Diez, "A 3-sided pyramid wavefront sensor controlled by a neural network for adaptive optics to reach diffraction-limited imaging of the retina," Germany (2006).
54. E. M. Daly and C. Dainty, "Ophthalmic wavefront measurements using a versatile pyramid sensor," Appl. Opt. 49, G67–G77 (2010). https://doi.org/10.1364/AO.49.000G67
55. E. Brunner et al., "In-vivo demonstration of AO-OCT with a 3-sided pyramid wavefront sensor," Proc. SPIE 11218, 112180R (2020). https://doi.org/10.1117/12.2549506
56. I. Iglesias, "Pyramid phase microscopy," Opt. Lett. 36(18), 3636–3638 (2011). https://doi.org/10.1364/OL.36.003636
57. I. Iglesias and F. Vargas-Martin, "Quantitative phase microscopy of transparent samples using a liquid crystal display," J. Biomed. Opt. 18(2), 026015 (2013). https://doi.org/10.1117/1.JBO.18.2.026015

58. 

A. Riccardi et al., “Laboratory characterization of a Foucault-like wavefront sensor for adaptive optics,” Proc. SPIE, 3353 941 –951 (1998). https://doi.org/10.1117/12.321702 Google Scholar

59. 

M. Carbillet et al., “Modelling astronomical adaptive optics–I. The software package CAOS,” Mon. Not. R. Astron. Soc., 356 1263 –1275 (2005). https://doi.org/10.1111/j.1365-2966.2004.08524.x MNRAA4 0035-8711 Google Scholar

60. 

V. Korkiakoski et al., “Comparison between a model-based and a conventional pyramid sensor reconstructor,” Appl. Opt., 46 (24), 6176 –6184 (2007). https://doi.org/10.1364/AO.46.006176 APOPAI 0003-6935 Google Scholar

61. 

A. Garcia-Rissmann and M. Le Louarn, “Scao simulation results with a pyramid sensor on an elt-like telescope,” in 1st AO4ELT Conf. Adapt. Opt. for Extremely Large Telesc. Proc., (2010). https://doi.org/10.1051/ao4elt/201003011 Google Scholar

62. 

C. Vérinaud et al., “Layer oriented multi-conjugate adaptive optics systems: performance analysis by numerical simulations,” Proc. SPIE, 4839 524 –535 (2003). https://doi.org/10.1117/12.458965 PSISDG 0277-786X Google Scholar

63. 

I. Shatokhina, “Fast wavefront reconstruction algorithms for eXtreme adaptive optics,” Johannes Kepler University Linz, (2014). Google Scholar

64. 

V. Hutterer, “Model-based wavefront reconstruction approaches for pyramid wavefront sensors in adaptive optics,” Johannes Kepler University Linz, (2018). Google Scholar

65. 

O. Fauvarque et al., “Variation around the pyramid theme: optical recombination and optimal use of photons,” in Proc. Fourth AO4ELT Conf., (2015). Google Scholar

66. 

O. Fauvarque et al., “Variation around a pyramid theme: optical recombination and optimal use of photons,” Opt. Lett., 40 3528 –3531 (2015). https://doi.org/10.1364/OL.40.003528 OPLEDP 0146-9592 Google Scholar

67. 

R. M. Clare et al., “Numerical evaluation of pyramid type sensors for extreme adaptive optics for the European Extremely Large Telescope,” in Proc. Fifth AO4ELT Conf., (2017). Google Scholar

68. 

B. Engler, S. Weddell and R. Clare, “Wavefront sensing with prisms for astronomical imaging with adaptive optics,” in Int. Conf. Image and Vision Comput. N. Z. (IVCNZ), (2017). Google Scholar

69. 

B. Engler et al., “Effects of the telescope spider on extreme adaptive optics systems with pyramid wavefront sensors,” Proc. SPIE, 10703 107035F (2018). https://doi.org/10.1117/12.2310050 PSISDG 0277-786X Google Scholar

70. 

M. Le Louarn et al., “Parallel simulation tools for AO on ELTs,” Proc. SPIE, 5490 705 –712 (2004). https://doi.org/10.1117/12.551088 PSISDG 0277-786X Google Scholar

71. 

F. Rigaut and M. Van Dam, “Simulating astronomical adaptive optics systems using YAO,” in Proc. AO4ELT3, (2013). https://doi.org/10.12839/AO4ELT3.13173 Google Scholar

72. 

F. Ferreira et al., “Real-time end-to-end AO simulations at ELT scale on multiple GPUS with the COMPASS platform,” Proc. SPIE, 10703 1070347 (2018). https://doi.org/10.1117/12.2312593 PSISDG 0277-786X Google Scholar

73. 

G. Agapito, A. Puglisi and S. Esposito, “PASSATA: object oriented numerical simulation software for adaptive optics,” Proc. SPIE, 9909 99097E (2016). https://doi.org/10.1117/12.2233963 PSISDG 0277-786X Google Scholar

74. 

R. Conan and C. Correia, “Object-oriented MATLAB adaptive optics toolbox,” Proc. SPIE, 9148 91486C (2014). https://doi.org/10.1117/12.2054470 PSISDG 0277-786X Google Scholar

75. 

V. Deo et al., “Enlarging the control space of the pyramid wavefront sensor: numerical simulations and bench validation,” in Proc. AO4ELT5, (2017). Google Scholar

76. 

V. Deo et al., “Wavefront reconstruction for imperfect pyramid wavefront sensor assemblies: generalizing the controller slope space,” in Talk at Wavefront Sens. in the VLT/ELT era II, (2017). Google Scholar

77. 

R. A. Frazin, “Efficient, nonlinear phase estimation with the nonmodulated pyramid wavefront sensor,” J. Opt. Soc. Am. A, 35 594 –607 (2018). https://doi.org/10.1364/JOSAA.35.000594 JOAOD6 0740-3232 Google Scholar

78. 

V. Hutterer, R. Ramlau and I. Shatokhina, “Real-time adaptive optics with pyramid wavefront sensors: a theoretical analysis of the pyramid sensor model,” Inverse Prob., 35 34 (2019). https://doi.org/10.1088/1361-6420/ab0656 Google Scholar

79. 

D. W. Phillion and K. Baker, “Two-sided pyramid wavefront sensor in the direct phase mode,” Proc. SPIE, 6272 627228 (2006). https://doi.org/10.1117/12.671961 PSISDG 0277-786X Google Scholar

80. 

V. Hutterer et al., “Wavefront reconstruction for ELT-sized telescopes with pyramid wavefront sensors,” Proc. SPIE, 10703 1070344 (2018). https://doi.org/10.1117/12.2312423 PSISDG 0277-786X Google Scholar

81. 

J. M. Bardsley, “Wavefront reconstruction methods for adaptive optics systems on ground-based telescopes,” SIAM J. Matrix Anal. Appl., 30 67 –83 (2008). https://doi.org/10.1137/06067506X SJMAEL 0895-4798 Google Scholar

82. 

C. Béchet, M. Tallon and E. Thiébaut, “Comparison of minimum-norm maximum likelihood and maximum a posteriori wavefront reconstructions for large adaptive optics systems,” J. Opt. Soc. Am. A, 26 497 –508 (2009). https://doi.org/10.1364/JOSAA.26.000497 JOAOD6 0740-3232 Google Scholar

83. 

B. L. Ellerbroek, “Efficient computation of minimum-variance wave-front reconstructors with sparse matrix techniques,” J. Opt. Soc. Am. A, 19 (9), 1803 –1816 (2002). https://doi.org/10.1364/JOSAA.19.001803 Google Scholar

84. 

R. H. Hudgin, “Wave-front reconstruction for compensated imaging,” J. Opt. Soc. Am., 67 375 –378 (1977). https://doi.org/10.1364/JOSA.67.000375 JOSAAH 0030-3941 Google Scholar

85. 

M. Le Louarn et al., “Adaptive optics simulations for the European extremely large telescope,” Proc. SPIE, 6272 627234 (2006). https://doi.org/10.1117/12.787201 PSISDG 0277-786X Google Scholar

86. 

I. Montilla et al., “Performance comparison of wavefront reconstruction and control algorithms for extremely large telescopes,” J. Opt. Soc. Am. A, 27 A9 –A18 (2010). https://doi.org/10.1364/JOSAA.27.0000A9 JOAOD6 0740-3232 Google Scholar

87. 

E. Thiébaut and M. Tallon, “Fast minimum variance wavefront reconstruction for extremely large telescopes,” J. Opt. Soc. Am. A, 27 1046 –1059 (2010). https://doi.org/10.1364/JOSAA.27.001046 JOAOD6 0740-3232 Google Scholar

88. 

F. Quirós-Pacheco, C. Correia and S. Esposito, “Fourier transform-wavefront reconstruction for the pyramid wavefront sensor,” in Proc. First AO4ELT Conf., 07005 (2010). Google Scholar

89. 

V. Hutterer and R. Ramlau, “Wavefront reconstruction from non-modulated pyramid wavefront sensor data using a singular value type expansion,” Inverse Prob., 34 (3), 035002 (2018). https://doi.org/10.1088/1361-6420/aaa4a3 INPEEY 0266-5611 Google Scholar

90. 

V. Hutterer, R. Ramlau and I. Shatokhina, “Real-time adaptive optics with pyramid wavefront sensors: accurate wavefront reconstruction using iterative methods,” Inverse Prob., 35 27 (2019). https://doi.org/10.1088/1361-6420/ab0900 Google Scholar

91. 

I. Shatokhina, V. Hutterer and R. Ramlau, “Two novel algorithms for wavefront reconstruction from pyramid sensor data: convolution with linearized inverse filter and pyramid Fourier transform reconstructor,” in Proc. AO4ELT5, (2017). Google Scholar

92. 

I. Shatokhina and R. Ramlau, “Convolution- and Fourier-transform-based reconstructors for pyramid wavefront sensor,” Appl. Opt., 56 (22), 6381 –6390 (2017). https://doi.org/10.1364/AO.56.006381 APOPAI 0003-6935 Google Scholar

93. 

I. Shatokhina, M. Zhariy and R. Ramlau, “Wavefront reconstruction for XAO,” in Poster Presented at the Second Conf. Adapt. Opt. for Extrem. Large Telesc., (2011). Google Scholar

94. 

I. Shatokhina et al., “Preprocessed cumulative reconstructor with domain decomposition: a fast wavefront reconstruction method for pyramid wavefront sensor,” Appl. Opt., 52 (12), 2640 –2652 (2013). https://doi.org/10.1364/AO.52.002640 APOPAI 0003-6935 Google Scholar

95. 

M. Zhariy, “Hilbert transform reconstructor for non-modulated PWS measurements,” Austrian In-Kind Contribution - AO, (2011). Google Scholar

96. 

M. Zhariy, “Two component reconstructor for modulated PWS measurements,” Austrian In-Kind Contribution - AO, (2011). Google Scholar

97. 

R. A. Frazin and L. Jolissaint, “Nonlinear estimation with a pyramid wavefront sensor,” Proc. SPIE, 10703 1070354 (2018). https://doi.org/10.1117/12.2309369 PSISDG 0277-786X Google Scholar

98. 

V. Hutterer and R. Ramlau, “Nonlinear wavefront reconstruction methods for pyramid sensors using Landweber and Landweber–Kaczmarz iteration,” Appl. Opt., 57 (30), 8790 –8804 (2018). https://doi.org/10.1364/AO.57.008790 APOPAI 0003-6935 Google Scholar

99. 

V. Viotto et al., “PWFSs on GMCAO: a different approach to the non-linearity issue,” Proc. SPIE, 9909 99096H (2016). https://doi.org/10.1117/12.2232834 PSISDG 0277-786X Google Scholar

100. 

C. Bond et al., “Optimising the performance of a Pyramid WFS: tracking the optical gain,” in Talk at Wavefront Sens. Workshop, (2017). Google Scholar

101. 

V. Deo et al., “A modal approach to optical gain compensation for the PWFS,” in Talk at Wavefront Sens. and Control in the VLT/ELT era III, (2018). Google Scholar

102. 

V. Deo et al., “A modal approach to optical gain compensation for the pyramid wavefront sensor,” Proc. SPIE, 10703 1070320 (2018). https://doi.org/10.1117/12.2311631 PSISDG 0277-786X Google Scholar

103. 

V. Deo et al., “A telescope-ready approach for modal compensation of pyramid wavefront sensor optical gain,” Astron. Astrophys., 629 A107 (2019). https://doi.org/10.1051/0004-6361/201935847 AAEJAF 0004-6361 Google Scholar

104. 

N. Schwartz et al., “Sensing and control of segmented mirrors with a pyramid wavefront sensor in the presence of spiders,” in Proc. AO4ELT5, (2017). Google Scholar

105. 

N. Schwartz et al., “Analysis and mitigation of pupil discontinuities on adaptive optics performance,” Proc. SPIE, 10703 1070322 (2018). https://doi.org/10.1117/12.2313129 PSISDG 0277-786X Google Scholar

106. 

A. Bertrou-Cantou et al., “Analysis of the island effect for ELT MICADO MAORY SCAO mode,” in Proc. AO4ELT6, (2019). Google Scholar

107. 

B. Engler et al., “Pyramid wavefront sensing in the presence of thick spiders,” in Proc. AO4ELT6, (2019). Google Scholar

108. 

I. Shatokhina, A. Obereder and R. Ramlau, “Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor,” Proc. SPIE, 9148 91480P (2014). https://doi.org/10.1117/12.2057375 PSISDG 0277-786X Google Scholar

109. 

M. Rosensteiner, “Cumulative reconstructor: fast wavefront reconstruction algorithm for extremely large telescopes,” J. Opt. Soc. Am. A, 28 2132 –2138 (2011). https://doi.org/10.1364/JOSAA.28.002132 JOAOD6 0740-3232 Google Scholar

110. 

M. Zhariy et al., “Cumulative wavefront reconstructor for the Shack-Hartman sensor,” Inverse Prob. Imaging, 5 893 –913 (2011). https://doi.org/10.3934/ipi.2011.5.893 Google Scholar

111. 

M. Rosensteiner, “Wavefront reconstruction for extremely large telescopes via CuRe with domain decomposition,” J. Opt. Soc. Am. A, 29 2328 –2336 (2012). https://doi.org/10.1364/JOSAA.29.002328 JOAOD6 0740-3232 Google Scholar

112. 

A. Neubauer, “A new cumulative wavefront reconstructor for the Shack–Hartmann sensor,” J. Inverse Ill-Posed Prob., 21 451 –476 (2013). https://doi.org/10.1515/jip-2013-0003 Google Scholar

113. 

U. Bitenc et al., “On-sky tests of the CuReD and HWR fast wavefront reconstruction algorithms with CANARY,” Mon. Not. R. Astron. Soc., 448 (2), 1199 –1205 (2015). https://doi.org/10.1093/mnras/stv003 MNRAA4 0035-8711 Google Scholar

114. 

U. Bitenc et al., “Tests of novel wavefront reconstructors on sky with CANARY,” in Proc. Third AO4ELT Conf., (2013). Google Scholar

115. 

R. Clare and M. L. Louarn, “Numerical simulations of an extreme AO system for an ELT,” in Proc. AO4ELT2, 1100 –1107 (2011). Google Scholar

116. 

A. Obereder et al., “Dealing with spiders on ELTs: using a pyramid WFS to overcome residual piston effects,” Proc. SPIE, 10703 107031D (2018). https://doi.org/10.1117/12.2313419 PSISDG 0277-786X Google Scholar

117. 

L. A. Poyneer, D. T. Gavel and J. M. Brase, “Fast wave-front reconstruction in large adaptive optics systems with use of the Fourier transform,” J. Opt. Soc. Am. A, 19 2100 –2111 (2002). https://doi.org/10.1364/JOSAA.19.002100 JOAOD6 0740-3232 Google Scholar

118. 

C. Bond et al., “Real-time pyramid WF reconstruction in the Fourier domain: preparing simulations for ELTs,” in Talk at Wavefront Sens. in the VLT/ELT era I, (2016). Google Scholar

119. 

C. Z. Bond et al., “Fourier wavefront reconstruction with a pyramid wavefront sensor,” Proc. SPIE, 10703 107034M (2018). https://doi.org/10.1117/12.2314028 PSISDG 0277-786X Google Scholar

120. 

H. Hochstadt, Integral Equations, Wiley Classics Library, New York (2011). Google Scholar

121. 

A. Polyanin and A. Manzhirov, Handbook of Integral Equations, Taylor & Francis, CRC Press LLC, Boca Raton (1998). Google Scholar

122. 

F. Tricomi, Integral Equations, Interscience, New York (1957). Google Scholar

123. 

H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht (2000). Google Scholar

124. 

H. Akaike, On a Successive Transformation of Probability Distribution and Its Application to the Analysis of the Optimum Gradient Method, 37 –52 Springer, New York (1998). Google Scholar

125. 

J. Barzilai and J. Borwein, “Two-point step size gradient methods,” IMA J. Numer. Anal., 8 141 –148 (1988). https://doi.org/10.1093/imanum/8.1.141 IJNADH 0272-4979 Google Scholar

126. 

Y.-H. Dai and Y.-X. Yuan, “Alternate minimization gradient method,” IMA J. Numer. Anal., 23 (3), 377 –393 (2003). https://doi.org/10.1093/imanum/23.3.377 IJNADH 0272-4979 Google Scholar

127. 

G. E. Forsythe and T. S. Motzkin, “Asymptotic properties of the optimum gradient method,” Bull. Am. Math. Soc., 57 183 (1951). BAMOAD 0273-0979 Google Scholar

128. 

A. K. Louis, Inverse und schlecht geschtellte Probleme, Stuttgart(1989). Google Scholar

129. 

M. Yudytskiy, “Wavelet methods in adaptive optics,” Johannes Kepler University Linz, (2014). Google Scholar

130. 

M. Yudytskiy, T. Helin and R. Ramlau, “A frequency dependent preconditioned wavelet method for atmospheric tomography,” in Third AO4ELT Conf. – Adapt. Opt. for Extremely Large Telesc., (2013). Google Scholar

131. 

M. Yudytskiy, T. Helin and R. Ramlau, “Finite element-wavelet hybrid algorithm for atmospheric tomography,” J. Opt. Soc. Am. A, 31 550 –560 (2014). https://doi.org/10.1364/JOSAA.31.000550 JOAOD6 0740-3232 Google Scholar

132. 

R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik, 35 (2), 237 –246 (1972). OTIKAJ 0030-4026 Google Scholar

133. 

J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett., 3 27 –29 (1978). https://doi.org/10.1364/OL.3.000027 OPLEDP 0146-9592 Google Scholar

134. 

S. Haffert et al., “Past, present and future of the generalized optical differentiation wavefront sensor,” in Talk at Wavefront Sens. and Control in the VLT/ELT era III, (2018). Google Scholar

135. 

B. R. Brandl et al., “Status of the mid-infrared E-ELT imager and spectrograph METIS,” Proc. SPIE, 9908 990820 (2016). https://doi.org/10.1117/12.2233974 PSISDG 0277-786X Google Scholar

136. 

M. Kasper et al., “Epics: direct imaging of exoplanets with the E-ELT,” Proc. SPIE, 7735 77352E (2010). https://doi.org/10.1117/12.856850 PSISDG 0277-786X Google Scholar

137. 

M. L. Louarn et al., “Latest AO simulation results for the E-ELT,” in Poster Presented at AO4ELT5 Conf., (2017). Google Scholar

138. 

M. J. Wilby et al., “A “Fast and Furious” solution to the low-wind effect for SPHERE at the VLT,” Proc. SPIE, 9909 99096C (2016). https://doi.org/10.1117/12.2233462 PSISDG 0277-786X Google Scholar

139. 

K. El Hadi et al., “Pupil phase discontinuity measurement: comparison of different wavefront sensing concepts,” Proc. SPIE, 9909 990967 (2016). https://doi.org/10.1117/12.2232818 PSISDG 0277-786X Google Scholar

140. 

M. N’Diaye et al., “Mitigate the impact of ELT architecture on AO performance: learn from today’s telescopes to characterize and prevent the island effect,” in Proc. AO4ELT5, (2017). Google Scholar

141. 

M. N’Diaye et al., “Calibration of the island effect: experimental validation of closed-loop focal plane wavefront control on Subaru/SCExAO,” Astron. Astrophys., 610 A18 (2018). https://doi.org/10.1051/0004-6361/201731985 AAEJAF 0004-6361 Google Scholar

142. 

S. Hippler et al., “Single conjugate adaptive optics for the ELT instrument METIS,” Exp. Astron., 47 65 –105 (2019). https://doi.org/10.1007/s10686-018-9609-y Google Scholar

143. 

N. Schwartz et al., “Update on the mitigation of the island effect for the ELT,” (2018) https://www.dur.ac.uk/resources/cfai/workshop2018/schwartzFriday.pdf Google Scholar

144. 

J.-F. Sauvage et al., “Tackling down the low wind effect on SPHERE instrument,” Proc. SPIE, 9909 990916 (2016). https://doi.org/10.1117/12.2232459 PSISDG 0277-786X Google Scholar

145. 

M. Lamb et al., “Estimating the low wind effect on SPHERE with experimental and on-sky data,” in Proc. AO4ELT5, (2017). Google Scholar

146. 

M. P. Lamb et al., “Quantifying telescope phase discontinuities external to adaptive optics systems by use of phase diversity and focal plane sharpening,” J. Astron. Telesc. Instrum. Syst., 3 (3), 039001 (2017). https://doi.org/10.1117/1.JATIS.3.3.039001 Google Scholar

147. 

I. Shatokhina et al., “Adaptive optics with a pyramid wavefront sensor in the visible versus near-infrared,” in Poster Presented at the AO4ELT6, (2019). Google Scholar

148. 

D. Greggio et al., “Avoiding to trade sensitivity for linearity in a real world WFS,” in Proc. AO4ELT3, 8 (2013). Google Scholar

149. 

J. L. Codona and M. Kenworthy, “Focal plane wavefront sensing using residual adaptive optics speckles,” Astrophys. J., 767 (2), 100 (2013). https://doi.org/10.1088/0004-637X/767/2/100 ASJOAB 0004-637X Google Scholar

150. 

R. A. Frazin, “Utilization of the wavefront sensor and short-exposure images for simultaneous estimation of quasi-static aberration and exoplanet intensity,” Astrophys. J., 767 (1), 21 (2013). https://doi.org/10.1088/0004-637X/767/1/21 ASJOAB 0004-637X Google Scholar

151. 

M. Lamb et al., “Non-common path aberration corrections for current and future AO systems,” Proc. SPIE, 9148 914857 (2014). https://doi.org/10.1117/12.2054154 PSISDG 0277-786X Google Scholar

152. 

L. F. Rodrguez-Ramos et al., “Non-common path aberration compensation using the NWIWM method,” in Proc. AO4ELT5, (2017). Google Scholar

153. 

C. Z. Bond et al., “Optimized calibration of the adaptive optics system on the LAM pyramid bench,” in Proc. AO4ELT5, (2017). Google Scholar

154. 

J.-F. Sauvage et al., “Calibration and precompensation of noncommon path aberrations for extreme adaptive optics,” J. Opt. Soc. Am. A, 24 2334 –2346 (2007). https://doi.org/10.1364/JOSAA.24.002334 JOAOD6 0740-3232 Google Scholar

155. 

D. Magrin et al., “Recovering pyramid WS gain in non-common path aberration correction mode via deformable lens,” in Proc. AO4ELT5, (2017). Google Scholar

Biography

Iuliia Shatokhina is a research assistant at the Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences. She received her BS and MS degrees in physics from the National University of Kyiv-Mohyla Academy in 2006 and 2007, respectively, and her PhD in mathematics from the Johannes Kepler University Linz, Austria, in 2014. Her current research interests include inverse problems, adaptive optics, pyramid wavefront sensors, and wavefront reconstruction algorithms.

Victoria Hutterer is a research assistant at the Johannes Kepler University Linz, Austria. She received her BS, MS, and PhD degrees in mathematics from the Johannes Kepler University Linz in 2014, 2015, and 2018, respectively. Her current research interests include inverse problems, adaptive optics, Fourier-based wavefront sensors, wavefront reconstruction algorithms, and iterative and nonlinear algorithms.

Ronny Ramlau is a university professor at the Johannes Kepler University Linz, Austria, where he leads the Industrial Mathematics Institute. He is also a scientific director at RICAM, Austrian Academy of Sciences, leading the Transfer Group. He received his MS degree in mathematics in 1994 and his PhD in mathematics from the University of Potsdam, Germany, in 1997. He is the author of more than 50 journal papers and has written three book chapters. His current research interests include nonlinear inverse problems, medical imaging, regularization methods, and iterative methods.

© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE)
Iuliia Shatokhina, Victoria Hutterer, and Ronny Ramlau, "Review on methods for wavefront reconstruction from pyramid wavefront sensor data," Journal of Astronomical Telescopes, Instruments, and Systems 6(1), 010901 (13 March 2020). https://doi.org/10.1117/1.JATIS.6.1.010901
Received: 27 March 2019; Accepted: 26 February 2020; Published: 13 March 2020
Keywords: sensors, reconstruction algorithms, wavefront reconstruction, wavefronts, modulation, telescopes, adaptive optics
