High-contrast imaging instruments face performance limitations due to non-common path aberrations, which hinder the detection of exoplanets. We have successfully applied convolutional neural networks (CNNs) to estimate these aberrations in simulations. However, a model trained only on simulated data loses accuracy when inferring phase aberrations from real data. In this study, we propose a domain adaptation method, based on a variational autoencoder architecture, to quickly adapt models from simulations to real data. We demonstrate the approach on the Subaru/SCExAO instrument and show that it significantly improves phase retrieval.
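As a rough illustration of what such a VAE-based adaptation step could look like, the PyTorch-style sketch below fine-tunes a simulation-trained encoder on unlabeled real PSFs against a frozen differentiable optical decoder. The layer sizes, names (PhaseEncoder, optical_decoder) and hyperparameters are illustrative assumptions, not the actual SCExAO pipeline.

```python
# Hypothetical sketch: adapting a simulation-trained phase-retrieval encoder to
# real PSFs with a VAE-style objective. All shapes and names are assumptions.
import torch
import torch.nn as nn

class PhaseEncoder(nn.Module):
    """CNN mapping a PSF image to a distribution over Zernike coefficients."""
    def __init__(self, n_zernike=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.LazyLinear(n_zernike)       # mean of the latent coefficients
        self.logvar = nn.LazyLinear(n_zernike)   # log-variance of the latent coefficients

    def forward(self, psf):
        h = self.features(psf)
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) with the reparameterization trick.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def adapt_to_real_data(encoder, optical_decoder, real_psfs, steps=100, beta=1e-3):
    """Fine-tune the encoder on unlabeled real PSFs; the differentiable optical
    decoder stays frozen, so no ground-truth phase is required."""
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    for _ in range(steps):
        mu, logvar = encoder(real_psfs)
        z = reparameterize(mu, logvar)
        psf_hat = optical_decoder(z)                     # PSF simulated from the latent phase
        recon = torch.mean((psf_hat - real_psfs.squeeze(1)) ** 2)   # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp()) # KL regularization
        loss = recon + beta * kl
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return encoder
```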
The Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument is a high-contrast imager designed for detecting exoplanets, operational at the Very Large Telescope since 2014. To make the most of the extensive data generated by SPHERE, improve future observation planning, and advance instrument development, it is crucial to understand how its performance is affected by various environmental factors. The primary goal of this project is to use machine learning and deep learning techniques to predict detection limits, measured by the contrast between exoplanets and their host stars. Two types of models will be developed: random forest models and Multi-Layer Perceptron (MLP) models. The aim is to better understand the relationship between input parameters and detection limits, providing deeper insight into this field. Additionally, a neural network will be used to capture uncertainties in the input features, thus providing confidence intervals for its predictions.
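A minimal sketch of the two model families and of a heteroscedastic network that outputs confidence intervals is given below. The feature names (seeing, wind speed, coherence time, star magnitude), the placeholder data and the hyperparameters are assumptions for illustration only, not the actual SPHERE feature set or models.

```python
# Illustrative sketch only: random forest / MLP baselines plus an
# uncertainty-aware network for contrast (detection limit) prediction.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

# X: environmental/instrumental parameters, y: measured contrast (placeholder data)
X = np.random.rand(1000, 4)   # e.g. [seeing, wind_speed, tau0, star_mag] -- assumed features
y = np.random.rand(1000)

rf = RandomForestRegressor(n_estimators=300).fit(X, y)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# Heteroscedastic network: predicts a mean and a variance per sample, so the
# predicted variance can be turned into a confidence interval.
class ContrastNet(nn.Module):
    def __init__(self, n_in=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
        self.mean = nn.Linear(64, 1)
        self.logvar = nn.Linear(64, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.logvar(h)

net = ContrastNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
nll = nn.GaussianNLLLoss()
xt = torch.tensor(X, dtype=torch.float32)
yt = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
for _ in range(200):
    mean, logvar = net(xt)
    loss = nll(mean, yt, logvar.exp())   # Gaussian negative log-likelihood
    opt.zero_grad(); loss.backward(); opt.step()

mean, logvar = net(xt[:1])
sigma = logvar.exp().sqrt()
# ~95% confidence interval on the predicted contrast for the first sample
interval = (mean - 1.96 * sigma, mean + 1.96 * sigma)
```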
Instrumental aberrations strongly limit high-contrast imaging of exoplanets, especially when they produce quasi-static speckles in the science images. Building on recent advances in deep learning, we developed in previous works an approach that applies convolutional neural networks (CNNs) to estimate pupil-plane phase aberrations from point spread functions (PSFs). In this work, we take a step further by incorporating into the deep learning architecture the physical simulation of the optical propagation occurring inside the instrument. This is achieved with an autoencoder architecture that uses a differentiable optical simulator as the decoder. Because this unsupervised learning approach reconstructs the PSFs, knowing the true phase is not needed to train the models, making it particularly promising for on-sky applications. We show that the performance of our method is almost identical to that of a standard CNN approach, and that the models are sufficiently stable in terms of training and robustness. We notably illustrate how the simulator-based autoencoder architecture allows the models to be quickly fine-tuned on a single test image, achieving much better performance when the PSFs contain more noise and aberrations. These early results are very promising, and future steps have been identified to apply the method to real data.
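To make the architecture concrete, the following sketch shows a CNN encoder whose output Zernike coefficients are fed to a fixed, differentiable Fraunhofer propagation acting as the decoder, trained purely on PSF reconstruction. The pupil sampling, Zernike basis handling and layer sizes are simplified assumptions, not the instrument model used in the work.

```python
# Minimal sketch of an autoencoder whose decoder is a differentiable
# pupil-to-focal-plane propagation; sizes and basis are assumptions.
import torch
import torch.nn as nn

class OpticalDecoder(nn.Module):
    """Maps Zernike coefficients to a PSF through a fixed, differentiable simulator."""
    def __init__(self, zernike_basis, pupil_mask):
        super().__init__()
        # zernike_basis: (n_modes, N, N) phase maps; pupil_mask: (N, N) aperture
        self.register_buffer("basis", zernike_basis)
        self.register_buffer("pupil", pupil_mask)

    def forward(self, coeffs):
        # phase(x, y) = sum_k a_k * Z_k(x, y)
        phase = torch.einsum("bk,kxy->bxy", coeffs, self.basis)
        field = self.pupil * torch.exp(1j * phase)                  # complex pupil field
        focal = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
        psf = torch.abs(focal) ** 2
        return psf / psf.sum(dim=(-2, -1), keepdim=True)            # normalized PSF

class PSFAutoencoder(nn.Module):
    def __init__(self, decoder, n_modes=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(n_modes),
        )
        self.decoder = decoder   # physics-based decoder: no trainable parameters

    def forward(self, psf):
        coeffs = self.encoder(psf)
        return self.decoder(coeffs), coeffs

# Unsupervised training: the loss compares reconstructed and observed PSFs,
# so the true phase never enters the objective, e.g.
#   psf_hat, _ = model(psf_batch)
#   loss = torch.mean((psf_hat - psf_batch.squeeze(1)) ** 2)
```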
High-contrast imaging instruments are today primarily limited by non-common path aberrations arising between the wavefront sensor of the adaptive optics system and the science camera. Early attempts at using artificial neural networks for focal-plane wavefront sensing showed some success, but today's higher computational power and deep architectures promise gains in performance, flexibility, and robustness that have yet to be exploited. We implement two convolutional neural networks to estimate wavefront errors from simulated point-spread functions. We notably train mixture density models and show that they can assess the sign ambiguity of the phase by predicting each Zernike coefficient as a probability distribution. We also apply our method to the Vector Vortex coronagraph (VVC) and compare its phase retrieval performance with classical imaging. Finally, preliminary results indicate that the VVC combined with polarized light can lift the sign ambiguity.
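As a hedged sketch of how such a mixture-density output could be set up, the head below predicts, for each Zernike coefficient, a two-component Gaussian mixture whose bimodality can express the focal-plane sign ambiguity. The class name, number of components and backbone are assumptions, not the networks trained in the work.

```python
# Illustrative mixture-density head over Zernike coefficients (assumed design).
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, n_features, n_zernike=20, n_components=2):
        super().__init__()
        self.n_z, self.n_c = n_zernike, n_components
        self.pi = nn.Linear(n_features, n_zernike * n_components)        # mixture weights (logits)
        self.mu = nn.Linear(n_features, n_zernike * n_components)        # component means
        self.log_sigma = nn.Linear(n_features, n_zernike * n_components) # component spreads

    def forward(self, h):
        shape = (-1, self.n_z, self.n_c)
        log_pi = torch.log_softmax(self.pi(h).view(shape), dim=-1)
        mu = self.mu(h).view(shape)
        sigma = torch.exp(self.log_sigma(h).view(shape))
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, target):
    """Negative log-likelihood of the true coefficients under the predicted mixtures."""
    t = target.unsqueeze(-1)                                             # (batch, n_z, 1)
    log_prob = (-0.5 * ((t - mu) / sigma) ** 2
                - torch.log(sigma)
                - 0.5 * torch.log(torch.tensor(2 * torch.pi)))
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# A CNN backbone over the PSF would produce the feature vector `h`. A coefficient
# with a clear sign yields one dominant component, while an ambiguous one shows
# two components of similar weight at opposite amplitudes.
```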