Achieving both high angular resolution and frequent revisit times for Earth (or planet) observation from low Earth orbit poses numerous challenges. The trade-off between increasing aperture size and the associated costs necessitates a novel approach. AZIMOV is a payload prototype project, currently in the design phase, of a 6U CubeSat segmented deployable telescope with an aperture diameter of 30 cm. The large primary mirror enables a 1 m ground sampling distance in the visible. Optimal telescope performance requires precise phasing of the primary mirror, but CubeSat limitations (volume, power, computing) preclude conventional dedicated wavefront sensing methods. Only focal-plane sensing appears feasible on small platforms. However, classical methods are iterative and computationally heavy due to the non-linearity between phase and image intensity. In this paper, we investigate deep learning for correcting piston, tip, and tilt aberrations across the primary mirror's four segments from a single focal-plane image. We demonstrate diffraction-limited performance on a point source. This method, based on a convolutional neural network (CNN), is robust to noise and to higher-order aberrations, and outperforms classical iterative methods in terms of speed, accuracy, and robustness. Finally, when imaging an unknown extended object on Earth's surface, we demonstrate that our method consistently achieves diffraction-limited performance.
For space-based Earth observation and solar system observations, obtaining both high revisit rates (using a constellation of small platforms) and high angular resolution (using large optics and therefore a large platform) is an asset for many applications. Unfortunately, these two requirements are mutually exclusive. A deployable satellite concept has been suggested that could provide both, jointly delivering high revisit rates and a ground resolution of roughly 1 meter. However, this concept relies on the capacity to maintain the phasing of the segments at a sufficient precision (a few tens of nanometers at visible wavelengths) while undergoing strong and dynamic thermal gradients. In the constrained volume environment of a CubeSat, the system must reuse the scientific images to measure the phasing errors. We address in this paper the key issue of focal-plane wavefront sensing for a segmented pupil using a single image with deep learning. We show a first demonstration of measurement on a point source. The neural network is able to correctly identify the piston, tip, and tilt phase coefficients to within 15 nm per petal.
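The non-linear phase-to-intensity mapping these abstracts refer to can be illustrated with a minimal numerical sketch: a four-petal segmented pupil with per-segment piston, tip, and tilt, propagated to the focal plane by a Fourier transform. This is the forward model one would use to simulate training images for the CNN. All names, grid sizes, and the wavelength below are illustrative assumptions, not details of the AZIMOV design.

```python
import numpy as np

N = 128  # pupil grid size (illustrative; real simulations would zero-pad for finer PSF sampling)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0

# Four quadrant "petals" approximating the segmented primary mirror
petals = [
    pupil & (X >= 0) & (Y >= 0),
    pupil & (X < 0) & (Y >= 0),
    pupil & (X < 0) & (Y < 0),
    pupil & (X >= 0) & (Y < 0),
]

def psf(ptt, wavelength=550e-9):
    """Focal-plane image for per-petal aberrations.

    ptt: (4, 3) array; each row is (piston, tip, tilt) expressed as
    optical path difference in meters at unit pupil coordinate.
    Returns the normalized focal-plane intensity (PSF).
    """
    phase = np.zeros((N, N))
    for mask, (p, tip, tilt) in zip(petals, ptt):
        opd = p + tip * X[mask] + tilt * Y[mask]
        phase[mask] = 2 * np.pi / wavelength * opd
    field = pupil * np.exp(1j * phase)
    img = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return img / img.sum()
```

Because the intensity depends on the squared modulus of the Fourier transform of the complex pupil field, the piston sign information is partly degenerate in a single image, which is precisely why classical focal-plane retrieval is iterative and why a learned inverse model is attractive.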
Sign Language Recognition (SLR) has become an appealing topic in modern societies because such technology can ideally be used to bridge the gap between deaf and hearing people. Although important steps have been made towards the development of real-world SLR systems, signer-independent SLR is still one of the bottleneck problems of this research field. In this regard, we propose a deep neural network along with an adversarial training objective, specifically designed to address the signer-independent problem. Concretely speaking, the proposed model consists of an encoder, mapping from input images to latent representations, and two classifiers operating on these underlying representations: (i) the sign-classifier, for predicting the class/sign labels, and (ii) the signer-classifier, for predicting their signer identities. During the learning stage, the encoder is simultaneously trained to help the sign-classifier as much as possible while trying to fool the signer-classifier. This adversarial training procedure allows learning signer-invariant latent representations that are in fact highly discriminative for sign recognition. Experimental results demonstrate the effectiveness of the proposed model and its capability of dealing with the large inter-signer variations.
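The adversarial objective described above (help the sign-classifier, fool the signer-classifier) can be sketched with a deliberately tiny linear model and manual gradients. This is not the authors' architecture; it is a minimal numpy illustration of the training signal, where the encoder descends the sign loss but ascends the signer loss (a gradient-reversal-style update). All dimensions and the weighting `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def xent_grad(logits, y):
    """Mean cross-entropy loss and its gradient w.r.t. the logits."""
    p = softmax(logits)
    n = len(y)
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    g = p.copy()
    g[np.arange(n), y] -= 1.0
    return loss, g / n

# Toy sizes: D input features, H latent dims, C signs, S signers
D, H, C, S, lam, lr = 16, 8, 5, 3, 0.5, 0.1
We = rng.normal(0, 0.1, (D, H))   # encoder
Wc = rng.normal(0, 0.1, (H, C))   # sign-classifier
Wd = rng.normal(0, 0.1, (H, S))   # signer-classifier

def train_step(x, y_sign, y_signer):
    global We, Wc, Wd
    z = x @ We
    loss_sign, gc = xent_grad(z @ Wc, y_sign)
    loss_signer, gd = xent_grad(z @ Wd, y_signer)
    # Encoder gradient: minimize sign loss, MAXIMIZE signer loss (note the minus sign)
    gz = gc @ Wc.T - lam * (gd @ Wd.T)
    # Each classifier still minimizes its own loss on the current latents
    Wc -= lr * z.T @ gc
    Wd -= lr * z.T @ gd
    We -= lr * x.T @ gz
    return loss_sign, loss_signer
```

The key line is `gz = gc @ Wc.T - lam * (gd @ Wd.T)`: the signer-classifier's gradient is reversed before reaching the encoder, so latents that let the signer be identified are penalized while sign-discriminative structure is rewarded.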