Synthetic Aperture Radar (SAR) is a commonly used modality in mission-critical remote-sensing applications, including battlefield intelligence, surveillance, and reconnaissance (ISR). Processing SAR imagery with deep learning is challenging because deep learning methods generally require large training datasets and high-quality labels, which are expensive to obtain for SAR. In this paper, we introduce a new approach for learning from SAR images in the absence of abundant labeled SAR data. We demonstrate that our geometrically-inspired neural architecture, together with our proposed self-supervision scheme, enables us to leverage the unlabeled SAR data and learn compelling image features with few labels. Finally, we report test results of our proposed algorithm on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset.
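To make the label-scarce setting concrete, the sketch below shows a generic self-supervised pretrain-then-fine-tune pipeline in PyTorch. It is an illustrative assumption only: the backbone, the rotation-prediction pretext task, and all names are placeholders, not the authors' geometrically-inspired architecture or their specific self-supervision scheme.

```python
# Illustrative sketch (NOT the paper's method): pretrain on unlabeled SAR chips
# with a rotation-prediction pretext task, then fine-tune on a few labels.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Placeholder backbone; the paper uses a geometrically-inspired architecture."""
    def __init__(self, num_out):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_out)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def pretrain_step(model, unlabeled, optimizer):
    """One self-supervised step: predict which of 4 rotations was applied."""
    rotations = torch.randint(0, 4, (unlabeled.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                           for img, k in zip(unlabeled, rotations)])
    loss = nn.functional.cross_entropy(model(rotated), rotations)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: pretrain on unlabeled chips, then swap the head and fine-tune
# with the few available labels (e.g., the 10 MSTAR target classes).
model = SmallCNN(num_out=4)                  # 4 rotation classes for the pretext task
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
unlabeled_batch = torch.randn(8, 1, 64, 64)  # stand-in for unlabeled SAR image chips
pretrain_step(model, unlabeled_batch, opt)
model.head = nn.Linear(32, 10)               # new head for supervised fine-tuning
```

The design point this illustrates is the one the abstract makes: the bulk of training signal comes from unlabeled SAR data via the pretext task, so only the final classification head needs the scarce labels.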