We present a study addressing the reconstruction of synthetic aperture radar (SAR) images using machine learning. Following previous work, we use a single fully-connected layer to learn the sensing matrix of the forward scattering problem, taking a set of reflectivities as inputs and the corresponding SAR measurements as outputs. With this learned sensing matrix, we form an initial estimate of the reflectivity by applying its conjugate transpose to the SAR measurement data. We then further improve these reconstructions using convolutional layers. Using a training set of 50,000 images of randomly placed point scatterers as the reflectivities, we simulate SAR measurement data with a physical model for the sensing matrix. We apply the learned sensing matrix to these SAR measurements and use the resulting reflectivity estimates as inputs to the model, with the true reflectivities as outputs. The model is trained to reconstruct images containing a single target. We find that the resulting reconstructions are sharper than the initial estimates obtained from the conjugate transpose of the learned sensing matrix; in particular, the background noise is significantly reduced. We also test this model on a separate dataset whose reflectivities contain multiple targets. Consistent with the single-target results, and with no additional training, the model also improves the reconstructions of reflectivities for data with multiple targets.
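The adjoint-based initial estimate described above can be illustrated with a minimal sketch. The dimensions, the random stand-in for the sensing matrix, and the single point scatterer are illustrative assumptions only; the study learns its sensing matrix from data and refines the estimate with convolutional layers, neither of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n reflectivity pixels, m SAR measurements.
n, m = 64, 128

# Random complex stand-in for the sensing matrix A of the forward
# scattering problem (the study learns an approximation of A with a
# single fully-connected layer; this is not that learned matrix).
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(m)

# Reflectivity containing a single point scatterer.
x = np.zeros(n, dtype=complex)
x[n // 2] = 1.0

# Simulated SAR measurements from the linear forward model y = A x.
y = A @ x

# Initial reflectivity estimate via the conjugate transpose (adjoint)
# of the sensing matrix; convolutional layers would then refine this.
x_hat = A.conj().T @ y
```

In this sketch the adjoint estimate `x_hat` peaks at the true scatterer location but carries background clutter from the off-diagonal entries of `A.conj().T @ A`, which is the residue the convolutional refinement stage is trained to suppress.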