Very-long-baseline interferometry has been one of the major astronomical imaging techniques of the last century, used for tasks ranging from measuring the diameters of stars to imaging black holes at the centers of galaxies. However, the usual heterodyne technique is typically limited to radio wavelengths for the longest baselines due to fundamental noise from the local oscillator, which is used to measure the collected electric field in time at each aperture. Visible and near-infrared (V-NIR) wavelengths do not easily allow such measurements because of their higher frequency; for optimal performance, the collected fields must instead be directly interfered to measure the spatial correlation of the stellar light between the apertures. At V-NIR wavelengths, this imposes a practical limit on the distance between the receivers and on the faintness of observable stellar sources, since physically bringing the fields together is lossy. Several theoretical proposals have promised to reduce this loss by using single photons together with quantum networks and/or quantum memories. We demonstrate a proof-of-principle, table-top experiment of one such proposal by interfering path-entangled single photons, generated by parametric down-conversion, with light collected from a quasi-thermal source occupying a single spectral-temporal mode, representing light from a star. The interference signal was then used to recover the spatial autocorrelation of two source distributions: double slits with 1 mm and 2 mm separations. We compare the results to a theoretical model and find good agreement. The model also allows comparison with other weak, non-single-photon local-oscillator sources, such as coherent states.
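The link between the source geometry and the measured spatial correlation is the van Cittert-Zernike theorem: the degree of coherence across a baseline is the normalized Fourier transform of the source intensity distribution, so a double slit of separation d yields a |cos| fringe in visibility versus baseline. The sketch below is a minimal numerical illustration under assumptions of our own (point-like slits, a monochromatic source at an illustrative 810 nm, a 1 m slit-to-receiver distance, and a made-up function name); it is not code or parameters from the experiment.

```python
import numpy as np

def double_slit_visibility(separation_mm, baselines_m,
                           wavelength_m=810e-9, distance_m=1.0):
    """Van Cittert-Zernike prediction for two point-like slits.

    For slits separated by d at distance z, the magnitude of the degree
    of coherence versus baseline B is |cos(pi * d * B / (lambda * z))|.
    All parameter values here are illustrative assumptions.
    """
    d = separation_mm * 1e-3  # slit separation in metres
    return np.abs(np.cos(np.pi * d * baselines_m / (wavelength_m * distance_m)))

# Full coherence at zero baseline, and the first visibility null at
# B = lambda * z / (2 d); doubling the slit separation halves that baseline.
B = np.linspace(0.0, 1e-3, 5)
v1 = double_slit_visibility(1.0, B)  # 1 mm slits
v2 = double_slit_visibility(2.0, B)  # 2 mm slits: fringes twice as fast
```

The key qualitative check is that the 2 mm source produces visibility fringes with half the period of the 1 mm source, which is what distinguishing the two slit separations from the recovered autocorrelation relies on.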