This study reports a new diffractive optical network design strategy that incorporates object scaling, translation, and rotation into the forward training model as uniformly distributed random variables, providing resilience against such variations at the input object plane. By guiding the evolution of the diffractive layers toward a scale-, shift- and rotation-invariant solution, this training strategy improves the all-optical blind inference accuracy by >30-70% under various unknown object transformations. This training method constitutes a promising approach for bringing the advantages of all-optical diffractive inference, such as low latency, power efficiency, and parallelism, to various machine vision applications.
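To make the training strategy concrete, below is a minimal sketch (not the authors' code) of the transformation step described above: for each training object, a scale factor, lateral shift, and in-plane rotation angle are drawn from uniform distributions before the object is numerically propagated through the diffractive layers. The parameter ranges (`scale_range`, `shift_range`, `angle_range`) and the helper name `random_object_transform` are illustrative assumptions, not values or code from the paper.

```python
import numpy as np
from scipy.ndimage import affine_transform

def random_object_transform(obj,
                            scale_range=(0.8, 1.2),
                            shift_range=(-8, 8),    # pixels (assumed range)
                            angle_range=(-10, 10),  # degrees (assumed range)
                            rng=None):
    """Apply a uniformly sampled scale, shift, and rotation to a 2-D object."""
    rng = np.random.default_rng() if rng is None else rng
    s = rng.uniform(*scale_range)
    dy, dx = rng.uniform(*shift_range, size=2)
    theta = np.deg2rad(rng.uniform(*angle_range))

    # Combined rotation and isotropic scaling about the image center,
    # expressed as the output-to-input coordinate map used by affine_transform.
    c, si = np.cos(theta), np.sin(theta)
    matrix = np.array([[c, -si], [si, c]]) / s
    center = (np.array(obj.shape) - 1) / 2
    offset = center - matrix @ (center + np.array([dy, dx]))

    return affine_transform(obj, matrix, offset=offset, order=1, mode='constant')
```

In this sketch, the transformed object would be generated on the fly for every training sample and then fed into the diffractive forward model, so that the learned layer patterns become tolerant to these input-plane variations rather than being tuned to a fixed object pose.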