Presentation + Paper
Fat-U-Net: non-contracting U-Net for free-space optical neural networks
13 March 2024
Riad Ibadulla, Constantino C. Reyes-Aldasoro, Thomas M. Chen
Proceedings Volume 12903, AI and Optical Data Sciences V; 1290308 (2024) https://doi.org/10.1117/12.3008618
Event: SPIE OPTO, 2024, San Francisco, California, United States
Abstract
This paper describes the advantages and disadvantages of adapting the U-Net architecture from a traditional GPU to a 4f free-space optical environment. The implementation is based on an optical acceleration approach called FatNet, and the adaptation is therefore called Fat-U-Net. Fat-U-Net omits the pooling operations of U-Net but maintains a similar number of weights and pixels per layer as U-Net. Our results demonstrate that the conversion to Fat-U-Net offers a significant speed improvement for segmentation tasks: Fat-U-Net achieves a remarkable ×538 acceleration in inference compared with U-Net when both are run on optical devices, and a ×37 acceleration in inference compared with U-Net run on a GPU. The performance loss after conversion remains minimal on two datasets, with reductions of 4.24% in IoU on the Oxford-IIIT Pet dataset and 1.76% in IoU on HeLa cell nucleus segmentation.
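To make the "non-contracting" idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' implementation) of how a U-Net level can be replaced by a full-resolution layer whose channel count is reduced so that the total number of pixels per layer stays roughly the same. The helper name fat_channels, the layer sizes, and the use of PyTorch are all assumptions made for illustration only.

# Minimal sketch, assuming PyTorch and hypothetical layer sizes; not the paper's code.
import torch
import torch.nn as nn


def fat_channels(unet_channels: int, unet_hw: int, input_hw: int) -> int:
    # Channel count for a non-contracting layer with the same pixel budget
    # as a U-Net level of `unet_channels` maps at `unet_hw` x `unet_hw`.
    return max(1, (unet_channels * unet_hw * unet_hw) // (input_hw * input_hw))


class FatUNetSketch(nn.Module):
    # Toy non-contracting network: every convolution runs at the input resolution,
    # with no pooling and no upsampling, which suits a 4f optical convolution setup.

    def __init__(self, in_ch=3, num_classes=2, input_hw=128):
        super().__init__()
        # Hypothetical U-Net encoder levels: (channels, spatial size) per stage.
        unet_levels = [(64, input_hw), (128, input_hw // 2),
                       (256, input_hw // 4), (512, input_hw // 8)]
        layers, prev = [], in_ch
        for ch, hw in unet_levels:
            ch_fat = fat_channels(ch, hw, input_hw)  # e.g. 64, 32, 16, 8 channels
            layers += [nn.Conv2d(prev, ch_fat, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            prev = ch_fat
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(prev, num_classes, kernel_size=1)

    def forward(self, x):
        # Output keeps the full input resolution throughout.
        return self.head(self.body(x))


if __name__ == "__main__":
    net = FatUNetSketch()
    out = net(torch.randn(1, 3, 128, 128))
    print(out.shape)  # torch.Size([1, 2, 128, 128])

In this toy version the pixel budget of each replaced level is preserved (64×128², 32×128², 16×128², 8×128² match the hypothetical 64×128², 128×64², 256×32², 512×16² U-Net stages), which reflects the abstract's claim of keeping a similar number of weights and pixels per layer while removing the contracting path.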
Conference Presentation
© 2024 Published by SPIE. Downloading of the abstract is permitted for personal use only.
Riad Ibadulla, Constantino C. Reyes-Aldasoro, and Thomas M. Chen "Fat-U-Net: non-contracting U-Net for free-space optical neural networks", Proc. SPIE 12903, AI and Optical Data Sciences V, 1290308 (13 March 2024); https://doi.org/10.1117/12.3008618
KEYWORDS
Image segmentation, Free space optics, Convolution, Neural networks, Optical components, Feature extraction, Network architectures
