Presentation
3 October 2024
Energy-efficient acceleration of deep neural networks with silicon photonics
Avinash Karanth
Abstract
Specialized hardware accelerators have been proposed to improve the throughput and energy efficiency of deep neural network (DNN) models. However, collective data movement primitives such as multicast and broadcast, which are required for multiply-and-accumulate (MAC) computation in DNN models, are expensive in both energy and latency when implemented with electrical networks. Emerging technologies such as silicon photonics can inherently provide efficient implementations of multicast and broadcast operations, making photonics more amenable to exploiting the parallelism within DNN models. Moreover, when coupled with other unique features such as low energy consumption and high channel capacity, silicon photonics could provide a viable technology for scaling DNN acceleration. In this talk, I will discuss an analog photonic architecture for scaling DNN acceleration using microring resonators and Mach-Zehnder modulators. Using detailed device models, I will discuss an efficient combined broadcast and multicast data distribution scheme that leverages parameter sharing through wavelength-division-multiplexed (WDM) dot product processing.
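The WDM dot product mentioned above can be illustrated with a minimal numerical sketch. This is not code from the talk; it assumes an idealized, lossless link in which each input element is intensity-modulated onto its own wavelength, a microring tuned to that wavelength attenuates it by a weight normalized to [0, 1], and a single photodetector sums the optical power across all wavelengths, producing the dot product in one shot.

```python
def wdm_dot_product(weights, inputs):
    """Idealized WDM dot product: per-wavelength powers weighted and summed.

    weights -- microring transmission per wavelength, each in [0, 1]
    inputs  -- modulated input power per wavelength
    """
    assert all(0.0 <= w <= 1.0 for w in weights), "ring transmission must lie in [0, 1]"
    # Each wavelength channel carries input power x_i, scaled by ring transmission w_i.
    weighted_channels = [w * x for w, x in zip(weights, inputs)]
    # The photodetector integrates total optical power across all wavelengths.
    return sum(weighted_channels)

# Example: a 4-element dot product computed "in one shot" across 4 wavelengths.
w = [0.2, 0.5, 0.9, 0.1]
x = [1.0, 0.5, 0.25, 0.0]
print(wdm_dot_product(w, x))  # 0.2*1.0 + 0.5*0.5 + 0.9*0.25 + 0.1*0.0 = 0.675
```

In a real device, weights outside [0, 1] (and signed values) require additional encoding, e.g. differential detection; the sketch only captures the summation-by-photodetection idea.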
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Avinash Karanth "Energy-efficient acceleration of deep neural networks with silicon photonics", Proc. SPIE PC13113, Photonic Computing: From Materials and Devices to Systems and Applications, PC131130K (3 October 2024); https://doi.org/10.1117/12.3028398
KEYWORDS
Neural networks
Silicon photonics