This paper proposes to adopt advanced monolithic silicon-photonics integrated-circuit manufacturing capabilities to realize a system-on-chip photonic-electronic linear-algebra accelerator. The accelerator features broadband incoherent photo-detection and high-dimensional operations of consecutive matrix-matrix multiplications, enabling substantial leaps in computation density and energy efficiency. Practical considerations of the power and area overhead introduced by photonic-electronic on-chip conversions, integration, and calibration are addressed through holistic co-design approaches, targeting attention-head-mechanism-based deep-learning neural networks used in Large Language Models and other emergent applications.
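To make the target workload concrete, the sketch below (an illustration, not taken from the paper) shows the chain of consecutive matrix-matrix multiplications in a single scaled dot-product attention head; the dimension names seq_len, d_model, and d_head are assumptions chosen for illustration.

```python
# Illustrative sketch of one attention head: three projection matmuls,
# one score matmul, a row-wise softmax, and one value matmul -- the
# consecutive matrix-matrix products an accelerator of this kind targets.
import numpy as np

def attention_head(X, W_q, W_k, W_v):
    """Scaled dot-product attention for a single head."""
    Q = X @ W_q                                    # (seq_len, d_head)
    K = X @ W_k                                    # (seq_len, d_head)
    V = X @ W_v                                    # (seq_len, d_head)
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # (seq_len, d_head)

# Example: a 128-token sequence, model width 512, head width 64 (assumed values).
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 128, 512, 64
X = rng.standard_normal((seq_len, d_model))
W_q, W_k, W_v = (rng.standard_normal((d_model, d_head)) for _ in range(3))
print(attention_head(X, W_q, W_k, W_v).shape)      # (128, 64)
```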
Tzu-Chien Hsueh, Yeshaiahu Fainman, and Bill Lin
"ChatGPT at the speed of light: Monolithic photonic-electronic linear-algebra accelerators for large language models", Proc. SPIE PC13113, Photonic Computing: From Materials and Devices to Systems and Applications, PC131130I (3 October 2024); https://doi.org/10.1117/12.3027309