Abstract
Rising demands for bandwidth, speed, and energy efficiency are reshaping the landscape of computing beyond the limits of von Neumann electronics. Neuromorphic photonics, which uses light to emulate neural computation, offers ultrafast, massively parallel, and low-energy information processing, positioning integrated photonic neural networks (IPNNs) as promising hardware for next-generation artificial intelligence (AI). By combining the architectural efficiency of neuromorphic models with the physical advantages of integrated photonics, IPNNs perform high-speed, programmable linear operations during in-plane optical transmission while leaving room for compact, reconfigurable on-chip optical nonlinearities and memory functions. We first review the concepts and principles of the key building blocks of IPNNs: photonic synapses, neurons, and photonic memristors, which provide optical memory and storage. We then summarize representative IPNN architectures and their recent advances, including coherent, parallel, diffractive, and reservoir-computing schemes that deliver high-throughput, high-efficiency photonic neuromorphic computing. Finally, we outline practical considerations: calibration and stability of large-scale networks, routes toward co-integration with electronics, diffractive-interferometric hybrid architectures, and programmable photonic architectures for general AI purposes. We close with a forward-looking perspective on IPNNs with low energy consumption, robust photonic operation, and efficient training strategies, aiming to guide the maturation of general-purpose, low-power photonic AI.
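As a minimal sketch of the programmable linear operations central to coherent IPNNs, one can model a mesh of Mach-Zehnder interferometers (MZIs): assuming ideal, lossless 50:50 couplers and a Clements-style rectangular layout (details and parameter names below are illustrative, not taken from the paper), cascading phase-programmed 2x2 MZIs yields a programmable unitary transform applied to the optical field amplitudes.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of one MZI: external phase (phi), 50:50 coupler,
    internal phase (theta), 50:50 coupler. Assumes ideal, lossless couplers."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)    # ideal 50:50 directional coupler
    p_int = np.diag([np.exp(1j * theta), 1.0])        # internal phase shifter
    p_ext = np.diag([np.exp(1j * phi), 1.0])          # external phase shifter
    return bs @ p_int @ bs @ p_ext

def embed(u2, n, m):
    """Embed a 2x2 unitary acting on adjacent modes (m, m+1) into an n-mode identity."""
    u = np.eye(n, dtype=complex)
    u[m:m + 2, m:m + 2] = u2
    return u

def mesh_unitary(n, phases):
    """Cascade MZIs in a rectangular (Clements-style) layout into an n x n unitary."""
    u = np.eye(n, dtype=complex)
    k = 0
    for layer in range(n):
        for m in range(layer % 2, n - 1, 2):          # alternate even/odd mode pairs
            theta, phi = phases[k]
            u = embed(mzi(theta, phi), n, m) @ u
            k += 1
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 4                                             # number of waveguide modes
    n_mzi = n * (n - 1) // 2                          # MZIs needed for an n-mode mesh
    phases = rng.uniform(0, 2 * np.pi, size=(n_mzi, 2))
    U = mesh_unitary(n, phases)
    print("unitary:", np.allclose(U @ U.conj().T, np.eye(n)))   # passive mesh is lossless
    x = rng.normal(size=n) + 1j * rng.normal(size=n)            # input optical field amplitudes
    print("output field:", U @ x)                               # programmable linear operation
```

In this picture, "training" or "programming" the photonic layer amounts to setting the phase pairs (theta, phi); nonlinearities and memory, as discussed in the review, must be supplied by additional on-chip elements.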

