Abstract
Photonic neuromorphic computing provides an effective route to overcoming the fundamental bottlenecks of the von Neumann architecture. However, the realization of complex optical nonlinearities by time-division multiplexing in traditional photonic neuromorphic computing limits both the information-processing speed and the scalability of these systems. Here, we break this bottleneck by introducing an accuracy-compensated compression strategy that co-optimizes latency and throughput without sacrificing computational fidelity. Our core innovation is an eigenvector-driven compression-decompression framework that projects input data into a task-relevant subspace, drastically reducing the data volume processed by the hidden layer. This paradigm shift enables photonic neuromorphic computing to perform image recognition and time-series prediction entirely in the optical domain at speeds of 100 million images and 600 million data points per second, respectively. We experimentally validate our approach on both a continuous-wave system based on off-the-shelf lasers and a spiking system built around a purpose-fabricated photonic neuro-synaptic chip, demonstrating that the compression-decompression framework recovers accuracy to the level of uncompressed networks—e.g., achieving >95% on MNIST and >84% on Fashion-MNIST—while cutting latency and hardware resource demands by an order of magnitude. This strategy establishes a scalable pathway towards high-throughput, low-power neuromorphic chips, bridging a critical gap between algorithmic efficiency and physical computing paradigms.
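The eigenvector-driven compression-decompression described above can be illustrated with a minimal numerical sketch. This is not the authors' code: it assumes a PCA-style construction in which inputs are projected onto the top-k eigenvectors of the training covariance (the "task-relevant subspace"), shrinking the vector handed to the hidden layer, with the same basis used for decompression. The data shapes and the value of k are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): compress inputs
# by projecting onto the leading eigenvectors of their covariance, then
# decompress by projecting back with the transpose of the same basis.

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # stand-in for flattened images
X = X - X.mean(axis=0)                   # center the data

# Eigendecomposition of the sample covariance matrix
cov = X.T @ X / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # reorder by descending variance
V = eigvecs[:, order]

k = 8                                    # compressed dimension (assumed)
Vk = V[:, :k]                            # top-k eigenvector basis

Z = X @ Vk                               # compression: 64 -> k features
X_rec = Z @ Vk.T                         # decompression: k -> 64 features

# Relative reconstruction error left after discarding 64 - k directions
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(Z.shape, round(err, 3))
```

The hidden layer then operates on the k-dimensional vector `Z` instead of the full 64-dimensional input, which is what reduces the data volume (and hence latency) in the optical domain; the accuracy-compensation step in the paper is what restores task performance despite the discarded directions.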

