Abstract
Deep learning has transformed perception and inference but remains constrained by memory–compute bottlenecks, latency, and energy costs. All-optical diffractive deep neural networks (D2NNs) alleviate these limitations by computing with light, yet most implementations trade image formation for direct classification, limiting downstream processing. Here we introduce a polarization-multiplexed meta-neural network (PMNN) that unifies imaging and classification within a single, static optical platform. The PMNN employs cascaded metasurfaces whose meta-atoms jointly harness geometric (Pancharatnam–Berry) and propagation phases to engineer distinct phase profiles for left- and right-circularly polarized (LCP and RCP) channels. This polarization contrast enables dual-channel functionality. Under LCP illumination, the system performs lens-like imaging, whereas under RCP illumination, it executes all-optical classification via diffractive routing to predefined detection regions. Built on a differentiable angular-spectrum forward model and trained end-to-end, the PMNN achieves 96.51% accuracy on handwritten-digit recognition while delivering an imaging mean squared error of 5.38×10⁻³, a peak signal-to-noise ratio of 22.70 dB, and a structural similarity index measure of 0.90. By coupling perception with inference without mechanical switching or electronic post-processing, the proposed approach enhances utility, reduces computational load, and offers a practical path toward compact, scalable, and energy-efficient intelligent optical systems.
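
For readers unfamiliar with the angular-spectrum forward model referenced in the abstract, the sketch below illustrates one propagation step between diffractive layers. It is a minimal, illustrative NumPy implementation, not the authors' code: the function name, grid size, wavelength, pixel pitch, and propagation distance are all assumptions chosen for the example.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a sampled complex optical field by `distance` using the
    angular-spectrum method (illustrative sketch, not the paper's code).

    field      : 2-D complex array, the field at the input plane
    wavelength : illumination wavelength (same length unit as pitch/distance)
    pitch      : sampling interval of the field grid
    distance   : propagation distance to the next plane
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)      # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=pitch)      # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)

    # Longitudinal wavenumber; evanescent components are discarded.
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    propagating = arg > 0
    kz = 2 * np.pi * np.sqrt(np.where(propagating, arg, 0.0))

    transfer = np.where(propagating, np.exp(1j * kz * distance), 0.0)
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * transfer)

# Hypothetical usage: a unit-amplitude plane wave propagated 5 mm at 532 nm.
if __name__ == "__main__":
    u0 = np.ones((256, 256), dtype=complex)
    u1 = angular_spectrum_propagate(u0, wavelength=532e-9, pitch=8e-6, distance=5e-3)
    print(np.abs(u1).mean())  # amplitude is preserved for a plane wave
```

In an end-to-end trainable setting such as the PMNN, each diffractive layer would apply a learnable, polarization-dependent phase mask to `field` before a propagation step like this one, with the chain implemented in an autodiff framework so the phase profiles can be optimized by gradient descent.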

