
2024 Vol. 7, No. 4

Cover story: Wan YJ, Liu XD, Wu GZ et al. Efficient stochastic parallel gradient descent training for on-chip optical processor. Opto-Electron Adv 7, 230182 (2024).

To cope with the mixing between different channels in space-division multiplexing (SDM) optical communication systems, integrated reconfigurable optical processors have been used for optical switching and for mitigating channel crosstalk. However, efficient online training is intricate and challenging, particularly when a large number of channels is involved. It is therefore of great significance to find an efficient optimization algorithm suited to configuring the optical matrix of an on-chip optical processor.

Recently, Professor Jian Wang’s group, the Multi-Dimensional Photonics Laboratory (MDPL) at the Wuhan National Laboratory for Optoelectronics (WNLO) of Huazhong University of Science and Technology, reported using the stochastic parallel gradient descent (SPGD) algorithm to configure an integrated optical processor and applying it in an SDM optical communication system to compensate for the crosstalk between different channels, with less computation than the traditional gradient descent (GD) algorithm. They designed and fabricated a 6×6 on-chip optical processor on a silicon platform to implement optical switching and descrambling, assisted by online training with the SPGD algorithm. They then applied the SPGD-configured on-chip processor in optical communications for optical switching and for efficiently mitigating channel crosstalk in SDM systems. Compared with the traditional GD algorithm, the SPGD algorithm was found to deliver better performance at lower computational cost, especially when the matrix scale is large, suggesting its potential for optimizing large-scale optical matrix computation acceleration chips.
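The computational advantage of SPGD can be illustrated with a minimal sketch: all control parameters are perturbed simultaneously with random signs, and the resulting change in a single measured cost determines the update, so each iteration needs only two cost evaluations regardless of the number of parameters. The cost function, parameter count, gain, and perturbation size below are illustrative assumptions standing in for the chip's measured crosstalk metric, not the authors' actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the measured cost (e.g. residual crosstalk) as a
# function of N phase-shifter settings; it is 0 when theta == target.
# On a real chip, cost() would be a power/crosstalk measurement.
N = 12  # number of control parameters (assumed for illustration)
target = rng.uniform(0, 2 * np.pi, N)

def cost(theta):
    return float(np.sum(1.0 - np.cos(theta - target)))

def spgd(theta, gain=1.0, delta=0.1, iters=3000):
    """Two-sided stochastic parallel gradient descent (illustrative)."""
    for _ in range(iters):
        # Perturb ALL parameters at once with random +/- delta:
        # only two cost evaluations per step, independent of N.
        d = delta * rng.choice([-1.0, 1.0], size=theta.shape)
        dJ = cost(theta + d) - cost(theta - d)
        # Move opposite to the sensed cost change along the perturbation.
        theta = theta - gain * dJ * d
    return theta

theta = spgd(np.zeros(N))
print(cost(theta))  # should end up much smaller than cost(np.zeros(N))
```

In contrast, a finite-difference GD step would probe each of the N parameters separately (N+1 or 2N measurements per iteration), which is why the per-iteration cost gap widens as the matrix scale grows.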



ISSN (Print) 2096-4579
ISSN (Online) 2097-3993
CN 51-1781/TN
Editor-in-Chief:
Prof. Xiangang Luo