Citation: Feng HG, Chen X, Zhu RZ et al. Seeing at a distance with multicore fibers. Opto-Electron Adv 7, 230202 (2024). doi: 10.29026/oea.2024.230202
Supplementary information for Seeing at a distance with multicore fibers |
Concept diagram of the Multicore Fiber Acquisition and Transmission Image System (MFAT). (a) Different modes in different channels of a multicore fiber are excited by light arriving at different incident angles (red and yellow). (b) Directly acquired image of the proximal end face of a multicore fiber. The red region of interest zooms in on the detail of a single fiber core. The yellow and orange circles mark the two areas used for decoding. (c) Workflow of traditional optical fiber-based image acquisition and transmission compared with that of MFAT.
Principle and implementation of MFAT. (a) The whole MFAT process: image acquisition, encoding, multi-channel parallel transmission, and decoding. (b) Calibration and solving process based on the digital aperture decoder. The yellow and orange circles indicate example averaging regions for the full-aperture and center-aperture images, respectively. (c) Optical setup used in the experiments. The real image on the screen is projected onto the distal end of the multicore fiber by a scaled combination of lens 1 (f1 = 12 mm) and objective lens 1 (50×). The pattern at the proximal end of the multicore fiber is imaged onto the image sensor by lens 2 (f2 = 30 mm) and objective lens 2 (40×).
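The calibration and solving step in (b) can be viewed as a linear inverse problem: the response of the per-core aperture readings to every object pixel is recorded first, and a new object is then recovered from its measurement. The sketch below illustrates this view with a regularized least-squares solver; the measurement model, function names, and regularization are illustrative assumptions rather than the exact digital aperture decoder of the paper.

```python
# Minimal sketch of calibration followed by decoding, assuming the mapping from
# object pixels to per-core aperture readings is approximately linear. Function
# and variable names are illustrative and not taken from the paper.
import numpy as np

def calibrate(measure, n_pixels, n_features):
    """Record the fiber response to each object pixel switched on alone."""
    A = np.zeros((n_features, n_pixels))
    for j in range(n_pixels):
        probe = np.zeros(n_pixels)
        probe[j] = 1.0                     # illuminate a single object pixel
        A[:, j] = measure(probe)           # e.g. full- and center-aperture means per core
    return A

def decode(A, b, reg=1e-3):
    """Recover the object from aperture readings b by regularized least squares."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ b)
    return np.clip(x, 0.0, None)           # intensities are non-negative

# Toy usage: a random linear system stands in for the calibrated fiber.
rng = np.random.default_rng(0)
A_true = rng.random((200, 25))             # 25-pixel object, 200 aperture features
measure = lambda x: A_true @ x
A = calibrate(measure, n_pixels=25, n_features=200)
obj = rng.random(25)
recovered = decode(A, measure(obj))
print(np.allclose(recovered, obj, atol=1e-2))
```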
Direct transmission performance of simple images. (a) Left: patterns at the output of a multicore fiber after a 10 m transmission distance for images of different resolutions, together with the reconstruction results after digital aperture decoding. The Structural Similarity Index Measure (SSIM) and fidelity decrease with increasing image resolution. Right: SSIM of the reconstruction results for all test images. (b) Left: transmission and reconstruction results over an ultra-long distance (1010 m). Right: SSIM of the reconstruction results for all test images. Sparse images can be fully reconstructed. The recorded intensity is normalized.
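The SSIM values quoted here can be reproduced for any pair of ground-truth and reconstructed images with standard tooling; the snippet below is a minimal example using scikit-image, with synthetic arrays standing in for the recorded data.

```python
# Illustrative SSIM check between a ground-truth image and its reconstruction.
# The arrays below are synthetic placeholders for the test and decoded images.
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
truth = rng.random((32, 32))                              # ground-truth test image
recon = np.clip(truth + 0.05 * rng.standard_normal((32, 32)), 0.0, 1.0)

score = ssim(truth, recon, data_range=1.0)                # 1.0 means identical images
print(f"SSIM = {score:.3f}")
```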
Results of multidimensional image transfer and reconstruction. (a) Natural grayscale images transferred and reconstructed by MFAT. Two polarization end-maps (P1 and P2) are recorded and the intensity is normalized. 5×5-pixel grayscale maps are reconstructed with 100% accuracy. (b) Acquisition and reconstruction results for RGB trichromatic images. Two polarization end-maps (P1 and P2) are recorded for decoding. 5×5-pixel color maps can be accurately reconstructed.
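Recording two polarization end-maps gives two independent sets of measurements of the same object. One simple way to picture how they are used jointly is to stack the two calibrated responses into a single linear system, so that both channels constrain the same estimate; the sketch below assumes such a linear model and synthetic data, as an illustration of the idea only.

```python
# Sketch of jointly decoding from two polarization end-maps (P1 and P2),
# assuming each channel has its own calibrated linear response; stacking the
# two systems lets both constrain a single object estimate. Values are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_pix = 25                                   # 5x5-pixel object
A_p1 = rng.random((120, n_pix))              # calibrated response, polarization P1
A_p2 = rng.random((120, n_pix))              # calibrated response, polarization P2
obj = rng.random(n_pix)

b_p1, b_p2 = A_p1 @ obj, A_p2 @ obj          # the two recorded end-maps (simulated)
A = np.vstack([A_p1, A_p2])                  # stack both channels into one system
b = np.concatenate([b_p1, b_p2])
recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(recovered, obj))
```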
Results of high-quality encoded transmission. (a) Flow chart of high-quality MFAT-based encoded acquisition and transmission. The cloud storage address of high-definition images is encoded into seven 6×6-pixel 2D maps for transmission through the multicore fiber (MCF). Fast access is then possible from the reconstructed address. (b) Quality of the transmitted reconstruction for different encoding sizes.
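As an illustration of how a short cloud-storage address could fit into seven 6×6 binary maps, one can place character i at pixel i of every map and store bit k of its 7-bit ASCII code in map k (up to 36 characters). This packing is an assumption made for the sketch below, not necessarily the encoding used in the experiment, and the URL is a placeholder.

```python
# Illustrative packing of a short URL into seven 6x6 binary maps: character i
# occupies pixel i of every map, and map k stores bit k of that character's
# 7-bit ASCII code. This scheme is an assumption for illustration only.
import numpy as np

def encode_address(url, size=6, n_planes=7):
    assert len(url) <= size * size, "address too long for one set of maps"
    maps = np.zeros((n_planes, size, size), dtype=np.uint8)
    for i, ch in enumerate(url):
        r, c = divmod(i, size)
        code = ord(ch)
        for k in range(n_planes):
            maps[k, r, c] = (code >> k) & 1
    return maps

def decode_address(maps):
    n_planes, size, _ = maps.shape
    chars = []
    for i in range(size * size):
        r, c = divmod(i, size)
        code = sum(int(maps[k, r, c]) << k for k in range(n_planes))
        if code:                              # zero pixels mark unused positions
            chars.append(chr(code))
    return "".join(chars)

maps = encode_address("https://example.com/img001")        # placeholder address
print(decode_address(maps))                                 # round-trips the string
```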
(a) Some bias exists in the reconstruction from a single polarization or wavelength channel when the complexity of the transmitted image exceeds the transmission capacity. P1 and P2 represent two different polarization directions; λ1 and λ2 are the red and green channels. Additional channels are exploited to further constrain the search for the optimal solution. (b) Simulation results of reconstruction with different algorithms at different noise levels. (c) Correlation coefficients of images acquired at different temperatures. (d) Feature values corresponding to different images at 25 °C. (e) Deviation of the feature values from the standard feature values at different temperatures.
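Panel (c) quantifies temperature stability by the correlation coefficient between images acquired at different temperatures. The snippet below shows how such a coefficient can be computed for one pair of images; the arrays and the choice of a 25 °C reference are placeholders, not the recorded data.

```python
# Illustrative computation of the correlation coefficient between an end-face
# image at the 25 degC reference and one at another temperature. The arrays
# below are synthetic stand-ins for the recorded images.
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((64, 64))                             # end-face image at 25 degC
shifted = ref + 0.02 * rng.standard_normal((64, 64))   # same scene at another temperature

corr = np.corrcoef(ref.ravel(), shifted.ravel())[0, 1]
print(f"correlation coefficient = {corr:.4f}")
```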