Citation: Hao JY, Lin X, Lin YK, Chen MY, Chen RX et al. Lensless complex amplitude demodulation based on deep learning in holographic data storage. Opto-Electron Adv 6, 220157 (2023). doi: 10.29026/oea.2023.220157
[1] Reinsel D, Gantz J, Rydning J. The Digitization of the World from Edge to Core (International Data Corporation, Framingham, 2018).
[2] Flexible, scalable and reliable storage solution. Panasonic Connect. https://panasonic.net/cns/archiver/concept/
[3] Anderson P, Black R, Cerkauskaite A, Chatzieleftheriou A, Clegg J et al. Glass: a new media for a new era? In Proceedings of the 10th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 2018) (USENIX Association, 2018).
[4] Zhang JY, Gecevičius M, Beresna M, Kazansky PG. Seemingly unlimited lifetime data storage in nanostructured glass. Phys Rev Lett 112, 033901 (2014). doi: 10.1103/PhysRevLett.112.033901
[5] Dhar L, Curtis K, Fäcke T. Coming of age. Nat Photonics 2, 403–405 (2008). doi: 10.1038/nphoton.2008.120
[6] Lin X, Liu JP, Hao JY, Wang K, Zhang YY et al. Collinear holographic data storage technologies. Opto-Electron Adv 3, 190004 (2020). doi: 10.29026/oea.2020.190004
[7] Horimai H, Tan XD, Li J. Collinear holography. Appl Opt 44, 2575–2579 (2005). doi: 10.1364/AO.44.002575
[8] Project HSD: holographic storage device for the cloud. Microsoft. https://www.microsoft.com/en-us/research/project/hsd/
[9] Liu JP, Zhang L, Wu AN, Tanaka Y, Shigaki M et al. High noise margin decoding of holographic data page based on compressed sensing. Opt Express 28, 7139–7151 (2020). doi: 10.1364/OE.386953
[10] Katano Y, Muroi T, Kinoshita N, Ishii N, Hayashi N. Data demodulation using convolutional neural networks for holographic data storage. Jpn J Appl Phys 57, 09SC01 (2018). doi: 10.7567/JJAP.57.09SC01
[11] Shimobaba T, Kuwata N, Homma M, Takahashi T, Nagahama Y et al. Convolutional neural network-based data page classification for holographic memory. Appl Opt 56, 7327–7330 (2017). doi: 10.1364/AO.56.007327
[12] Lin X, Huang Y, Shimura T, Fujimura R, Tanaka Y et al. Fast non-interferometric iterative phase retrieval for holographic data storage. Opt Express 25, 30905–30915 (2017). doi: 10.1364/OE.25.030905
[13] Hao JY, Wang K, Zhang YY, Li H, Lin X et al. Collinear non-interferometric phase retrieval for holographic data storage. Opt Express 28, 25795–25805 (2020). doi: 10.1364/OE.400599
[14] Lin X, Hao JY, Wang K, Zhang YY, Li H et al. Frequency expanded non-interferometric phase retrieval for holographic data storage. Opt Express 28, 511–518 (2020). doi: 10.1364/OE.380365
[15] Lin X, Huang Y, Li Y, Liu JY, Liu JP et al. Four-level phase pair encoding and decoding with single interferometric phase retrieval for holographic data storage. Chin Opt Lett 16, 032101 (2018). doi: 10.3788/COL201816.032101
[16] Nobukawa T, Nomura T. Multilevel recording of complex amplitude data pages in a holographic data storage system using digital holography. Opt Express 24, 21001–21011 (2016). doi: 10.1364/OE.24.021001
[17] Katano Y, Nobukawa T, Muroi T, Kinoshita N, Ishii N. CNN-based demodulation for a complex amplitude modulation code in holographic data storage. Opt Rev 28, 662–672 (2021). doi: 10.1007/s10043-021-00687-z
[18] Bunsen M, Tateyama S. Detection method for the complex amplitude of a signal beam with intensity and phase modulation using the transport of intensity equation for holographic data storage. Opt Express 27, 24029–24042 (2019). doi: 10.1364/OE.27.024029
[19] Chen RX, Hao JY, Yu CY, Zheng QJ, Qiu XY et al. Dynamic sampling iterative phase retrieval for holographic data storage. Opt Express 29, 6726–6736 (2021). doi: 10.1364/OE.419630
[20] Horisaki R, Fujii K, Tanida J. Single-shot and lensless complex-amplitude imaging with incoherent light based on machine learning. Opt Rev 25, 593–597 (2018). doi: 10.1007/s10043-018-0452-1
[21] Zuo HR, Xu ZY, Zhang JL, Jia G. Visual tracking based on transfer learning of deep salience information. Opto-Electron Adv 3, 190018 (2020). doi: 10.29026/oea.2020.190018
[22] Liao MH, Zheng SS, Pan SX, Lu DJ, He WQ et al. Deep-learning-based ciphertext-only attack on optical double random phase encryption. Opto-Electron Adv 4, 200016 (2021). doi: 10.29026/oea.2021.200016
[23] Liao K, Chen Y, Yu ZC, Hu XY, Wang XY et al. All-optical computing based on convolutional neural networks. Opto-Electron Adv 4, 200060 (2021). doi: 10.29026/oea.2021.200060
[24] Sinha A, Lee J, Li S, Barbastathis G. Lensless computational imaging through deep learning. Optica 4, 1117–1125 (2017). doi: 10.1364/OPTICA.4.001117
[25] Wang H, Lyu M, Situ G. eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction. Opt Express 26, 22603–22614 (2018). doi: 10.1364/OE.26.022603
[26] Wang KQ, Dou JZ, Kemao Q, Di JL, Zhao JL. Y-Net: a one-to-two deep learning framework for digital holographic reconstruction. Opt Lett 44, 4765–4768 (2019). doi: 10.1364/OL.44.004765
[27] Wang KQ, Kemao Q, Di JL, Zhao JL. Y4-Net: a deep learning solution to one-shot dual-wavelength digital holographic reconstruction. Opt Lett 45, 4220–4223 (2020). doi: 10.1364/OL.395445
[28] Wang F, Bian YM, Wang HC, Lyu M, Pedrini G et al. Phase imaging with an untrained neural network. Light Sci Appl 9, 77 (2020). doi: 10.1038/s41377-020-0302-3
[29] Situ G. Deep holography. Light Adv Manuf 3, 278–300 (2022). doi: 10.37188/lam.2022.013
[30] Hao JY, Lin X, Lin YK, Song HY, Chen RX et al. Lensless phase retrieval based on deep learning used in holographic data storage. Opt Lett 46, 4168–4171 (2021). doi: 10.1364/OL.433955
[31] Goodman JW. Introduction to Fourier Optics 2nd ed (McGraw-Hill, Singapore, 1996).
[32] Tokoro M, Fujimura R. Single-shot detection of four-level phase modulated signals using inter-pixel crosstalk for holographic data storage. Jpn J Appl Phys 60, 022004 (2021). doi: 10.35848/1347-4065/abd86b
[33] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention 234–241 (Springer, 2015). doi: 10.1007/978-3-319-24574-4_28
[34] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 521, 436–444 (2015). doi: 10.1038/nature14539
[35] Ferguson TS. An inconsistent maximum likelihood estimate. J Am Stat Assoc 77, 831–834 (1982). doi: 10.1080/01621459.1982.10477894
[36] Kingma DP, Ba J. Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (2015).
[37] Korhonen J, You JY. Peak signal-to-noise ratio revisited: is simple beautiful? In Proceedings of the Fourth International Workshop on Quality of Multimedia Experience 37–38 (IEEE, 2012). doi: 10.1109/QoMEX.2012.6263880
[38] Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13, 600–612 (2004). doi: 10.1109/TIP.2003.819861
Conceptual diagram of HDS with complex-amplitude-modulated data page.
Simplified theoretical model of the diffraction process in HDS demodulation.
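The lensless demodulation relies on free-space diffraction of the modulated data page over a short distance before the intensity is captured by the camera. As an illustration of this model (not the paper's code), the sketch below propagates a complex field with the scalar angular-spectrum method; the wavelength and pixel pitch are assumed values, and only the diffraction distance z = 2 mm is taken from the figure captions.

```python
# Illustrative angular-spectrum propagation of a complex-amplitude data page.
# Wavelength and pixel pitch are assumptions; z = 2 mm follows the captions.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by distance z (all lengths in metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)               # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2      # < 0 for evanescent waves
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)            # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a uniform-amplitude page with a random four-level phase pattern.
rng = np.random.default_rng(0)
phase = rng.choice([0, np.pi/2, np.pi, 3*np.pi/2], size=(256, 256))
field_z = angular_spectrum_propagate(np.exp(1j * phase),
                                     wavelength=532e-9,  # assumed
                                     pitch=10e-6,        # assumed
                                     z=2e-3)
intensity = np.abs(field_z)**2                      # what the camera records
```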
Complex amplitude encoding modes and the corresponding diffraction features. (a) Four-level amplitude data page. (b) Four-level phase data page. (c) Diffraction intensity image of the complex-amplitude data page (z = 2 mm). (d) Uniform amplitude with no encoding. (e) Four-level phase data page. (f) Diffraction pattern of the phase-only data page (z = 2 mm). (g) Four-level amplitude data page. (h) Uniform phase without encoding. (i) Diffraction pattern of the amplitude-only data page (z = 2 mm). (j) Intensity distribution of the three intensity images in the same horizontal section.
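For concreteness, a minimal sketch of the three encoding modes compared in this figure is given below: a complex-amplitude page (amplitude plus phase), a phase-only page with uniform amplitude, and an amplitude-only page with uniform phase. The four phase levels follow set (a) of the MSE comparison figure; the amplitude levels and the page size are assumptions.

```python
# Sketch of the three encoding modes; amplitude levels and page size are
# assumptions, the four phase levels (0, pi/2, pi, 3*pi/2) follow the paper.
import numpy as np

rng = np.random.default_rng(1)
shape = (128, 128)

amp_levels = np.array([0.25, 0.5, 0.75, 1.0])        # assumed 4-level amplitudes
phase_levels = np.array([0, np.pi/2, np.pi, 3*np.pi/2])

amp_page = rng.choice(amp_levels, size=shape)        # four-level amplitude page
phase_page = rng.choice(phase_levels, size=shape)    # four-level phase page

complex_page   = amp_page * np.exp(1j * phase_page)  # (a)+(b): complex amplitude
phase_only     = np.exp(1j * phase_page)             # (d)+(e): uniform amplitude
amplitude_only = amp_page.astype(complex)            # (g)+(h): uniform phase
# Propagating each field (e.g. with the angular-spectrum sketch above) gives
# the corresponding diffraction intensity at z = 2 mm, panels (c), (f), (i).
```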
(a) Contour map of the intensity image in Fig. 3(c). (b) 3D contour map of the intensity image in Fig. 3(c). (c) Intensity distribution on the red line in (a) and (b).
(a–e) Symbols with different phase patterns. (f–j) Diffraction intensity distributions corresponding to symbols (a)–(e). (k) General architecture of the symbol with 3×3 data points. (l) Intensity profiles along the red line in the intensity distributions (f)–(j).
Change in the MSE when training the same neural network with two different phase-encoding datasets: (a) (0, π/2, π, 3π/2) and (b) (π/6, 2π/3, π, 3π/2). (1) and (3) are the same intensity images fed to the CNN, and (2) and (4) are the ground truths of the phase data pages. The pages at the 1st, 20th, 40th, and 60th epochs are the phase data pages retrieved from the intensity image by the trained CNN.
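A minimal, hypothetical training-step sketch is shown below to make this comparison concrete: the intensity image is fed to the CNN and the MSE between its output and the ground-truth phase page is minimized, with one dataset encoded on each of the two level sets. The model, optimizer, and tensor shapes are placeholders, not the authors' implementation.

```python
# Hypothetical training step: intensity image in, phase data page out,
# with the MSE loss tracked over epochs as in the figure.
import math
import torch
import torch.nn as nn

phase_level_sets = {
    "a": [0, math.pi / 2, math.pi, 3 * math.pi / 2],
    "b": [math.pi / 6, 2 * math.pi / 3, math.pi, 3 * math.pi / 2],
}

criterion = nn.MSELoss()

def train_step(model, optimizer, intensity, phase_gt):
    """One optimisation step of the phase-retrieval CNN (placeholders)."""
    optimizer.zero_grad()
    phase_pred = model(intensity)            # predicted phase data page
    loss = criterion(phase_pred, phase_gt)   # MSE against the ground truth
    loss.backward()
    optimizer.step()
    return loss.item()
```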
Architecture of the convolutional neural network.
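The exact network is defined in the paper; as a plausible stand-in, the sketch below shows a compact U-Net-style encoder-decoder (cf. ref. [33]) that maps a single-channel diffraction intensity image to a data page of the same size. Layer counts and channel widths here are illustrative assumptions, not the authors' architecture.

```python
# Compact U-Net-style encoder-decoder as an illustrative stand-in for the CNN.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class DemodNet(nn.Module):
    """Maps a single-channel intensity image to a single-channel data page."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                 # full resolution
        e2 = self.enc2(self.pool(e1))                     # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# e.g. DemodNet()(torch.randn(1, 1, 128, 128)).shape -> (1, 1, 128, 128)
```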
Entire process flow. The green and brown lines represent the training and testing processes, respectively.
(a) Recording and reading system of HDS. (b) Experimental setup. BS: beam splitter; HWP1, HWP2: half-wave plates; L1: collimating lens (f = 300 mm); L2–L7: relay lenses (f = 150 mm); SLM1: amplitude-modulated spatial light modulator; SLM2: phase-modulated spatial light modulator; P1, P2: linear polarizers, P1 horizontally polarized and P2 vertically polarized. (c) Calibration curve of SLM1. (d) Calibration curve of SLM2. (e) Intensity image captured by the CMOS.
Complex-amplitude encoded data page and the intensity image captured by CMOS. (a) Data page uploaded on the amplitude SLM1. (b) Phase data page uploaded on the phase SLM2. (c) Diffraction intensity image captured by the CMOS at a diffraction distance of 2 mm.
Experimental results. (a) Diffraction intensity image at z = 2 mm. (b) Ground truth of amplitude data page. (c) Amplitude predicted by CNN1. (d) Difference between retrieved amplitude (c) and amplitude ground truth (b). (e) Ground truth of the phase data page. (f) Phase data page predicted by CNN2. (g) Difference between the retrieved phase data page (f) and the phase ground truth (e).
Distribution and margin histogram of the demodulated complex-amplitude signals. The horizontal axis represents the phase and the vertical axis the amplitude. The histograms show the distribution of the decoded data. The 16 colors represent the 16 complex-amplitude classes. The arrows indicate the offset direction of the erroneous data points.
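The 16 classes correspond to the 4 amplitude levels combined with the 4 phase levels. A hedged decoding sketch is given below, assigning each retrieved data point to the nearest level in each dimension; the level values are assumptions consistent with the earlier sketches, and the phase distance is taken on the circle.

```python
# Nearest-level decoding of the demodulated complex-amplitude signal into
# 4 amplitude levels x 4 phase levels = 16 classes; level values are assumed.
import numpy as np

amp_levels = np.array([0.25, 0.5, 0.75, 1.0])
phase_levels = np.array([0, np.pi/2, np.pi, 3*np.pi/2])

def decode(amp_pred, phase_pred):
    """Map each retrieved data point to the nearest amplitude/phase level."""
    amp_idx = np.argmin(np.abs(amp_pred[..., None] - amp_levels), axis=-1)
    # wrap the phase difference onto (-pi, pi] so 2*pi is equivalent to 0
    dphi = np.angle(np.exp(1j * (phase_pred[..., None] - phase_levels)))
    phase_idx = np.argmin(np.abs(dphi), axis=-1)
    return amp_idx, phase_idx          # each in {0, 1, 2, 3}: 16 combinations
```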
PSNR and SSIM of the retrieved phase and amplitude data pages in the training and test datasets: (a) PSNR and (b) SSIM.
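As a usage note, the PSNR and SSIM of a retrieved page against its ground truth can be evaluated as sketched below with scikit-image; the phase range used for data_range is an assumption and must match the value range of the pages being compared.

```python
# PSNR/SSIM evaluation of a retrieved data page against its ground truth.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def page_quality(retrieved, ground_truth, data_range):
    psnr = peak_signal_noise_ratio(ground_truth, retrieved, data_range=data_range)
    ssim = structural_similarity(ground_truth, retrieved, data_range=data_range)
    return psnr, ssim

# Example with a phase page spanning 0 to 3*pi/2 (assumed range).
rng = np.random.default_rng(2)
gt = rng.choice([0, np.pi/2, np.pi, 3*np.pi/2], size=(128, 128))
noisy = gt + 0.05 * rng.standard_normal(gt.shape)
print(page_quality(noisy, gt, data_range=3 * np.pi / 2))
```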
BER distribution of all the complex-amplitude data pages in the test dataset. (a) Amplitude BER. (b) Phase BER.
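The amplitude BER and phase BER can each be computed by comparing the decoded level indices with the encoded ones. The sketch below assumes that each four-level symbol carries two bits mapped in natural binary order, which is an assumption rather than the paper's stated mapping.

```python
# Hedged BER sketch for four-level (2-bit) symbols; the index-to-bit mapping
# (natural binary order) is an assumption.
import numpy as np

def symbol_error_rate(decoded_idx, encoded_idx):
    """Fraction of four-level symbols decoded incorrectly."""
    return float(np.mean(decoded_idx != encoded_idx))

def bit_error_rate(decoded_idx, encoded_idx, bits_per_symbol=2):
    """BER, assuming each level index maps directly to two bits."""
    diff = np.bitwise_xor(decoded_idx.astype(np.uint8),
                          encoded_idx.astype(np.uint8))
    wrong_bits = np.unpackbits(diff).sum()
    return wrong_bits / (decoded_idx.size * bits_per_symbol)

# Apply separately to the amplitude-level indices and the phase-level indices
# to obtain the amplitude BER (a) and the phase BER (b).
```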
Superposition distribution of the errors of all retrieved images in the dataset. (a) Pixel differences of the amplitude training set. (b) Error data of the decoded amplitude pages in the training set. (c) Pixel differences of the phase training set. (d) Error data of the decoded phase pages in the training set. (e) Pixel differences of the amplitude test set. (f) Error data of the decoded amplitude pages in the test set. (g) Pixel differences of the phase test set. (h) Error data of the decoded phase pages in the test set. (k) Intensity image under the influence of noise.
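A short sketch of how such superposition error maps could be accumulated over a dataset is given below; pages is a placeholder iterable of (decoded, ground-truth) index arrays, not the paper's data structures.

```python
# Accumulate, over every page in a set, the pixels decoded incorrectly.
import numpy as np

def error_superposition(pages, shape):
    """Count, per pixel, how many pages in the set were decoded wrongly there."""
    error_map = np.zeros(shape, dtype=np.int32)
    for decoded_idx, encoded_idx in pages:
        error_map += (decoded_idx != encoded_idx)
    return error_map
```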