
Liu KX, Wu JC, He ZH, Cao LC. 4K-DMDNet: diffraction model-driven network for 4K computer-generated holography. Opto-Electron Adv 6, 220135 (2023). doi: 10.29026/oea.2023.220135

Article Open Access

4K-DMDNet: diffraction model-driven network for 4K computer-generated holography

  • Deep learning offers a novel opportunity to achieve both high-quality and high-speed computer-generated holography (CGH). Current data-driven deep learning algorithms face the challenge that the labeled training dataset limits the training performance and generalization. Model-driven deep learning introduces the diffraction model into the neural network. It eliminates the need for labeled training datasets and has been extensively applied to hologram generation. However, existing model-driven deep learning algorithms suffer from insufficient constraints. In this study, we propose a model-driven neural network capable of high-fidelity 4K computer-generated hologram generation, called the 4K Diffraction Model-driven Network (4K-DMDNet). The constraint on the reconstructed images in the frequency domain is strengthened, and a network structure that combines the residual method and the sub-pixel convolution method is built, which effectively enhances the fitting ability of the network for inverse problems. The generalization of the 4K-DMDNet is demonstrated with binary, grayscale, and 3D images. High-quality full-color optical reconstructions of the 4K holograms have been achieved at the wavelengths of 450 nm, 520 nm, and 638 nm.
  • Computer-generated holography is a technology that can accurately modulate the light field distribution in three-dimensional (3D) space. It has been widely applied in various fields, including holographic display1-4, metasurface design5-8, and laser fabrication9. Currently, only a few spatial light modulators (SLMs) can modulate both the amplitude and the phase of the wavefront simultaneously. The complex amplitude hologram (CAH) therefore needs to be converted to an amplitude hologram or a phase-only hologram (POH) for display. Compared with the amplitude hologram, the POH provides higher diffraction efficiency and avoids the twin image in the reconstruction. The POH is calculated only with the amplitude constraints on the object plane, without any phase constraints, so the solution is not unique. Furthermore, the amplitude on the hologram plane is constrained to be equal to 1, so an exact solution may not exist. POH generation is therefore a typical ill-posed inverse problem. The algorithms used to fit this inverse problem are mostly iterative methods, such as the Gerchberg–Saxton (GS) algorithm10, 11, the Wirtinger algorithm12, and non-convex optimization algorithms13. But they are time-consuming and can only find local optimal solutions with speckle noise.

    Since the high-performance network structure ResNet was proposed in 201614, the powerful ability of deep learning on highly ill-posed inverse problems has been gradually demonstrated15-17. Deep learning has been widely used for various types of hologram generation, owing to its parallel operational framework and complex structure based on convolution layers. Compared with the traditional POH algorithms4, 18-21, learning-based POH generation has great potential to realize real-time and speckle-free holographic display. Learning-based POH methods can be divided into two main types: data-driven deep learning1, 22-29 and model-driven deep learning30-38.

    In data-driven deep learning, the neural networks fit the inverse problem by learning the coding method from approximate solutions calculated by traditional algorithms. The image datasets and their corresponding approximate POHs compose the labeled training datasets. The network is trained to extract the non-linear mapping between the image dataset and the label POHs, and can be regarded as a black box. The training process is realized by calculating the loss function between the output POHs and the label POHs, and updating the network parameters by gradient descent algorithms. The trained data-driven network can effectively speed up the POH generation for images outside the training dataset1, 22 and has the advantage of a simple network structure. However, since a data-driven neural network purely learns the mapping between the image dataset and the label POHs, the quality of the labeled training dataset limits the ceiling of the training performance and POH generalization. Moreover, the label POHs need to be generated in advance, which requires large datasets, huge computing resources, and long calculation times. These two challenges are particularly prominent in learning-based POH generation24-26, which hinders the practical application of the data-driven method. To avoid using iterative algorithms in the label POH generation process, random phase patterns and their propagated speckle-like intensity patterns have been used for training27. This method was also used for binary amplitude holograms28. However, the trained networks work well only for speckle-like patterns.

    To address these two challenges, model-driven deep learning has been proposed for POH generation30, 31. In model-driven deep learning, the forward process model of the inverse problem is used as the constraint to train the networks. It can directly fit the inverse problem without being limited by approximate solutions. For POH generation, the physical diffraction model is incorporated into the network structure. The loss function can be calculated directly between the image dataset and the output reconstructions, which effectively eliminates the need for label POHs. The network can automatically learn the latent encodings of POHs in an unsupervised way. Current studies have successfully explored and demonstrated the applicability of model-driven deep learning to various POH generation tasks. Peng et al. optimized the diffraction model with camera-in-the-loop (CITL) strategies, obtaining speckle-free holographic images with coherent light sources32, 33 and partially coherent light sources34. Shimobaba et al. realized zoomable reconstructions larger than the holograms35. Liu et al. proposed the phase dual-resolution network (PDRNet) structure to learn the mapping on the same optical plane rather than across optical planes36. Sun et al. jointly solved the dual tasks of amplitude reconstruction and phase smoothing by loss function optimization37. In our previous research, the model-driven network Holo-Encoder could generate one single-wavelength 4K POH in 0.15 s38. However, due to insufficient constraints, existing model-driven networks face the problem of limited convergence39, 40. Transfer learning on the single target image is then needed for better reconstruction quality, creating a trade-off between reconstruction quality and calculation speed that limits practical application. The combined-driven method, which combines the advantages of the data-driven and model-driven methods, was proposed and achieved high-quality reconstructions of 3D objects41. But it still faces the time-consuming generation of the labeled training dataset.

    In this paper, we systematically investigate existing learning-based POH research and analyze in particular the advantages of the model-driven method over the data-driven method. We propose a high-fidelity 4K POH generation network, called the 4K Diffraction Model-driven Network (4K-DMDNet). The constraint on the reconstructed images in the frequency domain is strengthened. The network structure combines the residual method and the sub-pixel convolution method, which effectively enhances the fitting ability of the network for inverse problems. The generalization of the 4K-DMDNet is demonstrated with both binary and grayscale images, achieving high-fidelity and high-speed POH generation for 4K display.

    In contrast to the data-driven deep learning method, the 4K-DMDNet consists of a convolutional neural network (CNN) for POH generation and a diffraction model for loss function calculation, as shown in Fig. 1. The output of a data-driven neural network is the predicted POHs, and the loss function calculates the error between the output POHs and the label POHs, as shown in Fig. 1(a). This requires the preparation of a labeled training dataset consisting of the image dataset and its corresponding label POHs. The label POHs need to be generated by iterative algorithms, which is time-consuming and limits the ceiling of the training performance and generalization. In comparison, the proposed 4K-DMDNet works in an unsupervised way by incorporating the diffraction model as part of the neural network, as shown in Fig. 1(b). The diffraction model simulates the light field propagation process, so the loss function can be calculated directly between the image dataset and the output images. The latent POH encodings can be sought without the label POHs by the 4K-DMDNet:

    Figure 1.  Training processes of (a) data-driven deep learning and (b) 4K-DMDNet, respectively.
    {\rm{find}}\;{\boldsymbol{H}}\quad {\rm{s}}.{\rm{t}}.\;{\left| {{\rm{PROP}}\left( {\boldsymbol{H}} \right)} \right|^2} = {\boldsymbol{I}}\;, (1)

    where H represents the POH, I represents the target image, and PROP represents the propagation process from the hologram plane to the object plane. Moreover, the 4K-DMDNet directly learns the optimal encoding POHs, avoiding the quality degradation caused by an extra complex-to-phase-only conversion operation. Since no random phase is added in the whole calculation process, the ubiquitous speckle noise is significantly suppressed.
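    To make this unsupervised strategy concrete, a minimal PyTorch-style sketch of one training step is given below; the names `net`, `propagate`, and `loss_fn` are illustrative placeholders of our own, not the released code. The loss is evaluated between the target image and the numerically propagated reconstruction, so no label POH is ever required.

```python
import torch

def training_step(net, propagate, image, optimizer, loss_fn):
    """One model-driven training step: a minimal sketch, assuming `net` maps
    an image to a phase map and `propagate` is a differentiable diffraction
    model (e.g., the Fresnel model of Eq. (3))."""
    optimizer.zero_grad()
    phase = net(image)                     # predicted POH, values in [-pi, pi]
    field = torch.exp(1j * phase)          # unit-amplitude hologram field
    recon = propagate(field).abs() ** 2    # |PROP(H)|^2 on the object plane
    loss = loss_fn(recon, image)           # e.g., the NPCC of Eq. (10)
    loss.backward()                        # gradients flow through the diffraction model
    optimizer.step()
    return loss.item()
```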

    Once the network training is complete, the network parameters can be solidified into a computer chip to realize rapid hologram generation. The generation and reconstruction process of 4K POHs by the 4K-DMDNet is shown in Fig. 2. A set of images or a series of video frames outside the image dataset is input into the network in sequence. The trained network predicts the POHs of the images, and the output POHs are uploaded to the 4K SLM. To realize full-color optical reconstruction, the 4K-DMDNet is trained in three different wavelength versions corresponding to the three channels of the RGB image. The POHs corresponding to the three channels are loaded onto the SLM in turn, and three illumination lasers with different wavelengths are switched synchronously. This time-multiplexing method can present high-quality color images without interference-induced noise. Finally, observers can see the optically reconstructed image or video at the set distance.

    Figure 2.  Generation and reconstruction process of 4K POHs by the 4K-DMDNet. The sub-pixel convolution method and the oversampling method play decisive roles in achieving this.

    The U-Net serving as the CNN of the 4K-DMDNet is shown in Fig. 3(a). It consists of a contracting downsampling path to capture context and a symmetric expanding upsampling path that enables precise localization, which achieves excellent performance on various image-to-image problems42. The skip connections from the downsampling path to the upsampling path are another feature of the U-Net, which makes the output images include more details. Here the downsampling path is a residual neural network, consisting of downsampling blocks and corresponding residual blocks. Each block is composed of two sets of batch normalization, a nonlinearity (ReLU), and a 3×3 convolutional layer, stacked one above the other. The residual block effectively solves the degradation problem with skip connections14. The downsampling is realized directly by convolutional layers with a stride of two. The upsampling path consists of upsampling blocks, as shown in Fig. 3(b). In order to achieve 4K hologram generation, the upsampling is realized by the sub-pixel convolution method43. It includes convolutional layers to increase the channel number and a pixel shuffle layer that turns a tensor of shape H×W×2C into 2H×2W×C/2. The residual method is also used here. Finally, the output layer of the U-Net is a tanh function that limits the POH values to the range [−π, π].

    Figure 3.  (a) U-Net neural network architecture of the 4K-DMDNet. (b) Upsampling block architecture. The numbers in brackets give the kernel size and the stride of the convolutional layers, respectively.
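    A hypothetical PyTorch rendering of the described block (a sketch for illustration, not the authors' released implementation) makes the composition explicit: two stacks of batch normalization, ReLU, and a 3×3 convolution, with a stride of two for downsampling and an identity skip connection when shapes permit.

```python
import torch.nn as nn

class PreActBlock(nn.Module):
    """Two stacks of BN -> ReLU -> 3x3 conv, as described in the text.
    With stride=2 the block downsamples; with stride=1 and matching channel
    counts it acts as a residual block. Illustrative sketch only."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),
        )
        self.residual = (stride == 1 and in_ch == out_ch)

    def forward(self, x):
        y = self.body(x)
        return x + y if self.residual else y
```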

    The number of learnable parameters reflects the learning capability of a network framework; for a CNN, it is determined by the number and size of the convolution kernels. When the complexity of the pending data exceeds the learning capability, the network will not converge stably. Increasing the number of convolutional layers is the usual way to obtain more learnable parameters, but the deeper the network, the more difficult the training. The upsampling method is another important factor that prominently affects the number of learnable parameters. Here we use the sub-pixel convolution method to achieve 4K hologram generation. It increases the learnable parameters of the upsampling path by four times without changing the network depth. To highlight the strong learning capability of the sub-pixel convolution method, we compare it with two other common upsampling methods, as shown in Fig. 4.

    Figure 4.  (a–c) Schematic diagrams of the transposed convolution, NN-resize convolution, and sub-pixel convolution, with their corresponding numerical simulations.
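    The four-fold claim above can be checked directly: the convolution in a sub-pixel upsampling step outputs four times as many channels as the one in a transposed or NN-resize convolution step, and therefore holds roughly four times as many weights. A quick count with a hypothetical width of C = 64 channels (our own toy value, not the paper's exact layer configuration):

```python
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

C = 64
# Transposed convolution / NN-resize convolution: the conv outputs C channels.
plain = nn.Conv2d(C, C, 3, padding=1)
# Sub-pixel convolution: the conv outputs 4*C channels before the pixel
# shuffle, so it holds roughly four times as many learnable parameters.
subpixel = nn.Conv2d(C, 4 * C, 3, padding=1)
print(n_params(plain), n_params(subpixel))  # 36928 vs 147712 (~4x)
```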

    The task of the upsampling path is to gradually upscale the low-resolution feature map to the target size. The transposed convolution method, also called the deconvolution method, realizes the upsampling by zero-padding around the input layer to double its size and then convolving with a stride of one. It is the original upsampling method used in the U-Net. However, its reconstructions suffer from the “checkerboard artifacts” problem, caused by uneven overlap in the convolution process44, as shown in Fig. 4(a). The nearest-neighbor resize convolution (NN-resize convolution) can effectively solve this problem. It replaces the zero-padding operation with nearest-neighbor interpolation, adding more valid information. The sub-pixel convolution goes further by using learnable parameters instead of interpolated information to enhance network performance. It includes a convolutional layer with a stride of one and four times the original channel number, and a pixel shuffle layer that permutes the data L_i from the channel dimension into blocks of 2-D spatial data,

    {\rm{PS}}\left( {\boldsymbol{L}}_i \right)_{h,w,c} = \left( {\boldsymbol{L}}_i \right)_{\left\lfloor {h/2} \right\rfloor ,\left\lfloor {w/2} \right\rfloor ,\,C \cdot 2 \cdot {\rm{mod}}(w,2) + C \cdot {\rm{mod}}(h,2) + c}\;. (2)
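    This permutation is what a standard pixel shuffle layer implements (here C denotes the output channel number). A tiny PyTorch check with a toy tensor of our own makes the channel-to-space rearrangement visible:

```python
import torch

# 1 x 4 x 2 x 2 tensor (N, C*r^2, H, W with r = 2): the four channels at each
# spatial position are rearranged into one 2x2 block of the 1 x 1 x 4 x 4 output.
x = torch.arange(16.0).reshape(1, 4, 2, 2)
y = torch.pixel_shuffle(x, upscale_factor=2)
print(y.shape)  # torch.Size([1, 1, 4, 4])
print(y[0, 0])  # tensor([[ 0.,  4.,  1.,  5.],
                #         [ 8., 12.,  9., 13.],
                #         [ 2.,  6.,  3.,  7.],
                #         [10., 14., 11., 15.]])
```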

    The peak signal-to-noise ratio (PSNR) is employed to quantitatively evaluate the reconstruction quality of the above three upsampling methods. The sub-pixel convolution method achieves high-fidelity and artifact-free images compared with the other two methods, with a reconstruction PSNR of 19.27 dB.
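    For reference, the PSNR used in this and the later quantitative comparisons follows its standard definition; a minimal NumPy sketch for images normalized to [0, 1]:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref) - np.asarray(img)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```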

    The Fresnel diffraction model is used as the diffraction model of the 4K-DMDNet because of its advantage in computational speed. It calculates the propagation process from the hologram plane to the object plane, as shown in Fig. 5(a). The intensity distribution on the object plane can be formulated as

    Figure 5.  (a) Schematic diagram of the Fresnel diffraction model. (b) Fresnel diffraction model with oversampling method realized in the neural network layer manner. (c) Comparison between the numerical simulation and optical reconstruction with the undersampling problem. (d) Schematic diagram of the oversampling method.
    \begin{split} \hat{\boldsymbol{I}}\left( {{\boldsymbol{x}},{\boldsymbol{y}}} \right) =\;& {\left| {\hat{\boldsymbol{C}}\left( {{\boldsymbol{x}},{\boldsymbol{y}}} \right)} \right|^2} = {\left| {\mathcal{F}\left\{ {\exp [{\rm{i}}{\boldsymbol{\varPhi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)]} \right\}} \right|^2} \\ =\;& {\left| {\mathcal{F}\left\{ {\exp [{\rm{i}}{\boldsymbol{\varphi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)] \cdot \exp \left[ {{\rm{i}}\frac{{\rm{\pi}} }{{\lambda d}}\left( {{\boldsymbol{x}}_0^2 + {\boldsymbol{y}}_0^2} \right)} \right]} \right\}} \right|^2}\;, \end{split} (3)

    where \left( {{\boldsymbol{x}},{\boldsymbol{y}}} \right) and \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right) represent the coordinates on the object plane and the hologram plane, respectively, \hat{\boldsymbol{C}}\left( {{\boldsymbol{x}},{\boldsymbol{y}}} \right) represents the complex amplitude distribution on the object plane, \mathcal{F} denotes the Fourier transform, \boldsymbol{\varphi}\left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right) is the output POH of the U-Net, \boldsymbol{\varPhi}\left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right) is the sum of the POH and the spherical phase factor, \lambda is the wavelength of the light source, and d is the distance between the two planes. By changing the parameters \lambda and d, full-color and multi-plane holographic displays can be obtained with the 4K-DMDNet. The calculation process of the Fresnel diffraction model is realized in the network layer manner, as shown in Fig. 5(b). Because the neural network can only backpropagate with real numbers, the light field distribution on the hologram plane is split into real and imaginary parts at the beginning according to Euler’s formula:

    \exp [{\rm{i}}{\boldsymbol{\varPhi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)] = \cos [{\boldsymbol{\varPhi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)] + {\rm{i}}\sin [{\boldsymbol{\varPhi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)]\;. (4)

    Since the FFT results of real numbers are complex numbers, they need to be split again according to the following formulas:

    \begin{split} &{\rm{Re}}\left( {{{\hat {\boldsymbol{C}}}_{{\rm{shift}}}}} \right) = {{\boldsymbol{R}}_{\rm{c}}} + {\rm{i}} \cdot {\rm{i}} \cdot {{\boldsymbol{I}}_{\rm{s}}} = {{\boldsymbol{R}}_{\rm{c}}} - {{\boldsymbol{I}}_{\rm{s}}}\;, \\ & {\rm{Im}}\left( {{{\hat {\boldsymbol{C}}}_{{\rm{shift}}}}} \right) = {{\boldsymbol{I}}_{\rm{c}}} + {{\boldsymbol{R}}_{\rm{s}}}\;, \end{split} (5)

    where {\hat {\boldsymbol{C}}_{\rm{shift}}} is the complex amplitude distribution on the object plane before the second fftshift operation, {{\boldsymbol{R}}_{\rm{c}}} and {{\boldsymbol{I}}_{\rm{c}}} are the real and imaginary parts of the FFT result in the cos path, and {{\boldsymbol{R}}_{\rm{s}}} and {{\boldsymbol{I}}_{\rm{s}}} are the real and imaginary parts of the FFT result in the sin path. The intensity distribution on the object plane is calculated by \hat{\boldsymbol{I}} = {\left| {\hat {\boldsymbol{C}}} \right|^2} = {\rm{Re}}{\left( {\hat {\boldsymbol{C}}} \right)^2} + {\rm{Im}}{\left( {\hat {\boldsymbol{C}}} \right)^2}.
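    The correctness of this split-and-recombine bookkeeping can be checked numerically. The following NumPy sketch (toy parameters of our own choosing; the fftshift steps are omitted for brevity) builds the cos and sin paths of Eq. (4), recombines them by Eq. (5), and verifies the result against a direct complex FFT:

```python
import numpy as np

# Minimal NumPy sketch of the Fresnel diffraction layer (Eqs. (3)-(5));
# illustrative parameters, fftshift bookkeeping omitted for brevity.
lam, d, p, n = 638e-9, 0.3, 3.74e-6, 64
coords = (np.arange(n) - n / 2) * p
x0, y0 = np.meshgrid(coords, coords)

phi = np.random.default_rng(0).uniform(-np.pi, np.pi, (n, n))  # POH
Phi = phi + np.pi / (lam * d) * (x0**2 + y0**2)                # + spherical phase

# cos and sin paths (Eq. (4)), each transformed as a real-valued array
Fc, Fs = np.fft.fft2(np.cos(Phi)), np.fft.fft2(np.sin(Phi))
Rc, Ic, Rs, Is = Fc.real, Fc.imag, Fs.real, Fs.imag

C = (Rc - Is) + 1j * (Ic + Rs)                 # recombination, Eq. (5)
assert np.allclose(C, np.fft.fft2(np.exp(1j * Phi)))
I_hat = C.real**2 + C.imag**2                  # intensity on the object plane
```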

    Although model-driven deep learning is an effective tool for high-quality POH generation, insufficient constraints cause artifacts in the reconstructions. Here we propose to strengthen the constraints in the frequency domain to solve this problem. The spectrum of the light field is zero-padded to double its size in the calculation process. The spectrum {\boldsymbol{S}} of the object plane can be calculated as

    \begin{split} {\boldsymbol{S}} =\;& \mathcal{F}\left\{ {\hat {\boldsymbol{C}}\left( {{\boldsymbol{x}},{\boldsymbol{y}}} \right)} \right\} = \mathcal{F}\left\{ {\mathcal{F}\left\{ {\exp [{\rm{i}}{\boldsymbol{\varPhi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)]} \right\}} \right\}\\ =\;& \exp [{\rm{i}}{\boldsymbol{\varPhi}} \left( { - {\boldsymbol{x}}_0, - {\boldsymbol{y}}_0} \right)]\;. \end{split} (6)

    So the spectrum {\boldsymbol{S}} is the inverted image of \exp [{\rm{i}}{\boldsymbol{\varPhi}} \left( {{\boldsymbol{x}}_0,{\boldsymbol{y}}_0} \right)]. The zero-padding operations are therefore added directly after the sin and cos layers, as shown in Fig. 5(b).

    In addition, this method also plays the role of oversampling. Here we discuss the cause of the undersampling problem in the Fresnel diffraction calculation and a practical way to eliminate it. The frequency analysis below is one-dimensional for simplicity. According to the Nyquist-Shannon sampling theorem, the maximum sampling interval on the object plane is determined by the maximum spatial frequency of the light field. The light ray connecting the edge point of the hologram and the center point of the object represents the maximum spatial frequency that the hologram needs to recover, as shown in Fig. 5(a). It can be formulated as

    {f_{{\rm{max}}}} = \frac{2}{\lambda }{\rm{sin}}{\theta _{{\rm{max}}}} = \frac{{np}}{{\lambda d}}\;, (7)

    where n and p are the pixel number and pixel pitch of the hologram, respectively. The maximum sampling interval satisfying the Nyquist-Shannon sampling theorem is

    {{{\Delta}} _{\max }} = \frac{1}{{2{f_{{\rm{max}}}}}} = \frac{{\lambda d}}{{2np}}\;. (8)

    However, according to Eq. (3), the complex amplitude field on the object plane is obtained by multiplying the spherical phase and performing a single FFT. According to the corresponding relationship in the frequency domain, the sampling interval of the object plane in the Fresnel diffraction calculation process is

    {{\Delta}} = \frac{{\lambda d}}{{np}} = 2{{{\Delta}} _{{\rm{max}}}}\;. (9)

    Therefore, the Fresnel diffraction model generally faces an undersampling problem, and the numerical simulations cannot accurately represent the practical reconstructions, as shown in Fig. 5(c). Speckle noise often appears in experiments, which is a common problem of POHs that needs to be addressed45. According to the reciprocal relationship between the frequency-domain range and the spatial sampling interval, zero-padding the spectrum corresponds, after the Fourier transform, to doubling the pixel number and halving the pixel pitch on the object plane, as shown in Fig. 5(d). Therefore, the Nyquist-Shannon sampling theorem is satisfied without any additional information.
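    Substituting the experimental parameters used later in this work (n = 3840, p = 3.74 μm, d = 0.3 m, and λ = 638 nm for the red channel) makes the factor-of-two shortfall, and its removal by zero-padding, concrete:

```python
# Numeric check of Eqs. (7)-(9) with the red-channel experimental parameters.
lam, d, n, p = 638e-9, 0.3, 3840, 3.74e-6

f_max = n * p / (lam * d)             # Eq. (7): ~7.5e4 cycles/m
delta_max = 1 / (2 * f_max)           # Eq. (8): ~6.66 um
delta = lam * d / (n * p)             # Eq. (9): ~13.3 um = 2 * delta_max

# Zero-padding the spectrum from n to 2n samples doubles the pixel number and
# halves the object-plane sampling interval, meeting the Nyquist criterion.
delta_padded = lam * d / (2 * n * p)  # ~6.66 um = delta_max
print(delta_max, delta, delta_padded)
```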

    The network training process is realized by calculating the loss function between the image dataset and the corresponding output images, and updating the convolution kernels with the Adam optimization algorithm according to the loss46. The negative Pearson correlation coefficient (NPCC) is chosen as the loss function of the 4K-DMDNet. It guarantees linear amplification and bias-free reconstruction, which increases the convergence probability. The NPCC between the input image {\boldsymbol{I}} and the output image \hat {\boldsymbol{I}} can be formulated as

    {\mathcal{L}_{{\rm{NPCC}}}}\left( {\hat {\boldsymbol{I}},{\boldsymbol{I}}} \right) = - \frac{{\sum\nolimits_i^n {\left( {{{\hat {\boldsymbol{I}}}_i} - \overline {\hat {\boldsymbol{I}}} } \right)\left( {{{\boldsymbol{I}}_i} - \overline {\boldsymbol{I}} } \right)} }}{{{{\left\{ {\sum\nolimits_i^n {{{\left( {{{\hat {\boldsymbol{I}}}_i} - \overline {\hat {\boldsymbol{I}}} } \right)}^2}\sum\nolimits_i^n {{{\left( {{{\boldsymbol{I}}_i} - \overline {\boldsymbol{I}} } \right)}^2}} } } \right\}}^{1/2}}}}\;. (10)
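    Eq. (10) translates directly into code; a minimal PyTorch sketch (whole-image reduction, per-batch handling omitted) is:

```python
import torch

def npcc_loss(recon, target):
    """Negative Pearson correlation coefficient, Eq. (10)."""
    x = recon - recon.mean()
    y = target - target.mean()
    return -(x * y).sum() / torch.sqrt((x ** 2).sum() * (y ** 2).sum())
```

    Since the NPCC is invariant to a linear scaling and a constant bias of the reconstruction, minimizing it matches the structure of the target image without forcing absolute intensity levels.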

    We verified the feasibility of the proposed 4K-DMDNet by both numerical simulations and optical reconstructions. The number of training epochs was set to 40, and the distance between the object plane and the hologram plane was 0.3 m. The network was trained and tested with the public image datasets DIV2K_train_HR and DIV2K_valid_HR, respectively. We used the Matlab Deep Learning Toolbox to build and train the network. The trained network model and training code are available in ref.47. All the algorithms were run on the same workstation with an Intel Xeon Gold 6248R CPU and an NVIDIA Quadro GV100 GPU. Note that transfer training was employed in our previous Holo-Encoder work for better display effects. In order to compare the performance of the different algorithms more directly, this method was not used for any of the following results.

    We first compared the full-color simulations of the POHs generated by the traditional GS algorithm, the Holo-Encoder, and the proposed 4K-DMDNet, as shown in Fig. 6(a–c). The test image was selected from the DIV2K_valid_HR dataset and had not been seen by the network before. From the detail views, we can see that the GS algorithm suffers from speckle noise caused by the initial random phase and the loss of amplitude information, and the contrast of its simulation was low. The Holo-Encoder shows quality reduction caused by its limited learning capability. The 4K-DMDNet effectively suppressed the above problems and obtained a natural-looking reconstruction. The blurs in Fig. 6(c) are mainly caused by the loss of detailed information in the downsampling path and can be improved by using other advanced network structures for POH generation. For example, the HRNet maintains high resolution throughout the whole process, while the U-Net recovers high resolution from low resolution48. Fig. 6(d) shows the PSNR values of the above three algorithms under different runtimes. The GS algorithm achieved better quality with more iterations and converged to an average quality of 16 dB after more than 100 s. The 4K-DMDNet, however, broke the trade-off between computation time and reconstruction quality: it can generate a POH in just 0.26 s, with a PSNR of 20.49 dB.

    Figure 6.  Comparison of the numerical simulations of POHs by (a) the GS algorithm, (b) the Holo-Encoder, and (c) the 4K-DMDNet. (d) Evaluation of algorithm runtime and image quality. The length of each bar represents the standard deviation over 100 samples (DIV2K_valid_HR).

    The experimental setup is shown in Fig. 7(a). A coherent beam was attenuated, expanded, and polarized before illuminating the SLM. A Holoeye GAEA-2 phase-only SLM with a resolution of 3840 × 2160 pixels and a pixel pitch of 3.74 μm was employed. The POHs of the target object were uploaded to the SLM, and the reconstructed pattern was photographed at a distance of 0.3 m. Color holographic display was realized by time multiplexing with 638 nm red, 520 nm green, and 450 nm blue laser sources, as shown in Fig. 7(b). During the time period T, the POHs corresponding to the three channels were loaded onto the SLM in turn, and a programmable light switch synchronously selected which of the red, green, and blue lasers passed through and illuminated the SLM. When the period T is shorter than the response time of the human eye, the reconstructed images are perceived as a single color image. It was high-fidelity and free of interference-induced noise, as shown in Fig. 7(c). For the GS algorithm, since the practical light propagation is not ideal, the speckle noise problem is always magnified in the optical reconstructions. An incoherent light-emitting diode (LED) or a partially coherent self-scanning light-emitting device (SLED) source could be employed to reduce the speckle noise, but such low-coherence light sources also blur details and reduce image sharpness. By contrast, in this work, the POHs predicted by the 4K-DMDNet are too smooth to generate vortex phases. Therefore, compared with the simulations, the optical reconstructions under laser illumination show no quality degradation, and the speckle noise is mostly suppressed.

    Figure 7.  (a) Photograph of the experimental setup. (b) Schematic diagram of the time multiplexing method for full-color display. (c) Full-color 4K optical reconstruction by 4K-DMDNet and its detail views. (d) Optical reconstruction by GS algorithm. (e) Optical reconstruction by Holo-Encoder.

    The 4K-DMDNet learns the latent encodings of POHs in an unsupervised way, which enables better generalization than data-driven deep learning networks. In fields such as two-photon microscopic imaging, optical micromanipulation, and laser nuclear fusion, patterns with simple shapes and high contrast are widely used. A binary object was experimentally reconstructed to demonstrate the high generalization of the 4K-DMDNet, as shown in Fig. 8. The intensity of the signal part was uniform, and the background showed no bottom noise. This makes the method well suited to head-up displays (HUDs) and the design of diffractive optical elements (DOEs).

    Figure 8.  (a) Object, (b) POH, and (c) optical reconstruction of the binary target.

    The ability of the 4K-DMDNet to reconstruct 3D scenes is presented in Fig. 9. The objects in the scene were at different depths, as indicated by the gray values in Fig. 9(b). The 4K-DMDNet can be applied to generate POHs for layer-oriented objects according to their depth values. The reconstruction distances were set to 0.28 m, 0.3 m, and 0.32 m, respectively. Obvious focusing and defocusing effects can be observed with a camera.

    Figure 9.  (a) All-in-focus image of the 3D scene. (b) Depth map of the 3D scene. (c–e) Optical reconstructions of the 4K-DMDNet for the 3D scene at 28 cm, 30 cm, and 32 cm, respectively. The enlarged views are presented on the right side.
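    The focusing and defocusing behavior in Fig. 9 can be emulated numerically by propagating a single POH to several distances with the single-FFT Fresnel model of Eq. (3). In the sketch below, `poh.npy` is a placeholder of our own for a network-predicted phase map:

```python
import numpy as np

def reconstruct(phi, lam, d, p):
    """Single-FFT Fresnel reconstruction of a unit-amplitude POH (Eq. (3));
    sketch only, shift and scaling bookkeeping simplified."""
    n = phi.shape[0]
    c = (np.arange(n) - n / 2) * p
    x0, y0 = np.meshgrid(c, c)
    field = np.exp(1j * phi) * np.exp(1j * np.pi / (lam * d) * (x0**2 + y0**2))
    return np.abs(np.fft.fft2(field)) ** 2

phi = np.load("poh.npy")               # placeholder: a network-predicted POH
for d in (0.28, 0.30, 0.32):           # the distances used in Fig. 9
    intensity = reconstruct(phi, lam=520e-9, d=d, p=3.74e-6)
```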

    In summary, we propose the 4K-DMDNet, a model-driven neural network capable of generating high-fidelity 4K computer-generated holograms. The constraint on the reconstructions in the frequency domain is strengthened, which ensures high-precision optical reconstructions. The sub-pixel convolution method solves the limited-learning-capability problem that typically appears in existing hologram generation networks; compared with the transposed convolution method and the NN-resize convolution method, the image quality is improved to 19.27 dB. Full-color and binary optical reconstructions have been obtained, and the display quality outperforms traditional iterative algorithms and data-driven deep learning algorithms. We believe that our approach takes computer-generated holographic display a further step from theory toward a viable technology for practical use.

    The current network architecture is based on the universal U-Net. Accurate physical models and smart mapping relations could also be applied to other advanced network architectures, such as generative adversarial networks and graph neural networks. More effort will be needed to accelerate the calculation speed in the future. The proposed 4K-DMDNet can also be integrated into laboratory studies such as metasurface design and additive manufacturing. With the rapid development of ASICs, it should be a powerful algorithm for portable virtual and augmented reality. It also provides a versatile CNN framework for the solution of various ill-posed inverse problems with massive data.

  • We are grateful for financial support from the National Natural Science Foundation of China (62035003, 61775117), the China Postdoctoral Science Foundation (BX2021140), and the Tsinghua University Initiative Scientific Research Program (20193080075).

  • The authors declare no competing financial interests.

  • [1] Shi L, Li BC, Kim C, Kellnhofer P, Matusik W. Towards real-time photorealistic 3D holography with deep neural networks. Nature 591, 234–239 (2021). doi: 10.1038/s41586-020-03152-0
    [2] Zhang CL, Zhang DF, Bian ZP. Dynamic full-color digital holographic 3D display on single DMD. Opto-Electron Adv 4, 200049 (2021). doi: 10.29026/oea.2021.200049
    [3] He ZH, Sui XM, Jin GF, Cao LC. Progress in virtual reality and augmented reality based on holographic display. Appl Opt 58, A74–A81 (2019). doi: 10.1364/AO.58.000A74
    [4] Zhao Y, Cao LC, Zhang H, Kong DZ, Jin GF. Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method. Opt Express 23, 25440–25449 (2015). doi: 10.1364/OE.23.025440
    [5] Jiang Q, Jin GF, Cao LC. When metasurface meets hologram: principle and advances. Adv Opt Photonics 11, 518–576 (2019). doi: 10.1364/AOP.11.000518
    [6] Huang LL, Chen XZ, Mühlenbernd H, Zhang H, Chen SM et al. Three-dimensional optical holography using a plasmonic metasurface. Nat Commun 4, 2808 (2013). doi: 10.1038/ncomms3808
    [7] Guo JY, Wang T, Quan BG, Zhao H, Gu CZ et al. Polarization multiplexing for double images display. Opto-Electron Adv 2, 180029 (2019). doi: 10.29026/oea.2019.180029
    [8] Gao H, Fan XH, Xiong W, Hong MH. Recent advances in optical dynamic meta-holography. Opto-Electron Adv 4, 210030 (2021). doi: 10.29026/oea.2021.210030
    [9] Saha SK, Wang D, Nguyen VH, Chang Y, Oakdale JS et al. Scalable submicrometer additive manufacturing. Science 366, 105–109 (2019). doi: 10.1126/science.aax8760
    [10] Gerchberg RW, Saxton WO. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237–246 (1972).
    [11] Tian SZ, Chen LZ, Zhang H. Optimized Fresnel phase hologram for ringing artifacts removal in lensless holographic projection. Appl Opt 61, B17–B24 (2022).
    [12] Chakravarthula P, Peng YF, Kollin J, Fuchs H, Heide F. Wirtinger holography for near-eye displays. ACM Trans Graph 38, 213 (2019). doi: 10.1145/3355089.3356539
    [13] Zhang JZ, Pégard N, Zhong JS, Adesnik H, Waller L. 3D computer-generated holography by non-convex optimization. Optica 4, 1306–1313 (2017). doi: 10.1364/OPTICA.4.001306
    [14] He KM, Zhang XY, Ren SQ, Sun J. Deep residual learning for image recognition. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016); http://doi.org/10.1109/CVPR.2016.90
    [15] Liao MH, Zheng SS, Pan SX, Lu DJ, He WQ et al. Deep-learning-based ciphertext-only attack on optical double random phase encryption. Opto-Electron Adv 4, 200016 (2021). doi: 10.29026/oea.2021.200016
    [16] Li YX, Qian JM, Feng SJ, Chen Q, Zuo C. Deep-learning-enabled dual-frequency composite fringe projection profilometry for single-shot absolute 3D shape measurement. Opto-Electron Adv 5, 210021 (2022). doi: 10.29026/oea.2022.210021
    [17] Blinder D, Birnbaum T, Ito T, Shimobaba T. The state-of-the-art in computer generated holography for 3D display. Light Adv Manuf 3, 35 (2022). doi: 10.37188/lam.2022.035
    [18] He ZH, Sui XM, Jin GF, Chu DP, Cao LC. Optimal quantization for amplitude and phase in computer-generated holography. Opt Express 29, 119–133 (2021). doi: 10.1364/OE.414160
    [19] Sui XM, He ZH, Jin GF, Chu DP, Cao LC. Band-limited double-phase method for enhancing image sharpness in complex modulated computer-generated holograms. Opt Express 29, 2597–2612 (2021). doi: 10.1364/OE.414299
    [20] Liu KX, He ZH, Cao LC. Double amplitude freedom Gerchberg–Saxton algorithm for generation of phase-only hologram with speckle suppression. Appl Phys Lett 120, 061103 (2022). doi: 10.1063/5.0080797
    [21] Liu KX, He ZH, Cao LC. Pattern-adaptive error diffusion algorithm for improved phase-only hologram generation. Chin Opt Lett 19, 050501 (2021). doi: 10.3788/COL202119.050501
    [22] Kang JW, Park BS, Kim JK, Kim DW, Seo YH. Deep-learning-based hologram generation using a generative model. Appl Opt 60, 7391–7399 (2021). doi: 10.1364/AO.427262
    [23] Lee J, Jeong J, Cho J, Yoo D, Lee B et al. Deep neural network for multi-depth hologram generation and its training strategy. Opt Express 28, 27137–27154 (2020). doi: 10.1364/OE.402317
    [24] Zheng HD, Hu JB, Zhou CJ, Wang XX. Computing 3D phase-type holograms based on deep learning method. Photonics 8, 280 (2021). doi: 10.3390/photonics8070280
    [25] Liu SC, Chu DP. Deep learning for hologram generation. Opt Express 29, 27373–27395 (2021). doi: 10.1364/OE.418803
    [26] Khan A, Zhang ZJ, Yu YJ, Khan MA, Yan KT et al. GAN-Holo: generative adversarial networks-based generated holography using deep learning. Complexity 2021, 6662161 (2021). doi: 10.1155/2021/6662161
    [27] Horisaki R, Takagi R, Tanida J. Deep-learning-generated holography. Appl Opt 57, 3859–3863 (2018). doi: 10.1364/AO.57.003859
    [28] Goi H, Komuro K, Nomura T. Deep-learning-based binary hologram. Appl Opt 59, 7103–7108 (2020). doi: 10.1364/AO.393500
    [29] Chang CL, Wang D, Zhu DC, Li JM, Xia J et al. Deep-learning-based computer-generated hologram from a stereo image pair. Opt Lett 47, 1482–1485 (2022). doi: 10.1364/OL.453580
    [30] Hossein Eybposh M, Caira NW, Atisa M, Chakravarthula P, Pégard NC. DeepCGH: 3D computer-generated holography using deep learning. Opt Express 28, 26636–26650 (2020). doi: 10.1364/OE.399624
    [31] Horisaki R, Nishizaki Y, Kitaguchi K, Saito M, Tanida J. Three-dimensional deeply generated holography [Invited]. Appl Opt 60, A323–A328 (2021). doi: 10.1364/AO.404151
    [32] Peng YF, Choi S, Padmanaban N, Wetzstein G. Neural holography with camera-in-the-loop training. ACM Trans Graph 39, 185 (2020). doi: 10.1145/3414685.3417802
    [33] Gopakumar M, Kim J, Choi S, Peng YF, Wetzstein G. Unfiltered holography: optimizing high diffraction orders without optical filtering for compact holographic displays. Opt Lett 46, 5822–5825 (2021). doi: 10.1364/OL.442851
    [34] Peng YF, Choi S, Kim J, Wetzstein G. Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration. Sci Adv 7, eabg5040 (2021). doi: 10.1126/sciadv.abg5040
    [35] Ishii Y, Shimobaba T, Blinder D, Birnbaum T, Schelkens P et al. Optimization of phase-only holograms calculated with scaled diffraction calculation through deep neural networks. Appl Phys B 128, 22 (2022). doi: 10.1007/s00340-022-07753-7
    [36] Yu T, Zhang SJ, Chen W, Liu J, Zhang XY et al. Phase dual-resolution networks for a computer-generated hologram. Opt Express 30, 2378–2389 (2022). doi: 10.1364/OE.448996
    [37] Sun XH, Mu XY, Xu C, Pang H, Deng QL et al. Dual-task convolutional neural network based on the combination of the U-Net and a diffraction propagation model for phase hologram design with suppressed speckle noise. Opt Express 30, 2646–2658 (2022). doi: 10.1364/OE.440956
    [38] Wu JC, Liu KX, Sui XM, Cao LC. High-speed computer-generated holography using an autoencoder-based deep neural network. Opt Lett 46, 2908–2911 (2021). doi: 10.1364/OL.425485
    [39] Situ GH. Deep holography. Light Adv Manuf 3, 13 (2022). doi: 10.37188/lam.2022.013
    [40] Zuo C, Qian JM, Feng SJ, Yin W, Li YX et al. Deep learning in optical metrology: a review. Light Sci Appl 11, 39 (2022). doi: 10.1038/s41377-022-00714-x
    [41] Shi L, Li BC, Matusik W. End-to-end learning of 3D phase-only holograms for holographic display. Light Sci Appl 11, 247 (2022). doi: 10.1038/s41377-022-00894-6
    [42] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention 234–241 (Springer, 2015); http://doi.org/10.1007/978-3-319-24574-4_28
    [43] Shi WZ, Caballero J, Huszár F, Totz J, Aitken AP et al. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition 1874–1883 (IEEE, 2016); http://doi.org/10.1109/CVPR.2016.207
    [44] Dumoulin V, Shlens J, Kudlur M. A learned representation for artistic style. In Proceedings of the 5th International Conference on Learning Representations (2017); https://arxiv.org/abs/1610.07629
    [45] Shimobaba T, Blinder D, Birnbaum T, Hoshi I, Shiomi H et al. Deep-learning computational holography: a review. Front Photonics 3, 854391 (2022). doi: 10.3389/fphot.2022.854391
    [46] Kingma DP, Ba J. Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (2015); https://arxiv.org/abs/1412.6980
    [47] Source code: https://github.com/THUHoloLab/4K-DMDNet
    [48] Wang JD, Sun K, Cheng TH, Jiang BR, Deng CR et al. Deep high-resolution representation learning for visual recognition. IEEE Trans Pattern Anal Mach Intell 43, 3349–3364 (2021). doi: 10.1109/TPAMI.2020.2983686
