
Generating Multi-Depth 3D Holograms Using a Fully Convolutional Neural Network

2024-07-18 10:45:14

A spatial light modulator (SLM) is an optical device that, under active control, modulates the amplitude, phase, and other parameters of incident light, shaping the wavefront and beam so that the desired light-field distribution is obtained at the receiving plane. SLMs have been applied to optical neural networks for decades; as their modulation accuracy has improved and computational algorithms have been continuously refined, the great potential of optical neural networks has been progressively explored, with prospective applications in machine vision, medical image processing, optical sensor networks, and other fields.
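As a rough illustration (not taken from the paper), a phase-only SLM can be modelled as multiplying the incident complex field by a programmable phase factor, with the phase quantised to the device's grayscale levels; the sketch below assumes an 8-bit device covering a 2π phase range.

```python
import numpy as np

def slm_phase_modulate(field_in, phase_map, levels=256, phase_range=2 * np.pi):
    """Model a phase-only SLM: quantise the desired phase to the device's
    grayscale levels and multiply it onto the incident complex field."""
    step = phase_range / levels
    quantised_phase = np.round(phase_map / step) * step
    return field_in * np.exp(1j * quantised_phase)

# Example: a unit-amplitude plane wave modulated by a random 8-bit phase pattern.
incident = np.ones((512, 512), dtype=complex)
phase_map = np.random.uniform(0, 2 * np.pi, size=(512, 512))
modulated = slm_phase_modulate(incident, phase_map)
```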

This paper presents a method for generating multi-depth phase holograms using a fully convolutional neural network (FCN). The method consists of a forward-backward diffraction framework for computing the multi-depth diffracted field and a layer-by-layer replacement method (L2RM) for handling occlusion relations. The diffracted fields computed by the former are fed into a carefully designed FCN, which exploits its strong non-linear fitting capability to generate a multi-depth hologram of the 3D scene. The latter improves the reconstruction quality of the hologram by supplementing information from occluded objects and smoothing the boundaries between different layers in the reconstructed scene. In the experiments, refreshable, dynamic 3D display is achieved by loading the computer-generated hologram (CGH) onto the core component, the spatial light modulator (SLM).
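For intuition only, the layer-wise diffracted field that serves as the network input can be approximated by angular-spectrum propagation of each depth slice to the hologram plane. The snippet below is a minimal sketch of that idea; the layer amplitudes, depths, and grid size are placeholders, and it omits the paper's forward-backward refinement and the L2RM occlusion handling.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pitch):
    """Propagate a complex field over distance dz with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical layered scene: amplitude slices at depths z_k in front of the hologram plane.
wavelength, pitch = 638e-9, 3.74e-6
layers = [np.random.rand(256, 256) for _ in range(3)]   # placeholder amplitude slices
depths = [0.05, 0.06, 0.07]                              # placeholder distances (m)

# Accumulate the field that each slice contributes at the hologram plane.
hologram_plane_field = np.zeros((256, 256), dtype=complex)
for amp, z in zip(layers, depths):
    hologram_plane_field += angular_spectrum_propagate(amp.astype(complex), -z, wavelength, pitch)
```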

Part of the experimental procedure and experimental results:

A non-polarised semiconductor laser with a wavelength of 638 (±8) nm and a power of 30 mW was used in the experiments, as shown in Fig. 1. The fibre output was placed at the focal point of a collimating lens with a focal length of 100 mm to obtain a plane wave, and a neutral density filter was used as an attenuator, together with a polariser, to obtain linearly polarised light. A half-wave plate (HWP) was rotated so that the polarisation direction was aligned with the LCOS alignment direction, and a rectangular aperture was then inserted to give the beam a rectangular profile. The incident light was phase-modulated and reflected by a spatial light modulator (Zhongke Microstar FSLM-4K70-P02), and the scene was reconstructed and further magnified by a Fourier lens with a focal length of 100 mm. A spatial filter passed the desired diffraction order and blocked the other orders. The reconstructed, magnified 3D scene was captured with a camera.
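As a back-of-the-envelope check (my own estimate, not a figure from the paper), the quoted wavelength, pixel pitch, and Fourier-lens focal length set how far apart the diffraction orders land in the filter plane, and therefore how easily the spatial filter can isolate the desired order:

```python
import numpy as np

wavelength = 638e-9   # laser wavelength (m)
focal_len = 100e-3    # Fourier-lens focal length (m)
pitch = 3.74e-6       # SLM pixel pitch (m)

# Spacing between adjacent diffraction orders at the Fourier plane (small-angle approximation).
order_spacing = wavelength * focal_len / pitch                    # ~17.1 mm
# Maximum diffraction half-angle supported by the pixel pitch.
theta_max_deg = np.degrees(np.arcsin(wavelength / (2 * pitch)))   # ~4.9 degrees

print(f"order spacing ~ {order_spacing * 1e3:.1f} mm, max half-angle ~ {theta_max_deg:.1f} deg")
```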


Fig. 1 Experimental setup (phase-type spatial light modulator, model: FSLM-4K70-P02)

The parameter specifications of the spatial light modulator used in the experiment are as follows:


Model: FSLM-4K70-P02
Modulation type: Phase-only
LCOS type: Reflective
Grayscale levels: 8 bit (256 levels)
Resolution: 4094 × 2400
Pixel pitch: 3.74 μm
Effective area: 0.7" (15.31 mm × 8.98 mm)
Phase range: 2π @ 633 nm
Fill factor: 90%
Light utilisation efficiency: 60% @ 532 nm
Angle of orientation / diffraction efficiency: >97% @ 32-order, 633 nm
Refresh rate: 30 Hz
Spectral range: 420–750 nm
Damage threshold: 2 W/cm²
Response time: rise 10.8 ms, fall 18.5 ms
Power input: 12 V, 2 A
Data interface: HDMI


Fig. 2 Generation of the 3D graphical dataset. A) 3D random scene. B) Sampling process. C) Intensity image. D) Depth image. E) 3D graphical dataset.


Fig. 3 Generation of multi-depth holograms with the FCN. A) Calculation of multi-depth diffracted fields using the forward-backward diffraction framework. B) Structure of the FCN. C) Calculation of the multi-depth error.


Fig. 4 Quality comparison of reconstructions. A) Target scene. B) Numerical reconstructions of the standard method and L2RM, respectively. C) Optical reconstructions of the standard method and L2RM, respectively.


Fig. 5 The complex 3D scene and the corresponding hologram. A) Intensity image and B) depth image of the 3D scene. C) The multi-depth hologram generated by the FCN.


Fig. 6 The numerical reconstruction and optical reconstruction of A) WH, B) DPH, and C) L2RM. The images in rows 1, 3, and 5 represent numerical reconstruction, while rows 2, 4, and 6 depict optical reconstruction. In columns 1 and 2, the camera focuses on the front focus plane (“football”) and the rear focus plane (“guitar”) of the “football-guitar” pair, respectively. In columns 3 and 4, the camera focuses on the front focus plane (“airplane”) and the rear focus plane (“dog”) of the “airplane-dog” pair, respectively.


Fig. 7 Reconstructed objects at different depth planes.

Closing remarks:

Optical neural networks have attracted much attention for their potential for large-scale parallel computation, low-power operation, and fast response. As diffractive devices, spatial light modulators play an important role in diffractive neural networks and are used in many fields, such as 3D holographic imaging and computation for AR/VR, biomedical imaging, and optical sensing. Building on the programmability of diffractive neural networks, higher-performance diffractive neural networks are expected to be realised in the future.